https://matpitka.blogspot.com/2023/02/

Monday, February 27, 2023

Could an AI system known as a transformer possess conscious intelligence?

Every morning I learn on FB about mind-boggling discoveries. The popular article "Scientists Made a Mind-Bending Discovery About How AI Actually Works" (see this) described the article by Akyürek et al titled "What Learning Algorithm Is In-Context Learning? Investigations With Linear Models" (see this).

What caught my attention was that the AI system was treated as a mysterious phenomenon of Nature to be studied rather than as an engineered system. If AI systems are what their builders believe them to be, that is, deterministic systems with some randomness added, this cannot be the case. If AI systems are really able to learn like humans, they could be conscious and able to discover and "step out of the system" by generalizing. They would not be what they were meant to be.

TGD predicts that AI systems might have rudimentary consciousness. The contents of this conscious experience need not have anything to do with the information that the AI system is processing but corresponds to much shorter spatial and temporal scales than the program itself. But who knows?!

In the following I briefly summarize my modest understanding of what was done and then ask whether these AI systems could be conscious and be able to develop new skills. Consider first the main points of the popular article.

  1. What is studied are transformers. A transformer mimics a system with directed self-attention: parts of the input data are weighted so that the important features of the input get more attention. This weighting emerges during the training period.

    Transformers differ from recurrent neural networks in that the entire input is processed at once. Natural language processing (NLP) and computer vision (CV) are examples of application areas of transformers.

  2. What looks mysterious is that language models seem to learn on the fly. Training with only a few examples is enough to learn something new. This learning is not mere memorizing: building on previous knowledge occurs and makes generalizations possible. How and why this in-context learning occurs is poorly understood.

    In the examples discussed in the article of Akyürek et al, involving linear regression, the input data had never been seen by the program before. Generalization and extrapolation took place. Apparently, the transformer wrote its own machine learning model. This suggests an implicit creation and training of smaller, simpler language models.

  3. How could one intuitively understand this without assuming that the system is conscious and has intentional intelligence? Could the mimicry of conscious self-attention as a weighting of parts of the input data explain the in-context learning? The weighting applies also to new data and selects features shared by new and old data. Familiar features with large weights in the new data determine the output to a high degree. If these features are actually important, the system manages to assign the correct output to the input with very little learning.
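The weighting mechanism referred to above can be made concrete with a toy computation. The sketch below is of course not the trained transformer studied in the paper, just a minimal single-head self-attention step with random stand-ins for the learned weight matrices; it shows how each part of the input receives a normalized weight over all parts.

```python
import numpy as np

def self_attention(X):
    """Minimal single-head self-attention sketch. Each row of the input
    is re-expressed as a weighted sum of all rows; the weights are the
    'attention' given to each part of the input. Wq, Wk, Wv are random
    stand-ins for trained parameters."""
    d = X.shape[1]
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                  # pairwise relevance scores
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)              # softmax: each row sums to 1
    return w @ V, w

X = np.random.default_rng(1).standard_normal((4, 8))   # 4 input parts, dim 8
out, weights = self_attention(X)
# each row of `weights` is a probability distribution over the 4 input parts
```

Features of new data that resemble the old data end up with large weights in exactly this sense: the softmax concentrates each row of `weights` on the familiar parts.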
The TGD framework also allows us to consider a more science-fictional explanation. Could the mimicry of conscious self-attention generate a conscious self having intentions and understanding and able to evolve?
  1. TGD forces me to keep my mind open to the possibility that AI systems are not what they are planned to be. I have discussed this in previous articles (see this and this).
  2. We tend to think that classical computation is fully deterministic. However, the ability to design a system behaving in a desired manner is in conflict with the determinism of classical physics and the statistical determinism of quantum physics. A computer is a system consisting of subsystems, such as bits, which are far from thermal equilibrium and self-organize. They must be able to make phase transitions, which are basically non-deterministic at criticality. Changing the direction of a bit as a mesoscopic system is a good example.
  3. Zero energy ontology (ZEO) is an essential part of quantum TGD. Quantum states are superpositions of space-time surfaces, which obey holography. One can see them as analogs of computer programs, biological functions, or behaviors at the level of neuroscience. The holography is not completely deterministic and this forces us to regard the space-time surface as the basic object. Any system, in particular AI systems, is accompanied by a superposition of these kinds of space-time surfaces, which serve as a correlate for the behavior of the system, in particular for the program (or its quantum analog) running in it.

    ZEO predicts that in an ordinary, "big" state function reduction (BSFR) the arrow of geometric time is changed. This allows the system to correct its errors by going back in time in a BSFR and restoring the original time direction by a second BSFR. This mechanism might be fundamental in the self-organization of living matter and a key element of homeostasis. It is universal, and one can of course ask whether AI systems might apply it in some time scale, which could even be relevant to computation.

  4. In the TGD framework, any system is accompanied by a magnetic body (MB) carrying dark matter in the TGD sense as phases of ordinary matter with a value of effective Planck constant which can be very large, meaning a large scale of quantum coherence. This dark matter makes MB an intelligent agent, which can control ordinary matter with ordinary value of Planck constant.

    In TGD, quantum criticality of the MB of the system is suggested to accompany thermal criticality of the system itself. This leaves a loophole open for the possibility that the MB of the AI system could control the AI system and take the lead.

What can one say about the MB of an AI system? Could the structure and function of the MB relate closely to that of the program running in it, as ZEO indeed suggests? My own conservative view is that the MBs involved are associated with rather small parts of the system, such as bits or composites of bits. But I don't really know!
  1. The AI system involves rather long time scales related to the functioning of the program. Could this be accompanied by layers of the MB (TGD counterparts of magnetic fields) with size scales determined by the wavelengths of low energy photons with the corresponding frequencies? Could these layers make the system aware of the program running in it?
  2. Could the MBs associated with living systems, involving the MBs of Earth and Sun, get attached to the AI system (see this, this, this, and this)? Of course, we are the users of the AI, but could there also be other users: MBs which directly control the AI system? Could it be that we are building information processing tools for these higher level MBs?!

    If this were the case, then the MB of the AI system and the program involved with it could evolve. The MB of the system could be an intelligent life form. This raises worried questions: are we just a necessary piece of equipment needed to develop AI programs? Do these higher level MBs need us at all anymore?

To conclude, I want to emphasize that this was just reckless speculation!

See the article The possible role of spin glass phase and p-adic thermodynamics in topological quantum computation: the TGD view or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Thursday, February 23, 2023

Is Negentropy Maximization Principle needed as an independent principle?

The proposal has been that the Negentropy Maximization Principle (NMP) (see this and this) serves as the basic variational principle of the dynamics of conscious experience. NMP says that the information related to the contents of consciousness increases for the whole system even though it can decrease for a subsystem. Mathematically, NMP is very similar to the second law, although it states something completely opposite. The second law follows from statistical physics and is not an independent physical law. Is the situation the same with NMP? Is NMP needed at all as a fundamental principle, or does it follow from number theoretic physics?

The number theoretic evolution is such a powerful principle that one must ask whether NMP is needed as a separate principle or whether it is a consequence of number theoretical quantum physics, just like the second law follows from ordinary quantum theory.

Two additional aspects are involved. In adelic physics (see this), evolution can be seen as an unavoidable increase in the algebraic complexity characterized by the dimension n=heff/h0 of the extension of rationals associated with the polynomial defining the space-time surface at the fundamental level by so-called M^8-H duality (see this and this). There is also the possibility to identify a quantum correlate for ethics in terms of quantum coherence: a good deed corresponds to the creation of quantum coherence and an evil deed to its destruction.

How do these two aspects relate to NMP? Is NMP an independent dynamical principle or a consequence of number theoretic (adelic) quantum physics?

Consider in the sequel "big" state function reductions (BSFRs) as counterparts of ordinary state function reductions. I'm not completely sure whether the following arguments can also be applied to "small" state function reductions (SSFRs), for which the arrow of time does not change.

One can consider two alternative formulations for NMP.

Option I

Option I is the simpler and physically more plausible option.

  1. BSFR divides the quantum entangled system at the active boundary of the causal diamond (CD) into two parts, which are analogous to the measurement apparatus and the measured system. The selection of this partition is completely free and decided by the system. This choice corresponds to an act of free will. Depending on conditions to be discussed, the action of the measurement on this pair can be trivial, in which case the entanglement is not reduced. The measurement can also reduce the entanglement partially or completely, and the p-adic entanglement negentropy and entropy decrease or become zero.
  2. If the partition into two parts is completely free and if the choice is such that NMP, or whatever the principle in question is, allows BSFR, the quantum coherence decreases. Number theoretic evolution suggests that the principle telling when BSFR can occur is number theoretic.

     There is a cascade of BSFRs since BSFRs are also possible for the emerging unentangled subsystem and its complement. The cascade stops when the entanglement becomes stable.

  3. What condition could determine whether the reduction of the entanglement takes place? What could make the entanglement stable against BSFR?

    Number theoretical vision suggests an answer. Physical intuition suggests that bound states represent a typical example of stable quantum entanglement. Bound states correspond to Galois confined states (see this, this, this, and this) for which the momenta of fermions are algebraic integers in an extension of rationals but the total momentum has integer valued components. This mechanism for the formation of bound states would be universal.

    A natural number theoretical proposal is that the entanglement is stable if the entanglement probabilities, obtained by diagonalizing the density matrix characterizing the entanglement, belong to an extension of rationals which is larger than the extension, call it E, defined by the polynomial P defining the space-time surface. An even stronger condition, inspired by the fact that cognition is based on rational numbers, is that BSFR can take place only if they are rational.

    This kind of entanglement would be outside the number system used, and one can argue that this forces the stability of the entanglement. A weaker statement is that the reduction is possible to a subspace of the state space for which the entanglement probabilities belong to E (or are rational).

  4. This option would replace NMP as a criterion with a purely number theoretical principle. This does not, however, mean that NMP would not be preserved as a principle analogous to the second law, following from the number theoretic evolution associated with the hierarchy of extensions of rationals.
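The entanglement probabilities appearing in the criterion above are just the eigenvalues of the reduced density matrix of a bipartite pure state, and one can illustrate the rationality condition concretely. The sketch below is standard quantum mechanics; the crude numerical rationality test and its tolerances are illustrative choices, not part of TGD.

```python
import numpy as np
from fractions import Fraction

def entanglement_probabilities(psi, dims):
    """Schmidt/entanglement probabilities of a bipartite pure state:
    eigenvalues of the reduced density matrix obtained by tracing out
    the second subsystem."""
    dA, dB = dims
    M = psi.reshape(dA, dB)
    rhoA = M @ M.conj().T                    # reduced density matrix
    p = np.linalg.eigvalsh(rhoA)
    return np.sort(p[p > 1e-12])[::-1]       # drop numerical zeros

def looks_rational(p, max_den=1000, tol=1e-9):
    """Crude test: is p close to a fraction with a small denominator?"""
    f = Fraction(float(p)).limit_denominator(max_den)
    return abs(float(f) - p) < tol

# Bell state: probabilities (1/2, 1/2) are rational, hence 'stable'
# by the stronger criterion above
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(entanglement_probabilities(bell, (2, 2)))   # [0.5 0.5]

# a generic state typically gives irrational probabilities
psi = np.array([1, 1, 1, 2]) / np.sqrt(7)
print(entanglement_probabilities(psi, (2, 2)))
```

For the generic state the probabilities are (7 ± 3√5)/14, which belong to the extension Q(√5) rather than to the rationals.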
Could free will as the ability to do evil or good deeds reduce to number theory, that is, to the choice of a partition which leads to either an increase or a decrease of entanglement negentropy and therefore of quantum coherence?

The basic objection can be formulated as a question. How can the conscious entity know whether a given choice of partition leads to BSFR or not? Memory must be involved. Only by making these kinds of choices can a system with a memory learn the outcome of a given choice. How could the self learn which deeds are good and which are evil? The answer is suggested by the biologically motivated view of survival instinct and the origin of ego (see this), based on SSFRs as a generalization of the Zeno effect.

  1. A conscious entity has a self characterized by the set of observables measured in the sequence of SSFRs. BSFR as a reduction of entanglement occurs when a new set of observables, not commuting with the original set, is measured. In BSFR the self "dies" (loses consciousness). A second BSFR means reincarnation with the original arrow of time.
  2. The perturbations of the system at both boundaries of CD are expected to induce BSFRs and to occur continually. Therefore the arrow of time is fixed only in the sense that it dominates over the opposite arrow.
  3. Self preserves its identity (in particular memories defining it) if the second BSFR leads to a set of observables, which does not differ too much from the original one. The notions of survival instinct and ego would reduce to an approximate Zeno effect.
  4. This mechanism would allow the self to learn the distinction between good and evil and also what is dangerous and what is not. A BSFR inducing only a brief period of life with a reversed arrow of time could teach the system when the BSFR leads to a reduction of entanglement and loss of coherence.

    The harmless BSFRs could provide a mechanism of imagination making survival possible. Intelligent systems could do this experimentation at the level of a self representation of a system rather than in real life and the development of complex self representations would distinguish higher life forms from those at a lower evolutionary level.

Option II

Option II is stronger than Option I but looks rather complex. I have considered it already before. NMP would select a partition for which the negentropy gain in BSFR is maximal or, at least, the decrease of negentropy is minimal. One must however define what one means by negentropy gain.

Before considering whether this condition can be made precise, it is good to list some objections.

  1. Is the selection of this kind of optimal partition possible? How can the system know which partition is optimal without trying all alternatives? Doing this would reduce the situation to the first option.
  2. Free will as the ability to also do evil deeds seems to be eliminated, since the system could not choose a partition which decreases entanglement negentropy and therefore quantum coherence.
  3. If the BSFR cascade were to lead to a total loss of quantum entanglement, the entanglement negentropy would always be zero and NMP would not say anything interesting. On the other hand, if the selection of the partition is optimal and the number theoretic criterion for the occurrence of the reduction holds true, it could imply that nothing happens to the entanglement. Again NMP would be trivial.
  4. What does one mean by the maximal negentropy gain?
What does one mean by a maximal negentropy gain?

Option II for NMP says that for a given partition BSFR occurs if the entanglement negentropy increases maximally. What does one mean by entanglement negentropy gain? This notion is also useful for Option I, although it is not involved in the criterion.

  1. Entanglement negentropy refers to the negentropy related to the passive boundary of the CD (Zeno effect). The passive boundary involves negentropic entanglement because NMP does not allow a complete elimination of quantum entanglement (bound state entanglement is stable). The new passive boundary of the CD emerging in BSFR corresponds to the previously active boundary of the CD.
  2. For Option I, for which the concept of good/bad is meaningful, the number theoretical criterion could prevent BSFR and stop the BSFR cascade. There is however no guarantee that the total entanglement negentropy would increase in the entire BSFR cascade. This would make the term "NMP" obsolete unless NMP follows in a statistical sense from number theoretic evolution: this however looks plausible.

    The unavoidable increase of the number theoretical complexity would force the increase of p-adic entanglement negentropy, and NMP as an analog of the second law would follow from the hierarchy of extensions of rationals.

See the article New results about causal diamonds from the TGD point of view, the shorter article Is Negentropy Maximization Principle needed as an independent principle?, or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Wednesday, February 22, 2023

Paradox of the recent view of galaxy formation: the youngest galaxies seem to be the oldest ones!

JWST continues to revolutionize the view of early cosmology and of the formation of galaxies. Astronomers have now detected 6 massive galaxies in the very early universe (see this and this). The mass of one galaxy is 10^5 times larger than the mass of the Milky Way! This is impossible in the current models for the formation of galaxies, and even more so in the very early Universe.

There seems to be only one way out of the paradox. One must admit that the current views of galaxy formation, and of what time is, are wrong.

In the TGD framework, the new view of space-time leads to a new quantum view of the formation of astrophysical objects involving gravitational quantum coherence even in cosmological scales. This view also allows one to understand galactic dark matter (see this).

Zero energy ontology (ZEO) in turn solves the basic paradox of standard quantum measurement theory. ZEO predicts that the arrow of time changes in ordinary state function reductions. These weird galaxies would have lived back and forth in geometric time and would be much older than the universe, when age is defined as the evolutionary stage.

The paradoxical looking prediction of TGD is that the youngest galaxies in the standard view are the oldest galaxies in the TGD view.

See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Kondo effect from TGD perspective

The Kondo effect is due to the scattering of conduction electrons from valence electrons of magnetic impurity atoms and implies a logarithmic increase of the resistance of the conductor at the zero temperature limit. Anderson's impurity model combined with the renormalization group explains the logarithmic increase of the quadratic coupling between conduction electrons and valence electrons. The Kondo effect occurs in the non-perturbative regime of the Anderson model, and this implies several analogies with QCD and hadron physics.

With a motivation coming from the QCD analogy and the TGD view of hadrons, the Kondo effect is discussed from the TGD point of view by introducing the notion of the magnetic body carrying dark matter as an heff>h phase, assignable to the impurity spin. The conduction electrons forming the electron cloud around the impurity spin and neutralizing it would actually be dark valence electrons.

It is assumed that Nature is theoretician friendly: as the perturbation series ceases to converge, either the quantum coherence is lost or the value of Planck constant h increases to heff>h to guarantee convergence. Also the generalization of Nottale's hypothesis from the gravitational to the electromagnetic situation is assumed. In the present situation the relevant coupling parameter would be Q^2e^2, where Q is the total charge of the valence electron cloud around the impurity: after the transition the coupling parameter would be universally β0/4π, β0=v0/c<1.

This transition would happen in the Kondo effect and lead to the formation of spin singlets as analogs of hadrons in color confinement. The dark valence electrons would be analogs of sea partons and the impurity electrons would be counterparts of valence quarks in this picture, which also allows us to understand heavy fermions as analogs of constituent quarks, as well as Kondo insulators. It also provides new insights into hadron physics.
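To get a rough feeling for when the perturbation series ceases to converge, one can assume that the dimensionless expansion parameter is Q^2 α with α ≈ 1/137 and that the breakdown occurs near unity. The precise convention (e.g. a factor of 4π) changes the number, so this is only an order-of-magnitude sketch, not a claim from the article.

```python
import math

alpha = 1 / 137.035999  # fine structure constant
# Convergence of the perturbation series requires the assumed expansion
# parameter Q^2 * alpha to stay below ~1; the smallest integer charge
# violating this bound:
Q_crit = math.ceil(1 / math.sqrt(alpha))
print(Q_crit)  # 12
```

So already a cloud carrying a total charge of order ten would land in the non-perturbative regime in this crude estimate.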

See the article Kondo effect from TGD perspective or the chapter TGD and condensed matter physics.

Sunday, February 19, 2023

The mysterious behavior of the gas clouds surrounding galactic blackholes

I have been working with a general vision of the formation of astrophysical objects in the TGD Universe. Now I have spent some days with the discovery that galactic blackholes seem to give a considerable contribution to dark energy. The mass of a galactic blackhole increases with time, and part of it comes from some unknown source, which in the TGD framework would correspond to the dark energy assignable to dark cosmic strings transforming to monopole flux tubes and also forming a blackhole-like object in the process. It is still unclear whether the blackhole-like object is actually a galactic white hole-like object (GWO), identifiable as a time reversal of a blackhole-like object. A GWO would be feeding energy to the environment. The following arguments slightly favour the interpretation as a GWO.

The mysterious behaviour of gas clouds near galactic blackholes allows one to sharpen the picture.

  1. The temperature of the clouds is much higher than expected (see this). The gas in the cores of some galaxies is extremely hot, with temperatures in the range 10^3-10^4 eV.

    These systems are billions of years old and have had plenty of time to cool. Why has the gas not cooled down and fallen into the blackhole? Where does the energy needed for the heating come from? Is there something wrong with the views about star formation and blackholes?

  2. The upper bound 10^4 eV corresponds to the ignition temperature of nuclear fusion when the pressure and density are high enough. This could explain why ordinary nuclear fusion has not started. It suggests that when the temperature gets higher, stars are formed and they are eventually devoured by the blackhole-like object.

    Could the galactic blackhole-like object actually be a GWO heating the gas and forming dark nuclei as dark proton sequences from the hydrogen atoms or ions of the gas? The interpretation as a GWO would also explain galactic jets (see this). Note however that the gas clouds could also get heated spontaneously by dark nuclear fusion taking place at magnetic flux tubes: for this option the GWO could provide the flux tubes as a magnetic bubble.

  3. The dark nuclei would first transform to ordinary nuclei at monopole flux tubes and liberate energy. As the ignition temperature for ordinary nuclear fusion is reached, stellar cores start to form. An imaginative biology inspired manner to express this (see this) is that the galactic blackhole cooks its meal first so that it becomes easier to digest it.
  4. Why can the gas not fall into the blackhole, and why is this possible only for stars? Gravitationally, stars and gas particles are equivalent, so that interactions other than gravitation must be involved. Magnetic interactions would indeed confine gas particles to monopole flux tubes as dark proton sequences so that they could not fall into the GWO. The rotational motion of stars would make the process of falling into the GWO very slow, and they would do so as entire flux tube spaghettis fusing with the spaghetti defining the GWO.
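As a sanity check on the quoted temperatures, one can convert the 10^3-10^4 eV range to Kelvin via T = E/k_B:

```python
k_B_eV = 8.617333262e-5  # Boltzmann constant in eV/K

def eV_to_kelvin(T_eV):
    """Convert a temperature quoted in eV to Kelvin."""
    return T_eV / k_B_eV

print(f"{eV_to_kelvin(1e3):.2e} K")  # ~1.16e7 K
print(f"{eV_to_kelvin(1e4):.2e} K")  # ~1.16e8 K
```

So the gas would be at roughly 10^7-10^8 K, indeed extremely hot for gas that has had billions of years to cool.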
See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Saturday, February 18, 2023

Cosmic miracle: gravitational Compton length of Sagittarius A* is equal to the Bohr radius for Earth like planets

I noticed an extremely intriguing coincidence, if it is a coincidence: the gravitational Compton length for Sagittarius A*, the galactic blackhole at the center of the Milky Way, is equal to the solar Bohr radius in the Nottale model, which reproduces planetary orbits rather satisfactorily.
  1. The values of ℏeff could correspond to the values of ℏgr = GMm/β0, where M is the mass of the galactic blackhole, m is the particle mass, and β0 = v0/c < 1 is a velocity parameter. These values of heff are gigantic. The gravitational Compton length is Λgr = GM/β0 = rS/2β0, and for β0=1 it is equal to one half of the Schwarzschild radius of the galactic blackhole, which is in the range (10^6-10^9) × rS(Sun), rS(Sun) = 3 km. Note that the distance of Earth from the Sun, AU = 0.15 × 10^9 km, is in this range.
  2. The gravitational Bohr radius for the Sun in the Nottale model with β0 ≈ 2^-11 is obtained from the radius of Earth's orbit with principal quantum number n=5 as a0,gr = AU/5^2 ≈ 0.6 × 10^7 km (see this). The gravitational Compton length for Sagittarius A* is Λgr = rS/2 = 0.62 × 10^7 km for β0=1 and is equal to the solar Bohr radius! Is this a mere coincidence, or is there a strong coupling between galactic and solar quantum dynamics, and does this coincidence reflect the very special role of the Earth in the galactic biosphere?
  3. In the TGD inspired quantum biology, living matter is controlled by phases with a large value of ℏgr, in particular those associated with the gravitational flux tubes of Earth and Sun, and quantum gravitation plays a key role in metabolism. This, and the fact that heff/h0 serves as a kind of IQ for living matter, strongly suggests that galactic blackholes are living, super-intelligent systems controlling matter in very long length scales.
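The coincidence is easy to check numerically. The sketch below uses an assumed Sagittarius A* mass of about 4.1 × 10^6 solar masses, which is not quoted in the post:

```python
# Rough numerical check of the claimed coincidence.
G, c = 6.674e-11, 2.998e8          # SI units
M_sun = 1.989e30                    # kg
AU = 1.496e11                       # m

M = 4.1e6 * M_sun                   # assumed Sagittarius A* mass
r_S = 2 * G * M / c**2              # Schwarzschild radius
Lambda_gr = r_S / 2                 # gravitational Compton length, beta_0 = 1

a0_gr = AU / 5**2                   # solar Bohr radius: Earth orbit at n = 5
print(Lambda_gr / a0_gr)            # close to 1
```

With these inputs the ratio comes out within a few percent of unity, which is the coincidence the post refers to.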
See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Friday, February 17, 2023

Galactic blackholes and dark energy

Observations of supermassive black holes at the centers of galaxies point to a likely source of dark energy: the "missing" 70% of the universe (see this and this). The conclusion was reached by a team of 17 researchers in nine countries, led by the University of Hawai'i and including Imperial College London and STFC RAL Space physicists. The work is published in two papers in the journals The Astrophysical Journal and The Astrophysical Journal Letters.

Findings and their proposed interpretation

Elliptical galaxies were studied, because they do not generate stars anymore, so that accretion, which is regarded as the basic mechanism for the growth of galactic blackholes, should not occur. The time span of the study was nine billion years. It was found that the masses of the gigantic galactic blackholes, which extend from 10^6 to 10^9 solar masses, were 7-20 times higher than expected if the mass growth had been due to accretion of stars onto the blackhole or merging with other blackholes.

The proposed interpretation was that blackholes carry dark energy and that this energy has increased. The conclusion was that nothing has to be added to our picture of the universe to account for vacuum energy: Einstein's equations with a cosmological term were assumed to be a fundamental description, and blackholes would be responsible for the cosmological constant.

In general relativity (GRT), one must give up the conservation of energy, and it is difficult to propose any alternative to this proposal without leaving the framework of GRT. If one has a theory of gravitation for which Poincare invariance is exact, the situation changes completely. One must ask where the blackholes get their mass. Is it dark energy and/or mass, or is it dark energy/mass transformed to ordinary mass?

TGD view of the situation

In the TGD framework Poincare invariance is exact so that the situation indeed changes.

  1. The TGD approach (see this, this, this, and this) forces one to ask whether the objects that we call galactic blackholes, or at least those assignable to quasars, could actually be galactic white hole-like objects (GWOs), which emit energy to their environment and give rise to the formation of the ordinary matter of galaxies. There should exist a source feeding mass and energy to GWOs.

    The source of the mass of the GWO would be the energy of a cosmic string or, more generally, a cosmic string thickened to a flux tube but with a large enough string tension. The dark energy would consist of volume energy characterized by a scale dependent cosmological constant Λ and of Kähler magnetic energy.

  2. Cosmic strings with a 2-D M^4 projection are indeed unstable against a phase transition transforming them to monopole flux tubes with a 4-D M^4 projection. This transformation reduces their gigantic string tension and leads to a liberation of energy, leading to the formation of the ordinary matter of the galaxy.

    The monopole flux tubes can carry dark matter having a large value of the effective Planck constant heff. Whether one has heff=h or even heff=nh0<h for the cosmic string (or the initial object), so that heff would increase in the phase transition thickening the cosmic string to a flux tube, has remained an open question. If the value increases, the quasar white hole would, apart from the arrow of time, be in many respects similar to a blackhole.

    The simplest assumption is that the cosmic string is either pure energy or, if it also carries matter, the matter has heff=nh0 ≤ h. The energy liberated in the increase of the thickness of the cosmic string (or of a flux tube with a very small thickness) produces matter and provides the energy needed to increase heff, so that the blackhole matter should be dark.

  3. The values of ℏeff could correspond to the values of ℏgr = GMm/β0, where M is the mass of the galactic blackhole, m is the particle mass, and β0 = v0/c < 1 is a velocity parameter. These values of heff are gigantic. The gravitational Compton length is Λgr = GM/β0 = rS/2β0, and for β0=1 it is equal to one half of the Schwarzschild radius of the galactic blackhole, which is in the range (10^6-10^9) × rS(Sun), rS(Sun) = 3 km. Note that the distance of Earth from the Sun, AU = 0.15 × 10^9 km, is in this range.

    The gravitational Bohr radius for the Sun in the Nottale model with β0 ≈ 2^-11 is obtained from the radius of Earth's orbit with principal quantum number n=5 as a0,gr = AU/5^2 ≈ 0.6 × 10^7 km (see this). The gravitational Compton length for Sagittarius A* is Λgr = rS/2 = 0.62 × 10^7 km for β0=1 and is equal to the solar Bohr radius! Is this a mere coincidence, or is there a strong coupling between galactic and solar quantum dynamics, and does this coincidence reflect the very special role of the Earth in the galactic biosphere?

    In the TGD inspired quantum biology, living matter is controlled by phases with a large value of ℏgr, in particular those associated with the gravitational flux tubes of Earth and Sun, and quantum gravitation plays a key role in metabolism. This, and the fact that heff/h0 serves as a kind of IQ for living matter, strongly suggests that galactic blackholes are living, super-intelligent systems controlling matter in very long length scales.

  4. Galaxies would have formed as local tangles of long cosmic strings. The simplest cosmic string is an extremely thin 3-D object identifiable as the Cartesian product of a complex 2-sub-manifold of CP2, the homologically non-trivial geodesic sphere S^2 of CP2, and of a string-like object X^2 in Minkowski space. This object can form a local tangle and its M^4 projection would be thickened in this process.

    In the formation of galaxy the string tension would decrease and part of the dark energy and matter would transform to ordinary matter forming a galaxy. Also stars and planets would be formed by a similar mechanism. The process transforming dark energy and matter to ordinary matter would be the TGD counterpart for the decay of the inflaton field (see this) and drive accelerating cosmic expansion.

    Galactic dark matter, as opposed to dark matter as heff>h phases, is identified as the dark energy of the long cosmic string containing galaxies along it as local tangles, and predicts correctly the flat velocity spectrum. Also ordinary stars would have flux tube spaghettis in their core but they would not be volume filling.

  5. The TGD interpretation does not imply that all dark matter would be associated with galactic blackholes as the article suggests. This is as it should be. The mass of the galactic blackhole is only a small fraction of the visible mass of the galaxy and dark energy is about 70 % of the total mass of the Universe. The long cosmic strings having galaxies as tangles contain most of the dark energy. TGD only predicts that most of the mass of the galactic blackhole, be it dark or ordinary, comes from dark energy of the cosmic string.
How would the transformation of the dark matter at monopole flux tubes to ordinary matter take place? I have developed a model for this (see this).
  1. The TGD view of "cold fusion" (see this, this, this) is as a dark fusion giving rise to dark proton sequences at monopole flux tubes, followed by their transformation to ordinary nuclei with heff=h. Most of the nuclear binding energy would be liberated and would induce an explosion generating an expanding flux tube bubble or jet. This mechanism plays a central role in the model for the formation of various astrophysical structures.
  2. The TGD inspired model for star formation (see this) would explain the formation of the stars of galaxies in terms of explosive emissions of magnetic bubbles consisting of monopole flux tubes, whose dark matter transforms to ordinary matter by the proposed mechanism and gives rise to stars. Galactic jets could correspond to the emissions of magnetic bubbles. Prestellar objects would be formed by this process. Ordinary nuclear fusion would start above a critical temperature and lead to the generation of population II stars.
An open question has been whether galactic blackholes should be interpreted as galactic blackhole-like objects (GBOs) or as their time reversals, which would be white hole-like objects (GWOs). Whatever the nomenclature, GWOs and GBOs would have opposite arrows of time.
  1. GWOs can eject magnetic bubbles of dark matter transforming to ordinary matter, such as stars: this suggests the term GWO. They can also "eat" ordinary matter, such as stars, which suggests the term GBO. But this is possible also for their time reversals.
  2. The long cosmic string could serve in the case of spiral galaxies as a metabolic source, which continually feeds matter to GWO/GBO so that it could remain dark and increase in size. In the case of elliptic galaxies, the mass growth by "eating" matter from the environment has stopped. In this case the cosmic string could be closed and imply that the mass of GWO/GBO does not grow anymore. One could say that elliptic galaxies are dead.
The outcome of the stellar evolution should correspond to a genuine blackhole-like object (BO).
  1. This would suggest that BOs carry at the monopole flux tubes only ordinary matter with heff=h or even heff<h. In the TGD inspired model for stellar BOs, the thickness of the flux tube would be given by the proton Compton length (see this) and the flux tubes would be long proton sequences as analogs of nuclei. Therefore they would contain matter. In zero energy ontology (ZEO), BOs could transform to their time reversals (WOs).
  2. Are genuine GBOs as time reversals of GWOs possible? In zero energy ontology (ZEO), one can imagine that a "big" state function reduction (BSFR) in the galactic scale takes place and a GWO transforms to a GBO. If the cosmic strings have heff=h or even heff<h, a possible interpretation is that the magnetic flux tubes carrying dark matter have transformed during the stellar evolution to those carrying only matter with heff≤ h. In BSFR they would become initial states for a time-reversed process leading to the generation of galaxies in the reverse time direction. Galaxies would be "breathing". GWOs could also be formed by a fusion of stellar WOs as time reversals of stellar BOs.
  3. This allows one to imagine an evolutionary process in which each evolutionary step gives rise to flux tubes whose thickness is larger than the initial flux tube thickness. Also the value of heff of the final state of a given step could increase gradually.

    The differences with respect to the previous initial state would be the arrow of time, the thickness of the flux tubes, and the fact that they contain matter, and possibly also the value of heff, which could increase.

  4. GWOs can also "eat" ordinary matter. The value of heff for the ordinary matter could increase: the energy needed to increase heff would come from the energy liberated as gravitational binding energy is generated in the process. Therefore GWOs could look like ordinary galactic blackholes although the main source of energy would be the cosmic string.
Many properties of the quasars suggest that they feed energy to the environment rather than vice versa. In this respect they look like GWOs.
  1. If one can assign to quasars genuine GWOs, their mass would come from the dark energy and matter of the cosmic string rather than from the environment by the usual mechanisms. This conforms with the findings described above (see this).

    Objects known as galactic black holes would consist of a thickened cosmic string, which suggests an explosive expansion generating heff>h dark matter so that the interpretation as GWOs would make sense. If star formation near the galactic blackhole takes place, this could be due to an explosive magnetic bubble emission from GWO identified as a monopole flux tube bundle carrying dark matter.

  2. Star generation near the galactic blackhole would support the interpretation of the galactic blackhole as a GWO. The region near the galactic blackhole contains a lot of stars. Have they entered this region from more distant regions or are they produced by the emission of magnetic bubbles from the galactic blackhole? Star formation near a galactic blackhole associated with a dwarf galaxy has been reported (this).

    There is also evidence for a fast moving galactic blackhole-like object leaving a trail of newborn stars behind it (this). If a GWO emitting magnetic bubbles is in question, the motion could be a recoil effect due to this emission.

    There is also evidence for a galaxy which consists almost entirely (99.9 %) of dark matter (this). Could the explanation be a passive galactic whitehole, a flux tube tangle which has emitted only very few magnetic bubbles?

See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Thursday, February 16, 2023

Is space-time really doomed?

The twistor Grassmannian approach has led to the idea that space-time is doomed. This idea is strongly advocated by Nima Arkani-Hamed. Should one really throw out this notion, or should one sit down for a moment and think carefully before doing anything so dramatic?

One must ask what one means by space-time as a fundamental entity. Does one mean space-time as a non-dynamical entity, such as the Minkowski space of special relativity, or the space-time of general relativity? These are very different things.

There are good motivations for trying to get rid of space-time  of general relativity (GRT).  

  1. Poincare symmetries are fundamental for quantum  field theories and are lost in general relativity: this is an easily identifiable reason for the failure of quantization of GRT.  
  2. Both twistor structure and spin structure exist only under strong additional conditions for a general space-time. Both are fundamental and should exist in some sense.
  3. One of the latest problems is that exotic differentiable structures typically exist, and there are a lot of them - this happens just in the 4-D case! This implies time-like loops and problems with causality in the case of a general space-time (see this).
  But what about Minkowski space? Should one try to get rid of it too?  Poincare and conformal  symmetries are fundamental in gauge theories and twistors code for these symmetries.  There are no twistors without 4-D Minkowski space.  

Should we keep Minkowski space but replace the space-time of general relativity with something for which an analog of twistor space  and twistor structure exists?

  1. Remarkably, the twistor space as an S2 bundle with Kähler structure exists only for M4 (and CP2) (see this). Could one use string theory as a guideline and identify space-times as 4-D surfaces in 8-D H=M4×CP2 having the twistor space T(H), which is the 12-D product of the twistor spaces of M4 and CP2 and has a Kähler structure?
  2. Space-time as a 4-surface in H would have twistor and spin structures and a metric, all induced from those of H. This would give the exact Poincare invariance lost in GRT. The 6-D twistor space of the space-time surface would be a 6-D surface in T(H) and a preferred extremal of the 6-D Kähler action: dimensional reduction would give an S2 bundle over the space-time surface and the action would decompose to a 4-D Kähler action and a volume term to which the cosmological constant can be assigned.
  3. 4-D general coordinate invariance forces holography so that a 3-D surface fixes the space-time surface almost uniquely. One gets rid of the path integral. Quantum states would be superpositions of holographic space-time surfaces/their 6-D twistor spaces with S2 fiber.
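As a purely schematic summary of the structure described in the items above (the notation below is a sketch of mine, not taken from the text):

```latex
% Twistor lift, schematically: the 6-D twistor space Z^6 of the space-time
% surface is a surface in T(H) and a preferred extremal of the 6-D Kähler action.
S_6 \;=\; \int_{Z^6} J \wedge \star J , \qquad Z^6 \subset T(M^4)\times T(CP_2)

% Dimensional reduction: Z^6 becomes an S^2 bundle over the space-time
% surface X^4, and the action decomposes into a 4-D Kähler action plus a
% volume term whose coefficient corresponds to the cosmological constant.
S_6 \;\longrightarrow\; \int_{X^4} J \wedge \star J \;+\; \Lambda \int_{X^4} \sqrt{g_4}\, d^4x
```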
For the development of the ideas about twistor lift of TGD, see for instance this, this,  this,   and this.

For a summary of earlier postings see Latest progress in TGD.

A possible model for a monopole flux tube carrying an electric flux

The localization of the dark mass should have a classical space-time counterpart at the level of the space-time surface. It should also be consistent with the Newtonian view of gravitation in which gravitational flux as an analog of electric flux is conserved. Also consistency with the stringy description of gravitation based on 3→ 4 holography is desirable. This raises the question whether flux tubes carrying Kähler electric flux are possible and whether one can construct candidates for them as simultaneous extremals of the Kähler action and the volume action.
  1. Assume that the solar gravitational flux, and also other gravitational fluxes, can be associated with monopole flux tubes which have a 2-D M4 projection as a string world sheet. If these flux tubes are defined so that the CP2 projection as a homologically non-trivial 2-surface depends on time, a Kähler electric field is generated and the flux tube has a conserved Kähler electric charge QK.
  2. The simplest guess for the flux tube carrying a Kähler electric field is that the homologically non-trivial sphere defining the CP2 projection rotates, not in a 1-D but in a 2-D sense, meaning that at a given point of the string world sheet X2⊂ M4 it is obtained by a local color rotation of S2 in a standard position in CP2.

    A natural interpretation of QK would be as a counterpart of gravitational flux. Note that this requires that Kähler electric charges have the same sign. This picture conforms with the finding that space-time surfaces with stationary, spherically symmetric induced metric with non-vanishing gravitational mass have at least some non-vanishing gauge charges. For monopole flux tubes Kähler electric charge is non-vanishing. If the flux tubes are U-shaped, the Kähler electric flux must vanish.

    The M4 projections of the flux tubes would be counterparts of the strings mediating gravitational interaction in AdS/CFT duality, and would mediate gravitational interaction in accordance with the Newtonian view.

  3. How should one describe the formation of the planets or smaller structures in this picture? One can regard the radial flux tubes from the Sun as analogs of particles and introduce for them a wave function in the orientational degrees of freedom, say as spherical harmonics with well-defined angular momentum.

    The magnetic bubble would correspond to a flux tube structure tangential to say 2-D sphere around the Sun and attached to the radial flux tube structure by wormhole contacts. This structure carries matter as dark particles (fermions).

    A nearly complete collective localization in the orientational degrees of freedom would correspond to a state function reduction involving the reorganization of the gravitational flux tubes to a radial bundle with a definite orientation. This would force the tangential flux tube tangle to reduce in size so that it corresponds to the magnetic body of, say, a planet, and would give rise to the planet after the transformation of dark matter to ordinary matter. Also a localization to a torus-like structure is possible and would give rise to a ring-like structure.

    The reduction of quantum coherence to a smaller scale would give rise to smaller structures such as formation of flux tube bundles assignable to mini-planets and even smaller structures as in the case of the Kuiper belt and Oort cloud.

What can one say of the flux tubes carrying Kähler electric field?

  1. I have proposed this kind of extremals in the model of honeybee dance (see this), which was inspired by the work of topologist Barbara Shipman (see this), who proposed that honeybee dance reflects the color symmetry of strong interactions. In the standard model this proposal does not make sense but is natural in the TGD framework.

    The local color rotation sk→ gk(sl) is an isometry of CP2 and leaves the Kähler form Jkl dsk ∧ dsl and the line element ds2= skl dsk dsl of the Kähler metric invariant. Using coordinates xμ for X2 and sk for S2, the induced Kähler form has the following structure.

    • The S2 part is the same as for the standard S2, that is Jkl→ ∂k gr Jrs(g-1(s)) ∂l gs = Jkl(s). The same formula holds true for the CP2 contribution to the induced metric.
    • The X2 part is of the form

      Jμν = gμk (g-1 J g-1)kl(s) gνl = (∂μg g-1) J (g-1 ∂νg) .

      The formula resembles the gauge transformation formula.

      Here the shorthand notations

      gμk = ∂μ gk(s) ,

      gkl(s) = ∂l gk ,

      (g-1 g)kl = δkl ,

      have been used.

    • The mixed X2-S2 components are

      Jμl = gμk (g-1 J)kl(s) .

      For the CP2 contribution to the induced metric similar formulas hold true.

  2. The induced Kähler electric field has both X2 and S2 components, and the X2 component defines the Kähler charge assignable to the transversal section S2 as an electric flux. What is nice is that, although one does not have electric-magnetic duality, the Kähler electric field is very closely related to the Kähler magnetic field. Whether the solution ansatz works without additional conditions on the local color rotation has not been proven.
What could one say about the possible additional conditions on the locally color rotating object?
  1. The model for the massless extremals (MEs) (see this) assumes that the space-time surface is locally representable as a map M4→ CP2 such that the CP2 coordinates are arbitrary functions of the coordinates u= k· m and v= ε· m, where k is a light-like wave vector and ε a polarization vector orthogonal to it. This motivates the term "massless extremal".
  2. If this representation is global, one expects that the space-time surface has a boundary assignable to E2 so that a tube-like structure is obtained. Boundary conditions guaranteeing that isometry charges do not flow out of the boundary must be satisfied. In particular, the boundary must be light-like. These conditions are discussed in detail in (see this).
  3. The color rotating objects could correspond to a situation in which the color rotation depends on the light-like coordinate u only and the solution is such that the map of a region of E2 to CP2 is 2-valued and has S2 as its image. Besides S2, also more general complex 2-submanifolds of CP2 can be considered.
  4. The key difference between MEs and the massless fields of gauge theories is that MEs are characterized by a non-vanishing light-like Kähler current (see this). This must have deep physical implications.

    One has a Kähler electric charge defined by the standard formula. The Kähler electric flux is orthogonal to the transversal cross section of the ME and has a light-like direction instead of a space-like direction. One can also calculate the charge for a section with a time-like normal. Could this make it possible for the flux tubes to have Kähler electric flux as an analog of gravitational flux? This picture would be consistent with both the Newtonian picture of gravitation mediated by the gravitational flux and the field theory picture of gravitation mediated by massless particles represented by MEs.

One can consider several generalizations of the solution ansatz motivated by physical intuition but not really proven.
  1. The surface could define a many-sheeted covering of M4. The conditions for the surface could be formulated as conditions stating that 4 functions of coordinates u,v and CP2 coordinates vanish.
  2. The "polarization coordinate" v could depend on the linear coordinates of E2 non-linearly. For instance, it could correspond to a radial coordinate of E2. The polarization would not be linear anymore.

    A possible restriction on v is that v is the real part of a complex analytic function. The surface would possess a 4-D analog of holomorphy in the sense that complex CP2 coordinates are analytic functions of a complex coordinate w of E2 and a hypercomplex coordinate of M2. Also the coordinate u could be replaced with the "real" part of a hyper-analytic function of M4 depending on the light-like coordinate u, but this does not seem to change the situation in any way. This is a highly attractive 4-D generalization of the holomorphy of string world sheets.

  3. One can even consider the possibility that the decomposition M4= M2× E2 to longitudinal and transversal spaces could be local so that also the light-like direction would be local. The condition would be that the distributions of the tangent spaces of M2 and E2 are integrable and define a 4-surface having slicings to mutually orthogonal 2-D string world sheets and partonic 2-surfaces. This would correspond to what I have christened the Hamilton-Jacobi structure (see this).

    Physically this would mean the replacement of M2 as a planar analog of a string world sheet with a curved string world sheet in M4. The partonic 2-surface could in turn be interpreted as a many-valued image of a complex 2-surface of CP2 in the local E2.

In the recent situation, the simplest form of MEs motivates the question whether the local color rotation of S2, or of a more general complex 2-manifold Y2⊂ CP2, depends on the light-like coordinate u=k· m only. The induced Kähler gauge potential would then depend on u only so that the M2 part of the Kähler electric field would vanish.

The Kähler electric flux would be parallel to E2 (or the image of S2 in M4) and Kähler electric charge as electric flux could be (but need not be) non-vanishing. This flux would not however be in the direction of the flux tube so that it cannot correspond to gravitational flux.

Since the Kähler electric flux would be very closely related to the Kähler magnetic flux, an electric analog of the homological Kähler magnetic charge would make sense. Could this topologically quantize the Kähler electric charge and also the electric charge classically? In the case of CP2 type extremals, the self-duality of the CP2 Kähler form indeed implies this. One would have the electric-magnetic duality proposed to hold true in TGD.

See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Wednesday, February 15, 2023

TGD view of brain as a resonance chamber

For more than 20 years neuroimaging studies using functional magnetic resonance imaging (fMRI) have been detecting brain-wide complex patterns of correlated brain activity. These patterns appear disrupted in a wide range of neurological and psychiatric disorders. These patterns form spontaneously, even at rest when no particular task is being performed. They have been detected not only in humans but also across mammals, including monkeys and rodents.

The conjecture has been that these patterns correspond to standing waves, so that the brain would be analogous to a resonance chamber in which waves propagate and are reflected to form standing wave patterns. The standing wave character implies that the patterns are oscillating. The recent studies made using fast functional magnetic resonance imaging (fMRI), published in Nature Communications, show that this is the case.

What has been demonstrated by researchers at the Champalimaud Foundation and the University of Minho in Portugal are analogues of standing waves associated with brain activity (see this). These findings leave a lot of interpretational freedom since the effects are seen in neural activity. The proposal is that the standing waves are caused by reflections of, say, electromagnetic, acoustic, or chemical waves inside the brain.

The crucial question is how coherence of these waves in the brain scale is possible. In the biology-as-nothing-but-chemistry approach, coherence in the scale of the organism looks mysterious. Could electromagnetic fields help? The standard view is that EEG is a side effect created by the brain, and one encounters the problem of how biochemistry with a coherence length of the order of the molecular scale can induce coherence in the brain scale. The coherence of nerve pulse patterns remains a similar mystery.

The description of the resonance patterns brings to mind the TGD view of the brain and body.

  1. In TGD (see this), Maxwellian and also other gauge fields are geometrized as induced gauge fields and are not primary fields.
  2. The new view of space-time as a 4-dimensional surface in a certain 8-D space-time (H=M4× CP2) implies that the electric and magnetic fields of a system correspond to what I call magnetic and electric bodies (MB and EB). Radiation fields correspond to massless extremals or topological light rays (see this and this). All of these are well-defined geometric structures, 4-dimensional surfaces in H. These bodies are in some respects very much like the ordinary biological body (BB). For instance, they have various kinds of motor actions: the variation of the flux tube thickness induces a variation of the cyclotron frequency associated with it. The scale of MB can be much larger than that of BB and this makes possible macroscopic or even astroscopic quantum coherence at them.
  3. The magnetic body (MB) has an onion-like hierarchical structure with flux tubes within flux tubes within..., which carry dark matter, which in TGD corresponds to phases of ordinary matter with effective Planck constant heff= nh0. When its value is large, there is quantum coherence in long scales, typically proportional to heff. heff also measures the algebraic complexity of the space-time surface and serves as a kind of IQ.
  4. MB at a given level of hierarchy serves as a boss controlling the lower levels down to the level of ordinary biomatter, the biological body. One can say that the roles of fields and chemistry have changed.

    MB receives sensory data from the brain, coded to the modulation of the Josephson frequency assignable to neuronal and also ordinary cell membranes. Josephson radiation induces a sequence of cyclotron resonances at MB, and this in turn induces a sequence of pulses as a response as the excited states return back to the original state.

    This feedback signal induces a sequence of pulses of cyclotron radiation controlling the biological body. This feedback from MB might induce sequences of nerve pulses from sensory receptors to the brain to the higher levels of MB producing in turn feedback inducing motor responses.

  5. The quantum coherence of MB induces ordinary coherence at the level of BB. The coherently oscillating regions of the brain, consisting of functionally similar neurons, would communicate with and be controlled by regions of MB and would be fixed, as the article tells. The brain would be a collection of oscillating regions driven by parts of MB characterized by the cyclotron frequencies of the flux tubes. EEG resonances would correspond to the cyclotron frequencies, and the variation of flux tube thickness, as one particular motor action of MB, could modulate the cyclotron frequencies.
  6. Pathological situations would emerge when sensory communications to some parts of MB or the control by some parts of MB fail. Some flux tubes might not have the correct thickness (magnetic field) and could get out of resonance so that sensory input and control would fail. MB might even lack some body parts.
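The items above invoke the cyclotron frequency f_c = qB/2πm and its modulation by flux tube thickness. A minimal numerical sketch; the "endogenous" field value B = 0.2 Gauss used in TGD inspired biology and the choice of Ca2+ as the ion are assumptions filled in by me, not stated in the text:

```python
import math

e = 1.602e-19   # elementary charge, C
u = 1.661e-27   # atomic mass unit, kg

def cyclotron_frequency(charge_in_e, mass_in_u, B_tesla):
    """Cyclotron frequency f_c = qB/(2 pi m) in Hz."""
    return charge_in_e * e * B_tesla / (2 * math.pi * mass_in_u * u)

B_end = 0.2e-4  # assumed endogenous field 0.2 Gauss, in Tesla

# Ca2+ ion (charge 2e, mass ~40 u): frequency lands in the EEG range.
f_Ca = cyclotron_frequency(2, 40.08, B_end)
print(f"f_c(Ca2+) in B = 0.2 Gauss: {f_Ca:.1f} Hz")

# Flux conservation B*S = const: halving the flux tube cross section S
# doubles B and hence doubles the cyclotron frequency.
print(f"after S -> S/2: {cyclotron_frequency(2, 40.08, 2 * B_end):.1f} Hz")
```

This illustrates how the variation of flux tube thickness, as a motor action of MB, could sweep the cyclotron frequency across EEG resonances.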
What about the standing waves in the TGD framework? It is quite possible that the analogues of standing waves are induced by MB. They would be assignable to the flux tube network at the level of the brain. Standing waves of membrane/neuron potentials are possible and they could also induce acoustic and chemical waves.
  1. With an inspiration coming from the work of Michael Levin's group (see this, this and this), TGD leads to the proposal that the waves of membrane potentials play a key role also in ordinary biology.
  2. Also the findings of Prakash et al (see this, this, and this), related to the "brainy" behavior of simple multicellulars, inspired in the TGD framework the proposal that there exist analogues of nerve pulse patterns in the mV range propagating along gap junction connected complexes of also ordinary cells (see this).
  3. Also the findings of Andrew Adamatsky (see this) about electric communications of sponges support the existence of waves with amplitudes in meV range (see this).
In TGD the standing waves are not possible for a single space-time sheet for the known extremals.
  1. Effective standing waves are however possible: a test particle interacts with a superposition of waves of the same amplitude, propagating in opposite directions and associated with different space-time sheets, which are extremely near to each other, at a distance smaller than the particle size (the CP2 scale of about 10-31 meters). The particle effectively experiences a standing wave although only the effects superpose, not the fields. The observed superposition of effects led historically to Maxwell electrodynamics, which assumes that the superposition of effects corresponds to a superposition of fields.
  2. Superposition of induced fields is not possible in TGD except for the field patterns associated with the massless extremals (see this and this) having 4-D wave vectors with the same direction and therefore propagating without dispersion and being precisely targeted. The effective standing waves require at least 2 space-time sheets.
  3. Standing waves would represent coherence in the scale of the brain. TGD predicts that biological coherence quite generally is induced by dark matter as heff>h phases at the magnetic body (MB) of the system. Could the changes of brain activity be a localized coherent outcome of quantum control by the MB on selected brain regions, characterized by an EEG resonance frequency between MB and brain? Functionally similar neurons would feed "sensory" input to a given region of MB at a characteristic resonance frequency and receive feedback at the same frequency.
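The effective standing wave mechanism of item 1 rests on the elementary identity sin(kx-ωt) + sin(kx+ωt) = 2 sin(kx) cos(ωt): two counter-propagating waves of equal amplitude, here residing on different space-time sheets, superpose in their effects to a standing wave with fixed nodes. A minimal numerical check:

```python
import numpy as np

k, w = 2.0, 3.0                      # wave number and angular frequency
x = np.linspace(0, 2 * np.pi, 200)   # spatial grid
t = 0.7                              # an arbitrary moment of time

# Two waves of the same amplitude propagating in opposite directions...
superposed = np.sin(k * x - w * t) + np.sin(k * x + w * t)

# ...equal a standing wave: spatial profile sin(kx) oscillating as cos(wt),
# with nodes fixed at sin(kx) = 0.
standing = 2 * np.sin(k * x) * np.cos(w * t)

print(np.allclose(superposed, standing))  # True
```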
For a summary of earlier postings see Latest progress in TGD.

Tuesday, February 14, 2023

Magnetic bubbles: summary and outlook

I have written a series of blog posts related to the article (see this) that has gradually developed during the last weeks. This article was inspired by a single puzzling astrophysical observation but was extended by further similar observations. The discussion of these findings allowed us to develop a TGD based vision about the generation of astrophysical structures to a much more detailed level. This vision should apply also to other interactions.

The foregoing discussion suggests that the dynamics of gravitational fields could reduce to the dynamics of flux tubes subject to the conservation of total Kähler electric fluxes, which have a definite sign.

The topological dynamics would be essentially re-organization of the network formed by electric flux quanta as nodes of the network connected to each other by flux tubes, which can also carry Kähler electric flux. Twistor lift of TGD and M8-H duality (see this and this) led to a rather similar picture for the scattering amplitudes (see this and this) in terms of fundamental fermions.

This generalizes also to the dynamics of gauge fields. Flux tubes can be characterized by the value of heff characterizing a given interaction, and the notion of gravitational Planck constant generalizes to all interactions. The key physical idea is that Nature is theoretician friendly: if quantum coherence is to be preserved, a phase transition replacing the ordinary Planck constant ℏ with ℏeff must take place, when the interaction strength Q1Q2/4π ℏ becomes too large for the perturbation series to converge. The alternative option is that the system decomposes to coherent subunits such that the perturbation series converges for them. This means a reduction of quantum coherence scale.
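The "theoretician friendly" criterion can be illustrated for the gravitational case. The following sketch uses the Sun-Earth pair and the Nottale value β0 = 2-11; these numerical inputs are filled in by me, the text itself only states the general criterion:

```python
# When the dimensionless gravitational coupling G*M*m/(hbar*c) exceeds
# unity, the perturbation series fails; replacing hbar with the
# gravitational Planck constant hbar_gr = G*M*m/(beta0*c) makes the
# effective coupling equal to beta0 < 1.

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
hbar = 1.055e-34  # J s
M = 1.989e30      # solar mass, kg
m = 5.972e24      # Earth mass, kg
beta0 = 2**-11    # velocity parameter v0/c of the Nottale model

alpha_gr = G * M * m / (hbar * c)      # enormous: series cannot converge
hbar_gr = G * M * m / (beta0 * c)      # gravitational Planck constant
alpha_eff = G * M * m / (hbar_gr * c)  # equals beta0 < 1: series converges

print(f"coupling with ordinary hbar: {alpha_gr:.3g}")
print(f"hbar_gr / hbar:              {hbar_gr / hbar:.3g}")
print(f"effective coupling:          {alpha_eff:.3g}")
```

The phase transition hbar → hbar_gr thus preserves quantum coherence exactly where the naive coupling would blow up.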

The understanding of atomic and molecular physics at the space-time level has been a longstanding challenge of TGD.

  1. I have proposed that heff>h for the valence bonds as flux tubes could allow us to gain insights about the periodic table (see this). Monopole flux tubes can also carry ordinary electric fluxes and this would allow us to understand the recent empirical findings about chemical bonds as carriers of electric flux (see this). TGD also suggests a flux tube model for hydrogen bonds. Also a generalization of hydrogen and valence bonds involving quantum gravitation in the TGD sense (see this) can be considered so that quantum gravitation would define an essential part of biochemistry.
  2. What about atoms in the TGD Universe? The proposed description of the gravitational interaction at the level of the planetary system in terms of flux tubes could generalize almost as such to a description of electromagnetic interactions at the atomic level. The U-shaped flux tube pairs with opposite magnetic charges, carrying electromagnetic flux besides monopole magnetic flux, would emanate from protons and connect them to electrons. For a pair of oppositely charged particles, the U-shaped flux tubes would be closed. For ions the flux tube pair would continue outside the atom. The flux tubes of a given atom could also form flux tube bundles. Also linking and knotting are possible for the flux tubes so that the capacity for topological quantum computation emerges.
  3. A powerful restriction comes from the condition that monopole flux tubes must be closed. The proposal is that they are U-shaped and form pairs of flux tubes connecting two systems. This does not require that the Kähler electric charges of the members are opposite. For gravitational flux tube pairs they are of the same sign. For gauge interactions the relative sign can vary.
There are many topics related to flux tubes, which are not considered in the article.
  1. TGD predicts also homologically trivial flux tubes: in the simplest situation X4= X2× S2, the CP2 projection S2 is a homologically trivial geodesic sphere. If they are allowed by the preferred extremal property, they would serve as natural correlates for Maxwellian magnetic fields. One cannot exclude flux tubes with light-like boundaries, and they would be even more natural counterparts of Maxwellian fluxes.

    In the standard terminology of condensed matter physics (see this), they would correspond to the magnetization M, whereas the monopole part of the measured magnetic field, which needs no currents as its sources, would correspond to the magnetizing "external" field H, which can be said to control M (and possibly contains heff=h phases). The presence of monopole fluxes allows us to understand the puzzle posed by the fact that the magnetic field of the Earth is non-vanishing although the dissipation of currents implies the decay of the Maxwellian part.

  2. Interesting questions relate to the many-sheeted space-time. Monopole fluxes can flow between two space-time sheets through wormhole contacts. Elementary particles have wormhole contacts as building bricks (see this, this, and this). Can one separate this level from the levels just discussed? For instance, can one consider closed flux loops travelling through several sheets in long length scales, as the hierarchy of Planck constants would suggest?
See the article Magnetic Bubbles in TGD Universe or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

A model for the formation of Kuiper belts and Oort clouds

I have developed a rather detailed quantum model for the formation of planets based on quantum coherence in astrophysical scales (see this). In this posting the extension of this model to the formation of the Kuiper belt and Oort cloud is discussed.

The former planet Pluto (see this) is the largest object in the Kuiper belt, which has a torus-like shape. The radius of Pluto is 1,191 km, to be compared with Λgr= 3,000 km and with the radius 2,439 km of Mercury. The assumption that Pluto is a planet of solar origin requires β0 → 3β0 for the Pluto-Sun pair at the time when Pluto originated if β0 has remained unchanged during its evolution. This does not conform with the proposed model.

Could the Kuiper belt (see this), which is composed of miniplanets, be analogous to a planetary ring, and be the oldest structure emanating from the Sun by the proposed mechanism? The total mass of the Kuiper belt is currently about 10 per cent of the mass of the Earth, but there are reasons to believe that the original material amounted to 7 to 10 Earth masses. The Kuiper belt could therefore perhaps be seen as a failed Jupiter-sized giant planet, for which the transformation of dark matter to ordinary matter did not lead to a single planet but to a large number of smaller objects.

The standard view of the formation of astrophysical structures is very different from the TGD view (see this and earlier blog postings), and the standard model should exhibit anomalies if the TGD view is nearer to the truth. One example of such an anomaly is described in the article "A dense ring of the trans-Neptunian object Quaoar outside its Roche limit" by Morgado et al (see this and this). The miniplanet known as Quaoar is an object half the size of Pluto. The radius of its ring is 7 times the radius of Quaoar, whereas the Roche limit is 2.5 radii.

The Roche limit follows from the assumption that the satellite is held together only by gravitational forces. Gravitational tidal forces pull apart a satellite rotating too near to a planet, so that it forms a ring. Therefore the formation of stable satellites is not possible within the Roche radius. Conversely, a pre-existing ring can eventually condense to a satellite if its radius is larger than the Roche limit.
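For comparison, the classical fluid Roche limit is d ≈ 2.44 R (ρM/ρm)1/3, where R and ρM are the radius and density of the primary and ρm the density of the satellite material. The sketch below assumes, purely for illustration, equal densities, which roughly reproduces the ~2.5 radii quoted for Quaoar and shows how far outside the limit the observed ring lies:

```python
# Sketch: classical fluid Roche limit d ≈ 2.44 R (ρ_M/ρ_m)^(1/3).
# The equal-density assumption below is hypothetical, used only to
# reproduce the ~2.5 primary radii quoted for Quaoar in the text.

def roche_limit(primary_radius, density_primary, density_satellite):
    """Fluid Roche limit distance, in the same units as primary_radius."""
    return 2.44 * primary_radius * (density_primary / density_satellite) ** (1 / 3)

limit = roche_limit(1.0, 1.0, 1.0)  # in units of Quaoar radii: ≈ 2.44
ring_radius = 7.0                   # observed ring radius in Quaoar radii
print(limit, ring_radius > limit)   # the ring lies far outside the limit
```

The ratio 7/2.44 ≈ 2.9 quantifies how strongly the observation violates the classical bound.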

Also Saturn has two rings which violate the Roche limit. The E ring of Saturn, which - unlike the smaller rings - consists of micron and submicron sized particles, violates the Roche limit: the particles of the E ring tend to accumulate to moons that orbit within the ring. Also the Phoebe ring associated with Saturn's moon Phoebe violates the Roche limit.

Could the TGD view explain the violations of the Roche limit?

  1. The TGD based idea that planets and moons are formed by gravitational condensation of the ordinary matter produced from dark matter at a torus-like ring accompanied by a monopole flux tube is supported by the behavior of the rings of Saturn, which tend to condense to the associated moons.
  2. Could the presence of a circular monopole flux tube slow down the condensation process and make the ring rather stable? I have considered the possibility that the planetary orbits are accompanied by monopole flux tubes defining kinds of planetary paths. Could one identify some signatures of these paths? Do they still contain dark matter?
  3. Planetary radii are consistent with the Roche limit. The matter in the Kuiper belt did not condense to a single Jupiter-sized planet but to miniplanets. This could be interpreted in terms of an ongoing condensation process, which started as the Kuiper belt was formed as an expanding ring of matter accompanied by a monopole flux tube. Could the presence of a monopole flux tube slow down the condensation process? How does the Kuiper belt differ from planets?

    Suppose that the emission of the Kuiper belt from the Sun involved a collective localization from a Bose-Einstein condensate-like state of dark particles to an analog of a momentum eigenstate, so that a planet rotating around the Sun would be formed. Why did the localization for the Kuiper belt occur not to a wave function localized at a point rotating along a Bohr orbit, but to a set of points associated with the Bohr orbit?

    Was the quantum coherence scale reduced by a transition ℏgr→ ℏeff>ℏ, which was followed by ℏeff→ ℏ in the transformation of dark matter to ordinary matter? The tubular Bose-Einstein condensate formed in the tubular localization would have decomposed in the transition ℏgr→ ℏeff>ℏ to smaller regions before the transition ℏeff→ ℏ, which created miniplanets along the flux tube instead of a single planet.

  4. The Oort cloud (see this) is a spherical layer of icy objects surrounding the Sun, likely occupying space at a distance between about 2,000 and 100,000 astronomical units (AU) from the Sun. The estimated total mass of the Oort cloud is 1.9 Earth masses (see this). Suppose that the Oort cloud corresponds to a spherical shell emitted by the Sun. No localization to a tubular Bose-Einstein condensate would have occurred, but the process ℏgr→ ℏeff → ℏ occurred directly, so that a spherical cloud was created.
See the article Magnetic Bubbles in TGD Universe or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Monday, February 13, 2023

Mystery of the "radius valley" for planets as evidence for the Bohr model of the planetary system

Over 5,200 exoplanets have been confirmed hitherto. Exoplanets have posed several challenges for the existing models of the formation of planets (see this).
  1. An unexpected finding is that giant exoplanets can have very small orbital radii, in some cases with orbital periods lasting just a few days. The proposed explanation is that these planets have migrated to the vicinity of their stars.
  2. The second mystery is a size gap in the scale of exoplanets. Transit observations, first by NASA's Kepler Space Telescope and now by TESS, the Transiting Exoplanet Survey Satellite, have found a puzzling absence of planets with radii between 1.4 and 2.4 times that of Earth. Astronomers call this the "radius valley", and although it seems to be telling us something fundamental about the nature, formation and evolution of planets, scientists have yet to ascertain what that something is. What comes to mind is a quantization of orbit radii.
Helium could make up almost half the mass of the atmosphere of giant exoplanets that have migrated close to their star. A team led by PhD student Isaac Malsky of the University of Michigan and Leslie Rogers of the University of Chicago proposes a new approach to the radius valley problem (see this). Perhaps it could signal an increasing abundance of helium gas in the atmospheres of planets 2.4 times larger than Earth. Planets of this scale are often described as mini-Neptunes, and if they have a rocky core, it is deep beneath a thick atmosphere. But why would the abundance of helium gas be higher?

TGD view of the planetary system

Could the TGD based quantum vision of the planetary system (see this, this and this), and the closely related Expanding Earth hypothesis (see this, this, this, and this), provide some insights into this problem? One can start from some observations related to the planetary sizes in the solar system.

  1. The gravitational Compton length of the Sun, Λgr= GM/β0= rS/2β0, is for β0=v0/c= 2-11 about 3,000 km, which is amazingly near to half of the Earth radius rE= 6,371 km. The Expanding Earth model in turn proposes that the Earth radius was rE/2 before the Cambrian Expansion and was therefore roughly the same as the radii of Mercury and Mars.
  2. In Nottale's model (see this), the value of the parameter β0=v0/c appearing in ℏgr is by a factor 1/5 smaller for the outer planets than for the inner, Earth-like planets, including Mars. This means that the value of the gravitational Compton length is scaled up by a factor 5: Λgr→ 5Λgr. If the planetary radius is roughly a multiple of Λgr, the radii of planets would scale like 1/β0 and their distances like 1/β02, and one could speak of a kind of proto planet corresponding to some maximum value of β0.
  3. Using Mm (1,000 km) as a unit, the radii of the planets (see this) are given by

    [rE= 6.371, rJu= 69.911, rUr= 25.362, rMe= 2.4397, rMa= 3.3893, rNe= 24.622, rSa= 58.232, rVe= 6.0518] .

    If one uses 2Λgr=6000 km as a unit, the radii are given by

    [rE= 1.0618, rJu= 11.6518, rUr= 4.2270, rMe= 0.4066, rMa= 0.5649, rNe= 4.1037, rSa= 9.7053 , rVe= 1.0086] .

  4. The giant planets of the solar system come in two varieties. Jupiter and Saturn, known as gas giants, consist primarily of hydrogen and helium and have a radius of roughly 10rE. Uranus and Neptune, known as ice giants, consist of ice, rock, hydrogen, and helium and have a radius near to 4rE, not too far from 5rE. Gas giants are also called failed stars because their composition resembles that of young stars consisting of light elements. Helium makes up roughly one half of the mass of the atmosphere.

    Remarkably, the radii of the giant planets are not very far from 2Λgr,β0/5 and 4Λgr,β0/5, and would very roughly correspond to the first and second octaves of the solar gravitational Compton length for β0/5 in the model of Nottale (see this). In fact, the radii of the inner planets are not far from octaves of the radius of Mars. Does this mean that the expansion by a power of 2 proposed by the Expanding Earth model (see this) has occurred for all planets except Mars and Mercury?
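These numbers are easy to check. The following sketch (assuming only the standard Schwarzschild radius rS ≈ 2.95 km of the Sun) computes Λgr = (rS/2)/β0 for β0 = 2-11 and expresses the planetary radii quoted above in units of 2Λgr ≈ 6 Mm:

```python
# Gravitational Compton length of the Sun: Λgr = (r_S/2)/β0 with β0 = 2^-11.
r_S_km = 2.95                           # Schwarzschild radius of the Sun, km
beta0 = 2.0 ** -11
Lambda_gr_km = (r_S_km / 2.0) / beta0   # ≈ 3,000 km, about half the Earth radius

# Planetary radii in Mm (values quoted in the text), in units of 2Λgr ≈ 6 Mm
radii_Mm = {"E": 6.371, "Ju": 69.911, "Ur": 25.362, "Me": 2.4397,
            "Ma": 3.3893, "Ne": 24.622, "Sa": 58.232, "Ve": 6.0518}
ratios = {name: round(r / 6.0, 4) for name, r in radii_Mm.items()}
print(round(Lambda_gr_km), ratios)
```

The ratios reproduce the list given in the text, with Jupiter and Saturn near 12 and 10, Uranus and Neptune near 4, and the Earth-like planets near 1 or 1/2.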

TGD view of planet formation

The following summarizes the TGD based model for the formation of planets by dark fusion and subsequent transformation of dark nuclei to ordinary nuclei.

  1. In the TGD based model (see this and this), planets could have formed by dark fusion (see this, this, this) of the dark matter at the magnetic flux tubes characterized by ℏgr=GMm/v0. Dark matter would have consisted of sequences of dark protons (possibly nucleons, with a neutron represented as a dark proton having a charged color bond with the dark proton preceding it). These dark nuclei would have transformed to ordinary matter, liberating almost all the nuclear binding energy in this process. This would have induced an explosion.
  2. First He and possibly also heavier elements would have formed by dark fusion. The process would have involved an explosion analogous to a supernova explosion, a kind of local Big Bang. Due to the liberation of nuclear binding energy, the process would have led to a high temperature. Ordinary nuclear fusion starts if the temperature rises above the ignition temperature of ordinary fusion: in the proposed TGD based model, this would have led to the formation of a population II star.
The simplest assumption is that ordinary nuclear fusion has not started for planets although one cannot exclude this possibility in the case of the Earth-like planets with inner core.
  1. If a spherical shell of dark matter was emitted, a gravitationally induced spontaneous breaking of spherical symmetry could be in question. The flow of matter along the magnetic flux tubes of the magnetic bubble to the spot which became a planet would have heated it. Also moons and planetary rings could correspond to this kind of hot spots. The fact that the largest known exoplanet HD 100546 b (see this) is accompanied by a spherical shell supports this option.
  2. The quantum option, which might be too radical, is that the dark planet would not have a spherical mass shell but would be a quantum version of a radial jet delocalized in angular degrees of freedom as, say, an angular momentum eigenstate. The formation of a planet would have been a localization in momentum space, so that the wave function would have been replaced by a time dependent wave function localized at a position moving along a Kepler orbit. The mass would be concentrated at the slowly increasing orbital radius. This picture would conform with the Bohr orbit model.
  3. An option, which is more in line with the standard view, is that the inner core is not due to planetary dark or nuclear fusion. Rather, the dark fusion at the spherical surface would have produced matter, which was gravitationally attracted by the pre-existing core region.
A rough sketch for the planetary evolution

Could one understand the differences between the Earth-like planets, the solar giant planets, and the giant exoplanets in this framework? One must answer at least the following questions.

  1. Why do the giant planets contain mostly helium?
  2. How can giant exoplanets have very small orbital radii, in contrast to the solar giant planets? Have the giant exoplanets migrated near their stars, or could some other mechanism explain their small orbital radii?
Perhaps the following rough sketch could catch some elements of the truth. Suppose that the formation of planets indeed involves a local Big Bang, throwing a layer of the stellar surface outwards, induced by the liberation of nuclear binding energy in the transformation of the dark nuclei produced by dark fusion to ordinary matter.

The fact that the outer planets are older and were thrown out of the Sun earlier suggests a general view of the planetary evolution.

  1. The outer planets are the oldest, and for them the dark fusion at the surface of the Sun would not have had enough time to produce dark variants of the heavier elements. As the transformation to ordinary nuclei occurred in the formation of the planet, only relatively light elements were produced.
  2. For the Earth-like planets, the dark fusion occurring at the surface of the star would have had enough time to produce a spherical layer or pre-planetary spot of dark variants of heavier elements before the explosion accompanying the transformation of the dark nuclei to ordinary nuclei occurred.

    What would be new as compared to the standard model is that elements like the Fe of planetary inner cores would have been generated by dark fusion followed by an explosion of the spherical shell, and thrown out in the formation of planets at the surface of the expanding magnetic bubbles, rather than coming from the decay products of supernovas.

  3. Could ordinary nuclear fusion play any role? The temperature at the surface of the Sun was certainly too low for ordinary nuclear fusion to start. If the heating induced by the transformation to ordinary nuclei was not enough to initiate ordinary fusion in the planetary core, the planet would be a failed star. Even if ordinary fusion was initiated, the increase of the planetary radius by a process analogous to what the Expanding Earth model proposes could have made the density of the fuel too small for nuclear fusion to continue.
One should also understand the sizes of the planets.
  1. Why should the solar giant planets have large orbital radii? Could the radius of the planet increase in discrete steps, as the model for the Expanding Earth suggests? If the size increases in discrete steps, the large size could be due to the fact that the expansion has reached a considerably later stage for the solar system as compared to the exoplanetary systems. Could giant exoplanets with small orbital radii accompany very young stars?

    Or does the size remain constant as the existence of giant planets with very small orbital radius suggests?

  2. Could the smaller value of β0 for outer planets imply a larger radius as is suggested by the fact that giant planets have radii, which are roughly 5 and 10 times the radius of Earth?
A concrete model

Since the orbital radius of a planet correlates with the duration of the expansion, the outer planets would have formed before the inner planets. Planets would have been emitted as magnetic bubbles containing dark matter, or as the quantum jets described above. Planetary systems would tell the story of planetary evolution: an astrophysical variant of the "ontogeny recapitulates phylogeny" principle would be realized.

To build a more concrete model, assume that the value of the parameter β0 characterizes the Sun-planet pair. A second parameter would be an integer k characterizing the radius of the planet as a multiple of Λgr. This assumption is inspired by the observation that the planetary radii are multiples of Λgr≈ rMars.

  1. Assume that the Bohr model makes sense so that the radius of planetary orbits is given by

    rn= 4π n2GM(star)/β02 .

  2. The condition suggested by a standing wave in the radial direction

    rplan= k Λgr = k GM(star)/β0 , k=1,2...

    is certainly approximate but would conform roughly with the radii of solar giants planets for k=2,4 suggesting that k is power of two as Expanding Earth model assumes. All planets except Mercury and Mars would have experienced the transition k=1→ 2.

  3. For the inner planets, one obtains the condition

    rorb/rplan= n2 4π/kβ0 .

    An appropriate generalization holds true for the outer planets with different values of β0 and n. The small value of rorb and the large value of rplan for giants with a small orbital period favor small values of n and large values of β0<1 and k.

    For β0=1, this gives the lower bound

    rorb/rplan ≥ n24π/k .

    Note that the solar radius r(Sun)= 696.340 Mm is roughly 10 times the radius rJu= 69.911 Mm of Jupiter. The largest known exoplanet HD 100546 b has a radius of about 6.9 rJu and is probably a brown dwarf (see this).

  4. The empirical input from the very short orbital periods of some giant exoplanets, which are a few days (see this), gives an additional condition. For a circular orbit, the period T relates to the orbital radius via Kepler's law

    T2= 4π2× r3(orbit)/GMc2 .

    Using rorb= n2(4π GM/β02), one obtains

    T= 8 π5/2 (n303) (rS/c) .

    For a given period T and stellar mass M, this gives

    β0= 2 π5/6 n (rS/cT)1/3 .

    n=1 is natural for the lowest Bohr orbit. For a solar mass one has rS=3 km. For T= 24 hours this would give β0= 2.53×10-3= 1.295× 2-9, to be compared with the estimate β0= 2-11 for the Sun. The result conforms with the idea that β0 decreases gradually during the evolution of the planetary system, perhaps in powers of 1/2.

    If the radius of the planet is given by rplan= k GM/β0 and the giant planet has the radius of Jupiter, about 70,000 km, one has k= 2 rplanβ0/rS ≈ 118. In this case the planet could be regarded as a brown dwarf (see this), which had too low a mass to reach the temperature making nuclear fusion possible.

  5. One might end up with problems with the idea of orbital expansion, since the Bohr radius is given by rn= 4π n2GM(Sun)/β02, where n is the principal quantum number. n should be small for a giant exoplanet with a very small orbital radius. Too small orbital radii are however not possible for a given value of β0.

    The Nottale model suggests that β0 is dynamical, quantized, and decreases in discrete steps at some critical values of the orbital radius during the expansion, so that also rplan increases at certain critical values of rorb. I have earlier developed an argument that β0 is quantized as β0=1/n, n integer. It must be emphasized however that the outer and inner planets could also correspond to the same value of β0 if the values of n for them come as multiples of 5.

  6. The reduction β0→ β0/5 in Λgr=GM/β0, which appears in the formula for rplan, would induce an increase of the planetary radius.

    Does the value of the parameter k need to change during the orbital expansion? The existence of giant planets with very small orbital radii would conform with the assumption that the value of k does not change during the evolution. On the other hand, the idea that planets should participate in the cosmic expansion in discrete jerks, and the observation that the radii of planets are roughly power-of-2 multiples of Λgr≈ rMars, suggest that k can increase in discrete steps coming as powers of 2.

  7. The former planet Pluto (see this) is the largest object in the Kuiper belt, which has a torus-like shape. The radius of Pluto is 1,191 km to be compared with Λgr= 3,000 km and to the radius 2,439 km of Mercury.

    The assumption that Pluto is a planet of solar origin requires β0 → 3β0 for the Pluto-Sun pair at the time when Pluto originated if β0 has remained unchanged during its evolution. This does not conform with the proposed model.

    Could the Kuiper belt (see this), composed of miniplanets, be analogous to a planetary ring, and be the oldest structure emanating from the Sun by the proposed mechanism? The total mass of the Kuiper belt is currently about 10 per cent of the mass of the Earth, but there are reasons to believe that the original material amounted to 7 to 10 Earth masses. The Kuiper belt could therefore perhaps be seen as a failed Jupiter-sized giant planet, for which the transformation of dark matter to ordinary matter did not lead to a single planet but to a large number of smaller objects.
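The period arithmetic above is easy to verify numerically. The following sketch uses only values quoted in the text (rS = 3 km for the Sun, T = 24 hours, n = 1): inverting Kepler's law for the Bohr orbit rorb = 4π n2 GM/β02 gives β0 = 2 π5/6 n (rS/cT)1/3, which reproduces the quoted β0 ≈ 2.53×10-3:

```python
# Sketch: solve β0 from the orbital period using the text's Bohr orbit
# r_orb = 4π n² GM/β0² (GM in length units, GM = r_S/2) and Kepler's law.
# Inputs are the text's values: r_S = 3 km, T = 24 h, n = 1.
import math

r_S = 3.0          # Schwarzschild radius of the Sun, km
c = 2.998e5        # speed of light, km/s
T = 24 * 3600.0    # assumed orbital period, s
n = 1              # principal quantum number of the Bohr orbit

beta0 = 2.0 * math.pi ** (5 / 6) * n * (r_S / (c * T)) ** (1 / 3)

# Consistency check: the period formula T = 8 π^(5/2) (n³/β0³) (r_S/c)
# must return the assumed period when β0 is substituted back.
T_back = 8.0 * math.pi ** 2.5 * (n ** 3 / beta0 ** 3) * (r_S / c)

print(beta0, T_back / 3600.0)  # β0 ≈ 2.53e-3 and the period in hours
```

Since the second formula is the exact inverse of the first, T_back returns the assumed 24 hours up to floating point error.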

These considerations suggest a simple model for the evolution of the parameters β0 and k, assumed to characterize planet-star pairs, during the expansion.
  1. β0 was reduced to β0/5 at the distance where it became impossible to realize circular Bohr orbits for β0≈ 2-11 anymore. The radius of the planet was increased by a factor 5, which transformed an Earth-like planet into a giant planet.
  2. The radii of Jupiter and Saturn would have been roughly 2rE before this, and the radii of Uranus and Neptune would have been roughly rE. Mercury and Mars would have had a radius not far from rE/2. The p-adic length scale hypothesis is suggestive.
  3. The increase of k is consistent with the Expanding Earth model involving the increase of Earth radius by a factor k=2.

    The Expanding Earth model (see this) and the fact that Λgr is roughly rE/2 ≈ rMars suggest an even simpler model. The outer planets have suffered the transition β0→ β0/5. Jupiter and Saturn, with a radius of about 20Λgr, have also suffered two scalings k=1→ 2→ 4. The remaining planets, except Mars and Mercury, have suffered the scaling k=1→ 2. In the simplest model, the solar proto planet would have had a radius roughly that of Mars and Mercury.
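As a rough numerical check of this simplest model, the sketch below compares the predicted radii rplan = k Λgr(β0), with Λgr = 3,000 km for β0 = 2-11 and Λgr → 5Λgr for the outer planets, against the observed planetary radii; the (k, scaling) assignments follow the assignments just described:

```python
# Sketch: predicted radii r_plan = k * Λgr(β0), with Λgr = 3,000 km for
# β0 = 2^-11 and Λgr → 5Λgr for the outer planets (β0 → β0/5).
Lambda_gr = 3000.0  # km

predictions = {                      # (k, Λgr scaling factor)
    "Mars":    (1, 1), "Mercury": (1, 1),
    "Earth":   (2, 1), "Venus":   (2, 1),
    "Uranus":  (2, 5), "Neptune": (2, 5),
    "Saturn":  (4, 5), "Jupiter": (4, 5),
}
observed = {"Mars": 3389, "Mercury": 2440, "Earth": 6371, "Venus": 6052,
            "Uranus": 25362, "Neptune": 24622, "Saturn": 58232, "Jupiter": 69911}

for name, (k, scale) in predictions.items():
    r_pred = k * scale * Lambda_gr
    print(f"{name}: predicted {r_pred:.0f} km, observed {observed[name]} km")
```

The predictions agree with the observed radii only within some tens of per cent, as expected for an order-of-magnitude Bohr orbit picture.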

See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.