Thursday, January 25, 2024

Modified Dirac equation and the holography=holomorphy hypothesis

The understanding of the modified Dirac equation as a generalization of the massless Dirac equation for the induced spinors of the space-time surface X4 is far from complete. It is however clear that the modified Dirac equation is necessary.

Two problems should be solved.

  1. It is necessary to find out whether the modified Dirac equation follows from the generalized holomorphy alone. Thanks to the generalized holomorphy, the dynamics of the space-time surface trivializes to the dynamics of minimal surfaces and is universal in the sense that the details of the action are visible only at singularities, which define the topological particle vertices. Could holomorphy also solve the modified Dirac equation? The modified gamma matrices depend on the action: could the modified Dirac equation fix the modified gamma matrices and thus also the action, or does universality hold true also for the modified Dirac action?
  2. The induction of the second quantized spinor field of H on the space-time surface means only the restriction of the induced spinor field to X4. This determines the fermionic propagators as H-propagators restricted to X4. The induced spinor field can be expressed as a superposition of the modes associated with X4. The modes should satisfy the modified Dirac equation, which should reduce to purely algebraic conditions as in the 2-D case. Is this possible without additional conditions that might fix the action principle? Or is this possible only at lower-dimensional surfaces such as string world sheets?
In the article Modified Dirac equation and the holography=holomorphy hypothesis a proposal for how to meet these challenges is put forward and a holomorphic solution ansatz for the modified Dirac equation is discussed in detail.

See the article Modified Dirac equation and the holography=holomorphy hypothesis or the chapter Symmetries and Geometry of the "World of Classical Worlds".

Tuesday, January 23, 2024

Questions related to the notion of color symmetry in the TGD framework

One of the longstanding open problems of TGD has been which of the following options is the correct one.
  1. Quarks and leptons are fundamental fermions having opposite H-chiralities. This predicts separate conservation of baryon and lepton numbers in accordance with observations.
  2. Leptons correspond to bound states of 3 quarks in CP2 scale. This option is simple but an obvious objection is that they should have mass of order CP2 mass. Baryons could decay to 3 leptons, which is also a problem of GUTs.
I haven't been able to answer this question yet, but several arguments supporting the quarks + leptons option have emerged.

Consider first what is known.

  1. Color is real and baryons are color singlets like leptons.
  2. In QCD, it is assumed that quarks are color triplets and that color does not correlate with electroweak quantum numbers, but this is only an assumption of QCD. Because of quark confinement, we cannot be sure of this.
The TGD picture has two deviations from the QCD picture, which could also cause problems.
  1. The fundamental difference is that color and electroweak quantum numbers are correlated for the spinor harmonics of H in both the leptonic and the quark sector. In QCD they are not correlated: both u and d quarks are assumed to be color triplets, and the charged lepton L and νL are color singlets.
    1. Could the QCD picture be wrong? If so, the quark confinement model should be generalized. Color confinement would still apply, but now the color singlet baryons would not be made up of color triplet quark states, but would have more general irreducible representations of the color group. This is possible in principle, but I haven't checked the details.
    2. Or can one assume, as I have indeed done, that the accompanying color Kac-Moody algebra allows the construction of "observed" quarks as color triplet states? In the case of leptons, one would get color singlets. I have regarded this as obvious. One should carefully check which option works or whether both might work.
  2. The second problem concerns the identification of leptons. Are they fundamental fermions with opposite H-chirality as compared to quarks or are they composites of three antiquarks in the CP2 scale (wormhole contact). In this case, the proton would not be completely stable since it could decay into three antileptons.
    1. If leptons are fundamental, color singlet states must be obtained using color-Kac-Moody. It must be admitted that I am not absolutely sure that this is the case.
    2. If leptons are states of three antiquarks, then first of all, electroweak multiplets other than spin and isospin doublets are predicted. There are 2 spin-isospin doublets (spin and isospin 1/2) and 1 spin-isospin quartet (spin and isospin 3/2). This is a potential problem: only one doublet has been detected.
    3. Limitations are brought by the antisymmetrization due to Fermi statistics, which drops a large number of states from consideration. In addition, masses are very sensitive to quantum numbers, so it will probably happen that the mass scale is the CP2 mass scale for the majority of states, perhaps precisely for the unwanted states.
It is good to start by taking a closer look at the tensor product of the irreducible representations (irreps) of the color group (for details see this).
  1. The irreps are labeled by two integers (n1,n2) given by the maximal values of color isospin and hypercharge. The integer pairs (n1,n2) are not additive in the tensor product, which splits into a direct sum of irreducible representations. There is however a representation for which the weights are obtained as the sum of the integer pairs (n1,n2) for the representations appearing in the tensor product.

    The rotation group provides a simplified example: the tensor product of representations with angular momenta j1 and j2 decomposes into representations with angular momenta j1+j2, ..., |j1-j2|. Further, three quarks can combine into a singlet.

  2. On the basis of the triality symmetry, one expects that, by adding Kac-Moody octet gluons, the states corresponding to (p,p+3)-type and (p,p)-type representations can be converted to each other and even the conversion to the color singlet (0,0) is possible. This is the previous assumption that I took for granted and there is no need to give it up.
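As a concrete check of this bookkeeping, here is a minimal sketch (my own, not from the text) computing the dimension and triality of an SU(3) irrep from its two integer labels, interpreted as Dynkin labels. Since a gluon octet has triality 0, adding gluons cannot change triality, which is consistent with the claim that (p,p)- and (p,p+3)-type representations (both triality 0) can be connected to each other and to the singlet.

```python
# Sketch: elementary SU(3) irrep bookkeeping, assuming the labels (n1, n2)
# can be read as Dynkin labels (an interpretation, not stated in the text).

def dim_su3(n1: int, n2: int) -> int:
    """Dimension of the SU(3) irrep with Dynkin labels (n1, n2)."""
    return (n1 + 1) * (n2 + 1) * (n1 + n2 + 2) // 2

def triality(n1: int, n2: int) -> int:
    """Triality of the irrep: (n1 - n2) mod 3."""
    return (n1 - n2) % 3

# Familiar cases: singlet (0,0), triplet (1,0), antitriplet (0,1), octet (1,1).
assert dim_su3(0, 0) == 1
assert dim_su3(1, 0) == 3
assert dim_su3(0, 1) == 3
assert dim_su3(1, 1) == 8

# (p,p)- and (p,p+3)-type representations both have triality 0, like the
# gluon octet, so gluon addition can in principle connect them to (0,0):
for p in range(4):
    assert triality(p, p) == 0
    assert triality(p, p + 3) == 0
assert triality(1, 1) == 0  # octet
```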
Let's look at quarks and baryons first.
  1. U type spinor harmonics correspond to (p+1,p) type color multiplets, while D type spinor harmonics correspond to (p,p+2) type representations.

    From these, quark triplets can be obtained by adding Kac-Moody gluons and the QCD picture would emerge. But is this necessary? Could one think of using only quark spinor harmonics?

  2. The three-quark state UUD corresponds to irreducible representations in the decomposed tensor product. The maximal weight pair is (3p+2,3p+2) if p is the same for all quarks, while UDD with this assumption corresponds to the maximal weights (3p+1,3p+1+3). The value of p may depend on the quark, but even then we get (P,P) and (P,P+3) as maximal weight pairs. UUU and DDD states can also be considered.

    Besides these, there are other pairs with the same triality, and an interesting question is whether color singlets can be obtained without adding gluons. This would change the QCD picture because the fundamental quarks would no longer be color triplets and the color would depend on the weak isospin.

  3. The tensor product of a (p,p+3)-type representation and (possibly more) gluon octets yields also (p,p)-type representations. In particular, it should be possible to get (0,0) type representation.
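The maximal-weight additivity and the electromagnetic charge sums used in this argument are easy to verify mechanically. The following sketch (my own bookkeeping, not from the text) adds the labels of the quark representations U=(p+1,p) and D=(p,p+2) and the standard quark charges; the charge sums are the ones that clash with the lepton identification discussed in the text.

```python
# Sketch: maximal weight additivity for three-quark states and their em charges.
from fractions import Fraction

def add_weights(*reps):
    """Componentwise sum of (n1, n2) labels, i.e. the maximal weight pair."""
    return (sum(r[0] for r in reps), sum(r[1] for r in reps))

def U(p): return (p + 1, p)      # U-type spinor harmonic: (p+1, p)
def D(p): return (p, p + 2)      # D-type spinor harmonic: (p, p+2)

p = 0  # illustrative choice; the pattern below holds for any p
assert add_weights(U(p), U(p), D(p)) == (3*p + 2, 3*p + 2)      # UUD: (P, P)
assert add_weights(U(p), D(p), D(p)) == (3*p + 1, 3*p + 1 + 3)  # UDD: (P, P+3)

# Standard quark charges: q_U = 2/3, q_D = -1/3.
q_U, q_D = Fraction(2, 3), Fraction(-1, 3)
assert q_U + q_U + q_D == 1   # UUD carries charge +1
assert q_U + q_D + q_D == 0   # UDD is neutral
```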

    Consider next the identification of leptons.

    1. For leptons, the neutrino νL corresponds to a (p,p)-type representation and the charged lepton L to a (p+3,p)-type representation.
    2. Could charged antilepton correspond to a representation of the type UDD and antineutrino to a representation of the type UUD?

      Here comes the cold shower! This assumption is inconsistent with charge additivity! UDD is neutral and corresponds to (p,p+3) rather than (p,p). You would expect the charge to be 1 if the correspondence for color and electroweak quantum numbers is the same as for the lepton + quark option!

      UUD corresponds to (p,p) rather than (p,p+3) and the charge is 1. You would expect it to be zero. Lepton charges cannot be obtained correctly by adding charge +1 or -1 to the system.

      In other words, the 3-quark state does not behave, as far as its quantum numbers are concerned, like a lepton, i.e. like a spinor harmonic with opposite H-chirality.

      Therefore bound states of quarks cannot be approximated in terms of spinor modes of H for purely group-theoretic reasons. The reason might be that leptonic and quark spinors correspond to opposite H-chiralities. Of course, it could be argued that since the physical leptons are color singlets, this kind of option could be imagined. Aesthetically it is an unsatisfactory option.

    To sum up, the answers to the questions posed above would therefore be the following:
    1. Quark spinor harmonics can be converted into color triplets by adding gluons to the state (Kac-Moody). Even if this is not done, states built from three non-singlet quarks can be converted into singlets by adding gluons.
    2. The states of the fundamental leptons can be converted into color singlets by adding Kac-Moody gluons. Therefore the original scenario, where the baryon and lepton numbers are preserved separately, is group-theoretically consistent.
    3. Building analogs of leptonic spinor harmonics from antiquarks is not possible since the correlation between color and electroweak quantum numbers is not correct. I should have noticed this a long time ago, but I didn't. In any case, there are also other arguments that support the lepton + quark option. For example, symplectic resp. conformal symmetry representations could involve only quarks resp. leptons.
    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, January 22, 2024

Stochastic resonance and sensory perception

In the TGD framework, subjective existence corresponds universally to the sleep-wakeup cycle defined by the periods of wake-up with opposite arrows of time defined by a sequence of "big" state function reductions (BSFRs) changing the arrow of time. In BSFR, a self with a given arrow of time dies (or falls asleep) and reincarnates as a self with an opposite arrow of time.

In the TGD view, stochastic resonance would synchronize the signals, realized as amplitude modulated carrier waves, with the sleep-wakeup cycle. The wakeup period would correspond to T(spont)= 1/f(spont). Stochastic resonance would correlate the rhythms of subjective and physical existence.

The basic prediction is that this synchrony requires an optimal noise level. Taking the ordinary sleep-wakeup cycle as an example, one can understand what this means. If the stimulus level is too high, concentration on a given task is difficult and problems with sleep appear. If the stimulus level is too low, drowsiness becomes the problem and the resonance with the circadian rhythm tends to be lost.

Concerning the identification of the counterpart of the white noise, there are several guidelines.

  1. White noise could correspond to any signal for which the frequency distribution is constant in the time scale of modulations. The rate of BSFRs should be f(spont)= 2f. In stochastic resonance, the white noise would keep the system in optimal wakeup state.
  2. Many neuroscientists believe that the rate of nerve pulses codes for the sensory input. This need not be quite true but inspires the question whether the nerve pulses define the white noise and whether a single nerve pulse wakes up the neuron. If so, then the rate of nerve pulses could correspond to f=f(spont)/2 since only the nerve pulses with a standard arrow of time are observed.

    Nerve pulse duration is about 1 ms and defines the maximum rate of nerve pulses. On the other hand, the f= 1 kHz frequency is a resonance frequency of brain synchrony and also the average mechanical resonance frequency of the skull.

  3. This observation brings to mind an interesting old observation. For electrons with mass .5 MeV the secondary p-adic time scale T2(e) corresponds to the frequency 10 Hz, the alpha frequency. The mass estimates for the light quarks u and d vary in the range 2-20 MeV. The frequency 1/T2 scales like the mass squared, so the estimate for quarks gives 1/T2 ≈ 1 kHz.

    The TGD inspired quantum biology indeed predicts that QCD allows dark variants with the same masses but with Compton lengths scaled up by ℏeff/ℏ. Does this mean that the kHz frequency scale of nerve pulses corresponds to T2 for quarks and the 10 Hz EEG frequency scale corresponds to T2 for electrons? If this is the case, the secondary p-adic length scales for electrons and quarks are fundamental for the brain.
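The scaling argument above is simple arithmetic and can be checked directly. The sketch below (my numbers for the quark mass are illustrative choices within the quoted 2-20 MeV range) assumes the secondary p-adic frequency scales as mass squared, anchored to 10 Hz for the .5 MeV electron.

```python
# Rough scaling check: if the secondary p-adic frequency scales as m^2
# (i.e. T2 ∝ 1/m^2), the 10 Hz electron scale maps light quark masses
# to the kHz range, as claimed in the text.

F_ELECTRON = 10.0   # Hz, secondary p-adic frequency assigned to the electron
M_ELECTRON = 0.5    # MeV, electron mass

def f_secondary(m_mev: float) -> float:
    """Secondary p-adic frequency under the assumption f ∝ m^2."""
    return F_ELECTRON * (m_mev / M_ELECTRON) ** 2

# Light quark mass estimates 2-20 MeV bracket the kHz scale:
assert f_secondary(2.0) == 1600.0 / 10   # 160 Hz at the lower end
assert f_secondary(20.0) == 16000.0      # 16 kHz at the upper end
assert f_secondary(5.0) == 1000.0        # a 5 MeV quark gives exactly 1 kHz
```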

This raises some questions.
  1. It would seem that cyclotron pulses inducing BSFRs correspond to the white noise behind stochastic resonance. The rate of the detected nerve pulses would correspond to f=f(spont)/2 and to a frequency of modulated carrier wave. Can one imagine a general mechanism for producing the noise realized as nerve pulses?
  2. One can also ask whether a system could keep itself awake and in stochastic resonance in the presence of the necessary metabolic energy feed. Could the system itself produce the white noise as pulse patterns and stay in stochastic resonance with it? If so, the amount of metabolic energy could control the level of noise, in turn controlling the presence of the stochastic resonance.
  3. A nontrivial question is what one means with a system. In TGD, the system involves both the biological body and the magnetic body (MB) carrying dark matter associated with it. MB has a hierarchical structure with levels labelled by the values of heff.
The model for the communication of sensory input from the cell membrane to the magnetic body and for the control of the biological body suggests itself as a mechanism transforming sensory input at the cell membrane to pulse patterns.
  1. At the level of the cell membrane, sensory input corresponds to the oscillations of the membrane potential and to nerve pulses.
  2. This sensory input is communicated to the MB as generalized Josephson radiation modulated by the variation of the membrane potential representing the sensory input. The generalized Josephson frequency is the sum of two parts. The first part corresponds to the ordinary Josephson frequency fJ= ZeV/heff. The second, usually dominating, part corresponds to the difference of the cyclotron frequencies of monopole flux tubes at the two sides of the cell membrane and transverse to it. The energies involved are of the order of ZeV and just above the thermal energy, as required by minimal consumption of metabolic energy. Josephson frequencies are in the EEG range.
  3. At the MB, the dark Josephson radiation generates cyclotron resonance, which transforms the frequency modulated Josephson radiation to a sequence of pulses, which define a feedback to the brain. A natural proposal is that the cyclotron pulse sequences generate nerve pulse patterns serving as the white noise.

    The rate of nerve pulses would dictate the resonant frequency f which can vary from its maximum value of kHz down to 1 Hz and even below it. The cyclotron frequencies for the body parts of the MB would thus select, which frequencies from the frequency spectrum of the Josephson radiation are amplified. Essentially, a Fourier analysis of the sensory input is performed and the spectrum would be represented at the MB.

  4. The nerve pulse patterns would in turn generate a response as modulations of the generalized Josephson frequency sent to the MB, where the response of the system regenerates the white noise. This feedback loop would define a nearly autonomous system staying in stochastic resonance in the presence of a suitable metabolic feed.
  5. Only the frequency modulation by the sensory input appears in this mechanism. Frequency modulation however reduces to the amplitude modulation for the membrane potentials.
  6. The generalized Josephson frequency must be equal to the cyclotron frequency at a given body part of the MB. It can control by a variation of the flux tube thickness whether it receives information from the cell membrane at a given generalized Josephson frequency.
  7. The failure of the communication line between the brain and the MB could cause various disorders since the MB cannot anymore take care of the biological body. Since the cyclotron frequencies of the biologically important ions in Bend=.2 Gauss are in a key role, the concentration of these ions in biomatter is an important factor. Lithium ions serve as a basic example: the cyclotron frequency of lithium is 50 Hz, which corresponds to fgr,Sun. The depletion of lithium ions in the soil is known to induce depression and even suicides.
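The order of magnitude of the ordinary Josephson frequency fJ = ZeV/heff mentioned in step 2 is easy to estimate. The sketch below (my own, with an illustrative membrane potential of 0.05 V and an illustrative heff/h ratio) shows that with the ordinary Planck constant fJ lands in the THz range, so a very large heff/h is needed to bring it down to the EEG range, in line with the TGD claim that the dark Josephson frequencies are in the EEG range.

```python
# Order-of-magnitude sketch (assumed values, not from the text):
# Josephson frequency f_J = ZeV/h_eff for a cell membrane potential.

e = 1.602176634e-19   # C, elementary charge
h = 6.62607015e-34    # J s, Planck constant

def josephson_frequency(V_volts: float, Z: int = 1, heff_ratio: float = 1.0) -> float:
    """f_J = Z e V / h_eff, with h_eff = heff_ratio * h."""
    return Z * e * V_volts / (h * heff_ratio)

# With ordinary h and a ~0.05 V membrane potential, f_J is ~12 THz:
f_ordinary = josephson_frequency(0.05)
assert 1e13 < f_ordinary < 2e13

# An h_eff/h of order 1.2e12 (illustrative) scales this down to ~10 Hz,
# i.e. the alpha band of EEG:
f_dark = josephson_frequency(0.05, heff_ratio=1.2e12)
assert 5 < f_dark < 20
```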
How does sensory perception relate to the stochastic resonance in the proposed sense? The stochastic resonance would be associated with the communications with the MB and the information representable as a modulation of the carrier wave.
  1. Sensory qualia would be labelled by quantum numbers measured repeatedly during the sequences of "small" state function reductions (SSFRs) between BSFRs. Primary sensory qualia would be associated with the sensory organs and the feedback from the MB of the brain to the sensory organs could generate virtual sensory input explaining hallucinations and dreams. This picture fits nicely to vision, olfaction and tactile senses, which are spatial.
  2. The generation of sensory qualia at the level of sensory organs could involve stochastic resonance amplifying the primary sensory input. The sensory input would be transformed to dark Josephson radiation to the MB of the sensory organ and returned back as a pattern of cyclotron resonance pulses in turn generating BSFRs and a modified Josephson radiation but without modification due to nerve pulses.

    When the membrane potential is reduced below the critical value, a nerve pulse would be generated and lead to a processing of the signal at the higher levels of the hierarchy. The rate of the nerve pulses would determine the intensity of the signal at the higher levels of the hierarchy. Similar feedback loops with the local magnetic bodies would take place at the higher levels of the hierarchy and generate higher level representations of the sensory input. The virtual sensory input from MB would lead to the generation of standardized mental images as a pattern completion and recognition.

  3. Stochastic resonance for the sensory receptors would allow coding of various characteristics of the sensory input (such as colors, intensity and frequency of light or sound, ...) to cyclotron frequencies characterizing parts of the MB. Essentially a generalized Fourier analysis of the sensory input, locating the Fourier components to different parts of the MB, would be in question.
Hearing is an exceptional sense in that the temporal aspect is essential.
  1. It would be natural to identify the intensity and frequency of auditory qualia with the cyclotron frequencies labelling the magnetic body parts. In the case of speech and "almost heard" internal speech, the meaning of the speech represents a higher level element related to the temporal aspects, and could be associated with the communications to the MB rather than being a purely spatial quale.
  2. If the heard sound frequencies correspond to Josephson frequencies, why are the other qualia not accompanied by an auditory experience? A partial answer is that hearing involves the sensation of the pitch and intensity of the sound as non-temporal qualia at the neuronal level.

    The temporal aspects of hearing responsible for the meaning of the speech would naturally correspond to the modulations of the membrane potential and of Josephson frequencies. But also other senses involve this aspect. Could these aspects correspond to internal speech providing a cognitive interpretation of the experience, its naming? Could this aspect be universal and accompany all experiences? This would also conform with the fact that the oscillations of magnetic flux tubes are analogous to acoustic waves.

The 12-note scale defines a set of very special frequencies in that these frequencies have a deep emotional meaning. Also octave equivalence is a fascinating phenomenon. Could this be due to the fact that these audible frequencies appear as resonance frequencies in the spectra of the cell membrane Josephson frequencies and cyclotron frequencies for the magnetic flux tubes? If this is the case, magnetic flux tubes would define an analog of an organ played by the sensory input to the MB. How do these special frequencies relate to the gravitational Compton frequencies?
  1. The model for bioharmony, leading to a model for the genetic code (see this, this, and this), leads to a proposal that the Pythagorean scale defines a spectrum of preferred cyclotron frequencies and thus a spectrum of strengths of the endogenous magnetic field Bend. The quint cycle (3/2)^n of the fundamental frequency together with octave equivalence would yield the 12-note scale.
  2. β0 ≈ 1 has been assumed for the Earth and β0 ≈ 2^(-11) for the inner planets of the Sun. Could β0 ≤ 1 have a spectrum? Could this spectrum explain, in the case of the Sun, the EEG spectrum below the 50 Hz frequency spanning 7 octaves (DNA corresponds to 1 Hz), and in the case of the Earth the microwave spectrum in the range .5-67 GHz?
  3. I have considered the possibility that β0 is for number-theoretical reasons quantized as an inverse integer: β0=1/n (see this). Number theoretical constraints allow a more general quantization as rational numbers: β0=m/n. The spectrum of the gravitational Compton frequencies would resonate with the spectrum of the cyclotron frequencies if β0 in fgr = β0c³/GM obeys a quantization producing the 12-note scale. It would be interesting to check whether EEG exhibits the 12-note scale as a fine structure realized as preferred frequencies.
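The quint-cycle construction mentioned above is easy to make explicit: taking powers (3/2)^n and folding each back into one octave yields the 12 notes of the Pythagorean scale. A minimal sketch (standard music theory, not specific to the text's biophysics):

```python
# Sketch: the quint cycle (3/2)^n with octave equivalence generates the
# 12-note Pythagorean scale as exact frequency ratios.
from fractions import Fraction

def pythagorean_scale(n_notes: int = 12):
    """Return n_notes pitch-class ratios in [1, 2), sorted ascending."""
    notes, ratio = [], Fraction(1)
    for _ in range(n_notes):
        notes.append(ratio)
        ratio *= Fraction(3, 2)      # go up a fifth
        while ratio >= 2:            # octave equivalence: fold into [1, 2)
            ratio /= 2
    return sorted(notes)

scale = pythagorean_scale()
assert len(set(scale)) == 12         # 12 distinct pitch classes
assert scale[0] == 1                 # the fundamental

# The cycle does not close exactly: 12 fifths overshoot 7 octaves by the
# Pythagorean comma, about 1.36%.
comma = Fraction(3, 2) ** 12 / Fraction(2) ** 7
assert 1.013 < float(comma) < 1.014
```

The comma is why octave equivalence plus the quint cycle gives only an approximate 12-note closure, a detail any physical realization of the scale would inherit.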
Consider next the microwave hearing as a possible explanation of taos hum.
  1. In microwave hearing, a carrier wave, amplitude modulated in the scale of audible frequencies with a typical frequency in the range of EEG frequencies and therefore below 100 Hz, creates a sensation of sound. The electromagnetic signal would be amplified by stochastic resonance to a variation of neuronal membrane potentials, in turn generating an acoustic signal by the piezoelectric effect.

    This acoustic signal could serve as a virtual auditory input to the ear and generate a sensation with auditory qualia. The mechanism would be the same as in the case of hallucinations and dreams.

  2. Assume that the frequency spectrum associated with the gravitational body of the Earth (fgr=67 GHz) spans as many octaves as that for the Sun. Assume that the frequency spectrum for the Sun (fgr=50 Hz) corresponds to that for EEG, assumed to span 7 octaves (1-128 Hz). The scaling gives in the case of the Earth, for the microwave scaled variant of EEG realized at the biomolecular level, the range .5-149.5 GHz: the upper bound corresponds to the energy 1.5 meV and is somewhat below the maximum frequency 160 GHz of the frequency distribution of the CMB. Note that miniature membrane potentials correspond to the meV energy scale.

    If one replaces the EEG range with the range of frequencies 20 Hz-20 kHz audible to humans, spanning 10 octaves, the upper bound for the scaled frequency spectrum would be 12 THz, which corresponds to the energy .1 eV, which is the energy of a Cooper pair for a cell membrane Josephson junction with voltage .05 V. For bats the audible frequencies extend to 110 kHz and the upper bound would now be .510 THz and correspond to the energy .5 eV, which is the nominal value of the metabolic energy quantum.

  3. There are indications that also the gravitational body of the Moon (with mass 1/83 times that of the Earth) (see this and this) could play a role in quantum biology. The proposed analog of the EEG range for the Earth would be scaled up by a factor 83 with an upper bound corresponding to .12 eV, which corresponds to the energy of a Cooper pair for the cell membrane. For the range of audible frequencies the upper bound would scale up to 8.3 eV, covering visible and UV frequencies.
See the article Taos hum, stochastic resonance, and sensory perception or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, January 20, 2024

About the recent findings of Michael Levin's group

It seems that the findings of Michael Levin's group are revolutionizing biology. The Darwinian vision of life as a struggle for existence is being replaced by life as survival based on cooperation, where conscious collective intelligence plays a key role. Control hierarchies are suggestive. The findings also challenge genetic determinism and suggest that membrane potential serves as a control tool of epigenesis during the embryonic stage. The findings suggest that life forms can be artificially created for various purposes: the applications in medicine can only be guessed at.

The most recent findings of Levin's group are highly interesting from the TGD point of view. The first finding is that it is possible to generate non-standard phenotypes also in the case of human cells, that structure implies function, and that there is only a discrete number of structures. The second finding is that a population of embryos behaves in an unexpected manner: the larger the number of embryos, the better the chances of the embryos to recover from external harmful perturbations. This suggests the emergence of collective intelligence and consciousness.

See the article About the recent findings of Michael Levin's group or the chapter TGD view of Michael Levin’s work.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, January 19, 2024

Could taos hum and quantum gravitation relate to each other?

I have been tormented from time to time by an unpleasant sensation of sound. Always at night. Last night it came back after a long time and lasted for several hours and I had to think about what it was.

During my stay here in Karkkila, it has been a very frequent night experience from time to time, especially in the summer. The sound source does not move. As if someone were keeping a car idling or even revving the engine to drive his fellow men to the brink of rage. It's hard to think that anyone could continue this kind of bullying for very long. Even the option that someone would listen to, say, a video of a formula race at night time year after year seems unlikely, already because of the reaction of the neighbors.

I had to think about these options too, because a few years ago my hometown was a victim of moped terrorists and speeders for a few summers until the police finally became active. Fortunately, that time has passed.

The option that the sounds were hallucinations didn't seem likely. Another option was that they are sensory memories. Such memories are possible and can be induced by electrically stimulating the temporal lobes. For example, some previously experienced pain due to some real cause can be chronically repeated as a sensory memory.

Then at night I realized a possible explanation. When I was living in Hanko, I wrote about a strange phenomenon called taos hum (see this). For the TGD view of taos hum see this and this. While writing about taos hum, I realized that I had this syndrome myself!

  1. An idling diesel engine is a good characterization for the sound. Here in Karkkila, the sound has been only more aggressive: as if deliberate gassing had been involved. Taos hum is not detected by microphones and does not create a normal sensation of hearing.
  2. Taos hum cannot be connected to any device produced by technology. It starts after sunset and the initial cause seems to be biological. Interestingly, also animals and plants start producing electrostatic noise after sunset. In Karkkila, during the winter, there are no other options than trees.
  3. Microwave hearing (the Frey effect) could be involved. A series of microwave pulses can be modulated with low but audible frequencies, for example around 50 Hz. The microwave frequency range spans 3 decades: from 300 MHz to 300 GHz. The pulses interact with the brain and produce an auditory experience; in which parts of the brain is not clear. The effect can also occur in the ears, but not in the normal way.
The carrier frequencies in the Frey effect are of the order of GHz. They are technologically significant (mobile phones for example) and this may explain why the effect has been reported for them. I am not aware of any reports regarding higher microwave frequencies that are not so technologically central. The piezoelectric effect, which converts electromagnetic radiation into sound and vice versa, could be essential to the effect.

Some people can sense the amplitude modulated frequency as a sound from radio masts, for example those used in radiotelephone connections.

Could also my unpleasant experience in Karkkila be the taos hum, which I already suffered in Hanko! Funnily enough, once I realized this connection, I stopped hearing anything! As if someone had worked hard to force me to realize this connection!

1. Connection with quantum gravitation?

Next, it occurred to me to ask what the frequency for the carrier wave of the taos hum could be.

  1. In quantum biology based on TGD, quantum gravity is essential and here Nottale's hypothesis is generalized and assigns macroscopic and even astrophysical quantum coherence to classical gravitational fields created by astrophysical objects.

    In the Earth's gravitational field, the gravitational Compton wavelength is Λgr= GME/(β0c²), where the velocity parameter satisfies β0= v0/c<1. The corresponding frequency does not depend on the mass of the particle (Equivalence Principle). For β0=1 one has Λgr = .45 cm. It corresponds to the microwave frequency fgr=67 GHz. This would be some kind of universal clock frequency of quantum biology.

  2. I have also considered the possibility that computers (see this, this, and this) could acquire some characteristics of a biological organism if their clock frequency is higher than 67 GHz, because then statistical determinism would no longer apply. In fact, the gravitational Compton wavelength associated with the Sun is half the radius of the Earth for β0 ≈ 2^(-11), deduced from the orbital radii of the inner planets identified as Bohr orbits, and corresponds to the EEG frequency of 50 Hz, which inspires many questions.
  3. For biomolecules, microwave frequencies play an essential role. Microwaves are associated with many strange effects such as ball lightning and light balls that have often been interpreted as UFOs. The creation of crop circles could be based on the same mechanisms as the explosion of a tomato in a microwave oven, which can be also used to produce this kind of light balls. There are also reports of lightballs in the act of building a crop circle.
  4. Could the amplitude modulation of radiation with the gravitational Compton frequency produce the taos hum?! The modulating frequencies are in the EEG range and quite low (this brings to mind the gravitational magnetic body of the Sun). Why would this give the impression of an idling diesel engine? Could it correspond to some kind of random noise, but what about the impression of deliberate gassing? The carrier frequencies would be microwave frequencies, by a factor of 67 higher than in the Frey effect, which has been associated with microwave hearing.
There is also another important microwave frequency. The maximum of the frequency distribution of the cosmic microwave background is at the frequency 160 GHz, corresponding to the wavelength .2 cm. This frequency is roughly twice the gravitational Compton frequency for the Earth. It is also close to the upper limit, 300 GHz, of microwave frequencies. Is it a coincidence that these two frequencies are so close to each other?
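The numbers quoted above are easy to check with a few lines of Python. A minimal sketch using standard constants, with the formula Λgr = GME/(β0c2) of the text and β0 set to 1:

```python
# Quick numerical check of the gravitational Compton wavelength and frequency
# quoted above (a sketch; Lambda_gr = G*M/(beta0*c^2) with beta0 = 1).
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M_E = 5.972e24   # mass of the Earth, kg

beta0 = 1.0
Lambda_gr = G * M_E / (beta0 * c**2)   # gravitational Compton wavelength, m
f_gr = c / Lambda_gr                   # corresponding frequency, Hz

print(f"Lambda_gr = {Lambda_gr * 100:.2f} cm")   # ~0.44 cm
print(f"f_gr = {f_gr / 1e9:.1f} GHz")            # ~67.6 GHz
print(f"f_CMB / f_gr = {160e9 / f_gr:.1f}")      # CMB peak vs f_gr, ~2.4
```

The ratio of the CMB peak frequency to fgr comes out close to 2.4, so "roughly twice" is a fair characterization.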

2. Generalization of stochastic resonance as an explanation of the Taos hum?

Stochastic resonance occurs in the brain, and its quantum analog serves as a candidate for the mechanism behind the perception of the Taos hum.

Consider first the classical variant of stochastic resonance, which I have considered here.

  1. Classical stochastic resonance is an amplification mechanism for a signal represented as an amplitude modulation of a carrier wave with a basic frequency f, acting as a harmonic perturbation of a bistable system which is also subject to white noise. In the present case the message could correspond to an amplitude modulated signal with frequency f in the microwave range; f=fgr is an interesting option. One might say that the system manages to extract energy from the noise, which raises the question of whether the mechanism conforms with the second law of thermodynamics.
  2. In the resonance, the signal frequency f must be one half of the average frequency f(spont) for the jumps between the two states of the bistable system: f = f(spont)/2. This condition has a simple physical interpretation: the height of the potential barrier separating the two potential wells varies periodically with a period which is half of the period defined by f, and the best opportunity to get to the other potential well is to hop when the potential barrier is at its lowest.
  3. For the mechanical analog system the rate f(spont) = r0A is proportional to an "Arrhenius factor" A = exp(-ΔV/D), where ΔV is the height of the potential barrier and D characterizes the intensity of the white noise. f(spont) is also proportional to the factor r0 = ω ωb/γ, where ω is the frequency of small oscillations at either bottom of the symmetric potential, ωb is the analogous quantity at the top of the barrier (for a harmonic oscillator potential one would have ω=ωb), and γ characterizes the linear dissipative force (overcritical damping is assumed).
  4. Thus, when the white noise has the correct intensity, a weak harmonic perturbation with a given frequency is amplified in the sense that the Fourier expansion of the system's time development, regarded as jumps between the two states, contains peaks at the multiples of the frequency of the amplitude modulated harmonic perturbation. Neuroscientists refer to this phenomenon as phase locking. The peaks for the higher multiples of the input frequency f are exponentially suppressed. The notion of stochastic resonance makes sense also in the quantum context: now quantum tunnelling would replace the jumps induced by the stochastic noise.
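The matching condition f = f(spont)/2 of the second item, combined with the rate formula of the third, can be solved for the noise intensity needed for resonance. A minimal numerical sketch with purely illustrative parameter values (and ignoring possible 2π factors in the prefactor):

```python
import math

# Illustrative parameters (not taken from any experiment)
omega   = 1.0    # small-oscillation frequency at a well bottom
omega_b = 1.0    # analogous frequency at the barrier top
gamma   = 10.0   # dissipation coefficient (overdamped regime)
dV      = 1.0    # barrier height

r0 = omega * omega_b / gamma     # prefactor of the hopping rate

def f_spont(D):
    """Hopping rate r0 * exp(-dV/D) with the Arrhenius factor."""
    return r0 * math.exp(-dV / D)

# Resonance condition f = f_spont(D)/2, solved for the noise intensity D:
f = 0.01                              # signal frequency; must satisfy 2*f < r0
D_res = dV / math.log(r0 / (2 * f))   # from 2*f = r0 * exp(-dV/D)

assert abs(f_spont(D_res) / 2 - f) < 1e-12
print(f"Noise intensity tuned to resonance: D = {D_res:.3f}")
```

This is the sense in which the noise intensity can be "used to induce synchrony": for a given signal frequency there is a unique resonant value of D.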
In stochastic resonance the system extracts energy from the environment to amplify the signal. Does this really conform with the second law of thermodynamics? It would seem that the second law temporarily fails but holds true with an opposite arrow of time. The TGD view of stochastic resonance is motivated by this question.

Could stochastic resonance generalize to a quantum situation but with the ordinary ontology of quantum theory replaced with the zero energy ontology (ZEO) of TGD (see this)? What would be new is the identification of the ordinary quantum jump as a "big" state function reduction (BSFR) in which the arrow of time changes. One can consider two interpretations.

  1. Consider first the TGD analog of the standard interpretation. The jump between the potential wells corresponds to quantum tunnelling as a transition between states with the same arrow of time and therefore involves two subsequent BSFRs. In stochastic resonance, the frequency f(spont) for these tunnellings should satisfy f = f(spont)/2. Each period T = 1/f would correspond to two pairs of BSFRs. In the TGD framework, this interpretation looks too complicated.
  2. For the second option, a single BSFR defines the counterpart of the hopping between the two potential wells and 2 BSFRs define quantum tunnelling. Bistability has nothing to do with the details of the dynamics: it is universal and corresponds to the two arrows of time. f(spont) is identified as the rate for BSFRs rather than for their pairs and characterizes external perturbations.

    In the stochastic resonance, the rate f(spont)/2 for a pair of BSFRs would be equal to the carrier frequency f so that quantum tunnelling is in synchrony with the driving frequency f and each period corresponds to a quantum tunnelling. The intensity of the noise could be used to induce this synchrony.

    This synchronization mechanism applies to all transitions and to all frequencies f, but f=fgr,E would be in a special role since fgr,E defines a universal gravitational Compton frequency of the Earth. For instance, EEG could involve this mechanism, and the halves of the EEG period would correspond to different arrows of time, as I have indeed proposed (see this) on the basis of the observations of the Fingelkurts brothers (see this). As already noticed, the gravitational Compton frequency fgr,S = 50 Hz of the Sun is an EEG frequency, and EEG frequencies appear as modulation frequencies in the Taos hum.

See the article Taos hum, stochastic resonance, and sensory perception or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, January 14, 2024

What happens in the transition to superconductivity?

I learned about very interesting discoveries related to the quantum phase transition between the ordinary and superconducting phases (see this).

These kinds of findings are very valuable in the attempt to build a TGD based view of what exactly happens in the transition to superconductivity. I have developed several models of high Tc superconductivity (see this, this and this), but there is no single model.

Certainly, the TGD based view of magnetic fields, distinguishing them from their Maxwellian counterparts, is bound to be central for the model. However, the view of what happens at the level of magnetic fields in the transition to superconductivity has remained unclear.

Consider first the findings of the research group. The basic question is how two-dimensional superconductivity can be destroyed without raising the temperature. The ordinary phase transition is induced by thermal fluctuations. Now the temperature is very close to absolute zero and the phase transition is a quantum phase transition induced by quantum fluctuations.

  1. The material under study was a bulk crystal of tungsten ditelluride (WTe2), classified as a layered semi-metal. The tungsten ditelluride was converted into a two-dimensional material consisting of a single atom-thin layer. This 2-D material behaves as a very strong insulator, which means that its electrons have limited motion and hence cannot conduct electricity.
  2. Surprisingly, the material exhibits a lot of novel quantum behaviors, in particular, a switching between insulating and superconducting phases. It was possible to control this switching behavior by building a device that functions like an "on and off" switch.
  3. In the next step, the researchers cooled the tungsten ditelluride down to exceptionally low temperatures, roughly 50 millikelvin (mK). Then the material was converted from an insulator into a superconductor by introducing some extra electrons to the material. It did not take much voltage to achieve the superconducting state. It turned out to be possible to precisely control the properties of superconductivity by adjusting the density of electrons in the material via the gate voltage.
  4. At a critical electron density, the quantum vortices rapidly proliferated and destroyed the superconductivity. To detect the presence of these vortices, the researchers created a tiny temperature gradient on the sample, making one side of the tungsten ditelluride slightly warmer than the other. This generated a flow of vortices towards the cooler end. This flow generated a detectable voltage signal in a superconductor, which can be understood in terms of the integral form of Faraday's law. Voltage signals were in nano-volt scale.
Several surprising findings were made.
  1. Vortices were highly stable and persisted to much higher temperatures and magnetic fields than expected. They survived at temperatures and fields well above the superconducting phase, in the resistive phase of the material.
  2. The expectation was that the fluctuations perish below the critical electron density on the non-superconducting side, just as they do in the ordinary thermal transition to superconductivity.

    In contrast to this, the vortex signal abruptly disappeared when the electron density was tuned just below the critical value at which the quantum phase transition of the superconducting state occurs. At this quantum critical point (QCP), quantum fluctuations drive the phase transition.

What could be the interpretation of these findings in the TGD Universe, and could they give hints for a more precise formulation of the TGD inspired model?
  1. TGD leads to a rather detailed proposal for high Tc superconductivity and bio-superconductivity. There are reasons to think that this model might work also in the case of low temperature superconductivity, in particular in the proposed situation with a one-atom-thick layer (see this, this, this and this).
  2. The unique feature of the monopole flux tube is that its magnetic field needs no currents as its source. The cross section of the flux tube is not a disk but a closed 2-surface. There is no boundary along which a current could flow and generate the magnetic field. In the absence of these Ohmic boundary currents there is no dissipation, and the natural interpretation is that the electrons form Cooper pairs.

    These monopole flux tubes are central for TGD based physics in all length scales and explain numerous anomalies related to the Maxwellian view of magnetic fields. The stability of the Earth's magnetic field and the existence of magnetic fields in cosmic scales are two examples.

  3. There are also ordinary flux tubes with disk-like cross sections, for which a current along the boundary creates the magnetic field just as in an inductance coil. The loss of superconductivity means the generation of these disk-like magnetic vortices with quantized flux, created by an ordinary current at the boundary of the disk-like flux quantum.

    The monopole flux tube has a cross sectional area twice that of the disk-like flux tube, so that one can see the monopole flux tube as obtained by gluing two disk-like flux tubes along their boundaries. The signature of the monopole flux tube is that its magnetic flux is twice that of the ordinary flux tube.

  4. Whether the disk-like flux tubes are possible in the TGD Universe has remained uncertain. My latest view is that they are and I have written a detailed article about how boundary conditions could be satisfied at the boundaries (see this).

    The orbits of the disk-like boundaries would be light-like 3-surfaces. This is not in conflict with the fact that the boundaries look like static structures. The reason is that the metric of the space-time surface is induced from that of M4× CP2, and the large CP2 contribution to the induced 3-metric makes the orbit light-like. One might say that the boundary is analogous to a blackhole horizon.

This picture allows us to sketch what could happen at the quantum critical point.
  1. Both monopole flux tubes and disk-like flux tubes are present at the critical point. Monopole flux tubes dominate above the critical electron density whereas disk-like flux tubes dominate below it. In the transition, pairs of disk-like flux tubes fuse to form monopole flux tubes, and the electrons at the boundaries combine to Cooper pairs inside the monopole flux tube and form a supra current. The transition would be a topological phase transition at the level of the space-time topology and something totally new from the standard model perspective.
  2. The cyclotron energy scale, determined by the monopole flux quantization and the flux tube radius, is expected to characterize the situation. The difference of the cyclotron energies for the monopole flux tube with a Cooper pair and for two disk-like flux tubes with one electron each should correspond to the binding energy of the Cooper pair. If the thermal energy exceeds this energy, superconductivity is lost. The disk-like flux tubes can however remain stable.
  3. The transition could involve an increase of the effective Planck constant heff, but its value would remain rather small compared to its value in high Tc superconductivity. The value of heff should be correlated with the transition temperature, since the difference of the total cyclotron energies would be proportional to heff.
This picture does not yet explain why the vortices suddenly disappear at the critical electron density. The intuitive guess is that the density of electrons is not high enough to generate the disk-like flux tubes.
  1. Suppose that these flux tubes have a constant radius and fill the 2-D system so that a lattice like system consistent with the underlying lattice structure is formed.
  2. There must be at least one electron per flux tube to create the magnetic field inside it. The magnetic flux is quantized, and if the boundary of the disk contains a single electron, the number of electrons per flux tube area S is one: the density of electrons is n = 1/S. If the electron density is smaller than this, the formation of disk-like flux tubes is not possible, and neither is the transition to superconductivity.
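The condition n = 1/S above can be given an order of magnitude with a short calculation. This is a sketch only: it assumes, as an illustration, that each disk-like flux tube carries one single-electron flux quantum Φ0 = h/e; whether h/e or h/2e is the right quantum here is not fixed by the text, and the field strength is an arbitrary example.

```python
h = 6.626e-34   # Planck constant, J s
e = 1.602e-19   # elementary charge, C

Phi0 = h / e    # single-electron flux quantum, Wb (an assumption of this sketch)

def critical_density(B):
    """Minimal electron density n = 1/S for disk-like flux tubes of area
    S = Phi0/B, with one boundary electron per tube."""
    S = Phi0 / B
    return 1.0 / S

B = 1.0  # tesla, illustrative field strength
print(f"n_crit = {critical_density(B):.2e} electrons/m^2")  # ~2.4e14 per m^2 at 1 T
```

The critical density scales linearly with B, so a weaker effective field lowers the threshold correspondingly.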
This proposal is not consistent with the earlier TGD based model for high Tc superconductivity (see this and this). In high Tc superconductivity there are two critical temperatures. At the higher critical temperature Tc1 something serving as a prerequisite for superconductivity appears. Superconductivity however appears only at the lower critical temperature Tc.
  1. The earlier TGD based proposal is that superconductivity appears at Tc1 > Tc in a short length scale so that no long scale supra currents are possible. The magnetic flux tubes would form short loops. At Tc the flux loops would reconnect to form long flux loops. The problem with this option is that it is difficult to understand the energetics.
  2. The option suggested by the recent findings is that disk-like half-monopole flux tubes carrying Ohmic currents at their boundaries are stabilized at Tc1. At Tc they would combine to form monopole flux tubes.
  3. The difference Δ Ec of the cyclotron energies of the monopole and non-monopole states would naturally correspond to Tc, whereas the cyclotron energy scale Ec = ℏeff eB/m of the non-monopole state would correspond to Tc1.
  4. In the first approximation, the value of B is the same for the two states. For the non-monopole state the electrons reside at the boundary, where the effective harmonic potential energy is maximal. For the monopole state the Cooper pair resides in the interior, so that the cyclotron energy is smaller in this case. This gives Δ Ec<0. The simplest interpretation is that the binding energy of the Cooper pair corresponds to this contribution.
  5. If the value of heff is the same for the pair of half-monopole flux tubes and monopole tube states, both Ec and Δ Ec scale like heff/h. Also the critical temperatures Tc and Tc1 would scale like heff/h. High Tc superconductivity would therefore provide a direct support for the hierarchy of Planck constants.
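The scaling claim of the last item can be illustrated numerically: with Ec = ℏeff eB/m and ℏeff = n·ℏ, the cyclotron temperature scale Ec/kB grows linearly with heff/h. A sketch with an illustrative field of 1 tesla (the values of heff/h are arbitrary examples, not predictions):

```python
hbar = 1.055e-34   # reduced Planck constant, J s
e    = 1.602e-19   # elementary charge, C
m_e  = 9.109e-31   # electron mass, kg
k_B  = 1.381e-23   # Boltzmann constant, J/K

def cyclotron_energy(B, n_eff=1):
    """Electron cyclotron energy E_c = hbar_eff * e * B / m_e with hbar_eff = n_eff * hbar."""
    return n_eff * hbar * e * B / m_e

B = 1.0  # tesla, illustrative value
for n_eff in (1, 10, 100):
    T = cyclotron_energy(B, n_eff) / k_B   # temperature scale E_c / k_B
    print(f"heff/h = {n_eff:3d}: E_c/k_B = {T:7.2f} K")
```

For heff/h = 1 the scale is about 1.3 K at 1 tesla; for heff/h of order 100 it reaches the high-Tc range, which is the kind of correlation between heff and the critical temperature that the text proposes.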
See the article What happens in the transition to superconductivity? or the chapter TGD and condensed matter.


Saturday, January 06, 2024

The twistor space of H=M4× CP2 allows Lagrangian 6-surfaces: what does this mean physically?

This article was inspired by the article "A note on Lagrangian submanifolds of twistor spaces and their relation to superminimal surfaces" by Reinier Storm. Out of curiosity, I decided to look at Lagrangian surfaces in the twistor space of H=M4× CP2. The 6-D Kähler action of the twistor space, existing only for H=M4× CP2, gives by dimensional reduction rise to the 6-D analog of twistor space assignable to a space-time surface. In the dimensional reduction the action reduces to the 4-D Kähler action plus a volume term characterized by a dynamically determined cosmological constant Λ.

One can identify space-time surfaces which are Lagrangian minimal surfaces and therefore have a vanishing Kähler action. If the Kähler structure of M4 is non-trivial, as strongly suggested by the notion of twistor space, these vacuum extremals are products X2× Y2 of a Lagrangian string world sheet X2 and a 2-D Lagrangian surface Y2 of CP2, and they are deterministic so that they allow holography. As minimal surfaces they allow a generalization of the holography=holomorphy principle: now the holomorphy is not induced from that of H but follows from the 2-D nature of X2 and Y2. Therefore the holography=holomorphy principle generalizes.

Λ can vanish, and in this case the dimensionally reduced action equals the Kähler action. In this case, vacuum extremals are in question, and symplectic transformations generate a huge number of these surfaces, which in general are not minimal surfaces. The holography=holomorphy principle is not however lost. The Λ=0 sector contains however only classical vacua, and also the modified gamma matrices appearing in the modified Dirac action vanish, so that this sector contributes nothing to physics.

See the article The twistor space of H=M4× CP2 allows Lagrangian 6-surfaces: what does this mean physically? or the chapter Symmetries and Geometry of the ”World of Classical Worlds” .


Friday, January 05, 2024

Comments about Vopson's second law of infodynamics

I read a Euronews article about Melvin Vopson's work claiming that the second law of his infodynamics gives support for the simulation hypothesis (see this). Thanks to Matti Vallin for informing me about Vopson's work.

First I want to set aside what looks to me like nonsense.

  1. Our senses are only electrical signals that the brain encodes. We are just biocomputers. This is the first fatal mistake, which, thank God, we are getting rid of as theories of consciousness develop.
  2. The assumption that we are a simulation is nonsensical and explains nothing, except maybe Vopson's second law of infodynamics, which is most probably wrong, as will be found. It however creates myriads of questions: for instance, who are those simulators and what physics do they obey?
  3. Vopson's second law of infodynamics is motivated by facts: by the experience with computer codes and corona viruses. The interpretation of these facts does not however require Vopson's second law. The system's goal is to have just enough information, only the most significant bits, for it to cope with its tasks. This is because the maintenance of information requires (metabolic) energy, and this must be saved.

    If an increase in information as an increase in complexity is accompanied by an increase in entropy, then this would explain why entropy decreases as the genome develops and gets simpler. The maintenance of complexity requires metabolic energy, and this energy input is minimized subject to the constraint that the genome works. In computer science one speaks of compression.

Therefore there seems to be no need for Vopson's second law of infodynamics.

Above I however made some assumptions: the increase of conscious information means an increase of complexity accompanied by an increase of entropy, and metabolic energy is needed to preserve complexity. How does TGD justify these assumptions and can it explain the findings?

  1. One must explain why an increase of conscious information results in an increase of entropy. One must of course define first what conscious information is, and this cannot be achieved without the theory of consciousness, cognition and quantum biology. In TGD, number theory is an integral part of the physics of cognition.

    One of the first results of the TGD-based theory of cognition was the concept of p-adic entropy as a generalization of Shannon entropy. p-Adic entropies can be interpreted as measures of both algebraic complexity and the amount of conscious information, a universal IQ.

    Conscious information, defined as the sum of p-adic entropies, turns out to be greater than the usual entropy even though the two quantities are strongly correlated. This information tends to grow in the number theoretic evolution. Its increase also results in an increase of entropy. Jeremy England has postulated the same on the basis of empirical findings, but in TGD it is predicted.

  2. One must also understand why metabolic energy is necessary for the growth of information/complexity. Here, too, number-theoretic physics is needed. It predicts a hierarchy of Planck constants heff corresponding to the hierarchy of extensions of rational numbers. heff is proportional to the dimension of the extension and serves as a measure of the algebraic complexity of the extension. The deviation of heff from h means that the phase of ordinary matter in question behaves like dark matter, which would be an essential part of biosystems and would control ordinary biomatter.
  3. The increase of heff requires energy, i.e. metabolic energy. heff tends to decrease spontaneously, so that the system remains complex/intelligent/aware only if it receives metabolic energy continually.

    The minimization of the metabolic energy feed forces the system to eliminate unnecessary complexity, to represent just the most significant bits, and this explains the findings of Vopson and others.
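The claim that the number-theoretic entropy can be negative, i.e. carry information, is easy to demonstrate numerically. A minimal sketch, assuming the definition used in the TGD literature, Sp = -Σk Pk log(|Pk|p), where |·|p denotes the p-adic norm of the rational probability Pk:

```python
import math
from fractions import Fraction

def padic_norm(q: Fraction, p: int) -> Fraction:
    """p-adic norm |q|_p = p^(-k) for q = p^k * a/b with p dividing neither a nor b."""
    if q == 0:
        return Fraction(0)
    k = 0
    num, den = q.numerator, q.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return Fraction(p) ** (-k)

def shannon_entropy(P):
    """Ordinary Shannon entropy, always non-negative."""
    return -sum(float(q) * math.log(float(q)) for q in P)

def padic_entropy(P, p):
    """S_p = -sum P_k log(|P_k|_p); can be negative, i.e. negentropic."""
    return -sum(float(q) * math.log(float(padic_norm(q, p))) for q in P)

P = [Fraction(1, 2), Fraction(1, 2)]
print(shannon_entropy(P))    # log 2 > 0
print(padic_entropy(P, 2))   # -log 2 < 0: negentropy, interpreted as information
```

For equal probabilities 1/2 the 2-adic entropy equals -log 2, so the p-adic measure assigns positive information to a state whose Shannon entropy is maximal; this is the sense in which conscious information and ordinary entropy can grow together.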

All in all, this endless stream of increasingly bizarre proposals pouring out from the web reflects deep gaps in our understanding and underlying wrong assumptions. Our view of dark matter is completely wrong; quantum biology and quantum theories of consciousness and cognition do not exist; and we have completely missed number theory as a basic aspect of physics and cognition. There is not even an understanding of what would serve as a measure of conscious information!


Thursday, January 04, 2024

About Lagrangian surfaces in the twistor space of M4×CP2

I received from Tuomas Sorakivi a link to the article "A note on Lagrangian submanifolds of twistor spaces and their relation to superminimal surfaces" (see this). The author of the article is Reinier Storm from Belgium.

The abstract of the article tells roughly what it is about.

In this paper a bijective correspondence between superminimal surfaces of an oriented Riemannian 4-manifold and particular Lagrangian submanifolds of the twistor space over the 4-manifold is proven. More explicitly, for every superminimal surface a submanifold of the twistor space is constructed which is Lagrangian for all the natural almost Hermitian structures on the twistor space. The twistor fibration restricted to the constructed Lagrangian gives a circle bundle over the superminimal surface. Conversely, if a submanifold of the twistor space is Lagrangian for all the natural almost Hermitian structures, then the Lagrangian projects to a superminimal surface and is contained in the Lagrangian constructed from this surface. In particular this produces many Lagrangian submanifolds of the twistor spaces and with respect to both the Kähler structure as well as the nearly Kähler structure. Moreover, it is shown that these Lagrangian submanifolds are minimal submanifolds.

The article examines 2-D minimal surfaces X2 in a 4-D space X4 assumed to have a twistor space. From superminimality, which looks like a somewhat peculiar assumption, it follows that in the twistor space of X4 (assuming that it exists) there is a Lagrangian surface, which is also a minimal surface. Superminimality means that the normal spaces of the 2-surface form a 1-D curve in the space of all normal spaces, which for the Euclidian signature is the 4-D Grassmannian SO(4)/SO(2)× SO(2)= S2× S2 (SO(1,3)/SO(1,1)× SO(2) for M4). A superminimal surface is therefore highly flattened. Of course, already the minimal surface property favours flatness.
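For reference, the Lagrangian condition used throughout can be written explicitly (standard symplectic geometry, not specific to the article):

```latex
% A submanifold L of a symplectic manifold (M^{2n}, \omega) is Lagrangian when
% the induced symplectic form vanishes and the dimension is half that of M:
\omega\rvert_{L} = 0 , \qquad \dim L = n .
% Thus the 6-D twistor space of X^4 has 3-D Lagrangian submanifolds (the circle
% bundles over 2-D superminimal surfaces appearing in the theorem), while the
% 12-D twistor space T(H) = T(M^4)\times T(CP_2) has 6-D Lagrangian surfaces.
```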

Why is the result interesting from the TGD point of view?

It is interesting to examine the generalization of the result to TGD because the interpretation of Lagrangian surfaces, which are vacuum extremals for the Kähler action with a vanishing induced symplectic form, has remained open. Certainly, if the M4 Kähler form vanishes, they do not fulfill the holography=holomorphy assumption, i.e. they are not surfaces for which the generalized complex structure in H induces a corresponding structure at the 4-surface.

Superminimal surfaces look like the opposite of holomorphic minimal surfaces (this expectation turned out to be wrong!). If the M4 Kähler form vanishes, their counterparts give a huge vacuum degeneracy and non-determinism for the pure Kähler action, which turned out to be mathematically undesirable. The cosmological constant, which follows from twistorialization, was believed to correct the situation.

I had not noticed that the Kähler action, whose existence for T(H)=T(M4)× T(CP2) fixes the choice of H, gives a huge number of 6-D Lagrangian manifolds! Are they consistent with dimensional reduction, so that they could be interpreted as induced twistor structures? Can a complex structure be attached to them? Certainly not as an induced complex structure. Does the Lagrangian problem of the Kähler action make a comeback? Furthermore, could one extend the very promising looking holography=holomorphy picture by allowing also Lagrangian 6-surfaces in T(H)?

Do they have a physical interpretation, most naturally as vacuums? The volume term of the 4-D action characterized by the cosmological constant Λ does not allow vacuum extremals unless Λ vanishes. But Λ is dynamical for the twistor lift and can vanish! Do Lagrangian surfaces in twistor space correspond to 4-D minimal surfaces in H, which are vacuums and have a vanishing cosmological constant? Could even the original formulation of TGD, using only the Kähler action, be an exact part of the theory and not just a long-length-scale limit? And does one really avoid the original problem due to the huge non-determinism of vacuum extremals? And what about the Lagrangian minimal surfaces possibly obtained when Λ is non-vanishing?

The question is whether the result presented in the article could generalize to the TGD framework even though the super-minimality assumption does not seem physically natural at first.

Lagrangian surfaces in H=M4× CP2 and its twistor space

So let's consider the 12-D twistor space T(H)=T(M4)× T(CP2) and its 6-D Lagrangian surfaces having a local decomposition X6=X4× S2. Assume a twistor lift with Kähler action on T(H). It exists only for H=M4× CP2.

Let us for a moment forget the requirement that these Lagrangian surfaces correspond to minimal surfaces in H. Let us first consider the situation in which there is no generalized Kähler and symplectic structure for M4.

One can actually identify Lagrangian surfaces in 12-D twistor space T(H).

  1. Since X6=X4× S2 is Lagrangian, the symplectic form induced on it must vanish. This is true also in S2. The fibers S2 of T(M4) and T(CP2) are identified by an orientation-changing isometry. The Kähler form induced on S2 in X6=X4× S2 is zero as the sum of these two contributions of opposite signs. If this sum appears in the 6-D Kähler action, its contribution to the 6-D Kähler action vanishes. The cosmological constant is zero because the S2 contribution to the 4-D action vanishes.
  2. The 6-D Kähler action reduces in X4 to the 4-D Kähler action, which was the original guess for the 4-D action. The problem is that in its original form, involving only the CP2 Kähler form, it involves a huge vacuum degeneracy. The CP2 projection is a Lagrangian surface or its subset, but the dynamics of the M4 projection is essentially arbitrary, in particular with respect to time. One obtains a huge number of different solutions. Since the time evolution is non-deterministic, holography, and of course the holography=holomorphy principle, is lost. This option is not physically acceptable.
How does the situation change when also M4 has a generalized Kähler form, which the twistor space picture strongly suggests and actually requires?
  1. Now the Lagrangian surfaces would be products X2× Y2, where X2 and Y2 are Lagrangian surfaces of M4 and CP2 respectively. The M4 projections of these objects look like string world sheets and in their ground state are vacuums.

    Furthermore, the situation is deterministic! The point is that X2 is Lagrangian and fixed as such. In the previous case a much more general M4 projection, even a 4-D one, was Lagrangian. There is no loss of holography! The holography=holomorphy principle is however lost: holography would be replaced with the Lagrangian property.

  2. The symplectic transformations of H produce new Lagrangian vacuum surfaces. If they are allowed, one might talk of a symplectic phase. The second phase would be the holomorphic phase. The two major symmetry groups of physics would both be involved. For Λ=0 these Lagrangian surfaces are classical vacua and also fermionic vacua, because the modified gamma matrices appearing in the modified Dirac action vanish identically. Therefore the Λ=0 sector does not contribute to physics at all. For non-vanishing Λ one has only minimal Lagrangian surfaces, which are string like entities, and they contribute to physics.

    It should however be made clear that the symplectic transformations are not isometries, so that the minimal surface property is not preserved. The minimal surface property would reduce the vacuum symmetries to isometries.

  3. In this phase the induced Kähler form and the induced color gauge fields vanish, and the phase would not involve monopole fluxes. This phase might be called the Maxwell phase for non-vanishing Λ. Could it correspond to the Coulomb phase as the perturbative phase of gauge theories, while the monopole flux tubes (large heff and dark matter) would correspond to the non-perturbative phase in which magnetic monopole fluxes are present? If so, there would be an analogy with the electric-magnetic duality of gauge theories, although the two phases do not look like two equivalent descriptions of one and the same thing unless one restricts the consideration to fermions.
Can Lagrangian surfaces be minimal surfaces?

I have not yet considered the question whether the Lagrangian surfaces can be minimal surfaces as they should be for a non-vanishing Λ. In the theorem the minimal Lagrangian surfaces were superminimal surfaces.

  1. For superminimal surfaces, a unit vector in the normal direction defines a very specific 1-D curve in the normal space. It should be noted that for minimal surfaces the trace of the second fundamental form vanishes, so that it cannot be used to define the normal vector. Lagrangian surfaces in twistor space also turned out to be minimal surfaces.
  2. The field equations for the Kähler action do not force the Lagrangian surfaces to be minimal surfaces. However, there exist many minimal Lagrangian surfaces.
    1. In CP2, a homologically trivial geodesic sphere is a minimal surface. Note that the geodesic spheres obtained from it by isometries are regarded here as equivalent. Also a g=1 minimal Lagrangian surface in CP2 is known. There are many other minimal Lagrangian surfaces, and second order differential equations for these surfaces are known (see this).
    2. In M4, the plane M2 is an example of a minimal surface which is also a Lagrangian surface. Are there others? Could Hamilton-Jacobi structures (see this), which also involve the symplectic form and a generalized Kähler structure (more precisely, their generalizations), define Lagrangian surfaces in M4? There is a general construction for Lagrangian minimal surfaces in M4, which allows one to construct them from the solutions of a massless Dirac equation.
    As found, the minimal surface property requires additional assumptions that could correspond to the somewhat strange-looking super-minimality assumption of the theorem. Could super-minimality be another way to state these assumptions?
  3. In the case considered now, the Lagrangian surfaces in H would be products X2 × Y2. Interestingly, in the 2-D case the induced metric always defines a holomorphic structure. Now, however, this holomorphic structure would not be the same as the one related to the holomorphic ansatz, for which it is induced from H.
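For reference, the two properties combined in the discussion above can be stated explicitly (standard differential geometry, added here only for clarity): a submanifold L of a symplectic manifold (M, ω) with metric g is Lagrangian when the symplectic form pulls back to zero on it, and minimal when the mean curvature vector, the trace of the second fundamental form, vanishes:

```latex
% Lagrangian condition for L \subset (M,\omega):
\omega|_{L} = 0 , \qquad \dim L = \tfrac{1}{2}\,\dim M .
% Minimality: vanishing trace of the second fundamental form H^{k}_{\alpha\beta}:
H^{k} \equiv g^{\alpha\beta} H^{k}_{\alpha\beta} = 0 .
```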
So What?

These findings raise several questions related to the detailed understanding of TGD. Should one allow only non-vanishing values of Λ? This would allow minimal Lagrangian surfaces X2× Y2 besides the holomorphic ansatz. The holomorphic structure due to the 2-dimensionality of X2 and Y2 means that the holography=holomorphy principle generalizes.

If one allows Λ=0, all Lagrangian surfaces X2× Y2 are allowed, but they too would have a holomorphic structure due to the 2-dimensionality of X2 and Y2, so that the holography=holomorphy principle would generalize also in this case! The minimal surface property is obtained as a special case. Classically these extremals correspond to a vacuum sector, and also in the fermionic sector the modified Dirac equation is trivial. Therefore no physics is involved.

Minimal Lagrangian surfaces are favored by the physical interpretation in terms of a geometric analog of field-particle duality. The orbit of a particle as a geodesic line (a minimal 1-surface) generalizes to a minimal 4-surface, and the field equations inside this surface generalize the massless field equations.
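The geometric analogy can be written out as equations (standard Riemannian geometry, added for illustration): for a geodesic line the tangent vector is covariantly constant, and for a minimal surface with embedding coordinates h^k(x^α) the trace of the second fundamental form vanishes, which is the curved-space analog of a massless field equation for the h^k:

```latex
\frac{d^{2}h^{k}}{ds^{2}}
 + \Gamma^{k}_{lm}\,\frac{dh^{l}}{ds}\frac{dh^{m}}{ds} = 0
\;\longrightarrow\;
g^{\alpha\beta}\Bigl(\partial_{\alpha}\partial_{\beta}h^{k}
 - \Gamma^{\gamma}_{\alpha\beta}\,\partial_{\gamma}h^{k}
 + \Gamma^{k}_{lm}\,\partial_{\alpha}h^{l}\,\partial_{\beta}h^{m}\Bigr) = 0 .
```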

See the article The twistor space of H=M4× CP2 allows Lagrangian 6-surfaces: what does this mean physically? or the chapter Symmetries and Geometry of the "World of Classical Worlds".

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, January 03, 2024

About mathematical cognition in TGD Universe

When one ponders consciousness, one sooner or later realizes that the emergence of mathematical consciousness has meant an enormous evolutionary leap. Something completely exceptional seems to have occurred when the apple fell down about 360 years ago. Of course, the emergence of our species was also this kind of event. And then there are my heroes of science. This raises questions. What really happens in these kinds of giant leaps of mathematical consciousness, or cognitive consciousness in general? What does being or becoming conscious of a mathematical concept mean? Could one see this kind of event as the emergence of a new reflective level of consciousness? One cannot answer these questions unless one can identify the physical correlates of cognition, in particular mathematical cognition. In the sequel these questions are considered in the TGD framework, in which the number-theoretic and geometric views of physics are dual to each other.

This raises questions. What really happens in these kinds of giant leaps of mathematical consciousness, or cognitive consciousness in general? Is our species even in principle able to answer such questions?

What does being or becoming conscious of a mathematical concept mean? Could one see this kind of event as the emergence of a new reflective level of consciousness? But how should one describe this kind of hierarchy of levels of consciousness? What kind of phenomenon, bringing to mind a phase transition, took place when humankind became conscious of differential and integral calculus, number theory, algebra, and logic? Or did already the emergence of our civilization lead to this event?

One can imagine a more modest goal. What could be the physical correlates of these kinds of cognition? The easy solution of the problem would be that deterministic computations are conscious, and one can formally regard any deterministic time evolution as a computer program. This hypothesis does not, however, explain anything and is untestable.

Even the understanding of how the basic notions and algorithms are realized consciously at the level of cognitive consciousness seems very difficult in the framework of present-day physics, which has hitherto refused to say anything about conscious experience. Is the existing view of physics enough to meet the challenge?

These challenges look formidable but one can try! Maybe one could say at least something about cognition and mathematical cognition?

  1. What are the physical correlates of cognition? Cognition is discrete and finite, and cognition represents. Could one identify cognitive representations as discretizations of the sensory world? TGD leads to a number-theoretic vision of physics dual to the geometric vision and provides a theory of cognition.

    p-Adic topologies seem to be very natural candidates for the topology of cognition. p-Adic number fields fuse together with the reals to form what is called an adele. One can also define an entire hierarchy of adeles induced by algebraic extensions of rationals. There is also a second adele-like structure defined by the union of the p-adic number fields. Two p-adic number fields are glued together at interfaces formed by numbers which have an expansion in powers of an integer divisible by both primes.

  2. Concepts, in particular mathematical concepts, are a key element of cognition. What could be the quantum description of concepts and their emergence? Here standard quantum theory suggests an answer. Classically, a concept corresponds to the set of its instances. In quantum theory, wave functions in this set could define the quantum instances of the concept.
  3. What gives a conscious meaning to the concept? Category-theoretical thinking suggests that "arrows", realized as entanglement between the concept and other concepts, provide the meaning: a state function reduction selects one particular instance of the rule represented by the entanglement. In physics this corresponds to a quantum measurement.
  4. What are the quantum physical correlates of Boolean logic? The Fock states define a Boolean algebra, and in the TGD framework these states span an infinite-D state space. In the zero energy ontology (ZEO) \cite{allb/ZEO} \cite{btar/zeoquestions} of TGD this leads to a natural realization of Boolean algebras, and a zero energy state defines a quantum version of a Boolean map \cite{allb/intsysc}.
  5. We learn mathematics in school as associations such as "1+2"$\rightarrow$ "3", and the recent successes of GPT have demonstrated how powerful a tool associations are. I have considered the possible quantum aspects of AI and GPT in \cite{btart/GPT,tgdcomp}. Could associative rules be represented by using quantum entanglement? This could reduce the development of mathematical understanding to the emergence of entangled states representing correlations as rules.
  6. What could conscious computation mean? This requires a theory of consciousness, and here the TGD inspired theory of consciousness provides an approach. Could a state function reduction reducing the entanglement that defines an associative rule give rise to a conscious experience associated with the association? Could one also imagine some kind of quantum hardware of mathematical consciousness, perhaps representing the basic arithmetic operations?
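The correspondence in point 4 between Fock states and Boolean algebra can be illustrated with a toy sketch (an illustrative model only, not the TGD formalism): the occupation numbers n_i in {0,1} of N fermionic modes label the 2^N Fock basis states, and these labels carry the Boolean operations directly.

```python
from itertools import product

# Toy model: a fermionic Fock basis state over N modes is labeled by
# occupation numbers n_i in {0, 1}; the labels form the Boolean algebra {0,1}^N.
N = 3
fock_basis = list(product([0, 1], repeat=N))  # 2^N = 8 basis states

def AND(state_a, state_b):
    # Bitwise conjunction of two basis-state labels.
    return tuple(a & b for a, b in zip(state_a, state_b))

def OR(state_a, state_b):
    # Bitwise disjunction.
    return tuple(a | b for a, b in zip(state_a, state_b))

def NOT(state):
    # Bitwise negation (particle-hole flip of each mode).
    return tuple(1 - a for a in state)
```

A superposition of such basis states would then correspond to a "quantum Boolean" state, which is what the zero energy states mentioned above would generalize to maps.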
Returning to the motivating question about the emergence of new reflective levels of consciousness and cognition: humans are distinguished from other species by highly evolved social organization, and in the TGD Universe the emergence of higher levels of consciousness assignable to social structures could be a central element. A single human versus society would be like a single neuron versus the entire brain. Here the hierarchy of magnetic bodies (MBs) is highly suggestive. The emergence of revolutionary ideas certainly requires a highly developed society with a large and complex MB. MBs involve an onion-like hierarchy of extensions of rationals, and the dimension of an extension, measuring its complexity, serves as a kind of IQ. Could the emergence of a new level in this hierarchy give rise to these kinds of revolutionary events?
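The p-adic norm defining the p-adic topologies discussed in point 1 can be computed concretely. The following sketch (a standard number-theoretic illustration, not TGD-specific code) shows how the norm |x|_p = p^(-v_p(x)) makes numbers divisible by high powers of p "small":

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """v_p(x): the exponent of the prime p in the nonzero rational x."""
    x = Fraction(x)
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:   # factors of p in the numerator raise v
        n //= p
        v += 1
    while d % p == 0:   # factors of p in the denominator lower v
        d //= p
        v -= 1
    return v

def p_adic_norm(x, p):
    """|x|_p = p^(-v_p(x)); two numbers are p-adically close when
    their difference is divisible by a high power of p."""
    x = Fraction(x)
    if x == 0:
        return 0.0
    return float(p) ** (-p_adic_valuation(x, p))
```

For example, 1024 = 2^10 is 2-adically much smaller than 2, inverting the real-number intuition of size; it is this inversion that makes the p-adic topology a candidate correlate for cognition rather than for sensory experience.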

See the article About mathematical cognition in the TGD Universe.
