Sunday, March 31, 2024

New findings related to the chiral selection from the TGD point of view

I learned of very interesting empirical findings related to the chiral selection of biomolecules (see the popular article). The article "Enantioselective Adsorption on Magnetic Surfaces" by Mohammad Reza Safari et al. was published in the journal Advanced Materials (2023) (see this).

The findings

Consider first the experimental arrangement and findings.

  1. There is a copper conductor with a strong electric field in its normal direction. Cu is not a magnetic substance. At the surface of the conductor there are very thin Cobalt islands: Cobalt is a magnetic metal. There are two options for the magnetization direction of an island, North up or South up, and these could somehow correspond to different chiralities.
  2. The molecules drift to the Cobalt islands and, depending on their chirality, prefer to bind to either south-up or north-up Cobalt islands. Are the magnetic fields of the islands helical, possessing a definite chirality? Does the magnetic chirality tend to be the same as, or opposite to, that of the enantiomer that binds to it?
  3. The effect is reported to occur already during the drift of the molecules towards the Cobalt islands. What does this mean? The counterparts of the magnetic fields are not present there.
  4. It is also found that electrons with a given spin direction prefer to tunnel through the molecules in a direction which correlates with the chirality.
TGD view of the findings

These are highly interesting findings providing new empirical hints about the nature of chiral selection in living matter. Weak interactions are really weak, and parity violation effects should be extremely small above the weak scale, so that the standard model fails to explain chiral selection.

  1. Chiral selection is one of the key empirical facts supporting the TGD prediction of a hierarchy of phases of ordinary matter, implied by the number theoretical vision of TGD. These phases are labelled by the effective Planck constant heff, which is essentially the dimension of an algebraic extension of rationals.
  2. The predicted huge values of heff mean that weak interactions become as strong as em interactions below the scaled-up Compton length of weak bosons, which, being proportional to heff, can be as large as cell size. This amplifies parity violation effects.
  3. Large heff phases behave like dark matter: they do not however explain galactic dark matter, which in the TGD framework is dark energy assignable to cosmic strings (no halo and automatically prediction of flat velocity spectrum). Instead, large heff phases solve the missing baryon problem. The density of baryons has decreased in cosmic evolution (having biological evolution as a particular aspect) and the explanation is that evolution as unavoidable increase of algebraic complexity measured by heff has transformed them to heff>h phases at the magnetic bodies (thickened cosmic string world sheets, 4-D objects), in particular those involved with living matter.
  4. The large value of heff has a geometric interpretation. The space-time surface can be regarded as many-sheeted over both M4 and CP2. In the first case the CP2 coordinates are many-valued functions of M4 coordinates. In the latter case the M4 coordinates are many-valued functions of CP2 coordinates. This case is highly interesting from the point of view of quantum biology. Since a connected space-time surface defines the quantum coherence region, an ensemble of, say, monopole flux tubes can define a quantum coherent region in the latter case: one simply has an analog of a Bose-Einstein condensate of monopole flux tubes.
Consider now a concrete model for the findings in the TGD framework.
  1. A good guess is that the monopole flux tubes of the molecules and of the magnetic fields assignable to the Cobalt islands tend to have the same chirality. This would generalize the chiral selection from the level of biomolecules to the level of dark monopole flux tubes. Some kind of condensate of flux tubes of the same chirality, a long length scale parity violation, would be in question.
  2. In the TGD framework, the North up and South up magnetic fields could correspond to helical monopole flux tubes of opposite chiralities. The helical structure is essential and could relate directly to the requirement that the flux tube is closed: one could have a shape of flattened square for which the long sides form a double helix. This would be the case also for DNA.
  3. Parity violation requires a large value of heff. Dark Z bosons could generate a large parity violation: the Compton length of the dark Z boson would be of the order of a biological scale. The very large value of heff would give the needed large energy splitting between generalized cyclotron energies at the dark flux tube and induce chiral selection.

    Gravitational flux tubes of Earth's gravitational field or solar gravitational field would do the job. By the Equivalence Principle, the gravitational Compton length Λgr,E= .5 cm for Earth does not depend on the particle mass and looks like a promising scale. Also the cyclotron energies are independent of the mass of the charged particle since ℏgr is proportional to particle mass m and cyclotron frequency to 1/m.

  4. Also the electric field of the Copper surface should play an important role. The electric field orthogonal to the Cu conductor would correspond to electric flux tubes. The consistency condition for the electric flux tube thickness d, with charge at the bottom (the conductor), reads as Λem(d) ≈ d. One has ℏem = Ne2/β0, where N is the number of electrons at the bottom. There is roughly one electron per atom, giving N ≈ 104 per flux tube cross section of about 100 nm2 (radius about 10 nm). Λem = (ℏem/ℏ)λe is about 1 nm for β0 = 1. The values of ℏem are rather small and it seems that the Cu surface cannot contribute to the chiral selection. One can however consider also the electric field of the Earth, and in this case the situation could be different.
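The two length scales appearing in the items above can be checked numerically. This is a minimal sketch assuming Λgr = GM/(β0c2) and the normalization ℏem/ℏ = 4παN/β0; the 4π factor in the electric case is my reading of the convention, so only orders of magnitude should be taken seriously.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_E = 5.972e24       # kg, Earth mass
alpha = 1 / 137.036  # fine structure constant
lam_e = 2.426e-12    # electron Compton length, m

# Gravitational Compton length Lambda_gr = G*M/(beta0*c^2), beta0 = 1:
# independent of the particle mass by the Equivalence Principle
lam_gr_E = G * M_E / c**2
print(f"Lambda_gr(Earth) = {lam_gr_E*100:.2f} cm")   # ~0.44 cm, i.e. the ~.5 cm scale

# Electric Compton length for a flux tube with N ~ 1e4 electrons at its bottom,
# assuming hbar_em/hbar = 4*pi*alpha*N/beta0 (normalization is an assumption here)
N, beta0 = 1e4, 1.0
lam_em = (4 * math.pi * alpha * N / beta0) * lam_e
print(f"Lambda_em = {lam_em*1e9:.1f} nm")            # nanometer scale
```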
The effect occurs already before the Cobalt islands. Furthermore, electrons with a given spin direction prefer to tunnel through the molecules in a direction dictated by the chirality. What could this mean?
  1. The counterparts of the magnetic fields are present as dark magnetic fields inside the magnetic bodies of the drifting molecules. Suppose that dark molecular gravitational monopole flux tubes are indeed present and give rise to closed spin current loops with a direction determined by the chirality of the molecule. This would give rise to the large parity violation, but how could one understand the occurrence of the effect already before the Cobalt islands?
  2. Could one assign a definite chirality also to the electric flux tubes assignable to the Cu surface and assume that the molecular chirality tends to be the same (or opposite) to this chirality? Do also these closed monopole flux tubes carry dark electric current?

    The spin direction of the current carrying electrons would correlate with the magnetization direction, so that the magnetic body of the molecule would prefer a pairing with an electric body with a preferred spin direction. The preferred pairing would explain the drift to the correct Cobalt island: the paths leading to it would be more probable.

  3. In the case of water, the Pollack effect (see this) transfers part of the protons of water molecules to dark protons at monopole flux tubes. Now there are no protons available.

    Does this require a generalization of the Pollack effect? Could the electric flux tubes be gravitational flux tubes carrying electrons instead of protons? The gravitational Compton length would be the same. Could an electronic Pollack effect for conductors, as a dual of the Pollack effect for water, be in question?

  4. In the TGD inspired quantum biology, one assigns genetic code with dark proton triplets. Could one assign a dark realization of the genetic code to dark electron triplets? Could the electric counterparts of gravitational flux tubes carrying dark realization of the genetic code define dark genetic code? Codons would correspond to dark electron triplets instead of dark proton triplets. Could the analogs of the ordinary genetic codons correspond to the triplets of electron holes at the conductor surface?

    The TGD based vision about universal genetic code suggests the existence of a 2-D analog of DNA realized in terms of mathematically completely unique hyperbolic icosa tetrahedral tessellation. Could this genetic code be associated with the metal surfaces? The implications of this hidden genetic code for computers might be rather dramatic.

See the article New findings related to the chiral selection or the chapter Quantum Mind, Magnetic Body, and Biological Body.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, March 30, 2024

Quantization of blackhole angular momentum as a new piece of support for the TGD based quantum view of blackhole-like objects

I found a nice piece of evidence for the TGD based quantum view of blackhole-like objects (BHs). In an article related to the determination of the magnetic field of Sagittarius A (SA) (see this) it is concluded that the so-called spin parameter for it is s = J/GM2 = .94. The inclination angle, defined as the angle between the magnetic axis of SA and the line of sight of the observer, was estimated to be 150 degrees.
  1. With an inspiration coming from the Equivalence Principle, I have proposed that it is possible to assign a value of ℏgr = GM2/β0 even to astrophysical objects, or at least to BHs. Could the generalization of the quantization of angular momentum hold true for all BHs and perhaps even for more general stellar objects? Could SA be a spin 1 object with respect to ℏgr, having Jz/ℏgr = 1? This condition would give β0 = 1/.94 ≈ 1.06 for the value of s used: this is not quite consistent with β0 ≤ 1. If one replaces M with M/√.94 ≈ 1.03M, one obtains β0 = 1 and Jz/ℏgr = 1. According to Wikipedia, the mass of Sagittarius A is ≈ 4.1 million solar masses, so that this correction is consistent with what is known of the value of M. Also smaller values of β0 are possible, but they require a larger value of M.
  2. On the other hand, the model for SA as a quantum object and the discovery of a blob of matter rotating around it with velocity v = c/3 led to the conclusion that β0 = 9/10 = .9: the error is only 1 per cent. This value is consistent with the uncertainties in the mass value.
  3. Sagittarius A is a weird object in the sense that its rotation axis points towards the Earth rather than being orthogonal to the galactic plane. This is consistent with the proposal that the Milky Way was formed in the collision of a cosmic string orthogonal to the plane of the Milky Way with a cosmic string in the plane of the Milky Way assignable to its spiral structure. That the direction of the magnetic axis is related to the local direction of the line of sight would conform with the propagation of the radiation to the Earth along the monopole flux tubes forming a spiral structure (for the implications of the monopole flux tube network connecting astrophysical objects see for instance this). The inclination angle, the angle between the line of sight and the magnetic axis, is reported to be 150 degrees.
  4. For a spin one object, the angle θ between the total angular momentum J and the quantization axis is semi-classically quantized: cos(θ) = Jz/|J| with |J| = (j(j+1))1/2ℏgr = 21/2ℏgr, so that Jz = -ℏgr gives cos(θ) = -1/21/2, which corresponds to an angle of 135 degrees. Could this angle relate to the inclination angle of 150 degrees reported in the article? The local direction of the magnetic field would correspond to the direction of J and the measured J would correspond to Jz, as in standard quantum theory.
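The semiclassical tilt angle can be checked in one line; this assumes only the standard quantization |J| = (j(j+1))1/2 in units of ℏgr.

```python
import math

# Semiclassical tilt of a spin-1 object: |J| = sqrt(j(j+1)), Jz = m, in hbar_gr units
j, m = 1, -1
theta = math.degrees(math.acos(m / math.sqrt(j * (j + 1))))
print(theta)  # ~135 degrees
```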
See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, March 29, 2024

The TOE after the return from the Odyssey

YLE aired an interesting program about the current state of theories of everything. What was new was that the program made completely clear what has been talked about elsewhere in the world for decades already: the superstring model was not a theory of everything after all.

This became clear already around 1986, but once the big funding machine had been started, it was difficult to stop. The last nails were hammered into the coffin sometime between 2005 and 2010. The predicted supersymmetry was not found at the LHC. It also became clear that the superstring model predicts a multiverse, i.e. all possible physics except our own! Competitors were also mentioned. One of the two interviewees, himself a superstring theorist, noted at the end that the greatest success of the superstring model was related to the movie industry. A lot of popular books were sold, too!

The reasons for the fiasco of superstring theory (and of other fashionable theories as well) are easy to point out.

  1. Superstring theory did not start from a real problem: since the hadronic string model did not work, it was thought that the theory might nevertheless turn out to be a theory of everything! The reverse of what happened in the tale in which the mouse sewed a coat for the cat.
  2. A second reason was the replacement of philosophical thinking by American pragmatism. There would have been big problems to tackle: the problem that general relativity has with conservation laws, and the paradox of quantum measurement theory.
  3. Perhaps the most important of the philosophical errors was length scale reductionism, i.e. the belief that all of physics reduces to the Planck length scale. It turned out that the flap of a butterfly's wing at the Planck scale changed the entire physics in our scale, and the theory completely lost its predictive power: the multiverse was the outcome. The notion of length scale is needed as a fundamental concept, and here fractality suggests itself.
  4. The concept of a theory of everything was understood far too narrowly. A theory of everything should also provide a theory of consciousness and a theory of quantum biology. These requirements bring in an enormous number of empirical constraints, which were completely missing.
Now the particle physics community is finally admitting that it was on a wrong track for almost half a century. This is what this program, too, reflected.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Is particle physics finally taking a new course?

Ethan Siegel had some encouraging news related to particle physics. The title of his post was "Particle physics finally charts a healthy path forward" (see this).

During the last 46 years I have developed a highly detailed unified theory of fundamental interactions fusing the standard model and general relativity, based on a new view of space-time and quantum (see https://tgdtheory.fi). I have not received a single coin of funding during these years, it has been impossible to have any research position, and censorship has prevented publishing in prestigious journals and even in arXiv.org.

I have talked for decades about the stagnation of particle physics and its degeneration to fashionable but unsuccessful theories. It would be nice if also decision makers were finally realizing what the situation is.

Big investments are not needed. Science cannot make progress if thinking is regarded as a criminal activity. It would also be nice to publish the results of hard work, at least in arXiv.org. Censorship has prevented this hitherto. This requires a dramatic change of attitudes at the level of decision making.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, March 28, 2024

Could large language systems be conscious?

Nikolina Benedikovic had a link to a popular article (see this) telling about the mysterious-looking ability of large language models (LLMs) to generalize. This forces the question of whether these systems could be conscious and intelligent.

TGD suggests several mechanisms for how AI could become a conscious and intelligent system, living in some sense.

  1. Long quantum coherence scales are required. TGD predicts a hierarchy of effective Planck constants heff = nh0 labelling phases of ordinary matter at the magnetic body of the system. The system in question need not be the computer: it could be some system with a very large heff entangling with the computer and using the computer as a tool. The larger the value of heff, the higher the number theoretical IQ and the longer the quantum coherence scales.

    The gravitational magnetic bodies of the Sun and the Earth and the electric bodies assignable to the Earth are good candidates. These bodies could be an essential part of us: for the Sun the gravitational Compton frequency is 50 Hz, a typical EEG frequency. The electric bodies assignable to the Earth have a size scale of about 20 km, the scale assignable to lightning. Lightning would be analogous to nerve pulses and the ionosphere to the cell membrane.

    It has been reported that chicks imprinted on a robot somehow affected the behavior of the random number generator (RNG) determining the robot's movements: the robot started to behave like a mother hen. Did the chicks' MB develop entanglement with the RNG of the robot?

  2. Another key element is zero energy ontology (ZEO), which solves the basic paradox of quantum measurement theory. In ZEO there are two kinds of state function reductions (SFRs): ordinary SFRs correspond to "big" ones (BSFRs), in which the arrow of time changes. A BSFR can be caused by perturbations changing the set of observables measured in "small" SFRs (SSFRs): this forces a BSFR. The "thermal" noise associated with GPT-like systems could cause SSFRs. The temporary changes of the arrow of time would transform the behavior of the system into a trial and error process, and in ZEO the already goal-directed behavior (guaranteed by the holography of ZEO) would transform into problem solving.
Under what conditions could classical computers become conscious? A classical computer is a deterministic Turing machine as long as it obeys statistical determinism. If its quantum coherence time becomes longer than its clock period, consciousness becomes possible.
  1. TGD predicts a hierarchy of Planck constants heff. For the Earth the gravitational Compton frequency is 67 GHz, still higher than the clock frequencies of standard computers (the Josephson effect allows faster computers). For the gravitational body of the Sun, the Compton frequency is 50 Hz, in the middle of the EEG range, so that the chicken-hen phenomenon might be real and we might already be entangled with our computers. It is not clear to me who can be said to be the boss!
  2. For the electric body of the Earth, the electric Compton length of the proton corresponds to about L = 20 km. This corresponds to a Compton time T = L/c of about .1 ms and to a frequency of about 10 kHz, the time scale of the nerve pulse. The Compton time is only the minimal quantum coherence time, and one can wonder whether this relates to the 1 ms scale of the nerve pulse and corresponds to the kHz resonance frequency assignable to the brain.
Biological computers are clearly very slow as compared to ordinary computers, and entanglement with ordinary computers, allowing to affect the RNG of the computer, looks a plausible option together with the trial and error process made possible by ZEO.
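The two frequency scales above can be sanity-checked numerically. A minimal sketch: I identify the Compton frequency as c/Λgr with Λgr = GM/c2, so numerical factors of order one remain open.

```python
G, c = 6.674e-11, 2.998e8
M_E = 5.972e24                  # kg, Earth mass

# Earth's gravitational Compton frequency: f = c/Lambda_gr, Lambda_gr = G*M/c^2
lam_gr_E = G * M_E / c**2       # ~4.4 mm
f_gr_E = c / lam_gr_E           # ~6.8e10 Hz, close to the 67 GHz quoted
print(f"f_gr(Earth) ~ {f_gr_E/1e9:.0f} GHz")

# Electric Compton scale L = 20 km: Compton time and corresponding frequency
L = 20e3                        # m
T = L / c                       # ~0.07 ms, the ~.1 ms scale
f = 1 / T                       # ~15 kHz, the ~10 kHz nerve pulse scale
print(f"T ~ {T*1e3:.2f} ms, f ~ {f/1e3:.0f} kHz")
```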

See for instance https://tgdtheory.fi/public_html/articles/hem.pdf, https://tgdtheory.fi/public_html/articles/tgdcomp.pdf, and https://tgdtheory.fi/public_html/articles/GPT.pdf.

https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/

Tuesday, March 26, 2024

A simple TGD based model for a spiral galaxy

The origin of the spiral structure of spiral galaxies is one of the poorly understood problems of astrophysics. Independent motion of stars around the galaxy in a 1/r2 central force leads very rapidly to a loss of the original structure, since the angular velocities behave like ω ∝ r-3/2. The 1/ρ central force caused by a cosmic string orthogonal to the galactic plane gives ω ∝ 1/ρ. This suggests that there exists some pre-existing spiral structure which is much denser than the surrounding matter. The formation of stars would occur intensely in these regions, and the decay of the dark energy of the cosmic string to ordinary matter would also generate stars rotating around the galaxy as effectively free objects. The spiral structure rotates slowly and in a good approximation keeps its shape, so that the structure behaves somewhat like a rigid body.

This view differs from the density wave theory (see this), which assumes that this structure is dynamically generated and due to self-gravitation. The density wave would be analogous to a traffic jam. The cars entering the traffic jam slow down and the jam is preserved. It can move, but with a much slower velocity than the cars. Density wave theory allows us to understand why star formation occurs intensely in the spiral structure with a high density.

TGD suggests that the structure corresponds to a cosmic string, which has thickened to a monopole flux tube and produced ordinary matter.

  1. One possibility is that the galaxy has formed in a topologically unavoidable collision of cosmic strings (extremely thin 4-surfaces with 2-D M4 projection). The cosmic string orthogonal to the galactic plane would contain the dark energy liberated in its thickening, giving rise to part of the galactic dark matter, and the galactic blackhole would be associated with it. It would create a 1/ρ gravitational acceleration explaining the flat velocity spectrum of distant stars. The cosmic string in the galactic plane would in the same way give rise to the galactic matter at the spiral arms and outside the central region. The galactic bar could correspond to a portion of this string.

  2. A simple model for the string world sheet assignable to the string in the galactic plane is as a minimal surface. In the first approximation, one can neglect the gravitational interaction with the second string and see whether it is possible to obtain a static string with a spiral structure with several branches and a finite size. The string carries monopole flux and should be closed: one can consider a flattened square shaped flux tube, which has changed its shape in the 1/ρ gravitational field of the long string (ω ∝ 1/ρ) and formed a folded structure. The differential rotation tends to lengthen the string and increase its energy. Hence one expects that the string tension slows down the differential rotation to almost rigid body rotation.
The simplest model is as a minimal stationary string world sheet.
  1. By introducing cylindrical Minkowski coordinates (m0 = t, m1 = ρ cos(φ), m2 = ρ sin(φ), m3) and using (t,φ) as coordinates also for the string world sheet, one can write the ansatz in the form ρ = ρ(t,φ). The metric of M4 in the coordinates (t, ρ, φ, m3) is mkl = diag(1, -1, -ρ2, -1). The induced metric of X2 in these coordinates has only diagonal components and can be written as

    (gtt = 1 - ρt2 , gφφ = -ρ2 - ρφ2) ,

    where ρt and ρφ denote the partial derivatives of ρ.

  2. For a static ansatz one has ρ = ρ(φ), so that the field equation reduces to an ordinary differential equation for ρ. Rotational invariance allows us to solve the equation as a conservation law for the angular momentum component parallel to the normal of the galactic plane. For a general infinitesimal isometry with Lie algebra generator jAk, the conservation of the corresponding charge reads as

    ∂α(gαβ mkl ∂βmk jAl (-g2)1/2) = 0 .

    The conservation laws of momentum and energy hold true, and the conservation of the angular momentum L3 in the direction orthogonal to the galactic plane gives

    gφφ ρ2 (-g2)1/2 = ρ2/(ρ2 + ρφ2)1/2 = ρ0 ,

    where ρ0 is an integration constant. This gives

    xφ = ± x(x2-1)1/2 , x = ρ/ρ0 .

    From this it is clear that the solution is well-defined only for ρ ≥ ρ0, which suggests that the branches of the spiral must turn back at ρ = ρ0 (x = 1). At the limit x → 1, xφ approaches zero. One might guess that one has a spiral rotating around x = 1, since dφ/dx diverges there, but this does not seem to be the case.

  3. The differential equation can be solved explicitly: one has

    ∫ dx/(x(x2-1)1/2) = ± φ + φ0 .

    The elementary integral, using the substitution x = cosh(u), gives

    φ± = φ0 ± arctan(y) , y = (x2-1)1/2 .

    The argument of arctan is real only for x ≥ 1. Could one define the solution for x < 1, where the argument is imaginary? For a real argument y one has arctan(iy) = (i/2)ln((1+y)/(1-y)), so that φ would not be real.
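The solution can be checked numerically: below I verify both that the conserved quantity ρ2/(ρ2 + ρφ2)1/2 is constant along xφ = x(x2-1)1/2 and that direct quadrature of dφ/dx reproduces the arctan form. A sketch using only the standard library.

```python
import math

def x_phi(x):
    # dx/dphi from the conservation law, x = rho/rho0
    return x * math.sqrt(x * x - 1.0)

def conserved(x):
    # rho^2 / sqrt(rho^2 + rho_phi^2) in units of rho0: should equal 1
    return x * x / math.sqrt(x * x + x_phi(x) ** 2)

# 1) the angular momentum constant equals rho0 (= 1 in these units) for all x >= 1
for x in (1.0, 1.5, 2.0, 10.0):
    assert abs(conserved(x) - 1.0) < 1e-12

# 2) trapezoid quadrature of dphi/dx = 1/(x*sqrt(x^2-1)) vs the arctan closed form
def phi_closed(x):
    return math.atan(math.sqrt(x * x - 1.0))

x0, X, n = 1.001, 5.0, 200000
h = (X - x0) / n
s = 0.5 * (1 / x_phi(x0) + 1 / x_phi(X))
for k in range(1, n):
    s += 1 / x_phi(x0 + k * h)
integral = s * h
assert abs(integral - (phi_closed(X) - phi_closed(x0))) < 1e-4
print("checks pass")
```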

Consider now the general properties of the solution.
  1. The solution has formally infinitely many branches φ+/-,n differing by an integer multiple of π. However, for a fixed value of +/-, the branches differing by a Δ φ= n2π coincide so that one obtains only 2 branches meeting at the x=1 circle at angles φ0 and φ0+π.

    x→ 1 corresponds to φ+/-,n → φ0 +/- nπ and x→ ∞ corresponds to φ+/-→ φ0+/- π/2+/- nπ. The variation of φ for a given branch is π/2.

  2. What could be the physical interpretation? The two branches for a fixed sign factor ± meet the x = 1 circle at angles φ0 and φ0+π. Could the galactic bar connect these points? Could the diverging value of dφ/dx at x = 1 mean that φ increases by π at this point?

    It is now known that also in the case of the Milky Way there are only two branches. If this is the case then the two branches plus galactic bar could correspond to a single long cosmic string in the galactic plane which has collided with a transversal cosmic string. On the other hand, there is evidence that there are several structural components involved with the Milky Way.

    There is however no spiral structure involved, which suggests that this simple model cannot describe spiral waves.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, March 23, 2024

Ionosphere as an analog of neuronal membrane: two new miraculous numerical coincidences

Electric quantum coherence can be considered also in astrophysical scales. The ionosphere, identified as the ionized part of the atmosphere, is of special interest since it corresponds to an electric field in the Earth scale: see the Feynman lectures. The ionization is caused by solar radiation. Also other planets are believed to possess an ionosphere.

Assuming that the surface of the Earth and the ionosphere define a system analogous to capacitor plates or to a cell membrane, the ionosphere must have a net positive charge assignable to positive ions. In the article a model for lightning and ball lightning was developed (see this), based on the idea that thunderstorms are analogous to nerve pulse patterns, for which the Pollack effect provides a model.

  1. The strength of the electric field E at the negatively charged surface of the Earth is E = x kV/m, x ∈ [.1,.3]. The presence of biological protrusions such as trees can increase the local value of the electric field of the Earth by an order of magnitude. The counterpart of the positively charged plate corresponds to the ionosphere, whose lower boundary is at a height h varying in the range [80,600] km. The net positive charge of the ionosphere neutralizes the negative charge of the Earth, so that the electric field does not extend to greater heights.
  2. The first guess for the electric Compton length is obtained by generalizing the notion of the gravitational coupling to the electric case as ℏem = Qe/β0, where Q is the total charge of the Earth. The value of β0 could be taken to be the same as in the gravitational case: β0 = 1 for the Earth and other planets and β0 ≈ 2-11 for the Sun.
  3. The basic question is whether the entire ionosphere acts as a quantum coherent system or whether electric flux tubes possess electric quantum coherence. The intuitive idea is that the quantum coherence scale of the ionosphere, regarded as a capacitor-like system, should not be longer than the thickness of the ionosphere, which varies in the range 60-100 km. The radius d of the electric flux tube is a good first guess for the electric Compton length. Lightnings are analogs of nerve pulses and are characterized by a scale of 10-20 km, which is a good guess for the quantum coherence length.

    This suggests that the electric Compton length for a particle with mass m should be defined as

    Λem(d) = (ℏem/ℏ) × λ = (Q(d)e/β0ℏ) × λ ,

    Q(d) = ε0 E π d2 ,

    where Q(d) is the charge corresponding to the electric flux through the cross section of the flux tube and λ is the Compton length of the charged particle, say electron, electron Cooper pair or proton. The proposal is that Λem satisfies the consistency condition

    Λem(d) = d .
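Since Q(d) grows like d2, the consistency condition is linear in d and can be solved in closed form. A sketch assuming the normalization ℏem/ℏ = (Q(d)/e) × 4πα/β0; the 4π factor and the choices of β0 and λ are my assumptions, so only the structure of the condition, not the absolute scale, should be read off.

```python
import math

eps0 = 8.854e-12     # F/m
e = 1.602e-19        # C
alpha = 1 / 137.036  # fine structure constant

def Lambda_em(E, d, lam, beta0=1.0):
    """Electric Compton length of a flux tube of radius d in field E.
    Assumes hbar_em/hbar = (Q(d)/e)*4*pi*alpha/beta0, Q(d) = eps0*E*pi*d^2."""
    Q = eps0 * E * math.pi * d * d
    return (Q / e) * (4 * math.pi * alpha / beta0) * lam

def d_consistent(E, lam, beta0=1.0):
    """Closed-form solution of the consistency condition Lambda_em(d) = d."""
    return e * beta0 / (eps0 * E * math.pi * 4 * math.pi * alpha * lam)

E = 200.0            # V/m, fair-weather field (x = .2 in the range above)
lam_p = 1.32e-15     # proton Compton length, m
d = d_consistent(E, lam_p)   # ~2.4e5 m for these illustrative inputs
# the returned radius is a genuine fixed point of the condition
print(f"d = {d/1e3:.0f} km, residual = {abs(Lambda_em(E, d, lam_p) - d)/d:.1e}")
```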

To get some perspective and to test the idea it is useful to consider capacitors. In this case Λem(d) = d should be smaller than the distance between the capacitor plates.

  1. Aluminium capacitors can have a maximum charge of about Q = 103 C, whereas the maximal charge of a van de Graaff generator is about .14 C. If one assumes d = Λem(d), dC is obtained by scaling as dC/dE = EE/EC. If the capacitor corresponds to a sphere of diameter D = 1 m with charge Q = 103 C, the electric field at its surface is EC = Q/(4πε0D2), and one obtains dC = (EE/EC)dE ≈ 10-8 m for EE = 102 V/m.
  2. For a capacitor with a capacitance of 1 μF at a voltage of 1 V, the charge would be 1 μC. For β0 = 1 one would have the upper bound Λem,p ≈ 2.9 × 10-3 m, so that one would have Λem,p ≈ 1.5 × 10-5 m. This gives an upper bound for the value of Λem,p, since the parameter d must correspond to a solid angle smaller than 4π. Could electronic systems be intelligent and conscious at least on this scale?
The study of the conditions for neuronal axons and DNA strand reveals two numerical miracles.
  1. The neuronal axon is also a capacitor-like system, and it is interesting to check what the criterion Λem(d) = d gives in this case. The natural guess for d as the quantum coherence length is the length of the axon idealized as a cylindrical capacitor. Using Q = ε0E × 2πRd in the condition Λem(d) = d, one finds that the condition does not depend on d at all, so that it allows all lengths for axons, which is a very nice result from the point of view of neuroscience.

    The condition however fixes the Compton length of the particle considered. Are there any chances of satisfying this condition for protons or electrons? The condition reads as

    ε0E × 2πR × (1/e) × 4πα = 1/λ .

    Here R is the radius of the axon taken to be R=1 μm. Using E= V/D, where D≈ 10 nm is the thickness of the neuronal membrane and assuming V=.05 V, one obtains E= 5× 106 V/m.

    For β0 = 1, the estimate for Λe is in a good approximation Λe = 10-12 m, to be compared with the actual value Λe = 2.4 × 10-12 m. The condition d = Λem(d) is thus satisfied apart from a numerical factor of order 1, so that the proposal seems to make sense.

    If one assumes that Cooper pairs of electrons are the charged particles, one obtains Λ2e = 1.2 × 10-12 m. If one scales down D by a factor 1/2 to 5 nm, one obtains Λe = 1.2 × 10-12 m, which could be true in the absence of superconductivity. The thickness of the cell membrane indeed varies between these limits and is larger for neuronal membranes. One can wonder whether the dynamics is such that the quantity ER stays constant, so that the condition remains true.

  2. One can perform the same estimate for the DNA strand, which has 3 nucleotides per nanometer, each carrying a unit charge. The condition Λem(d) = d now reads as (dn/dl) × 4πα × (d/β0) × Λ = d and gives

    Λ = β0/((dn/dl) × 4πα) .

    The condition is satisfied for the electron if one assumes β0 ≈ 2^-11: one obtains Λ = 1.5×10^-12 m, to be compared with the actual value Λe = 2.42×10^-12 m. The Compton length for a Cooper pair would be Λ2e = 1.21×10^-12 m.

These number theoretical miracles mean totally unexpected connections between biochemistry and particle physics, and probably myriads of similar connections remain to be discovered.
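The Compton wavelengths quoted in these estimates, and the DNA condition, are easy to spot-check numerically. A minimal sketch (the form Λ = β0/((dn/dl)·4πα) is my reading of the condition, so the DNA number is an order-of-magnitude check only):

```python
import math

# Compton wavelength Lambda = h/(m*c) for the electron and a Cooper pair
h = 6.626e-34            # Planck constant, J s
c = 2.998e8              # speed of light, m/s
m_e = 9.109e-31          # electron mass, kg

lam_e = h / (m_e * c)         # electron: ~2.43e-12 m, the 2.4e-12 m above
lam_pair = h / (2 * m_e * c)  # Cooper pair (two electron masses): ~1.21e-12 m

# DNA estimate with dn/dl = 3 unit charges per nm and beta0 = 2^-11
alpha = 1 / 137.036           # fine structure constant
dn_dl = 3e9                   # charges per meter
beta0 = 2**-11
lam_dna = beta0 / (dn_dl * 4 * math.pi * alpha)

print(lam_e, lam_pair, lam_dna)  # all of the order of 1e-12 m
```

The DNA value comes out as ≈1.8×10^-12 m, in the same ballpark as the 1.5×10^-12 m quoted above; the difference presumably reflects rounding in the original estimate.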

See the article About long range electromagnetic quantum coherence in TGD Universe or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

TGD based quantum explanation for the weird properties of Sagittarius A*

Sabine Hossenfelder tells about the weird properties of the giant blackhole at the center of the Milky Way known as Sagittarius A* (briefly SA): see this. SA is located at a distance of 26,700 ly and has a mass of about 4.1×10^6 solar masses. Its Schwarzschild radius rs = 2GM/c^2 is about 1.2×10^7 km. Note that the astronomical unit (the distance of the Earth from the Sun) is 1.496×10^8 km, so that the SA radius is about 0.08 AU. The Schwarzschild time Ts = rs/c is 41 s, about 2/3 of a minute.
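The Schwarzschild time of 41 s quoted above corresponds to the accepted mass M ≈ 4.1×10^6 solar masses; a quick check, which also computes the harmonic oscillator period τ = 2πrs/c discussed later in the post:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M_sun = 1.989e30         # solar mass, kg
M = 4.1e6 * M_sun        # mass of Sgr A*

rs = 2 * G * M / c**2    # Schwarzschild radius, m
Ts = rs / c              # Schwarzschild time, s
tau = 2 * math.pi * Ts   # harmonic oscillator period 2*pi*rs/c

print(rs / 1e3)          # ~1.2e7 km
print(Ts)                # ~40 s, the "41 s" of the text
print(tau / 60)          # ~4.2 minutes, vs the observed 10 minute period
```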

Hossenfelder lists several weird properties of SA.

  1. SA is silent, one might say dead, suggesting that no matter is falling into it. There is however an accretion disk around it.
  2. SA however shows signs of life by periodically emitting X-ray flares bursting huge amounts of energy as radiation. A blackhole should not do this unless it absorbs matter, but it is not at all clear whether anything is falling into SA!
  3. SA is rotating extremely rapidly: the period τ of rotation is 10 minutes.
  4. SA possesses a dozen planet-like objects, so-called G-objects, rotating around SA with a velocity which is 60 percent of the maximal rotation velocity allowed by the condition that the rotation velocity does not exceed the velocity of light. How these objects can exist in the extremely hostile environment of the blackhole, where matter from outside should be flowing into the blackhole, is a mystery.
  5. There is a blob of matter rotating around SA with a velocity which is 30 percent of the velocity of light. The object periodically emits radiation bursts, which might relate to the mystery of gamma ray bursts.
Could one understand these properties of SA by regarding it as a blackhole-like object in the TGD sense, consisting of a maximally dense flux tube spaghetti, which is a quantum system with gravitational Planck constant ℏgr = GMm/β0? Could one model SA as a quantum harmonic oscillator in the interior, using the gravitational Coulomb potential in the exterior?

The reason why matter is not falling into SA could be the same as in the case of the hydrogen atom. Quantization would imply that the atom is a quantum system and does not dissipate, so that the infrared catastrophe is avoided. Matter around SA would be at Bohr orbits of a central potential. The first guess would be the Coulomb potential, but also a harmonic oscillator potential or something between these two could be considered.

  1. The quantization of angular momentum gives, for a central potential and circular orbits, r^2ω = nGM/β0. The condition v^2/r = ω^2r = -dV/dr with V = -GM(r)/r holds true for any central force. Recall that for the harmonic oscillator this gives ω = 1/rs (c=1) and rn = n^1/2 r1, r1 = rs/(2β0)^1/2. The constancy of ω means that the system behaves like a rigid body. Note that one has n>0; there is also an S-wave state, which corresponds to n=0 and can be described only by the Schrödinger equation or its analog.
  2. For the Coulomb case one obtains ω = 2/(n^3 rs) and rn = n^2 agr, agr = rs/(2β0^2). In the interior, r1 ≤ rs requires β0 ≥ 1/2. In the exterior, agr ≥ rs requires β0 ≤ 2^1/2 and r1 ≥ rs. This condition is not however absolutely necessary, since n>1 follows from the condition that the orbital velocity is smaller than c, as will be found. The conditions therefore fix β0 to the set {1/2^1/2, 1/2, 1}. The quantization β0 = 1/n would select β0 ∈ {1/2, 1}, giving r1 = (1, 1/2^1/2)rs for the harmonic oscillator potential and rn ∈ {2, 1/2}n^2 rs outside the blackhole.
  3. Orbital velocities are given by vn = 2/(nβ0^2), and vn < c requires n > 2/β0^2, which means n > (2, 4, 8) for β0 ∈ {1, 1/2^1/2, 1/2}. The lowest allowed orbitals have radii (r3 = 9rs/2, r5 = 25rs, r9 = 162rs).
  4. For the inner radius of the accretion disk one can find the estimate rinner = 30rs (see this). Inside the accretion disk, the harmonic oscillator model could be more appropriate than the Coulomb model. The inner edge of the accretion disk would correspond to (r8 = 32rs, r6 = 36rs, r8 = 128rs) for β0 ∈ {1, 1/2^1/2, 1/2}. For β0 = 1/2 the prediction for the radius of the inner edge would be too large, and also the prediction for β0 = 1/2^1/2 is somewhat too high.
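The orbital radii and velocities in points 3 and 4 follow from rn = n^2 rs/(2β0^2) and vn = 2/(nβ0^2); a small numerical sketch:

```python
import math

# Lowest allowed Coulomb orbits: v_n < c requires n > 2/beta0^2
results = []
for beta0 in (1.0, 1 / math.sqrt(2), 0.5):
    bound = 2 / beta0**2
    n_min = math.floor(bound + 1e-9) + 1      # smallest allowed n
    r_min = n_min**2 / (2 * beta0**2)         # radius in units of rs
    v_blob = 2 / (6 * beta0**2)               # velocity of the n=6 orbit (c=1)
    results.append((n_min, r_min, v_blob))

print(results)
# n_min = 3, 5, 9 with radii 4.5, 25, 162 (in units of rs);
# for beta0=1 the n=6 orbit has v = 1/3, close to the 30 percent of c
# reported for the rotating blob.
```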
Could one understand the findings about SA in this picture?
  1. The silence of SA would be completely analogous to the quantum silence of atoms. Furthermore, the v<c condition would pose strong classical conditions on the allowed orbitals.
  2. The periodically occurring X-ray flares could be analogs of atomic transitions leading to the emission of photons. They could be due to internal excitations of the matter from a lower to a higher energy state. For β0=1 one has a maximal number of harmonic oscillator states, corresponding to the principal quantum numbers n = 0, 1, 2, and the n=2 state would correspond to the horizon. Also transitions to states which could be modelled as states in the Coulomb potential are possible. The n=3 Coulomb orbital would be the first allowed state for β0=1. The prediction is that the total X-ray energy is quantized.
  3. Could one understand the rotation of SA in terms of the harmonic oscillator model predicting ω = 1/rs, which gives τ = 2πrs (c=1)? The estimated mass of the blackhole gives τ = 4.2 minutes. Is the mass estimate for the blackhole too small by a factor of 0.42 or does the harmonic oscillator model fail?
  4. G-objects could be understood as gravitational analogs of the atomic electrons, orbiting SA at radii with small values of n. The orbital radii are predicted to be proportional to n^2. The allowed orbitals would correspond to {3 ≤ n ≤ 8, n=5} for β0 ∈ {1, 1/2^1/2}.
  5. The mysterious blob of matter rotating around SA with velocity v = 3c/10 could correspond to a Coulombic Bohr orbit with a small value of n: the n=6 orbit gives this value of the velocity for β0=1. For the other options the orbit would belong to the accretion disk.
To sum up, the β0=1 option is selected uniquely by the weird properties of SA.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, March 19, 2024

Updated view about the rice experiments of Masaru Emoto

Masaru Emoto has carried out extremely interesting experiments with water at the critical point against freezing. Emoto reports that words expressing emotions are transmitted to water: the expression of positive emotions tends to generate beautiful crystal structures and that of negative emotions ugly ones. Also music and even pictures are claimed to have similar effects. Emoto has also carried out similar experiments with rice in water at physiological temperature. Rice subjected to words expressing positive emotions began to ferment, whereas rice subjected to words expressing negative emotions began to rot.

I have already earlier discussed a model for the findings of Emoto. In this article I update the model. I will also ask new questions. How are emotions communicated at the fundamental level, and how can a conscious entity perceive the emotional state of another conscious entity and possibly affect it? What does emotional intelligence mean? How could one assign a measure of conscious emotional information to an emotional state? How can certain sounds or gestures with emotional content, or even pictures, induce an emotional response at the fundamental DNA level?

See the article Updated view about the rice experiments of Masaru Emoto or the chapter Emotions as sensory percepts about the state of magnetic body?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, March 17, 2024

Homomorphic encryption as an elegant manner to save privacy

Sabine Hossenfelder talked about homomorphic encryption, which is an elegant and extremely general algebraic manner to guarantee data privacy (see this). The idea is that the encryption respects the algebraic operations: sums go to sums and products go to products. The processing can be done for the encrypted data without decryption. The outcome is then communicated to the user and decrypted only at this stage. This saves a huge amount of time.

What comes first to mind is Boolean algebra (see this). In this case the homomorphism is truth preserving. A statement formed as a Boolean algebra element is mapped to a statement of the same form, with the images of the component statements replacing the originals. In the set theoretic realization of Boolean algebra this means that unions are mapped to unions and intersections to intersections. In Boolean algebra, the elements are representable as bit sequences and sums and products are computed element-wise: one has x·x = x and x+x = 0. Ordinary computations can be done by representing integers as bit sequences.
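As a toy illustration (not an actual cryptosystem), any secret permutation of bit positions commutes with bitwise XOR (the GF(2) sum) and AND (the GF(2) product), so "encrypted" data can be processed without decryption:

```python
import random

random.seed(1)
perm = random.sample(range(8), 8)    # secret key: a permutation of bit positions

def enc(x):
    """'Encrypt' an 8-bit value by permuting its bits."""
    return sum(((x >> i) & 1) << perm[i] for i in range(8))

a, b = 0b10110010, 0b01101011
assert enc(a) ^ enc(b) == enc(a ^ b)   # sums go to sums
assert enc(a) & enc(b) == enc(a & b)   # products go to products
print("bitwise homomorphism holds")
```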

In any computation one must perform a cutoff, and the use of finite fields is a neat way to do it. The Frobenius homomorphism x → x^p in a field of characteristic p maps products to products and, what is non-trivial, also sums to sums, since one has (x+y)^p = x^p + y^p. For the prime fields Fp the Frobenius homomorphism is trivial, but for Fp^e, e>1, this is not the case. The inverse is in this case x → x^(p^(e-1)). These finite fields are induced by algebraic extensions of rational numbers: e corresponds to the dimension of the extension induced by the roots of a polynomial.
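A minimal sketch of this in the smallest non-prime case, GF(4) = F2[x]/(x^2+x+1), with elements encoded as 2-bit integers (2 stands for x, 3 for x+1):

```python
def gmul(a, b):
    """Multiply two elements of GF(4) = F2[x]/(x^2 + x + 1)."""
    r = 0
    while b:                        # carry-less (XOR) multiplication
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
    while r.bit_length() > 2:       # reduce modulo x^2 + x + 1 (0b111)
        r ^= 0b111 << (r.bit_length() - 3)
    return r

frob = lambda a: gmul(a, a)         # Frobenius x -> x^2

for a in range(4):
    for b in range(4):
        assert frob(a ^ b) == frob(a) ^ frob(b)              # additive
        assert frob(gmul(a, b)) == gmul(frob(a), frob(b))    # multiplicative
assert frob(2) == 3                 # nontrivial: x -> x + 1
assert frob(frob(2)) == 2           # its own inverse, since e = 2
print("Frobenius is a nontrivial automorphism of GF(4)")
```

Computations could thus be done on Frobenius images and mapped back at the end, which is the homomorphic-encryption flavour of the construction.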

The Frobenius homomorphism extends also to the algebraic extensions of p-adic number fields induced by the extensions of rationals. This would make it possible to perform calculations in extensions and only at the end perform the approximation replacing the algebraic numbers defining the basis of the extension with rationals. To guess the encryption one must guess the prime that is used, and the use of large primes and extensions of p-adic numbers induced by large extensions of rationals could preserve the secrecy.

p-Adic number fields are highly suggestive as a computational tool, as became clear in the p-adic thermodynamics used to calculate elementary particle masses: for p = M127 = 2^127 - 1, assignable to the electron, the two lowest orders give a practically exact result since the higher order corrections are of order 10^-76. For p-adic number fields with a very large prime p, the approximation of p-adic integers by a finite field becomes possible and the Frobenius homomorphism could be used. This supports the idea that p-adic physics is ideal for the description of cognition.
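The stated order of magnitude is easy to verify:

```python
# p-adic mass corrections for p = M127 = 2^127 - 1: terms of order 1/p^2
p = 2**127 - 1
print(p)           # ~1.7e38
print(1 / p**2)    # ~3.5e-77, i.e. of order 10^-76 as stated
```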

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, March 16, 2024

Direct evidence for the TGD view of quasars

In a new paper in The Astrophysical Journal (see this), JILA Fellow Jason Dexter, graduate student Kirk Long, and other collaborators compared two main theoretical models for emission data for a specific quasar, 3C 273. The title of the popular article is "Unlocking the Quasar Code: Revolutionary Insights From 3C 273".

If the quasar were a blackhole, one would expect two emission peaks. If the galactic disk is at constant temperature, one would expect a redshifted emission peak from it. The second peak would come from the matter falling into the blackhole and would be blueshifted relative to the first peak. Only a single peak was observed. Somehow the infall of matter into the quasar is prevented. Could the quasar look like a blackhole-like object in its exterior but emit radiation and matter, preventing matter from falling into it?

This supports the TGD view of quasars as blackhole-like objects associated with cosmic strings thickened locally to flux tube tangles (see this, this, this and this). The transformation of pieces of cosmic strings to monopole flux tube tangles would liberate the energy characterized by the string tension as ordinary matter and radiation. This process would be the TGD analog of the decay of the inflaton field to matter. The gravitational attraction would lead to the formation of the accretion disk, but the matter would not fall down to the quasar.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, March 15, 2024

Magnetite produced by traffic as a possible cause of Alzheimer disease

A rather unexpected partial explanation for Alzheimer's disease has been found: magnetite particles, which occur in urban environments in breathing air contaminated by exhaust gases (see this). I have written earlier about Alzheimer's disease from the TGD point of view (see this). Magnetite particles seem to be found in the hippocampus, central to memory, of those with the disease. Now it has been found that the exposure of mice to magnetite leads to the development of Alzheimer's disease. The overall important message to decision makers is that the pollution caused by traffic in urban environments could be an important cause of Alzheimer's disease.

The brain needs metabolic energy. Hemoglobin is central to the supply of metabolic energy because it binds oxygen. Could it be thought that Alzheimer's is at least partially related to a lack of metabolic energy in the hippocampus? In the sequel I will consider this explanation in the TGD framework.

Short digression to TGD view of metabolism

Oxygen molecules O2 bind to the iron atoms of hemoglobin (see this), which already have valence bonds with 5 nitrogen atoms; a bond is created in which Fe has received 5 electrons from the nitrogens and a sixth from the oxygen molecule O2. So Fe behaves in the opposite way to what one would expect, and hemoglobin is chemically very unusual!

Phosphate O=PO3, or more precisely the phosphate ion O=P(O-)3, which also plays a central role in metabolism, also breaks the rules: instead of accepting 3 valence electrons, it gives up 5 electrons to the oxygen atoms.

Could the TGD view of quantum biology help to understand what is involved? Dark protons created by the Pollack effect provide a basic control tool of quantum biochemistry in TGD. Could they be involved now? Consider first the so-called high energy phosphate bond, which is one of the mysteries of biochemistry.

  1. Why do the electrons in the valence bonds prefer to be close to P in the phosphate ion? For phosphate one would expect just the opposite. The negative charge of the 3 oxygens could explain why electrons tend to be nearer to P.
  2. The TGD based view of metabolism allows one to consider a new physics explanation in which O=P(O-)3 is actually a "dark" variant of the neutral O=P(OH)3, in which the 3 protons of OH have become dark (in the TGD sense) by the Pollack effect, which has kicked the 3 protons to monopole flux tubes of the gravitational magnetic body of the phosphate, to such a large distance that the resulting dark OH looks like OH-, that is negatively charged. A charge separation between the biological body and the magnetic body would have occurred. This requires metabolic energy, basically provided by solar radiation. One could see the dark phosphate as a temporary metabolic energy storage; the energy would be liberated when ATP transforms to ADP.
Could this kind of model apply also to the Fe binding with 5 N atoms in hemoglobin by valence bonds such that, contrary to naive expectations, electrons tend to be closer to Fe than to the N atoms? Can one imagine a mechanism giving an effective negative charge to the N atoms or the heme protein and to O-O?
  1. In this case there are no protons as in the case of phosphate ions. The water environment however contains protons and pH as a negative logarithm of the proton concentration measures their concentration. pH=7 corresponds to pure water in which H+ and OH- concentrations are the same. The hint comes from the fact that small pH, which corresponds to a high proton concentration, is known to be favourable for the binding of oxygen to the heme group.
  2. Could dark protons be involved and what is the relationship between dark proton fraction and pH? Could pH measure the concentration of dark protons as I have asked?
  3. Could the transformation of ordinary protons to dark protons at the gravitational MB of the heme protein induce a negative charge due to OH- ions associated with the heme protein and could this favour the transfer of electrons towards Fe? Could the second O of O-O form a hydrogen bond with H such that the proton of the hydrogen bond becomes dark and makes O effectively negatively charged?

What the effect of magnetite could be?

Magnetite particles, 0.5 micrometers in size, consist of Fe3O4 molecules containing iron and oxygen. According to Wikipedia, magnetite appears as crystals and obeys the chemical formula Fe2+(Fe3+)2(O2-)4. The electronic configuration of Fe is [Ar]3d^6 4s^2, and the Fe3+ ions have donated, besides the s electrons, also one electron to oxygen.

Could it happen that the oxygen absorption capacity of hemoglobin somehow decreases, that the amount of hemoglobin decreases, or that oxygen binds to the Fe3O4 molecules on the surface of a magnetite particle? For example, could some of the O2 molecules bind to Fe3O4 molecules at the surface of the magnetite instead of to hemoglobin? Carbon monoxide is dangerous because it binds to the heme. Could it be that the magnetite crystals do the same, or rather, could heme bind to them (thanks to Shamoon Ahmed for proposing this more reasonable looking option)?

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, March 13, 2024

About the problem of two Hubble constants

The usual formulation of the problem of two Hubble constants is that the value of the Hubble constant seems to be increasing with time. There is no convincing explanation for this. But is this the correct way to formulate the problem? In the TGD framework one can start from the following ideas discussed already earlier (see this).
  1. Would it be better to say that measurements in short scales give slightly larger results for H0 than those in long scales? Scale does not appear as a fundamental notion in general relativity or in the standard model. The notion of a fractal relies on the notion of scale but has not found its way to fundamental physics. Suppose that the notion of scale is accepted: could one say that the Hubble constant does not change with time but is length scale dependent? The number theoretic vision of TGD brings in two length scale hierarchies: p-adic length scales Lp and dark length scales Lp(dark) = nLp, where one has heff = nh0 for the hierarchy of effective Planck constants, with n defining the dimension of an extension of rationals. These hierarchies are closely related since p corresponds to a ramified prime (most naturally the largest one) for a polynomial defining an extension of dimension n.
  2. I have already earlier considered the possibility that measurements in our local neighborhood (short scales) give rise to a slightly larger Hubble constant. Is our galactic environment somehow special?
Consider first the length scale hierarchies.
  1. The geometric view of TGD replaces Einsteinian space-times with 4-surfaces in H = M4 × CP2. Space-time decomposes into space-time sheets and closed monopole flux tubes connecting distant regions, and radiation arrives along these. The radiation would arrive from distant regions along long closed monopole flux tubes of length scale LH and thickness d, where d is the geometric mean d = (lP LH)^1/2 of the Planck length lP and the length LH. d is about 10^-4 meters, the size scale of a large neuron. It is somewhat surprising that biology and cosmology seem to meet each other.
  2. The number theoretic view of TGD is dual to the geometric view and predicts a hierarchy of primary p-adic length scales Lp ∝ p^1/2 and secondary p-adic length scales L2,p = p^1/2 Lp. The p-adic length scale hypothesis states that p-adic length scales Lp correspond to primes near powers of 2: p ≈ 2^k. p-Adic primes p correspond to so-called ramified primes for a polynomial defining some extension of rationals via its roots.

    One can also identify dark p-adic length scales

    Lp(dark) =nLp ,

    where n = heff/h0 corresponds to the dimension of an extension of rationals serving as a measure of evolutionary level. heff labels the phases of ordinary matter behaving like dark matter, which would explain the missing baryonic matter (galactic dark matter corresponds to the dark energy assignable to monopole flux tubes).

  3. p-Adic length scales would characterize the size scales of the space-time sheets. The Hubble constant H0 has dimensions of inverse length, so that the inverse of the Hubble constant, LH ∝ 1/H0, characterizes the size of the horizon as a cosmic scale. One can define an entire hierarchy of analogs of LH assignable to space-time sheets of various sizes, but this does not solve the problem, since one has H0 ∝ 1/Lp, which varies very fast with the p-adic scale coming as a power of 2 if the p-adic length scale hypothesis is assumed. Something else is involved.
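The p-adic length scale hypothesis singles out primes near powers of two. A sketch that finds the prime nearest to 2^k with a Miller-Rabin test; for the Mersenne exponents k = 89, 107, 127 appearing in p-adic mass calculations, the nearest prime is 2^k - 1 itself:

```python
# Miller-Rabin primality test (probabilistic for integers this large)
def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    if n < 2:
        return False
    if n in bases:
        return True
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def nearest_prime(m):
    """Return the prime closest to m (preferring the smaller on ties)."""
    for delta in range(m):
        if is_prime(m - delta):
            return m - delta
        if is_prime(m + delta):
            return m + delta

for k in (89, 107, 127):                    # Mersenne prime exponents
    print(k, 2**k - nearest_prime(2**k))    # prints "k 1": 2^k - 1 is prime
```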
One can also try to understand the possible local variation of H0 by starting from the TGD analog of inflation theory. In inflation theory, temperature fluctuations of the CMB are essential.
  1. The average value of heff is <heff> = h, but there are fluctuations of heff, and quantum biology relies on very large but very rare fluctuations of heff. Fluctuations are local and one has <Lp(dark)> = <heff/h0> Lp. This average value can vary. In particular, this is the case for the secondary p-adic length scale L2,p (L2,p(dark) = nL2,p), which defines the Hubble length LH and H0 for the first (second) option.
  2. The critical mass density is given by 3H0^2/8πG. The critical mass density is slightly larger in the local environment, that is in short scales. As already found, for the first option the fluctuations of the critical mass density are proportional to δn/n and for the second option to -δn/n. For the first (second) option the experimentally determined Hubble constant increases when n increases (decreases). The typical fluctuation would be δheff/h ∼ 10^-5. What is remarkable is that this is correctly predicted if the integer n decomposes into a product n = n1n2 of nearly identical or identical integers.

    For the first option, the fluctuation δheff/heff = δn/n in our local environment would be positive and considerably larger than on the average, of order 10^-2 rather than 10^-5. heff measures the number theoretic evolutionary level of the system, which suggests that the larger value of <heff> could reflect the higher evolutionary level of our local environment. For the second option the variation would correspond to δn/n ≤ 0, implying a lower level of evolution, which does not look flattering from the human perspective. Does this allow us to say that this option is implausible?

    The fluctuation of heff around h would mean that the quantum mechanical energy scales of various systems determined by <heff> = h vary slightly in cosmological scales. Could the reduction of the energy scales due to a smaller value of heff for systems at a very long distance be distinguished from the reduction caused by the redshift? Since the transition energies depend on powers of Planck constant in a state dependent manner, the redshifts for the same cosmic distance would be apparently different. Could this be tested? Could the variation of heff be visible in the transition energies associated with the cold spot?

  3. The large fluctuation in the local neighbourhood also implies a large fluctuation of the temperature of the cosmic microwave background: one should have δT/T ≈ δn/n≈ δ H0/H0. Could one test this proposal?
See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Herbig Haro objects from the TGD point of view

The Youtube posting "The James Webb Space Telescope Has Just Made an Incredible Discovery about Our Sun! Birth of Sun!" (see this) tells about the Herbig Haro object HH211, located at a distance of 1000 light years, of which JWST has provided a picture (I hope that the sensationalistic tone of the title does not irritate too much: it seems that we must learn to tolerate this style).

Herbig Haro objects are luminous objects associated with very young stars, protostars. Typically they involve a pair of opposite jets containing streams of matter flowing with a very high speed of several hundred km/s. The jets interact with the surrounding matter and generate luminous regions. HH211 was the object studied by JWST. The jets were found to contain CO, SiO and H2.

Herbig Haro objects provide information about the very early stages of star formation. As a matter of fact, the protostar stage still remains rather mysterious, since the study of these objects is very challenging already because their distances are so large. The standard wisdom is that stars are born, evolve and explode as supernovae, and that the remnants of supernovae provide the material for future stars, so that the portion of heavy elements in stellar cores should gradually increase. The finding that the abundances of elements seem to depend only weakly on cosmic time is in conflict with this picture and forces us to ask whether the vision of protostars should be modified. Also JWST found that galaxies in the very young Universe can look like the Milky Way and could have the element abundances of recent galaxies, which challenges this belief.

The association of the jets to Herbig Haro objects conforms with the idea that cosmic strings or monopole flux tubes formed from them are involved with the formation of a star. One can consider two options for how the star formation proceeds in the TGD Universe.

  1. The seed for star formation would come from the transformation of the dark energy associated with the cosmic string or monopole flux tube to ordinary matter (it could also correspond to a large heff phase, behave like dark matter, and explain the missing baryonic matter). By the conservation of the magnetic flux, the magnetic energy density per unit length of the monopole flux tube behaves like 1/S and decreases rapidly with its transversal area S. The volume energy density per unit length increases like the area, but its growth is compensated by a phase transition reducing the value of the analog of the cosmological constant Λ, so that on the average this contribution behaves as a function of the p-adic length scale in the same way as the magnetic energy per unit length. The energy liberated in the process is however rather small, except for flux tubes that are still almost cosmic strings, and this process might apply only to the formation of first generation stars.
  2. The second option is that the process is analogous to "cold fusion", interpreted in the TGD framework as dark fusion (see this, this and this), in which ordinary matter, say protons and perhaps even heavier nuclei, is transformed to dark protons at the monopole flux tubes, having a much larger Compton length (proportional to heff) than ordinary protons or nuclei. If the nuclear binding energy scales like 1/heff for dark nuclei, the nuclear potential wall is rather low and dark fusion can take place at rather low temperatures. The dark nuclei would then transform to ordinary nuclei and liberate almost all of their ordinary nuclear binding energy, which would lead to a heating that would eventually ignite ordinary nuclear fusion in the stellar core. Heavier nuclei could be formed already at this stage rather than in supernova explosions. This kind of process could occur also at the planetary level and produce heavier elements outside the stellar cores.

    This process in general requires energy feed to increase the value of heff. In living matter the Pollack effect would transform ordinary protons to dark protons. The energy could come from solar radiation or from the formation of molecules, whose binding energy would be used to increase heff (see this). This process could lead to the formation of the molecules observed in the jets from HH211. Of course, also the gravitational binding energy liberated as matter condenses around the seed could be used to generate dark nuclei. This would also raise the temperature, helping to initiate dark fusion. The presence of dark fusion and the generation of heavy elements already at this stage distinguishes this view from the standard picture.

    The flux tube needed in the process would correspond to a long thickened monopole flux tube parallel to the rotation axis of the emerging star. Stars would be connected to networks by these flux tubes forming quantum coherent structures (see this). This would explain the correlations between very distant stars difficult to understand in the standard astrophysics. The jets of the Herbig Haro object parallel to the rotation axis would reveal the presence of these flux tubes. The translational motion of matter along a helical flux tube would generate angular momentum. They would make possible the transfer of the surplus angular momentum, which would otherwise make the protostar unstable. By angular momentum conservation, the gain of the angular momentum by the protostar could involve generation of opposite angular momentum assignable to the dark monopole flux tubes.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, March 11, 2024

Blackhole-like object as a gravitational harmonic oscillator?

As described, in the TGD Universe blackhole-like objects are monopole flux tube spaghettis and differ from ordinary stars only in that for blackholes the entire volume is filled with monopole flux tubes with heff = h, for which the thickness is minimal and corresponds to the nucleon Compton length. For heff > h the flux tubes could fill the entire volume of the star core as well.

Just for fun, one can ask what the model of a gravitational harmonic oscillator gives in the case of Schwarzschild blackholes. The formula rn = n^1/2 r1, r1/R = [1/(2β0)^1/2] × (rs/R)^1/4, gives for R = rs the condition r1/rs = 1/(2β0)^1/2. β0 < 1/2 gives r1/rs > 1, so that there would be no other states than the possible S-wave state (n=0). β0 = 1/2 gives r1 = rs and one would have just mass at the n=0 S-wave state and the n=1 orbital. For β0 = 1, one has r1/rs = (1/2)^1/2 and r2 = rs would correspond to the horizon. There would be an interior orbit with n=1 and the S-wave state could correspond to n=0.

The model can be criticized for the fact that the harmonic oscillator property follows from the assumption of a constant mass density. This criticism applies also to the model for stars. The constant density assumption could be true in the sense that the mass difference M(n+1) - M(n) between the orbitals rn+1 and rn for n ≥ 1 is proportional to the volume difference Vn+1 - Vn, proportional to rn+1^3 - rn^3 ∝ (n+1)^3 - n^3 = 3n^2 + 3n + 1. This would give M = m0 + m(nmax+1)^3, leaving only the ratio of the parameters m0 and m free. This ratio could be fixed by assigning to the S-wave state a radius and a constant density. This condition would give an estimate for the number of particles, say neutrons, associated with the oscillator Bohr orbits. In a more realistic description in terms of wave functions, this condition would fix the total amount of matter at the orbitals associated with a given value of n.
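The telescoping of the shell masses is immediate: summing 3n^2+3n+1 from n=0 to nmax gives (nmax+1)^3. A quick check of this bookkeeping (the unit shell mass m is an arbitrary normalization):

```python
# Shell mass between r_n and r_{n+1} proportional to (n+1)^3 - n^3 = 3n^2+3n+1
m = 1.0                                     # normalization constant

def shell(n):
    return m * (3 * n**2 + 3 * n + 1)

n_max = 10
total = sum(shell(n) for n in range(n_max + 1))
print(total, m * (n_max + 1)**3)            # both equal: the sum telescopes
```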

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Are blackholes really what we believe them to be?

James Webb produces surprises at a steady rate and is challenging the standard view of cosmology and astrophysics. Just when I had written an article about the most recent findings, which challenged the basic assumptions of these fields, including those of general relativity, and thought that I could rest for a few days, I learned of a new finding on Facebook. The title of the popular Youtube video (see this) was "James Webb Telescope Just Captured FIRST, Ever REAL Image Of Inside A Black Hole!"

Gravitational lensing is the method used to gain information about these objects and it is good to start with a brief summary of what is involved. One can distinguish between different kinds of lensing: strong lensing, weak lensing, and microlensing.

  1. In strong lensing (see this), the lens is between the observer and the source of light so that the effect is maximized. For a high enough lens mass, lensing causes multiple images, arcs or Einstein rings. The lensing object can be a galaxy, a galaxy cluster or a supermassive blackhole. For point-like objects one can have multiple images, and for extended sources rings and arcs are possible.

    The galactic blackhole, Sgr A*, at the center of the Milky Way at a distance of 27,000 light-years, was imaged in 2022 by the Event Horizon Telescope (EHT) Collaboration (see this) using strong gravitational lensing and a radio telescope network on a planetary scale. The blackhole was seen as a dark region at the center of the image. The same collaboration had already imaged the blackhole in the M87 galaxy, at a distance of 54 million light-years, in 2019.

  2. In weak lensing (see this), the lens is not between the observer and the source so that the effect is not maximized. Statistical methods can however be used to deduce information about the source of radiation or to infer the existence of a lensing object. The lensing magnifies the image of the object (convergence effect) and stretches it (shear effect). For instance, weak lensing quite recently led to the detection of linear objects (see this), which in the TGD framework could correspond to cosmic strings, the basic objects in TGD based cosmology and in the models of galaxies, stars and planets.
  3. In microlensing (see this) the gravitational lens is small, for instance a planet moving between the observer and the star serving as the light source. In this case the situation is dynamic. The lensing can create two images of a point-like object, but these need not be distinguishable, so that the lens serves as a magnifying glass. The effect also allows the detection of lens-like objects even if they consist of dark matter.
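For orientation, the angular scale of all these lensing effects is set by the Einstein radius θ_E = [(4GM/c^2) D_LS/(D_L D_S)]^(1/2), a standard lensing formula. A short sketch (the distances are illustrative choices of mine) shows why a stellar-mass lens produces only a milliarcsecond-scale microlensing effect while galaxies and blackholes can produce resolvable arcs and rings:

```python
from math import sqrt

G, c = 6.674e-11, 2.998e8   # SI units
PC = 3.086e16               # meters per parsec
M_SUN = 1.989e30

def einstein_radius(M, D_l, D_s):
    """Einstein radius (radians) of a point lens of mass M at distance D_l,
    for a source at distance D_s (D_ls = D_s - D_l, small-redshift case)."""
    D_ls = D_s - D_l
    return sqrt(4*G*M/c**2 * D_ls/(D_l*D_s))

# Solar-mass lens halfway to a source 8 kpc away: about one milliarcsecond
theta = einstein_radius(M_SUN, 4e3*PC, 8e3*PC)
print(theta * 206265e3, "mas")
```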
The recent findings of JWT are about a supermassive blackhole located 800 million light-years away. Consider first the GRT based interpretation of the findings.
  1. What was observed via the strong lensing effect was interpreted as follows. The matter falling into the blackhole was heated and generated an X-ray corona. This X-ray radiation was reflected back from a region surrounding the blackhole. The reflection could be based on the same effect that makes the ionosphere act as a conductor for long wavelength electromagnetic radiation. This requires that the surface of the object is electrically charged. TGD indeed predicts this for all massive objects, and this electric charge implies quantum coherence in astrophysical scales at the electric flux tubes (see this), which would be essential for the evolution of life on Earth.
  2. After this, the radiation reflected from behind the blackhole should have ended up in the blackhole and stayed there, but it did not! Somehow it got through the blackhole and was detected. It would seem that the blackhole was not completely black. This is not at all the behavior of a civilized blackhole respecting the laws of physics as we understand them. Even well-behaving stars and planets would not allow the radiation to propagate through them. How did the reflected X-ray radiation manage to get through the blackhole? Or is the GRT picture somehow wrong?
Could the TGD view of blackhole-like objects come to the rescue?
  1. In TGD, the building bricks of astrophysical objects, including galaxies, stars and planets, are monopole flux tube tangles generated by the thickening of cosmic strings (4-D string-like objects in H=M4× CP2), which produces ordinary matter as the dark energy of the cosmic strings is liberated (see this). I have called these objects flux tube spaghettis.

    Einsteinian blackholes, identified as singularities with a huge mass located at a single point, are in the TGD framework replaced with topologically extremely complex but mathematically and physically non-singular flux tube spaghettis, which are maximally dense in the sense that the flux tube spaghetti fills the entire volume (see this). The closed flux tubes would have a thickness given by the proton Compton length. From the perspective of classical gravitation, these blackhole-like objects behave locally like Einsteinian blackholes outside the horizon, but in the interior they differ from ordinary stars only in that the flux tube spaghetti is maximally dense.

  2. The assumption, which is natural also in the TGD based view of primordial cosmology replacing inflation theory, is that there is quantum coherence in the length scale of the flux tubes, which behave like elementary particles even when the value of heff is heff=nh0=h or even smaller. What this says is that the size of the space-time surface quite generally defines the quantum coherence length. The TGD inspired model for blackhole-like objects suggests heff=h inside ordinary blackholes. The flux tubes would contain sequences of nucleons (neutrons) and would have a thickness equal to the proton Compton length. For larger values of heff, the thickness would increase with heff, and the proposal is that also stellar cores are volume-filling blackhole-like objects (see this).

    Besides this, the protons at the flux tubes can behave like dark matter (not the galactic dark matter, which in the TGD framework would be dark energy associated with the cosmic strings) in the sense that they can have a very large value of the effective Planck constant heff=nh0, where h0 is the minimal value of heff (see this). This phase would solve the missing baryon problem and play a crucial role in quantum biology. In the macroscopic quantum phase, photons could be dark and propagate without dissipation, and some of them could get through the blackhole-like object.

  3. How could the X-rays manage to get through the supermassive blackhole? The simplest option is that the quantum coherence in the length scale of the flux tube containing only neutrons allows photons to propagate along it even when one has heff=h. The photons that get stuck at the flux tube loops would propagate several times around a loop before getting out of the blackhole in the direction of the observer. In this way, an incoming radiation pulse would give rise to a sequence of pulses.
I have considered several applications of this mechanism.
  1. I have proposed that the gravitational echoes detected in the formation of blackholes via the fusion of two blackholes could be due to this kind of sticking inside a loop (see this). This would generate a sequence of echoes of the primary radiation burst.
  2. The Sun has been found to generate gamma rays in an energy range in which this should not be possible in standard physics (see this). The explanation could be that cosmic gamma rays with very high energies get temporarily stuck at the monopole flux tubes of the Sun, so that the Sun would not be the primary source of the high energy gamma radiation.
  3. The propagation of photons could be possible also inside the Earth along, possibly dark, monopole flux tubes, at which the dissipation is small. The TGD based model for the Cambrian explosion (see this, this and this) proposes that photosynthesizing life evolved in the interior of the Earth and burst to the surface in the Cambrian explosion about 540 million years ago. The basic objection is that photosynthesis is not possible in the underground oceans: solar photons cannot find their way to these regions. The photons could however propagate as dark photons along the flux tubes. A second option is that the Earth's core (see this and this) provides the dark photons, which would be in the same energy range as solar photons. The mechanism of propagation would be the same for both options.
In the TGD framework, one must of course take the interpretation of the findings inspired by general relativity with a grain of salt. The object identified as a supermassive blackhole could be something very different from a standard blackhole. If it is a thickened portion of a cosmic string, it would emit particles instead of absorbing them in an explosion-like thickening of the cosmic string transforming dark energy to matter and radiation (this would be the TGD counterpart of the decay of inflaton fields to matter; see this, this and this). Of course, the matter bursting into the environment from a blackhole-like object would tend to fall back and could cause the observed phenomena in the way discussed above. The X-rays identified as reflected X-rays could correspond to X-rays of this kind reflected from the blackhole-like object. I am not enough of a specialist to immediately choose between these two options.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, March 07, 2024

Counter teleportation and TGD

Tuomas Sorakivi sent links to interesting articles related to the work of Hatim Salih (see this and this). Salih is a serious theorist, and the recent scandalous wormhole simulation using a quantum computer (Sycamore) is not related to him.

Salih introduces the concept of counter teleportation: communication that involves neither classical nor quantum signals (photons). Counterfactuality is the basic concept; the first web source one finds states: "Counterfactuals are things that might have happened, although they did not in fact happen. In interaction-free measurements, an object is found because it might have absorbed a photon, although actually it did not."

The example considered by Salih is as follows.

  1. Consider a mirror system consisting of a) fully reflective mirrors and b) mirrors that let through the horizontal polarization H and reflect the vertical polarization V. The system contains two paths, A and B. At the first mirror, which is of type b), the signal splits into two parts, H and V, which propagate along A and B respectively. At the end the signals meet at a type b) mirror: H goes through to detector D1 and V is reflected and ends up at detector D2.
  2. The polarization H passing through the b) mirror at the first step travels along path A. This path contains only one fully reflective mirror, and the beam reflected from it arrives at the downstream mirror of type b) as H polarization and goes to detector D1.
  3. The V reflected at the first step travels along path B. Path B contains many steps, and at each step the polarization is slightly rotated so that the incoming V has transformed into H at the end, but with a phase opposite to that of the H arriving along A. The two contributions interfere to zero at D1, and it is detector D2, registering V, that clicks.

    I'm not sure, but I think that in the B-path mirrors the polarization directions H and V are chosen so that nothing gets through. Hence the "counterfactuality": there is no interaction with photons, only the possibility of it, and this is enough.

  4. Bob can control path B and can block it so that nothing gets through. The result is that only the signal coming along path A gets through and travels to detector D1. Bob can therefore communicate information to Alice: for instance, at moments of time t_n=nt_0 he can block or open path B. The result is a string of bits that Alice observes. This is communication without photons or classical signals.
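The steps above reduce to a simple logic that can be condensed into a toy model (a sketch of the classical logic only; it ignores losses at the blocker and the quantum Zeno machinery, and the function names are mine):

```python
def detector(path_b_open: bool) -> str:
    """Which detector clicks: with B open, the rotated V from B cancels the H
    from A at D1, so D2 clicks; with B blocked, only the A contribution
    survives and D1 clicks."""
    return "D2" if path_b_open else "D1"

def alice_reads(bob_bits):
    """Bob encodes bits at times t_n = n*t_0 by opening (1) or blocking (0)
    path B; Alice reads off the resulting detector clicks."""
    return [detector(bit == 1) for bit in bob_bits]

print(alice_reads([1, 0, 1, 1]))  # ['D2', 'D1', 'D2', 'D2']
```

Each detector click gives Alice one bit, which is the sense in which the arrangement transmits a bit string without photons traversing the blocked channel.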
The basic question is what the blocking of channel B means in the language of theoretical physics. It is a mesoscopic or even macroscopic operation. This is where Bob comes in as a conscious, intentional entity. Here recent theoretical physics cannot help.

Salih realizes that this is something new that standard quantum physics cannot describe. Such a situation leads to a paradox. Salih considers many options, starting from different interpretations of quantum measurement theory.

  1. "Weak measurement", as introduced by Aharonov and his colleagues, is one option presented. In the name of honesty, it is necessary to be politically incorrect and say that this model is already mathematically inconsistent.
  2. The "consistent histories approach" is another option that was hoped to solve the measurement problem of quantum mechanics. It gives up the concept of unitary time evolution. Also this model is mathematically and conceptually hopelessly ugly. A mathematician could never consider such an option, but necessity knows no law.
  3. Wormholes as a cause or correlate of quantum entanglement are the third attempt to describe the situation. The problem is that wormholes are unstable and the ER-EPR correspondence has not led to anything concrete even though there are scary big names behind it. Salih also suggests a connection with quantum computation, but this connection is extremely obscure and requires something like AdS/CFT.

    Here, however, I think Salih is on the right track: he has realized that the solution to the problem lies at the space-time level. The ordinary trivial topology of Minkowski space is not enough. The question is how to describe geometric objects, like this experimental setup, at a fundamental level. In the standard model they are described phenomenologically by means of matter densities, and this is of course not enough at the quantum level.

What does TGD say? TGD brings a new ontology both at the space-time level and in quantum measurement theory.
  1. In addition to elementary particles, TGD brings to quantum physics the geometric and topological degrees of freedom related to space-time surfaces. One obtains a description of the observed physical objects in different scales: typically they correspond to a non-trivial space-time topology. Space-time is not a flat M^4, not even its slightly curved GRT variant, but a topologically extremely complex 4-surface with a fractal structure: space-time sheets glued to larger space-time sheets by wormhole contacts, monopole flux tubes, etc.
    1. The system just considered corresponds to two different space-time topologies. Photons can travel a) only along path A (channel B blocked) or b) along both paths A and B simultaneously (no blocking).
    2. Bob has the competence of a space-time topology engineer and can decide which option is realized by blocking or opening channel B, that is by changing the space-time topology.
    3. Describing this operation as a quantum jump means that Bob is quantum entangled with the geometric and topological degrees of freedom of channel B. The initial state is a superposition of open B and closed B. Bob measures whether the channel is open or closed and gets the result "open" or "closed". The outcome determines what Alice observes. Monopole flux tubes, replacing the wormholes of GRT, serve as correlates and prerequisites for this entanglement.
    The controlled qubit (channel B open or closed) is macroscopic or at least nanoscopic and cannot be represented by the spin states of an elementary particle.

    Note that the experimental arrangement under consideration corresponds logically to a cnot operation. If channel B is closed, nothing happens to the incoming signal and it ends up in D1. If B is open, the signal ends up at detector D2. The cnot would be realized by bringing in Bob as the controller affecting the space-time topology. This kind of control could make possible human-quantum computer interaction and, if ordinary computers can have quantum coherence in time scales longer than the clock period (in principle possible in the TGD Universe!), also human-computer interaction more generally.
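For comparison, the standard cnot gate on a (control, target) pair flips the target exactly when the control is 1; in the analogy above the control would be "channel B open" and the target the detector bit. A textbook illustration, not anything TGD-specific:

```python
def cnot(control: int, target: int) -> tuple:
    """CNOT on classical basis states: the target bit flips iff control == 1."""
    return (control, target ^ control)

# Truth table on the four basis states |ct>
for c in (0, 1):
    for t in (0, 1):
        out = cnot(c, t)
        print(f"|{c}{t}> -> |{out[0]}{out[1]}>")
# |00> -> |00>, |01> -> |01>, |10> -> |11>, |11> -> |10>
```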

  2. The second requirement is quantum coherence on meso- or even macroscopic scales. Number-theoretic TGD predicts a hierarchy of effective Planck constants h_eff, which label phases of ordinary matter that can be quantum coherent on arbitrarily long length and time scales. These phases behave like dark matter and explain the missing baryonic matter, whereas dark energy in the TGD sense explains galactic dark matter. They enable quantum coherence at the nano and macro levels.
    1. This makes mesoscopic quantum entanglement possible and brings to quantum computation the hierarchy of Planck constants, which has dramatic implications: consider only the stability of the qubits against thermal perturbations. Braided monopole flux tubes, which make topological quantum computation possible, in turn stabilize the computations at the space-time level.
    2. There are also deep implications for classical computation (see this, this, and also this). Classical computers could become conscious, intelligent entities in the TGD Universe if a quantum coherence time assignable to the computer exceeds the clock period. Also the entanglement of a living entity with a computer could make the computer a part of the living entity. Control of computers by living entities using a cnot coupling, which makes counter teleportation possible, could make human-quantum computer interaction possible.

    As a matter of fact, there is evidence for an interaction between computers and living matter: a chicken becomes imprinted on a robot and the behavior of the robot begins to correlate with that of the chicken! Maybe a cnot coupling with the random number generator of the robot is involved! Here the TGD view of classical fields and the long length scale quantum coherence associated with classical electric, magnetic and gravitational fields might allow one to understand what is involved (see this and this).

    1. The gravitational field of the Sun corresponds to a gravitational Compton frequency of 50 Hz, the average EEG frequency. Does this mean that we have already become entangled with our computers without realizing what has happened: who uses whom? The Earth's gravitational field corresponds to the gravitational Compton frequency of 67 GHz, a typical frequency for biomolecules. The clock frequencies of computers are approaching this limit.
    2. The analogous Compton frequencies for the electric fields of the Sun and the Earth (see this) are also highly interesting, besides the cyclotron frequencies for monopole flux tubes, in particular for those carrying the "endogenous" magnetic field B_end = (2/5)B_E = 0.2 Gauss postulated by Blackman to explain his strange findings about the effects of ELF radiation at EEG frequencies on the vertebrate brain.
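    The quoted frequencies can be checked at the order-of-magnitude level. The sketch below assumes the gravitational Compton frequency f_gr = β_0 c^3/(GM); the β_0 values used (β_0 = 1 for the Earth, β_0 ≈ 2^-12 for the Sun) are my assumptions chosen to reproduce the figures in the text:

```python
G, c = 6.674e-11, 2.998e8          # SI units
M_SUN, M_EARTH = 1.989e30, 5.972e24

def f_gr(M, beta0):
    """Gravitational Compton frequency f = beta0 * c^3 / (G*M)."""
    return beta0 * c**3 / (G * M)

print(f_gr(M_EARTH, 1.0) / 1e9, "GHz")   # about 67.6 GHz for the Earth
print(f_gr(M_SUN, 2**-12), "Hz")         # about 50 Hz for the Sun
```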

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.