Wednesday, August 29, 2018

Dark valence electrons, dark photons, bio-photons, and carcinogens

The possible role of bio-photons in living matter is becoming gradually accepted by biologists and neuroscientists. It seems that the intensity of bio-photon emission increases in sick organisms, and bio-photons are used as a diagnostic tool. Fritz-Albert Popp (see this) started his work with bio-photons with some observations about the interaction of UV light with carcinogens (see this). Veljkovic has also published results suggesting correlations between carcinogenicity and the absorption spectrum of photons in the UV (ultraviolet).

I have proposed that bio-photons emerge as ordinary photons from what I call dark photons, which differ from ordinary photons in that they have a non-standard value heff = n×h0 of Planck constant. Other particles (electrons, protons, ions, ...) can also be dark in this sense.

One of the mysteries of biology, which mere biochemistry cannot explain, is that living systems behave coherently on macroscopic scales. The TGD explanation is that dark particles forming Bose-Einstein condensates (BECs) and super-conducting phases at the magnetic flux tubes of what I call the magnetic body possess macroscopic quantum coherence due to the large value of heff. This quantum coherence would force the coherent behavior of living matter. I have already earlier developed rather concrete models for bio-photons on the basis of this assumption.

In the sequel I discuss bio-photons from a new perspective, starting from bio-photon emission as a signature of a morbid condition of the organism. The hypothesis is that in a sick organism dark photons tend to transform to bio-photons in the absence of the metabolic energy feed needed to increase the value of heff. Hence the BECs of dark photons, and also of other dark particles, decay, and this leads to a loss of quantum coherence.

A further hypothesis is that at least a considerable part of bio-photons emerge in the transformations of dark photons emitted in the transitions of a lone dark valence electron of any atom able to have one. Since the dark electron has a scaled-up orbital radius, it sees the rest of the atom as a unit charge, and its spectrum is in good approximation the hydrogen spectrum. Therefore the corresponding part of the spectrum of bio-photons would be universal, in accordance with quantum criticality.

This picture makes it possible to develop some ideas about the quantum mechanisms behind cancer in the TGD framework.

Some basic notions related to carcinogens

Before continuing it is good to clarify some basic notions. Toxins are poisonous substances created in metabolism. Carcinogens (see this) are substances causing cancer; they often damage DNA and induce mutations (mutagenicity).

Free radicals (see this) provide a basic example of carcinogens. They have one unpaired valence electron and are therefore very reactive. The unpaired electron has a strong tendency to pair with another electron and steals it from some molecule. The molecule providing the electron is said to be oxidized, and the free radical acts as an oxidant. The outcome is a reaction cascade in which the carcinogen receives an electron but the electron donor becomes highly reactive. Anti-oxidants stop the reaction cascade by getting oxidized to rather stable molecules (see this and this).

Benzo[a]pyrene (BAP) C20H12 (see this) is one example of a carcinogen. It contains several fused aromatic rings, is formed as a product of incomplete combustion, and reacts with powerful oxidizers. As such BAP is not a free radical, but its derivatives BAP+/- obtained by one-electron reduction or oxidation are (see this).

There are also carcinogens such as benzene, which as such is not dangerous. What happens is that a single oxygen atom binds to the carbons at the ends of one of benzene's double bonds, forming a so-called epoxide bond. This molecule penetrates into the DNA chain and causes damage. Perhaps the fact that DNA nucleotides also contain aromatic 6-rings relates to this.

The emission of bio-photons (see this) increases if carcinogens such as oxidants are present. The idea is that bio-photons could be relevant for understanding the problem. It has been proposed that bio-photons could be created when anti-oxidants interact with molecules generating triplet states (spin 1), which decay by photon emission. The photons generated in this manner would have a discrete spectrum, whereas bio-photons seem to have a continuous and rather featureless spectrum. Therefore this model must be taken with caution.

It could be that the origin of bio-photons is not chemical. If so, carcinogens would not produce bio-photons in ordinary atomic or molecular transitions. They could however induce the generation of bio-photons indirectly. The understanding of bio-photons might help to understand the mechanisms behind carcinogenic activity. I have discussed bio-photons from the TGD point of view earlier.

Some basic notions of TGD inspired quantum biology

In the sequel I try to develop a necessarily speculative picture about carcinogen action on the basis of TGD based quantum biology. The goal is to develop the general theory by constructing a concrete model for a specific problem.

Magnetic flux tubes and the field body/magnetic body are basic notions of TGD, implied by its modification of Maxwellian electrodynamics; actually a profound generalization of the space-time concept is in question. Magnetic flux tubes are in a well-defined sense building bricks of space-time (topological field quanta) and lead to the notion of the field body/magnetic body as a magnetic identity assignable to any physical system. In Maxwell's theory and in ordinary field theory the fields of different systems superpose, and one cannot say of the magnetic field in a given region of space-time that it belongs to some particular system. In TGD only the effects on a test particle of the induced fields associated with different space-time sheets with overlapping M4 projections sum.

The hierarchy of Planck constants heff = n×h0, where h0 is the minimum value of the Planck constant, is a second key notion. h0 need not correspond to the ordinary Planck constant h: both the observations of Randell Mills and the model for color vision suggest that one has h = 6h0. The hierarchy of Planck constants labels a hierarchy of phases of ordinary matter behaving as dark matter.

Magnetic flux tubes would connect molecules, cells and even larger units, which would serve as nodes in (tensor) networks. Flux tubes would also serve as correlates for quantum entanglement and replace wormholes in the ER-EPR correspondence proposed by Juan Maldacena and Leonard Susskind in 2013 (see this and this). In biology and neuroscience these networks would play a central role. For instance, in the brain neuron nets would be associated with them and would serve as correlates for mental images. The dynamics of mental images would correspond to that of the flux tube networks.

The proposed model briefly

In the sequel the basic hypothesis is that dark photons emerging from the transitions of dark valence electrons of any atom possessing a lone unpaired valence electron could give rise to a part of bio-photons in their decays to ordinary photons. The hypothesis is developed by considering a TGD based model for a finding which served as the starting point of the work of Popp (see this): the irradiation of carcinogens with light at a wavelength of 380 nm generates radiation with wavelength 218 nm, so that the energy of the photon increases in the interaction. Also the findings of Veljkovic about the absorption spectrum of carcinogens have considerably helped in the development of the model.
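As a sanity check on the energetics, the photon energies can be computed from E = hc/λ. The sketch below only makes Popp's numbers explicit: the photon gains about 2.4 eV in the interaction.

```python
# Minimal sanity check: photon energies for the wavelengths quoted by Popp.
# E = hc/lambda with hc = 1239.84 eV*nm.
HC_EV_NM = 1239.84

for wavelength_nm in (380.0, 218.0):
    energy_ev = HC_EV_NM / wavelength_nm
    print(f"{wavelength_nm:5.0f} nm -> {energy_ev:.2f} eV")

# Energy gained by the photon in the interaction:
print(f"gain: {HC_EV_NM/218.0 - HC_EV_NM/380.0:.2f} eV")
```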

The outcome is a proposal for dark transitions explaining the findings of Popp and Veljkovic. The spectrum of dark photons also suggests a possible identification of the metabolic energy quantum of 0.5 eV and of the Coulomb energy assignable to the cell membrane potential. The possible contribution to the spectrum of bio-photons is considered, and it is found that the spectrum differs from a smooth spectrum since the ionization energies for dark valence electrons, which depend on the value of heff as 1/heff^2, serve as accumulation points for the spectral lines. Also the possible connections with the TGD based models of color vision and of music harmony are briefly discussed.
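A small illustrative sketch of how such accumulation points would look: for a hydrogen-like dark valence electron the energies scale as (h/heff)^2, so for each value of heff the lines pile up below the scaled ionization energy. The scaling law and h = 6h0 are the assumptions stated in the text; the 13.6 eV reference value and the numerics are only illustrative.

```python
# Illustrative line spectrum for a hydrogen-like dark valence electron.
# Assumption (from the text): energies scale as (h/heff)^2, heff = n*h0, h = 6*h0.
E0 = 13.6  # eV, hydrogen ionization energy used as the reference scale

def lines(n_heff, n_max=8):
    """Transition energies n1 -> ground state for heff = n_heff * h0."""
    scale = (6.0 / n_heff) ** 2
    return [E0 * scale * (1.0 - 1.0 / n1**2) for n1 in range(2, n_max + 1)]

for n_heff in (6, 12, 24):
    e = lines(n_heff)
    ion = E0 * (6.0 / n_heff) ** 2
    print(f"heff = {n_heff}*h0: lines {e[0]:.3f} ... {e[-1]:.3f} eV, "
          f"accumulating below E_ion = {ion:.3f} eV")
```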

See the article Dark valence electrons, dark photons, bio-photons, and carcinogens or the chapter of "TGD based view about consciousness, living matter, and remote mental interactions" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Friday, August 24, 2018

Confession

I have decided to come out of the closet and confess something which might deeply upset my readers: I have never been able to understand how atoms can ionize in an electrolyte! The voltages/electric fields are far too small to give electrons or ions the energies needed to ionize atoms in collisions. It is of course quite possible that I have not understood something totally trivial, which is perfectly clear to every normal first year student. This is indeed my personal problem: I have seen no one else suffering from it. Perhaps colleagues are right: I am a miserable crackpot who despite efforts lasting 40 years has not been able to understand the basics of electrolytes.

I try to explain my personal problem. For instance, in air dielectric breakdown by ionization occurs for an electric field of about 3 V/micrometer. The mean free path of an electron in air is 68 nm, less than 1/100 of the distance of about 10 micrometers (cell size, by the way) needed to pick up the voltage of order 10 V required to ionize an atom. For the hydrogen atom the ionization energy is 13.6 eV. In an electrolyte the mean free path is much shorter and the voltage much weaker. According to my humble and probably dumb mind, the charge simply cannot accelerate to the needed ionization energy.
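The arithmetic behind the complaint can be made explicit; a minimal sketch using the numbers quoted above (3 V/micrometer breakdown field, 68 nm mean free path):

```python
# Energy an electron gains between collisions at the air breakdown field.
E_FIELD = 3.0e6          # V/m, dielectric breakdown field of air (~3 V/micrometer)
MEAN_FREE_PATH = 68e-9   # m, electron mean free path in air
IONIZATION_H = 13.6      # eV, hydrogen ionization energy

gain_ev = E_FIELD * MEAN_FREE_PATH  # eV gained per mean free path (charge e)
print(f"energy per free path: {gain_ev:.2f} eV vs {IONIZATION_H} eV needed")
# -> about 0.2 eV: far below the ionization energy even at breakdown field,
#    and in an electrolyte both the field and the free path are much smaller.
```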

Despite horrible feelings of fear in my stomach about being decapitated by angry colleagues, I dare to conclude that new physics is needed and that this physics is essential also for the strange phenomenon of "cold fusion" associated with electrolysis and for the strange fact that in living matter ions play a key role.

Even worse, I dare to propose that TGD provides a solution to the problem. TGD forces a generalization of the notion of field: for instance, the magnetic field is replaced with flux tubes and flux sheets serving as basic building bricks of the space-time surface, which has extremely complex topology even in macroscopic length scales. The flux tubes would serve as a kind of superconducting wire along which charged particles can flow without dissipation and accelerate to the needed ionization energies over the cell size scale of about 10 micrometers.

It is now done! What a relief! Honesty is painful but it is the best option! Let the jury decide!

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, August 23, 2018

How could Planck length be actually equal to much larger CP2 radius?!


The following argument states that the Planck length lP equals the CP2 radius R, lP = R, and that Newton's constant can be identified as G = R^2/ℏeff. This idea, looking nonsensical at first glance, was inspired by a FB discussion with Stephen Paul King.

First some background.

  1. I believed for a long time that the Planck length lP would be the CP2 length scale R multiplied by a numerical constant of order 10^-3.5. Quantum criticality would have fixed the value of lP and therefore G = lP^2/ℏ.

  2. The twistor lift of TGD led to the conclusion that the Planck length lP is essentially the radius of the twistor sphere of M4, so that in TGD the situation seemed to be settled, since lP would be a purely geometric parameter rather than a genuine coupling constant. But it is not! One should be able to understand the value of the ratio lP/R, but here quantum criticality, which should determine only the values of genuine coupling parameters, does not seem to help.

    Remark: M4 has a twistor space in the usual conformal sense, with metric determined only apart from a conformal factor, and in the geometric sense as M4× S2: these two twistor spaces are parts of a double fibering.

Could the CP2 radius R be the radius of the M4 twistor sphere, and could one say that the Planck length lP is actually equal to R: lP = R? One might get G = lP^2/ℏ from G = R^2/ℏeff!
  1. It is indeed important to notice that one has G = lP^2/ℏ. In TGD ℏ is replaced with a spectrum of ℏeff = nℏ0, where ℏ = 6ℏ0 is a good guess. At flux tubes mediating gravitational interactions one has

    ℏeff = ℏgr = GMm/v0 ,

    where v0 is a parameter with dimensions of velocity. I recently proposed a concrete physical interpretation for v0 (see this). The value v0 = 2^-12 is suggestive on the basis of the proposed applications, but the parameter can in principle depend on the system considered.

  2. Could one consider the possibility that the radius of the M4 twistor sphere equals the CP2 radius R, so that lP = R after all? This would allow one to circumvent the introduction of the Planck length as a new fundamental length and would mean a partial return to the original picture. One would have lP = R and G = R^2/ℏeff. ℏeff/ℏ would be of order 10^7-10^8!

The problem is that ℏeff varies over wide limits, so that also G would vary. This does not seem to make sense at all. Or does it?!
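To see how wildly ℏeff = ℏgr varies, one can evaluate Nottale's formula ℏgr = GMm/v0 for a few masses. This is a rough order-of-magnitude sketch assuming v0 = 2^-12 in units of c (the value suggested above) and m the proton mass:

```python
# Order-of-magnitude look at hbar_gr = G*M*m/v0 (Nottale's formula).
# Assumption: v0 = 2**-12 * c as suggested in the text; m = proton mass.
G = 6.674e-11         # m^3 kg^-1 s^-2
HBAR = 1.0546e-34     # J s
C = 2.998e8           # m/s
M_PROTON = 1.673e-27  # kg
v0 = 2**-12 * C

for name, M in (("Earth", 5.972e24), ("Sun", 1.989e30)):
    hbar_gr = G * M * M_PROTON / v0
    print(f"{name}: hbar_gr/hbar = {hbar_gr / HBAR:.2e}")
# -> ~1e17 for Earth, ~1e22 for the Sun: hbar_gr spans many orders of magnitude.
```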

To get some perspective, consider first the phase transition replacing ℏ, and more generally ℏeff,i, with ℏeff,f = ℏgr.

  1. The fine structure constant is what matters in electrodynamics. For a pair of interacting systems with charges Z1 and Z2 one has the coupling strength Z1Z2e^2/4πℏ = Z1Z2α, α ≈ 1/137.

  2. One can also define the gravitational fine structure constant αgr. Only αgr should matter in quantum gravitational scattering amplitudes. αgr would be given by

    αgr= GMm/4πℏgr= v0/4π .

    v0/4π would appear as a small expansion parameter in the scattering amplitudes. This in fact suggests that v0 is analogous to α and is a universal coupling constant, which could however be subject to discrete number theoretic coupling constant evolution.

  3. The proposed physical interpretation is that a phase transition ℏeff,i → ℏeff,f = ℏgr at the flux tubes mediating the gravitational interaction between M and m occurs if the perturbation series in αgr = GMm/4πℏ fails to converge (Mm ∼ mPl^2 is the naive first guess for this value). Nature would be theoretician friendly and increase ℏeff, reducing αgr so that the perturbation series converges again.

    Number theoretically this means an increase of algebraic complexity as the dimension n = heff/h0 of the extension of rationals involved increases from ni to nf, and the number n of sheets in the covering defined by the space-time surface increases correspondingly. Also the scale of the sheets would increase by the ratio nf/ni.

    This phase transition can also occur for gauge interactions. For electromagnetism the criterion is that Z1Z2α is so large that perturbation theory fails. The replacement ℏ → Z1Z2e^2/v0 makes v0/4π the coupling strength. The phase transition could occur for atoms having Z ≥ 137, which are indeed problematic for the Dirac equation. For color interactions the criterion would mean that v0/4π becomes the coupling strength of color interactions when αs is above some critical value. Hadronization would naturally correspond to the emergence of this phase.

    One can raise interesting questions. Is v0 (presumably depending on the extension of rationals) a completely universal coupling strength characterizing any quantum critical system, independent of the interaction making it critical? Can, for instance, gravitation and electromagnetism be mediated by the same flux tubes? I have assumed that this is not the case. If it were, one could have for GMm < mPl^2 a situation in which the effective coupling strength is of the form (GMm/Z1Z2e^2)(v0/4π).

The possibility of the proposed phase transition has rather dramatic implications for both quantum and classical gravitation.
  1. Consider first quantum gravitation. v0 does not depend on the value of G at all! The dependence of G on ℏeff could therefore be allowed, and one could have lP = R. At the quantum level scattering amplitudes would not depend on G but on v0. I was happy to have found a small expansion parameter v0, but did not realize the enormous importance of the independence from G!

    Quantum gravitation would be like any gauge interaction with a dimensionless coupling, which is even small! This might relate closely to the speculated TGD counterpart of the AdS/CFT duality between gauge theories and gravitational theories.

  2. But what about classical gravitation? Here G should appear. What could the proportionality of the classical gravitational force to 1/ℏeff mean? The invariance of Newton's equation

    dv/dt = -GM r/r^3

    under heff → xheff would be achieved by the scalings r → xr, t → x^2 t, and v → v/x. Note that these transformations have a general coordinate invariant meaning as transformations of the M4 coordinates in M4×CP2. This scaling means a zooming up of the size of the space-time sheet by x, which indeed is expected to happen in heff → xheff!
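A short check of the claimed invariance, written out under the assumption G → G/x when ℏeff → xℏeff; the scalings r → xr, t → x^2 t, v → v/x quoted above are the ones that actually do the job:

```latex
% Scaling check for Newton's equation under heff -> x*heff, i.e. G -> G/x.
% With r -> x r, t -> x^2 t, and hence v = dr/dt -> v/x:
\frac{dv}{dt} \;\longrightarrow\; \frac{d(v/x)}{d(x^2 t)} = \frac{1}{x^3}\,\frac{dv}{dt}\,,
\qquad
-\frac{GM\mathbf{r}}{r^3} \;\longrightarrow\; -\frac{(G/x)\,M\,(x\mathbf{r})}{(xr)^3}
= \frac{1}{x^3}\left(-\frac{GM\mathbf{r}}{r^3}\right).
% Both sides scale by 1/x^3, so the equation of motion is invariant.
```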

What is so intriguing is that this connects to an old problem that I pondered a lot during the period 1980-1990 as I attempted to construct approximate spherically symmetric stationary solutions to the field equations for Kähler action. Naive arguments based on the asymptotic behavior of the solution ansatz suggested that one should have G = R^2/ℏ. For a long time I indeed assumed R = lP, but p-adic mass calculations and work with cosmic strings forced me to conclude that this cannot be the case. The mystery was how G = R^2/ℏ could be normalized to G = lP^2/ℏ: the solution of the mystery is ℏ → ℏeff, as I have now - decades later - realized!

See the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant or the new chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff of "Physics in many-sheeted space-time".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, August 21, 2018

TGD explanation for the finding challenging the reported high Tc superconductivity

I have already commented earlier on the results of the Indian physicists Dev Kumar Thapa and Anshu Pandey, who have found evidence for superconductivity at ambient (room) temperature and pressure in nanostructures (see this). I then learned about a strange finding by Brian Skinner, which could even be seen to indicate that the results involve fabrication. In the following I develop an argument suggesting that this is not the case. The argument involves the de Haas-van Alphen effect and the notion of magnetic flux tube.

The strange observation of Brian Skinner about the effect

After writing the above comments I learned from a popular article (see this) about an objection (see this) challenging the claimed discovery (see this). The claimed finding received a lot of attention, and the physicist Brian Skinner at MIT decided to test the claims. At first the findings looked quite convincing to him. He however decided to look at the noise in the measured values of the volume susceptibility χV. χV relates the magnetic field B in the superconductor to the external magnetic field Bext via the formula B = (1+χV)Bext (in units with μ0 = 1 one has Bext = H, where usually H is used).

For diamagnetic materials χV is negative since they tend to repel external magnetic fields. For superconductors one has χV = -1 in the ideal situation. The situation is however not ideal, and a stepwise change of χV from χV = 0 to some negative value satisfying |χV| < 1 serves as a signature of high Tc superconductivity. Both the superconducting and the ordinary phase would be present in the sample.

Figure 3a of the authors' article gives χV as a function of temperature for some values of Bext, with the color of the curve indicating the value of Bext. Note that χV depends on Bext, whereas in a strictly linear situation it would not do so. There is indeed a transition at the critical temperature Tc = 225 K reducing χV from 0 to a negative value in the range χV ∈ [-0.06, -0.05], having no visible temperature dependence but decreasing somewhat with Bext.
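A crude way to read the range quoted above, hedged as a naive two-phase estimate: if the superconducting fraction f of the sample has χV = -1 and the rest χV = 0, the measured susceptibility is χV ≈ -f, so the quoted values correspond to a superconducting fraction of roughly 5-6 percent.

```python
# Naive two-phase estimate of the superconducting volume fraction.
# Assumption: chi_V = -1 inside superconducting regions, 0 in normal regions,
# so the measured susceptibility is chi_V ~ -f for superconducting fraction f.
for chi_measured in (-0.05, -0.06):
    f = -chi_measured  # superconducting fraction in this mixture model
    print(f"chi_V = {chi_measured:+.2f} -> superconducting fraction ~ {f:.0%}")
```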

The problem is that the fluctuations of χV for the green curve (Bext = 1 Tesla) and the blue curve (Bext = 0.1 Tesla) have the same shape, with the blue curve only shifted downward relative to the green one (the shift corresponds to somewhat larger diamagnetism for the lower value of Bext). If I have understood correctly, the finding applies only to these two curves and to one sample, corresponding to Tc = 256 K. The article reports superconductivity with Tc varying in the range [145, 400] K.

The pessimistic interpretation is that this part of the data is fabricated. A second possibility is that a human error is involved. The third interpretation would be that the random looking variation with temperature is not a fluctuation but represents a genuine temperature dependence: this possibility looks implausible but can be tested by repeating the measurements or simply by looking whether it is present in the other measurements.

TGD explanation of the effect found by Skinner

One should understand why the effect found by Skinner occurs only for certain pairs of magnetic field strengths Bext and why the shape of the pseudo fluctuations is the same in these situations.

Suppose that Bext is realized as flux tubes of fixed radius. The magnetization is due to the penetration of the magnetic field into the ordinary fraction of the sample as flux tubes. Suppose that the superconducting flux tubes are assignable to 2-D surfaces, as in high Tc superconductivity. Could the fraction of superconducting flux tubes with a non-standard value of heff depend on the magnetic field and temperature in a predictable manner?

The pseudo fluctuation should have the same shape as a function of temperature for the two values of the magnetic field involved but not for other pairs of magnetic field strengths.

  1. Concerning the selection of only preferred pairs of magnetic fields, the de Haas-van Alphen effect gives a clue. As the intensity of the magnetic field is varied, one observes the so-called de Haas-van Alphen effect (see this), used to deduce the shape of the Fermi surface: magnetization and some other observables vary periodically as functions of 1/B. In particular, this is true for χV.

    The period P is given by

    PH-A == 1/BH-A = 2πe/ℏSe ,

    where Se is the extremal Fermi surface cross-sectional area in the plane perpendicular to the magnetic field and can be interpreted as the area of the electron orbit in momentum space (for an illustration see this).

    The de Haas-van Alphen effect can be understood in the following manner. As B increases, cyclotron orbits contract. For certain increments of 1/B the (n+1):th orbit contracts to the n:th orbit, so that the sets of orbits are identical for values of 1/B appearing periodically. This causes the periodic oscillation of, say, magnetization. From this one learns that the electrons rotating at the magnetic flux tubes of Bext are responsible for the magnetization.

  2. One can get a more detailed theoretical view of the de Haas-van Alphen effect from the article of Lifshitz and Kosevich (see this). In a reasonable approximation one can write

    P = e×ℏ/(me EF) = [4α/3^(2/3)π^(1/3)] × [1/Be] , Be == e/ae^2 = x^-2 × 16 Tesla ,

    ae = (V/N)^(1/3) = x×a , a = 10^-10 m .

    Here N/V corresponds to the valence electron density, assumed to form a free Fermi gas with Fermi energy EF = ℏ^2(3π^2 N/V)^(2/3)/2me. a = 10^-10 m corresponds to the atomic length scale. α ≈ 1/137 is the fine structure constant. For P one obtains the approximate expression

    P ≈ 0.15 x^2 Tesla^-1 .

  3. If the difference Δ(1/Bext) for Bext = 1 Tesla and Bext = 0.1 Tesla, that is 1/0.1 - 1/1 = 9 Tesla^-1, corresponds to a multiple k of the period P, one obtains the condition

    k x^2 ≈ 60 .

  4. Suppose that Bext,1 = 1 Tesla and Bext,2 = 0.1 Tesla differ by a period P of the de Haas-van Alphen effect. This would predict the same value of χV for the two field strengths, which is not true. The formula used for χV however holds true only inside a given flux tube: call this value χV,H-A.

    The fraction f of flux tubes penetrating into the superconductor can depend on the value of Bext, and this could explain the deviation. f can depend also on temperature. The simplest guess is that the two effects separate:

    χV= χV,H-A(BH-A/Bext)× f(Bext,T) .

    Here χV,H-A has the period PH-A as a function of 1/Bext and f characterizes the fraction of penetrated flux tubes.

  5. What could one say about the function f(Bext,T)? BH-A = 1/PH-A has dimensions of magnetic field and depends on 1/Bext periodically. The dimensionless ratio Ec,H-A/T of the cyclotron energy Ec,H-A = ℏeBH-A/me to the thermal energy T, together with Bext, could serve as arguments of f(Bext,T), so that one would have

    f(Bext,T) = f1(Bext)f2(x) ,

    x = T/Ec,H-A(Bext) .

    One can also consider the possibility that Ec,H-A is a cyclotron energy with ℏeff = nℏ0 and therefore larger than otherwise. For heff = h and Bext = 1 Tesla one would have Ec = 0.8 K, which is of the same order of magnitude as the variation scale of the pseudo fluctuation. For instance, periodicity as a function of x might be considered.

    If Bext,1 = 1 Tesla and Bext,2 = 0.1 Tesla differ by a period P, one would have

    χV(Bext,1,T)/χV(Bext,2,T) = f1(Bext,1)/f1(Bext,2)

    independently of T. For arbitrary pairs of magnetic fields this does not hold true. This property and also the predicted periodicity are testable.
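The prediction in the last item can be phrased as a simple numerical test. The toy model below assumes the factorization χV = χV,H-A(1/Bext)·f1(Bext)·f2(T/Ec,H-A(Bext)), with both χV,H-A and Ec,H-A periodic in 1/Bext, and checks that the ratio of two χV curves is T-independent exactly when the fields differ by a period. All functional forms are invented purely for illustration.

```python
import math

# Toy test of the factorization chi_V = chi_HA(1/B) * f1(B) * f2(T/Ec(B)).
P = 0.15  # Tesla^-1, de Haas-van Alphen period in 1/B used in the text

def chi_HA(B):   # periodic in 1/B with period P (invented shape)
    return -0.5 + 0.1 * math.sin(2 * math.pi / (B * P))

def Ec(B):       # cyclotron-energy scale, also periodic in 1/B (arbitrary units)
    return 1.0 + 0.5 * math.cos(2 * math.pi / (B * P))

def f1(B):       # invented smooth dependence on the field
    return 1.0 / (1.0 + 0.3 * B)

def chi_V(B, T):
    return chi_HA(B) * f1(B) * math.exp(-T / (300.0 * Ec(B)))

B1 = 1.0
B2_period = 1.0 / (1.0 / B1 + 7 * P)  # differs from B1 by 7 periods in 1/B
B2_other = 0.5                        # generic field with no period relation

for B2, label in ((B2_period, "period-related pair"), (B2_other, "generic pair")):
    ratios = [chi_V(B1, T) / chi_V(B2, T) for T in (100.0, 200.0, 300.0)]
    print(label, ["%.4f" % r for r in ratios])
# -> the period-related pair gives the same ratio at all T; the generic pair does not.
```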

See the article Two new findings related to high Tc super-conductivity or the chapter Quantum criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, August 19, 2018

Large scale fluctuations in metagalactic ionizing background for redshift six

I learned about a very interesting result related to early cosmology and challenging the standard cosmology. The result is described in the popular article "Early opaque universe linked to galaxy scarcity" (see this). The original article "Evidence for Large-scale Fluctuations in the Metagalactic Ionizing Background Near Redshift Six" by Becker et al is published in the Astrophysical Journal (see this).

The abstract of the article is the following.

" The observed scatter in intergalactic Lyα opacity at z ≤ 6 requires large-scale fluctuations in the neutral fraction of the intergalactic medium (IGM) after the expected end of reionization. Post-reionization models that explain this scatter invoke fluctuations in either the ionizing ultraviolet background (UVB) or IGM temperature. These models make very different predictions, however, for the relationship between Lyα opacity and local density. Here, we test these models using Lyα-emitting galaxies (LAEs) to trace the density field surrounding the longest and most opaque known Lyα trough at z < 6. Using deep Subaru Hyper Suprime-Cam narrowband imaging, we find a highly significant deficit of z ≈ 5.7 LAEs within 20 h-1 Mpc of the trough. The results are consistent with a model in which the scatter in Lyα opacity near z ∼ 6 is driven by large-scale UVB fluctuations, and disfavor a scenario in which the scatter is primarily driven by variations in IGM temperature. UVB fluctuations at this epoch present a boundary condition for reionization models, and may help shed light on the nature of the ionizing sources. "

The basic conclusion is that the opaque regions of the early Universe, about 12.5 billion years ago (redshift z ∼ 6), correspond to a small number of galaxies. This is in contrast to standard model expectations. Opacity is due to the absorption of radiation by atoms, and the UV radiation generated by galaxies ionizes atoms and makes the Universe transparent. In standard cosmology the radiation would arrive from a rather large region. The formation of galaxies is estimated to have begun 0.5 Gy after the Big Bang, but there is evidence for galaxies already 0.2 Gy after the Big Bang (see this). Since the region studied corresponds to a temporal distance of about 12.5 Gly and the age of the Universe is around 13.7 Gy, UV radiation from a region of size about 1 Gly should have reached the intergalactic regions and caused the ionization.

A second conclusion is that there are large fluctuations in the opacity. What is suggested is that either the intensity of the UV radiation or the density of the intergalactic gas fluctuates. The fluctuations in the intensity of the UV radiation could be understood if the radiation from the galaxies propagated only to a finite distance in early times. Why this should be the case is difficult to understand in standard cosmology.

Could TGD provide the explanation?

  1. In the TGD framework galaxies would have been born as cosmic strings thickened to flux tubes. The thickening causes a reduction of the string tension as energy per unit length. The liberated dark energy and matter transformed to ordinary matter and radiation. Space-time emerges as thickened magnetic flux tubes. Galaxies would correspond to knots of cosmic strings and stars to their sub-knots.

  2. If the UV light emerging from the galaxies did not get far away from the galaxies, the ionization of the intergalactic gas did not occur, and a region became opaque if its distance to the nearest galaxies exceeded a critical value.

  3. Why would the UV radiation at that time have been unable to leave some region surrounding the galaxies? The notion of many-sheeted space-time suggests a solution. The simplest space-time sheets are 2-sheeted structures if one does not allow space-time to have boundaries. The two members of the pair are glued together along their common boundary. The radiation would have left this surface only partially: partial reflection would occur, with radiation propagating along the first member of the pair reflected into a signal propagating along the second member. This model could explain the large fluctuations in the opacity as fluctuations in the density of galaxies.

  4. Cosmic expansion, occurring in the TGD framework in a jerk-wise manner as rapid phase transitions, would have expanded the galactic space-time sheets, and in the recent Universe this confinement of UV radiation would not occur; intergalactic space would be homogeneously ionized and transparent.

The echo phenomenon could be completely general characteristic of the many-sheeted space-time.
  1. The popular article "Evidence in several Gamma Ray Bursts of events where time appears to repeat backwards" (see this) tells about the article "Smoke and Mirrors: Signal-to-Noise and Time-Reversed Structures in Gamma-Ray Burst Pulse Light Curves" of Hakkila et al (see this). The study of gamma ray bursts (GRBs) occurring in the very early Universe, with distances of a few billion light years (smaller than for the opacity measurements by an order of magnitude), has shown that the GRB pulses have complex structures suggesting that the radiation is partially reflected back at some distance and then back again in the core region. The duration of these pulses varies from 1 ms to 200 s. Could also this phenomenon be caused by the finite size of the space-time sheets assignable to the object creating the GRBs?

  2. There is also evidence for blackhole echoes, which could represent an example of a similar phenomenon. Sabine Hossenfelder (see this) tells about the new evidence for blackhole echoes in the fusion of blackholes for the GW170817 event observed by LIGO, reported by Niayesh Afshordi, professor of astrophysics at Perimeter Institute, in the article "Echoes from the Abyss: A highly spinning black hole remnant for the binary neutron star merger GW170817" (see this). The earlier 2.5 sigma evidence has grown into 4.2 sigma evidence; 5 sigma is regarded as the criterion for discovery. For TGD based comments see this.

See the article Some new strange effects associated with galaxies or the chapter TGD and Astrophysics of "Physics in many-sheeted space-time".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, August 17, 2018

Conformal cyclic cosmology of Penrose and zero energy ontology based cosmology

Penrose has proposed an interesting cyclic cosmology (see this and this) in which two subsequent cosmologies are glued together along their conformal boundary. The metric of the next cosmology is related to that of the previous one by a conformal scaling factor, which approaches zero at the 3-D conformal boundary. The physical origin of this kind of distance scaling is difficult to understand. The prediction is the existence of concentric circles of cosmic size interpretable as a kind of memory of previous cosmic cycles.

In the TGD framework the zero energy ontology (ZEO) inspired theory of consciousness suggests an analogous sequence of cosmologies. Now the cycles would correspond to the life cycles of a conscious entity of cosmic size having a causal diamond (CD) as its imbedding space correlate. The arrow of geometric time is defined as the time direction in which the temporal distance between the ends of the CD increases in a sequence of state function reductions leaving the passive boundary of the CD unaffected and having an interpretation as weak measurements. The arrow of time changes in "big" state function reductions, which change the roles of the boundaries of the CD and mean the death and re-incarnation of the self with an opposite arrow of time. Penrose's gluing procedure would be replaced with a "big" state function reduction in the TGD framework. This proposal is discussed in some detail, as is the possibility that also now concentric low-variance circles in the CMB could carry memories of the previous life cycles of the cosmos. This picture applies to all levels in the hierarchy of cosmologies (hierarchy of selves), giving rise to a kind of Russian doll cosmology.

See the article Conformal cyclic cosmology of Penrose and zero energy ontology based cosmology or the chapter TGD based cosmology of "Physics in many-sheeted space-time".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, August 15, 2018

Unexpected support for the nuclear string model

The nuclear string model (see this) replaces in the TGD framework the shell model. Completely unexpected support for the nuclear string model emerged from research published by the CLAS Collaboration in Nature (see this). The popular article "Protons May Have Outsize Influence on Properties of Neutron Stars" refers to possible implications for the understanding of neutron stars, but my view is that the implications might dramatically modify the prevailing view about nuclei themselves. The abstract of the popular article reads as follows (see this).

"A study conducted by an international consortium called the CLAS Collaboration, made up of 182 members from 42 institutions in 9 countries, has confirmed that increasing the number of neutrons as compared to protons in the atom’s nucleus also increases the average momentum of its protons. The result, reported in the journal Nature, has implications for the dynamics of neutron stars."

The finding is that protons tend to pair with neutrons. If the number of neutrons increases, the probability for the pairing increases too. The binding energy of the pair is liberated as kinetic energy of the pair - rather than becoming kinetic energy of the proton, as the popular text inaccurately states.

Pairing does not fit with the shell model, in which proton and neutron shells correlate very weakly. The weakness of proton-neutron correlations in the nuclear shell model looks somewhat paradoxical in this sense since - as textbooks tell us - it is just the attractive strong interaction between neutron and proton which gives rise to the nuclear binding.

In the TGD based view about the nucleus, protons and neutrons are connected by short color flux tubes to form what I call a nuclear string (see this). These color flux tubes, rather than the nuclear force in the conventional sense, would bind the nucleons.

What can one say about correlations between nucleons in the nuclear string model? If the nuclear string has low string tension, one expects that nucleons far away from each other are weakly correlated, but neighboring nucleons correlate strongly due to the presence of the color flux tube connecting them.

Minimization of the repulsive Coulomb energy would favor protons with neutrons as nearest neighbors, so that pairing would be favored. For instance, one could have n-n-n... near the ends of the nuclear string and -p-n-p-n-... in the middle region, with strong correlations and higher kinetic energy. Even more neutrons could lie between protons if the nucleus is neutron rich. This could also relate to the neutron halo and the fact that the number of neutrons tends to be larger than that of protons. An optimist could see the experimental finding as support for the nuclear string model, as the toy computation below illustrates.
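The Coulomb argument can be checked with a toy computation. The sketch below places Z = 4 protons and N = 8 neutrons on a line of equally spaced sites (unit spacing, unit charge) and compares the proton-proton Coulomb energy of a few orderings; the geometry is of course a caricature of a nuclear string.

```python
from itertools import combinations

def coulomb_energy(string):
    """Sum of 1/r over proton pairs; sites at unit spacing, charge e = 1."""
    protons = [i for i, s in enumerate(string) if s == "p"]
    return sum(1.0 / abs(i - j) for i, j in combinations(protons, 2))

configs = {
    "protons clustered":  "ppppnnnnnnnn",
    "alternating middle": "nnnpnpnpnpnn",
    "evenly spread":      "npnnpnnpnnpn",
}
for label, s in configs.items():
    print(f"{label:18s} {s}  E = {coulomb_energy(s):.3f}")
# -> clustering the protons gives the largest Coulomb energy; separating them
#    with neutrons, as in the -p-n-p-n- ordering, reduces it substantially.
```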

Color flux tubes can certainly have charge 0, but also charges 1 and -1 are possible, since the string has a quark and an antiquark at its ends, giving uubar, ddbar, udbar, and dubar with charges 0, 0, +1, -1. A proton plus a color flux tube with charge -1 would effectively behave as a neutron. Could this kind of pseudo neutrons exist in the nucleus? Or even more radically: could all neutrons in the nucleus be this kind of pseudo neutrons?

The radical view conforms with the model of dark nuclei as dark proton sequences - formed for instance in the Pollack effect (see this) - in which some color bonds can also become negatively charged to reduce the Coulomb repulsion. Dark nuclei have scaled-down binding energy and scaled-up size. They can decay to ordinary nuclei liberating almost all of the ordinary nuclear binding energy: this could explain "cold fusion" (see this).

See the chapter Nuclear string model of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, August 12, 2018

Could also RNA and protein methylation be involved with the expression of molecular emotions?

Some time ago I wrote a piece of text (see this) about the learning of slime molds. The proposal was based on the vision, inspired by the model of bio-harmony, that the harmony of the music of light (and maybe also of sound), realized as 3-chords of dark photons with frequencies of the 12-note scale, expresses and creates emotions, and that each harmony corresponds to a particular mood. The painful conditioning of the slime mold would generate a negative mood which would infect DNA and induce an epigenetic change. This picture conforms also with the finding that RNA can induce the learning of conditionings in snails (see this). The slime mold does not have a central nervous system, but a natural guess would be that also synaptic learning involves a similar mechanism.

One can ask whether also RNA and protein methylation could be involved with learning. If molecular moods correspond to bio-harmonies, and if the conditioning by, say, a painful stimulus involves a change of the emotional state of RNA inducing that of DNA, it must change some of the chords of the bio-harmony. Since bio-harmony is essential for communications by dark photons between dark proton triplets representing dark variants of the basic biomolecules, and also for communications between bio-molecules and their dark variants, one expects that the change of the harmony occurs for all dark analogs of biomolecules and also for the corresponding ordinary biomolecules.

Some chords represented by DNA, RNA, and tRNA codons and amino-acids - briefly, the basic bio-molecules - would be affected.

  1. In the case of DNA, epigenetic modifications (see this) affect mRNA and thus also protein expression. There are two basic mechanisms involved: methylation of the C nucleotide of DNA, and protein modification of histones.

    Methylation (addition of CH3 to N) of the C nucleotide leads to a silencing of gene expression. Methylation occurs typically for CpG pairs and for both strands. Before embryogenesis demethylation occurs for the entire DNA (stem cell state), but cell differentiation means methylation of the genes not expressed. In vertebrates 60-80 percent of CpG is methylated in somatic cells. CpG islands form an exception involving no methylation. Demethylation (see this), the reversal of methylation, occurs either spontaneously or actively.

    The effects on gene expression can also be inherited by the next generations. The mechanism of inheritance is poorly understood. The epigenetic change should also somehow be communicated to the DNA of germ cells, but this seems impossible. The mystery is deepened because before embryogenesis demethylation occurs for the entire genome. It is difficult to understand how the chemical storage of the information about the methylation patterns to be transferred to the next generation is possible at all.

    The TGD view about emotional expression inducing epigenesis by communications via dark photons between basic biomolecules and their dark variants suggests an elegant mechanism. What would be inherited would be the emotional states represented by bio-harmonies assignable to the dark variants of biomolecules.

  2. In the case of pre-RNA, post-transcriptional chemical modifications (see this) - in particular methylation - are known to occur, and they affect RNA splicing rates and change the distribution of mRNAs and thus of proteins. The modifications affect also un-translated RNA (UTR) but not the protein translation from mRNA.

  3. Protein modifications (see this) in turn affect the dynamics of proteins - in particular their properties as enzymes - thereby affecting the rates of various basic processes.

    As already noticed, protein modifications are important in epigenesis by histone modification. The Wikipedia article mentions lys acetylation by adding a CH3-C=O (acetyl) group (see this), lys and arg methylation (see this), ser and thr phosphorylation, and lys ubiquitination and sumoylation. For the N-terminus (the NH2 group at the start of the protein) the process is irreversible and new amino acid residues emerge. Methylation at the C-terminus (the O=C-OH end of the protein) can increase the chemical repertoire. Note that the methylation occurs at the ends of the protein, just as it tends to occur in the case of RNA, as will be seen below.

RNA modifications deserve to be discussed in more detail. This field of study is known as epitranscriptomics (see this). These chemical modifications do not affect protein expression except when they affect the rates of various alternative pre-RNA splicings, so that the distribution of alternative protein outcomes changes. Clearly, the effect is somewhat like the effect of mood on overall activity. There are also many other modifications of RNA. One of them is A-to-I de-amination, which changes A to I in RNA but does not affect protein expression.

The methylation of RNA is the most common and best understood modification of RNA.

  1. The modelling of the methylation of both DNA and RNA is based on the writer-reader-eraser model. Writing corresponds to methylation. Reading corresponds to the attachment of enzymes involved in splicing or protein synthesis to methylated sites at a higher rate. Demethylation is an example of erasing.

  2. Methylation is known to occur for various variants of RNA (ribosomal rRNA, tRNA, mRNA, and the small nuclear RNA snRNA related to the metabolic machinery) after transcription. The biochemical modifications of RNA are called epitranscriptomes (see this). N6-Methyladenosine (m6A) is the most common and best understood modification of RNA. m6A means that the nitrogen in position 6 of adenosine (A) is methylated by adding the group CH3. m6A sites are often located in the last exon near the end of mRNA, in untranslated RNA (UTR) at the 3' end, and inside long exons.

    It has been found that 3 members of the so-called YTH domain protein family acting as readers have a larger affinity for binding to methylated sites. One of them shortens the lifetime of mRNA after translation.

  3. Methylation in general shortens the UTRs (un-translated regions) of mRNA at its 5' and 3' ends (the head and tail of mRNA). One speaks of alternative poly-adenylation (APA, see this) of the tail of the mRNA: poly-adenylation (PA) adds A-sequences to the end of mRNA, affecting its dynamics. The shortening of UTRs means the shortening of PAs.

  4. Methylation affects the rates in the dynamics of translation but does not affect the product of translation itself. A-sequences shield mRNA, and during its life cycle their length is reduced, somewhat as a telomere (see this), consisting of the repeated sequence TTAGGG, shortens during the life cycle of DNA. APA affects the rates for the dynamics of translation. Also the stem loops of pre-RNA can be methylated, and this can increase the rate of an alternative splicing and thus change the relative rates of alternative gene expressions.

The basic question is which of the following options is correct.
  1. The chemical modification of the basic biomolecules is required by the preservation of the resonance conditions. In this case the modification would be associated with all codons and mean a drastic change of both DNA and RNA and also of amino-acids. The modifications, in particular methylation, are however associated with highly restricted portions of DNA and RNA. In particular, only the A nucleotide of RNA is methylated. Hence this option is definitely excluded.

  2. The basic bio-molecules have several resonance frequencies corresponding to various moods, so that chemical modifications are not needed for preserving the resonance conditions. This was assumed about the emotional effect of RNA on DNA in the earlier considerations. Chemical modifications could be seen as emotional expression of the dark variants of bio-molecules.

    This option conforms with the above facts about RNA methylation. Only the UTRs at the ends of RNA and those associated with the stem loops are sensitive to modifications, and the interpretation is that these allow the emotional expression of RNA. Note that a somewhat similar situation is encountered in the case of microtubules, for which the other end is highly dynamical. One can ask whether the shortening of the A-sequences and telomeres could be seen as an outcome of the expression of negative emotions.

What inspired this piece of text was a highly interesting popular article "Methyl marks on RNA discovered to be key to brain cell connections" about the methylation of RNA in the brain (see this). The research article (see this) by Daria Merkurjev et al has the title "Synaptic N6-methyladenosine (m6A) epitranscriptome reveals functional partitioning of localized transcripts".

The researchers isolated brain cells from adult mice and compared the epitranscriptomes found at synapses to those elsewhere in the cells. At more than 4,000 spots on the genome, the mRNA at the synapse was methylated more often, and more than half of these methylated sites were in genes coding for proteins found mostly in synapses. If the methylation was disrupted, the brain cells did not function normally. It was concluded that the methylation probably makes signalling faster.

These findings conform with the idea about the representation of molecular emotions as bio-harmony. Synaptic contacts are the places where emotions should be expressed to give rise to learning by conditioning, realized in terms of changed synaptic strengths. Methylation would be induced as an emotional expression due to the changing of the 3-chords of the harmony.

See the article Emotions as sensory percepts about the state of magnetic body?, a shorter article Could also RNA and protein methylation of RNA be involved with the expression of molecular emotions? or the chapter of "TGD based view about consciousness, living matter, and remote mental interactions" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, August 09, 2018

Are space-time surfaces minimal surfaces everywhere except at 2-D interaction vertices?

The action S determining space-time surfaces as preferred extremals follows from the twistor lift and equals the sum of the volume term Vol and the Kähler action SK. The field equation is a geometric generalization of the d'Alembert (Laplace) equation in the Minkowskian (Euclidian) regions of the space-time surface, coupled with the induced Kähler form analogous to a Maxwell field. What is in question is a generalization of the equations of motion of a point particle, obtained by replacing the particle with a 3-D surface; the orbit of the particle defines a region of the space-time surface.
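Schematically, and with the coefficients only indicated (the normalization 1/4g_K^2 of the Kähler coupling and the volume-term coefficient Λ are written in by me and not fixed by the text), the action in question has the form:

```latex
% Schematic form of the action: Kähler action plus volume term.
% The coefficients 1/(4 g_K^2) and \Lambda are indicative only.
S \;=\; S_K + \mathrm{Vol}
\;=\; \frac{1}{4 g_K^2}\int_{X^4} J_{\alpha\beta}J^{\alpha\beta}\,\sqrt{g}\,d^4x
\;+\; \Lambda \int_{X^4} \sqrt{g}\,d^4x\,.
```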


  1. Zero energy ontology (ZEO) suggests that the external particles arriving at the boundaries of a given causal diamond (CD) are like free massless particles and correspond to minimal surfaces as a generalization of light-like geodesics. This dynamics reduces to mere algebraic conditions, and there is no dependence on the coupling parameters of S. In contrast to this, in the interaction regions inside CDs there could be a coupling between Vol and SK due to the non-vanishing divergences of the energy momentum currents associated with the two terms of the action cancelling each other.

  2. A similar algebraic picture emerges from M8-H duality at the level of M8 and from what is known about the preferred extremals of S, assumed to satisfy an infinite number of super-symplectic gauge conditions at the 3-surfaces defining the ends of the space-time surface at the opposite boundaries of the CD.

  3. On the M8 side of M8-H duality, associativity is realized as the quaternionicity of either the tangent or the normal space of the space-time surface. The condition that there is a 2-D integrable distribution of sub-spaces of tangent spaces, defining a distribution of complex planes as subspaces of the octonionic tangent space, implies the map of the space-time surface in M8 to that of H. A given point m8 of M8 is mapped to a point of M4×CP2 as a pair (m4,s) formed by the M4 ⊂ M8 projection m4 of the point m8 and by the CP2 point s parameterizing the tangent space or the normal space of X4 ⊂ M8.

  4. If associativity, or even the condition about the existence of the integrable distribution of 2-planes, fails, the map to M4×CP2 is lost. One could cope with the situation, since the gauge conditions at the boundaries of the CD would allow one to construct a preferred extremal connecting the 3-surfaces at the boundaries of the CD, if this kind of surface exists at all. One can however wonder whether giving up the map M8 → H is necessary.

  5. The number theoretic dynamics in M8 involves no action principle and no coupling constants, just associativity and the integrable distribution of complex planes M2(x) of complexified octonions. This suggests that also the dynamics at the level of H involves coupling constants only via boundary conditions. This is the case for the minimal surface solutions, suggesting that M8-H duality maps the surfaces satisfying the above mentioned conditions to minimal surfaces. The universal dynamics conforms also with quantum criticality.

  6. One can argue that the dependence of the field equations on coupling parameters in interactions, leading to a perturbative series in coupling parameters in the interior of the space-time surface, spoils the extremely beautiful purely algebraic picture about the construction of solutions of the field equations using the conformal invariance assignable to quantum criticality. A classical perturbation series is also in conflict with the vision that the TGD counterparts of twistorial Grassmannian amplitudes do not involve any loop contributions coming as powers of coupling constant parameters.

Thus M8-H duality, the number theoretic vision, quantum criticality, the twistor lift of TGD reducing the dynamics to the condition about the existence of the induced twistor structure, and the proposal for the construction of twistor scattering amplitudes all suggest an extremely simple picture of the situation. The divergences of the energy momentum currents of Vol and SK would be non-vanishing only at discrete points at partonic 2-surfaces defining generalized vertices, so that minimal surface equations would hold almost everywhere, as the original proposal indeed stated.
  1. The fact that all the known extremals of the field equations for S are minimal surfaces conforms with the idea. This might be due to the fact that these extremals are especially easy to construct, but it could also be true quite generally apart from singular points. The divergences of the energy momentum currents associated with SK and Vol vanish separately: this follows from the analog of holomorphy reducing the field equations to purely algebraic conditions.

    It is essential that the Kähler current jK vanishes or is light-like, so that its contraction with the gradients of the imbedding space coordinates vanishes. A second condition is that in the transversal degrees of freedom the energy momentum tensor is of form (1,1) in the complex sense, and the second fundamental form consists of parts of type (1,1) and (-1,-1). In the longitudinal degrees of freedom the trace Hk of the second fundamental form Hk_αβ = Dβ∂αhk vanishes.

  2. Minimal surface equations are an analog of the massless field equation, but one would like to have also the analog of a massless particle. The 3-D light-like boundaries between the Minkowskian and Euclidian space-time regions are indeed analogs of massless particles, as are also the string world sheets, whose exact identification is not yet fully settled. In any case, they are crucial for the construction of scattering amplitudes in the TGD based generalization of the twistor Grassmannian approach. On the M8 side these points could correspond to singularities at which the Galois group of the extension of rationals has a subgroup leaving the point invariant. The points at which the roots of a polynomial as a function of parameters coincide would serve as an analog.

    The intersections of the string world sheets with the orbits of the partonic 2-surfaces are 1-D light-like curves X1L defining fermion lines. The twistor Grassmannian proposal is that the ends of the fermion lines at the partonic 2-surfaces defining the vertices provide the information needed to construct the scattering amplitudes, so that information theoretically the construction of scattering amplitudes would reduce to an analog of quantum field theory for point-like particles.

  3. The number theoretic vision reduces coupling constant evolution to a discrete evolution. This implies that the twistor scattering amplitudes for given values of the discretized coupling constants involve no radiative corrections. The cuts of the scattering amplitudes would be replaced by sequences of poles. This is unavoidable also because there is a number theoretical discretization of momenta from the condition that their components belong to the extension of rationals defining the adele.

What could the reduction of cuts to poles for the twistorial scattering amplitudes at the level of momentum space mean at the space-time level?
  1. Poles of an analytic function are co-dimension 2 objects. d'Alembert/Laplace equations holding true in Minkowskian/Euclidian signature express the analogs of analyticity in the 4-D case. The co-dimension 2 rule forces one to ask whether the partonic 2-surfaces defining the vertices and the string world sheets could serve as analogs of poles at the space-time level. In fact, the light-like orbits X3L of partonic 2-surfaces allow a generalization of 2-D conformal invariance, since they are metrically 2-D, so that X3L and string world sheets could serve in the role of poles.

    X3L could be seen as analogs of orbits of bubbles in a hydrodynamical flow, in accordance with the hydrodynamical interpretation. Particle reactions would correspond to fusions and decays of these bubbles. Strings would connect the bubbles, give rise to tensor networks, and serve as space-time correlates for entanglement. Reaction vertices would correspond to common ends of the incoming and outgoing bubbles. They would be analogous to the lines of a Feynman diagram meeting at a vertex: now the vertex would however be a 2-D partonic 2-surface.

  2. What can one say about the singularities associated with the light-like orbits of partonic 2-surfaces? The divergence of the Kähler part TK of the energy momentum current T is proportional to a sum of contractions of the Kähler current jK with the gradients ∇hk of the H coordinates. jK need not be vanishing: it is enough that its contraction with ∇hk vanishes, and this is true if jK is light-like. This is the case for the so-called massless extremals (MEs). For the other known extremals jK vanishes.

    Could the Kähler current jK be light-like, non-vanishing, and singular at X3L and at the string world sheets? This condition would provide the long sought-for precise physical identification of the string world sheets. Minimal surface equations would hold true also at these surfaces. Even more: could jK be non-vanishing, and thus also singular, only at the 1-D intersections X1L of string world sheets with X3L, which I have called fermionic lines?

    What does it mean that jK is singular, that is, has a 2-D delta function singularity at string world sheets? jK is defined as the divergence of the induced Kähler form J, so that one can use the standard definition of the derivative to define jK at the string world sheet as the limiting value jKα = (Div± J)α = limΔxn→0 (J+αn − J−αn)/Δxn, where xn is a coordinate normal to the string world sheet and J±αn are the values of Jαn on its two sides. If jK is not light-like, it gives rise to isometry currents with non-vanishing divergence at the string world sheet. This current should be light-like to guarantee that the energy momentum currents are divergenceless. This is guaranteed if the normal components TnA of the isometry currents are continuous through the string world sheet.

  3. If the light-like jK at the partonic orbits is localized at the fermionic lines X1L, the divergences of the energy momentum currents could be non-vanishing and singular only at the vertices defined by the partonic 2-surfaces at which the fermionic lines X1L meet. The divergences of the energy momentum tensors TK of SK and TVol of Vol would be non-vanishing only at these vertices. They should of course cancel each other: Div TK = -Div TVol.

  4. Div TK should be non-vanishing and singular only at the intersections of string world sheets and partonic 2-surfaces defining the vertices as the ends of fermion lines. How does one translate this statement into a more precise mathematical form? How does one define precisely the notion of divergence at the singularity?

    The physical picture is that the conserved isometry charges of the incoming partonic orbit i=1, determined by TK, are shared between the 2 outgoing partonic orbits labelled by j=2,3. This implies a charge transfer from i=1 to the partonic orbits j=2,3 such that the transfers sum up to the total incoming charge. This must correspond to a non-vanishing divergence proportional to a delta function. The transfer of the isometry charge for a given pair i,j of partonic orbits, that is Divi→j TK, must be defined as the limiting value of the quantity Δi→j TKα,A/Δxα as Δxα approaches zero. Here Δi→j TKα,A is the difference of the components of the isometry currents between the partonic orbits i and j at the vertex. The outcome is proportional to a delta function (a schematic display of this definition is given after this list).

  5. A similar description applies to the volume term. Now the trace of the second fundamental form would have a delta function singularity compensating that coming from Div TK. The condition Div TK = -Div TVol would bring in the dependence of the boundary conditions on the coupling parameters, so that the space-time surface would depend on the coupling constants in accordance with quantum-classical correspondence. The manner in which the coupling constants make themselves visible in the properties of the space-time surface would be extremely delicate.
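In display form the limiting procedure of item 4 reads schematically as follows (my transcription of the verbal definition above into LaTeX source, not a new result; the charge symbol QA is introduced here only for compactness):

    \mathrm{Div}_{i\to j}\, T_K^{A}
      = \lim_{\Delta x_\alpha \to 0}
        \frac{\Delta_{i\to j} T_K^{\alpha,A}}{\Delta x_\alpha}
      \;\propto\; \delta^{(2)} ,
    \qquad
    \sum_{j=2,3} Q^{A}_{1\to j} = Q^{A}_{1} .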

This picture conforms with the vision about scattering amplitudes on both the M8 and H sides of M8-H duality.
  1. M8 dynamics based on algebraic equations for space-time surfaces leads to the proposal that scattering amplitudes can be constructed using only the data at the points of the space-time surface with M8 coordinates in the extension of rationals defining the adele. I call this discrete set of points a cognitive representation.

  2. On the H side the information theoretic interpretation would be that all the information needed to construct scattering amplitudes comes from the points at which the divergences of the energy momentum tensors of SK and Vol are non-vanishing and singular.

Both pictures would realize an extremely strong form of holography, much stronger than the strong form of holography, which stated that only partonic 2-surfaces and string world sheets are needed.

See the article The Recent View about Twistorialization in TGD Framework or the shorter article Further comments about classical field equations in TGD framework, or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, August 08, 2018

Three dualities of the field equations of TGD

The basic field equations of TGD allow several dualities. There are 3 of them at the level of the basic field equations (and several other dualities such as M8-M4× CP2 duality).

  1. The first duality is the analog of particle-field duality. The space-time surface describing the particle (a 3-surface of M4× CP2 instead of a point-like particle) corresponds to the particle aspect, and the fields inside it, geometrized in terms of sub-manifold geometry using quantities characterizing the geometry of M4× CP2, correspond to the field aspect. The particle orbit serves as a wave guide for the field, one might say.

  2. The second duality is particle-space-time duality. The identification of the particle as a 3-D surface means that the particle orbit is a space-time surface glued to a larger space-time surface by topological sum contacts. It depends on the scale used whether it is more appropriate to talk about a particle or about space-time.

  3. The third duality is hydrodynamics-massless field theory duality. Hydrodynamical equations state the local conservation of Noether currents. Field equations indeed reduce to local conservation conditions for the Noether currents associated with the isometries of M4× CP2. On the other hand, these equations have an interpretation as a non-linear geometrization of massless wave equations with coupling to Maxwell fields. This realizes the ultimate dream of the theoretician: symmetries dictate the dynamics completely. This is expected to be realized also at the level of scattering amplitudes, and the generalization of twistor Grassmannian amplitudes could realize this in terms of Yangian symmetry.

    The hydrodynamics-wave equation duality generalizes to the fermionic sector and involves superconformal symmetry.

  4. What I call modified gamma matrices are obtained by contracting the partial derivatives of the action defining the space-time surface with respect to the gradients of the imbedding space coordinates with the imbedding space gamma matrices. Their divergences vanish by the field equations for the space-time surface, and this is necessary for the internal consistency of the modified Dirac equation. The modified gamma matrices reduce to the ordinary ones if the space-time surface is M4, and one obtains the ordinary massless Dirac equation.

  5. The modified Dirac equation expresses the conservation of a super current - actually of an infinite number of super currents obtained by contracting the second quantized induced spinor field with the solutions of the modified Dirac equation. This corresponds to the super-hydrodynamic aspect. On the other hand, the modified Dirac equation corresponds to a fermionic analog of the massless wave equation as the super-counterpart of the non-linear massless field equation determining the space-time surface.

See the article The Recent View about Twistorialization in TGD Framework or the shorter article Further comments about classical field equations in TGD framework, or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, August 07, 2018

About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant

Nottale's formula for the gravitational Planck constant hbargr= GMm/v0 involves a parameter v0 with dimensions of velocity. I have worked on the quantum interpretation of the formula, but the physical origin of v0 - or equivalently of the dimensionless parameter β0=v0/c (used in the sequel) - has hitherto remained open. In the following a possible interpretation based on the many-sheeted space-time concept, many-sheeted cosmology, and zero energy ontology (ZEO) is discussed.

A generalization of the Hubble formula β=L/LH for the cosmic recession velocity, where LH= c/H is the Hubble length and L the radial distance to the object, is suggestive. This interpretation would suggest that some kind of expansion is present. The fact however is that stars, planetary systems, and planets do not seem to participate in the cosmic expansion. In the TGD framework this is interpreted in terms of quantal jerk-wise expansion taking place as relatively rapid expansions analogous to atomic transitions or quantum phase transitions. The TGD based variant of the Expanding Earth model assumes that during the Cambrian explosion the radius of the Earth expanded by a factor of 2.

There are two measures for the size of the system. The M4 size LM4 is identifiable as the maximum of the radial M4 distance from the tip of the CD associated with the center of mass of the system along the light-like geodesic at the boundary of the CD. The system also has a size Lind defined in terms of the induced metric of the space-time surface, which is space-like at the boundary of the CD. One has Lind<LM4. The identification β0= LM4/LH<1 does not allow the identification LH=LM4. LH would however naturally correspond to the size of the magnetic body of the system, in turn identifiable as the size of the CD.

One can deduce an estimate for β0 by approximating the space-time surface near the light-cone boundary as a Robertson-Walker cosmology and expressing the mass density ρ defined as ρ=M/VM4, where VM4=(4π/3) LM43 is the M4 volume of the system. ρ can be expressed as a fraction ε2 of the critical mass density ρcr= 3H2/8π G. This leads to the formula β0= [rS/LM4]1/2 × (1/ε), where rS is the Schwarzschild radius.
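To make the dimensional content of this formula concrete, here is a minimal numerical sketch (Python; the solar mass, the astronomical unit, and ε=1 are illustrative placeholders rather than the article's fitted inputs):

    # Minimal sketch: beta_0 = sqrt(r_S/L_M4)/eps with r_S = 2*G*M/c^2.
    # Inputs are illustrative placeholders, not the article's fitted values.
    G = 6.674e-11   # m^3 kg^-1 s^-2
    c = 2.998e8     # m/s

    def beta0(M_kg, L_M4_m, eps):
        r_S = 2.0 * G * M_kg / c**2   # Schwarzschild radius
        return (r_S / L_M4_m) ** 0.5 / eps

    M_sun = 1.989e30   # kg
    au = 1.496e11      # m, one astronomical unit
    print(beta0(M_sun, au, eps=1.0))
    # ~1.4e-4 for these inputs: the same order of magnitude as Nottale's
    # beta_0 ~ 4.8e-4, and eps < 1 moves the estimate upward.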

This formula is tested for the planetary system and for the Earth. The dark matter assignable to the Earth can be identified as the innermost part of the inner core, with a volume which is .01 per cent of the volume of the Earth. Also the consistency of the Bohr quantization for dark and ordinary matter is discussed and leads to a number theoretical condition on the ratio of the ordinary and dark masses.

See the article About the physical interpretation of the velocity parameter in the formula for the gravitational Planck constant or the new chapter About Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff of "Physics in many-sheeted space-time".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, August 04, 2018

An island at which body size shrinks

I encountered on Facebook an article claiming that the bodies of animals shrink on the island of Flores, which belongs to Indonesia. This news is not Dog's days news (Dog's days news is a direct translation of the Finnish synonym for fake news).

Both animals and humans are really claimed to have shrunk in size. The bodies of hominins (predecessors of humans), of humans, and even of elephants have shrunk at Flores.

  1. In 2003, researchers discovered in a mountain cave on the island of Flores fossils of a tiny, humanlike individual. It had a chimp-sized brain and was 90 cm tall. Several villages in the area are inhabited by people with an average body height of about 1.45 meters.

  2. Could the small size of the recent humans at Flores be due to interbreeding of modern humans with Homo floresiensis (HF) that occurred a long time ago? The hypothesis could be tested by studying the DNA of HF. Since the estimated age of the HF fossils was 10,000 years, researchers hoped to find some HF DNA. No DNA was found, but the researchers realized that if HF had interbred with humans, this DNA could show itself in the DNA of the modern humans at Flores. It was found that this DNA can be identified but differs insignificantly from that of modern humans. It was also found that the age of the fossils was about 60,000 years.

  3. Therefore it seems that interbreeding did not cause the reduction in size. The study also showed that at least twice in ancient history humans and their relatives arrived at Flores and then grew shorter. This happened also to the elephants, which arrived at Flores twice.

This looks really weird! Weirdness of these proportions allows some totally irresponsible speculation.
  1. The hierarchy of Planck constants heff=nh0 (h=6h0 is a good guess) assigned to dark matter as phases of ordinary matter and responsible for macroscopic quantum coherence is central in TGD inspired biology. Quantum scales are proportional to heff or its powers (heff2 for atomic sizes, heff for Compton lengths, and heff1/2 for cyclotron states).

  2. The value of the gravitational Planck constant hgr (=heff) at the flux tubes mediating the gravitational interaction could determine the size scale of the animals. Could one consider a local anomaly in which the value of hgr is reduced and leads to a shrinkage also of body size?

  3. hgr is of the form hgr=GMDm/v0, where v0 is a velocity parameter (see this, this, and this). MD is a large dark mass of order 10-4 times the mass of the Earth. The gravitational Compton length Λgr= hgr/m=GMD/v0 for a particle with mass m does not depend on the mass of the particle - this conforms with the Equivalence Principle.

    The estimate of this article gives Λgr= 2πGMD/v0= 2.9× rS(E), where the Schwarzschild radius of the Earth is rS(E)=2GME=.9 mm. This gives Λgr= 2.6 mm, which corresponds to the p-adic length scale L(k=187). The brain contains neuron blobs of this size scale. The size scale of an organism is expected to be some not too large multiple of this scale (a numerical consistency check follows this list).

    Could one think that v0 at Flores is larger than normal and reduces the value of Λgr, so that the size of the gravitational part of the magnetic body of any organism shrinks, and that this gradually leads to a reduction of the size of the biological body? A second possibility is that the value of the dark mass MD is smaller at Flores than elsewhere: one would have a dark analog of an ordinary local gravitational anomaly. The reduction of hgr should be rather large, so that the first option looks more plausible.
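The arithmetic of item 3 can be checked at the order-of-magnitude level with a few lines of code (Python; restoring the SI factors of G and c is my own step, MD = 10-4 ME is the assumption quoted in the text, and the implied β0 is solved from the quoted Λgr = 2.6 mm rather than taken from the article):

    # Sketch: Lambda_gr = 2*pi*G*M_D/(beta_0*c^2) is the gravitational Compton
    # length and is mass independent.  Given the quoted Lambda_gr = 2.6 mm and
    # the assumed dark mass M_D = 1e-4 * M_Earth, solve for the implied beta_0.
    import math

    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    M_E = 5.972e24       # kg, Earth mass
    M_D = 1e-4 * M_E     # assumed dark mass (from the text)

    Lambda_gr = 2.6e-3   # m, the value quoted in the text
    beta0 = 2.0 * math.pi * G * M_D / (c**2 * Lambda_gr)
    print(beta0)   # ~1.1e-3: of the order typically assumed for beta_0 in TGD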

See the article An island at which body size shrinks or the chapter Quantum Criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, August 02, 2018

What are the perfectoid and its tilt introduced by Fields medalist Peter Scholze?

Quanta Magazine reports that the 30-year-old Peter Scholze is now the youngest Fields medalist, due to the revolution that he launched in arithmetic geometry (see this).

Scholze's work might be interesting also from the point of view of physics, at least the physics according to TGD. I have already made an attempt to understand Scholze's basic idea and to relate it to physics. About the theorems that he has proved I cannot say anything with my miserable math skills.

The notion of perfectoid

Scholze first introduces the notion of a perfectoid.

  1. This requires some background notions. The characteristic p of a field is defined as the smallest positive integer p for which px=0 for all elements x; if no such integer exists, the characteristic is 0. The Frobenius homomorphism (Frob familiarly) is defined as x→ xp. For a field of characteristic p, Frob: x→ xp is an algebra homomorphism mapping products to products and sums to sums: this is very nice and relatively easy to show even for a layman like me (a small computational check follows this list).

  2. A perfectoid is a field which either has characteristic 0 (reals and p-adics, for instance) or has characteristic p such that Frob is a surjection, meaning that Frob maps at least one number y to any given number x.

    For finite prime fields Frob is the identity: xp=x, as proved already by Fermat. For the reals and the p-adic number fields the characteristic is 0, so that the surjectivity of Frob is not required.
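The homomorphism property claimed in item 1 can be verified by machine for the prime field Fp. A minimal sketch (Python; the prime p=7 is an arbitrary choice):

    # Check over F_p: Frob(x) = x^p maps sums to sums and products to products
    # ("freshman's dream"), and is the identity (Fermat's little theorem).
    p = 7

    def frob(x):
        return pow(x, p, p)   # x^p mod p

    for x in range(p):
        for y in range(p):
            assert frob((x + y) % p) == (frob(x) + frob(y)) % p  # sum -> sum
            assert frob((x * y) % p) == (frob(x) * frob(y)) % p  # product -> product
        assert frob(x) == x   # identity on F_p, i.e. Fermat's x^p = x
    print("Frob is a homomorphism and the identity on F_%d" % p)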

The tilt of the perfectoid

Besides perfectoids K, Scholze introduces also what he calls the tilt of the perfectoid: Kb. Kb is something between p-adic number fields and reals, and it leads to theorems giving totally new insights into arithmetic geometry.

  1. As we learned during the first student year, real numbers can be defined as Cauchy sequences of rationals converging to a real number, which can be an algebraic number or transcendental. The numbers in the tilt Kb would be sequences of this kind.

  2. Scholze starts from (say) the p-adic numbers and considers infinite sequences of iterated 1/p:th roots. At a given step x→ x1/p. This gives the sequences (x, x1/p, x1/p2, x1/p3, ...) as elements of Kb. In the limit one obtains arbitrarily high iterated p:th roots of x.

    1. For finite prime fields each step is trivial (xp=x), so that nothing interesting results: one has (x,x,x,x,x,...).

    2. For p-adic number fields the situation is non-trivial. For a p-adic number with unit norm, x= x0+x1p+..., the root x1/p exists at least in the lowest order: for x ≈ x0 the root is just x0, since x is effectively an element of a finite field in this approximation. One can develop x1/p into a power series in p and continue the iteration (in general, extensions of the p-adic numbers are needed at the higher orders: see the sketch after this list). The sequence obtained defines an element of the tilt Kb of the field K, now the p-adic numbers.

    3. If the p-adic number x has norm pn and is therefore not a unit, the root operation makes sense only if one performs an extension of the p-adic numbers containing all the roots p1/pk. These roots define one particular kind of extension of p-adic numbers, and the extension is infinite-dimensional since all the roots are needed. One can approximate Kb by taking only a finite number of iterated roots: I call these almost perfectoids, precursors of perfectoids.

  3. The tilt is said to be fractal: this is easy to understand from the presence of the iterated p:th roots. Each step in the sequence is like a zooming. One might say that the p-adic scale p becomes the p:th root of itself. In TGD the p-adic length scale Lp is proportional to p1/2: does the scaling mean that the p-adic length scale would define a hierarchy of scales proportional to p1/2pk approaching the CP2 scale, since the iterated root of p approaches unity? Tilts as extensions by iterated roots would improve the length scale resolution.
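To see concretely where extensions become necessary in the iteration x→ x1/p of item 2, one can search by brute force for p:th roots modulo powers of p. A small sketch (Python; it only illustrates the obstruction to taking roots inside the p-adic integers, it does not construct the tilt):

    # Which residues have p:th roots modulo p^k?  Modulo p every unit has one
    # (Frob is onto the finite field), but modulo p^2 most units do not, so
    # lifting x -> x^(1/p) in general forces an extension of the p-adics.
    def pth_roots(x, p, k):
        m = p ** k
        return [y for y in range(m) if pow(y, p, m) == x % m]

    p = 3
    print(pth_roots(4, p, 1))  # [1]: modulo 3 the root exists (4 = 1 mod 3)
    print(pth_roots(4, p, 2))  # []: no cube root of 4 modulo 9
    print(pth_roots(8, p, 2))  # [2, 5, 8]: 8 does have cube roots modulo 9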

One day later I got the feeling that I might have understood one more important thing about the tilt of the p-adic number field: the change of the characteristic 0 of the p-adic number field to the characteristic p>0 of the corresponding finite field for its tilt (thanks to Ulla for the links). What could this mean?

Characteristic p (p is now the prime labelling the p-adic number field) means px=0. This property makes the mathematics of finite fields extremely simple: in summation one need not take care of carry digits, as one must in the case of reals and p-adics. The tilt of the p-adic number field would have the same property! In the infinite sequence of p-adic numbers coming as iterated p:th roots of the starting point p-adic number, one can sum each p-adic number separately. This is really cute if true!
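The practical content of "no carry digits in characteristic p" can be made concrete by comparing the two addition rules on truncated digit sequences (a toy Python sketch; the digit lists stand for expansions in powers of p, least significant digit first):

    # Base-p addition with and without carries.  In characteristic 0 (p-adic
    # style) the carry couples neighbouring digits; in characteristic p each
    # digit position can be summed independently modulo p.
    p = 3

    def add_with_carry(a, b):
        out, carry = [], 0
        for x, y in zip(a, b):
            s = x + y + carry
            out.append(s % p)
            carry = s // p
        return out

    def add_mod_p(a, b):
        return [(x + y) % p for x, y in zip(a, b)]

    a, b = [2, 2, 1, 0], [1, 2, 0, 1]
    print(add_with_carry(a, b))  # [1, 2, 2, 1]: carries propagate
    print(add_mod_p(a, b))       # [0, 1, 1, 1]: digitwise, no propagation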

It seems that one can formulate an arithmetic problem in the tilt, where it becomes in principle as simple as in the finite field with only p elements! Does the existence of a solution in this case imply its existence in the case of p-adic numbers? But doesn't the situation remain the same concerning the existence of a solution in the case of rational numbers? The infinite series defining a p-adic number must correspond to a sequence in which the digits repeat with some period in order to give a rational number: a rational solution is like a periodic solution of a dynamical system, whereas a non-rational solution is like a chaotic orbit having no periodicity. In the tilt one can also have solutions in which some iterated root of p appears: these cannot belong to the rationals but to their extension by an iterated root of p.

The results of Scholze could be highly relevant for the number theoretic view about TGD, in which an octonionic generalization of arithmetic geometry plays a key role: the points of the space-time surface with coordinates in the extension of rationals defining the adele form what I call cognitive representations, and these would determine the entire space-time surface if M8-H duality holds true (space-time surfaces would be analogous to roots of polynomials). Unfortunately, my technical skills in the mathematics needed are hopelessly limited.

TGD inspires the question whether the finite cutoffs of Kb - almost perfectoids - could be particularly interesting physically. In the limit of infinite dimension one would obtain an ideal situation not realizable physically, if one believes that finite-dimensionality is a basic property of the extensions of p-adic numbers appearing in number theoretical quantum physics (they would be related to cognitive representations in TGD). Adelic physics involves all extensions of rationals and the extensions of p-adic number fields induced by them, and thus also extensions of the type leading to Kb. I have made some naive speculations about why just these extensions might be of special physical significance.

See the articles Could the precursors of perfectoids emerge in TGD? and Does M8-H duality reduce classical TGD to octonionic algebraic geometry?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.