Monday, September 21, 2020

Expanding Earth model, Cambrian Explosion, and Pangaea supercontinent

This is a response to a comment by David Whitfield in a discussion related to life on Mars and whether it might be a good idea to reserve a ticket on the next spacecraft going to Mars. The comment is about the TGD based Expanding Earth model suggesting that in the Cambrian Explosion the Earth experienced a rapid phase of expansion increasing its radius by a factor of 2. The model is natural in TGD and forced by the assumption that cosmic expansion occurs in an average sense also for astrophysical objects. Since no smooth expansion has been observed, the expansion must occur as rapid jerks.

The TGD based model is consistent with my knowledge about geological and climate evolution: I am of course not an expert and I know well that experts disagree on many aspects. I put my money on general arguments and empirical anomalies: cosmic expansion must take place also for planets and stars and must occur in discrete steps since smooth expansion is not observed. Here testing is possible.

  1. Climate evolution: the snowball Earth model has anomalies and must be given up in the TGD framework. The evolutionary history of magnetic fields could allow testing of the model: their behavior challenges the snowball Earth model.
  2. Biological evolution: the sudden emergence of a huge number of multicellulars as if from nothing is the strongest argument in favor of a burst of intraterrestrial life, living in an underground ocean, to the surface. The life forms at the start of the expansion lived in a world with a 6 hour day instead of 24 hours (see the scaling sketch after this list). This might make itself visible in the biorhythms of animals living today as a kind of genetic memory. Could this biorhythm show itself somehow in the very early fossils? The expansion of Earth led to a weakening of the gravitational field: this explains the emergence of giant creatures, which lost the fight for survival: small animals with big brains were the winners.
  3. Geological evolution: An interesting question related to the Pangaea supercontinent can be raised. Pangaea is thought to have emerged from continents about 335 million years ago: there would have been Pangaea and the Ocean. The Cambrian Explosion occurred roughly 500 million years ago. Did Pangaea form from separate continents, or did the Cambrian Explosion create a single gradually expanding Ocean plus one supercontinent, which did not increase in size but was like an island? Can one think that the recent continents emerged from Pangaea? I have not considered this question.
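
The scalings referred to in item 2 can be made explicit with elementary Newtonian formulas. A minimal sketch, assuming only that the mass stays constant while the radius doubles (standard textbook scalings, nothing TGD-specific):

  # Surface gravity g = G*M/R^2 and day length from angular momentum conservation
  # (L = I*omega, I ~ (2/5)*M*R^2) when Earth's radius doubles at constant mass.
  G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
  M = 5.972e24                # Earth's mass, kg (assumed unchanged)
  R_old, R_new = 3.2e6, 6.4e6 # radius before and after the doubling, m

  g_old = G * M / R_old**2
  g_new = G * M / R_new**2
  print(f"surface gravity: {g_old:.1f} -> {g_new:.1f} m/s^2, ratio {g_old/g_new:.1f}")

  day_old_h = 6.0
  day_new_h = day_old_h * (R_new / R_old)**2   # omega scales as 1/R^2
  print(f"day length: {day_old_h:.0f} h -> {day_new_h:.0f} h")

The factor 4 drop in surface gravity and the stretching of a 6 hour day to 24 hours follow from the radius doubling alone.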

See this and this.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

How are dreams transformed to realities?

The question "Why there is something rather than nothing" was fashionable question years ago. I wrote with some irritation about the ill-posedness of the question an article to a journal edited by Huping Hu around 2012 and for some reason the article found numerous readers. Unfortunately I do not have the article at my homepage.

The emergence of a universe/sub-universe from nothing is prevented by the conservation law of energy in the standard ontology.

What about zero energy ontology (ZEO) behind TGD? In ZEO the situation changes. The following assumptions are needed to avoid obvious contradictions with the existing views about time.

  1. Causal diamonds (CDs) define the 4-D perceptive fields of conscious entities. There seems to be a hierarchy of CDs within CDs, and CDs can overlap and define a kind of conscious atlas of perceptive fields in which chart pages as CDs form a fractal hierarchy. The notion of manifold would generalize to consciousness theory. The intersecting CDs would make possible selves having shared mental images.
  2. The perceptive fields defined by CDs increase in size during each life cycle, with the passive boundary and the states at it being stationary and the second boundary active. In BSFR the roles of the active and passive boundary would change and the CD would shrink in size as the passive boundary would shift to the future and the CD would start to increase in the opposite time direction. CDs would also shift towards the geometric future inside a bigger CD.
  3. The center slice of a given CD - the largest one - would define the moment "Now" of the corresponding self and shifts towards the geometric future. The surprise is that also memories as mental images - smaller CDs - shift towards the geometric future of this slice. The roots of the polynomial defining the CD would define special moments in the life of the self as time slices at which SSFRs could perhaps take place.
  4. What happens outside a given CD: do the space-time sheets representing incoming particles continue outside it so that there would be an analog of incoming and outgoing particles as in particle reactions? This would be suggested by the belief in an objective 4-D reality. My recent view is that for a given self having subselves having... there is a largest CD beyond which it does not continue.

After this sidetrack, back to the question: can realities - sub-universes represented by zero energy states inside CDs - be created from nothing? Can CDs together with their contents pop up from nothing?

  1. ZEO would allow it but an experimental physicist would shake his head: this is against the materialistic idea of a single unchanging reality - and say something nasty about crazy theoreticians. My colleague William Shakespeare would have a different opinion but I am unable to put in English what he said about dreams as threads of reality.
  2. One can however transform the ZEO based proposal into a milder form. Could dreams pop up from nothing and then transform to realities? p-Adic physics describes intentions and cognition, and M^8-H duality suggests that one could have space-time sheets defined by polynomials with coefficients which are piecewise constant rationals defined as functions of the light-like radial coordinate of the CD. They cannot correspond to real space-time surfaces but could correspond to p-adic ones.
    1. Could these space-time surfaces inside the corresponding CDs be created from nothing? Could they in turn transform to real space-time surfaces with p-adic pseudo constants becoming genuine constants? This could happen either in SSFRs ("small" state function reductions happening when nothing would happen in the standard ontology: the Zeno effect in repeated measurements) or, more probably, in BSFRs ("big" or ordinary SFRs). The outcome would be a real space-time surface and its p-adic cognitive variants providing its cognitive correlates: the cognitive representation would be the intersection of reality and p-adicities, consisting of points of the imbedding space in the extension of rationals defining the adele. A dream would have transformed to a reality.
    2. A partial realization would select only a single piece of the piecewise continuous polynomial, the one beginning from the tip of the CD in the division of the light-like radial coordinate axis into pieces. Hence only imagination instead of full realization.

      Imagined percepts and motor actions as almost percepts and motor actions is one of the basic notions of TGD based quantum neuroscience. Could one learn something useful by regarding imagination as partial realization of p-adic intentions and dreams?

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, September 16, 2020

Binaural beat as a support for TGD view about brain

The phenomenon known as binaural beat (see this) provides support for the TGD view about the brain. A binaural beat occurs when sound waves with slightly different frequencies arrive at the ears. The beat can be understood as interference due to the time-varying phase difference of the waves. What is heard is the difference frequency, even when it is below 20 Hz - for instance 10 Hz - and therefore not audible. The amplitude modulation at 10 Hz would be perceived, not the 10 Hz frequency itself. Strangely, the binaural beat occurs also when the signals arrive separately, one to each ear, so that interference is not possible. In standard neuroscience the binaural beat is a mystery.
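
The purely acoustic part of the statement, namely that two tones differing by 10 Hz produce an amplitude modulation at the difference frequency rather than an audible 10 Hz tone, is easy to demonstrate numerically. A minimal sketch (this only illustrates ordinary wave interference, not the TGD mechanism discussed below; the 440/450 Hz values are arbitrary choices):

  import numpy as np

  fs = 44100                   # sampling rate, Hz
  t = np.arange(0, 1.0, 1/fs)  # one second of signal
  f1, f2 = 440.0, 450.0        # the two tones, differing by 10 Hz

  s = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
  # Identity: s = 2*cos(pi*(f2-f1)*t) * sin(pi*(f1+f2)*t), so the envelope
  # |2*cos(pi*(f2-f1)*t)| repeats at the 10 Hz difference frequency.
  envelope = 2*np.abs(np.cos(np.pi*(f2 - f1)*t))
  print("beat (envelope) frequency:", abs(f1 - f2), "Hz")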

The TGD based explanation could be that the sound waves generate dark photon signals propagating along flux tubes and having classical em waves as correlates. The waves from the different ears would interfere if the flux tubes meet at some point in the brain, perhaps located at the auditory areas. The first option is that this interference gives rise to the experience of the binaural beat and superposes with the sensory input assigned to the ears (one cannot exclude the possibility that the sensory qualia are assigned to virtual sensory organs in the brain). The second option is that the virtual sensory input, as feedback sent back to the ears as dark photons, superposes with the sensory input from the ears.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, September 14, 2020

Halo model of dark matter meets further difficulties

Within a few days I became aware of several interesting findings challenging the halo model of dark matter. Here is a brief summary of these findings with possible TGD based interpretations.

Dark matter seems to be more clumpy than predicted

The following piece of text is from a popular article "Hubble Uncovers an Unexpected Discrepancy: An Ingredient Missing From Current Dark Matter Theories?" in Scitechdaily. The article tells about the observations by the NASA/ESA Hubble Space Telescope and the European Southern Observatory's Very Large Telescope (VLT) in Chile. The original article "An excess of small-scale gravitational lenses observed in galaxy clusters" by Massimo Meneghetti et al. was published on 11 September 2020 in Science (see this).

"The researchers then used the Universe simulator to build 25 simulated clusters and performed a similar analysis with the clusters. They did so in order to identify the sites of possible lensing and the locations that could create the greatest distortions. The two didn't match. There were significantly more areas that generated high distortion in the real-Universe galaxy than there were in the model. This would be the case if the distribution of dark matter were a bit more clumpy than the models would predict-the dark matter halos around galaxies were more compact than the models would predict."

The cold dark matter scenario seems to be in grave difficulties. There are a lot of problems, met already earlier but swept under the rug - probably an unavoidable outcome of over-specialization. During the years I have commented on these failures in articles, which can be found also at my homepage (see this).

The basic problem is that a continuous distribution of dark matter is assumed. Galaxies and other structures would be formed by the gravitational attraction of seeds producing filament like structures. Seeds would have emerged by fluctuations in mass density.

In the TGD framework the situation is exactly opposite: the seeds for the formation of galaxies are present from the very beginning as what I call cosmic strings. The model explains the galactic velocity distribution, provides a model for the formation of galaxies, stars, and planets, and accounts for the linear structures formed by galaxies.

  1. Cosmic string-like objects carrying dark matter and energy - not possible in GRT space-time - thicken to flux tubes and generate ordinary matter much like the decay of the inflaton field. There is no need to explain how cold dark matter generates clumps; cosmic strings as clumps are there from the very beginning. Galaxies and other visible structures - also filaments - are generated along cosmic strings, and a flat velocity spectrum for stars rotating around the galaxy is an automatic prediction.
  2. Also the invisible portions of cosmic strings not yet decayed to ordinary matter induce a lensing effect. Simulating the presence of cosmic strings in halo models requires the distribution of the dark matter to be more lumpy, and that requires too compact dark matter halos.
  3. The observed lensing effect can be even 10 times stronger than predicted by the halo model. The TGD explanation could be in terms of cosmic strings whose thickening to flux tubes at tangles liberates energy as ordinary matter forming the galaxies, whereby the string tension is lowered, leading to a weaker lensing effect. The portions of the flux tube still thin and with higher string tension can explain the anomalously large lensing effect (a small illustration follows this list).
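
As an illustration of why the lensing strength tracks the string tension, one can use the standard general relativistic result for an idealized straight cosmic string, for which the deficit angle is 8*pi*G*mu/c^2, linear in the tension mu. This is quoted here only as a familiar benchmark, not as the TGD formula, and the tension values below are arbitrary placeholders:

  import math

  G = 6.674e-11   # m^3 kg^-1 s^-2
  c = 2.998e8     # m/s

  def deficit_angle(mu):
      """Deficit angle (radians) of an idealized straight string with tension mu (kg/m)."""
      return 8 * math.pi * G * mu / c**2

  mu_thick = 1e21          # assumed tension of a thickened flux tube segment
  mu_thin = 10 * mu_thick  # assumed tension of a still-thin string segment

  for mu in (mu_thick, mu_thin):
      print(f"mu = {mu:.1e} kg/m -> deficit angle = {deficit_angle(mu):.2e} rad")

A ten times higher tension in the still-thin portions translates directly into a ten times stronger deflection in this idealized picture.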

Universe seems too thin and too large clumps of dark matter are required to explain the distribution of galaxies in clusters

It has been found that the observed clumping of the dark matter is 10 per cent smaller than the clumping required by the LambdaCDM scenario to make possible the formation of galaxies. The Universe seems to be also thinner than expected. This is what one learns from Quanta Magazine telling about the findings of the Kilo-Degree Survey (KiDS) published in the article "KiDS-1000 Cosmology: Cosmic shear constraints and comparison between two point statistics" (see this). The group studied about 31 million galaxies from up to 10 billion light-years away.

Could the TGD view about dark matter and energy provide understanding? Dark matter and energy would be associated with long flux tubes emerging as cosmic string-like objects thicken and liberate energy transforming to matter. Dark matter and energy are associated with effectively string-like objects. Could this help to understand these findings?

The dark energy and matter assigned to string-like entities would generate the ordinary matter around themselves and serve as seeds possibly attracting more matter also created in this manner. How would the clumping in the TGD Universe differ from the clumping in the LambdaCDM scenario?

In LambdaCDM clumps are necessary for the formation of galaxies and other structures. In the TGD framework they are not needed: one-dimensional string-like networks are present from the beginning and create galaxies by decaying to ordinary matter. On long enough scales the average density would be constant and there would be no clumps at all.

Can one understand why the Universe is too thin? Could it be that the energy and mass of all cosmic strings have not been taken into account? Long cosmic strings have galaxies and other structures as tangles decaying to ordinary matter. What about the straight portions? Have they been taken into account in the survey?

For the TGD view about the formation of galaxies and stars see for instance this.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Life in Venus? What does TGD say?

Evidence for life in a rather unexpected place - Venus - has emerged (see this). The atmosphere of Venus shows signs of phosphine, which cannot be produced by inorganic processes. There are small amounts of phosphine in the Earth's atmosphere and it has an organic origin. The same might be true in the case of Venus. Perhaps simple bacterial life is in question. Whether it is in the atmosphere or somewhere deeper is an open question - at least to me.

1. The first impressions

One can find from Wikipedia that phosphine has the chemical formula PH3. In inorganic chemistry it is very difficult to form phosphine from phosphate (PO4)^3-, which is central in living matter. Somehow reduction must occur: the double valence bonds O=P of phosphates must in the final situation become ordinary valence bonds in PH3.

TGD predicts that all planets have life in their interior. This would solve the Fermi paradox. Also Earth's life would have evolved in the interior and emerged to the surface in the Cambrian Explosion, when a large number of multicellulars emerged as if from nowhere. The reason would have been a rather fast increase of the Earth's radius by a factor of 2: in TGD cosmology continuous expansion for astrophysical objects is replaced by a sequence of fast expansions followed by steady non-expanding states (see this and this). Whether the phosphine could emerge from the interior of Venus is an interesting question.

TGD also predicts a new kind of chemistry involving the notion of magnetic body (MB) carrying dark matter identified as phases of ordinary matter with effective Planck constant heff=nh0 (h=6h0), which can have very large values. Also the notions of acid resp. base and of reduction and oxidation would involve dark protons resp. dark valence electrons, and in biosystems these notions would become fundamental. For instance, in the Pollack effect exclusion zones would be formed as regions in which every fourth proton goes to a magnetic flux tube as a dark proton. For pH = 7 the fraction 10^-7 of protons would be dark! In biology dark protons, electrons, and also dark ions would be fundamental.

MB would be the "boss" controlling the ordinary biomatter using dark cyclotron photon signals and resonance as a control tool. This new chemistry relying on what I call number theoretical (or adelic) physics would be central for the basic biomolecules such as DNA, RNA, tRNA, and amino acids having dark analogs accompanying them. The phosphates of DNA nucleotides with negative charges would be neutralized by dark protons and dark proton triplets would define a fundamental realization of the genetic code. Also amino-acids would be accompanied by dark proton (actually dark hydrogen) triplets.

Transforming protons to dark protons in the Pollack effect requires an energy feed: IR photons do the job best. This means that dark protons carry metabolic energy, and in ATP there could be 3 dark protons neutralizing the negative charges of the phosphates. Together with dark electrons associated with valence bonds this would explain the questionable notion of the high energy phosphate bond. ATP → ADP would transform one dark proton to an ordinary one and break a valence bond, which for a dark electron has an abnormally high energy. Both of them would give energy.

If it is life, I expect that both these new phenomena predicted by TGD are involved.

2. Could there be sulfuric life in Venus?

There is an article about the chemistry involved with phosphine (see this). Not only are there no known inorganic ways to produce phosphine in the Venusian atmosphere, but also the biological pathways for the production of phosphine in the Earth's atmosphere by bacteria are unknown. Note that these bacteria are anaerobic: I do not know whether S replaces O in their metabolism.

Could the new chemistry predicted by TGD and based on dark protons and dark electrons be involved? Dark protons carry metabolic energy - the Pollack effect producing dark protons indeed requires an energy feed - and the transformation of one of the 3 dark protons in ATP → ADP would liberate metabolic energy. Could an analog of this metabolic mechanism help the formation of phosphine?

2.1 Basic facts about Venus and the Venusian atmosphere

One learns from Wikipedia (see this) basic facts about Venus.

  1. Venus is one of the four terrestrial planets, meaning that it has a rocky body like Earth. Surface gravity is .904 g, surface pressure is 91 atm, and the surface temperature corresponds to .0740 eV (1 eV ~ 10^4 K; see the conversion sketch after this list), which happens to be rather near the cell membrane potential.

    In the clouds at heights of 50-60 km from the Venusian surface, the temperature is between 0 and 50 C. The assumption that these regions contain the PH3 is theoretically justified if the life in question is similar to that on Earth.

  2. The Venusian atmosphere (see this) has 95 per cent CO2, 3.5 per cent N2, 150 ppm SO2, 70 ppm Ar, 20 ppm water vapor, 17 ppm CO, 12 ppm He, 7 ppm Ne, .1-.6 ppm HCl, and .01-.05 ppm HF.
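
For the reader who wants to check the temperature bookkeeping, the conversion between kelvins and the electron volt units used above is just multiplication by the Boltzmann constant. A small sketch (standard unit conversion only):

  k_B = 8.617e-5   # Boltzmann constant, eV per kelvin

  def kelvin_to_eV(T_K):
      return k_B * T_K

  # Cloud layer at 50-60 km altitude: roughly 0-50 C, i.e. 273-323 K
  for T_K in (273.0, 323.0):
      print(f"{T_K:.0f} K corresponds to {kelvin_to_eV(T_K):.4f} eV")

This is the origin of the rule of thumb 1 eV ~ 10^4 K used in the text.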

2.2 Some data items about the role of sulfur in terrestrial biology

There is a nice article "Sulfur: Fountainhead of life in the Universe?" by Benton Clark on a NASA page (see this) giving a summary about sulfur and - as the title suggests - implicitly proposing that sulfur based life might have preceded the recent form of life.

  1. Table 1 gives an overview of the cosmochemistry of sulfur. Note that in the Sun the S/Si ratio is .5.

    Remark: Even the Sun has been proposed as a possible seat of life. The general vision about dark matter as a master controlling ordinary matter, with dark proton sequences at magnetic flux tubes providing a universal realization of the genetic code, allows one to consider the possibility of life at temperatures much higher than on Earth.

  2. The role of sulfur in planetary evolution is discussed. The abundance of S is as high as 15 per cent in the Earth's core. The Earth's crust contains 500 ppm of S and volcanic emissions are rich in sulphur. Sea water is rich in sulfate (SO4) ions. Table 2 lists various sulfur compounds in volcanic emissions.
  3. Sulfur compounds are discussed. Sulfur can have several valence states including oxidation numbers -2, 0, +2, +4, +6, and sulfur can appear in compounds with several valence numbers. By this versatility sulphur could have an important role in autotrophic metabolism involving only chemical energy sources.

    Remark: The valence of a given atom in a molecule (see this) is the number of valence electrons which the atom has. For instance, a double bond corresponds to 2 units of valence. Atomic valences characterize the topology of the valence bond network assigned with the molecule. The oxidation state, which can be negative, is a more precise measure telling how many valence electrons the atom has gained or lost. In the TGD framework the valence bond network would correspond to a flux tube network.

  4. The role of sulfur in biochemistry is central. Sulfur plays major roles in energy transduction, enzyme action, and as a necessary constituent of certain biochemicals. The latter include important vitamins (biotin, thiamine), cofactors (CoA, CoM, glutathione), and hormones. Table 4, given also here, summarizes the biological utilization of sulfur compounds.
    • Energy source (sulfate reduction, sulfide oxidation)
    • Photosynthesis (non-O2 -evolving)
    • Amino acids (met, cys):
    • Protein conformation (disulfide bridges)
    • Energy storage (APS, PAPS)

      These are analogous to AMP and ADP. Could one think of a generalization of the TGD view of ATP → ADP to PAPS → APS as a basic metabolic mechanism? It might be that APS and PAPS do not survive in the Venusian atmosphere.

    • Enzyme Prosthetic group, (Fe-S proteins)
    • Unique biochemicals (CoA, CoM, glutathione, biotin, thiamine, thiocyanate, penicillin, vasopressin, insulin).
  5. The role of sulfur in the biogeochemical cycle is illustrated in Figure 1. In autotrophic energy metabolism, which does not have organic compounds as sources of energy, sulfur compounds are involved. One can distinguish between sulfur bacteria, sulfate reducers, and sulfur oxidizers. For sulfur bacteria photosynthesis proceeds - not by splitting H2O as in the case of green plants and algae - but by splitting H2S to obtain H atoms: H2S replaces water.

    Sulfate (SO4) reducers liberate energy by lowering the oxidation number of S (Na2SO4 + 4H2 → Na2S + 4H2O). Sulfur oxidizers (H2S + 2O2 → H2SO4) increase the oxidation number of S (the sketch after this list tabulates the relevant oxidation states).

  6. The SH group is important for the catalytic function of many enzymes. The -S-S- link between cysteines is important in establishing the tertiary structure of proteins. Fe-S appears as a prosthetic group (non-peptide group) in enzymes known as iron-sulfur proteins.
  7. The presence of ecosystems at the mouths of active hydrothermal submarine vents, not depending on photosynthesis, suggests a chemosynthetic energy source. These communities however require oxidants and thus photosynthesis in the surface layers. Table 6 lists sulfur based energy sources for biological systems.
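
For reference, the sulfur oxidation states relevant to items 3 and 5 can be collected into a small table. This is standard chemistry gathered here for convenience:

  # Oxidation state of sulfur in the compounds mentioned above.
  sulfur_oxidation_state = {
      "H2S":    -2,   # hydrogen sulfide
      "S":       0,   # elemental sulfur
      "SO2":    +4,   # sulfur dioxide
      "SO4^2-": +6,   # sulfate ion
      "H2SO4":  +6,   # sulfuric acid
  }
  for compound, state in sulfur_oxidation_state.items():
      print(f"S in {compound:7s}: oxidation state {state:+d}")

Sulfate reducers move sulfur downwards in this table (for instance +6 to -2) while sulfur oxidizers move it upwards, and either direction can be coupled to energy metabolism.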

2.3 The minimal option for a sulphur based life in Venus

Before speculating it is good to summarize the basic facts. Venus has a lot of H2S - the analog of water H2O - in its atmosphere. Also CO2 is present, as is nitrogen N2. There is a cloud layer rich in H2S and having temperature and pressure very much like on Earth. The environment is extremely acidic and this is a real challenge for terrestrial life forms. There exist however terrestrial extreme acidophiles. They are bacteria.

The idea is to replace O with S in some basic molecules of life and processes to overcome the acidity problem. What are these molecules and processes?

  1. Could the other biomolecules remain as such and could the cell membrane shield the DNA and proteins inside it against sulphuric acid? The outer ends of lipids are hydrophobic: could they be also H2S-phobic?
  2. Could H2S replace water in some sense in Venusian life? Could water as the environment of the cell be replaced with H2S?
What could the replacement of the water environment with H2S mean?
  1. Could photosynthesis rely on the splitting of H2S rather than H2O? Ordinary photosynthesis takes place in the cell interior and involves ordinary proteins in enzymes and sugars as products. This would however require the presence of H2S in the cell interior. This does not look like a plausible option without a profound change of the chemistry inside the cell, perhaps replacing O with S in basic biomolecules such as DNA, RNA and proteins. This suggests that the photosynthesis inside Venusian bacterial cells occurs in the usual manner.
  2. The TGD based quantum biology also involves the notion of magnetic body (MB) as a controller of the biological body. Could H2S have the same role in Venusian prebiotic life as H2O in the terrestrial prebiotic life?

    In terrestrial life, according to TGD, water with its hydrogen bonds is accompanied by a magnetic body (MB) whose flux tubes appear with various values of heff > h for dark protons. This would make water a multiphase system, providing water with its very special thermodynamical properties in the temperature range 0-100 C.

    The flux tubes carrying dark proton sequences, generated in the Pollack effect creating negatively charged exclusion zones (EZs), would realize the dark analog of the genetic code: the negatively charged cell is an example of this kind of EZ.

    Water memory and the entire immune system would basically rely on these flux tube structures. DNA would be accompanied by a parallel dark analog and the same would be true for RNA, tRNA, and amino acids. Water would be living even before the emergence of chemical life and MB would control the chemical life.

    Could also H2S allow dark hydrogen bonds and Pollack effect? Could the basic difference with respect to terrestrial life be that cells live in H2S rather than in H2O?

The separation of O resp. S to the protocell interior resp. exterior is required for the most conservative option. This requires the formation of lipid membrane like structures consisting of hydrocarbons, isolating the interior from the exterior and taking care of the separation. This requires charge separation by the Pollack effect, and solar radiation could provide the energy. H2S must be replaced with H2O in the protocell interior. As a physicist I can only speculate about the possible chemistry of the process. For sulfur and its chemistry see the Wikipedia article (see this).
  1. How could the double lipid layer of the protocell membrane separating the S- and O-worlds have formed? The formation of hydrocarbons of the form CnH2n appearing as building blocks of lipids had to take place - perhaps only from CO2 and H2S. Note that SO2 is the third most significant atmospheric gas in Venus and could have been produced in this process and participate in it. SO2 has been detected also in volcanoes and one can consider the possibility that the mono-cellular life in volcanoes could have evolved by the same mechanism as in the Venusian clouds.

    Did something like CO2 + H2S → CH2 + SO2, necessarily accompanied by a polymerization of CH2 to CnH2n, occur? Also in the protocell interior hydrocarbons could have formed by this mechanism. The consumption of CO2 in the protocell interior would have induced a further flow of CO2 from the protocell exterior and generated more SO2, which could have flowed out or been used for other processes.

  2. How was the H2S inside the protocell membrane replaced with H2O? Sulphur dioxide SO2 was generated in the formation of hydrocarbons. Is the reaction SO2 + 2H2S → 2H2O + 3S a plausible option? (The sketch after this list checks the atom balance of these reactions.)

    The reaction 2S + CO2 → CS2 + O2 could have generated molecular oxygen O2 in the protocell interior, and CS2 would have flowed to the cell exterior and created the analog of CO2 there.
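
The speculative reactions above can at least be checked for atom balance. A minimal sketch (the parser below handles only simple formulas of the kind used here; it says nothing about whether the reactions are thermodynamically or kinetically feasible):

  import re
  from collections import Counter

  def parse(species):
      """Count atoms in a term like '2H2S' or 'CO2' (no parentheses needed here)."""
      m = re.match(r"^(\d+)", species)
      coeff = int(m.group(1)) if m else 1
      formula = species[m.end():] if m else species
      atoms = Counter()
      for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
          atoms[elem] += coeff * (int(n) if n else 1)
      return atoms

  def balanced(lhs, rhs):
      left, right = Counter(), Counter()
      for s in lhs: left += parse(s)
      for s in rhs: right += parse(s)
      return left == right

  reactions = [
      (["CO2", "H2S"], ["CH2", "SO2"]),    # hydrocarbon building block
      (["SO2", "2H2S"], ["2H2O", "3S"]),   # H2S -> H2O replacement
      (["2S", "CO2"], ["CS2", "O2"]),      # O2 generation in the interior
  ]
  for lhs, rhs in reactions:
      ok = "balanced" if balanced(lhs, rhs) else "NOT balanced"
      print(" + ".join(lhs), "->", " + ".join(rhs), ":", ok)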

The O-S separation would drive CO2 from the exterior to the interior and bring it back as CS2, replacing S with O in the interior. The protocell membrane would emerge before the standard chemical realisation of the genetic code. The usual hen-egg problem - which came first, the cell membrane or genes - is avoided since the dark variant of the genetic code would be represented using sequences of dark proton triplets representing the analogs of DNA, RNA, tRNA, and amino acids. The fact that the lipids of the cell membrane involve phosphates with negative charge suggests that they are accompanied by dark protons and that the genetic code has a 2-D variant assignable to the lipid lattice as a 2-D dark proton lattice decomposing to 1-D sequences. The ordinary chemical genetic code would emerge later than this realisation, after the emergence of the basic biomolecules in the protocell interior.

2.4 More radical options for sulfuric life at Venus

There are also other options based on a radical modification of the chemistry of ordinary life. They look less realistic from the TGD point of view (which has been changing on a daily basis during this week!).

  1. Venus receives a lot of sunlight but one can ask whether photosynthesis could be replaced with chemosynthesis. Chemical energy liberated in cycles involving sulphur containing compounds with varying degrees of oxidation of sulphur would serve as metabolic energy. At the bottoms of terrestrial oceans there are lifeforms around volcanoes, which might have this kind of metabolism.

  2. Option I below: The extreme acidity of the Venusian atmosphere is the basic problem, and the data about the composition of the Venusian atmosphere suggest that O should be replaced with S in basic bio-molecules and water should be replaced with hydrogen sulfide H2S (about bacteria producing H2S see this), which is a gas smelling like rotten eggs and produced in the decay of organic matter. Note however that CO2 dominates in the Venusian atmosphere so that the replacement of O with S can be criticized. Carbon compounds can survive in the cloud to which PH3 is assigned. The atmosphere contains also N.

  3. Option II below: This option is radical and probably unrealistic but as a mathematician I cannot resist its formal beauty. Could Venusian life be obtained by shifting terrestrial life one row downwards along the right end of the Periodic Table so that the basic elements O, N, P of terrestrial life would be replaced with their chemical analogs S, P, As?

    Remark: Phosphine PH3 reported to smell like rotten fish would be the counterpart of ammonia NH3 giving pee its aroma and would have a similar role for Option II.

    Si has a melting point of 1687 K (about .145 eV), to be compared with the surface temperature of .0740 eV - note however that also carbon is solid up to very high temperatures and many hydrocarbons are solids at physiological temperatures. Arsenic (As) is used by some bacteria as a metabolite and one might think that the analog of the high energy phosphate bond is obtained by the replacement (O,P) → (S,As). The basic objection is that the Venusian atmosphere contains a lot of C in CO2 and also N so that Option I seems to be enough. PH3 is produced also by terrestrial bacteria.

2.5 Comparing the two radical options

It is interesting to look explicitly at the modifications of the basic biomolecules for the proposed radical options although they look to me unrealistic.

  1. Consider first amino-acids (see this). The replacements would be O → S for Option I and (O → S, N → P, P → As) for Option II (a small sketch after this list illustrates the element substitutions). This would allow a realization of analogs of nucleotides and amino-acids providing representations for their dark analogs realized in terms of dark proton sequences.

    An amino acid has the structure X-(Y-R)-Z, X = NH2, Y = C-H, Z = O=C-OH. R is the varying amino-acid residue and X, Y, Z define the constant part. The replacements would be

    Option I: Z=O=C-OH → S=C-SH

    Option II: X = NH2 → PH2, Y = C-H → Si-H, Z = O=C-OH → S=Si-SH.

    In the formation of a peptide bond one has the replacements X → C-N-H and Z → O=C-O-C. This would give replacements of the replacements:

    Option I: (Z → O=C-O-C) → (Z → S=C-S-C).

    Option II: (X → C-N-H) → (X → Si-P-H) and (Z → O=C-O-C) → (Z → S=Si-S-Si).

    In the TGD framework amino-acids would be accompanied by dark proteins, with the sulfuric analogs of amino-acids pairing with dark proton triplets: the dark amino-acid would be the same and couple with amino-acids having the residues for which energy resonance coupling is possible.

    The cyclotron excitation of the dark proton triplet and the excitation of R would couple resonantly: the transition of the dark proton triplet would generate a dark photon transforming to an ordinary photon and exciting R to an excited state. This would select the possible residues.

    The first guess is that they are obtained by the proposed replacement too. The dark protons coming from NH2 and one dark proton coming from C-N-H would do so also for Option I. Amino-acid residues contain as a rule OH and O=, and these would be replaced with SH and S=. Note that methionine and cysteine are the only amino-acids containing S.

    For Option II dark protons would come from PH2 and Si-P-H and would be neutralized by dark electrons to give rise to dark hydrogens.

  2. For DNA (see this) the replacements would be the following:

    Option I: O → S in sugar 5-ring and in nucleotides

    Option II: (C, O, N) → (Si, S, P) in sugar 5-ring and nucleotides and PO4 → AsS4.

  3. Similar replacements would be carried out in the metabolic energy currencies AXP, X = M, D, T, and GXP, also having a role as storages of metabolic energy. Saccharides like C6H12O6 as chemical energy storages would have analogs obtained by the replacement

    Option I: O → S

    Option II: (C,O,N) → (Si, S, P).

  4. In the lipids of the cell membrane there would be no changes for Option I, and for Option II one would have (C → Si, PO4 → AsS4).
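
The element substitutions defining the two options can be written down mechanically. A toy sketch (pure bookkeeping on formula strings, with no claim that the resulting compounds are chemically sensible):

  import re

  OPTION_I = {"O": "S"}
  OPTION_II = {"C": "Si", "O": "S", "N": "P", "P": "As"}

  def substitute(formula, mapping):
      """Apply an element-for-element substitution to a simple formula string."""
      out = []
      for elem, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
          out.append(mapping.get(elem, elem) + count)
      return "".join(out)

  for f in ["PO4", "C6H12O6", "NH3"]:
      print(f"{f:9s} Option I: {substitute(f, OPTION_I):10s} Option II: {substitute(f, OPTION_II)}")

For instance PO4 goes to AsS4 under Option II, as stated above, and glucose C6H12O6 would go to Si6H12S6.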

Option I is clearly favored over Option II if the Venusian life resides in clouds at a height of 50-60 km, in particular by the possibility of having a cell membrane identical to that of the terrestrial life. However, in the TGD framework the most plausible option does not involve any changes in the basic biochemistry of life. The only change is the replacement of water with H2S as the environment of the bacterial cells. Dark protons and dark photons make possible communications between bacterial cells even in the acidic environment. The empirical test is whether the Pollack effect is possible also for H2S.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, August 28, 2020

Summary of TGD (a lot of figures!)

I wrote an article summarizing the recent situation in TGD. Without Reza Rastmanesh's encouragement and help this article would not have been written. In particular, the help of Reza in preparing figures making it easier to grasp the general structure of TGD was irreplaceable. As usual, the boring duty of writing about things already done transformed into a creative process and several new mathematical and physical results about TGD emerged.

Reza even persuaded me to send the article to Nature. That the editors did not even bother to send it to the proposed referees was not a surprise, but I managed to overcome the deep professorphobia induced during the painful 42 years by academic arrogance and stupidity. Thanks to the therapeutic efforts of Reza, I even sent the article to big names like Witten, Maldacena, Susskind, Penrose, Arkani-Hamed, etc... I received a message acknowledging receipt only from Susskind and Penrose. Probably the secretaries insulate most names inside their academic bubble and probably they also want to live in the bubbles they have created.

I added the article "Summary of Topological Geometrodynamics" to both Research Gate and my homepage.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, August 26, 2020

What could MIP*=RE mean in the TGD Universe?

MIP*=RE means that quantum entanglement has a crucial role in quantum computation. On the other hand, HFFs would not allow the use of quantum entanglement for quantum computational purposes. This looks at first like bad news from the TGD point of view, where HFFs are highly suggestive. One must however be very careful with the basic definitions.

1. The notion of finite measurement resolution in TGD

Measurement resolution is one of the basic notions of TGD.

  1. There are intuitive physicist's arguments demonstrating that the operator algebras involved with TGD are HFFs and provide a description of finite measurement resolution. The inclusion of HFFs defines the notion of resolution: the included factor represents the degrees of freedom not seen in the resolution used (see this and this).

    Hyperfinite factors involve new structures like quantum groups and quantum algebras reflecting the presence of additional symmetries: the "world of classical worlds" (WCW) as the space of space-time surfaces has a maximal group of isometries, and this group has a fractal hierarchy of isomorphic groups implying inclusion hierarchies of HFFs. By the analogs of gauge conditions this infinite-D group reduces to a hierarchy of effectively finite-D groups. For quantum groups the infinite number of irreps of the corresponding compact group effectively reduces to a finite number of them, which conforms with the notion of hyper-finiteness.

    It looks as if the reduction of the most general quantum theory to a TGD-like theory relying on HFFs is not possible. This would not be surprising taking into account the gigantic symmetries responsible for the cancellation of infinities in the TGD framework and the very existence of the WCW geometry.

  2. A second TGD based approach to finite resolution is purely number theoretic (see this) and involves adelic physics as a fusion of the real physics with various p-adic physics as correlates of cognition. Cognitive representations are purely number theoretic and unique discretizations of space-time surfaces defined by a given extension of rationals, the extensions forming an evolutionary hierarchy: the coordinates for the points of space-time as a 4-surface of the imbedding space H=M4× CP2 or of its dual M8 are in the extension of rationals defining the adele. In the case of M8 the preferred coordinates are unique apart from time translation. These two views would define descriptions of the finite resolution at the level of space-time and of Hilbert space. In particular, the hierarchies of extensions of rationals should define hierarchies of inclusions of HFFs.
For hyperfinite factors the analog of MIP*=RE cannot hold true. Doesn't the TGD Universe then allow a solution of all the problems solvable by a Turing computer? There is however a loophole in this argument.
  1. The point is that for the hierarchy of extensions of rationals also the Hilbert spaces have as coefficient field the extension of rationals! Unitary transformations are restricted to matrices with elements in the extension. In general it is not possible to realize the unitary transformation mapping the entangled situation to an un-entangled one. The weakening of the theorem would hold true for the hierarchy of adeles and entanglement would give something genuinely new for quantum computation!
  2. A second deep implication is that the density matrix characterizing the entanglement between two systems cannot in general be diagonalized such that all diagonal elements, identifiable as probabilities, would be in the extension considered (see the small example after this list). One would have stable or partially stable entanglement (could the projection make sense for the states or subspaces with entanglement probability in the extension?). For these bound states the binding mechanism is purely number theoretical. For a given extension of p-adic numbers one can assign to algebraic entanglement also an information measure as a generalization of Shannon entropy, a p-adic entanglement entropy (real valued). This entropy can be negative and the possible interpretation is that the entanglement carries conscious information.
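
A two-dimensional example makes the point of item 2 concrete. The density matrix below has rational entries but its eigenvalues involve sqrt(5), so no unitary matrix with rational entries can diagonalize it; only in an extension containing sqrt(5) do the entanglement probabilities exist as numbers. A minimal sketch (the particular matrix is an arbitrary illustrative choice):

  from sympy import Matrix, Rational

  rho = Matrix([[Rational(2, 3), Rational(1, 3)],
                [Rational(1, 3), Rational(1, 3)]])

  print("trace:", rho.trace())           # 1, as required for a density matrix
  print("eigenvalues:", rho.eigenvals()) # 1/2 + sqrt(5)/6 and 1/2 - sqrt(5)/6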

2. What about the situation for the continuum version of TGD?

At least the cognitively finitely representable physics would have the HFF property with the coefficient field of Hilbert spaces replaced by an extension of rationals. Number theoretical universality would suggest that the HFF property characterizes also the physics of continuum TGD. This leads to a series of questions.

  1. Does the new theorem imply that in the continuum version of TGD not all quantum computations allowed by the Turing paradigm with a real coefficient field for quantum states are possible: MIP* ⊂ RE? The hierarchy of extensions of rationals allows utilization of entanglement, and one can even wonder whether one could have MIP* = RE at the limit of algebraic numbers.
  2. Could the number theoretic vision force a change also in the view about quantum computation? What does RE actually mean in this framework? Can one really assume complex entanglement coefficients in computation? Does the computational paradigm make sense at all in the continuum picture?

    Are both real and p-adic continuum theories unreachable by computation, which would give rise to cognitive representations only in the algebraic intersection of the sensory and cognitive worlds? I have indeed identified real continuum physics as a correlate for sensory experience and various p-adic physics as correlates of cognition in TGD: they would represent the computationally unreachable parts of existence.

    Continuum physics involves transcendentals and in mathematics this brings in analytic formulas and partial differential equations. At least at the level of mathematical consciousness the emergence of the notion of continuum means a gigantic step. Also this suggests that transcendentality is something very real and that computation cannot catch all of it.

  3. The adelic theorem allows one to express the norm of a rational number as a product of the inverses of its p-adic norms (a small numerical check follows this list). Very probably this representation holds true also for the analogs of rationals formed from algebraic integers. Reals can be approximated by rationals. Could the extensions of all p-adic number fields restricted to the extension of rationals say about real physics only what can be expressed using language?
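
The product formula referred to in item 3 is easy to verify numerically for a concrete rational number. A minimal sketch (exact arithmetic with fractions; the choice q = -50/27 is arbitrary):

  from fractions import Fraction

  def p_adic_norm(q, p):
      """|q|_p = p^(-v_p(q)) for a nonzero rational q."""
      num, den, v = q.numerator, q.denominator, 0
      while num % p == 0:
          num //= p; v += 1
      while den % p == 0:
          den //= p; v -= 1
      return Fraction(p) ** (-v)

  q = Fraction(-50, 27)
  product = Fraction(1)
  for p in (2, 3, 5, 7, 11):      # only 2, 3 and 5 contribute for this q
      product *= p_adic_norm(q, p)
  print("product of p-adic norms:", product, "  inverse real norm:", 1 / abs(q))

Both numbers come out as 27/50, illustrating that the real norm is the product of the inverses of the p-adic norms.
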
Also fermions are highly interesting in the present context. In TGD the spinor structure can be seen as a square root of Kähler geometry, in particular for the "world of classical worlds" (WCW). Fermions are identified as correlates of Boolean cognition. The continuum case for fermions does not follow as a naive limit of the algebraic picture.
  1. The quantization of the induced spinors in TGD looks different in the discrete and continuum cases. The discrete case is very simple since equal-time anticommutators give discrete Kronecker deltas. In the continuum case one has delta functions, possibly causing infinite vacuum energy like divergences in conserved Noether charges (Dirac sea).
  2. I have proposed (see this) how these problems could be avoided by avoiding anticommutators giving delta functions. The proposed solution is based on zero energy ontology and the TGD based view about space-time. One also obtains a long-sought-for concrete realization of the idea that second quantized induced spinor fields are obtained as restrictions of second quantized free spinor fields in H=M4× CP2 to the space-time surface. The fermionic variant of M8-H duality (see this) provides further insights and gives a very concrete picture about the dynamics of fermions in TGD.
These considerations relate in an interesting manner to consciousness. Quantum entanglement makes possible, in the TGD framework, the telepathic sharing of mental images represented by sub-selves of the self. For the series of discretizations of physics by HFFs and cognitive representations associated with extensions of rationals, the result indeed means something new.

3. What about transcendental extensions?

During the writing of this article an interesting question popped up.

  1. Also transcendental extensions of rationals are possible, and one can consider a generalization of computationalism by also allowing functions in transcendental extensions. Could the hierarchy of algebraic extensions continue with transcendental extensions? Could one even play with the idea that the discovery of transcendentals meant a quantum leap leading to an extension involving for instance e and π as basic transcendentals? Could one generalize the notion of a polynomial root to a root of a function allowing a Taylor expansion f(x) = ∑ q_n x^n with rational coefficients, so that the roots of f(x) = 0 could be used to define transcendental extensions of rationals?
  2. Powers of e or of its roots define infinite-D extensions having the special property that they are finite-D for p-adic number fields because e^p is an ordinary p-adic number. In the p-adic context e can be defined as a root of the equation x^p - ∑ p^n/n! = 0, which makes sense also for rationals. The numbers log(p_i), such that p_i appears as a factor of integers smaller than p, define an infinite-D extension of both rationals and p-adic numbers. They are obtained as roots of e^x - p_i = 0.
  3. The numbers nπ can be defined as roots of sin(x) = 0 (and the numbers (2n+1)π/2 as roots of cos(x) = 0). The extension by π is infinite-dimensional and the conditions defining it would serve as consistency conditions when the extension contains roots of unity and effectively replaces them.
  4. What about other transcendentals appearing in mathematical physics? The values ζ(n) of Riemann Zeta appearing in scattering amplitudes are for even arguments given by ζ(2n) = (-1)^(n+1) B_{2n} (2π)^{2n}/(2 (2n)!). This follows from the functional identity for Riemann zeta and from the expression ζ(-n) = (-1)^n B_{n+1}/(n+1) (with B_1 = -1/2) (see this). The Bernoulli numbers B_n are rational and vanish for odd n > 1. An open question is whether also the odd values ζ(2n+1) are proportional to powers of π with rational coefficients or whether they represent "new" transcendentals. A small numerical check of the even-argument formula is given after this list.
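
The even-argument formula quoted in item 4 can be checked directly with a computer algebra system. A minimal sketch using sympy:

  from sympy import bernoulli, factorial, pi, simplify, zeta

  def zeta_even(n):
      """(-1)^(n+1) * B_{2n} * (2*pi)^(2n) / (2*(2n)!) as quoted above."""
      return (-1)**(n + 1) * bernoulli(2*n) * (2*pi)**(2*n) / (2 * factorial(2*n))

  for n in (1, 2, 3):
      print(f"n={n}: formula agrees with zeta(2n):", simplify(zeta_even(n) - zeta(2*n)) == 0,
            " zeta(2n) =", zeta(2*n))

For n = 1 this reproduces ζ(2) = π²/6, for n = 2 it gives ζ(4) = π⁴/90, and so on.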

4. What does one mean with quantum computation in TGD Universe?

The TGD approach raises some questions about computation.

  1. The ordinary computational paradigm is formulated for Turing machines manipulating natural numbers by recursive algorithms. Programs would essentially represent a recursive function n → f(n). What happens to this paradigm when extensions of rationals define cognitive representations as unique space-time discretizations, with algebraic numbers as the limit giving rise to a dense subset of the reals?

    The usual picture would be that since reals can be approximated by rationals, the situation is not changed. TGD however suggests that one should replace at least the quantum version of the Turing paradigm by considering functions mapping algebraic integers (algebraic rationals) to algebraic integers.

    Quite concretely, one can manipulate algebraic numbers without approximating them by rationals and only at the end perform this approximation; computations would construct recursive functions in this manner (a small sketch follows this list). This would raise entanglement to an active role even if one has HFFs and even if classical computations could still look very much like ordinary computation using integers.

  2. ZEO brings in also time reversal occurring in "big" (ordinary) quantum jumps and this modifies the views about quantum computation. In ZEO based conscious quantum computation halting means the "death" and "reincarnation" of a conscious entity, a self. How do processes involving a series of haltings in this sense differ from ordinary quantum computation: could one shorten the computation time by going forth and back in time?
  3. There are many interesting questions to be considered. M8-H duality gives justifications for the vision about algebraic physics. TGD leads also to the notion of infinite prime and I have considered the possibility that infinite primes could give a precise meaning to the dimension of an infinite-D Hilbert space. Could the number-theoretic view about infinity be considerably richer than the idea about infinity as a limit would suggest (see this)? The construction of infinite primes is analogous to a repeated second quantization of an arithmetic supersymmetric quantum field theory allowing also bound states at each level, and a concrete correspondence with the hierarchy of space-time sheets is suggestive. For the infinite primes at the lowest level of the hierarchy single particle states correspond to rationals and bound states to polynomials and therefore to the sets of their roots. This strongly suggests a connection with the M8 picture.
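
The idea of item 1, manipulating algebraic numbers exactly and approximating only at the very end, can be illustrated with a computer algebra system. A minimal sketch (the particular expressions are arbitrary; the point is that everything before the last line stays inside the extension Q(sqrt(2))):

  from sympy import Rational, expand, sqrt

  a = 1 + sqrt(2)                 # an element of Q(sqrt(2))
  b = Rational(3, 4) - sqrt(2)    # another element of the same extension

  result = expand(a**3 * b - a * b**2)   # exact arithmetic inside Q(sqrt(2))
  print("exact value:       ", result)
  print("approximation last:", result.evalf())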

See the article MIP*= RE: What could this mean physically? or the chapter Evolution of Ideas about Hyper-finite Factors in TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

MIP*=RE: What could it possibly mean?

I received a very interesting link to a popular article (see this) explaining a recently discovered deep result in mathematics having implications also in physics. The article (see this) by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen has a rather concise title " MIP*=RE". In the following I try to express the impressions of a (non-mainstream) physicist about the result. In the first posting I discuss the finding and the basic implications from the physics point of view. In the second posting the highly interesting implications of the finding in the TGD framework are discussed.

The result can be expressed by using the concepts of computer science, about which I know very little at the hard technical level. The results are however told to state something highly non-trivial about physics.

  1. RE (recursively enumerable languages) denotes all problems solvable by a computer. P denotes the problems solvable in polynomial time. NP does not refer to a non-polynomial time but to "non-deterministic polynomial acceptable problems" - I hope this helps the reader - I am a little bit confused! It is not known whether P = NP is true.
  2. IP problems (P is now for "prover") can be solved by a collaboration of an interrogator and a prover who tries to convince the interrogator that her proof is correct with high enough probability. MIP involves multiple provers, treated as criminals trying to prove that they are innocent and not allowed to communicate. MIP* is the class of solvable problems in which the provers are allowed to entangle.
The finding, which is characterized as shocking, is that all problems solvable by a Turing computer belong to this class: RE= MIP*. All problems solvable by computer would reduce to problems in the class MIP*! Quantum computation would indeed add something genuinely new to the classical computation.

Two physically interesting applications

There are two physically interesting applications of the theorem, which are interesting also from the TGD point of view and force one to make explicit the assumptions involved.

1. About the quantum physical interpretation of MIP*

To proceed one must clarify the quantum physical interpretation of MIP*.

  1. Quantum measurement requires entanglement of the observer O with the measured system M. What is basically measured is the density matrix of M (or equivalently that of O); a small illustration follows this list. State function reduction gives as an outcome a state which corresponds to an eigenvalue of the density matrix. Note that this state can be an entangled state if the density matrix has degenerate eigenvalues.
  2. Quantum measurement can be regarded as a question to the measured system: "What are the values of given commuting observables?". The final state of the quantum measurement provides an eigenstate of the observables as the answer to this question. M would be in the role of the prover and the observers Oi would serve as interrogators.

    In the first case (multiple interrogators) measurements would entangle M with un-entangled states of the tensor product H1⊗ H2 for O, followed by a state function reduction splitting the state of M into an un-entangled state in the tensor product M1⊗ M2.

    In the second case the entire M would be interrogated using entanglement of M with entangled states of H1⊗ H2, using measurements of several commuting observables. The theorem would state that interrogation in this manner is more efficient in the infinite-D case unless HFFs are involved.

  3. This interpretation differs from the interpretation in terms of computational problem solving, in which one would have several provers and one interrogator. Could these interpretations be dual, as the complete symmetry of the quantum measurement with respect to O and M suggests? In the case of multiple provers (analogous to accused criminals) it is advantageous to isolate them. In the case of multiple interrogators the best result is obtained if the interrogator does not effectively split itself into several ones.
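
A small numerical illustration of item 1: for an entangled pure state of M and O, the reduced density matrix of M carries the outcome probabilities of a measurement on M. A minimal sketch (the amplitudes 0.8 and 0.2 are arbitrary choices):

  import numpy as np

  # |psi> = sqrt(0.8)|00> + sqrt(0.2)|11> in the basis |00>, |01>, |10>, |11>
  psi = np.array([np.sqrt(0.8), 0.0, 0.0, np.sqrt(0.2)])
  rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

  rho_M = np.einsum('ijkj->ik', rho)   # partial trace over the second system
  probs = np.linalg.eigvalsh(rho_M)
  print("reduced density matrix of M:\n", rho_M)
  print("outcome probabilities:", np.sort(probs)[::-1])   # 0.8 and 0.2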

2. Connes embedding problem and the notion of finite measurement/cognitive resolution

Alain Connes formulated what has become known as the Connes embedding problem. The question is whether infinite matrices forming a factor of type II1 can always be approximated by finite-D matrices, that is, embedded in a hyperfinite factor of type II1 (HFF). Factors of type II and their hyperfinite variants are special classes of von Neumann algebras possibly relevant for quantum theory.

This result means that if one has measured a complete set of commuting observables acting in the full space, one can find in the finite-dimensional case a unitary transformation transforming the observables to tensor products of observables associated with the factors of a tensor product. In the infinite-D case this is not true.

What sets alarm bells ringing is that in TGD there are excellent arguments suggesting that the state space has HFFs as building bricks. Does the result mean that entanglement cannot help in quantum computation in the TGD Universe? I do not want to live in this kind of Universe!

3. Tsirelson problem

Tsirelson problem (see this) is another problem mentioned in the popular article as a physically interesting application. The problem relates to the mathematical description of quantum measurement.

Three systems are considered. There are two systems O1 and O2 representing observers and a third representing the measured system M. The measurement reducing the entanglement between M and O1 or O2 can be regarded as producing a correspondence between the state of M and that of O1 or O2, and one can think that O1 or O2 measures only observables in its own state space as a kind of image of M. There are two manners to see the situation. The provers correspond now to the observers and the two situations correspond to provers without and with entanglement.

Consider first a situation in which one has single Hilbert space H and single observer O. This situation is analogous to IP.

  1. The state of the system is described statistically by a density matrix - not necessarily a pure state - whose diagonal elements have an interpretation as reduction probabilities of states in this basis. The measurement situation fixes the state basis used. Assume an ensemble of identical copies of the system in this state. Assume also that one has a complete set of commuting observables.
  2. By measuring all observables for the members of the ensemble one obtains the probabilities as diagonal elements of the density matrix (a small numerical sketch follows this list). If the observable is the density matrix and it has non-degenerate eigenvalues, the situation simplifies dramatically: it is enough to use the density matrix as an observable. TGD based quantum measurement theory assumes that the density matrix describing the entanglement between two subsystems is a universal observable measured in the state function reductions reducing their entanglement.
  3. Can one deduce also the state of M as a superposition of states in the basis chosen by the observer? This basis need not be the same as the basis defined by, say, the density matrix, if the system has interacted with some other system and this interaction has led to an eigenstate of the density matrix. Assume that one can prepare the latter basis by a physical process such as this kind of interaction.

    The coefficients of the state basis form a set of N^2 complex numbers defining a unitary N× N matrix. Unitarity gives N conditions stating that the complex rows are unit vectors: these numbers are given by the measurement of all observables. There are also N(N-1) conditions stating that the rows are orthogonal. Together these N+N(N-1)=N^2 conditions fix the elements of the unitary matrix, and therefore the complex coefficients of the state basis of the system can be deduced from a complete set of measurements for all elements of the basis.
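
A minimal numerical sketch of item 2 above (Python/numpy; the fixed measurement basis and the 3×3 density matrix are arbitrary illustrative choices): the empirical outcome frequencies over an ensemble of identical copies estimate the diagonal elements of the density matrix in that basis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Arbitrary illustrative density matrix (positive, unit trace), written in the measurement basis.
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    rho = A @ A.conj().T
    rho /= np.trace(rho).real

    # Measuring a complete set of commuting observables amounts to a projective
    # measurement in this basis; the outcome probabilities are the diagonal elements.
    probs = np.diag(rho).real

    # Simulate an ensemble of identical copies and record the outcome frequencies.
    outcomes = rng.choice(len(probs), size=100_000, p=probs)
    freqs = np.bincount(outcomes, minlength=len(probs)) / len(outcomes)

    print("diagonal of rho     :", np.round(probs, 3))
    print("ensemble frequencies:", np.round(freqs, 3))
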

Consider now the analog of the MIP* involving more than one observer. For simplicity consider two observers.
  1. Assume that the state space H of M decomposes to a tensor product H=H1⊗ H2 of state spaces H1 and H2 such that O1 measures observables X1 in H1 and O2 measures observables X2 in H2. The observables X1 and X2 commute since they act in different tensor factors. The absence of interaction between the factors corresponds to the inability of the provers to communicate. As in the previous case, one can deduce the probabilities for the various outcomes of the joint measurements interpreted as measurements of a complete set of observables X1⊗ X2.
  2. One can also think that the two systems form a single system O so that O1 and O2 can entangle. This corresponds to a situation in which entanglement between the provers is allowed. Now X1 and X2 are not in general independent but also now they must commute. One can deduce the probabilities for various outcomes as eigenstates of observables X1 X2 and deduce the diagonal elements of the density matrix as probabilities.
Are these manners to see the situation equivalent? Tsirelson demonstrated that this is the case for finite-dimensional Hilbert spaces, which can indeed be decomposed to a tensor product of factors associated with O1 and O2. This means that one finds a unitary transformation transforming the entangled situation to an unentangled one and to tensor product observables.
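
As a minimal finite-dimensional illustration of the setup (not of the equivalence theorem itself), the following Python sketch takes an illustrative two-qubit Bell state as M and observables σz and σx acting in different tensor factors, checks that they commute, and computes the joint outcome probabilities; all concrete choices here are assumptions made for the example only.

    import numpy as np

    # Pauli matrices and the 2x2 identity.
    I2 = np.eye(2)
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])

    # Observables of the two observers act in different tensor factors and therefore commute.
    X1 = np.kron(sz, I2)   # measured by O1
    X2 = np.kron(I2, sx)   # measured by O2
    print("commutator norm:", np.linalg.norm(X1 @ X2 - X2 @ X1))

    # Shared entangled state of M: the Bell state (|00> + |11>)/sqrt(2).
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)
    rho = np.outer(psi, psi.conj())

    # Joint probabilities for the four outcome pairs of (X1, X2).
    vals1, vecs1 = np.linalg.eigh(sz)
    vals2, vecs2 = np.linalg.eigh(sx)
    for i, a in enumerate(vals1):
        for j, b in enumerate(vals2):
            P = np.kron(np.outer(vecs1[:, i], vecs1[:, i].conj()),
                        np.outer(vecs2[:, j], vecs2[:, j].conj()))
            print(f"P(X1={a:+.0f}, X2={b:+.0f}) = {np.trace(rho @ P).real:.3f}")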

For the infinite-dimensional case the situation remained open. According to the article, the new result implies that the two descriptions are not equivalent in general. For hyperfinite factors the situation can be approximated with a finite-dimensional Hilbert space, so that the descriptions are equivalent to an arbitrarily precise approximation.

See the article MIP*= RE: What could this mean physically? or the chapter Evolution of Ideas about Hyper-finite Factors in TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, August 11, 2020

Why not publish a book about TGD?

I was asked why I do not publish a book about TGD. Some people also ask why I have not considered the idea of applying TGD to some real problem in physics. I paste below the reply, which should explain why not.

Some people have also informed me that Einstein said that any big idea must be so simple that even a child can understand it. Why do I not publish a picture book for children about TGD explaining the big idea using a couple of pictures? My answer could be the following: Einstein made only a single big blunder in his life. It was not the proposal of the cosmological constant but the above statement: fools around the world really take it literally. I appreciate people writing for children but I am a different kind of writer.

So: why don't I publish a book for adult readers or even colleagues about TGD? I actually have 24 online books almost ready for printing. Basic theory and lots of applications covering all branches of physics and also biology and neuroscience, which the people asking these questions have not noticed, since just seeing a link to my homepage - no time for more than this - does not give any idea of what TGD really is. These books can be published posthumously as collected works when the time is ripe for this. The reasons are many-fold.

There are overlapping topics, and colleagues would not lose the opportunity to blame me for self-plagiarism, as happened with the previous book about TGD. There was some ridiculous word-counting mechanism used to reveal my criminal character. For two years I spent a lot of useful working time on totally irrelevant activities having very little to do with the contents of the book. The compensation is so small that bank costs would make me the net payer. No one reads books nowadays, and no one even considers buying a book by a no-name.

I do not have too many years left and I want to use them to develop TGD. This is for purely selfish reasons: it is marvellous to live in full swing still at this age and do history of science.

I have also given up the hope of explaining TGD understandably: the 42-year distance to colleagues is so long that I feel as if I were on a mountain top covered by clouds. They refuse even to believe that there is someone there. 24 books as a climbing guide, telling also about all the wrong tracks, is too much for anyone, and it is not inspiring to passively follow instructions. It is much more motivating for them to rediscover TGD by themselves.

I have hoped that I could help them in this process and perhaps shorten 42 years to a decade. I have explained again and again what the deep problems are and what the TGD solution to them would be, hoping that it would be more motivating for them to use their own brains to solve the key problems. They are not interested even in this option. They prefer to follow the wrong paths shown by big names and to repeat the mistakes already made. Or alternatively, to build a totally nonsensical one-liner theory based on mere pictures. So: let them discover it all by themselves. Trial and error is the most effective manner to learn.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, August 10, 2020

Fast Radiowave bursts in TGD framework

I encountered a highly interesting popular article (thanks to my friend Asta for the link) with the title "Mysterious 'fast radio burst' detected closer to Earth than ever before" (this).

Fast radio wave bursts (FRBs) arrive from distances of hundreds of millions of light years - the scale of a large void. If the energy of FRBs is radiated isotropically in all directions - an assumption to be challenged below - the total energy is of the same order of magnitude as the energy produced by the Sun during a century. There are FRBs repeating with a period of 16 days located at a distance of 500 million light years from Earth.

The latest bursts, described in the popular article, arrive from a distance of only about 30 thousand light years, within our own galaxy, the Milky Way, and can be assigned to a magnetar (see this), which is a remnant of a neutron star and has an extremely strong magnetic field of about 10^11 Tesla.

Below is the abstract of the article (this) reporting the discovery.

We report on International Gamma-Ray Astrophysics Laboratory (INTEGRAL) observations of the soft γ ray repeater SGR 1935+2154 performed between 2020 April 28 and May 3. Several short bursts with fluence of ∼ 10^-7-10^-6 erg cm^-2 were detected by the Imager on-board INTEGRAL (IBIS) instrument in the 20-200 keV range. The burst with the hardest spectrum, discovered and localized in real time by the INTEGRAL Burst Alert System, was spatially and temporally coincident with a short and very bright radio burst detected by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and Survey for Transient Astronomical Radio Emission 2 (STARE2) radio telescopes at 400-800 MHz and 1.4 GHz, respectively.

Its lightcurve shows three narrow peaks separated by ∼ 29 ms time intervals, superimposed on a broad pulse lasting ∼ 0.6 s. The brightest peak had a delay of 6.5 ± 1.0 ms with respect to the 1.4 GHz radio pulse (that coincides with the second and brightest component seen at lower frequencies). The burst spectrum, an exponentially cutoff power law with photon index Γ = 0.7 (+0.4/-0.2) and peak energy E_p = 65 ± 5 keV, is harder than those of the bursts usually observed from this and other magnetars.

By the analysis of an expanding dust-scattering ring seen in X-rays with the Neil Gehrels Swift Observatory X-ray Telescope (XRT) instrument, we derived a distance of 4.4 (+2.8/-1.3) kpc for SGR 1935+2154, independent of its possible association with the supernova remnant G57.2+0.8. At this distance, the burst 20-200 keV fluence of (6.1 ± 0.3) × 10^-7 erg cm^-2 corresponds to an isotropic emitted energy of ∼ 1.4 × 10^39 erg. This is the first burst with a radio counterpart observed from a soft γ ray repeater and it strongly supports models based on magnetars that have been proposed for extragalactic fast radio bursts.

What could be the interpretation of the finding in the TGD framework? The weirdest feature of the FRB is its gigantic total energy assuming that the radiation is isotropic during the burst. This assumption can be challenged in the TGD framework, where the stellar systems are connected to a monopole flux tube network and radiation flows along flux tubes, which can also branch. This brings strongly in mind the analog of a nervous system in cosmic scales and this analogy is used in what follows.

  1. The duration of the pulses is a few milliseconds: the duration of nerve pulses is the same. Is this a wink-wink to the Poirots of astrophysics?
  2. Bursts can arrive regularly, for instance with a period of T=16.35 days (see this). This brings to the mind of the astro-Poirot the biorhythms, in particular EEG rhythms. This would not be the only such rhythm: also the period of Tα=160 minutes, for which I have proposed an interpretation as a cosmic analog of the alpha rhythm, is known (see this). The ratio T/Tα=147.15 would give for the analogous brain rhythm the value of 14.7 seconds (a short arithmetic sketch follows this list).
  3. Let us assume that stellar systems indeed form an analog of a neural network connected by flux tubes, and assume that the topology of this network is analogous to that defined by axons. In the TGD framework neural communications between neurons actually occur by using dark photons with effective Planck constant heff=nh0 propagating along the flux tubes with the velocity of light, so that feedback from the brain and even from the magnetic body of the brain back to the sensory organs as virtual sensory input becomes possible. The function of nerve pulses is to connect the outgoing branch of the flux tube associated with the axon and those associated with the dendrites of the post-synaptic neuron into longer flux tubes by using neurotransmitters as relays.
  4. The stellar object, as an analog of a neuron, would send its dark photon signals along the flux tube assignable to a single axon. The axon would later branch into dendrites arriving at other stellar systems and eventually perhaps at planets as analogs of synaptic contacts. An interesting question is whether also the analogs of nerve pulses and neurotransmitters acting as relays in the synaptic contacts defined by planets could make sense. What could nerve pulses propagating along the flux tube correspond to?

    Remark: In the TGD based model of the brain there would also be a flux tube network analogous to the meridian system of Eastern medicine, responsible for the holistic and spatial aspects of consciousness, since more than one flux tube can emanate from a given node, making non-linear networks possible (see this). The nervous system with its tree-like structure would be responsible for the linear and temporal aspects of conscious experience. The tree-like structure would be crucial for the understanding of Alzheimer's disease (see this). The meridian system would be a predecessor of the neural system.

  5. The distances of FRBs are of the order of the size of the large voids, which have galaxies at their boundaries and form lattice-like networks possibly assignable to the tessellations of 3-D hyperbolic space defining cosmic time= constant surfaces. Tessellations of this kind could accompany also the brain (see this). In the fractal Universe of TGD one can wonder whether these voids are analogs of cells or even neurons and form cosmic biological organisms with flux tubes forming a network allowing communications.
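
The scaling used in item 2 above is elementary arithmetic; a minimal sketch, assuming the standard 0.1 s (10 Hz) period for the EEG alpha rhythm:

    # Scaling the FRB repetition period to an "EEG analog", assuming a 0.1 s alpha period.
    T_frb_days = 16.35        # FRB repetition period, days
    T_alpha_min = 160.0       # proposed cosmic analog of the alpha rhythm, minutes
    ratio = T_frb_days * 24 * 60 / T_alpha_min
    print("T/T_alpha =", round(ratio, 2))                          # ~147.15
    print("analogous brain rhythm =", round(0.1 * ratio, 1), "s")  # ~14.7 s
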
The basic implication is that the energy of the emitted radiation could be dramatically smaller than that predicted for an isotropic radiation burst. It is interesting to see whether the proposed picture survives quantitative modelling.
  1. The reduction factor r for the total emitted energy would be essentially r= S/A, where S is the area of the "axonal" flux tube and A=4π R^2 is the surface area of the magnetar. One must estimate the value of r.
  2. Flux quantization for a single sheet of the many-sheeted magnetic flux tube involved would give eBS= hbar0, with h=6h0 (see this and this). The general order of magnitude estimate is eB ∼ hbar0/S. If each sheet carries the same energy, the number of sheets is n=heff/h0 and the effective area of a flux tube is S= hbar0/eB. Does the magnetic field assigned to the magnetar correspond to a single sheet or to all sheets? If the field is measured from cyclotron energies assuming heff=h, it would correspond to all sheets and the measured magnetic field would be the effective magnetic field Beff= nB/6 for h= 6h0.
  3. The branching of the flux tube could correspond to the splitting of the many-sheeted flux tube to tubes with a smaller number of sheets and involve a reduction of heff. This would give the estimate r= hbar0/(eBA). A magnetic field of 1 Tesla corresponds to a unit flux quantum with radius - the magnetic length - of about 2.6× 10^-8 meters. Assuming a magnetar radius R=20 km one has r∼ 10^-25/6.
  4. The estimate for the total emitted energy assuming isotropic radiation is the energy radiated by the Sun during a century. The Sun transforms roughly E100=1.3× 10^19 kg of mass to radiation during a century. This gives for the energy emitted in the FRB the estimate E= r E100∼ 10^-6/6 kg, which is roughly 7.5 Planck masses (mPl≈ 2.2× 10^-8 kg). The order of magnitude is the Planck mass. The estimate is of course extremely rough (a numerical sketch follows this list).

    In any case, the idea that pulses could have a mass of the order of a few Planck masses is attractive. Note that a large neuron with radius about 10^-4 meters has a mass of order Planck mass (see this).

  5. From the detected fluence dE/dS= 6.1× 10^-7 erg cm^-2= 3.8× 10^9 eV m^-2 and the total radiated energy E= 7.5 mPl one can estimate the total area S covered by the branched energy flux if it covers a disk of radius R. This gives some idea about how wide the branching is. The total energy is E =(dE/dS)× π R^2, giving R= [E/π (dE/dS)]^(1/2)∼ .9× 10^9 m. The equatorial radius of the Sun is RSun= .7× 10^9 m, that is RSun∼ .78 R. This conforms with the idea that the radiation arrives along the axon-like flux tube connecting the Sun and the magnetar, branching so that it covers the entire Sun.
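
A rough numerical check of the estimates in items 1-5 (Python; the 1 Tesla flux-tube field, the 20 km magnetar radius, and the century of solar radiation are the assumptions stated above, and the outputs reproduce the quoted orders of magnitude rather than the exact figures):

    import numpy as np

    # Constants (SI units).
    hbar = 1.055e-34       # J s
    e    = 1.602e-19       # C
    c    = 2.998e8         # m/s
    m_Pl = 2.18e-8         # Planck mass, kg

    # Assumptions taken from the text above.
    B_tube     = 1.0       # Tesla, field used for the flux-tube cross section
    R_magnetar = 2.0e4     # m, magnetar radius ~20 km
    E100_kg    = 1.3e19    # kg, mass radiated by the Sun during a century
    hbar0      = hbar / 6  # h = 6*h0 assumption

    # Magnetic length for 1 Tesla (check of the quoted 2.6e-8 m).
    print("magnetic length:", np.sqrt(hbar / (e * B_tube)), "m")

    # Reduction factor r = S/A with S = hbar0/(eB) and A = 4*pi*R^2.
    S = hbar0 / (e * B_tube)
    A = 4 * np.pi * R_magnetar**2
    r = S / A
    print("reduction factor r:", r)   # of order 1e-26, i.e. ~1e-25/6

    # FRB energy as the fraction r of a century of solar radiation, in Planck masses.
    E_kg = r * E100_kg
    print("mass-equivalent of the burst:", E_kg, "kg =", E_kg / m_Pl, "Planck masses")

    # Disk radius over which the branched flux would spread, from the measured
    # fluence 6.1e-7 erg/cm^2; a rough check against the ~1e9 m scale quoted above.
    fluence = 6.1e-7 * 1e-7 / 1e-4    # erg/cm^2 -> J/m^2
    R_disk = np.sqrt(E_kg * c**2 / (np.pi * fluence))
    print("disk radius:", R_disk, "m  (solar radius ~ 7e8 m)")
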
The ratio heff/h should be of the same order of magnitude as the ratio X=E/Erad, where Erad is the energy of the radio wave photon with frequency 1.4 GHz for heff=h: X∼ heff/h. The ratio Y= X/(heff/h) should satisfy Y∼ 1.
  1. To proceed further, one can use the TGD variant of Nottale's hypothesis. The hypothesis states that one can assign to gravitational flux tubes a gravitational Planck constant hbargr. The original hypothesis was ℏeff=ℏgr; the more recent form, inspired by the adelic vision, states that hgr corresponds to a large integer factor of heff. One has ℏgr= GMm/v0= rSm/2v0. Here M is the mass of the large object - now that of the magnetar - and m is the mass of the smaller quantum coherent object in contact with the gravitational flux tube mediating the gravitational interaction as dark graviton exchanges.

    v0 is a velocity parameter. For the Sun one would have β0,S=v0/c≈ 2^-11 from the model for the inner planets as Bohr orbits (see this).

  2. The Planckian educated guess is m∼ mPl, so that one would have hbargr/hbar= rS(M)/(2LPlβ0), where LPl is the Planck length and rS(M) is the Schwarzschild radius of the magnetar (a small numerical check follows this list). This would give Y= X/(ℏgr/ℏ)= .4 if one has rS=3 km as for the Sun. rS is probably larger but smaller than the magnetar radius of about 20 km. The masses of magnetars are in the range 1-2 solar masses. For M= 2MS one obtains Y=.8.

    The rough estimate is not far from Y=1 and suggests that the interacting quantum units at the receiving end have a mass of the order of the Planck mass. Interestingly, the mass of a large neuron with radius 10^-4 m is about the Planck mass (see this), which supports the view that quantum gravitation in the TGD sense is fundamental for life - even on cosmic scales.
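
A small numerical check (Python) that the two forms ℏgr= GMm/v0 and ℏgr/ℏ= rS(M)/(2LPlβ0) agree for the stated assumptions m= mPl and β0= 2^-11; the one-solar-mass value for the magnetar is only an illustrative choice.

    import numpy as np

    # Constants (SI units).
    G    = 6.674e-11                   # m^3 kg^-1 s^-2
    hbar = 1.055e-34                   # J s
    c    = 2.998e8                     # m/s
    m_Pl = np.sqrt(hbar * c / G)       # Planck mass, ~2.2e-8 kg
    L_Pl = np.sqrt(hbar * G / c**3)    # Planck length, ~1.6e-35 m

    # Assumptions: m = m_Pl, beta0 = v0/c = 2^-11; M is illustratively one solar mass.
    M     = 1.99e30                    # kg
    beta0 = 2.0**-11
    v0    = beta0 * c

    # The two forms of the gravitational Planck constant ratio should agree.
    ratio_from_GMm = G * M * m_Pl / (v0 * hbar)
    r_S = 2 * G * M / c**2
    ratio_from_rS = r_S / (2 * L_Pl * beta0)
    print("hbar_gr/hbar from GMm/(v0 hbar)     :", ratio_from_GMm)
    print("hbar_gr/hbar from r_S/(2 L_Pl beta0):", ratio_from_rS)
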

The physical interpretation of the velocity parameter v0 is one of the key challenges.
  1. The order of magnitude of v0 is the same as for the rotational velocities in the solar system. I have considered a geometry based interpretation (see this and this).
  2. The analogy with the neural system encourages the question whether v0 could have a concrete interpretation as the analog of the nerve pulse conduction velocity assignable to the dark magnetic flux tubes connecting distant systems.

    In TGD framework nerve pulses (see this) are proposed to be induced by the perturbations of Sine-Gordon soliton sequences for the generalized Josephson junctions assignable to the cell membrane and identifiable as transversal flux tubes assignable to various membrane proteins such as ion channels and pumps. The dark variants of the biologically important ions would give rise to the supra currents.

    Could the gravitational flux tubes analogous to axons have this kind of structure and give rise to generalized Josephson junctions with ions serving also in this case as current carriers?

To sum up, the proposed interpretation as cosmic neural networks conforms with the basic assumptions of TGD. Most importantly, the quantitative predictions are correct. The picture is of course not deduced from axioms: this is pattern recognition with basic principles predicting a lot of new physics.

See the article Fast radio wave bursts: is life a cosmic fractal? or the chapter About the Nottale's formula for hgr and the relation between Planck length and CP2 length R.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.