https://matpitka.blogspot.com/2021/

Friday, December 31, 2021

About TGD counterparts of twistor amplitudes

The twistor lift of TGD, in which H = M4 × CP2 is replaced with the product of the twistor spaces T(M4) and T(CP2), and the space-time surface X4 ⊂ H with its 6-D twistor space X6 ⊂ T(M4) × T(CP2), is by now a rather well-established notion, and M8-H duality predicts its counterpart at the level of M8.

The number theoretical vision involves M8-H duality. At the level of H, the quark mass spectrum is determined by the Dirac equation in H. In M8, the mass squared spectrum is determined by the roots of the polynomial P determining the space-time surface, and these roots are in general complex. By Galois confinement the momenta are integer valued when the p-adic mass is used as a unit, so that the mass squared spectrum is also integer valued. This raises hopes of generalizing the twistorial construction of scattering amplitudes to the TGD context.

It is always best to start from a problem, and the basic problem of the twistor approach is that physical particles are not massless.

  1. The intuitive TGD based proposal has been that, since quark spinors are massless in H, masslessness in the 8-D sense could somehow solve the problems caused by massivation in the construction of twistor scattering amplitudes. However, no obvious mechanism has been identified. One step in this direction was the realization that in H quarks propagate with well-defined chiralities and only the square of the Dirac equation is satisfied. For a quark of given helicity the spinor can be identified as a helicity spinor.
  2. M8 quark momenta are in general complex as algebraic integers. They are the counterparts of the area momenta xi of momentum twistor space, whereas H momenta are identified as ordinary momenta. The total momenta of Galois confined states have ordinary integers as components.
  3. The M8 counterpart of the 8-D massless condition in H is the restriction of momenta to mass shells m2 = rn determined by the roots rn of P. The M8 counterpart of the Dirac equation in H is the octonionic Dirac equation, which, like everything in M8, is algebraic and analogous to the massless Dirac equation. The solution is a helicity spinor λ associated with the massive momentum (see the sketch below).
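
To make this number theoretic kinematics concrete, here is a minimal Python sketch; the polynomial and the momentum components are hypothetical illustrations, not taken from any actual TGD computation.

```python
from sympy import symbols, CRootOf

x = symbols('x')
P = x**3 - x - 1          # hypothetical monic polynomial with rational coefficients

# Mass squared spectrum: the (in general complex) roots r_n of P.
roots = [CRootOf(P, i) for i in range(3)]
print([r.evalf(5) for r in roots])      # one real root, two complex roots

# Toy check of Galois confinement for a single momentum component: two momenta
# that are algebraic integers a + b*r (r a root of P) but not ordinary integers,
# summing to an ordinary integer as the singlet condition requires.
r = roots[0]
p1 = 2 + 3*r
p2 = 1 - 3*r
print(p1 + p2)                          # -> 3, an ordinary integer
```
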
The outcome is an extremely simple proposal for the scattering amplitudes.
  1. Vertices correspond to trilinears of Galois confined many-quark states as states of the super symplectic algebra acting as isometries of the "world of classical worlds" (WCW). Quarks are on-shell with H momentum p and M8 momenta xi, xi+1, with pi = xi+1 - xi. The Dirac operator xkiγk restricted to a fixed helicity L, R appears as a vertex factor and has an interpretation as the residue of a pole from an on-mass-shell propagator D, so that a correspondence with the twistorial construction becomes obvious. D is expressible in terms of the helicity spinors of a given chirality and gives two independent holomorphic factors, as in the case of massless theories.
  2. The MHV construction utilizing k=2 MHV amplitudes as building bricks does not seem to be needed at the level of a single space-time surface. One can of course ask whether the M8 quark lines could be regarded as analogs of lines connecting different MHV diagrams, with the diagrams replaced by Galois singlets. The scattering amplitudes would be rational functions, in accordance with the number theoretic vision. The absence of logarithmic radiative corrections is not a problem: the coupling constant evolution would be discrete and defined by the hierarchy of extensions of rationals.
  3. The scattering amplitudes for a single 4-surface X4 are determined by a polynomial. The integration over WCW is replaced with a summation over polynomials characterized by rational coefficients; monic polynomials are highly suggestive. A connection with p-adicization emerges via the identification of the p-adic prime as one of the ramified primes of P. Only (monic) polynomials having a common p-adic prime are allowed in the sum. The counterpart of the vacuum functional exp(-K) is naturally identified as the discriminant D of the extension associated with P, and p-adic coupling constant evolution emerges from the identification of exp(-K) with D, as the sketch below illustrates.
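
A minimal sketch of the last point, assuming, as proposed above, that the ramified primes can be read off from the discriminant of the polynomial (which differs from the discriminant of the extension at most by a square factor); the polynomial is again a hypothetical illustration.

```python
from sympy import symbols, discriminant, factorint

x = symbols('x')
P = x**3 - x - 1                # hypothetical monic polynomial

D = discriminant(P, x)
print(D)                        # -> -23
print(factorint(abs(D)))        # -> {23: 1}: 23 is the only ramified prime

# The p-adic prime would be chosen among the ramified primes (here p = 23),
# only polynomials sharing this prime would contribute to the sum, and the
# discriminant would play the role of the vacuum functional exp(-K).
```
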
See the article About TGD counterparts of twistor amplitudes or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Sunday, December 12, 2021

What would you choose: Alcubierre drive or superluminal quantum teleportation in zero energy ontology?

There was an interesting popular article (see this) about a theoretical article "Worldline numerics applied to custom Casimir geometry generates unanticipated intersection with Alcubierre warp metric" by Harold White et al published in the European Physical Journal (see this).

The article claims that a calculation of the Casimir energy (see this) for a system of two parallel metal plates, using what is called worldline numerics, predicts a torus-like region between the capacitor plates inside which the vacuum energy density is negative (note that the vacuum energy depends on the shape and size of the cavity for a quantum field theory restricted to the interior of the capacitor by posing suitable boundary conditions).

The observation is that the vacuum energy density resembles that of the so-called Alcubierre drive (see this), claimed to make possible space travel with superluminal speeds with respect to the time coordinate of an asymptotically flat space-time region. The idea is that if space contracts in front of the space-ship and expands behind it, superluminality becomes possible. Inside the space-ship the space-time would be flat to a good approximation. Alcubierre himself suggested that the Casimir effect might produce the needed negative energy density.

It is easy for a skeptic to invent objections. Consider first the calculation of the vacuum energy behind the Casimir force, which is a real effect and has been experimentally detected.

  1. The original calculation of Casimir was motivated by van der Waals forces. It has later been shown that the Casimir effect can be interpreted as a retarded van der Waals force (see this): the consideration of poorly defined vacuum energies was not needed in this approach.

    Later the proposal emerged that one can forget the interpretation as an interaction between molecules and that the calculation applies to the system idealized as conducting capacitor plates. There are objections against this interpretation.

  2. Force is the negative gradient of energy. The predicted force is finite although the calculation of the vacuum energy gives an ultraviolet divergent answer requiring regularization. The regularization gives a finite result for the energy E per area A of the plates, which is negative and given by

    E/A = -π2ℏc/(720a3).

    Here a is the distance between the plates, and the force per unit area is proportional to 1/a4.

    Note that there is no dependence on the fine structure constant or any other fundamental coupling strength. The Casimir energy and force approach zero rapidly as a increases, so that practical applications to space travel do not look feasible (see the numerical sketch below).
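
A quick numerical check of the formula and of its rapid decay with a; this is the standard QED result in SI units, and the separations are just illustrative.

```python
import math

hbar = 1.054571817e-34      # J s
c = 2.99792458e8            # m/s

def casimir_energy_per_area(a):
    """E/A = -pi^2 hbar c / (720 a^3), in J/m^2, for plate separation a."""
    return -math.pi**2 * hbar * c / (720 * a**3)

def casimir_pressure(a):
    """F/A = -pi^2 hbar c / (240 a^4), in Pa (attractive)."""
    return -math.pi**2 * hbar * c / (240 * a**4)

for a in (1e-6, 1e-7):      # separations of 1 micrometer and 100 nm
    print(f"a = {a:g} m: E/A = {casimir_energy_per_area(a):.3e} J/m^2, "
          f"F/A = {casimir_pressure(a):.3e} Pa")
# At a = 1 micrometer the pressure is only ~1.3e-3 Pa, illustrating why
# macroscopic applications look infeasible.
```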

Also the notion of Alcubierre drive can be criticized.

  1. The basic problem of general relativity (GRT) is that the notions of energy and momentum and the corresponding conservation laws are lost: this was the starting point of TGD. In weak gravitational fields, for which space-time is a small metric deformation of empty Minkowski space-time, one can expect these notions to make approximate sense. However, the Alcubierre drive represents a situation in which the deviation from flat Minkowski space is large. Does it make sense to speak about (conserved) energy anymore?

  2. If one accepts GRT in this kind of situation, one still has the problem that a negative energy density violates the basic assumptions of GRT. Some kind of exotic matter with negative energy suggests itself if one believes that energy corresponds to some kind of particles.

  3. One can also argue that the proposed effect is a kind of Munchausen trick. The situation must allow an approximate GRT based description in which the space-ship is regarded as a single unit whose energy is the sum of the energy of the space-ship and the Casimir energy and is positive, so that the space-ship moves in a good approximation along a time-like geodesic of the background space-time. The corrections to this picture taking into account the detailed structure of the space-ship should not change the description in an essential manner, and should only add small scale motion superposed on the center of mass motion.

What about the situation in TGD?

  1. The notions of energy and momentum are well-defined and the classical conservation laws are not lost. The conserved classical energy assignable to space-time surface is actually analogous to Casimir energy although it is not assigned to vacuum fluctuations and consists of the contributions assignable to Kähler action and volume action. These contributions depend on Kähler coupling strength and cosmological constant which in the TGD framework is (p-adic) length scale dependent. Recall that for the parallel conductor plates at least, Casimir energy has no dependence on fundamental coupling strengths.

    If the energy is positive definite in TGD as there are excellent reasons to believe, the basic condition for the Alcubierre drive is not satisfied in TGD.

  2. Here I must however argue against myself. One can construct very simple space-time surfaces for which the metric is flat and Euclidean and which are extremals of the basic variational principle.

    1. Consider a surface representable as a graph of a map M4 → CP2 given by Φ = ωt, where Φ is an angle coordinate of a geodesic circle of CP2. The time component gtt = 1 - R2ω2 of the induced flat metric is negative for ω > 1/R.

      The energy density associated with the volume part of the action is non-vanishing, proportional to (gtt)1/2gtt, and negative. The coefficient is analogous to a cosmological constant.

    2. Can these "tachyonic" surfaces correspond to preferred extremals of the action, which are physically analogous to Bohr orbits realizing holography? The 3-D intersections of this solution with two t = constant time slices are Euclidean 3-spaces E3, or identical pieces of E3. If the preferred extremal minimizes its volume action, then (gtt)1/2 = (1-R2ω2)1/2 is maximal. This gives ωR = 0 and a flat piece of M4 (see the sketch after this list).

      Interestingly, the original formulation for what it is to be a preferred extremal (as a condition for holography required by the realization of general coordinate invariance) was that space-time surfaces are absolute minima of the action, which at that time was assumed to be mere Kähler action. The twistor lift of TGD forced the inclusion of the volume term. It seems that the Alcubierre drive is not possible in TGD.

      It might also be possible to show this by demonstrating that the embedding of the Alcubierre metric as a 4-surface in M4× CP2 is not possible.

    3. TGD also allows different kinds of Euclidean regions as preferred extremals. These correspond to what I call CP2 type extremals. They have positive energy density, light-like geodesics as M4 projection, and serve as classical geometric models for fundamental particles.
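
A minimal symbolic sketch of the induced metric computation of the first sub-item; only the g_tt component is affected, and R and ω are kept symbolic.

```python
from sympy import symbols, sqrt, solve

R, omega = symbols('R omega', positive=True)

# For Phi = omega*t on a geodesic circle of radius R in CP2, the CP2 line
# element contributes -R^2 omega^2 dt^2, so the induced time component is:
g_tt = 1 - R**2 * omega**2
print(solve(g_tt, omega))           # -> [1/R]: g_tt changes sign at omega = 1/R

# The criterion quoted above: the preferred extremal maximizes (g_tt)^(1/2),
# i.e. sqrt(1 - R^2 omega^2), which is maximal at omega = 0 (a flat piece of M4).
print(sqrt(g_tt).subs(omega, 0))    # -> 1
```
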
Recall that the motivating problem of the article was how to avoid the restrictions posed by the finite light velocity on space travel. The first thing that comes to mind is that it is not very clever to move a lot of steel and other metals to distant parts of the Universe. Quantum teleportation allows us to consider a more advanced form of space travel: one could send only the information needed to reconstruct the transferred system at the other end. Reconstruction of the space travelers is quite a challenge with existing technology, but the less ambitious goal of sending just the information looks more promising, and qubits have already been teleported.

Concerning superluminal teleportation, the problem is that in the standard quantum theory teleportation also requires sending of classical information. Maximal signal velocity makes superluminal teleportation impossible. This poses extremely stringent limits on the communications with distant civilizations.

In the zero energy ontology of TGD (see this and this), the situation changes.

  1. In the so-called "big" state function reductions (BSFRs), which are the TGD counterparts of ordinary SFRs, the arrow of time changes.
  2. For a light signal this means that the signal is reflected in the time direction and returns back in time with a negative energy (this brings to mind the negative energy condition for the Alcubierre drive). This is just like ordinary reflection but in the time direction, perhaps allowing a kind of seeing in the time direction: I have proposed that conscious memory recall could correspond to this kind of seeing in the time direction.
  3. This might also make practically instantaneous classical communications over space-like distances possible. This in turn would also make possible superluminal quantum teleportation.
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


How could bubbles in a coffee cup, hyperbolic geometry, and the magnetic body relate?

I received on Facebook from Runcel D. Arcaya a beautiful photo of a fractal bubble structure formed at the surface of the liquid in a coffee cup.

This picture is fascinating and inspires questions. What could be the physics behind this structure? Is standard hydrodynamics really enough?

The bubble structures from the point of view of standard physics

Let us list what looks like standard physics.

  1. In the interior of the liquid bubbles of air are formed: this requires energy feed, say due to shaking.
  2. The bubbles are lighter than liquid and rise to the surface and eventually develop holes and disappear.
  3. The pressure inside the bubble is higher than in air outside and this gives rise to a spherical shape locally. Otherwise one would have a minimal surface with vanishing curvature (locally a saddle). The role of pressure means that thermodynamics is needed to understand them.
What kind of dynamics could give rise to the fractality and to the beautiful structure with a big bubble in the center?
  1. The structure brings to mind the illustrations of 2-D hyperbolic geometry and its model as the Poincare disk.

    One can also imagine concentric rings consisting of bubbles whose size scales down as the radius increases. An atomic structure or a planetary system of bubbles also comes to mind!

  2. The so-called rhombitriheptagonal tiling of the Poincare disk comes to mind. Could the bubble structure be associated with a tiling/tessellation of H2 represented as a Poincare disk?
What about TGD description of the bubble structure?

Could one understand the bubble structures in classical TGD in which 4-D space-time surfaces in M4× CP2 would be minimal surfaces with singularities?

  1. Tessellations of 3-D hyperbolic space H3, which has H2 as a sub-manifold, are realized in the mass shell of momentum space familiar to particle physicists and also in 4-D Minkowski space M4 ⊂ M4× CP2 as the surface t2-r2 = a2, where a is the light-cone proper time, identified as the Lorentz invariant cosmic time in cosmologies (see the sketch after this list). H3 is central in TGD.
  2. Tessellations of H3 induce tessellations of H2, and they could in some sense induce 2-D tessellations at the space-time surface; the magnetic body (MB) of any system is an excellent candidate in this respect.

    This would be a universal process. For instance, the genetic code could be understood as a universal code associated with them, and genes would be 1-D projections of the H3 tessellation known as the icosa-tetrahedral tessellation.

    I have proposed that cell membranes could give rise to 2-D realizations of genetic code as an abstract representation of the 3-D tessellation at MB (see this).

  3. Could these "bubble tessellations" somehow correspond to the 3-D tessellations of H3, not necessarily the icosa-tetrahedral one, since there is an infinite number of different tessellations of H3?
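
A minimal numerical sketch of the H3 realization mentioned in the first item: points of the hyperboloid t2-r2 = a2 (a = 1 below), with the Lorentz invariant hyperbolic distance computed from the Minkowski inner product.

```python
import math

def h3_point(rx, ry, rz):
    """Lift a spatial point to the a = 1 hyperboloid t^2 - |x|^2 = 1."""
    t = math.sqrt(1 + rx*rx + ry*ry + rz*rz)
    return (t, rx, ry, rz)

def h3_distance(p, q):
    """Hyperbolic distance: cosh(d) = t1*t2 - x1.x2 for a = 1."""
    mink = p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]
    return math.acosh(mink)

p = h3_point(0.0, 0.0, 0.0)
q = h3_point(1.0, 0.0, 0.0)
print(h3_distance(p, q))    # ~ 0.8814; tessellation vertices would form
                            # a discrete set of such points at fixed distances
```
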
The basic problem is that the Poincare disk has a finite radius, whereas H2 is infinite like the Euclidean plane. Furthermore, the Euclidean plane has vanishing curvature while the Poincare disk has a negative curvature. Simplistic attempts indeed fail.
  1. There is no natural map from the hyperbolic plane H2 to the Poincare disk: such a projection would require an infinite compression, since the infinite hyperbolic plane must fit into a disk of finite Euclidean radius.
  2. What about realizing the Poincare disk as a 2-surface in M4× CP2 with the induced metric equal to the metric of the Poincare disk, given by ds2P = 4ds2E/(1-ρ2)2? Here ds2E is the Euclidean metric of the plane and ρ its radial coordinate.
Simple realizations seem implausible. Presumably the negative curvature of H2, in contrast to the positive curvature of CP2 and the vanishing curvature of the Euclidean plane, is the problem.

The following represents a possible successful attempt.

  1. Could the MB of the system, which can realize the tessellations, somehow induce the discrete Poincare disk like bubble structure as a discrete representation of its 2-D hyperbolic sub-geometry? The distances between the discrete points of the representation, say the positions of the bubbles, would then be the hyperbolic distances induced from, and corresponding to, the distances at the MB (see the sketch after this list).
  2. There is evidence that, in a rather abstract statistical sense, the neurons of the brain obey a hyperbolic geometry: neurons functionally near to each other are near to each other in an effective hyperbolic geometry. Hyperbolic geometry at the level of the MB of the brain could realize this concretely. Neurons functionally near each other could send their signals to points near each other at the MB of the brain (see this).
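
A minimal sketch of the hyperbolic distances of the first item, with hypothetical bubble positions read as points of the unit Poincare disk.

```python
import cmath, math

def poincare_distance(z1, z2):
    """Hyperbolic distance between points z1, z2 of the unit Poincare disk."""
    num = abs(z1 - z2)
    den = abs(1 - z1 * z2.conjugate())
    return 2 * math.atanh(num / den)

# A hypothetical ring of 7 bubbles around a central one, as in the
# triheptagonal motif:
center = 0 + 0j
ring = [0.6 * cmath.exp(2j * math.pi * k / 7) for k in range(7)]
print([round(poincare_distance(center, z), 3) for z in ring])
# -> seven equal values (~1.386): the ring is a hyperbolic circle, and such
#    distances would be the ones realized at the MB
```
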
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, December 03, 2021

Spin Glasses, Complexity, and TGD

Spin glasses represent an exotic phenomenon, which remains poorly understood in the standard theoretical framework of condensed matter physics. Actually, spin glasses provide a prototype of complex systems and methods used for spin glasses can be applied in widely different complex systems.

A TGD inspired view about spin glasses is discussed.

  1. The TGD view about space-time leads to the notion of magnetic flux tubes and the magnetic body. Besides spins, also long closed magnetic flux tubes would contribute to the magnetization. The basic support for this assumption is the observation that the sum of the NFC magnetization and the FC remanence is equal to the FC magnetization. The magnetic field assignable to a spin glass would correspond to a kind of flux tube spaghetti, and the couplings Jij between spins would relate to the magnetic flux tubes connecting them.
  2. Quantum TGD leads to the notion of "world of classical worlds" (WCW) and to the view about quantum theory as a "complex square root" of thermodynamics (of partition function). The probability distribution for {Jij} would correspond to ground state functional in the space of space-time surfaces analogous to Bohr orbits.
  3. Spin glass is a prototype of a complex system. In the TGD framework, the complexity reduces to adelic physics fusing real physics with various p-adic physics serving as correlates of cognition. Space-time surfaces in H=M4× CP2 correspond to images of 4-surfaces X4⊂ M8c mapped to H by M8-H duality. X4 is identified as 4-surface having as holographic boundaries 3-D mass shells for which the mass squared values are roots of an octonionic polynomial P obtained as an algebraic continuation of a real polynomial with rational coefficients. The higher the degree of P, the larger the dimension of the extension of rationals induced by its roots, and the higher the complexity: this gives rise to an evolutionary hierarchy. The dimension of the extension is identifiable as an effective Planck constant so that high complexity involves a long quantum coherence scale.

    The TGD Universe can be quantum critical in all scales, and the assumption that the spin glass transition is quantum critical explains the temperature dependence of the NFC magnetization in terms of long range, large heff quantum fluctuations and quantum coherence at the critical temperature.

  4. Zero energy ontology predicts that there are two kinds of state function reductions (SFRs). A "small" SFR would be preceded by a unitary time evolution which is a scaling generated by the scaling generator L0. This conforms with the fact that relaxation rates for the magnetization obey a power law rather than an exponential law. "Big" SFRs would correspond to ordinary SFRs and would change the arrow of time. This could explain aging, rejuvenation, and memory effects.
  5. Adelic physics leads to a proposal that makes it possible to get rid of the replica trick by replacing thermodynamics with p-adic thermodynamics for the scaling operator L0 representing energy. What makes p-adic thermodynamics so powerful is the extremely rapid convergence of Z in powers of the p-adic prime p, as the toy example below illustrates.
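
A toy example of this convergence; the degeneracies are hypothetical and the prime is kept small for readability.

```python
p = 107                  # a small p-adic prime for illustration
g = [1, 9, 36, 84]       # hypothetical degeneracies of L0 eigenvalues n = 0..3

def padic_norm(m, p):
    """|m|_p = p^(-k), where p^k exactly divides the nonzero integer m."""
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return float(p) ** -k

# The n:th term of Z is g(n) * p^n; its p-adic norm is at most p^(-n),
# so successive terms shrink by a factor of 1/p.
terms = [gn * p**n for n, gn in enumerate(g)]
print([padic_norm(t, p) for t in terms])   # -> [1.0, ~9.3e-3, ~8.7e-5, ~8.2e-7]
```
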
See the article Spin Glasses, Complexity, and TGD or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, November 18, 2021

Superionic ice, possibly existing inside some planets, discovered

A superionic ice-like phase of water has been discovered at high temperature and pressure (20 gigapascals, much less than the expected pressure of more than 50 gigapascals). See the popular article and the research article.

The bonds between hydrogen atoms and oxygen ions are broken in this phase, and the ionized hydrogen atoms form a fluid, a kind of proton ocean in which the oxygen lattice floats.

In the TGD framework, dark proton sequences with effective Planck constant heff>h at monopole magnetic flux tubes play a key role in quantum biology. Dark DNA codons would be dark proton triplets at monopole flux tubes parallel to DNA strands and would give rise to a fundamental realization of the genetic code.

One can wonder whether the protons of this superionic phase could be dark in the TGD sense and reside at monopole flux tubes. Could they form a superfluid-like or superconductor-like phase by the universal mechanism that I call Galois confinement, which requires that the total momenta of composites of dark protons, having algebraic integers as momentum components, are ordinary integers in suitable units (periodic boundary conditions): see this and this.

It is conjectured that this kind of phase could reside in the interiors of Neptune and Uranus, perhaps even deep inside the Earth. The TGD based view about superconductivity leads to a rather eyebrow-raising question. Could the vanishing of the large scale magnetic fields of planets like Venus and Mars be due to the TGD variant of the Meissner effect, and could these planetary interiors be superconductors in the TGD sense (see this)?

Could superionic ice appear in the interior of Earth? Could one consider the following scenario?

Primordial Earth had a vanishing magnetic field by the Meissner effect caused by superionic ice. Part of the superconducting superionic water melted at lower temperature and pressure, formed ordinary water, and gave rise to underground oceans. Superconductivity was lost in the Earth scale, but the monopole flux based magnetic field and the ordinary magnetic field induced by the currents it generated remained and no longer cancelled each other. In the transition increasing the radius of the Earth by a factor of 2 during the Cambrian explosion, the water in these oceans burst to the surface of the Earth.

Earthquakes that should not occur

There is an interesting finding, which seems to relate to the superionic ice. It has been discovered that there are earthquakes much deeper in the interior of the Earth than expected (see this). These earthquakes occur in the transition zone between the upper and lower mantle (the depth range 410-620 km) and even below it (750 km). The pressure range is 20-25 GPa. The temperature at the base of the transition zone is estimated to be about 1900 K (see this). This parameter range inspires the question whether superionic ice could emerge at the base of the transition zone, and whether the appearance of hydrogen as a liquid in the pores could make possible the earthquakes below the transition zone, just as the presence of ordinary liquid in the pores is believed to make them possible above the transition zone.

In the crust, above a depth of 20 km, the rocks are cold and brittle and prone to breaking, and most earthquakes occur in this region. Deeper down the matter is hotter and the pressure is higher, so that the rocks deform rather than break.

Around a depth of 400 km, just above the transition zone, the rock of the upper mantle consists of olivine, which is brittle. In the transition zone olivine is believed to transform to wadsleyite and at greater depths to ringwoodite. At 680 km, where the upper mantle ends, ringwoodite would transform to bridgmanite and periclase. The higher pressure phases are analogous to graphite, which deforms easily under pressure and does not break, whereas olivine is analogous to diamond and is brittle.

One can understand the earthquakes down to 400 km, near the upper boundary of the transition zone, in terms of the model in which water in the pores of the upper mantle is pushed out by pressure, which leads to breaking. Below this depth water is believed to be totally squeezed out from the pores, so that this mechanism does not work. The deepest reported earthquake occurred at a depth of 750 km and looks mysterious. There are several proposals for its origin.

The area of Bonin island is a subduction zone, and it has been proposed that the boundary between the upper and lower mantle is there at a larger depth than thought. The cold crust of the Earth could allow a lower temperature, so that the matter would remain brittle since the transition to the high pressure forms of rock would not occur. Another proposal is that the region considered is not homogeneous and different forms of rock are present. Even a direct transition of olivine to ringwoodite is possible, and it has been suggested that this could make the earthquakes possible.

Could there be a connection between superionic ice and earthquakes?

TGD allows us to consider the situation from a new perspective by bringing in the notions of magnetic flux tubes carrying dark matter. Also the zero energy ontology (ZEO) might be highly relevant. The following represents innocent and naive questions of a layman at the general level.

  1. ZEO inspires the proposal that earthquakes correspond to "big" state function reductions (BSFRs) in which the arrow of time at the magnetic body of the system changes. This would explain the generation of ELF radiation before the earthquake although one would expect it after the earthquake (see this).

    The BSFRs would occur at quantum criticality and the question is what this quantum criticality corresponds to. Could the BSFR correspond to the occurrence of a phase transition in which the superionic ice becomes ordinary water? If this is the case, the transition zone, and also a region below it, would be near quantum criticality and prone to earthquakes.

  2. The dark magnetic flux tubes are 1-D objects and possess a Hagedorn temperature TH as a limiting temperature. The heat capacity increases without limit as TH is approached (see the toy illustration after this list). Could a considerable part of the thermal energy go to the flux tube degrees of freedom, so that the temperature of the ordinary matter would remain lower than expected and the material could remain in the brittle olivine form?
  3. Could the energy liberated in the earthquake correspond to the dark magnetic energy (for large enough value of heff assignable to gravitational magnetic flux tubes) assignable to the flux tubes rather than to the elastic energy of the rock material? Could the liberated energy be dark energy liberated as heff decreases and flux tubes suddenly shorten? Could this correspond to a phase transition in which superionic ice transforms to an ordinary phase of water?
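
A toy illustration of the limiting Hagedorn temperature of item 2: with an exponential density of states ρ(E) ~ exp(E/TH), the average energy is ⟨E⟩ = 1/(1/T - 1/TH), which, together with the heat capacity, grows without bound as T approaches TH. Units are arbitrary.

```python
T_H = 1.0                        # Hagedorn temperature in arbitrary units
for T in (0.5, 0.9, 0.99, 0.999):
    E = 1.0 / (1.0/T - 1.0/T_H)  # average energy for rho(E) ~ exp(E/T_H)
    print(f"T = {T}: <E> = {E:.1f}")
# <E> = 1, 9, 99, 999: the flux tube degrees of freedom can absorb an
# arbitrarily large amount of thermal energy near T_H
```
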
One can also ask more concrete questions.
  1. Suppose that water below the transition zone (P > 20 GPa and T > 1900 K) can exist as superionic ice containing hydrogen ions in liquid form. Could the high pressure force the superionic liquid out from the pores and induce the breaking?
  2. In the range 350-655 km, the temperature varies in the range 1700-1900 K (see this). The temperature at the top of the transition zone would be slightly above 1700 K. Could regions of superionic ice appear already at 1700 K, which is below T = 2000 K?
  3. Could the transition zone be at criticality against the phase transition to superionic water? This idea would conform with the proposal that the region in question is not homogeneous.

See the article Updated version of Expanding Earth model or the chapter Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Mott insulators learn like living matter

Researchers at Rutgers University have found that quantum materials, in this case Mott insulators, are able to learn very much like living matter (see this). The conductivity of the quantum material represented the behavior, and the sensory input was represented by external stimuli like oxygen, ozone, and light.

The finding was that the conductivity depends on these stimuli and that the system mimics non-associative learning. Non-associative learning does not involve a pairing of stimuli but habituation or sensitization to a single stimulus.

I have already earlier (see this) briefly considered transition metals, Mott insulators, and antiferromagnets from the point of view of TGD inspired theory of high Tc superconductivity.

  1. By looking at Wikipedia (see this), one finds that Mott insulators are transition metal oxides such as NiO. Transition metals, such as Ni, can have unpaired valence electrons since they can appear in the electronic configurations [Ar] 3d8 4s2 or [Ar] 3d9 4s1. This should make transition metals and their oxides conductors. They are not, since they seem to somehow develop an energy gap between states in the same valence band, making them insulators.
  2. Mott developed a model for NiO as an insulator: the expected conduction would have been based on the transition between neighboring Ni2+O2- molecules

    (Ni2+O2-)2 → Ni3+O2- + Ni1+O2-.

    In the latter configuration, the number of valence electrons of Ni is odd for both neighbors.

  3. The formation of the gap can be understood as a competition between the repulsive Coulomb potential U between 3d electrons and the transfer integral t of 3d electrons between neighboring atoms assignable to the transition. The total energy difference between the two states is E = U - 2zt, where z is the number of neighboring atoms. A large value of U leads to the formation of a gap, implying the insulator property (see the sketch after this list).
  4. Also antiferromagnetic ordering is necessary for the description of Mott insulators. Even this is not enough; the rest, which is not so well understood, is colloquially called mottism. The features of Mott insulators that require mottism are listed in the Wikipedia article. They include the vanishing of the single particle Green function along a connected surface in the first Brillouin zone and the presence of a charge 2e boson at low energies.
  5. The description of both Mott insulators and high Tc superconductors involves antiferromagnetism, and Mott insulators exhibit extraordinary phenomena such as high Tc superconductivity and the so-called colossal magnetoresistance, thought to be due to the interaction between the charge and spin of conduction electrons.
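
A minimal sketch of the Mott criterion of item 3; the numbers are hypothetical and merely illustrate the competition between U and 2zt.

```python
def mott_energy_difference(U, t, z):
    """E = U - 2*z*t: E > 0 favors the insulating (gapped) state."""
    return U - 2 * z * t

U = 8.0    # eV, on-site Coulomb repulsion between 3d electrons (illustrative)
t = 0.3    # eV, transfer integral between neighboring atoms (illustrative)
z = 6      # number of nearest neighbors, e.g. in a simple cubic lattice

E = mott_energy_difference(U, t, z)
print(f"E = {E:.1f} eV -> {'insulator' if E > 0 else 'metal'}")   # E = 4.4 eV
```
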
In the TGD framework, the description of high Tc superconductors (see this, this and this) involves pairs of monopole flux tubes with opposite directions of monopole magnetic flux, not possible in Maxwellian electrodynamics. The members of the Cooper pairs, which are dark in the TGD sense, having an effective Planck constant heff≥ h, reside at the monopole flux tubes. The Cooper pairs are present already above Tc, but the flux tubes are short and closed so that the supercurrent flows only in short scales. At Tc long flux tubes are formed by reconnection.
  1. Dark valence electrons could help to understand Mott insulators. Transition metals are known for a strange effect in which the valence electrons seem to disappear (see this, this, and this). The TGD proposal is that the electrons become dark in the TGD sense.
  2. It has become clear that dark electrons can appear only in bound states for which the momenta are algebraic integers in an extension of rationals with dimension n = heff/h0, and which are Galois singlets: one has Galois confinement. This implies that the total momentum is an ordinary integer, which guarantees periodic boundary conditions (see this and this).

    Therefore free dark electrons are not allowed, and Cooper pairs, and possibly also states formed by a larger number of electrons, say four, as has been found (see this), are possible as Galois singlets. In the TGD inspired quantum biology, dark proton triplets realize the genetic codons, and genes could correspond to N-codons as Galois confined states of 3N dark protons (see this).

  3. As a rule, single particle energies increase with increasing heff, and the thermal energy feed could increase the effective value of the Planck constant of an unpaired valence electron of a Mott insulator from h to heff = nh0 > h, so that it would become dark in the TGD sense. Here n denotes the dimension of the extension of rationals assignable to the space-time region. The natural assumption is that Galois confinement forces the Cooper pairing of the unpaired electrons of neighboring atoms.
  4. Above Tc, the flux tubes associated with the Cooper pairs would be too short for large scale superconductivity, so that one would have a conductor or a Mott insulator. Under certain conditions, involving a low enough temperature, a supraflow in long scales would become possible by the mechanism described above. The colossal magnetoresistance could involve a transfer of electrons as Cooper pairs along the magnetic flux tubes of the external magnetic field, which would be too short to give rise to superconductivity or even conductivity. External magnetic fields could also induce dark ferromagnetism as the formation of dark flux tubes.
Dark electrons, protons, and ions residing at the magnetic flux tubes of the "magnetic body" (MB) of the system play a key role in the TGD based quantum biology and are essential for learning as self-organization. heff serves as a measure for the number theoretical complexity, and therefore the "intelligence", of the system. The MB naturally acts as the "boss".

Also now the MB of the Mott insulator could play a key role: the MB with heff>h would be the "boss", which learns and induces changes in the behavior of the ordinary matter, the "biological body" (BB). In non-associative learning, adaptation and sensitization are involved, and it would be the MB that adapts or sensitizes. The TGD view of the neuron proposes a rather detailed model for the communication between the BB and the MB (see this).

See the article TGD and condensed matter or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, November 17, 2021

About TGD view of neuron

The realization that saltation, i.e. conduction over the myelinated portions of the axon, is still a poorly understood phenomenon inspired a careful reanalysis of the earlier TGD inspired visions of nerve pulse conduction, EEG, and the brain, based on the new view about space-time, the notion of the magnetic body carrying heff>h phases behaving like dark matter, and the zero energy ontology (ZEO) based quantum measurement theory extending to a theory of consciousness.

The TGD view replaces the nerve pulse with a wave assignable to a generalized Josephson junction formed by the lipid layers of the cell membrane, for which the Josephson frequency fc is replaced by the sum fc+Δfc, where Δfc is the difference between the cyclotron frequencies for transversal flux tubes at the two sides of the axon. What propagates is the deviation of the membrane potential below the critical value for the generation of an action potential. There would be no action potential in the myelinated portions of the axon; it would be generated only in the non-myelinated portions, of length about 1 μm, and gives rise to chemical effects and also communicates a signal to the magnetic body, if the notion of a generalized Josephson junction is accepted.

An interesting challenge for the model is the discovery that the density of the voltage gated ionic channels in the dendrites of neurons is considerably lower for humans than for other mammals. If, as the general model suggests, the spatiotemporal patterns of Josephson radiation emitted by the segments between nearby ionic channels or pumps define analogs of sentences of language, with the nerve pulse as a period analogous to the stop codon of DNA, then these sentences would be longer for humans, which could relate to the emergence of the human language capacity.

See the article About TGD view of neuron or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, November 12, 2021

About systems that apparently defy Newton's third law

Gary Ehlenber sent a very interesting Quanta Magazine link telling about the work of Vincenzo Vitelli (see this).
  1. The topic is extremely interesting, but the popular article produces a lot of confusion by introducing misleading metaphors for Newton's law of reciprocity, which states only that the total conserved quantities are conserved for an N-particle system. If conservation appears to fail in a 2-particle system, there must be a third system present: the magnetic body (MB) in the TGD framework.
  2. Also the claim that energy is not conserved is simply wrong. A more precise statement is that thermodynamic equilibria are not reached in some systems, and this, together with the existence of non-equilibrium systems, suggests that the arrow of time is not always the same, as in zero energy ontology (ZEO). If one accepts the notions of the MB and ZEO, there is no need to give up conservation laws.
  3. The importance of singular points, which I would call critical or quantum critical points, is also emphasized. At these points the conservation laws would be violated. The TGD interpretation is different: at these points the transfer of conserved quantities between the MB and the system considered becomes important.
  4. Polariton-exciton systems are mentioned as a starting point of the work of Vitelli. This system allows Bose-Einstein condensates (BECs) at room temperature, but an energy feed is required. This is something totally new. TGD predicts forced Bose-Einstein condensates, and I have discussed polariton-exciton BECs as an example.
The topic is highly interesting from the TGD point of view for several reasons.
  1. The notion of the magnetic body (MB) appears as a third system in non-reciprocal situations and can quite concretely lead to apparent violations of energy and momentum conservation. These violations are small because the MB uses energy for control purposes; the biological body does the hard work.
  2. Number theoretic TGD predicts a hierarchy of Planck constants. The MB carries heff>h phases. This means a larger algebraic complexity, a kind of IQ, and makes it the "boss". Also the longer length scale of quantum coherence, typically proportional to heff, implies this. The energy of a particle increases with heff, and one must have a metabolic energy feed to prevent the heff distribution from flattening by spontaneous reductions of heff values. The formation of bound states can however compensate for the increase of energy when heff is increased.

    Bound state formation could be universally based on this, and one ends up with a quite concrete proposal for how bound states are formed as what I call Galois singlets. The 4-momenta of fundamental fermions are algebraic integers for a given extension of rationals labelling the space-time region, and Galois confinement says that the bound states have integer valued 4-momenta: this guarantees periodic boundary conditions.

  3. In the TGD framework, the hierarchy of heff phases behaving like dark matter predicts that driven superconductivity (and various BECs) is possible. Cooper pairs and also charges with heff>h give rise to non-dissipating supra currents at the MB. The problem is that heff is reduced spontaneously. For Cooper pairs the binding energy stabilizes the pairs, since the energy of the pair drops below that of free charges. This works below the critical temperature. Above the critical temperature one can feed energy into the system so that the equilibrium becomes a flow equilibrium. This applies to various Bose-Einstein condensates, in particular the polariton-exciton condensate.
  4. ZEO predicts that in an ordinary ("big") state function reduction time reversal occurs. This solves the basic problems of quantum measurement theory but also forces a generalization of thermodynamics and leads to a new view about non-equilibrium systems, since time reversal means that dissipation occurs in the reverse time direction for a subsystem and looks like self-organization to an outside observer.

    In particular, one must give up the idea about stable equilibrium states as energy minima. If the subsystem is ending up in such a state, it can make a BSFR changing the arrow of time and, from the point of view of the outsider, starts to extract energy from the environment. The Negentropy Maximization Principle (NMP) forces this, since in thermal equilibrium information does not increase anymore. In biology homeostasis is based on this.

  5. Singular points as analogs of critical points are mentioned in the article. At these points one cannot distinguish between two phases. In the TGD framework, quantum critical points are points at which long range fluctuations are possible, and they correspond to large values of the effective Planck constant heff at the MB of the system, labelling phases behaving like dark matter. The phase transition creating these phases means that conservation laws are apparently violated. This provides a test for the TGD vision.
  6. Information itself is a central notion missing from  standard physics. Number theoretic physics involving p-adic number fields provides correlates for cognition and the formal  p-adic  analog of entropy can be negative and is interpreted as a measure for information associated with entanglement of 2 systems (2-particle level)  whereas ordinary entropy is related to the loss of information  about either entangled  state (1-particle level).  The sum of two Shannon entropies is by NMP non-negative and increases as the dimension of extensions of rationals increases. This implies evolution as an increase of algebraic complexity, of information sources, and quantum coherence scales. 
See the article TGD as it is towards end of 2021 or the chapters of the book TGD and Condensed Matter.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, November 11, 2021

Humans are different

The realization that saltation, i.e. conduction over the myelinated portions of the axon, is still a poorly understood phenomenon inspired a careful reanalysis of the earlier TGD inspired visions of nerve pulse conduction, EEG, and the brain, based on the new view about space-time, the notion of the magnetic body carrying heff>h phases behaving like dark matter, and the zero energy ontology (ZEO) based quantum measurement theory extending to a theory of consciousness.

The TGD view replaces the nerve pulse with a wave assignable to a generalized Josephson junction formed by the lipid layers of the cell membrane, for which the Josephson frequency fc is replaced by the sum fc+Δfc, where Δfc is the difference between the cyclotron frequencies for transversal flux tubes at the two sides of the axon.

What propagates is the deviation of the membrane potential below the critical value for the generation of an action potential. There would be no action potential in the myelinated portions of the axon; it would be generated only in the non-myelinated portions, of length about 1 μm, and gives rise to chemical effects and also communicates a signal to the magnetic body, if the notion of a generalized Josephson junction is accepted.
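
Some back-of-envelope numbers for the two frequencies entering the generalized Josephson frequency; the formulas are standard, the 0.2 Gauss value is the endogenous magnetic field appearing in other postings, and the identification itself is the TGD assumption.

```python
import math

e = 1.602176634e-19          # C
h = 6.62607015e-34           # J s

def josephson_frequency(V):
    """f_J = 2eV/h for a junction at voltage V (Cooper pair charge 2e)."""
    return 2 * e * V / h

def cyclotron_frequency(q, m, B):
    """f_c = qB/(2 pi m) for charge q and mass m in magnetic field B."""
    return q * B / (2 * math.pi * m)

V_rest = 0.07                # V, membrane potential of about -70 mV
B_end = 0.2e-4               # T, i.e. 0.2 Gauss
m_Ca = 40 * 1.6726e-27       # kg, Ca(2+) mass, roughly 40 proton masses

print(f"Josephson: {josephson_frequency(V_rest):.2e} Hz")   # ~3.4e13 Hz,
                                                            # IR wavelength ~9 um
print(f"Ca(2+) cyclotron: {cyclotron_frequency(2*e, m_Ca, B_end):.1f} Hz")  # ~15 Hz
```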

A test for this picture came from a popular article in Medicalxpress (see this) telling about a highly interesting observation described in the Nature article "Allometric rules for mammalian cortical layer 5 neuron biophysics" by Mark Harnett (see this).

The finding is that the density of voltage gated channels in the human brain is dramatically lower than in other mammalian brains.

  1. The neuronal system studied was layer 5 pyramidal neurons. Dendrites of these neurons were considered. Densities of voltage gated channels per neuron volume and per brain volume were studied. The ion channels studied were Na and K channels. The channels considered are ion pumps and need metabolic energy.

    10 mammalian species were studied so that cortical thickness and neuron size were the varying parameters. As the neuron size increases, the density of neurons decreases.

  2. The first finding was that the density of ion channels for the neuron increases as the neuron size increases. The density of ion channels per brain volume was however found to be constant.

    Humans were found to be an exception. The density of the channels per brain volume is dramatically reduced. The proposed interpretation is that this reduces the amount of metabolic energy needed to generate action potentials and the metabolic energy is used for other purposes.

Before continuing, it is good to recall some basic facts about neurons. Synapses, dendrites, and myelination are the basic notions needed if one tries to understand these findings. It is enough to notice that most synaptic contacts are from axons to dendrites, but that almost any other combination is possible. Myelination occurs mostly for axons and only rarely for dendrites. The dendrites of the excitatory pyramidal cells studied in the article are profusely decorated with dendritic spines.

Could the TGD view about the brain and neuron allow us to interpret the difference between humans and other mammals? Why would the density of the voltage gated ionic channels be smaller for pyramidal dendrites? How could this relate to the evolutionary leap leading to the emergence of humans?

The TGD view about the neuron and brain allows us to consider two different but not mutually exclusive explanations for the finding.

  1. The spatial resolution of the percept produced at the MB by Josephson radiation would be reduced for humans. This need not be a drawback, since it could also be understood as an abstraction. High spatial resolution would be needed only for local percepts in the scale of the neuron soma. On longer scales it would mean the generation of useless information and a waste of metabolic energy.

    The natural guess is that the resolution scale is proportional to ℏeff,B at intra-brain flux tubes, in turn proportional to ℏeff,MB for the flux tubes at the MB of the brain, which has quantal length scales much longer than the brain size. The range of variation of the spatial resolution could correspond to the variation of ordinary photon wavelengths between visible wavelengths (of order μm) and IR wavelengths of order 14.8 μm. Note however that the lengths of the myelinated portions are about 100 μm.

  2. Suppose that Josephson radiation patterns associated with the myelinated portions of axon define "sentences" and the unmyelinated portions define periods ending these "sentences" by a nerve pulse. Does the notion of "sentence" make sense also for dendrites?

    At least in the case of humans, having a reduced volume density of ion channels, this picture might generalize also to dendrites, which are usually unmyelinated, since myelination is not needed for dendrites, which are typically short compared to axons. If so, the average distance between two ion channels would define the length and duration of a "sentence".

    For mammals other than humans, the "sentences" would be very short, or the notion of "sentence" would not make sense at all (the spatial extent of the perturbation of the membrane potential would be of the order of the wavelength of the soliton). Could this reflect the emergence of language in humans? The MB would not only receive long "sentences" but also send them back as control commands inducing motor actions and virtual sensory input.

  3. If the communication between the pre- and postsynaptic neurons occurs via the MB, dendrites would receive "sentences" from the MB of the presynaptic neuron as feedback. If a generalized motor action is in question, a BSFR and time reversal would be involved. The action potentials propagate along axons in a single direction, which would reflect a fixed arrow of time. Does the reversed arrow of time imply that the action potentials along dendrites propagate outwards from the cell body?

    According to Wikipedia (see this), dendrites indeed have the ability to send action potentials back into the dendritic arbor. Known as back-propagating action potentials, these signals depolarize the dendritic arbor and provide a crucial component toward synapse modulation and long-term potentiation. Furthermore, a train of back-propagating action potentials artificially generated at the soma can induce a calcium action potential (a dendritic spike) at the dendritic initiation zone in certain types of neurons.

  4. Dendrites are usually unmyelinated. This conforms with the fact that dendrites are much shorter than axons, so that myelination is not needed. Myelination would also restrict the number of synaptic contacts. Myelinated dendrites have however been found in the motoneurons of the frog (see this) and in the olfactory bulb (OB) of some mammals, for instance the mouse (see this). Their fraction is small.

    The olfactory system (OS) is very interesting in this respect since it represents the oldest part of the CNS. The axons from the nasal cavity to the olfactory bulb (OB), where odours are thought to be processed, are unmyelinated, as are the axons of invertebrates in general. The axons from the olfactory bulb (OB) to the olfactory cortex (OC) are myelinated. This conforms with the idea that OB corresponds to the oldest part of OS. The TGD interpretation would be that OB sends the results of the analysis to OC via the MB as "sentences".

    OB can also have a small fraction of myelinated dendrites, implying a reduction in the number of synaptic contacts. The rule "A→B" → "A→ MB→ B" (a signal from A to B in the brain goes via the MB and involves a BSFR at the MB) suggests that there is an MB between the olfactory epithelium and OB and that some analysis is performed at this MB. If so, the myelinated dendrites would correspond to input from the MB as long "sentences".

See the article About TGD view of neuron or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, November 09, 2021

Does dark matter induce formation of more dark matter?

It has been proposed (see this) that dark matter could induce the formation of more dark matter. This would suggest that dark matter is a phase of ordinary matter and that its formation is a phase transition generated by a seed. This supports the TGD view about dark matter as phases of ordinary matter with effective Planck constant heff=nh0 having arbitrarily large values: phases with different values of heff are relatively dark.
  1. The integer n corresponds to the dimension of the extension of rationals associated with a polynomial determining the space-time surface as a surface in M8, mapped to H=M4×CP2 by M8-H duality. n also corresponds to the order of the Galois group acting as a symmetry group.
  2. Galois confinement suggests a universal mechanism for bound state formation: physical states are composites of particles with momenta whose components are algebraic integers. The components of the total 4-momentum would be ordinary integers by periodic boundary conditions. This mechanism also has a generalization: one has a Galois singlet wave function in the space of momenta with algebraic integers as components.
  3. As a rule, particle energies increase with heff and the analog of "metabolic energy feed" is needed to prevent the reduction of heff to h. In living matter the function of metabolism is just this.
  4. The phase transitions increasing heff are possible in the presence of an energy feed. Bose-Einstein condensation and the formation of Cooper pairs (and even of states with a larger number of particles, such as the 4-electron states observed recently) could be examples of this. The binding energy of the composite would compensate for the energy needed to increase heff (see the sketch below). Fermi statistics with algebraic integer valued momenta allows more Galois confined bound states with a given energy, therefore favoring the occurrence of the phase transition.
  5. Phase transitions quite generally have the property that a small seed induces the phase transition. This would predict that the presence of dark matter favors the emergence of more dark matter.
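
A toy bookkeeping sketch of points 3 and 4; all numbers are hypothetical, and the linear scaling of the single particle energy with heff is an assumption made only for illustration.

```python
def heff_cost(E_single, n_old, n_new):
    """Energy cost of scaling a single particle energy E ~ h_eff from n_old to n_new."""
    return E_single * (n_new - n_old) / n_old

E_single = 1.0    # eV, single particle energy at n_old (illustrative)
cost = 2 * heff_cost(E_single, n_old=1, n_new=2)   # two electrons of a pair
E_bind = 2.5      # eV, hypothetical binding energy of the Galois singlet

# The transition is favorable when the binding energy pays the h_eff cost:
print("favorable" if E_bind > cost else "unfavorable")   # cost = 2.0 eV -> favorable
```
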
See the article TGD as it is towards end of 2021.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

The Moon has possessed a magnetic field

The surprise of yesterday (see this) was that the Moon has had a magnetic field, which at the surface of the Moon had the same order of magnitude as the magnetic field of the Earth, about BE = .5 Gauss at the surface. The finding is deduced from the direction of the magnetization of the material at the surface. The same method is used to deduce information about the magnetic history of the Earth.

The idea that the Moon would have a liquid interior carrying a net current, which by the dynamo effect would create the magnetic field, looks rather implausible. This proposal has problems also in the case of the Earth. The problem is that the current creating the magnetic field should have dissipated a long time ago. The magnetic field of the Earth however stubbornly continues to exist. The same problem is encountered with magnetic fields in cosmic scales: they should not exist, since in standard cosmology the currents would be short ranged.

At the microscopic level, TGD replaces magnetic fields with magnetic flux quanta: flux tubes and flux sheets. The flux tubes can be monopole flux tubes, in which case they are stable and no current is needed to preserve the field. This is crucial. The cross section of a monopole flux tube is a closed 2-surface, which requires a non-trivial space-time topology. Flux tubes of the second kind have a vanishing flux and correspond to Maxwellian magnetic fields requiring a current.

In the case of the Earth, a good guess for the strength of the monopole contribution would be about .2 Gauss, roughly 2BE/5, from the experiments of Blackman et al, which led to the notion of dark matter as heff>h phases at magnetic flux tubes. This field would play a key role in the TGD inspired quantum biology, but this value would not be the only value possible.

This leads to a model for the maintenance of BE. When the non-monopole part of BE becomes weak enough, the magnetic body (MB) of the Earth turns and induces currents re-creating the induced part (see this).

Could the monopole part of the magnetic field at monopole flux tubes play a role analogous to the field H, which induces a magnetization M cancelling the total field B = H+M in the case of diamagnets? H and M would reside at different space-time sheets, but their effects on test particles touching all the sheets would sum up, and at the QFT limit B would be the detected effective field, vanishing for diamagnets.

Superconductors are diamagnetic. This is usually explained in terms of the Meissner effect. TGD however leads to a model of superconductivity in which the supra currents are carried by heff>h phases at flux tubes, presumably monopole flux tubes. Could magnetic fields actually penetrate into superconductors as monopole flux tubes (or sheets) with quantized flux, inducing a magnetization cancelling the total effective field at the quantum field theory limit, which is the sum over the fields at different space-time sheets as far as its effects are considered (see this)?

Venus is in many respects a twin of the Earth but does not have a detectable magnetic field. Also Mars seems to have no global magnetic field but has auroras and local magnetic fields. This inspires crazy questions. Could Venus be a diamagnet? Could the magnetic bodies of Venus, Mars, and also the Moon be superconductors in the scale of the entire object? But why would the MB of the Earth not be able to cancel the total field? Could the rotating liquid core induce an additional field, which prevents this?

See the articles Empirical support for the Expanding Earth Model, TGD view about classical gauge fields, and TGD as it is towards end of 2021.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, November 08, 2021

Superdeterminism and TGD

Gary Ehlenber sent me an article by Sabine Hossenfelder and Tim Palmer (see this). The article seems like a good collection of arguments for and against superdeterminism.

When I encounter this kind of proposal, I ask a simple question: what new phenomena are predicted and which anomalies does the new approach solve? In the case of superdeterminism, the list of such phenomena is very short. Therefore superdeterminism looks to me like an attempt to return to the good old days before quantum theory and to save the materialistic/physicalistic world view, which implies that the notions of ethics and morality are illusions.

It must be added that the entire theoretical physics community, and also the community of rebels, suffers from the same conservatism; superdeterminism represents only an extreme example of it.

In my view, one should start from where we are now and try to see what in our conceptual landscape is wrong and what new notions and ideas are needed.

To me, the only way forward is to accept non-determinism and the basic paradox of quantum measurement theory without attempts to "interpret", and to ask a simple question: what goes wrong in our ontology? Does it really make sense to give up the entire ontology, as the Copenhagen interpretation suggests?

There are many deep problems besides the measurement problem.

  1. Is our view about time somehow wrong? Should we distinguish between the causation of classical physics and that of free will? We experience free will directly: should we accept it as real and perhaps assign it to the quantum jump?
  2. Is the assumption about a fixed arrow of time correct? We know that experienced time and geometric time are different. Should we accept this also as theoreticians?
  3. Does physics really become classical and deterministic on some scale? Could this be an illusion due to a wrong ontology?
  4. Is deterministic classical physics an exact part of quantum theory? After all, every quantum measurement is interpreted in terms of classical correlates.
  5. Does the mysterious entanglement have classical, geometric space-time correlates? The ER-EPR correspondence could be interpreted in this manner.
  6. There are also the notions of wave-particle duality and position-momentum duality: do we really understand them? Position-momentum duality is lost in quantum field theory, since coordinates cease to be dynamical variables and become mere parameters. Shouldn't we worry about this?
TGD allows us to answer these questions.
  1. A particle as a point-like entity is not only a source of divergence problems but also suggests local realism, which excludes classical space-time correlates for the notion of entanglement. In TGD, particles as 3-surfaces solve the divergence problem, and the new view about classical fields as surfaces leads to the notion of field/magnetic body (MB). Flux tubes connecting particle-like 3-surfaces serve as space-time correlates/prerequisites of entanglement. Flux tubes replace wormholes in the ER-EPR correspondence. Many-sheeted space-time is a closely related second new notion.

    The MB carrying dark matter as heff>h phases brings a totally new level into the description and has a key role in biology. The heff phases emerge from a generalization of physics: the number theoretic vision and the geometric view about physics are dual, and this duality actually generalizes the position-momentum duality lost in quantum field theories.

  2. The measurement problem producing myriads of interpretations is the key problem. Here our notion of time is the source of problems. Despite the obvious differences between experienced and geometric time, we stubbornly continue to identify them. A second stubborn assumption is that the arrow of time is fixed, despite the fact that in self-organization the arrow of time effectively changes. The standard explanation is in terms of non-equilibrium thermodynamics, but this might be only a part of the story, in particular in living matter.

    In practical quantum theory (quantum optics) one is forced to introduce also the notion of weak measurement, which has no real counterpart in the standard picture. It is analogous to a classical measurement: no dramatic changes occur.

    In a quantum theory based on zero energy ontology, "big" and "small" state function reductions (BSFRs and SSFRs) emerge naturally. BSFR is the counterpart of the ordinary quantum measurement and changes the arrow of time. SSFR is the counterpart of a weak measurement and preserves the arrow of time. The experiments of Minev et al and those of Libet provide direct support for BSFR. BSFR also allows us to understand why physics looks classical, not only in long length scales, but always from the viewpoint of a system whose arrow of time is opposite to that of the monitored system.

    BSFR makes possible dissipation with a reversed arrow of time, which looks like self-organization. The postulated extremely complex biological programs would be just dissipation with an opposite arrow of time, implied by a generalization of the second law. Homeostasis, as the paradoxical ability to stay near (quantum) criticality, would also have a trivial explanation.

    BSFR also leads to the vision of life and death as universal phenomena, not limited to bio-chemical systems only.

  3. The number theoretic vision involving M8-H duality, which generalizes position-momentum duality to the space-time level, leads to the notion of cognitive representation, providing not only a unique discretization of the space-time surface but also correlates of cognition. The Galois group becomes a symmetry group. Galois confinement states that quarks as fundamental fermions, with 4-momenta whose components are algebraic integers, form bound states with total 4-momenta whose components are ordinary integers by periodic boundary conditions. Galois confinement could be behind the formation of bound states universally.
See the article TGD as it is towards end of 2021.

For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD. 


Sunday, November 07, 2021

Could computable reals (p-adics) replace reals (p-adics) in physics?

For some reason I have managed not to encounter the notion of a computable number (see this) as opposed to that of a non-computable number (see this). The reason is perhaps that I have been too lazy to take computationalism seriously enough.

A computable real number is a number which can be produced to arbitrary accuracy by a Turing machine, which by definition has a finite number of internal states, takes a natural number as input, and produces a natural number as output. A Turing machine computes the values of a function from natural numbers to natural numbers by applying a recursive algorithm.

The following three formal definitions of the notion are equivalent.

  1. The number a is computable if it can be expressed in terms of a computable function n→ f(n) from natural numbers to natural numbers characterized by the property

    (f(n)-1)/n≤ a≤ (f(n)+1)/n .

    For a rational a=q, f(n) given by the nearest integer to nq satisfies the conditions. Note that this definition does not work for p-adic numbers since they are not well-ordered.

  2. The number a is computable if for an arbitrarily small rational number ε there exists a computable function producing a rational number r satisfying |r-a|≤ ε. This definition works also for p-adic numbers, since it involves only the p-adic norm, whose values are powers of p and therefore real.
  3. The number a is computable if there exists a computable sequence of rational numbers ri converging to a such that |a-ri| ≤ 2^-i holds true. This definition works also for 2-adic numbers, and its variant obtained by replacing 2 with the p-adic prime p makes sense for p-adic numbers. (See the Python sketch below.)
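A minimal Python sketch of definitions 1 and 3, using exact rational arithmetic via the standard fractions module; the function names are mine and purely illustrative:

    from fractions import Fraction
    from math import factorial

    def f_rational(q):
        # Definition 1 for a rational a = q: f(n) = nearest integer to n*q
        # satisfies (f(n)-1)/n <= q <= (f(n)+1)/n.
        return lambda n: round(n * q)

    def e_approx(i):
        # Definition 3 for a = e: return a rational r_i with |e - r_i| <= 2**(-i).
        # The tail of the series e = sum 1/k! is bounded by 2/(K+1)!,
        # so K is grown until this bound drops below 2**(-i).
        K = 0
        while Fraction(2, factorial(K + 1)) > Fraction(1, 2**i):
            K += 1
        return sum(Fraction(1, factorial(k)) for k in range(K + 1))

    print(float(e_approx(20)))  # 2.7182818..., within 2**(-20) of e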
The set Rc of computable real numbers and its p-adic counterpart Qp,c have highly interesting properties.
  1. Rc is enumerable and can therefore be mapped to a subset of the rationals: even the ordering can be preserved. Also Qp,c is enumerable, but now one cannot speak of ordering. As a consequence, most real (p-adic) numbers are non-computable. Note that the pinary expansion of a rational is periodic after some pinary digit; for a p-adic transcendental this is not the case.
  2. Algebraic numbers are computable, so that one can regard Rc as a kind of completion of the algebraic numbers obtained by adding computable transcendentals. For instance, π and e are computable. 2π can be computed by replacing the unit circle with a regular polygon with n sides and estimating the length as nLn, where Ln is the length of a side (see the sketch after this list). e can be computed from the standard series. Interestingly, ep is an ordinary p-adic number, since the defining series converges p-adically. An interesting question is whether there are other similar numbers. Certainly many algebraic numbers correspond to ordinary p-adic numbers.
  3. Rc (Qp,c) is a number field, since the arithmetic binary operations +, -, ×, / are computable. Also differential and integral calculus can be constructed. The derivative as a limit can be computed by restricting the consideration to computable reals, and there is always a computable real between two computable reals. Also the Riemann sum can be evaluated as a limit involving only computable reals.
  4. An interesting distinction between real and p-adic numbers is that in the sum of real numbers the arbitrarily high digits can affect even all the lower digits, so that it requires computational work to predict the outcome. For p-adic numbers the carry digits affect only the higher digits. This is why p-adic numbers are tailor-made for computational purposes. The canonical identification ∑ xnpn → ∑ xnp-n, used in the p-adic mass calculations to map p-adic mass squared to its real counterpart (see this), maps p-adics to reals in a continuous manner (see the sketch after this list). For integers this correspondence is 2-to-1 due to the fact that the p-adic numbers -1= (p-1)/(1-p) and 1/p are both mapped to p.
  5. For computable numbers, one cannot define the relation =. One can only define equality in some resolution ε. The category theoretical view about equality is also effective and conforms with the physical view.

    Also the relations ≤ and ≥ fail to have computable counterparts, since only the absolute value |x-y| can appear in the definition, and one loses the information about the well-ordered nature of the reals. For p-adic numbers there is no well-ordering, so nothing is lost. A restriction to non-equal pairs however makes the order relation computable; for p-adic numbers the same is true.

  6. A computable number is obviously definable, but there are also definable numbers which are not computable. Examples are the Gödel numbers, in a given coding scheme, of statements which are true but not provable: the corresponding set of natural numbers is not computable. The same holds for the Gödel numbers coding for undecidable problems such as the halting problem. Chaitin's constant, which gives the probability that a random Turing computation halts, is a non-computable but definable real number.
  7. Computable numbers are arithmetic numbers, that is, numbers definable in terms of first order logic using Peano's axioms. First order logic does not allow statements about statements, and one has an entire hierarchy of statements about statements about... statements. The hierarchy of infinite primes defines an analogous hierarchy in the TGD framework and is formally similar to a hierarchy of second quantizations (see this).
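To illustrate points 2 and 4, here is a minimal Python sketch (again with my own illustrative names) of the polygon estimate of 2π and of the canonical identification restricted to natural numbers:

    from math import sqrt

    def two_pi(doublings=25):
        # Estimate 2*pi = lim n*L_n: start from the square (n=4, side sqrt(2))
        # inscribed in the unit circle and double the number of sides, using a
        # numerically stable form of s_2n = sqrt(2 - sqrt(4 - s_n**2)).
        n, s = 4, sqrt(2.0)
        for _ in range(doublings):
            s = s / sqrt(2.0 + sqrt(4.0 - s * s))
            n *= 2
        return n * s

    def canonical_identification(m, p):
        # Map the natural number m = sum x_n p^n to the real number sum x_n p^(-n).
        r, n = 0.0, 0
        while m > 0:
            m, x = divmod(m, p)
            r += x * float(p) ** (-n)
            n += 1
        return r

    print(two_pi())                         # ~6.283185307
    print(canonical_identification(17, 2))  # 17 = 10001 in base 2 -> 1 + 2**(-4) = 1.0625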
See the article MIP*= RE: What could this mean physically? or the chapter Evolution of Ideas about Hyper-finite Factors in TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Tuesday, November 02, 2021

Does consciousness survive bodily death?

The Bigelow institute organized an essay competition with the question "What is the best possible evidence for the survival of human consciousness after bodily death?". I wrote an essay, or rather a research article of almost 100 pages, with the generous help of my friend Paul Kirsch, who also proposed that I should participate. It was not surprising that I did not become a half-millionaire. The concepts and ideas certainly went completely over the heads of the respected jury.

It is very difficult to provide watertight evidence for life after death, since near-death experiences are subjective and do not provide objective proof.

The situation changes if one has a testable theory of consciousness. The theory of consciousness presented here is inspired by Topological Geometrodynamics (TGD). TGD was born as a proposal for a unification of fundamental interactions, and indeed provides a general theory of consciousness as a generalization of quantum measurement theory predicting that consciousness, life and death are universal phenomena. The theory relies on new views of space-time and classical fields, and provides a new ontology behind quantum theory that predicts that state function reduction involves time reversal.

The proposed hypothesis forces a new view of the relationship between experienced time and the physicist's time, and generalizes thermodynamics so that the second law is replaced with what I call the Negentropy Maximization Principle. Also cognition is included; this forces the extension of real number based physics to adelic physics, which includes not only the reals but also the p-adic number fields. Adelic physics predicts a hierarchy of phases of ordinary matter with a non-standard value heff of the Planck constant, interpreted as dark matter, which for large values of heff is quantum coherent at arbitrarily long scales. The theory makes testable predictions at all scales supporting the proposed view of the continuation of life beyond biological death. A model for what happens in biological death and an explanation for various aspects of near-death experiences emerge.

See the article Does consciousness survive bodily death? or a chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Saturday, October 30, 2021

Neutrinos and TGD

In what follows, the problem of missing right-handed neutrinos and the problem created by the apparently contradictory findings of MiniBooNE and MicroBooNE about neutrino mixing are discussed. Also the topological model for neutrino and D-quark CKM mixing is briefly considered.

Why are only left-handed neutrinos observed?

A basic theoretical motivation for sterile neutrinos is the difficulty posed by the fact that neutrinos behave like massive particles. This is not consistent with their left-handedness, which is an experimental fact.

As a matter of fact, the sterile neutrinos would be analogous to the covariantly constant right-handed neutrinos in TGD if only J(CP2) were present.

Remark: As already stated, in the sequel it is assumed that leptons as bound states of 3 antiquarks can be described using spinors of H with chirality opposite to that for quarks. They have colored modes, and the action of the super-symplectic algebra is assumed to neutralize the color and also to give rise to a massless state getting its small mass from p-adic thermodynamics.

How could one understand the fact that only left-handed neutrinos are observed although neutrinos are massive? One can consider two approaches leading to the same conclusion.

Is it possible to have a time evolution respecting M4 chirality, so that neutrinos can have a fixed chirality despite their mass?

  1. All spinor modes in CP2 are of the form ΦL or D(CP2)ΦL and are therefore generated from the left-handed spinors ΦL.

    If one assumes D(H)Ψ=0, the spinor modes of H are of the form D(M4)ΨR× ΦL + ΨR× D(CP2)ΦL. The modes of the form D(M4)ΨL× ΦR + ΨL× D(CP2)ΦR are therefore of the form D(M4)ΨL× D(CP2)ΦL + ΨL× D2(CP2)ΦL. The mixing of M4 chiralities is unavoidable.

  2. However, if one assumes only the condition D2(H)Ψ=0, one can obtain both left- and right-handed modes without mixing of M4 chiralities, and the M4 Kähler structure could make the lowest mass squared for the right-handed neutrino (covariantly constant in CP2) tachyonic. The time evolution generated by the exponent of L0 would respect M4 chirality.

    This does not prevent superpositions of right- and left-handed fermions if their masses are the same. If only charged leptons can satisfy this condition, one can understand why right-handed neutrinos are not observed.

An alternative approach would rely on quantum measurement theory but leads to the same conclusion.
  1. Suppose that neutrinos can appear as superpositions of both right- and left-handed components. To detect a right-handed neutrino, one must have a measurement interaction which entangles both the left- and right-handed components of the neutrino with the states of the measuring system. The measurement would project out the right-handed neutrino. If only the J(CP2) form is present, the right-handed neutrino has only gravitational interactions, and this kind of measurement interaction does not seem to be realizable.
  2. Putting it more explicitly, the reduction probability should be determined by a matrix element of a neutral (charged) weak current between a massive neutrino (charged lepton) spinor and a massless right-handed neutrino spinor. This matrix element should have the form Ψ̄ROΨL, where O transforms like a Dirac operator. If O is proportional to D(H), the matrix element vanishes by the properties of the massless right-handed neutrino.
  3. There is however a loophole: the transformation of left-handed to right-handed neutrinos, analogous to the transformation to a sterile neutrino in the neutrino beam experiments, could demonstrate the existence of νR, just as it was thought to demonstrate the existence of the inert neutrino in the MiniBooNE experiment. Time evolution should thus respect M4 chirality.
If J(M4) is present, one might understand why right- and left-handed neutrinos have different masses.
  1. Also the right-handed neutrino interacts with the Kähler gauge potential A(M4), so one can consider an entanglement distinguishing between right- and left-handed components; the measurement would project out the right-handed component. How could this proposal fail?

    Could it be that right- and left-handed neutrinos cannot have modes with the same mass, so that these superpositions are not possible as mass eigenstates? Why could charged modes have the same mass squared but not the neutral ones?

  2. The modes with right-handed CP2 chirality are constructed from the left-handed ones by applying the CP2 Dirac operator to them, and they have the same CP2 contribution to mass squared. However, for the right-handed modes the Jkl(M4)Σkl term splits the masses. Could it be that for right- and left-handed charged leptons the same value of mass is possible?

    The presence of J(M4) breaks the Poincare symmetry to that of M2, which corresponds to a Lagrangian manifold. This suggests that the physical mass is actually the M2 mass; the QCD picture is consistent with this, and also the p-adic mass calculations strongly support this view. The E2 degrees of freedom would be analogous to the Kac-Moody vibrational degrees of freedom of a string. This would allow right- and left-handed modes to have different values of the "cyclotron" quantum numbers n1 and n2 analogous to conformal weights, which could allow identical masses for left- and right-handed modes. For a Lagrangian manifold M2, one would have n1=n2=0, which could correspond to the ground states of a super-symplectic representation.

  3. Why would identical masses be impossible for right- and left-handed neutrinos? Something distinguishing between right- and left-handed neutrinos should explain this. Could the reason be that Z0 couples to left-handed neutrinos only? Could the fact that charged leptons and neutrinos correspond to different representations of the color group explain why only charged states can have right and left chiralities with the same mass?

    Perhaps it is of interest to notice that the presence of the Jkl(M4)Σkl term for right-handed modes makes possible the existence of a mode whose mass can vanish for a suitable selection of B.

MiniBooNE and MicroBooNE anomalies and TGD

After these preliminaries we are ready to tackle the anomalies associated with the neutrino mixing experiments. The incoming beam consists of muonic neutrinos mixing with electron neutrinos. The neutrinos are detected as they transform to electrons by an exchange of a W boson with the nuclei of the target; the photon shower generated by the electron serves as the experimental signature.

The basic findings are as follows.

  1. The MiniBooNE collaboration reported in 2018 (see this) an anomalously large number of electrons generated in the charged weak interaction assumed to occur between a neutrino and a nucleus in the detector. "Anomalous" means that a fit of the analog of the CKM matrix for neutrinos could not explain the finding. Various explanations, including inert neutrinos, were proposed: a muonic neutrino would transform to an inert neutrino and then to an electron neutrino, increasing the electron neutrino excess in the beam.
  2. The recently published findings of the MicroBooNE experiment (see this) concern several channels denoted by 1eNpMπ, where N=0,1 is the number of protons and M=0,1 is the number of pions. Also the channel 1eX, where "X" denotes all possible final states, was studied.

    It turned out that the rate for the production of electrons is below or consistent with the predictions for the channels 1e1p, 1eNp0π and 1eX. The only exception was the channel 1e0p0π.

    If one takes the finding seriously, it seems that a neutrino might be able to transform to an electron by exchanging a W boson with a nucleus or hadron which does not belong to the target.

In TGD, the only imaginable candidate for this interaction could be a charged current interaction with a dark nucleus or with a nucleon with heff>h. This could explain the absence of ordinary hadrons in the final state for 1e events.
  1. Dark particles are identified as heff>h phases of ordinary matter: they are dark relative to phases with a different value of heff. Dark protons and ions play a key role in TGD inspired quantum biology (see this) and even in the chemistry of valence bonds (see this). Dark nuclei play a key role in the model for "cold fusion" (see this and this) and also in the description of nuclear reactions, with nuclear tunnelling interpreted as the formation of a dark intermediate state (see this).
  2. I have proposed that dark protons are also involved in the lifetime anomaly of the neutron (see this). The explanation relies on the transformation of some protons produced in the decay of neutrons to dark protons, so that the measured lifetime appears longer than the real lifetime. In this case, roughly 1 percent of the protons from the decay of n had to transform to dark protons.

  3. If dark protons have a high enough value of heff, and the weak bosons interacting with them have the same value of heff, their Compton lengths are scaled up, and dark W bosons behave effectively like massless particles below this length scale. The minimum scale seems to be the nuclear or atomic scale (see the estimate below). This would dramatically enhance the dark rate for ν p→ e+n, so that it would have the same order of magnitude as the rates for electromagnetic interactions. Even a small fraction of dark nucleons or nuclei could explain the effect.
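A back-of-the-envelope estimate of the required heff/h, using textbook constants; the text above states only the conclusion, so the numbers are mine:

    hbar_c = 197.3e-15 * 1e6   # hbar*c in eV*m (197.3 MeV*fm)
    m_W = 80.4e9               # W boson mass in eV
    lambda_W = hbar_c / m_W    # ordinary W Compton length, ~2.5e-18 m

    for name, scale in (("nuclear (1 fm)", 1e-15), ("atomic (0.1 nm)", 1e-10)):
        print(name, "requires heff/h ~", round(scale / lambda_W))
    # ~4e2 for the nuclear scale and ~4e7 for the atomic scale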
CKM mixing as topological mixing and unitary time evolution as a scaling

The scaling generator L0 basically describes the unitary time evolution between SSFRs (see this), involving also the deterministic time evolutions of space-time surfaces as analogs of Bohr orbits appearing in the superposition defining the zero energy state. How can one understand neutrino mixing, and more generally quark and lepton mixing, in this picture?

  1. In the TGD framework, quarks are associated with partonic 2-surfaces as boundaries of wormhole contacts, which connect two Minkowskian space-time sheets, have an Euclidean signature of the induced metric, and have a light-like projection to M4 (see this and this).

  2. For some space-time surfaces in the superposition defining a zero energy state, the topology of the partonic 2-surfaces can change in these time evolutions. The mixing of boundary topologies would explain the mixing of quarks and leptons. The CKM matrix would describe the difference between the mixings for U and D type quarks and for charged and neutral leptons. The topology of a partonic 2-surface is characterized by the genus g: the number of handles attached to a sphere to obtain the topology.

    The 3 lowest genera, g≤ 2, have the special property that they always allow Z2 as a conformal symmetry. The proposal is that handles behave like particles, and thanks to the Z2 symmetry the two handles form a bound state for g=2. For g>2 one expects a quasi-continuous spectrum of mass eigenvalues. These states could correspond to the so-called unparticles introduced by Howard Georgi (see https://cutt.ly/sRZKSFm).

  3. The time evolution operator defined by L0 induces a mixing of the partonic topologies, and in a reasonable idealization one can say that L0 has matrix elements between different genera. The dependence of the time evolution operator on mass squared differences is natural in this framework. In the standard description it follows from the approximation of relativistic energies as p0 ≈ p + m2/2p (see the reminder after this list). Also the model of hadronic CKM mixing relies on mass squared as a basic notion and therefore involves L0 rather than a Hamiltonian.
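For the reader's convenience, the standard textbook derivation (not TGD-specific; units c=ℏ=1, propagation time t ≈ L) showing why only mass squared differences matter:

    \[ E_i = \sqrt{p^2 + m_i^2} \simeq p + \frac{m_i^2}{2p} \]
    \[ \Delta\phi_{ij} = (E_i - E_j)\, t \simeq \frac{(m_i^2 - m_j^2)\, L}{2E} \]

Only the differences m_i^2 - m_j^2 appear in the oscillation phase, which is exactly the information carried by the mass squared spectrum of L0.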
See the article Neutrinos and TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Tuesday, October 26, 2021

Quantum hydrodynamics in nuclear physics and hadron physics

The field equations of TGD defining the space-time surfaces have an interpretation as conservation laws for isometry charges and therefore have a hydrodynamic character. This hydrodynamic character manifests itself in quite concrete ways (see this, this, and this).

Also nuclear and hadron physics suggest applications for Quantum Hydrodynamics (QHD). The basic vision about what happens in high energy nuclear and hadron collisions is that two BSFRs take place. The first BSFR creates an intermediate state with heff>h; the entire system formed by the colliding systems need not be in this state. In nuclear physics this state corresponds to a dark nucleus, which decays in the second BSFR to ordinary nuclei.

The basic notions are dark matter at the MB, and ZEO, in particular the change of the arrow of time in BSFR.

1. Cold fusion, nuclear tunnelling, ℏeff, and BSFRs

This model allows us to understand "cold fusion" in an elegant manner (see this, this, and this). The dark protons at flux tubes associated with water, created by the Pollack effect, have a much smaller nuclear binding energy than ordinary nucleons. This energy is compensated to a high degree by the positive Coulomb interaction energy, which corresponds roughly to a distance given by the electron Compton length (see the estimate below).
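An order-of-magnitude check of the Coulomb energy scale at the electron Compton length (my arithmetic, not a figure from the text):

    alpha = 1 / 137.036   # fine structure constant
    m_e = 511.0           # electron rest energy in keV
    # E = alpha * hbar*c / lambda_e = alpha * m_e * c^2 for lambda_e = hbar/(m_e c):
    print(alpha * m_e)    # ~3.7 keV

If the estimate applies, the dark binding energy scale would be in the keV range rather than the MeV range of ordinary nuclear physics.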

Dark nuclear reactions between these kinds of objects do not require a large collision energy to increase the value of heff and can take place at room temperature. After the reaction, the dark nuclei can transform to ordinary nuclei and liberate the ordinary nuclear binding energy. One can say that in ordinary nuclear reactions one must get to the top of the energy hill, whereas in "cold fusion" one is already at the top of the hill.

Quite generally, this mechanism creating intermediate dark regions in the system of colliding nuclei in BSFR would be the TGD counterpart of quantum tunnelling in the description of nuclear reactions based on the Schrödinger equation. The mechanism could be involved in all tunnelling phenomena.

2. QHD and hadron physics

Hadron physics suggests applications of QHD.

2.1 Quark gluon plasma and QHD

In hadron physics, the quark gluon plasma (see this) has turned out not to be what it was originally thought to be. Instead of being like a gas of quarks and gluons with relatively large dissipation, it behaves like an almost perfect fluid. This means that the ratio η/s of viscosity to entropy density is near its minimal value, proposed on the basis of string model arguments to be η/s = ℏ/4π (in units with kB=1).

To be a fluid means that the system has long range correlations, whereas in a gas the particles move randomly and one cannot assign to the system a velocity field or more general currents. In the TGD framework, the existence of a velocity field means at the level of the space-time geometry a generalized Beltrami flow allowing one to define a global coordinate varying along the flow lines (see this and this). This would be a geometric property of space-time surfaces, and the finite size of the space-time surface would serve as a limitation.

In the TGD framework the replacement ℏ→ ℏeff requires that s increases in the same proportion. If the fluid flow is realized in terms of vortices, with pairs of monopole flux tubes defining their cores and Lagrangian flux tubes carrying a gradient flow defining the exteriors of the cores, this situation is achieved.

In this picture, entropy could, but need not, be associated with the monopole flux tubes with a non-Beltrami flow and a non-vanishing entropy, since the number of geometric degrees of freedom is infinite. This implies a limiting temperature, known as the Hagedorn temperature TH, which is about 175 MeV for hadrons, slightly higher than the pion mass. In fact, the Beltrami property holds for flux tubes with a 2-D CP2 projection, which is a complex manifold for monopole flux tubes. The fluid flow associated with (controlled by) the monopole flux tubes would have a non-vanishing vorticity for monopole fluxes and could dissipate.

The monopole flux tube at the core of the vortex could therefore serve as the source of entropy. One expects that the minimal value of η/s is not affected by ℏ→ ℏeff: one expects s → (ℏeff/ℏ)s = ns, since the dimension n of the extension of rationals multiplies the Galois degrees of freedom (see the sketch below).
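The claimed compensation is simple bookkeeping; a trivial sketch, assuming the minimal value has the KSS-type form ℏ/(4πs) with kB=1:

    from math import pi

    n = 512                    # an illustrative value of heff/h
    hbar, s = 1.0, 1.0         # natural units, kB = 1
    ordinary = hbar / (4 * pi * s)
    dark = (n * hbar) / (4 * pi * (n * s))   # heff upstairs, Galois factor n in s
    print(ordinary, dark)      # both ~0.0796: the factors n cancel in eta/s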

Almost perfect fluids are known to allow almost non-interacting vortices. For a perfect fluid, the creation of vortices is impossible due to the absence of friction at the walls. This suggests that ordinary viscosity is not the reason for the creation of vortices, and in the TGD picture the situation is indeed this. The striking prediction is that the masses of the Sun and the Earth appear as basic parameters in the gravitational Compton lengths Λgr, which determine the frequencies νgr = c/Λgr.

2.2 The phase transition creating quark gluon plasma

The phase transition creating what has been called quark gluon plasma is not what it was expected to be. That the outcome behaves like an almost perfect fluid was the first surprise. TGD leads to the proposal that, since quantum criticality is involved, phases with ℏeff>h must be present.

The p-adic length scale hypothesis led to the proposal (see this and this) that this transition could allow the production of so-called M89 hadrons, characterized by the Mersenne prime M89 = 2^89-1, whereas ordinary hadrons would correspond to M107 = 2^107-1. The mass scale of M89 hadrons would be by a factor 512 higher than that of ordinary hadrons, and there are indications for the existence of scaled-up versions of mesons (the arithmetic is spelled out below).
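The arithmetic behind the factor 512: the p-adic mass scale is proportional to 1/p^(1/2) with p ≈ 2^k. The 69 GeV figure below is only my own multiplication, quoted for orientation:

    k_ordinary, k_M89 = 107, 89
    ratio = 2 ** ((k_ordinary - k_M89) / 2)   # mass scale ratio
    print(ratio)                              # 512.0
    m_pion = 135.0                            # MeV, neutral pion
    print(ratio * m_pion / 1000, "GeV")       # ~69 GeV for the M89 pion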

How could M89 hadrons be created? The temperature TH= 175 MeV is by a factor 1/512 lower than the mass scale of the M89 pion. Somehow the colliding nuclei or hadrons must provide the needed energy from their kinetic energy. What certainly happens is that this energy materializes in ordinary nuclear reactions as ordinary pions and other mesons. The mesons should correspond to closed flux tubes assignable to the circular vortices of the highly turbulent hydrodynamic flow created in the collision.

Could roughly 512 mesonic flux tubes reconnect to circular but flattened long flux tubes having the length of an M89 meson, 512 times that of ordinary pions? I have proposed this kind of process, analogous to the formation of a BEC, to be fundamental in biology (see this, this, and this) and also to explain the strange findings of Eric Reiter, which challenge some basic assumptions of nuclear physics if taken at face value (see this).

The process generating an analog of BEC would create in the first BSFR M89 mesons having ℏeff/ℏ=512. In the second BSFR the transition ℏeff→ ℏ would take place and yield ordinary M89 mesons. It would seem that part of the matter of the composite system ends up in the M89 hadronic phase with a 512 times higher TH. In the number theoretic picture, these BEC-like states would be Galois confined states (see this and this).

2.3 Can the size of a quark be larger than the size of a hadron?

The Compton wavelength Λc= ℏ/m is inversely proportional to the mass. This implies that the Compton length of a quark, as a part of a hadron, is longer than the Compton length of the hadron itself. If one gives the Compton length a geometric interpretation, as one does in M8-H duality mapping the mass shell to a CD with radius given by the Compton length, this sounds paradoxical: how can a part be larger than the whole? One can think of several approaches to what looks like a paradox.

One could of course argue that being a part in the sense of the tensor product has nothing to do with being a part in the geometric sense. However, if one requires quantum classical correspondence (QCC), one could argue that a hadron is a small region to which much larger quark 3-surfaces are attached.

One could also say that the Compton length characterizes the size of the MB assignable to a particle, which itself has a size of the order of the CP2 length scale. In this case the strange-looking situation would appear only at the level of the MBs, and the magnetic bodies could have sizes which increase as the particle mass decreases.

What if one takes QCC completely seriously? One can look at the situation in ZEO.

  1. The size of the CD corresponds to the Compton length, and the CDs for different particle masses have a common center and form a Russian doll-like hierarchy. One can continue the geodesic line defining a point of the CD associated with the hadron mass so that it intersects the CDs associated with the quarks, in particular that of the lightest quark.
  2. The distances between the quarks would define the size scale of the system in this largest CD, and in the case of light hadrons containing U and D quarks it would be of the order of the Compton length of the lightest quark involved, having a mass of about 5 MeV: this makes about .2 × 10-13 m (see the estimate after this list). There are indeed indications that the MB of the proton has this size scale.
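A quick numerical comparison using Λc = ℏc/mc2 and PDG current-quark masses (the numbers are mine):

    hbar_c = 197.3   # MeV*fm
    for name, m_MeV in (("proton", 938.3), ("d quark", 4.7), ("u quark", 2.2)):
        print(name, round(hbar_c / m_MeV, 2), "fm")
    # proton ~0.21 fm, d quark ~42 fm, u quark ~90 fm: a light quark's
    # Compton length exceeds the hadron's by two to three orders of magnitude.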
One could also require that there is a common CD, based on an identification of heff for each particle such that the size of the CD does not depend on the mass of the particle.
  1. Here ℏgr= GMm/v0 provides a possible solution. The size of the CD would correspond to Λgr = GM/v0 for all particles involved. One could call this size the quantum gravitational size of the particle.

  2. There is an intriguing observation related to this. To be in gravitational interaction could mean ℏeff=ℏgr=GMm/v0, so that the size of the common CD would be given by Λgr= GM/v0. The minimum mass M giving ℏgr>ℏ would be M=β0 MPl2/m. For protons this gives M ≥ 1.5 × 1038 mp. Assuming a density ρ ≈ 1030 A/m3, where A is the atomic number, the side L of a cube with the minimal mass M behaves as L ∼ β0× 102/A1/3. For β0= 2-11, assignable to the Sun-Earth system, this gives L∼ 5/A1/3 mm. The value of Λgr for Earth is 4.35 mm for β0=1. The orders of magnitude are the same. Is this a mere accident? (A numerical check is given below.)
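A numerical check of two of the figures quoted above; the computation and the CODATA constants are mine, β0 = v0/c being the TGD parameter:

    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m/s
    hbar = 1.055e-34   # J*s
    m_p = 1.673e-27    # kg
    M_E = 5.972e24     # kg, Earth mass

    # hbar_gr = G*M*m/v0 > hbar gives M > beta0*MPl^2/m with MPl^2 = hbar*c/G:
    print((hbar * c / G) / m_p**2)     # ~1.7e38: minimal M in units of m_p (beta0 = 1)

    # Lambda_gr = G*M/(beta0*c^2) for Earth with beta0 = 1:
    print(G * M_E / c**2 * 1e3, "mm")  # ~4.4 mm, of the order of the quoted 4.35 mm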
One solution to the problem is that the ratio ℏeff(H)/ℏeff(q) is so large that the problem disappears.
  1. If ℏeff(q)=ℏ, the value of ℏeff for the hadron should be so large that the geometric intuitions are respected: this would require ℏeff(H)/ℏ ≥ mH/mq. The hadrons containing u, d, and c quarks are very special.
  2. A second option is that the value of heff for quarks is smaller than h, guaranteeing that the quark Compton length is not larger than that of the hadron. The perturbation theory for states consisting of free quarks would not converge, since the Kähler coupling strength αK ∝ 1/ℏeff would be too large. This would conform with the QCD view and provide a reason for color confinement. Quarks would be dark matter in a well-defined sense.
  3. The condition would be ℏeff(H)/ℏeff(q) ≥ m(H)/mq, where q is the lightest quark in the hadron. For heavy hadrons containing heavy quarks this condition is rather mild; for light hadrons containing u, d, and c quarks it is non-trivial. For Ξ it gives the condition ℏ/ℏeff(q) ≥ 262. The condition could not be satisfied for too small masses if the value ℏ= 7!ℏ0= 5040ℏ0 holds true, the ratio ℏ/ℏ0 being identifiable in terms of the ratio of the CP2 length scale deduced from p-adic mass calculations to the Planck length.
See the article TGD and Quantum Hydrodynamics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.