Friday, October 28, 2016

Is inflation theory simply wrong?

I listened to a very nice lecture about inflation by Steinhardt, who was one of the founders of inflation theory and certainly knows what he is talking about. Steinhardt concludes that inflation is simply wrong. He discusses three kinds of flexibility in inflationary theory, which destroy its ability to predict, make it non-falsifiable, and therefore make it pseudoscience.

Basically cosmologists want to understand the extreme simplicity of cosmology. Particle physics has also turned out to be extremely simple, whereas theories have during the last four decades become so complex that they cannot predict anything.

  1. CMB temperature is essentially constant. This looks like a miracle: the finite horizon size in standard cosmology makes classical communication between distant points impossible, so that temperature equalization cannot take place and a constant cosmic temperature should be simply impossible.

  2. One must also understand the near flatness of 3-space: the value of the curvature scalar is very close to zero.

Inflation theories were proposed as a solution to these problems.

Inflation theories

The great vision of inflationists is that these features of the universe result from an exponentially fast expansion of the cosmos - the inflationary period - analogous to super-cooling. This expansion would smooth out all inhomogeneities and anisotropies of quantum fluctuations and yield an almost flat universe with almost constant temperature, with relative temperature fluctuations of order 10^-5. The key ingredient of recent inflation theories is a scalar field known as the inflaton field (actually several of them are needed). There are many variants of inflationary theory (see this). Inflaton models are characterized by the potential function V(Φ) of the inflaton field Φ, analogous to the potential function used in classical mechanics. During the fast expansion V(Φ) would vary very slowly as a function of the vacuum expectation value of Φ. Super-cooling would mean that Φ does not decay to particles during the expansion period.

  1. In the "old inflation" model the cosmos was trapped in a false minimum of energy during expansion and ended up in the true minimum by quantum tunneling. The liberated energy decayed to particles and reheated the Universe. No inflaton field was introduced yet. This approach however led to difficulties.

  2. In the "new inflation" model an effective potential Veff(Φ,T) of the inflaton field, depending on temperature, was introduced. It would have no minimum above the critical temperature, and the super-cooled cosmos would roll down a potential hill with a very small slope. At the critical temperature the potential would change qualitatively: a minimum would emerge, and the inflaton field would fall into it and decay to particles, causing reheating. This is highly analogous to the Higgs mechanism emerging as the temperature drops below that defined by the electroweak mass scale.

  3. In the "chaotic inflation" model there is no phase transition: the inflaton field rolls down to the true vacuum, where it couples to other matter fields and decays to particles. Here it is essential that the expansion slows down so that the inflaton has time to decay to ordinary particles. The Universe is reheated.
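
The requirement that V(Φ) vary very slowly can be made quantitative with the standard slow-roll parameters ε = (1/2)(V'/V)^2 and η = V''/V (in reduced Planck units). A minimal sketch; the quadratic potential and its mass parameter are illustrative assumptions, not taken from any particular inflationary model:

```python
# Slow-roll parameters for a sample inflaton potential V(phi) = (1/2) m^2 phi^2.
# Reduced Planck units (M_Pl = 1); m is an arbitrary illustrative choice.

def V(phi, m=1e-6):
    return 0.5 * m**2 * phi**2

def dV(phi, m=1e-6):
    return m**2 * phi

def d2V(phi, m=1e-6):
    return m**2

def slow_roll(phi):
    eps = 0.5 * (dV(phi) / V(phi))**2   # epsilon = (1/2)(V'/V)^2
    eta = d2V(phi) / V(phi)             # eta = V''/V
    return eps, eta

# Inflation requires eps << 1: the hill must be very flat.
for phi in (20.0, 10.0, 2.0, 1.5):
    eps, eta = slow_roll(phi)
    print(f"phi = {phi:5.1f}  eps = {eps:.4f}  eta = {eta:.4f}")
```

For the quadratic potential ε = η = 2/Φ², so slow roll holds only for field values far above the Planck scale: the flatness of the hill must be put in by hand, which is the arbitrariness Steinhardt criticizes below.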

Consider now Steinhardt's objections against inflation. As a non-specialist I can of course only repeat his arguments, which I believe stand on a very strong basis.
  1. The parameters characterizing the scalar potential of the inflaton field(s) can be chosen freely. This gives infinite flexibility. In fact, most outcomes based on classical inflation do not predict a flat 3-space in the recent cosmology! The simplest one-parameter models are excluded empirically. The inflaton potential energy must be a very slowly decreasing function of Φ: in other words, the slope of the hill along which the field rolls down is extremely small. This looks rather artificial and suggests that the description based on a scalar field could be wrong.

  2. The original idea that inflation leads from almost any initial conditions to a flat universe has turned out to be wrong. Most initial conditions lead to something very different from a flat 3-space: another infinite flexibility destroying predictivity. To obtain a flat 3-space one must assume that 3-space was essentially flat from the beginning!

  3. In the original scenario the quantum fluctuations of the inflaton fields were assumed to be present only during the primordial period, and a single quantum fluctuation expanded to the observed Universe. It has however turned out that this assumption fails for practically all inflationary models. The small quantum fluctuations of the inflaton field still present are amplified by gravitational backreaction. Inflation would continue eternally and produce all possible universes. Again predictivity would be completely lost. The multiverse has been sold as a totally new view of science in which one gives up the criterion of falsifiability.

Steinhardt discusses Popper's philosophy of science centered around the notions of provability, falsifiability, and pseudoscience. Popper states that in the natural sciences it is only possible to prove that a theory is wrong. A toy theory begins with the bold postulate "All swans are white!". It is not possible to prove this statement scientifically because it would have to be verified at all times and everywhere. One can only demonstrate that the postulate is wrong. Soon one indeed discovers that there are also some black swans. The postulate weakens to "All swans are white except the black ones!". As further observations accumulate, one eventually ends up with the not so bold postulate "All swans have some color.". This statement does not predict anything and is a tautology. Just this has happened in the case of inflationary theories and also in the case of superstring theory.

Steinhardt discusses the "There is no viable alternative" defense, which M-theorists have also used. According to Steinhardt there are viable alternatives, and he discusses some of them. Another often heard excuse is that superstring theory is a completely exceptional theory because of its unforeseen mathematical beauty: for this reason one should give up the falsifiability requirement. Many physicists, including me, are however unable to experience this heavenly beauty of superstrings: what I experience is the disgusting ugliness of the stringy landscape and multiverse.

The counterpart of inflation in TGD Universe

It is interesting to compare inflation theory with the TGD variant of very early cosmology (see this). TGD has no inflaton fields, which are the source of the three kinds of infinite flexibility and lead to the catastrophe in inflation theory.

Let us return to the basic questions and the hints that TGD provides.

  1. How to understand the constancy of CMB temperature?

    Hint: a string dominated cosmology with matter density behaving like 1/a^2 as a function of the scale factor a has an infinitely large horizon. This makes classical communications over infinitely long ranges possible and therefore the equalization of temperature. At the moment of the big bang - the boundary of the causal diamond, which is part of the boundary of the light-cone - the M4 distance between points in the light-like radial direction vanishes. This could be the geometric correlate for the possibility of communications and long range quantum entanglement for the gas of strings.

    Note: The standard mistake is to see the Big Bang as a single point. As a matter of fact, it corresponds to the light-cone boundary, as shown by the observation that the future light-cone of Minkowski space represents an empty Robertson-Walker cosmology.

  2. How to understand the flatness of 3-space?

    Hint: (quantum) criticality predicts the absence of length scales. The curvature scalar of 3-space is a dimensional quantity and must therefore vanish - hence flatness. TGD Universe is indeed quantum critical! This fixes the value spectrum of various coupling parameters.
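
The horizon claim in the first hint can be checked with a one-line integral: for matter density scaling as 1/a^n, Friedmann's equation gives a(t) ∝ t^(2/n), and the comoving horizon is ∫ dt/a(t) from the big bang. A numeric sketch (arbitrary units; the grid parameters are arbitrary choices) showing that string dominance (n=2, a ∝ t) gives a logarithmically divergent, i.e. infinite, horizon, while radiation dominance (n=4, a ∝ t^(1/2)) gives a finite one:

```python
import math

def comoving_horizon(p, t0=1.0, eps=1e-12, steps=2000):
    """Integrate dt / t^p from eps to t0 on a logarithmic grid (midpoint rule)."""
    total = 0.0
    ratio = (t0 / eps) ** (1.0 / steps)
    t = eps
    for _ in range(steps):
        t_next = t * ratio
        mid = math.sqrt(t * t_next)          # geometric midpoint of the subinterval
        total += (t_next - t) / mid**p
        t = t_next
    return total

# p = 1: a ~ t (string dominance); integral grows as log(1/eps), i.e. diverges.
print("string-dominated (a ~ t):       ", comoving_horizon(1.0))
# p = 1/2: a ~ t^(1/2) (radiation); integral converges to 2 as eps -> 0.
print("radiation-dominated (a ~ t^1/2):", comoving_horizon(0.5))
```

Pushing eps toward zero makes the first number grow without bound while the second stays near 2: the string dominated case has no particle horizon, which is the geometric content of the hint.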

The original TGD inspired answers to the basic questions would be the following.
  1. What are the initial conditions? In TGD Universe the primordial phase was a gas of cosmic strings in the vicinity of the boundary of a very big CD (for the observer in the recent cosmology) (see this and this). The boundary of the CD - whose M4 part is given by the intersection of future and past directed light-cones - consists of two pieces of light-cone boundary (with points replaced with CP2). The gas is associated with the second piece of the boundary.

    The horizon size for M4 is infinite, and the hierarchy of Planck constants allows quantum coherence in arbitrarily long scales for the gas of cosmic strings forming the primordial state. This could explain the constant cosmic temperature both in the classical and the quantum sense (both explanations are needed by quantum classical correspondence).

  2. The inflationary period is replaced with a phase transition giving rise to space-time sheets with 4-D Minkowski space projection: the space-time as we know it. The basic objects are magnetic flux tubes, which have emerged from cosmic strings as the M4 projection has thickened from a string world sheet to a 4-D region. These cosmic strings decay partially to elementary particles at the end of the counterpart of the inflationary period. Hence Kähler magnetic energy replaces the energy of the inflaton field. The outcome is a radiation dominated cosmology.

  3. The GRT limit of TGD replaces the many-sheeted space-time with a slightly curved region of M4 (see this). Could one model this GRT cosmology using a single space-time sheet as a model? This need not make sense but one can try.

    Criticality states that the mass density is critical as in the inflationary scenario. Einstein's equations demand that the curvature scalar for a Lorentz invariant RW cosmology vanishes. It turns out that one can realize this kind of cosmology as a vacuum extremal of Kähler action. The resulting cosmology contains only a single free parameter: the duration of the transition period. 3-space is flat and has the critical mass density given by the Einstein tensor (see this).

    One might hope that this model could describe quantum criticality in all scales: not only the inflationary period but also the accelerating expansion at much later times. There is an exponentially fast expansion, but now it need not smooth out fluctuations, since the density of cosmic strings and the temperature are essentially constant from the beginning. According to Steinhardt, inflationary models also force this conclusion, although the original dream was that inflation produces the smoothness.

  4. The energy of the inflaton field is in this scenario replaced with the magnetic energy of the magnetic flux tubes obtained from cosmic strings (2-D M4 projection). The negative "pressure" of the critical cosmology would microscopically correspond to the magnetic tension along the flux tubes.

  5. Quantum fluctuations are present also in the TGD framework, but quantum coherence made possible by hgr = heff = n× h dark matter saves the situation in arbitrarily long scales (see this). Dark matter as large hbar phases replaces the multiverse. Dark matter exists! Unlike the multiverse!

The twistorial lift of the Kähler action - whether it is necessary is still an open question - however forces to reconsider this picture (see this and this).
  1. Space-time surfaces are replaced with their twistor spaces, required to allow imbedding to the twistor space of H = M4× CP2: the extremely strong conditions on preferred extremals given by strong form of holography (SH) should be more or less equivalent with the possibility of the twistor lift. Rather remarkably, M4 and CP2 are completely unique in the sense that their twistor spaces allow Kähler structure (the twistor space of M4 in a generalized sense). TGD would be completely unique!

    The dimensional reduction of the 6-D generalization of Kähler action, dictating the dynamics of the twistor space of the space-time surface, would give 4-D Kähler action plus a volume action having an interpretation as a cosmological term.

    Planck length would naturally define the radius of the sphere of the twistor bundle of M4. The coefficient of the volume term is a coupling constant having an interpretation in terms of the cosmological constant; it would experience p-adic coupling constant evolution, being large at early times and extremely small in the recent cosmology. This would allow one to describe both inflation and the accelerating expansion at much later times using the same model: the only difference is the value of the cosmological constant. This would actually be a universal description of critical phase transitions. The volume term also forces ZEO with finite sizes of CDs: otherwise the volume action would be infinite!

  2. This looks nice but there is a problem. The volume term is proportional to a dimensional constant, so one expects a breaking of criticality, and the critical vacuum extremal of Kähler action indeed fails to be a minimal surface, as one can verify by a simple calculation. The value of the cosmological constant is very small in the recent cosmology, but the Kähler action of its vacuum extremal vanishes. Can one imagine ways out of the difficulty?

    1. Should one just give up the somewhat questionable idea that the critical cosmology for a single space-time sheet allows one to model the transition from the gas of cosmic strings to the radiation dominated cosmology at the GRT limit of TGD?

    2. Should one consider small deformations of the critical vacuum extremal and assume that Kähler action dominates over the volume term for them, so that speaking of small deformations of the critical cosmology remains a good approximation? The average energy density associated with the small deformations - say the gluing of smaller non-vacuum space-time sheets to the background - would be given by the Einstein tensor for the critical cosmology.

    3. Or could one argue as follows? During quantum criticality the action cannot contain any dimensional parameters - at least at the limit of an infinitely large CD. Hence the cosmological constant defining the coefficient of the volume term must vanish. The corresponding (p-adic) length scale is infinite, and quantum fluctuations indeed appear in arbitrarily long scales, as they should in quantum criticality. Can one say that during the quantum critical phase transition the volume term becomes effectively vanishing because the cosmological constant as a coupling constant vanishes?

      One can argue that this picture is an over-idealization. It might however work at the GRT limit of TGD, where the size scale of the CD defines the length scale assignable to the cosmological constant and is taken to infinity. Thus the vacuum extremal would be a good model for the cosmology as described by the GRT limit.

  3. There is also a second problem. One has two explanations for the vacuum energy and the negative pressure. The first would come from Kähler magnetic energy and Kähler magnetic tension, the second from the cosmological constant associated with the volume term. I have considered the possibility that these explanations are equivalent (see this). The first one would apply to magnetic flux tubes near vacuum extremals and carrying vanishing magnetic monopole flux. The second one would apply to magnetic flux tubes far from vacuum extremals and carrying non-vanishing monopole flux. One can consider quantum criticality in the sense that these two kinds of flux tubes correspond to each other in a 1-1 manner, meaning that their M4 projections are identical and they have the same string tension (see this).

Steinhardt also considers concrete proposals for modifying inflationary cosmology. The basic proposal is that cosmology is a sequence of big bangs and big crunches, which however do not lead to a full singularity but to a bounce.

I have proposed what could be seen as an analog of this picture in ZEO but without bounces (see this). The cosmos would be a conscious entity which evolves, dies, and re-incarnates, and after the re-incarnation expands in the opposite direction of geometric time.

  1. The cosmology for a given CD would correspond to a sequence of state function reductions for a self associated with the CD. The first boundary of the CD - the Big Bang - would be passive. The members of the state pairs formed by states at the two boundaries of the CD would not change at this boundary, and the boundary itself would remain unaffected. At the active boundary of the CD the state would experience a sequence of unitary time evolutions involving a localization for the position of the upper boundary. This would increase the temporal distance between the tips of the CD. The conscious entity would experience this as a flow of time. The self would be a generalized Zeno effect.

    Negentropy Maximization Principle dictates the dynamics of state function reductions and eventually forces the first state function reduction to the passive boundary of the CD, which becomes active and begins to shift farther away, but in the opposite time direction. The self dies and re-incarnates with an opposite arrow of clock time. The size of the CD grows steadily and eventually the CD is of cosmological size.

    In the TGD framework the multiverse would be replaced with conscious entities, whose CDs grow gradually in size and eventually give rise to entire cosmologies. We ourselves could be future cosmologies. In ZEO the conservation of various quantum numbers is not in conflict with this.

  2. The geometry of the CD brings in mind a Big Bang followed by a Big Crunch. This does not however force the space-time surfaces inside CDs to have a similar structure. The basic observation is that the TGD inspired model for the asymptotic cosmology is string dominated, as is the cosmology associated with criticality. The strings in the asymptotic cosmology are however not infinitely thin cosmic strings anymore but have thickened during the cosmic expansion. This suggests that the death of the cosmos leads to re-incarnation with a fractal zoom-up of the strings of the primordial stage! The first lethal reduction to the opposite boundary would produce a gas of thickened cosmic strings in M4 near the former active boundary.

  3. It might even be possible to test this picture by looking at how far into the geometric past the proposed coupling constant evolution for the cosmological constant can be extrapolated. What is already clear is that in the recent cosmology it cannot be extrapolated to Planck time or even CP2 time.

See the article Is inflation theory simply wrong?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, October 27, 2016

What could be the physical origin of Pythagorean scale?

I was contacted a couple of years ago by Hans Geesink and we had long discussions about consciousness and quantum biology. The discussion stimulated new ideas and inspired me to write a chapter and an article comparing our approaches. Now Hans has sent me two prepublications by him and D. K. F. Meijer.

The first preprint "Bio-Soliton Model that predicts Non-Thermal Electromagnetic Radiation Frequency Bands, that either Stabilize or Destabilize Life Conditions" is in arXiv. The abstract reads as follows:

Solitons, as self-reinforcing solitary waves, interact with complex biological phenomena such as cellular self-organisation. Soliton models are able to describe a spectrum of electromagnetism modalities that can be applied to understand the physical principles of biological effects in living cells, as caused by electromagnetic radiation. A bio-soliton model is proposed, that enables to predict which eigen-frequencies of non-thermal electromagnetic waves are life-sustaining and which are, in contrast, detrimental for living cells. The particular effects are exerted by a range of electromagnetic wave frequencies of one-tenth of a Hertz till Peta Hertz, that show a pattern of twelve bands, if positioned on an acoustic frequency scale. The model was substantiated by a meta-analysis of 240 published papers of biological radiation experiments, in which a spectrum of non-thermal electromagnetic waves were exposed to living cells and intact organisms. These data support the concept of coherent quantized electromagnetic states in living organisms and the theories of Davydov, Fröhlich and Pang. A spin-off strategy from our study is discussed in order to design bio-compatibility promoting semi-conducting materials and to counteract potential detrimental effects due to specific types of electromagnetic radiation produced by man-made electromagnetic technologies.

The second preprint "Phonon Guided Biology: Architecture of Life and Conscious Perception are mediated by Toroidal Coupling of Phonon, Photon and Electron Information Fluxes at Eigen-frequencies" is in Research Gate. The abstract is the following.

Recently a novel biological principle, revealing discrete life sustaining electromagnetic (EM) frequencies, was presented and shown to match with a range of frequencies emitted by clay-minerals as a candidate to catalyze RNA synthesis. The spectrum of frequency bands indicate that nature employs discrete eigen-frequencies that match with an acoustic reference scale, with frequency ratios of 1:2, and closely approximated by 2:3, 3:4, 3:5, 4:5 and higher partials. The present study shows that these patterns strikingly resemble eigen-frequencies of sound induced geometric patterns of the membrane vibration experiments of E. Chladni (1787), and matches with the mathematical calculations of W. Ritz (1909). We postulate that the spectrum of EM frequencies detected, exert a phonon guided ordering effect on life cells, on the basis of induction of geometric wave patterns. In our brain a toroidal integration of phonon, photon and electron fluxes may guide information messengers such as Ca2+-ions to induce coherent oscillations in cellular macromolecules. The integration of such multiple informational processes is proposed to be organized in a fractal 4-D toroidal geometry, that is proposed to be instrumental in conscious perception. Our finding of an "acoustic life principle" may reflect an aspect of the implicate order, as postulated by David Bohm.

A very concise and very partial summary of the articles would be the following.

  1. The 12-note scale seems to be realized in good approximation as frequency bands (rather than single frequencies) for a membrane like system with the geometry of a square, obeying a fourth-order partial differential equation studied numerically by Ritz. Since the boundary conditions are periodic, the system has effective torus topology. This is a rather remarkable experimental fact and extremely interesting from the TGD point of view.

  2. The papers also argue that the octave hierarchy is realized. The p-adic length scale hierarchy indeed predicts that a subset of powers of 2, and more generally of 2^(1/2), defines a hierarchy of fundamental p-adic scales with the p-adic prime p near a power of two.
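
The phrase "p-adic prime p near a power of two" can be illustrated with a small search for the prime closest to 2^k; for k = 127 one finds the famous Mersenne prime 2^127 - 1. A pure-Python sketch (trial division, for illustration only):

```python
# Find the prime nearest to 2^k for small k: a toy illustration of the
# p-adic length scale hypothesis, which picks p-adic primes near powers of 2.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def nearest_prime(n):
    d = 0
    while True:
        if is_prime(n - d):
            return n - d
        if is_prime(n + d):
            return n + d
        d += 1

for k in range(3, 12):
    print(f"2^{k} = {2**k:5d}  nearest prime = {nearest_prime(2**k)}")
```

For k = 7 the nearest prime is the Mersenne prime 127 = 2^7 - 1; the deviation |p - 2^k| stays small relative to 2^k throughout the range.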

In the following I will first discuss the condensed matter realization of the 12-note scale and after that consider the significance and realization of the 12-note scale from the TGD point of view.

Condensed matter realization of 12-note scale in terms of oscillations of square plate

The article discusses a condensed matter physics based realization of the 12-note scale. Acoustic waves are seen as fundamental. Certainly sound waves are important since they couple to electromagnetic waves. My feeling is however that they provide a secondary realization.

  1. The realization of the 12-note system as 12 bands discussed in the articles is as eigenfrequencies of deformations of a square plate. Periodic boundary conditions imply that one can regard the system also as a torus. One has bands, not single eigenfrequencies. I do not know whether one can pick from the bands frequencies whose ratio to the fundamental would be rational and the same as for the Pythagorean scale. Since the system can be treated only numerically, it is difficult to answer this question.

  2. So called Chladni patterns (see An Amazing Resonance Experiment) are associated with a vibrating thin square plate and correspond to the node lines of the deformation of the plate in the direction orthogonal to the plate. If one adds very small particles on the plate and the vibrational acceleration is smaller than the gravitational acceleration, the particles migrate to the node lines and form the Chladni pattern. Hence the presence of gravitation seems to be essential for the Chladni patterns to occur. These patterns make visible the structure of the standing wave eigenmodes of the plate. It is also possible to have patterns assignable to the antinodes, at which the deformation is maximal but the vibrational acceleration vanishes, as in the harmonic oscillator at the maximum value of the amplitude.

  3. The vibrations of the square plate obey a fourth-order partial differential equation of the general form

    ∂t2 u = - K (∇2)2 u .

    Here u is the small deformation in the direction orthogonal to the plate. The equation can be deduced from the theory of elasticity, about which I do not know much. For standing wave solutions the time dependence separates into a trigonometric factor sin(ω t) or cos(ω t), and one obtains the eigenvalue equation

    K(∇2)2 u = ω2 u .

  4. The natural basis for the modes consists of products of 1-D modes um(x) for a string satisfying the free-end boundary condition ∂x2 um = 0 at the ends of the string (x = ±1): this in both the x and y directions. This must express the fact that energy and momentum do not flow out at the boundaries.

    The modes satisfy

    d4um/dx4 = km4 um .

    Boundary conditions allow modes with both even and odd parity:

    um = [cos(km)cosh(kmx) + cosh(km)cos(kmx)] / [cosh2(km) + cos2(km)] ,

    tan(km) + tanh(km) = 0 , m even ,

    um = [sin(km)sinh(kmx) + sinh(km)sin(kmx)] / [sinh2(km) + sin2(km)] ,

    tan(km) - tanh(km) = 0 , m odd .

  5. The 2-D modes are not products of 1-D modes but sums of products

    wεmn = um(x)un(y) + ε um(y)un(x) , ε = ± 1 .

    A modern physicist would notice the classical entanglement between the x and y degrees of freedom. The first ε=1 mode is analogous to a symmetric two-boson state and the second ε=-1 mode to an antisymmetric two-fermion state.

  6. The variational ansatz of Ritz was a superposition of these modes (this variational method was actually discovered by Ritz). Ritz minimized the expectation value of the Hermitian operator (∇2)2 in this state and obtained an approximation for the frequencies, accurate to within 1 per cent.
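
The transcendental conditions tan(km) ± tanh(km) = 0 quoted above are easy to solve numerically. A sketch using bisection on a single branch of tan (the bracketing intervals are standard choices for the first roots):

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Simple bisection on [a, b], assuming f changes sign there."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

even = lambda k: math.tan(k) + math.tanh(k)   # even-parity condition
odd  = lambda k: math.tan(k) - math.tanh(k)   # odd-parity condition

# Bracket the first roots on branches of tan where the sign change is clean.
k_even = bisect(even, math.pi / 2 + 1e-6, math.pi)
k_odd  = bisect(odd,  math.pi + 1e-6, 1.5 * math.pi - 1e-6)
print(f"first even root k = {k_even:.6f}")   # ~ 2.365020
print(f"first odd  root k = {k_odd:.6f}")    # ~ 3.926602
```

The resulting km fix the 1-D eigenvalues km^4 and hence the mode frequencies; since tanh(k) → 1 quickly, the higher roots approach (m + 1/4)π ever more closely.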

Unfortunately, 4-D geometry does not give rise to this kind of equation: time and space do not play democratic roles. The TGD inspired vision would be different. Magnetic flux tubes and even strings could be the fundamental objects as far as biology and consciousness are concerned. The acoustic realization of the 12-note scale would be a secondary one. Even the genetic code would have a fundamental realization at the level of dark nuclear physics, and the chemical realization of the genetic code would be a secondary realization.

Why 12-note scale?

Why am I convinced that the 12-note scale should be so important?

  1. The mysterious fact about music experience is that frequencies whose ratios are rationals are somehow special. People with absolute pitch prefer the Pythagorean scale with this property as aesthetically pleasing. The Pythagorean scale is obtained by forming the 3^k multiples of the fundamental and by dividing by a suitable power 2^m of 2 to get a frequency in the basic octave. This scale appears in the TGD inspired model for music harmonies, which as a byproduct led to a model of the genetic code predicting correctly the numbers of DNA codons coding for a given amino-acid. The appearance of powers of 2 and 3 suggests 3-adicity and 2-adicity. Furthermore, rationals correspond to the lowest evolutionary level defined by the hierarchy of algebraic extensions of rationals.

    This gives excellent reasons to ask whether the 12-note scale could be realized in some physical system. One might hope that this system could be somehow universal. A geometric realization in terms of a wave equation would be the best that one could have.

  2. The model of harmony is realized in terms of Hamilton cycles assignable to the icosahedron and tetrahedron. Hamilton cycles on the icosahedron are closed paths going through all 12 points of the icosahedron and thus can define a geometric representation of the Pythagorean scale. The rule is that the curve connects only nearest points of the icosahedron, and each step corresponds to a scaling of frequency by 3/2 plus a reduction to the basic octave by division by a suitable power of 2. The triangles of the icosahedron define the 20 chords allowed for a given harmony, and one obtains 256 basic harmonies characterized by the symmetries of the cycle: the symmetry group can be the cyclic group Z6, Z4 or Z2, or the reflection group Z2 acting on the icosahedron.

    Bioharmonies are obtained by combining the Z6, Z4 and Z2 harmonies of either type. One obtains 20+20+20 = 60 3-chords defining the bio-harmony. One must add the tetrahedral harmony with 4 chords in order to obtain 64 chords. It turns out that this corresponds to the genetic code under rather mild assumptions. DNA codons with 3 letters could correspond to 3-chords, with letter triplets mapped to 3-chords. Amino-acids would correspond to the orbits of a given codon on the icosahedron under one of the symmetry groups involved.
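
The Pythagorean construction described in the first item can be sketched directly: scale repeatedly by 3/2 and divide by a power of 2 to stay in the basic octave. Twelve fifths nearly close the circle; the mismatch is the Pythagorean comma 3^12/2^19:

```python
from fractions import Fraction

def pythagorean_scale(n=12):
    """Generate n notes by the circle of fifths, reduced to the octave [1, 2)."""
    notes = []
    f = Fraction(1)
    for _ in range(n):
        notes.append(f)
        f *= Fraction(3, 2)   # up a fifth
        while f >= 2:
            f /= 2            # back into the basic octave
    return sorted(notes)

for r in pythagorean_scale():
    print(r)

# After 12 fifths one returns not to 1 but to the Pythagorean comma:
comma = Fraction(3**12, 2**19)
print("comma =", comma, "≈", float(comma))  # 531441/524288 ≈ 1.0136
```

All twelve ratios are of the form 3^k/2^l, exactly the "powers of 3 divided by powers of 2" structure invoked in the 3-adicity and 2-adicity remark above.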

How to realize 12-note scale at fundamental level universally?

How could one realize the 12-note scale at the fundamental level - that is, in terms of 4-D geometry? The realization should also be universal: its existence should not depend on special properties of the physical system. Vibrating strings provide the simplest manner to realize the 12-note scale. Harmonics do not however allow its realization: they are in higher octaves and define only the color of the note. There are actually two realizations.

The simplest realization relies on the analogy with piano.

  1. The string of a piano corresponds to a magnetic flux tube/associated fermionic string, and the frequency of the note would be determined by the length of the flux tube. The quantization of the length as certain rational multiples of the p-adic length scale gives rise to the 12-note scale. A tensor network would be like a piano, with the flux tubes of the network with quantized lengths defining the strings of the piano.

  2. Why would the length of the flux tube defining the fundamental frequency correspond to a frequency of the Pythagorean scale? Could this be due to the preferred extremal property realizing SH and posing very strong conditions on the allowed space-time surfaces and on the 3-surfaces at their ends at the boundaries of causal diamonds? If so, the 12-note scale would be part of fundamental physics!

    The rational multiples f(m,n) = (m/n)f0 of the fundamental f0 with m/n ≤ 2 (a single octave) are in a preferred position mathematically, since superpositions of waves with these frequencies can be represented as superpositions of suitable harmonics of the scaled-down fundamental f1 = f0/n. For the Pythagorean scale m/n = 3^k/2^l, the new fundamental is some "inverted octave" f1 = f0/2^lmax of the fundamental, and the allowed harmonics are of the form m = 2^r 3^k.
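
The remark above can be illustrated concretely: waves with rational frequency ratios (m/n)f0 are all harmonics of the common fundamental f1 = f0/N, where N is the least common multiple of the denominators. A small sketch with a few Pythagorean ratios (the chosen ratios are just examples):

```python
from fractions import Fraction
from math import lcm

# A few Pythagorean ratios within one octave.
ratios = [Fraction(1), Fraction(9, 8), Fraction(81, 64), Fraction(3, 2), Fraction(2)]

# Common fundamental f1 = f0 / N with N the lcm of the denominators.
N = lcm(*(r.denominator for r in ratios))
harmonics = [int(r * N) for r in ratios]  # each note is an integer harmonic of f0/N

print("common fundamental: f0 /", N)
print("harmonic numbers:", harmonics)
```

Here N = 64, and the notes become the harmonics 64, 72, 81, 96, 128 of f0/64: a superposition of Pythagorean notes is literally a superposition of harmonics of a single lower fundamental.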

The second realization would be dynamical, based on the analogy with string instruments.
  1. String instruments allow one to realize the 12-note scale by varying the length of the vibrating string. The note of the scale corresponds to the fundamental frequency of the shortened portion of the string, which is plucked. Why should the lengths of the shortened strings correspond to the inverses of the frequencies of the 12-note scale? One should have powers of 3 divided by powers of 2 to get a frequency in the fundamental octave. Could the p-adic length scale hypothesis, generalized to length scales coming as powers of square roots of small primes, help?

  2. Strings bring in mind magnetic flux tubes connecting partonic 2-surfaces. They behave in good approximation like strings and are actually accompanied by genuine fermionic strings and the corresponding string world sheets. Flux tubes play a fundamental role in living matter in TGD Universe. Flux tubes carrying dark matter identified as large heff = n× h phases would serve as space-time correlates for negentropic entanglement and give rise to tensor nets with partonic 2-surfaces as nodes and flux tubes connecting them (see this). Could magnetic flux tubes or the associated fermionic strings provide the instruments using the Pythagorean scale?

    Partonic 2-surfaces and string world sheets dictate the space-time surface by the strong form of holography (SH) implied by the strong form of general coordinate invariance. It is quite possible that not all configurations of partonic 2-surfaces and string world sheets allow SH, that is, realization as a space-time surface: perhaps only flux tubes with lengths corresponding to the Pythagorean scale allow it. For the p-adic counterparts of space-time surfaces the possibility of p-adic pseudo-constants (failure of strict determinism of field equations) makes this possible: the interpretation is as an imagined p-adic space-time surface which cannot be realized as a real space-time surface.

    How could these flux tubes behave like guitar strings? When my finger touches a guitar string, it divides it into two pieces. The analog of this is the appearance of an additional partonic 2-surface between the two existing ones, so that one has two flux tubes connecting the original partonic 2-surfaces to the new one. A change in the topology of 3-space would be involved in this stringy music!

    More precisely, the flux tubes would be closed if they carry monopole magnetic flux: they would begin from the "upper" wormhole throat of wormhole contact A (partonic 2-surface), go along the "upper" space-time sheet to the throat of wormhole contact B, traverse B to the "lower" space-time sheet, return to the "lower" throat of wormhole contact A, and go back to the "upper" throat. Shortening of the string would correspond to the formation of a wormhole contact at some point of this flux tube structure, splitting the flux tube into two pieces.

  3. Another realization could be in terms of the quantization of the distance between partonic 2-surfaces connected by flux tubes and the associated strings in a given p-adic length scale, which by the p-adic length scale hypothesis would correspond to a power of the square root of 2, so that octaves and possibly also half octaves would be obtained (note that the half octave corresponds to the tritone, regarded by the church as an invention of the devil!). Also now the justification would be in terms of SH.

For background see the chapter Quantum model for hearing. See also the article What could be the physical origin of Pythagorean scale?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, October 24, 2016

Brain metabolic DNA as an indication for genomic R&D based on dark DNA

I learned a lot in the SSE-2016 conference. For instance, the notion of brain metabolic DNA (BMD), about which Antonio Giuditta had a nice poster, was new to me. TGD suggests an active R&D-like process driving genetic evolution, and I have been a little disappointed since epigenetics is too passive in this respect. BMD would fit with my crazy speculations.

I try to summarize my first impressions about brain metabolic DNA.

  1. The profiles for both the repetitive and non-repetitive fractions differ from native DNA, and for learning rats they differ from those for control rats. Stress and learning situations induce this process, and it occurs at least in the brain.

  2. Wikipedia lists DNA replication and repair as the basic mechanisms of DNA synthesis. These would yield essentially a copy of the native DNA. Does this mean that some new mechanism could be responsible for the synthesis?

I have worked with two new mechanisms of DNA synthesis emerging from TGD based new biophysics, for which the magnetic body (MB), consisting of magnetic flux tubes carrying dark matter identified as large heff=n× h (n integer) phases, is crucial.

These new phases of ordinary particles, identifiable as dark matter, would make possible macroscopic quantum coherence on much longer length scales than usual for large values of n, since the Compton length is proportional to heff. Large heff would make living matter a macroscopic quantum system. Large heff phases would be created at quantum criticality: the large values of Compton lengths would be correlates for long range correlations and quantum fluctuations. Quantum criticality is indeed emerging as a basic aspect of living matter.

  1. The experiments of Montagnier et al suggest that remote replication of DNA, involving sending the information about the template strand using light, is possible. Peter Gariaev's group has made similar claims much earlier. Together with Peter Gariaev we published an article about remote replication of DNA in Huping Hu's journal DNADJ before the work of Montagnier (the article is also at my homepage).

    The idea is that what I call dark photons (see below) carry genetic information. Dark photons would have energies in the visible and UV range and could transform to biophotons with the same energy. This would make them bio-active, since biomolecules have their transition energy spectrum in this range. The challenge is to understand the details of the information transfer mechanism. What would be needed is the regeneration of DNA or dark DNA at the receiver end using this information. How this precisely occurs is of course only a subject of speculation.

    This mechanism as such would however not apply to the present situation, since the ordinary DNA could not serve as a template.

  2. The notion of dark DNA is one of the key new physics notions of TGD, and the transcription of dark DNA to ordinary DNA could be involved in the generation of BMD.

    1. The proposal is that the genetic code has a realization at the level of "dark" nuclear physics (see this). Dark DNA would correspond to dark proton sequences having an interpretation as dark nuclei. Darkness would mean that the protons are in a phase with a non-standard value of Planck constant given by heff=n× h, with integer n which can vary. The value of heff serves as a kind of intelligence quotient, since it tells the time scales of long term memory and intentional action and also the size scale of the system. It could serve as the intelligence quotient of cells, and pyramidal neurons generating EEG as Josephson radiation (the frequency of Josephson radiation is f= 2eV/heff in terms of the membrane potential V) could be the neuronal intellectuals.

    2. Dark DNA could accompany ordinary DNA as parallel dark proton strands. The negative phosphate charge would neutralize the positive charge of dark protons so that the system would be classically stable. The ability to pair in this manner would quite generally select preferred biomolecules as winners in evolution.

    3. For instance, the transcription of dark DNA to ordinary DNA is possible: dark DNA would serve as a template for the ordinary DNA codons. Dark variants of biomolecules could make possible R&D in living matter. Evolution would not proceed by random mutations plus selection but would be intentional and more analogous to what occurs in R&D laboratories.

    4. If dark DNA strands were used as templates in the generation of BMD, one could understand why the BMD of learning rats differs from the native DNA. Primarily the dark DNA would be modified as a response to learning, and the modification would be transcribed into a modification of the ordinary DNA.

      The interesting question is whether these changes could also be transferred to the germ cells, say by sending the information in the form of light and generating copies of the newly generated DNA portions replacing the original ones.
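The order of magnitude implied by the Josephson relation f= 2eV/heff mentioned above is easy to check numerically. A back-of-the-envelope sketch (the target EEG frequency of 10 Hz is just an illustrative choice):

```python
# Ordinary Josephson frequency for a membrane potential V ~ 0.06 V,
# and the value of n = heff/h needed to bring it to the EEG range.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

V = 0.06                       # membrane potential, V
f_ordinary = 2 * e * V / h     # ordinary Josephson frequency, ~3e13 Hz (IR)
f_eeg = 10.0                   # an illustrative EEG frequency, Hz
n = f_ordinary / f_eeg         # required heff/h, ~3e12
print(f"f(h) = {f_ordinary:.2e} Hz, n = heff/h ~ {n:.1e}")
```

For the ordinary Planck constant the Josephson frequency is thus in the infrared; an heff/h of order 10^12 would be needed to bring it down to EEG frequencies.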

See the chapter Homeopathy in many-sheeted space-time. See also the article Comments about the representations in SSE-2016 conference about consciousness, biology, and paranormal phenomena.


Anesthetics again

The writing of the summary about the SSE-2016 conference forced me to reconsider the model for anesthetics in light of the vision of the cell membrane as a self-loading battery relying on the TGD based model for Pollack's exclusion zones (EZs) in terms of heff/h=n phases.

First however a philosophical remark.

  1. According to the behavioristic definition of consciousness, the abilities to respond to sensory input and to perform motor actions are essential aspects of consciousness. In my opinion these abilities correspond only to a particular type of consciousness, and consciousness might be possible even without neural activity (OBEs and NDEs). In any case, the inability to generate nerve pulse patterns would be an essential aspect of what we call loss of consciousness. This happens if the neuronal membrane is hyperpolarized.

  2. Hyperpolarization means a reduced rate of spontaneous nerve pulse generation. This would be achieved if microtubules gain additional negative charge so that the radial component of the microtubule electric field increases. Hence the interaction of anesthetics with the microtubules should generate this negative charge. One possibility is that in the presence of the anesthetic the Pollack effect generates negatively charged exclusion zones (EZs). The TGD based model assumes that the protons are transferred to the magnetic flux tubes as dark protons and perhaps end up in the exterior of the cell membrane, where they transform to ordinary protons. This would induce hyperpolarization. The neutral anesthetic atoms or molecules in turn could be transferred to the microtubules along flux tubes.

Consider next a model for the cell membrane.
  1. In the TGD Universe cell membranes could be generalized Josephson junctions. The energy of generalized Josephson photons (dark, with energies in the bio-photon range) would be the difference of the cyclotron energies for the flux tubes at the two sides of the membrane plus the ordinary Josephson energy. Generalized Josephson photons would take care of the communication of sensory data to the MB.

    Unless the cyclotron energies at the two sides of the membrane are the same, the new contribution would dominate in the communications to the MB for large values of heff, since the cyclotron energy is proportional to heff; the neuronal contribution would represent a frequency modulation allowing the coding of nerve pulse patterns into a kind of "whale's song". For smaller values of heff the ordinary Josephson energy would dominate.

    There is a temptation to assume that the value of heff serves as a kind of intelligence quotient of the cell. The frequency and energy scales of the analog of EEG would serve the same purpose. For instance, pyramidal neurons responsible for EEG would represent the intellectual elite of the brain, and ordinary cells could have a much smaller value of heff, say by a factor 2-10 smaller than for pyramidal cells, so that the generalized Josephson energy would be of the same order of magnitude as the ordinary Josephson energy and in the IR range.

  2. Generalized Josephson photons with biophoton energies would also generate Pollack's EZs by ionizing one proton from a hydrogen-bonded pair of water molecules. The reduction of the membrane potential below the threshold for nerve pulse generation could reduce the energy of the Josephson photons below the threshold for generating Pollack's EZs, and the neuronal membrane would cease to be a self-loading battery: this would replace ionic Josephson currents with ohmic currents through the cell membrane and generate the nerve pulse.

    The objection is that for low values of heff the generalized Josephson energy reduces to the ordinary one in the IR range and for high values to the cyclotron energy in the visible-UV range. It is known that IR photons generate EZs in the experiments of Pollack. The process could occur in two steps, with cyclotron radiation - perhaps from the MB - kicking hydrogen-bonded water molecules to a state in which the proton is almost ionized, so that the IR radiation would take care of the ionization. The mechanism generating EZs cannot be different for ordinary cells and neurons. Either the notion of generalized Josephson junction must be given up, or in the case of neurons the glial cells accompanying axons generate the IR radiation giving rise to EZs inside axons.

  3. It is also attractive to see at least the ordinary cell membrane as a self-loading battery. The generation of Pollack's EZs with negative charge, with the dark proton charge at the magnetic flux tubes of the associated MB, could make the cell a self-loading battery.

    Generalized Josephson photons from the cell membrane or cyclotron photons could generate EZs by kicking protons to dark protons at the flux tubes of the MB of the cell. The energy must be in some critical range for this to happen; for too small energies the process stops. Besides the ionic charge distributions, the EZs and the delocalized dark proton charges at flux tubes extending beyond the cell interior would be responsible for the resting potential.

    EZs are not expected to be completely stable. The heff→ h phase transition would bring the dark protons back as ordinary protons, destroying EZs and reducing the magnitude of the membrane potential. There could be a competition between the generation of EZs and their destruction by the heff→ h phase transition.

  4. This picture is enough to explain the effect of anesthetics. Anesthetics at microtubules would generate a negative charge assignable to additional EZs, thus increasing the magnitude of the membrane potential. This would imply a stable hyperpolarization preventing the generation of nerve pulses.

What about the generation of nerve pulses in this framework? I have suggested a TGD based model for the nerve pulse relying on the idea of the cell membrane as an array of Josephson junctions consisting of membrane proteins (channel and pump proteins), but the model leaves open what exactly generates the nerve pulse. The expectation has however been that microtubules play a key role in the generation of the nerve pulse. A wave of positive charge propagating along the microtubule could induce the reduction of the membrane potential and lead to the generation of the nerve pulse as a secondary wave.
  1. The propagation of the heff→ h phase transition followed by its reversal along the axon interior could serve as a weak control signal inducing nerve pulse propagation at quantum criticality. This phase transition could be assignable to microtubules. The battery would temporarily discharge during the nerve pulse. If glial cells generate the EZs making axons glial-cell-loaded batteries, then the return to the normal state after the nerve pulse would be made possible by the presence of glial cells.

  2. During the nerve pulse either the generation of EZs ceases and/or the existing EZs suffer an heff-reducing phase transition, so that the flux tubes shorten, the positive dark charge returns to the EZs, and the cell membrane potential is reduced. The generation of the nerve pulse is usually modelled using ohmic ionic currents, which suggests that quantum coherence is lost by a reduction of heff, which is predicted to be proportional to the ion mass so that the cyclotron energy spectrum is universal and in the visible-UV range of bio-photons.

  3. The nerve pulse could be a "secondary wave" induced by a wave of positive charge propagating along the microtubule. This wave of positive charge would rather naturally result from the reduction heff→ h and the return back to heff. A pair of phase transitions dark-ordinary-dark would propagate along the microtubule. The unidirectionality of the propagation would be forced by the fact that it can begin only from the axonal hillock. The axonal hillock contains a large number of voltage-gated ion channels, which would serve as generalized Josephson junctions in the TGD framework.

  4. What can one conclude about the development of the total charge during the time development of the membrane potential V(t)? A nerve pulse corresponds to a certain segment of the axon and lasts for a few milliseconds. The cell membrane voltage goes from the resting potential V(t=0)= Vrest to approximately V(t=T)= -Vrest and returns back. The total charge in the cell interior defines the value of the electric field E at the interior side of the cell membrane. Approximating the interior as a conductor, one has in good approximation V= Ed= Qcelld/4π R2 in spherical geometry and V= Ed= (dQtot/dl) d/2π R in the cylindrical geometry of the axon (in units with ε0=1). Here Qtot is the charge of the piece of axon at which the nerve pulse is located. The total charge is the sum of the microtubular charge Qmt serving as a control parameter and the total ionic charge QI changing due to the presence of ohmic ionic currents during the pulse (ionic currents are Josephson currents except during the nerve pulse).

    To get some quantitative grasp, let us idealize the situation by assuming that during the nerve pulse the negative microtubular charge Qmt(0)<0 goes to Qmt(T)=0 at V(T)= -Vrest (EZs disappear totally) and returns back to its original value as the phase transition restoring the value of heff occurs.

    One has Qtot(0)=Qmt(0)+QI(0) before the nerve pulse. At V= -Vrest one has Qtot(T)=-Qtot(0) with Qtot(T)=QI(T), which gives -Qtot(0) = QI(T). This gives Qmt(0)= -QI(T)-QI(0).

    What can one say about the magnitude of Qmt? If this charge serves a control purpose and if the system is kicked off from quantum criticality, the change of Qmt need not be large, so that no large modifications of the ordinary model of nerve pulses are needed. The negative microtubular charge is partially due to the GTPs along the microtubule to which the EZs are associated. The value of the resting potential, of order .06 V at the threshold for nerve pulse generation, and estimates for the linear charge densities dQI(0)/dl, dQI(T)/dl, and dQmt(0)/dl would allow testing of the model.
    The heff→ h phase transition outside quantum criticality would take place in millisecond time scale.
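Restoring SI units, the cylindrical formula above reads V = (dQ/dl) d/(2π ε0 R), which can be solved for the linear charge density needed to sustain the resting potential. A rough numerical sketch (the axon radius and membrane thickness are illustrative textbook-scale values, not taken from the text above):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
e = 1.602176634e-19       # elementary charge, C

V = 0.06                  # resting potential magnitude, V
R = 0.5e-6                # axon radius, m (illustrative value)
d = 5e-9                  # membrane thickness, m (illustrative value)

# V = lam * d / (2*pi*eps0*R)  =>  lam = 2*pi*eps0*R*V/d
lam = 2 * math.pi * eps0 * R * V / d   # linear charge density, C/m
per_micron = lam / e * 1e-6            # elementary charges per micrometer
print(f"dQ/dl ~ {lam:.2e} C/m, ~ {per_micron:.0f} e per micrometer")
```

A few thousand elementary charges per micrometer is thus the scale at which the microtubular control charge Qmt would have to operate.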

The distinctions between neurons and ordinary cells allow one to invent objections against the proposed scenario.
  1. The ordinary cell membrane should act as a self-loading battery with Josephson radiation generating Pollack's EZs. Axonal microtubules are missing, but the cytoskeleton, also consisting of microtubules, is present. Inside the cell soma the microtubules meet the cell membrane transversally. There is also a T-shaped antenna-like structure involving microtubules, whereas ordinary neurons have axonal microtubules. Also now a microtubular positive charge generated by the heff→ h phase transition could induce the reduction of the membrane potential.

  2. Why does the analog of the nerve pulse not take place also now? In the case of cancer cells the membrane potential is reduced and can even vanish, and one might think that the lack of recovery is due to the absence of glial cells taking care that EZs are generated. For too low Josephson energies the self-loading would stop, and due to spontaneously occurring heff→ h phase transitions the membrane potential would be gradually reduced.

    In the case of neurons the heff→ h phase transition would occur fast. The transition away from quantum criticality could cause this, since long range quantum fluctuations would disappear. The value of the membrane potential, or the difference between the neuronal and glial membrane potentials, could serve as a critical parameter changing as the membrane potential is reduced. The quantum criticality of the ordinary cell membrane would be analogous to self-organized quantum criticality; that of the neuronal axon to quantum criticality induced by glial cells.

See the chapter Quantum model for nerve pulse. See also the article Comments about the representations in SSE-2016 conference about consciousness, biology, and paranormal phenomena.


Sunday, October 23, 2016

Toponium at 30.4 GeV?

Prof. Matt Strassler tells about a gem found in old data files of the ALEPH experiment by Arno Heister. The 3-sigma bump appears at 30.40 GeV; it could be a statistical fluctuation and probably is. It has been found to decay to muon pairs and b-quark pairs. The particle, which Strassler christens V (for vector), would have spin 1.

Years ago I commented on a candidate for a scaled-down top quark reported by ALEPH: it had mass around 55 GeV, and the proposal was that it corresponds to a p-adically scaled up b quark with an estimated mass of 52.3 GeV.

Could TGD allow the identification of V as a scaled-up variant of some spin 1 meson?

  1. The p-adic length scale hypothesis states that particle mass scales correspond to certain primes p≈ 2k, k a positive integer. Prime values of k are of special interest. Ordinary hadronic space-time sheets would be labelled by the Mersenne prime p=M107=2107-1, and quarks would be labelled by the corresponding integers k.

  2. For low mass mesons the contribution of the color magnetic flux tubes to the mass dominates, whereas for mesons consisting of heavy quarks the heavy quark contribution dominates. This suggests that the large mass of V must result from an upwards scaling of some light quark mass or a downwards scaling of the top quark mass by a power of the square root of 2.

  3. The mass of the b quark is around 4.2-4.6 GeV and the Upsilon meson has mass about 9.5 GeV, so that at most about 1.4 GeV of the total mass would correspond to the non-perturbative color contribution, partially from the magnetic body. The top quark mass is about 172.4 GeV, and p-adic mass calculations suggest k=94 (M89) for the top. If the masses of heavy quark mesons are additive, as the example of the Upsilon suggests, the non-existing top pair vector meson - toponium - would have mass about m= 2× 172.4 GeV =344.8 GeV.

  4. Could the observed bump correspond to a p-adically scaled down version of toponium with k= 94+7=101, which is prime? The mass of this toponium would be 30.47 GeV, consistent with the mass of the bump. If this picture is correct, V would be a premature toponium able to exist for the prime k=101. Its decays to b quark pairs are consistent with this.

  5. Tommaso Dorigo argues that the signal is spurious since the produced muons tend to be parallel to the b quarks in the cm system of Z0. Matt Strassler identifies the production mechanism as a direct decay of Z0, and in this case Tommaso would be right: the direct 3-particle decay Z0→ b+bbar+V would produce a different angular distribution for V. One cannot of course exclude the possibility that Tommaso means that the muon pairs are from decays of V in its own rest frame, in which case they certainly cannot be parallel to the b quarks. An elementary mistake of this kind from a professional particle physicist looks rather implausible. The challenge of the experiment was indeed to distinguish the muon pairs from the muons resulting from b quarks decaying semileptonically, which are highly parallel to the b quarks.

    A further objection of Tommaso is that the gluons should have roughly opposite momenta, so that their fusion seems highly implausible classically, since the gluons tend to be emitted in opposite directions. Quantally the argument does not look so lethal if one thinks in terms of plane waves rather than wave packets. Also fermion exchange is involved, so that the fusion is not a local process.


  6. How would the bump appearing in Z0→ b+bbar+V be produced if toponium is in question? The mechanism would be essentially the same as in the production of the J/Ψ meson by a c+cbar pair. The lowest order diagram would correspond to gluon fusion: both b and bbar emit a gluon, and these could annihilate to a top pair forming the bound state. Do the virtual t and tbar have ordinary masses of 172 GeV or scaled down masses of about 15 GeV? Checking which option is correct would require a numerical calculation and a model for the fusion of the pair to toponium.

    That the momenta of the muons are parallel to those of b and bbar could indeed be understood. One can approximate the gluons, with energy about 15 GeV, as bremsstrahlung almost parallel/antiparallel to the direction of b/bbar, both having energy about 45 GeV in the cm system of Z0. In the cm system they would combine to V with helicity along an axis nearly parallel to the direction defined by the opposite momenta of b and bbar. The V with spin 1 would decay to a muon pair with helicities in the direction of this axis, and since relativistic muons are in question, the momenta would by helicity conservation tend to lie along this axis, as observed.
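The p-adic scaling arithmetic of points 3 and 4 can be checked in a couple of lines: each unit step in k scales a mass by the square root of 2, so going from k=94 to k=101 scales the toponium mass 2× 172.4 GeV down by 2^(7/2):

```python
m_top = 172.4                 # GeV, top quark mass used above
m_toponium = 2 * m_top        # 344.8 GeV, assuming additive quark masses
k_shift = 101 - 94            # seven p-adic scaling steps
m_scaled = m_toponium * 2 ** (-k_shift / 2)
print(f"{m_scaled:.2f} GeV")  # ~30.48 GeV, matching the 30.4 GeV bump
```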

Are there other indications for scaled variants of quarks?
  1. Tony Smith has talked about indications for several mass peaks for the top quark. I have discussed these here in terms of the p-adic length scale hypothesis. There is evidence for a sharp peak in the mass distribution of the top quark in the 140-150 GeV range. There is also a peak slightly below 120 GeV, which could correspond to a p-adically scaled down variant of the t quark with k=93 having mass 121.6 GeV for (Ye=0, Yt=1). There is also a small peak around 265 GeV, which could relate to m(t(95))=243.2 GeV. Therefore the top could appear at least at the p-adic scales k=93, 94, 95. This argument does not explain the peak in the 140-150 GeV range rather near to the top quark mass.

  2. What about the ALEPH anomaly? The value of k(b) is somewhat uncertain; k(b)=103 is one possible value. I have considered an explanation of the ALEPH anomaly in terms of a k=96 variant of the b quark. The mass scaling would be by a factor of 27/2, which would assign to the mass mb=4.6 GeV a mass of about 52 GeV, to be compared with 55 GeV.
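The same one-step scaling by the square root of 2 per unit of k reproduces the other numbers quoted above (a sketch; the reference masses are those used in the text, so the small differences from the quoted 121.6 and 243.2 GeV reflect the rounding of the input mass):

```python
def padic_scaled(mass, dk):
    """Scale a mass by 2**(-dk/2): dk steps in the p-adic index k."""
    return mass * 2 ** (-dk / 2)

m_top = 172.4   # GeV, assigned to k = 94
print(padic_scaled(m_top, 1))    # k = 93: ~121.9 GeV
print(padic_scaled(m_top, -1))   # k = 95: ~243.8 GeV

m_b = 4.6       # GeV, b quark mass used in the text
print(padic_scaled(m_b, -7))     # 7 steps up: ~52.0 GeV (ALEPH: ~55 GeV)
```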
To sum up, the objections of Tommaso Dorigo might well kill the toponium proposal, and the bump is probably a statistical fluctuation. It is however amazing that its mass comes out correctly from the p-adic length scale hypothesis, which does not allow fitting.

For background see New Particle Physics Predicted by TGD: Part I. See also the article Toponium at 30.4 GeV?


Friday, October 21, 2016

Impressions from SSE-2016 conference

I had the opportunity to participate in the SSE-2016 conference held October 13-16 in Sigtuna, Sweden. The atmosphere of the conference was very friendly and inspiring, and it was heartwarming to meet people familiar from past conferences and email contacts. I am grateful to Tommi Ullgren for making the participation possible and taking care of all practicalities, so that I only had to remember to take my passport with me and arrive in Helsinki at the correct time!

The themes of the conference were consciousness, biology, and paranormal phenomena (or the more neutral "remote mental interactions", or even milder "non-locality", used in order not to induce such strong aggressions in skeptics). There were several notable speakers, such as Stuart Hameroff talking about Orch-OR, microtubules, and anesthetics as a royal road to the understanding of consciousness; Anirban Bandyopadhyay talking about his ideas related to music, fractals, and ....; JohnJoe McFadden explaining his electromagnetic theory of consciousness and quantum biology; Rupert Sheldrake talking about morphogenetic fields; etc. Besides the invited lectures and keynote talks, many other very interesting talks were held. Panel discussions helped to see the differences between the various approaches.

Personal face-to-face discussions were highly stimulating. I am rather passive socially, thanks to certain rather traumatic past experiences generating a Pavlov-dog-like conditioning against anything associated with the academic world and a very severe phobia towards professors. Therefore I am grateful to Tommi for serving as a social midwife, making it possible also for me to get involved in these discussions.

Before leaving for Sigtuna I promised on Facebook to give some kind of report about the conference, and now I must fulfill my promise. In the following I summarize some of my impressions about the various talks. For a man of one theory like me, the only manner in which I can get a view of what was presented is by comparing it to my own theory - that is, TGD. Why this strategy is so good is that only the differences need to be detected in order to get a rough overall view. Therefore TGD has at least one purpose for its existence: to make it easier for its developer to learn what others have done!

My perspective is rather narrow: I am a theoretical physicist interested in the quantum physical correlates of consciousness and life and also in paranormal phenomena. Theoreticians are in general skeptical about the theories of others, and I am not an exception. I am basically interested in new phenomena providing challenges for the TGD inspired theory of consciousness and quantum biology. About talks related to measurement technology or medicine I cannot say anything interesting.

Unfortunately, I missed some lectures and had to use the abstracts to get an idea of their contents. Almost as a rule, I comment only on those lectures that I listened to or which had an obvious connection with my own work. I do not even try to be objective and report only my impressions about those talks that induced cognitive resonance.

The page providing the proceedings of SSE-2016 is under construction and will contain the abstracts of various talks.

I will discuss the various representations from the TGD point of view in the article Comments about the representations in SSE-2016 conference about consciousness, biology, and paranormal phenomena.


Sunday, October 09, 2016

Topological condensed matter physics and TGD

There has been a lot of talk about the physics Nobel prize received by Kosterlitz, Thouless, and Haldane. There is an article summarizing the work of KTH in a rather detailed manner. The following is an attempt to gain some idea of the topics involved.

The notions of topological order, topological physics, and topological materials pop up in the discussions. I have worked for almost 4 decades on Topological Geometrodynamics, and it is somehow amusing how little I know about the work done in condensed matter physics.

The pleasant surprise is that topological order seems to have a rather precise counterpart in TGD at the level of fundamental physics; in the standard model it would emerge. In any case, it is clear that condensed matter physicists have taken the lead during the last 32 years, while particle physicists have been wandering in the stringy landscape.

See the article Topological condensed matter physics and TGD.


Still about induced spinor fields and TGD counterpart for Higgs

The understanding of the modified Dirac equation and of the possible classical counterpart of the Higgs field in the TGD framework is not completely satisfactory. The emergence of the twistor lift of Kähler action inspired a fresh approach to the problem, and it turned out that a very nice understanding of the situation emerges.

A more precise formulation of the Dirac equation for the induced spinor fields is the first challenge. The well-definedness of em charge has turned out to be a very powerful guideline in understanding the details of the fermionic dynamics. Although the induced spinor fields also have a part assignable to the space-time interior, the spinor modes at string world sheets determine the fermionic dynamics in accordance with the strong form of holography (SH).

The well-definedness of em charge is guaranteed if the induced spinors are associated with 2-D string world sheets with vanishing classical W boson fields. It turned out that an alternative manner to satisfy the condition is to assume that the induced spinors at the boundaries of string world sheets are neutrino-like and that these string world sheets carry only classical W fields. The Dirac action contains a 4-D interior term and a 2-D term assignable to string world sheets. SH allows the interpretation of the 4-D spinor modes as continuations of those assignable to the string world sheets, so that the spinors at the 2-D string world sheets determine the quantum dynamics.

The twistor lift combined with this picture allows a more detailed formulation of the Dirac action. Well-definedness of em charge implies that charged particles are associated with string world sheets assignable to the magnetic flux tubes of the homologically non-trivial geodesic sphere, and neutrinos with those associated with the homologically trivial geodesic sphere. This explains why neutrinos are so light and why the dark energy density corresponds to the neutrino mass scale, and it also provides a new insight into color confinement.

A further important result is that the formalism works only for imbedding space dimension D=8. This is due to the fact that for D=8 the number of vector components is the same as the number of spinor components of fixed chirality, and corresponds directly to the octonionic triality.

p-Adic thermodynamics predicts elementary particle masses with excellent accuracy without Higgs vacuum expectation: the problem is to understand the fermionic Higgs couplings. The observation that the CP2 part of the modified gamma matrices gives rise to a derivative term mixing M4 chiralities allows one to understand the mass-proportionality of the Higgs-fermion couplings at the QFT limit.

See the article Still about induced spinor fields and TGD counterpart for Higgs or the chapter Higgs or something else of "p-Adic physics".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, October 07, 2016

Boolean algebras, Stone spaces and TGD

The Facebook discussion with Stephen King about Stone spaces led to a highly interesting development of ideas concerning Boolean algebras, Stone spaces, and p-adic physics. I have discussed these ideas already earlier, but the improved understanding of the notion of Stone space helped to make them more concrete. The basic ideas are briefly summarized below.

p-adic integers/numbers correspond to the Stone space assignable to the Boolean algebra of natural numbers/rationals, with p=2 corresponding to ordinary Boolean logic. Boolean logic generalizes to n-valued logics, with prime values of n in a special role. The decomposition of a set into n subsets defined by an element of the n-valued Boolean algebra is obtained by iterating the Boolean decomposition n-2 times. n-valued logics could be interpreted in terms of error correction allowing only those bit sequences which correspond to n < p < 2^k in a k-bit Boolean algebra. Adelic physics would correspond to the inclusion of all p-valued logics in a single adelic logic.

The Stone spaces of p-adics, reals, etc. are huge, and a possible identification (in the absence of any other!) is in terms of a concept of real number assigning to each real/p-adic/... number a fiber space consisting of all units obtained as ratios of infinite primes. As real numbers these are just units but have a complex number theoretic anatomy, and they would give rise to what I have called algebraic holography and number theoretic Brahman = Atman.

For details see the article Boolean algebras, Stone spaces and TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, October 04, 2016

p-Adic physics as physics of cognition and imagination and counterparts of recursive functions

In the TGD Universe p-adic physics is the physics of cognition and imagination, and real physics also carries signatures of the presence of p-adic physics as p-adic fractality: this would explain the unexpected success of p-adic mass calculations. The outcome would be a fusion of real and various p-adic number fields to form adeles. Each extension of rationals giving rise to a finite-dimensional extension of p-adic numbers defines an adele, and there is a hierarchy of adeles defining an evolutionary hierarchy. The better a simulation the p-adic space-time sheet is for the real space-time sheet, the larger the number of common algebraic points. This intuitive idea leads to the notion of monadic geometry, in which a discretization of the causal diamond of the imbedding space is central for the definition of monadic space-time surfaces. These are smooth in both the real and the p-adic sense but involve a discretization by the algebraic points common to the real and p-adic space-time surfaces for some algebraic extension of rationals inducing a corresponding extension of p-adics.

How could this relate to computation? In the classical theory of computation recursive functions play a key role. Recursive functions are defined for integers. Can one define them for p-adic integers? At first glance the only generalization seems to be the allowance of p-adic integers containing an infinite number of powers of p, so that they are infinite as real integers. All functions defined for real integers having a finite number of pinary digits make sense p-adically.

What is completely new is that p-adic integers form a continuum in a well-defined sense, so that one can speak of differential calculus. This would make it possible to pose on recursive functions additional conditions coming from p-adic continuity and smoothness for a given prime p. This would pose strong constraints also in the real sector for integers large in the real sense, since by p-adic continuity the values f(x) and f(x+kp^n) would be near to each other p-adically, and p-adic smoothness would pose additional strong conditions.
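The continuity constraint above can be made concrete: for any polynomial with integer coefficients, arguments differing by a multiple of p^n have values differing by a multiple of p^n. A minimal sketch (the particular polynomial f and the numbers chosen are illustrative, not from the text):

```python
# Sketch of p-adic continuity: if x' = x + k*p^n, then f(x') ≡ f(x) (mod p^n)
# for any integer-coefficient polynomial f, i.e. p-adically close arguments
# give p-adically close values.

def padic_valuation(m: int, p: int) -> int:
    """Largest v with p^v dividing m (m != 0)."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def f(x: int) -> int:
    # any polynomial with integer coefficients works; this one is arbitrary
    return x**3 + 7 * x + 5

p, n, x, k = 3, 4, 10, 2
x_close = x + k * p**n                  # within p-adic distance p^(-n) of x
diff = f(x_close) - f(x)
print(padic_valuation(diff, p) >= n)    # True: the values are p-adically close
```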

How could one map a p-adic recursive function to its real counterpart? Does one just identify p-adic arguments and values as real integers, or should one perform something more complex? The problem is that this correspondence is not continuous. Canonical identification, whose simplest form is I: x_p = ∑_n x_n p^n → ∑_n x_n p^(-n) = x_R, would however relate p-adic to real arguments continuously. Canonical identification has several variants, typically mapping small enough real integers to p-adic integers as such and large enough integers in the same manner as I. In the following the consideration is restricted to I.
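For a p-adic integer given by finitely many pinary digits, I is a finite sum. A minimal sketch (the digit-list representation, least significant digit first, is an assumption of the example):

```python
# Canonical identification I: sum x_n p^n  ->  sum x_n p^(-n),
# for a p-adic integer given as a finite list of pinary digits [x_0, x_1, ...].
from fractions import Fraction

def canonical_identification(digits, p):
    """Map x_0 + x_1*p + x_2*p^2 + ... to x_0 + x_1/p + x_2/p^2 + ... exactly."""
    return sum(Fraction(x, p**n) for n, x in enumerate(digits))

# 3-adic example: 2 + 1*3 has digits [2, 1] and maps to 2 + 1/3 = 7/3
print(canonical_identification([2, 1], 3))   # -> 7/3
```

Note that the image is computed with exact rational arithmetic, so no rounding enters before a final pinary cutoff.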

Basically, one would have a p-adic valued recursive function f_p(x_p) with a p-adic valued argument x_p. One can assign to f_p a real valued function of a real argument - call it f_R - by mapping the p-adic argument x_p to its real counterpart x_R and the value y_p = f_p(x_p) to its real counterpart y_R: f_R(x_R) = I(f_p(x_p)) = y_R. I have called functions obtained in this manner p-adic fractals: the fractality reflects directly p-adic continuity.
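The construction of f_R from f_p can be sketched as follows; the particular function f_p and the digit cutoff below are illustrative assumptions, not taken from the text:

```python
# Sketch: real counterpart f_R of a p-adic function f_p, obtained by applying
# canonical identification I to the value: f_R(x_R) = I(f_p(x_p)).
from fractions import Fraction

def I(digits, p):
    """Canonical identification on a pinary digit list (least significant first)."""
    return sum(Fraction(x, p**n) for n, x in enumerate(digits))

def to_digits(m, p, n_digits):
    """First n_digits pinary digits of a non-negative integer."""
    return [(m // p**n) % p for n in range(n_digits)]

def f_R(m: int, p: int, n_digits: int = 16) -> Fraction:
    f_p = lambda x: x * x + 1        # illustrative recursive function
    return I(to_digits(f_p(m), p, n_digits), p)

# 2-adic: f_p(3) = 10 = 2 + 2^3, so f_R(3) = 1/2 + 1/8 = 5/8
print(f_R(3, 2))   # -> 5/8
```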

f_R could be 2-valued. The reason is that the p-adic numbers x_p = 1 and x_p = (p-1)(p+p^2+...) are both mapped to the real unit, and one can have f_p(1) ≠ f_p((p-1)(p+p^2+...)). This is a direct analog of 1 = .999... for the decimal expansion. This generalizes to all p-adic integers finite as real integers: the p-adic arguments (x_0, x_1, ..., x_n, 0, 0, ...) and (x_0, x_1, ..., x_n - 1, p-1, p-1, ...) are mapped to the same real argument x_R. Using a finite pinary cutoff for x_p this ceases to be a problem.
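That I maps (p-1)(p+p^2+...) to the real unit can be checked numerically from the truncations (p-1)(p + ... + p^N), whose images are 1 - p^(-N) and so converge to I(1) = 1. A small sketch, using p = 5 as an arbitrary example:

```python
# The two-valuedness: 1 and (p-1)(p + p^2 + ...) share the image I = 1.
# Check that truncations (p-1)(p + ... + p^N) have images approaching 1.
from fractions import Fraction

def I_truncated_tail(p: int, N: int) -> Fraction:
    """I applied to (p-1)(p + p^2 + ... + p^N): digits x_n = p-1 for 1 <= n <= N."""
    return sum(Fraction(p - 1, p**n) for n in range(1, N + 1))

p = 5
for N in (1, 3, 10):
    print(I_truncated_tail(p, N))                    # approaches 1 as N grows
print(1 - I_truncated_tail(p, 10) == Fraction(1, 5**10))  # True
```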

Recursion plays a key role in the theory of computation, and it would be nice if it generalized in a non-trivial manner to the realm of p-adic integers (or general p-adic numbers).

  1. From Wikipedia one finds a nice article about primitive recursive functions. The basic primitive recursive functions are very simple: the constant function, the successor function, and the projection functions. From these more complex recursive functions are obtained by composition and primitive recursion. These functions are trivially recursive also in the p-adic context and satisfy the conditions of p-adic continuity and smoothness. Composition respects these properties too. I would guess that the same holds also for primitive recursion.

    It would seem that there is nothing new to be expected in the realm of natural numbers if one identifies p-adic integers as real integers as such. The situation changes if one uses canonical identification mapping p-adic integers to real numbers (for instance, 1+2+2^2 → 1+1/2+1/4 = 7/4 for 2-adic numbers). One could do computations using p-adic integers and perhaps p-adic differential calculus and only then map the results to real numbers, so that analytic computations would become possible instead of pure numerics. This could be a very powerful tool.

  2. One can also consider real valued recursive functions and functions having values in (not only algebraic) extensions of rationals. The exponential function is an interesting primitive recursive function in the real context: in the p-adic context exp(x) exists p-adically only if x has p-adic norm smaller than 1. exp(x+1) does not exist as a p-adic number unless one introduces an extension of p-adic numbers containing e: this is necessary in physically interesting p-adic group theory. exp(x+kp) however exists as a p-adic number. The composition of exp, restricted to p-adic numbers with norm smaller than 1, with the successor function does not exist. An extension of rationals containing e is needed if one wants both the successor axiom and the exponential function.

  3. The fact that most p-adic integers are infinite as real numbers might pose problems, since one cannot perform infinite sums numerically. p-Adic continuity would of course allow approximations using a finite number of pinary digits. The real counterparts of the functions involved, obtained using canonical identification, would be p-adic fractals: this is something highly non-trivial physically.

    One could also code the calculations at a higher level of abstraction by performing operations on functions rather than numbers. The finite arithmetics would be done for the labels of functions, using tables expressing the rules for the various operations on functions (such as multiplication). One would build a function basis, form tables for the operations between the basis functions in analogy with the multiplication table of an algebra, computerize the operations using these tables, and perform the pinary cutoff only at the end. Rounding errors would emerge only at this last step.
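The convergence condition for the p-adic exponential in item 2 above can be made concrete from the p-adic valuations of the series terms x^n/n!: by Legendre's formula v_p(n!) = (n - s_p(n))/(p-1), so the term valuations n·v_p(x) - v_p(n!) grow without bound exactly when v_p(x) ≥ 1, i.e. when |x|_p < 1. A sketch (p = 5 is an arbitrary choice):

```python
# Why exp(x) converges p-adically only for |x|_p < 1: compare the valuations
# of the terms x^n/n! for v_p(x) = 1 (convergent) and v_p(x) = 0 (divergent).

def vp_factorial(n: int, p: int) -> int:
    """Legendre's formula: v_p(n!) = sum over i >= 1 of floor(n / p^i)."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

def term_valuation(n: int, vx: int, p: int) -> int:
    """p-adic valuation of x^n/n! when v_p(x) = vx."""
    return n * vx - vp_factorial(n, p)

p = 5
print([term_valuation(n, 1, p) for n in (5, 25, 125)])   # grows: converges
print([term_valuation(n, 0, p) for n in (5, 25, 125)])   # falls: diverges
```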

The unexpected success of deep learning has been conjectured to reflect the simplicity of the physical world: only a small subset of recursive functions is needed in computer simulation. The real reason could be p-adic physics, which poses for each value of p very strong additional constraints on recursive functions, coming from p-adic continuity and differentiability. p-Adic differential calculus would be possible for the p-adic completions of integers and could profoundly simplify the classical theory of computation.

For background see the article TGD Inspired Comments about Integrated Information Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.