
Saturday, November 01, 2014

Large parity breaking in heavy ion collisions?

Ulla Matfolk reminded me of an old ScienceDaily article (see this) telling about the discovery of large parity breaking effects at RHIC in collisions of relativistic heavy ions at energies at which QCD suggests the formation of quark gluon plasma. Something exotic is observed, but it seems to be something different from quark gluon plasma: long range correlations not characteristic of a plasma phase are present, and the particle production does not look like black body radiation. Similar findings have been made at LHC, also for proton-proton collisions. This suggests new physics, and M89 hadron physics is the TGD inspired candidate for it. In any case, I took the article as hype when I read it four years ago.

Now I read the article again and started to wonder on what grounds the authors claim large parity violation. What they claim to have observed are magnetic fields in which u and d quarks with charges 2/3 and -1/3 move in opposite directions along the magnetic field lines (flux tubes in TGD). They assign these motions to the presence of strong parity breaking, much stronger than predicted by the standard model.

1. Instanton density as origin of parity breaking

What does TGD say? In TGD magnetic fields would form flux tubes; even flux tubes carrying monopole flux are possible. The findings suggest that the magnetic field was accompanied by an electric field and that both were parallel to the flux tubes and to each other in an average sense. Helical magnetic and electric fields, parallel in an average sense, could be associated with flux tubes in TGD.

The helical classical field patterns would break the parity of the ground state. The instanton density for the Kähler field, essentially E·B, measuring the non-orthogonality of E and B, would serve as a measure for the strength of the parity breaking occurring at the level of the ground state and thus totally different from weak parity breaking. u and d quarks, having opposite signs of em charge, would move in opposite directions under the electric force.

2. The origin of instanton density in TGD Universe

What is the origin of these non-orthogonal magnetic and electric fields? Here I must dig down to a twenty years old archeological layer of TGD. Already in the seventies an anomalous creation of e+e- pairs having axion-like properties was observed in heavy ion collisions near the Coulomb wall. The effect was forgotten since it was not consistent with the standard model. The TGD explanation is in terms of pairs resulting from the decay of a lepto-pion, formed as a bound state of a color excited electron and positron and created in the strong non-orthogonal electric and magnetic fields of the colliding nuclei.

Objection: Color excited leptons do not conform with the standard model view of color. In TGD this is not a problem, since colored states correspond to partial waves in CP2, and both leptons and quarks can move in higher color partial waves, usually with much higher mass.

A non-vanishing instanton density would mean that the non-orthogonal E and B created by the colliding nuclei appear at the *same* space-time sheet, so that a coherent instanton density E·B is created and gives rise to the generation of pairs. A large value of E·B means large parity breaking at the level of the ground state. One expects that in most collisions the fields of the colliding nuclei stay at different space-time sheets and therefore do not interfere directly (only their effects on charged particles sum up), but that with some probability the fields can enter the same space-time sheet and generate physics not allowed by the standard model.

Objection: The standard model predicts extremely weak parity breaking effects: this is due to the massivation of weak bosons; for massless weak bosons the parity breaking would be large. Indeed, if the non-orthogonal E and B are at different space-time sheets, no instantons are generated.

Objection: The existence of a new particle in the MeV scale would dramatically change the decay widths of weak bosons. The TGD solution is that colored leptons are dark in the TGD sense (heff = n×h, n > 1). Large heff would make weak bosons effectively massless below the scaled up Compton length of weak bosons, proportional to heff, and large parity breaking could be understood also in the "conventional" manner.

3. Strong parity breaking as signature of dark variant of M89 hadron physics

This picture would apply also now and leads to an increased understanding of M89 hadron physics, about which I have been talking for years and which is a TGD prediction for LHC. Very strong non-orthogonal E and B fields would be most naturally associated with colliding protons rather than nuclei. The energy scale is of course much, much higher than in the heavy ion experiments. Instanton-like space-time sheets, where the E and B of the colliding protons combine, could be formed as magneto-electric flux tubes (a priori this of course need not occur, since the fields can remain at different space-time sheets).

The formation of axion-like states as pairs of color excited quarks is expected to be possible. M89 hadron physics is a scaled up copy of the ordinary M107 hadron physics with a mass scale which is higher by a factor 512. The natural possibility is pions of M89 hadron physics but with large heff/h ≈ 512, so that the size of M89 pions could increase to the size scale of ordinary hadrons! This would explain why heavy ion collisions involve energies in the TeV range appropriate for M89 hadrons, and thus Compton scales of order the weak scale, whereas the size scales are those associated with the QCD plasma of M107 hadron physics, larger by a factor 512. This brings to mind a line from a biblical story: the hands are Esau's hands but the voice is Jacob's voice! Quite generally, the failure of estimates based on the Uncertainty Principle could serve as a signature for non-standard values of heff: a too-large energy scale for an effect as compared to its length scale.
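The 512 above is just p-adic scaling arithmetic; a minimal sketch, assuming (as the text does) that mass scales go like 2^(-k/2) and length scales like 2^(k/2) for p ≈ 2^k:

```python
# Sketch of the p-adic scaling arithmetic assumed in the text:
# for p ~ 2^k the mass scale goes like 2^(-k/2), the length scale like 2^(k/2).
def mass_scale_ratio(k_new, k_old):
    """Mass scale ratio when moving from Mersenne prime index k_old to k_new."""
    return 2 ** ((k_old - k_new) / 2)

# M89 hadron physics vs ordinary M107 hadron physics:
print(mass_scale_ratio(89, 107))  # 512.0, the factor quoted in the text
```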

To sum up, the strange findings about heavy ion and proton-proton collisions at LHC, for which I suggested M89 physics as an explanation, would indeed make sense, and one also ends up with a concrete mechanism for the emergence of dark variants of weak physics. The magnetic flux tubes playing a key role in TGD inspired quantum biology would also carry electric fields non-orthogonal to the magnetic fields, and the two fields would be twisted. As a matter of fact, the observed strong parity breaking would be very analogous to that observed in biology, if one accepts the TGD based explanation of chiral selection in living matter.

4. Could this relate to non-observed SUSY somehow?

Dark matter and spartners have something in common: it is very difficult to observe them! I cannot resist typing a fleeting crazy idea, which I have managed to fend off several times but which keeps popping up from the murky depths of the subconscious to tease me. TGD also predicts SUSY, albeit different from the standard one: for instance, separate conservation of lepton and baryon numbers is predicted, and fermions are not Majorana fermions. Whether the covariantly constant right-handed neutrino mode, which carries no quantum numbers except spin, could be seen as a Majorana lepton is an open question.

One can however assume that the covariantly constant right-handed neutrino, call it νR, and its antineutrino span an N=2 SUSY representation. Particles would appear as SUSY 4-plets: particle, particle + νR, particle + anti-νR, particle + νR + anti-νR. Covariantly constant right-handed neutrinos and antineutrinos would generate the least broken sub-SUSY. Sparticles should obey the same mass formula as particles but with a possibly different p-adic mass scale.

But how can the mass scales of a particle and its spartners be so different if the right-handed neutrino does not have any weak interactions? Could it be that sparticles have the same p-adic mass scale as particles but are dark, having heff = n×h, so that the observation of a sparticle would mean observation of dark matter!?;-). A particle cannot of course transform to its spartner directly: already angular momentum conservation prevents this. For N=2 SUSY one can however consider the transformation of a particle to the state particle + X, where X is νR + anti-νR, representing a dark variant of the particle with the same quantum numbers. It would have a non-standard value heff = n×h of Planck constant. The resulting dark particles could interact and generate also the other states in the dark SUSY 4-plet. Dark photons could be spartners of photons and decay to biophotons. SUSY would be essential for living matter!

A critical reader asks whether leptopions could actually be pairs of (possibly color excited) N=2 SUSY partners of selectron and spositron. The masses of the (color) excitations making up the electropion must indeed be identical with the electron and positron masses. Should one give up the assumption that color octet excitations of leptons are in question? But if the color force is not present, what would bind the spartners together to form the electropion? Coulomb attraction, so that a dark SUSY analog of positronium would be in question? But why not positronium itself? If the spartner of the electron is color excited, one can argue that its mass need not be the same as that of the electron and could be of order the CP2 mass! The answer comes out only by calculating, and I am too old to start this business again;-). But what happens to the leptohadron model if a color excitation is not in question? Nothing dramatic: the mathematical structure of the leptohadron model is not affected, since the calculations involve only the assumption that the electropion couples to the electromagnetic "instanton" term fixed by anomaly considerations.

If this makes sense, the answers to four questions would have a lot in common: What is behind chiral selection in biology? What is dark matter? What are spartners, and why are they seemingly not observed? What is behind the various forgotten axion/pion-like states?

For the new physics predicted by TGD see the chapter "New Particle Physics Predicted by TGD: Part I" of "TGD and p-Adic numbers".

Wednesday, October 29, 2014

Geometric theory of harmony

Some time ago I introduced the notion of Hamiltonian cycle as a mathematical model for musical harmony and also proposed a connection with biology; the motivations came from two observations (see this). The number of icosahedral vertices is 12, the number of notes in the 12-note system, and the number of triangular faces of the icosahedron is 20, the number of amino-acids and the number of basic chords for the proposed notion of harmony. This led to a group theoretical model of the genetic code and to the replacement of the icosahedron with the tetraicosahedron, both to explain the 21st and 22nd amino-acids and to solve a problem of the simplest model due to the fact that the required Hamilton's cycle does not exist.

This led also to the notion of bioharmony. This article is a continuation of the mentioned article, providing a proposal for a theory of harmony and detailed calculations.

  1. 3-adicity and also 2-adicity are essential concepts allowing one to understand the basic facts about harmony. The notion of harmony at the level of chords is suggested to reduce to the notion of closeness in the 3-adic metric, using as the distance between notes the minimal number of quints needed to connect them along the Hamilton's cycle. In the ideal case, harmonic progressions correspond to paths connecting vertex or edge neighbors of the triangular faces of the icosahedron.

  2. An extension of icosahedral harmony to tetraicosahedral harmony was proposed, solving some issues of icosahedral harmony relying on the quint, identified as rational frequency scaling by the factor 3/2.

  3. The idea is that the rules of bioharmony are realized on amino-acid sequences, interpreted as sequences of basic 3-chords: this leads to highly non-trivial and testable predictions about amino-acid sequences.
If one can find the various icosahedral Hamilton's cycles, one can immediately deduce the corresponding harmonies. This would require a computer program and a considerable amount of analysis. My luck was that all this has already been done. One can find material about icosahedral Hamilton's cycles on the web, in particular the list of all 1024 Hamilton's cycles with one edge fixed (the choice of the fixed edge has no relevance since only the shape of the cycle matters). If one identifies cycles with opposite internal orientations, there are only 512 cycles. If the cycle is identified as a representation of the quint cycle giving a representation of the 12-note scale, one cannot make this identification, since a quint is mapped to a fourth when the orientation is reversed. The earlier article about icosahedral Hamiltonian cycles as representations of different notions of harmony is helpful.

The tables listing the 20 3-chords associated with a given Hamilton's cycle make it possible for anyone with the needed computer facilities and a music generator to test whether the proposed rules produce aesthetically appealing harmonies for the icosahedral Hamiltonian cycles.

For details see the chapter Quantum model of hearing of "TGD and EEG" or the article Geometric theory of harmony.

Sunday, October 26, 2014

p-Adic length scales hypothesis for twin primes

A theorem of Yitang Zhang about the distribution of distances between primes (see this) has caught the attention of the media. The theorem states that there exists a number of order 7×10^7 such that the number of prime pairs with mutual distance smaller than this number is infinite. The naive expectation is that the minimum distance between prime neighbors increases and becomes infinite in the limit, as the average distance does.

In TGD framework the result is not so counter-intuitive.

  1. The p-adic length scale hypothesis implies that elementary particles correspond to p-adic primes characterizing the effective p-adic topology assignable to them. If cognition is accepted to be part of the world order, then one must generalize physics by gluing together real and various p-adic physics, and p-adic space-time sheets are correlates for cognition. The basic hypothesis emerging from the comparison of the results of p-adic mass calculations with experimental numbers is that physically preferred p-adic primes correspond to primes p ≈ 2^k, k a positive integer, with prime values of k and especially Mersenne primes as preferred values. Also Gaussian Mersennes, assignable to Gaussian (complex) integers, appear and define important p-adic length scales relevant to living matter (scales between cell membrane thickness and cell nucleus size). One can also consider twin pairs for k, and these will indeed be discussed below.

  2. If one accepts the idea that elementary particles correspond to primes, one can turn the wheel around and ask whether physics might help in understanding the distribution of primes. Elementary particles form bound states. These bound states would correspond to twin pairs or larger clusters of primes near to each other. If this is the case, then one can make number theoretical conjectures based on approximate scale invariance. Twin prime pairs are analogous to bound states of particles and have some minimal size, and the number of pairs with this size is infinite, just like the number of twin primes. The same applies to bound states of arbitrarily many twin primes. Besides twin primes there are prime triplets, etc., and also they form bound states. The same conjecture applies to the number of them having minimal size.
As such the idea that twin primes might be important in physics does not look interesting. One can however consider the p-adic length scale hypothesis restricted so that one considers only primes p ≈ 2^k, k prime (another such restriction considered earlier is that the second scale of the twin scale pair corresponds to a Gaussian Mersenne). What does one obtain if one considers only pairs of twin primes (k, k+2)? The twins form pairs of p-adic length scales L(k) and L(k+2) differing by a factor of 2, and already during the first years of p-adic physics I noticed that this kind of pairs might be relevant in biomatter and also in particle physics. Twins appear as basic scale pairs also in the solar system, as the following considerations show. A reasonable conjecture is that twin pairs define an important set of fundamental length scales allowing one to get a glimpse of the entire physics from the CP2 scale up to the longest cosmological scales.
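The twin prime pairs (k, k+2) and the midpoints n = k+1 discussed here are easy to generate; a minimal sketch:

```python
# Generate the twin primes and the midpoints n (twin pair = (n-1, n+1)).
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)  # simple Eratosthenes sieve
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def twin_pairs(limit):
    ps = primes_up_to(limit)
    return [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]

print(twin_pairs(50))
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]
print([p + 1 for p, _ in twin_pairs(50)])  # midpoints: [4, 6, 12, 18, 30, 42]
```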

First some background about twins and p-adic length scales hypothesis.

  1. From Wolfram MathWorld one finds the following list of twin pairs. The first few twin primes are of the form n ± 1 for n = 4, 6, 12, 18, 30, 42, 60, 72, 102, 108, 138, 150, 180, 192, 198, 228, 240, 270, 282, .... Explicitly, these are (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), ... . All twin primes except (3, 5) are of the form 6n ± 1.

  2. From this one can calculate the p-adic length scales L(k), k = n ± 1, and the mass scales assuming p ≈ 2^k. It is convenient to use the electron's Compton length scaled to the scale involved rather than the p-adic length scales proper (as a matter of fact, I confused these two scales for a long time). By L_p = L(k) ∝ p^(1/2) ≈ 2^(k/2) one has Le(k) = 2^((k-151)/2) Le(151), where the scaled up electron Compton length Le(151) = 2^((151-127)/2) Le(127) = 2^12 Le(127) can also be written as Le(151) ≈ L(151)/(5+x)^(1/2), where x < 1 (most naturally very near to zero) is a parameter left free by the uncertainties of the model behind p-adic mass calculations.

    The reference scale can be taken as Le(k=151) ≈ 10 nm, the cell membrane thickness. Note that this suggests that Cooper pairs of electrons, and perhaps even their light and dark variants, are important. Macroscopic quantum phases of dark electrons are highly suggestive. The Gaussian integer (1+i)^kG - 1 for kG = 151 defines a Gaussian Mersenne, as do kG = 157, 163, 167. Such a large number of Gaussian Mersennes in so small an integer interval can be regarded as a number theoretical miracle and must relate to the very special physics of living matter in these scales. The natural guess is that both p-adically scaled up and dark variants of strong interaction physics, weak physics, and electromagnetism appear in these scales, varying up to the size of the cell nucleus.

  3. p-Adic mass scales are obtained from the electron mass me = .5 MeV by the scaling m(k) = 2^((127-k)/2) × me/(5+x)^(1/2). It seems physically more natural to use as the mass scale the mass me(k) = 2^((127-k)/2) × me which the electron would have if it were characterized by k.
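The scalings above fit in a few lines of code; a sketch using the text's reference values Le(151) ≈ 10 nm and me ≈ .5 MeV, with the parameter x set to zero, so the numbers are the text's approximations:

```python
# The text's p-adic scalings in code, with x = 0:
# Le(151) ~ 10 nm is the reference length, me ~ .5 MeV the reference mass.
LE151_NM = 10.0
ME_MEV = 0.5

def Le_nm(k):
    """Scaled electron Compton length Le(k) = 2^((k-151)/2) * Le(151), in nm."""
    return 2 ** ((k - 151) / 2) * LE151_NM

def me_MeV(k):
    """Electron mass scale me(k) = 2^((127-k)/2) * me, in MeV."""
    return 2 ** ((127 - k) / 2) * ME_MEV

print(me_MeV(107))           # 512.0 MeV (hadronic mass scale)
print(me_MeV(101))           # 4096.0 MeV, i.e. ~4 GeV (b quark scale)
print(Le_nm(149))            # 5.0 nm (lipid layer of cell membrane)
print(round(Le_nm(137), 3))  # 0.078 nm, i.e. ~.79 Angstrom (atomic scale)
```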
Consider first the twin length scales assignable to elementary particle and atomic physics.
  1. n=102 defines the pair (101, 103). I have earlier assigned these p-adic primes to the b and c quarks. One has me(101) = 4 GeV and me(103) = 2 GeV.

  2. n=108 defines the pair (107, 109). k=107 is assigned with hadronic space-time sheets and with the proton. The mass scales are me(107) = 512 MeV and me(109) = 256 MeV. k=109 could be assigned with the deuteron. An interesting question is whether this twin pair is relevant also for the understanding of QCD. The current masses of the u and d quarks (with the mass assignable to the magnetic body not included) are rather light - in the range 5-20 MeV. This means that their Compton lengths are much longer than that of the hadron itself! The interpretation is that they are associated with the magnetic body of the hadron, say the proton. This could explain the anomalous finding that the charge radius of the proton seems to be slightly larger than it should be.

  3. n=138 gives rise to the pair (137, 139). The pair defines atomic length scales possibly highly relevant for condensed matter and molecular physics. The length scale pair is (Le(137) = .79 Angstrom, Le(139) = 1.57 Angstrom). New physics in these length scales is strongly suggestive if the p-adic length scale hypothesis is accepted.

The next group of twin length scales can be assigned to living matter.
  1. n=150 gives rise to the pair (149, 151) corresponding to the thicknesses of the lipid layer of the cell membrane and of the cell membrane itself. The scales are Le(149) = 5 nm and Le(151) = 10 nm. Especially the scale of 10 nm appears very often in bio-systems as a basic scale: for instance, the thickness of the DNA coil is of this order of magnitude.

  2. n=180 defines the pair (179, 181), differing by a scaling 2^15 ≈ 3.3×10^4 from the pair defined by the cell membrane. The scales are Le(179) ≈ .16 mm and Le(181) ≈ .33 mm and might relate to binary columnar structures appearing in the cortex (orientation and ocular dominance columns). The thickness of the cortex is of order 2-3 mm.

  3. n=192 defines the pair (191, 193), related by a scaling 2^21 ≈ 2×10^6 to the lipid layer - cell membrane pair. The scales are Le(191) = 1 cm and Le(193) = 2 cm.

The last two scales in the length scale range considered could relate to the physics of solar system.
  1. n=270 defines the pair (269, 271), related by a scaling 2^60 ≈ 10^18 to the cell membrane pair. One has Le(269) = 5 Mkm and Le(271) = 10 Mkm. These scales are larger than the size scale .7 Mkm defined by the solar radius but smaller than the astronomical unit AU defining the size scales of the planetary orbital radii. In the model of the solar system based on the notion of gravitational Planck constant, Earth corresponds to the n=5 Bohr orbit, with radii proportional to n^2. The radius of the n=1 orbit would correspond to the length scale AU/5^2 ≈ 5.98 Mkm, rather near to Le(269), so that the size scale of a gravitational atom would be in question. The pair of atomic length scales (137, 139) would be replaced with its gravitational variant (269, 271). The interpretation proposed earlier is that the invisible gravitational orbitals correspond to dark matter around which no visible matter has condensed (yet).

  2. n=282 defines the pair (281, 283), related by a scaling 2^6 = 64 to the previous pair. The scales are Le(281) ≈ 370 Mkm and Le(283) ≈ 740 Mkm. The radius of the orbit of Earth defines the astronomical unit AU = 149.60 Mkm, and again the order of magnitude is the same.
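The Bohr orbit comparison is a one-line check; a sketch (the constants and rounding are mine):

```python
# Numerical check of the gravitational Bohr orbit comparison in the text.
AU_MKM = 149.6  # astronomical unit in Mkm (millions of km)

def Le_Mkm(k):
    """Le(k) in Mkm, scaled from the reference Le(151) = 10 nm."""
    return 2 ** ((k - 151) / 2) * 10e-9 / 1e9  # 10 nm in meters, then m -> Mkm

print(round(Le_Mkm(269), 2))      # ~5.76 Mkm
print(round(AU_MKM / 5 ** 2, 2))  # 5.98 Mkm: n=1 orbit radius if Earth is n=5
```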

It is remarkable that electron Compton scales in various p-adic length scales are in question, which suggests that dark electrons are involved in an essential manner. In my opinion these findings are so remarkable that the p-adic length scale hypothesis deserves serious consideration, especially since it also leads to excellent predictions for elementary particle masses.

There is also an exceptional twin pair not considered above: the one formed by the primes 2 and 3. Recently I have been developing a model of music harmony based, among other things, on the interplay of these primes. Octaves in music correspond to powers of 2, and under octave equivalence the notes of the 12-note scale constructed by using the quint rotation correspond to powers of 3/2, or equivalently of 3. One has both 2-adic and 3-adic aspects. The natural distance between notes corresponds to the number of quints relating the notes in the quint rotation. This hypothesis allows one to understand basic facts about harmony. Some time ago I commented on an article that I received from Jose Diez Faixat (see this) claiming what in the TGD framework translates to the statement that 3-adicity is realized at the level of biological time scales. This conforms with the TGD based icosahedral model for biosystems suggesting a close relationship between the twelve note scale and amino-acids, which also leads to a proposal for an icosahedral realization of the genetic code (see this).
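The quint construction is easy to make concrete: repeatedly scaling by 3/2 and reducing into one octave produces 12 distinct pitch classes, and the mismatch after 12 quints versus 7 octaves is the familiar Pythagorean comma. A sketch:

```python
# The 12-note scale from quints: scale by 3/2 repeatedly and reduce each
# frequency ratio into one octave [1, 2).
def quint_scale(n=12):
    notes, f = [], 1.0
    for _ in range(n):
        notes.append(f)
        f *= 1.5          # one quint: frequency ratio 3/2
        while f >= 2.0:
            f /= 2.0      # octave equivalence
    return sorted(notes)

scale = quint_scale()
comma = 1.5 ** 12 / 2 ** 7  # 12 quints vs 7 octaves
print(len(scale))           # 12 pitch classes
print(round(comma, 4))      # 1.0136, the Pythagorean comma
```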

Thursday, October 16, 2014

Has dark matter particle been observed?

Lubos tells about a claim that axions coming from the Sun and transformed in the Earth's magnetic field to X rays in the keV range have been observed. The mass of the axion - to be distinguished from its much higher energy - would be a few micro-eV, which is rather small and corresponds to a p-adic length scale L(k=207) of about .5 meters. The next prime k=211 would give a p-adic scale of 2 meters.
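As a rough consistency check (my assumption: reading the length scale as the reduced Compton length ħc/E of the axion), sub-μeV to few-μeV masses indeed pair with sub-meter length scales:

```python
# Mass <-> length via the reduced Compton wavelength hbar*c / E.
HBAR_C_EV_M = 1.9733e-7  # hbar*c in eV*m

def reduced_compton_m(mass_eV):
    """Reduced Compton wavelength in meters for a mass given in eV."""
    return HBAR_C_EV_M / mass_eV

print(round(reduced_compton_m(0.4e-6), 2))  # ~0.49 m for a 0.4 micro-eV mass
print(round(reduced_compton_m(4e-6), 3))    # ~0.049 m for a 4 micro-eV mass
```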

This brings to mind the TGD explanation (see this) of an old anomaly - the production of electron positron pairs in collisions of heavy nuclei just above the Coulomb wall - in terms of the production of axion-like states in the strong non-orthogonal electric and magnetic fields of the colliding nuclei, giving rise to a non-vanishing "instanton density" E•B which generates coherent states of electropions, which are bound states of color octet excitations of electrons and positrons. Electropions would decay to electron positron pairs and gamma pairs. Later also other leptopion candidates have been detected. Unfortunately they do not represent dark matter in the sense of the prevailing dogmatics, so that they have been forgotten.

Color excited electrons should not be produced in the decays of weak bosons, so that they must be either very heavy as free particles or dark in the TGD sense, thus having a non-standard value of Planck constant given as a multiple heff = n×h. The same "instanton" coupling would cause the transformation of solar axions to X rays, and the change of the orientation of the Earth's magnetic field with respect to the Sun would imply the characteristic variation allowing one to detect the small signal from a large background.

The p-adic length scale hypothesis predicts the possibility of several copies of QCD like and weak physics. This copy could correspond to the prime p ≈ 2^211, and color excitations of the corresponding electrons could be in question. The mass of the scaled up electropion would be about 4 micro-eV. Also scaled down quarks, and thus scaled down analogs of the ordinary pion, can be considered.

Perhaps the simplest production mechanism would be the time reversal of the mechanism assumed to produce the observed anomalous X rays. X rays from the Sun could transform to axions in the magnetic fields of the sunspots. This would predict a correlation with sunspot activity.

There have been also earlier reports about axion candidates at rather low masses, and the p-adic length scale hierarchy and dark matter hierarchy realized in terms of heff would provide a generic explanation. In biology hierarchies of both QCD type and weak physics could play a central role. Big Science is often experienced as a reductionistic invasion proceeding towards shorter and shorter length scales, and this view still dominates physics although the construction of new accelerators is becoming more and more expensive. Big things have big inertia, and it takes a long time for Big Science to be able to turn its direction and return to the regions which it has considered to be already conquered.

Wednesday, October 15, 2014

Easiest and cheapest manner to kill TGD

One of the cheapest and fastest manners to end up with the claim that TGD is a crackpot theory is to notice that H = M4×CP2 does not allow the imbedding of an arbitrary solution of Einstein's equations. The dimension of the Minkowskian space allowing this is counted in hundreds. This requires however the failure to realize that the whole point of TGD is just the fact that imbedding really matters! With a sufficient amount of academic arrogance one can however remain unaware of this basic fact and even argue that Einstein's theory is the final word in this respect. The basic academic gift needed is listening without listening and reading without reading - as Wheeler might have formulated it.

In the belief system of General Relativity the imbedding space of course has absolutely no relevance. In the TGD framework it is of utmost significance. This is why TGD is revolutionary! The imbedding space defines the "shape" of the space-time surface as "seen" by an 8-D observer, its symmetries define the quantum number spectrum, its spinor connection defines weak gauge fields classically, it brings in many-sheeted space-time, etc. The surface property extends the theory of gravitation to a unified theory of the known interactions. The irony is that one gets also the non-gravitational interactions by reducing the number of local field degrees of freedom (to four!) rather than introducing a lot of new ones!

Imbeddability in 8-D space-time implies that the space-time dynamics is extremely constrained as compared to that in GRT. This has of course been a basic worry shadowing the amazing predictions, such as the understanding of standard model symmetries and quantum numbers and the solution of basic problems of cosmology (why the mass density is below critical, the absence of horizons, ...). The ultimate solution of the problem was that the transition to GRT space-time involves, in long length scales, the replacement of many-sheeted - this is the key word - space-time regions with regions of Minkowski space whose metric is the sum of the M4 metric and the deviations of the metrics of the sheets from the M4 metric. A similar description applies to gauge potentials and gives the standard model gauge fields. In simple situations one can assume that this effective GRT space-time is representable as a 4-surface in H, and I do this with successful predictions in cosmology.

The imbedding of the 2-D string world sheet of course mattered also in the original string model. Unfortunately our space-time is not 2-dimensional, but a trick called spontaneous compactification was invented, and the imbedding space as something given and possibly fixed by the consistency and existence of the theory was replaced with a dynamical one. This was a desperate attempt to obtain 4-D space-time at least in a certain approximation.

The correct manner to proceed would have been to answer the question of how to generalize the string model by replacing string world sheets with 4-D surfaces. Even worse, for "technical reasons" also the idea of the string world sheet as a sub-manifold with an induced metric was given up, and a dynamical metric on it was introduced (there was also the failure to realize that also the spinor connection and spinor structure could be induced: this would have immediately led to the geometrization of the known interactions and quantum numbers).

As a result, one obtained two gravities: stringy 2-D gravity and 10-D gravity, to become later 11-D gravity in the M-theory context. All this from an attempt to quantize 4-D gravity! I must say that I felt deep shame and pity, since I could only passively watch as the catastrophe took place, without being able to help. This was like seeing helpless victims of a traffic accident suffer without being able to do anything except wait for the ambulance.

Now a common realization of the horrible mistake has emerged. For reasons easy to understand, this is not a reason for hyping, and the community has wisely chosen to be silent. There are however still some very loud and very aggressive advocates of the dead science program, who obviously have failed to realize what went wrong (rather amazing, but possible if one has a "true fan" attitude).

Perhaps not surprisingly, these individuals are not doing active research on strings themselves (one of the advocates sees everyone disagreeing with him as an evil communist and explains that the reason is that doing research without salary would be communism!). They are trying desperately to gain some scientific respectability using, besides ultra-aggressive rhetoric, a classical trick: make a lot of noise about pseudoscience, since this gives you unearned scientific status. Cold fusion, water memory, and anything challenging textbook wisdom are excellent targets in this respect. In Finland, people for some reason calling themselves skeptics - most of them actually academic dropouts desperately wanting to appear in the role of the wise scientist - are using the same simple trick.

The recipe is simple and conforms with the ultraconservative attitudes of these people, explaining also why they are not able to do real research, which always requires challenging one's own beliefs. Direct the negative emotions against a researcher who has found something which does not conform with textbook wisdom. The emotional brain begins to dominate and logical thinking ceases. You indeed get the praise of those who do not have the needed background to decide for themselves and are too naive to realize that they are being cheated.

Rossi's E-cat, used by Lubos for propagandistic purposes, is a perfect target in this respect. I am just wondering how Lubos can see Rossi's E-cat as a failure because it has not been commercialized in 4 years, and at the same time sees no reason to worry about the hot fusion program, which has lasted at least 60 years and whose outcome is only a bet from Lockheed Martin that fusion plants will be here around 2024. Needless to say, fusion plants were "a decade away" already in the seventies and probably before that!;-)


Saturday, October 11, 2014

Rossi's E-cat spoils again the day of standard physicist!

The battle about the reality of cold fusion is approaching its unavoidable end. Third-party researchers have now verified cold fusion in a version of Andrea Rossi's E-cat (see this).

Tommaso Dorigo comments on the latest findings and admits that the methodology looks very sound. Tommaso is not able to find any explanation other than fraud for the unpleasant fact that cold fusion occurs, contrary to what textbooks tell us.

Lubos, who is ready to accept that we live on some brane in the landscape of 11-dimensional M-theory without being able to predict anything or test this belief in any manner, has attacked vigorously against new nuclear physics, using instead of facts all his rhetorical skills to convince the readers that the people involved with cold fusion are swindlers, idiots, etc... - only "communist" and "criminal" seem to be missing from the list.

I have commented on cold fusion several times in my blog, discussing also the TGD based views about cold fusion involving dark phases of matter realized as phases with a non-standard value of Planck constant. I have also developed alternative variants of the model in my books (see this and this).

The most recent model (see the article) starts from the model of Widom and Larsen, reported to explain the observed isotope ratios and assuming that the basic step of the reaction is the transformation of a proton to a neutron by absorption of an orbital electron. Contrary to naive intuitions, weak interactions would be involved in an essential manner in the process! In my opinion this is however one of the weak points of the model at the quantitative level, although I believe that weak interactions are involved. A second weak point is the assumption that the neutron approaches the target with extremely low velocity, making the interaction cross section very large (it is proportional to the inverse of the relative velocity). Here the new dark physics predicted by TGD would come to the rescue: a dark variant of weak interaction physics, with the weak boson Compton scale scaled up to atomic length scale, would make the impossible possible. Below the scaled-up scale weak bosons would behave like massless particles and weak interactions would be as strong as electromagnetic ones. Also the de Broglie wavelength of the neutron would be scaled up, since it is proportional to heff: this is equivalent to having a very low relative velocity.
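The equivalence stated at the end can be checked with elementary arithmetic. The following sketch (all numerical values are illustrative assumptions, not from the source) shows that scaling the de Broglie wavelength λ = heff/(mv) by n = heff/h gives the same wavelength as an n-fold reduction of the relative velocity at ordinary h:

```python
# Sketch: dark de Broglie wavelength lambda = h_eff/(m*v) with
# h_eff = n*h equals the ordinary wavelength evaluated at velocity v/n.
h = 6.626e-34      # J*s, Planck constant
m_n = 1.675e-27    # kg, neutron mass
v = 1.0e3          # m/s, assumed relative velocity
n = 1.0e7          # assumed value of h_eff/h

def de_broglie(m, v, n=1):
    return n * h / (m * v)   # h_eff = n*h

lam_ordinary = de_broglie(m_n, v)        # ~4e-10 m
lam_dark = de_broglie(m_n, v, n=n)       # scaled up by n
lam_slow = de_broglie(m_n, v / n)        # ordinary h, velocity v/n
print(lam_ordinary, lam_dark, lam_slow)  # last two coincide
```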

My sincere hope is that hard empirical facts could eventually force the mainstream colleagues to admit that something in our standard view about physics is badly wrong, as the failed attempts to understand dark matter and dark energy have also repeatedly demonstrated. I do not however expect any fast changes. The natural dynamical time scale for the psychology of face saving is of the order of a human lifetime!

Addition: In his blog posting about the latest developments in cold fusion, Lubos demonstrates that he has become a parody of himself, a kind of living joke. Lubos talks about the scientific method and would certainly agree that objectivity and a non-emotional attitude are among its basic signatures. Lubos is however exploding with anger, and his arguments rely on personal insults instead of contents. The basic tragedy and paradox of Lubos is that he always reacts instead of acting. Instead of realizing that all ideologies are dangerous because they do not allow freedom of thought, he sees communism as the Devil. The irony is that he behaves just like an ideal communist, respecting authorities more than anything else and experiencing those who think differently as enemies. Lubos even sees a conspiracy against quantum mechanics and blackholes! Lubos has become a true fan of the dying M-theory and - in amusing contrast - an extremely conservative believer in the most hardnosed reductionism. Hence for Lubos the physics below TeV energies is fully done, and anyone who considers that cold fusion might be something real or that biology might involve some interesting new physics is a swindler, criminal, crackpot, communist,.... - there is a long menu of options and you can choose whatever you like!

I want to make absolutely clear that I am not a fan of cold fusion. My attitude is different because TGD predicts new physics in all scales and allows one to consider models for how cold fusion would happen. TGD is a genuine theory rather than some random model cooked up to explain some anomaly or unexpected observation (say dark matter or dark energy, or why supersymmetry is not seen at LHC). If I still lived in the old ontology, I would certainly be tempted to accept conspiracy theories as an easy way out of cognitive dissonance. Cognitive dissonance is what a scientist should be able to tolerate, and here lies the problem of Lubos and many others.

Wednesday, October 08, 2014

What does EEG sound like?

The TGD view about living matter and brain suggests a possible connection between dark photons and sound. Dark cyclotron photons are central in the TGD inspired model of living matter (see this). I have asked whether also the notion of dark sound could make sense: dark sounds would correspond to oscillations of dark matter and magnetic flux tubes.

A rather stringent variant of the dark photon idea is that dark cyclotron photons have a universal cyclotron energy spectrum (see this). This follows from the proportionality of heff to the mass of the charged particle, which in turn follows from the condition heff = hgr = GMm/v0 (v0 has dimensions of velocity), with the interpretation that the electromagnetic Planck constant equals the gravitational Planck constant so that the same flux tubes can mediate gravitational and electromagnetic interactions (dark photons and gravitons). Note that both gravitational and em Compton lengths would be independent of the mass of the particle: something essential for macroscopic quantum coherence of different particles. Some experimental input comes from taking seriously the claim that the slightly too large mass of the electron Cooper pair observed in measurements involving a magnetic field (see this) is due to a gigantic gravimagnetic London moment proportional to h2, resulting from the replacement h→ heff. Dark cyclotron photons would have energies in the biophoton range (visible and UV energies corresponding to the energy range for excitations of biomolecules) and would thus have biological effects, making them optimal for biocontrol by the magnetic body and for communication of sensory input to the magnetic body.
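The mass-independence of the cyclotron energy claimed above can be verified numerically. In the sketch below all parameter values (Earth mass as the source of hgr, the value of v0, the 0.2 gauss field strength) are assumptions chosen for illustration:

```python
# Hedged check: with h_eff = h_gr = G*M*m/v0, the cyclotron energy
# E = h_gr * f_c, f_c = q*B/(2*pi*m), is independent of the mass m.
import math

G = 6.674e-11    # m^3 kg^-1 s^-2, Newton's constant
M = 5.972e24     # kg, Earth mass taken as the source of h_gr
v0 = 1.0e5       # m/s, assumed value of the velocity parameter
q = 1.602e-19    # C, unit charge
B = 2.0e-5       # T, assumed "endogenous" 0.2 gauss field

def cyclotron_energy(m):
    h_gr = G * M * m / v0             # gravitational Planck constant
    f_c = q * B / (2 * math.pi * m)   # ordinary cyclotron frequency
    return h_gr * f_c                 # m cancels: G*M*q*B/(2*pi*v0)

E_e = cyclotron_energy(9.109e-31)   # electron
E_p = cyclotron_energy(1.673e-27)   # proton
print(E_e, E_p)                     # equal despite the ~1836x mass ratio
```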

The piezoelectric effect suggests a transformation of dark photons to ordinary sound waves with low frequencies (see this). A bunch of ordinary phonons (heff= n×h) with low energy could be created from a single dark photon. Could this transformation be involved in the transformation of visual percepts to auditory percepts in the association areas of the brain, and in the production of virtual sensory input as feedback from the auditory areas to the ears, using dark photons from the brain or magnetic body propagating through the brain to the ear and transformed there to bunches of phonons with the same low frequency?
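A back-of-envelope estimate of the bunch size follows from energy conservation: a dark photon of energy E at frequency f corresponds to n = E/(hf) ordinary quanta. The biophoton-range energy and the EEG-range frequency below are assumed example values:

```python
# Sketch: number of ordinary phonons obtainable from one dark photon,
# assuming E = h_eff * f with a biophoton-range energy and low frequency.
h_eV = 4.136e-15     # eV*s, Planck constant
E_dark = 2.0         # eV, assumed biophoton-range energy
f = 10.0             # Hz, assumed low (EEG-range) frequency

n = E_dark / (h_eV * f)    # implied h_eff/h, ~5e13
E_phonon = h_eV * f        # energy of one ordinary phonon
print(n, n * E_phonon)     # bunch size; total energy equals E_dark
```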

Then comes the question. Has anyone tested what EEG sounds like!? Phase information is lost in the EEG power spectrum, so testing requires additional information - probably crucial for the information content - but one could try to get some idea about this. What is remarkable is that the part of EEG present during the wake-up state corresponds to sound frequencies above 10 Hz, and we also hear consciously frequencies above 10-20 Hz! The scientific headline of the century: EEG turned into sound sounds like internal speech!;-)

A little notice: Science Daily reports an experimental search by the XMASS collaboration excluding the possibility that super-weakly interacting particles constitute all dark matter. This is also a further blow against standard SUSY. Maybe the time is ripe for a serious reconsideration of the simplistic idea of dark matter as some exotic particle. And maybe the ideas about the nature of SUSY might need some updating.

Monday, October 06, 2014

More precise view about remote replication

Both Luc Montagnier and Peter Gariaev have found strong evidence for what might be called remote replication of DNA. I have developed a TGD inspired model for remote replication using the data from Peter Gariaev, who has developed the notion of wave DNA, supported also by Montagnier's findings.

Polymerase chain reaction (PCR) provides a manner to build copies of a piece of DNA serving as template. Once a single copy is produced, it serves as a template for a further copy, so that exponential amplification is achieved. Montagnier's and Gariaev's works suggest however that the synthesis of DNA could also occur without a real template DNA, as remote replication. According to the proposal of Gariaev, the DNA template would be remotely represented as what he calls wave DNA. Montagnier uses 7 Hz ELF radiation to obtain the effect, whereas Gariaev uses the scattering of laser light into a large interval of frequencies.

In the TGD approach the basic elements are: the magnetic body containing dark matter with large Planck constant; the associated cyclotron radiation, for which the energy scale is proportional to the effective Planck constant heff=n×h, whose large values imply the conjectured macroscopic quantum coherence of living matter; the dark analog of DNA represented as dark proton sequences at magnetic flux tubes accompanying ordinary DNA; and the reconnection of U-shaped magnetic flux tubes assignable to the magnetic bodies of bio-molecules, allowing them to recognize each other. The model has evolved from attempts to understand water memory and homeopathy in the TGD framework (see this).

Both 7 Hz ELF radiation and the scattering of laser light would generate a dark photon (large Planck constant) spectrum with a wide range of frequencies but the same energy, which in Gariaev's experiments would naturally be the energy of the scattered laser light. The dark photons would provide a representation for DNA codons. If the 7 Hz radiation involves dark photons with energies of visible photons transforming to ordinary photons before scattering from DNA, the outcome would be the same as in Gariaev's experiments.

The updated model involves the same elements as the model discussed earlier (see this), but there are also new elements due to developments in the model of dark DNA, allowing one to imagine a detailed mechanism for how water can represent DNA and how DNA could be transcribed to dark DNA. The transcription/association represents a rule, and rules are represented in terms of negentropic entanglement in the TGD framework, with pairs of states in superposition representing the instances of the rule. A transition energy serves as a characterizer of a molecule - say a DNA codon - and the entangled state is a superposition of pairs in which either the molecule is excited or the dark DNA codon is excited to a higher cyclotron state with the same energy: this requires tuning of the magnetic field and a sufficiently large value of heff at the flux tube. The negentropic entanglement is due to the exchange of dark photons: this corresponds to the wave DNA aspect. Dark cyclotron photons also generate the negatively charged exclusion zones (EZs) discovered by Pollack, and in this process part of the protons transform to dark ones residing at the magnetic flux tubes associated with the EZs and forming dark proton sequences.

For details see the chapter Quantum gravity, dark matter, and prebiotic evolution or the article More Precise View about Remote DNA Replication.

Saturday, October 04, 2014

Standard SUSY or M89 hadron physics?

MSSM (the minimal supersymmetric extension of the standard model) predicts two CP even Higgs states, one CP odd state, and a pair of charged Higgs states. There is evidence from several sources for a meson like state at a mass around 135 GeV, and Lubos Motl has sent several postings in which he wants to see this bump as a candidate for a CP even Higgs.

15 months ago Lubos told about a 2.7 sigma excess from CMS suggesting the existence of a meson like state at 136.6 GeV - the mass of the Higgs is 125 GeV. One year ago Lubos told about evidence for dilepton decays (e+e- and μ+μ- final states) of a state in the mass interval 130-140 GeV.

Lubos wants to interpret the possibly existing meson-like state as the Higgs like state predicted by SUSY: this would require the state to be CP even. I checked that the CMS collaboration says nothing about whether the possible state is CP even or odd. For CP=+1 the meson is an analog of the Higgs particle, and MSSM or a suitable generalization might explain it. For CP=-1 the meson is a pionlike state, and standard extensions of the standard model are in difficulties.

In the most recent posting Lubos tells about a proposed interpretation of the 135 GeV bump as a SUSY Higgs boson. MSSM does not however work and one must extend the model. Typically these models require a low stop quark mass (contrary to naive intuitions, the stop would be the lightest squark). If the mass of the stop is not near that of the top, the lower bound from ATLAS and CMS is around 700 GeV, but there is a little ray of hope for SUSY builders: the mass range near the top quark mass. Also this little hope is however steadily diminishing. Jester told this morning about a new ATLAS lower limit for the stop quark mass: it is around 190 GeV (to be compared with about 170 GeV for the top). I would not be surprised if a heavy stop destroyed also this model.

In the TGD framework the identification as the pion of M89 hadron physics, with a mass scale 512 times higher than that of ordinary hadron physics, indeed makes sense. I have talked a lot about M89 hadron physics. As a matter of fact, my first, wrong identification of the bump at 125 GeV, which turned out to be the Higgs, was as the M89 pion. Now it has become clear that TGD certainly predicts a Higgs like state, that all candidates for a Higgs vacuum expectation value have been short-lived, and that there is no need for them. The massivation of elementary particles takes place by p-adic thermodynamics, which is something totally new since it definitely leaves the quantum field theory framework, where only the mimicry of massivation is possible, not its real understanding. In cosmology the QFT description fails for the period before what the mainstream wants to call inflation, and for the counterpart of the inflationary period itself. In biology, where the many-sheetedness of space-time and the new view about classical fields are in a key role, QFT fails too.

There is obviously a serious conflict of interests! Lubos wants the possibly existing 135 GeV state to be a SUSY Higgs (CP even) and I want it to be the M89 pion (CP odd). If the particle really exists, this tiny CP bit could become the stone thrown by the tiny David that kills the big Goliath!

There are also two-year old observations of the Fermi telescope (see this) suggesting the presence of a monochromatic gamma ray line at 135 GeV. One explanation would be the decay of a 270 GeV pion-like state to two gammas. Why should the mass be twice the mass of the M89 pion? The p-adic length scale hypothesis allows mass scales coming as half octaves, and thus mass octaves of pion-like states; I proposed this years ago in the case of what I call tau-pions, which I used to explain the long-forgotten CDF anomaly (TGD allows also leptons to have colored excitations and thus to form what one can call leptohadrons). Another possibility is that two 135 GeV pion-like states very nearly at rest annihilate to two gamma rays.
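The numerology behind the M89 identification is simple enough to check by hand. The sketch below (a rough consistency check, not a derivation; the ~5 per cent tolerance is my own choice) scales the ordinary neutral pion mass by the p-adic factor 2^((107-89)/2) = 512 and applies octaves:

```python
# Sketch: p-adic scaling M107 -> M89 and mass octaves of the M89 pion.
m_pi = 0.135                      # GeV, ordinary neutral pion mass
scale = 2 ** ((107 - 89) // 2)    # = 2**9 = 512
m_pi89 = scale * m_pi             # ~69.1 GeV, basic M89 pion estimate
octave1 = 2 * m_pi89              # ~138 GeV, near the 135 GeV bump
octave2 = 4 * m_pi89              # ~276 GeV, near the 270 GeV state
print(m_pi89, octave1, octave2)
```

Both octaves land within a few per cent of the quoted 135 GeV and 270 GeV figures.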

One should also mention that the measured W+W- cross-section has been consistently ∼ 20 per cent higher than the theoretical prediction across both ATLAS and CMS for both the 7 and 8 TeV runs. The SUSY inspired explanation would be in terms of a light stop. In this model the W pair results from a stop pair: the decay chain begins with the decay stop → χ+/- b to a chargino and a b quark, followed by the decay χ+/- → χ0 W+/- of the chargino to a neutralino and a W boson, which in turn decays to a lepton or quark pair.

The best fit gives m(stop)=212 GeV, which is dangerously near the lower bound of 190 GeV. Maybe it is safer not to stop that ambulance, although the title of the preprint recommends this! The stop-χ+/- mass difference would be 7 GeV and the neutralino mass would be 150 GeV, so that the chargino decay to a neutralino plus W cannot produce on-mass-shell particles. If a chargino with a mass of 205 GeV and a neutralino with a mass of 150 GeV are on mass shell at rest, the W boson would get 55 GeV of energy: it would be virtual but can of course decay to a lepton pair or quark pair.

Could the presence of the M89 pion at 135 GeV help to understand the discrepancy in WW production? Could the decays u89 → bW of the M89 u quark and antiquark inside the M89 pion explain the enhanced W pair production? A professional might be able to answer the question immediately. π89 decays to a pair of virtual u89 quarks, each with mass one half of that of π89, that is 67.5 GeV - not far from the 55 GeV above. Hence u89 → bW decays produce a virtual W boson with energy not far from that in the supersymmetric model (the mass of the W boson is 80 GeV and the b quark mass is around 4 GeV). Maybe the model cannot be killed without more detailed calculations.
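The comparison of the two scenarios reduces to simple bookkeeping of the quoted masses (a sketch, not a real kinematics calculation):

```python
# Energy available for the W in the SUSY chain vs. the energy per
# virtual u89 quark in the M89 picture; both lie below the W mass,
# so the W is virtual in both cases. All numbers are from the text.
m_chargino = 205.0    # GeV (best-fit stop 212 GeV minus 7 GeV)
m_neutralino = 150.0  # GeV
m_W = 80.0            # GeV
m_pi89 = 135.0        # GeV, assumed M89 pion mass

E_W_susy = m_chargino - m_neutralino   # 55 GeV for the W
E_u89 = m_pi89 / 2                     # 67.5 GeV per virtual quark
print(E_W_susy, E_u89)
```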

Tuesday, September 30, 2014

New experimental information about chiral selection

Chirality selection is a fundamental problem in biology. It means large parity breaking, since only one of the two molecular handednesses appears in living matter. This is not the case in inanimate matter, so the distinction must somehow be fundamental for what it is to be living. The problem with large parity breaking is that although it is caused by weak interactions, it is an extremely small effect, and it is difficult to imagine how it could emerge. It is even more difficult to understand why it would emerge only for the in vivo variants of bio-molecules.

One promising idea is that the original parity asymmetry would not be biological but would have been transferred to biology. In 1967 the biochemist Frederic Vester and the environmental scientist Tilo Ulbricht proposed that some physical phenomenon could have changed the balance between left and right handed molecule concentrations during the earlier stages of evolution. Beta decays are certainly the first candidate to come to mind, since in beta decays the breaking of parity symmetry manifests itself as the appearance of only one helicity for electrons: the electrons from the beta decay of a nuclear neutron to a proton are polarized, with spin pointing in the direction opposite to the momentum of the electron. At high energies the electron is an eigenstate of the chirality operator and can be said to be left or right-handed. At low energies both chiralities are present and the spin projection in the direction of motion can have both signs. For instance, one could imagine that cosmic rays decaying in the atmosphere, producing nuclei and muons suffering beta decay, could produce the polarized electrons.

This asymmetry would manifest itself as slightly different decay rates for a molecule and its mirror image, induced by the absorption of the polarized electron. This small difference could however be exponentially amplified by reaction kinetics and lead to chirality selection by, say, cosmic rays. The process would have taken place long ago and led to the dominance of one molecular handedness.
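The exponential amplification can be illustrated with a toy calculation. The 0.6 per cent figure is borrowed from the bromocamphor result discussed below; treating it as a per-cycle survival advantage in an autocatalytic replication process is my own simplifying assumption:

```python
# Toy sketch: a 0.6% per-cycle rate asymmetry, compounded over
# replication cycles, grows exponentially toward complete dominance
# of one handedness.
import math

bias = 1.006                 # assumed survival advantage per cycle
target = 1000.0              # desired excess ratio of one handedness
cycles = math.log(target) / math.log(bias)
print(round(cycles))         # order of a thousand cycles suffices
```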

The challenge would be to find a chemical process achieving this. A strong constraint is that the effect should be present only for the in vivo variants of biomolecules. It is difficult to imagine how recent physics, as we find it in textbooks, could distinguish between in vivo and in vitro.

Attempts to identify the chemical process involved have not been successful hitherto. Now however Nature has published an interesting article about a possible chemical mechanism leading to chirality selection.

  1. Timothy Gay and Joan Dreiling, working at the University of Nebraska-Lincoln, irradiated the organic compound bromocamphor with low energy spin-polarized electrons and achieved a success. The rates of decays induced by the absorption of a polarized electron differ by about 0.6 per cent for the two possible polarizations. This is a large difference. Note that the chemical reaction itself is not expected to involve parity breaking: the parity breaking would be inherited from the electrons. There are however some problems involved.

  2. The asymmetry occurs only for electron energies below an electronvolt. Electrons from cosmic rays have much higher energies, but one could argue that there should exist a slowing-down mechanism for the electrons, or some other natural source of very low energy polarized electrons.

  3. A second problem is that at low energies one cannot anymore speak of a well-defined electron chirality, only of the spin component in the direction of momentum - actually any direction can be chosen as the quantization axis. Does the result mean that it is the sign of the spin projection rather than the handedness of the electron that matters? This would suggest that at the low energy limit mirror reflection leaves the electron invariant, and the parity invariance of physical chemistry would imply that there is no difference between the decay rates! To me this looks like a serious problem: maybe I have misunderstood something.
The TGD based view about dark matter suggests a scenario in which external parity breaking is not necessary. Parity breaking would be a feature of the chemical reaction itself and involve "new chemistry". This of course makes no sense in the chemistry supported by the standard model. The basic elements are dark matter, identified as a hierarchy of dark phases with effective Planck constant heff=n×h, and the notion of the magnetic body. The magnetic body would use the biological body as a sensory receptor and motor instrument, and EEG and its possible scaled variants would make the communication with and control of the biological body possible.
  1. In the dark phase weak bosons would be effectively massless below the scaled-up weak scale, and weak interactions would be as strong as the electromagnetic interaction. The scaling corresponds to Lw → (heff/h) × Lw, and the resulting dark weak scale can be even of cellular size. The otherwise extremely weak parity breaking effects would be large in the dark phase. The large parity breaking of weak interactions in the weak decays of dark variants of molecules would directly imply different decay rates for the dark magnetic/field bodies of left- and right-handed molecules.

  2. Parity breaking effects of weak interactions causing chiral selection would be large due to the presence of dark, effectively massless W bosons and Z bosons, whose interactions break parity. The description of parity breaking effects in nuclear physics (see this) suggests that one parity breaking effect would be induced by the interaction term s·∇VZ, where VZ is the scalar potential associated with the classical Z field.

  3. The additional bonus is that this mechanism would be possible only in vivo, since living matter would be made living by large heff phases identifiable as dark matter at magnetic flux tubes!

  4. Also the observations about bromocamphor could have an explanation in terms of dark matter at the flux tubes relevant for the observed decays. This would allow one to circumvent the problem that the electrons in the bromocamphor experiments have such a low energy that they do not have a well-defined chirality, so that the difference between the decay rates should be very small.
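The size of the scaling heff/h needed in item 1 can be estimated from the W-boson Compton length. The atomic and cellular target scales below are round-number assumptions for illustration:

```python
# Hedged estimate: ordinary weak Compton length L_w = hbar*c/(m_W*c^2)
# and the h_eff/h factors needed to stretch it to atomic and cellular
# scales, as the scaling L_w -> (h_eff/h)*L_w suggests.
hbar_c = 197.3e6 * 1e-15   # eV*m  (197.3 MeV*fm)
m_W = 80.4e9               # eV, W boson mass

L_w = hbar_c / m_W         # ~2.5e-18 m, ordinary weak scale
n_atomic = 1.0e-10 / L_w   # h_eff/h to reach ~1 angstrom, ~4e7
n_cell = 1.0e-5 / L_w      # h_eff/h to reach ~10 micrometers, ~4e12
print(L_w, n_atomic, n_cell)
```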

Another analogous idea is based on the polarisation of photons in the atmosphere, which in turn would induce a faster decay for one molecular chirality: see this. Here the polarisation would be due to the scattering of originally unpolarised photons in the atmosphere. This would not involve parity breaking, just Rayleigh scattering. The polarization is however linear rather than circular, and I must confess that I failed to understand how this could lead to a breaking of parity, which would correspond to the dominance of one circular polarization. If some reader understands this, I would be happy for an explanation.

Sunday, September 28, 2014

A new twist in the inflationary story

There is a highly interesting article about the possible impact of the BICEP2 results on inflation (see this). For my earlier comments on BICEP2 see this, this, and this.

The original belief after BICEP2 was that inflation finds strong support from the unexpectedly strong dependence of the CMB temperature on the polarization of CMB photons claimed by BICEP2. This would have been caused by primordial gravitational waves. Most bloggers were however silent about a serious problem: if the finding is real, the large polarization implies that the QFT approximation fails. The inflationary scenario is essentially QFT based, with the inflaton field playing the role of the Higgs field, such that the energy of the inflaton field decays to particles during the inflationary period before the radiation dominated phase.

As already mentioned, now another serious objection has emerged. Inflation predicts a spectrum of gravitational waves with amplitudes proportional to the mass density fluctuations: the longer the scale, the larger the amplitude should be. The spectrum claimed by BICEP2 shows however exactly the opposite behavior, as was found by the authors of the article.

The situation has turned around! The highest desire of inflation theorists is now that the BICEP2 results are wrong and due to dust or some other effect! This might well be the case. Paul Steinhardt - one of the developers of the inflationary model - has turned into a skeptic, and says that unfortunately this is not the only severe problem of inflation. For instance, a basic prediction is that we should observe a multiverse realized as space-time regions obeying different physical laws (different values of the basic parameters of, say, the standard model). We do not! I have been wondering myself why this wrong prediction does not seem to bother the multiverse enthusiasts. Is just this corner of the Universe, with its own physical laws, so large that we cannot observe the other regions? Strange indeed!

The BICEP2 finding forced me to develop the TGD counterpart of inflation. The physical interpretation in TGD is totally different from that of the inflationary model.

  1. The rapid expansion is a quantum critical period during which the curvature scalar of 3-space vanishes (no scales during criticality) and the induced 3-metric is flat. A phase transition from a phase dominated by a gas of cosmic strings in the future light-cone of Minkowski space (the imbedding space is M4×CP2) to a radiation dominated cosmology would be in question.
    In the string gas phase the GRT description does not apply. Lorentz invariance of the gas phase implies that the universe is isotropic and homogeneous, so that a multiverse is avoided. It is important to realize that Lorentz invariance is not present as a global symmetry in GRT based cosmology, and inflation was hoped to solve the resulting problems.

  2. The phase transition means the emergence of many-sheeted space-time. GRT space-time emerges as its simplification, obtained by lumping together the sheets of many-sheeted space-time to a region of Minkowski space provided with a naturally defined effective metric. GRT space-time is assumed to be modellable using a single-sheeted space-time identifiable as a vacuum extremal of Kähler action. This assumption is natural but can be criticized.

  3. Not too surprisingly, the cosmic expansion during the critical phase is rapidly accelerating, and the energy density of the critical cosmology, which is unique apart from its duration, would become infinite without a transition to radiation dominated cosmology (the gas of cosmic strings has finite mass density). A transition to radiation dominated cosmology must happen before this. The CMB temperature fluctuations emerge from the variations of the value of cosmic time at which this transition takes place. The magnetic energy of cosmic strings decays to particles, so that the inflaton field is replaced with the Kähler magnetic field, which also elegantly explains the appearance of magnetic fields in cosmic scales.

What can one say about the amplitude of the gravitational waves as a function of scale? Does it increase with the scale, as it should in inflationary models predicting that the amplitude is proportional to the amplitude of the density fluctuations? Does the amplitude of density fluctuations increase with scale, or does Lorentz invariance or the properties of the string gas phase prevent this? And what are the density fluctuations? And what defines the scale?

  1. In inflationary models density fluctuations are generated by gravitational perturbations in an existing space-time. In the TGD Universe many-sheeted space-time is just emerging during the inflationary period, and QFT starts to make sense only during the radiation dominated phase. Density fluctuations should correspond to the density fluctuations of the gas of cosmic strings topologically condensed to the emerging space-time sheets during the phase transition, analogous to the condensation of vapour on a 2-D surface.

  2. This would suggest that a density fluctuation at a space-time sheet corresponds to the maximum amount of string energy in the volume in question. The energy density of the cosmic string gas is proportional to T/a2 - here a is the light-cone proper time characterizing the scale and T is the string tension. This predicts that density fluctuations decrease with the length scale. The TGD prediction would thus be consistent with the BICEP2 finding even if it turns out to be correct. Otherwise the situation remains unsettled.

  3. Note that as the Minkowski space projections of cosmic strings gradually thicken, they become Kähler magnetic flux tubes carrying monopole flux: this explains the presence of cosmic magnetic fields, since no current is needed to generate monopole fields. The mass density per unit length of a cosmic string is given by the string tension, essentially the density of the magnetic energy, which decays partially to matter. As the string thickens, the energy density of the string decreases as the inverse of its transversal area. The total mass of the string however remains proportional to its length.



Thursday, September 25, 2014

No blackholes?

I received an interesting link from Ulla about blackholes. The claim of physics professor Laura Mersini-Houghton is that black holes cannot be created in stellar collapse. The collapse of a star would involve at its end phases a "swelling" which evaporates the blackhole candidate as Hawking radiation, so that no blackhole would be formed!

One positive consequence would be that this would put an end to the endless prattling about blackhole entropy, firewalls, etc., which in my opinion is just the gnawing of an old bone. Even better: no-one would waste time on a desperate attempt to reduce QCD, nuclear physics, or condensed matter physics to the physics of blackholes in 10 dimensions!

Personally I take blackholes only as a singularity of GRT. For some reason colleagues seem to be unable to admit that GRT space-time is not enough. String theories do not bring anything new, since strings as such have too low a dimension to tell anything about space-time physics, and their long length scale limit is (or is hand-waved to be) just an Einstein-YM type theory. In the theory succeeding GRT blackholes must be replaced with something more well-defined.

This sounds extremely blasphemous, but after 37 years of hard work I can safely state that TGD is one of the very few real candidates for the theory succeeding GRT.

  1. In the TGD framework one can imbed the Schwarzschild solution down to some critical radius. Something happens at the horizon of the blackhole, and the Reissner-Nordström solution suggests what. By an arbitrarily small deformation of the radial component grr=1/(1-2GM/r), making it finite at the horizon r=rS=2GM, the determinant of the 4-metric becomes vanishing. Note that CP2 is one of the extremals of Einstein-Maxwell action with cosmological constant.

    This observation suggests that the horizon in the improved theory is the 3-surface at which the signature of the metric transforms from Minkowskian to Euclidian. Euclidian space-times are used in the quantization of GRT as a mathematical trick in an attempt to make some sense of the path integral by transforming it to a functional integral, but they are not taken seriously physically.

  2. In TGD space-time surfaces indeed consist of both Minkowskian and Euclidian regions, and the Euclidian regions correspond to the 4-D lines of generalized Feynman diagrams. A good model is provided by deformations of CP2 type vacuum extremals, which are locally just CP2 but whose imbedding to M4 × CP2 is non-standard, having a light-like random curve as M4 projection giving rise to conformal invariance and Virasoro conditions. The black hole would be replaced with a particle like object identifiable as a line of a generalized Feynman diagram. All material objects would have a space-time sheet which has Euclidian signature of the induced metric and represents the object as a particle like object.
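The determinant argument in point 1 can be checked numerically. In Schwarzschild coordinates g_tt·g_rr = -1 everywhere outside the horizon, so the determinant of the 4-metric never vanishes; a tiny deformation keeping g_rr finite at the horizon makes the product, and hence the determinant, vanish there. A minimal sketch (the deformation parameter eps is just an illustrative choice):

```python
def g_tt(r, rs=1.0):
    """Schwarzschild time-time metric component (units with c = 1, rs = 2GM)."""
    return -(1.0 - rs / r)

def g_rr(r, rs=1.0):
    """Undeformed radial component, divergent at the horizon r = rs."""
    return 1.0 / (1.0 - rs / r)

def g_rr_deformed(r, rs=1.0, eps=1e-3):
    """Arbitrarily small deformation keeping g_rr finite at the horizon."""
    return 1.0 / (1.0 - rs / r + eps)

# Outside the horizon the product g_tt * g_rr is exactly -1, so
# det g = g_tt * g_rr * r^4 sin^2(theta) never vanishes.
assert g_tt(2.0) * g_rr(2.0) == -1.0

# With the deformation the product vanishes at the horizon r = rs:
print(abs(g_tt(1.0) * g_rr_deformed(1.0)))  # 0.0
```

The deformed g_rr stays finite at r = rs while g_tt still vanishes there, which is the mechanism behind the vanishing determinant.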

It must be emphasized that GRT space-time is indeed only an approximate concept in TGD framework.
  1. Einstein-Yang-Mills space-time is obtained from the many-sheeted space-time of TGD by lumping together the sheets and describing it as a region of Minkowski space endowed with an effective metric which is the sum of the flat Minkowski metric and the deviations of the metrics of the sheets from the Minkowski metric. The same procedure is applied to gauge potentials to get the standard model.

  2. The motivation is that a test particle topologically condenses at all space-time sheets present in a given region of M4 and the effects of the classical fields at these sheets superpose. Thus superposition of fields is replaced with superposition of their effects, and linear superposition with a set theoretic union of space-time sheets. TGD inspired cosmology assumes that the effective metric obtained in this manner allows imbedding as a vacuum extremal of Kähler action. The justification of this assumption is that it solves several key problems of GRT based cosmology. One can also represent a subset of solutions of Einstein's equations as vacuum extremals, but in the generic case the many-sheeted space-time cannot be approximated by a single space-time sheet.

  3. The number of field patterns in TGD Universe is extremely small - given by preferred extremals - and the relationship of TGD to GRT and YM theories is like that of atomic physics to condensed matter physics. In the transition to GRT-Yang-Mills picture one gets rid of enormous topological complexity but the extreme simplicity at the level of fields is lost. Only four CP2 coordinates appear in the role of fields in TGD framework and at GRT Yang-Mills limit they are replaced with a large number of classical fields.

Many-sheeted space-time therefore makes itself visible only via anomalies.
  1. We indeed see everywhere outer boundaries: they could be identifiable as effective or genuine boundaries of space-time sheets in TGD framework: this depends on whether one adds to Kähler action a boundary term allowing the presence of boundaries or not. If not, the space-time sheets are pairs of sheets glued together along would-be boundaries. Seeing is however not believing. We must somehow measure the presence of space-time sheets before we can take them seriously!

  2. In many-sheeted space-time a signal from, say, a supernova can arrive along several space-time sheets simultaneously, and the arrival time depends on the sheet. In the case of SN1987A this effect was observed for neutrinos and photons, as one can learn from the Wikipedia article.

  3. The "dropping" of particles from smaller to larger space-time sheet at the boundary of smaller one provides a possible mechanism of metabolism since the zero point kinetic energy decreases.

Addition: Lubos has been highly emotional about Laura Mersini's paper. It is a pity that Lubos experiences all questioning of cherished assumptions as an organized attempt to destroy quantum mechanics, general relativity, or superstring theory and reacts accordingly. Also this posting is full of adjectives such as "nutty", "idiotic", "stupid", "silly", "lunatic", "crank", "super-idiotic", to list a few of Lubos's evergreens. Everyone can decide whether to take seriously a person whose arguments are full of this kind of ad hominem attacks.

These bursts of anger do not change the fact that the black hole represents a mathematical singularity of an otherwise highly successful theory. There is no doubt that GRT describes the physics in the exterior of the black hole. The challenge is to find a theory which allows one to understand what really happens in the interior: essentially a microscopic view about space-time replacing the GRT view, which is essentially "astroscopic". Superstrings tried this but brought in nothing new, since all predictions emerge from an ad hoc hypothesis known as spontaneous compactification. Someone has correctly said that superstring theory predicts 10-dimensional Einsteinian gravity; the rest is an inflation of ad hoc hypotheses. Also Big Bang cosmology is very successful, but "What is the real physics behind and before what we identify as inflation?" is the question which we should approach with an open mind.


Monday, September 22, 2014

Why I keep my mind open concerning BICEP2 findings?

Some time ago BICEP2 published an eprint, which became the scientific news of the year. They claimed that the cosmic microwave background (CMB) exhibits so called B modes with a local polarization which is rotational ("whirly") and would be caused by primordial gravitational waves during the inflationary period. I discussed the finding from TGD point of view here.

Inflation theorists interpreted this as support for inflation, although the size of the effect was so large that quantum field theoretic modelling in terms of the inflaton concept (analogous to Higgs) becomes highly questionable mathematically. Soon the finding of BICEP2 was challenged (I wrote about this here). Later BICEP2 published an article in which they admitted that the effect could be due to dust. The Planck collaboration has now published an eprint supporting the view that the signal observed by BICEP2 is probably due to dust.

The effect forced me to develop in more detail the TGD view about the transition from primordial cosmic string dominated cosmology (gas of cosmic strings) to radiation dominated cosmology. TGD view forces one to leave the world as quantum field theories (QFTs) and general relativity see it. Primordial cosmology would correspond to a string gas dominated phase in which the standard view about space-time does not make sense. The counterpart of the inflationary period would correspond to a transition to the radiation dominated phase in which the space-time sheets - slightly curved pieces of Minkowski space - make sense, and the GRT limit of TGD, obtained by replacing the space-time sheets of many-sheeted space-time with a single slightly curved piece of M4, is practical, with many-sheetedness making itself manifest only via anomalies. In GRT framework the gas of cosmic strings does not make sense, and the inflationary period in which space-time sheets already appear is a bridge to the world where GRT space-time is a useful concept.

This transition is a phase transition, and quantum criticality fixes the cosmology during this period completely apart from the duration of the period. The curvature scalar of 3-space vanishes (no scales at quantum criticality), as it does also in inflationary cosmology. Hence it should not be surprising (it was!) that the outcome was very similar to that in inflationary cosmology: accelerated expansion. This period must have finite duration, as it does, because otherwise the mass density would become literally infinite, which is not possible since the density of cosmic strings in the "vapour phase" is finite.

Unlike the purely phenomenological QFT model involving inflaton type fields, TGD provides a detailed geometric mechanism for the generation of the CMB polarization in terms of magnetic flux tubes. I cannot however calculate the size of the effect, and one can also imagine an effect caused by the presence of the magnetic flux tubes, which could contribute also at later times.

Lubos has a very interesting posting emphasizing the David-Goliath setting: big and arrogant Planck against tiny BICEP2! On the basis of my own frustrating experiences as a tiny David I am forced to take this sociological interpretation seriously.

Lubos argues that the whirling pattern observed for the polarisation direction vector is regular and exhibits a definite scale. One can argue that dust - if it is dust - would produce a random pattern. Personally I do not have any strong attitudes pro or con: TGD predicts primordial gravitational waves but I am unable to calculate the magnitude of the effect. TGD also suggests an alternative mechanism producing the whirling: it could be due to rotational magnetic fields at magnetic flux tubes (having cosmic strings as predecessors) - that is, a vectorial rather than tensorial effect. This effect could be present also at later times and produce a regular whirling pattern.

Magnetic fields are indeed known to exist in arbitrarily long astrophysical and cosmological scales. This is one of the big challenges of cosmology, since their presence requires currents, but in cosmology thermodynamical equilibrium does not allow this kind of currents in long length scales. In TGD framework (due to the topology of CP2) the flux tubes carry monopole fluxes requiring no currents, and in the first approximation these fields can be locally irrotational although they have closed flux lines. A flow around an annulus in the plane illustrates this: the flow can be locally irrotational everywhere, but obviously there is a global rotation. Does the local irrotational character of these magnetic fields imply that the local polarization of CMB cannot be generated by closed flux tubes? Or does the whirling pattern make the web of flux tubes directly visible?

The average blogger is quite too hasty in putting the BICEP2 findings in the same category as the "discovery" that neutrinos travel with a speed slightly higher than photons (doing this with emotional rhetoric is an easy manner to increase the number of visitors!). Ironically, supernova SN1987A demonstrated already 27 years ago that neutrinos indeed seem to move faster than photons! The neutrinos from SN1987A came in two groups, and both neutrino bursts came earlier than the gamma ray burst. The TGD explanation is that the neutrino bursts and photons arrived along different space-time sheets with slightly different arrival times, meaning Δc/c ∼ 10^-9. The effect claimed by the OPERA collaboration would have been much larger, something like Δc/c ∼ 10^-5.
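A back-of-the-envelope check of the quoted order of magnitude, assuming a distance of about 168,000 light years to SN1987A and an arrival-time spread of a few hours (both illustrative round numbers):

```python
# Delta c / c ~ Delta t / T, where T is the light travel time from SN1987A.
travel_time_years = 168_000        # assumed distance in light years
delta_t_hours = 3.0                # assumed spread between the bursts
travel_time_hours = travel_time_years * 365.25 * 24
dc_over_c = delta_t_hours / travel_time_hours
print(f"{dc_over_c:.1e}")  # 2.0e-09, i.e. the quoted Delta c/c ~ 10^-9
```

With these inputs the ratio lands at about 2×10^-9, consistent with the Δc/c ∼ 10^-9 figure above.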

Thursday, September 18, 2014

Is cosmic expansion a mere coordinate effect?

There is a very interesting article about cosmic expansion, or rather a claim about the absence of cosmic expansion.

The argument, based on the experimental findings of a team of astrophysicists led by Eric Lerner, goes as follows. In non-expanding cosmology, and also in the space around us (Earth, Solar System, Milky Way), as similar objects go further away, they look fainter and smaller, and their surface brightness remains constant. In Big Bang theory objects actually should appear fainter but bigger. Therefore the surface brightness - total luminosity per area - should decrease with distance. Besides this, cosmic redshift would dim the light.

Therefore in an expanding Universe the most distant galaxies should have hundreds of times dimmer surface brightness, since the surface area is larger and the total intensity of light emitted is more or less the same. Unless, of course, the total luminosity increases to compensate this: that would be a totally ad hoc connection between the dynamics of stars and the cosmic expansion rate.
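In standard cosmology this is the Tolman test: bolometric surface brightness falls as (1+z)^4, which for moderately high redshifts indeed gives the "hundreds of times" dimming. A quick check (the redshift values are chosen for illustration):

```python
# Tolman dimming: bolometric surface brightness scales as (1+z)^-4
# in an expanding universe, so the dimming factor is (1+z)^4.
for z in (1, 2, 3, 5):
    print(f"z = {z}: dimming factor {(1 + z)**4}")
# Already z = 3 gives a 256-fold drop in surface brightness.
```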

This is not what observations tell. Therefore one could conclude that Universe does not expand and Big Bang theory is wrong.

The conclusion is of course wrong. Big Bang theory certainly explains a lot of things. I try to summarize what goes wrong.

  1. It is essential to make clear what time coordinate one is using. When analyzing motions in Solar System and Milky Way, one uses flat Minkowski coordinates of Special Relativity. In this framework one observes no expansion.

  2. In cosmology one uses Robertson-Walker coordinates (a, r, θ, φ); a and r are the relevant ones. In TGD inspired cosmology R-W coordinates relate to the spherical variant (t, r_M, θ, φ) of Minkowski coordinates by the formulas

    a² = t² - r_M² ,  r_M = a×r .

    The line element of the metric is

    ds² = g_aa da² - a²[dr²/(1+r²) + r²dΩ²] ,

    and at the limit of empty cosmology one has g_aa = 1.

    In these coordinates the light-cone of empty Minkowski space looks like an expanding albeit empty cosmology! a is just the light-cone proper time. The reason is that the cosmic time coordinate labels the a = constant hyperboloids (hyperbolic spaces) rather than M4 time = constant snapshots. This totally trivial observation is extremely important concerning the interpretation of cosmic expansion. Often, however, trivial observations are the most difficult ones to make.
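The claim that the empty light-cone looks like an expanding cosmology reduces to the coordinate substitution t = a√(1+r²), r_M = a·r. The following sketch checks numerically that the Minkowski interval dt² - dr_M² equals da² - a²dr²/(1+r²), the non-angular part of the empty-cosmology line element (the angular term r_M²dΩ² = a²r²dΩ² matches trivially):

```python
import math

def minkowski_part(da, dr, a, r):
    # dt and dr_M obtained from t = a*sqrt(1+r^2), r_M = a*r by the chain rule
    dt   = math.sqrt(1 + r*r) * da + a * r / math.sqrt(1 + r*r) * dr
    dr_M = r * da + a * dr
    return dt*dt - dr_M*dr_M

def rw_part(da, dr, a, r):
    # corresponding piece of the empty-cosmology line element (g_aa = 1)
    return da*da - a*a * dr*dr / (1 + r*r)

a, r = 2.0, 0.7
for da, dr in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.3, -0.5)]:
    assert abs(minkowski_part(da, dr, a, r) - rw_part(da, dr, a, r)) < 1e-12
print("empty Minkowski light-cone = expanding R-W cosmology with g_aa = 1")
```

Since the interval is quadratic in (da, dr), checking the two basis directions and a cross term is enough to establish the identity.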

Cosmic expansion would to a high extent be a coordinate effect, but why should one then use R-W coordinates in cosmic scales? Why not Minkowski coordinates?
  1. In Zero Energy Ontology (ZEO) - something very specific to TGD - the use of these coordinates is natural since zero energy states are pairs of positive and negative energy states localized about boundaries of causal diamonds (CD), which are intersections of future and past directed light-cones having pieces of light-cone boundary as their boundaries. The geometry of CD suggests strongly the use of R-W coordinates associated with either boundary of CD. The question "Which boundary?" would lead to digression to TGD inspired theory of consciousness. I have talked about this in earlier postings.

  2. Thus the correct conclusion is that local objects such as stars and galaxies and even larger objects do not participate in the expansion when one looks at the situation in local Minkowski coordinates - which, by the way, are uniquely defined in TGD framework since space-time sheets are surfaces in M4×CP2. In General Relativity the identification of the local Minkowski coordinates could be a highly non-trivial challenge.

    In TGD framework local systems correspond to their own space-time sheets, and Minkowski coordinates are natural for the description of the local physics, since a space-time sheet is by definition a space-time region allowing a representation as a graph of a map from M4 to CP2. The effects of the CD, to whose interior the space-time surfaces in question belong, on the local physics are negligible. Cosmic expansion is therefore not a mere coordinate effect but directly reflects the underlying ZEO.

  3. In General Relativity one cannot assume imbeddability of the generic solution of Einstein's equations to M4 × CP2, and this argument does not work. The absence of local expansion has been known for a long time, and Swiss Cheese cosmology has been proposed as a solution. Non-expanding local objects of constant size would be the holes of the Swiss Cheese, and the cheese around them would expand. The holes of the cheese would correspond to space-time sheets in TGD framework. All space-time sheets can in principle be non-expanding; they have suffered topological condensation to larger space-time sheets.

One should also make clear that GRT space-time is only an approximate concept in TGD framework.
  1. Einstein-Yang-Mills space-time is obtained from the many-sheeted space-time of TGD by lumping together the sheets and describing it as a region of Minkowski space endowed with an effective metric which is the sum of the flat Minkowski metric and the deviations of the metrics of the sheets from the Minkowski metric. The same procedure is applied to gauge potentials.

  2. The motivation is that a test particle topologically condenses at all space-time sheets present in a given region of M4 and the effects of the classical fields at these sheets superpose. Thus superposition of fields is replaced with superposition of their effects, and linear superposition with a set theoretic union of space-time sheets. TGD inspired cosmology assumes that the effective metric obtained in this manner allows imbedding as a vacuum extremal of Kähler action. The justification of this assumption is that it solves several key problems of GRT based cosmology.


  3. The number of field patterns in TGD Universe is extremely small - given by preferred extremals - and the relationship of TGD to GRT and YM theories is like that of atomic physics to condensed matter physics. In the transition to GRT-Yang-Mills picture one gets rid of enormous topological complexity but the extreme simplicity at the level of fields is lost. Only four CP2 coordinates appear in the role of fields in TGD framework and at GRT Yang-Mills limit they are replaced with a large number of classical fields.

Is evolution 3-adic?

I received an interesting email from Jose Diez Faixat giving a link to his blog. The title of the blog, "Bye-bye Darwin", tells something about his proposal; the sub-title "The Hidden rhythm of evolution" tells more. The Darwinian view is that evolution is random and evolutionary pressures select the randomly produced mutations. Rhythm does not fit with this picture.

The observation challenging the Darwinian dogma is that the moments of evolutionary breakthroughs - according to Faixat's observation - seem to come as powers of 3 of some fundamental time scale. There would be precise 3-fractality and accompanying cyclicity - something totally different from Darwinian expectations.

By looking at the diagrams demonstrating the appearance of powers of 3 as time scales, it became clear that an interpretation in terms of underlying 3-adicity could make sense. I have speculated with the possibility of small-p p-adicity. In particular, the p-adic length scale hypothesis, stating that primes near powers of 2 are especially important physically, could reflect underlying 2-adicity. One can indeed have for each p an entire hierarchy of p-adic length scales coming as powers of p^(1/2); p=2 would give the p-adic length scale hypothesis. The observations of Faixat suggest that also p=3 is important - at least in evolutionary time scales.

Note: The p-adic primes characterizing elementary particles are gigantic. For instance, the Mersenne prime M_127 = 2^127 - 1 characterizes the electron. This scale could relate to the 2-adic scale L_2(127) = 2^(127/2) × L_2(1). The hierarchy of Planck constants coming as h_eff = n×h also predicts that the p-adic length scale hierarchy has scaled up versions obtained by scaling it by n.
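The primality of M_127 = 2^127 - 1 is easy to verify with the Lucas-Lehmer test, which decides primality for Mersenne numbers; a minimal implementation:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    iff s_(p-2) = 0 in the sequence s_0 = 4, s_(k+1) = s_k^2 - 2 (mod M_p)."""
    M = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % M
    return s == 0

print(lucas_lehmer(127))  # True: the electron's M_127 is indeed prime
print(lucas_lehmer(11))   # False: M_11 = 2047 = 23 * 89
```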

The interpretation would be in terms of p-adic topology as an effective topology in some discretization defined by the scale of resolution. In short scales there would be chaos in the sense of real topology: this would correspond to Darwinian randomness. In long scales p-adic continuity would imply fractal periodicities in powers of p and possibly of its square root. The reason is that in p-adic topology the system's states at t and t + k·p^n, k = 0, 1, ..., p-1, would not differ much for large values of n.

The interpretation relies on p-adic fractality. p-Adic fractals are obtained by assigning to a real function its p-adic counterpart: one maps the real point to a p-adic number by the canonical identification

∑_n x_n p^n → ∑_n x_n p^(-n) ,

assigns to it the value of the p-adic variant of the real function with a similar analytic form, and maps the value of this function back to a real number by the inverse of the canonical identification. The powers of p correspond to a fractal hierarchy of discontinuities.
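For non-negative integers the canonical identification is a few lines of code; the sketch below also illustrates the fractal periodicity claimed above: points separated by a large power of p map to nearby real numbers.

```python
def canonical_identification(m, p):
    """Map the integer m = sum_n x_n p^n to the real number sum_n x_n p^(-n)."""
    result, weight = 0.0, 1.0
    while m > 0:
        m, digit = divmod(m, p)   # peel off the p-adic digits x_n
        result += digit * weight
        weight /= p
    return result

print(canonical_identification(5, 2))  # 1.25, since 5 = 101_2 -> 1 + 0/2 + 1/4

# t and t + p^n are p-adically close and their images stay close:
print(canonical_identification(1, 3))         # 1.0
print(canonical_identification(1 + 3**6, 3))  # 1.00137..., differs only by 3^-6
```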

A possible concrete interpretation is that the moments of evolutionary breakthroughs correspond to criticality, and the critical state is universal and very similar for moments which are p-adically near each other.

An amusing coincidence was that I have been working with a model for the 12-note scale based on the observation that the icosahedron is a Platonic solid containing 12 vertices. The scale is represented as a closed non-self-intersecting curve - a Hamiltonian cycle - connecting all 12 vertices: octave equivalence is the motivation for closedness. The cycle consists of edges connecting two neighboring vertices identified as quints - scalings of the fundamental by 3/2 in the Platonic scale. What is amusing is that the scale is obtained essentially from powers of 3 scaled down (octave equivalence) to the basic octave by suitable powers of 2. The faces of the icosahedron are triangles and define 3-chords. A triangle can contain 0, 1, or 2 edges of the cycle, meaning that the 3-chords defined by the faces and defining the notion of harmony contain 0, 1, or 2 quints. One obtains a large number of different harmonies partially characterized by the numbers of 0-, 1-, and 2-quint icosahedral triangles.

The connection with 3-adicity comes from the fact that the Pythagorean quint cycle is nothing but scaling by powers of 3 followed by suitable downward scalings by 2 bringing the frequency to the basic octave, so that 3-adicity might be realized also at the level of music!
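The quint cycle is easy to make explicit: stacking powers of 3/2 and folding each frequency back into the basic octave generates the 12 notes, and every note is of the form 3^k/2^m, so only powers of 3 and 2 enter. A sketch:

```python
def pythagorean_scale(n=12):
    """Generate n notes by stacking quints (ratio 3/2) and folding
    each frequency back into the basic octave [1, 2)."""
    notes = []
    f = 1.0
    for _ in range(n):
        notes.append(f)
        f *= 1.5          # next quint: one more power of 3, one fewer of 2
        while f >= 2.0:
            f /= 2.0      # octave equivalence: divide by 2 until in [1, 2)
    return sorted(notes)

scale = pythagorean_scale()
print(len(scale))                          # 12, matching the icosahedron's 12 vertices
print(all(1.0 <= f < 2.0 for f in scale))  # True
```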

There is also another strange coincidence. The icosahedron has 20 faces, which is the number of amino-acids. This suggests a connection between fundamental biology and the 12-note scale, and leads to a concrete geometric model for amino-acids as 3-chords and for proteins as music consisting of sequences of 3-chords. Amino-acids can be classified into 3 classes using polarity and the basic/acidic/neutral character of the side chain as criteria. DNA codons would define the notes of this music, with 3-letter codons coding for 3-chords. One ends up also with a model of the genetic code relying on symmetries of the icosahedron, starting from some intriguing observations about the symmetries of the code table.

At the level of details the icosahedral model is able to predict the genetic code correctly for 60 codons only, and one must extend it by fusing it with a tetrahedral code. The fusion of the two codes corresponds geometrically to the fusion of the icosahedron with a tetrahedron along a common face identified as the "empty" amino-acid, coded by 2 stop codons in the icosahedral code and 1 stop codon in the tetrahedral code. The tetrahedral code brings in 2 additional amino-acids identified as the so called 21st and 22nd amino-acids, discovered a few years ago and coded by stop codons. These stop codons certainly differ somehow from the ordinary ones - it is thought that the context somehow defines the difference. In TGD framework the magnetic body of DNA could define the context.

The addition of the tetrahedron brings one additional vertex, which correlates with the fact that the rational scale does not quite close. 12 quints give a little bit more than 7 octaves, and this forces the introduction of a 13th note: for instance, Ab and G# could differ slightly. Also micro-tubular geometry involves the number 13 in an essential manner.
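The mismatch between 12 quints and 7 octaves is the Pythagorean comma, and its size quantifies how much Ab and G# would differ:

```python
# Twelve quints overshoot seven octaves by the Pythagorean comma:
comma = (3 / 2)**12 / 2**7
print(comma)  # 1.0136...: Ab and G# would differ by about 1.4 percent
```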

DNA's humble beginnings as a nutrient carrier: humble - really?

I received from Ulla an interesting link about indications that DNA may have had rather humble beginnings: it would have served as a nutrient carrier. Each nucleotide in the phosphate-deoxyribose backbone corresponds to a phosphate, and nutrient here refers to the phosphate assumed to carry metabolic energy in a high energy phosphate bond.

In AXP, X = M, D, T, the number of phosphates is 1, 2, 3 respectively. When ATP transforms to ADP it gives away one phosphate to the acceptor molecule, which thus receives metabolic energy. For DNA there is one phosphate per nucleotide, and besides A also T, G, and C are possible.


The attribute "humble" reflects of course the recent view about the role of nutrients and metabolic energy: it is just ordered energy that they carry. TGD view about life suggests that "humble" is quite too humble an attribute.

  1. The basic notion is potentially conscious information. This is realized as negentropic entanglement, for which the entanglement probabilities must be rational numbers (or possibly also algebraic numbers in some algebraic extension of rationals) so that their p-adic norms make sense. The entanglement entropy associated with the density matrix characterizing the entanglement is defined by a modification of the Shannon formula: one replaces the probabilities in the argument of the logarithm with their p-adic norms and finds the prime for which the entropy is smallest. The entanglement entropy defined in this manner can be and typically is negative, unlike the usual Shannon entropy. The interpretation is as information associated with entanglement. Second law is not violated, since the information is a 2-particle property, whereas Shannon entropy is a single particle property characterizing the average particle.

    The interpretation of negentropic entanglement is as potentially conscious information: the superposition of pairs of states would represent an abstraction or rule whose instances would be the pairs of states. The larger the number of pairs, the higher the abstraction level.

  2. The consistency with standard quantum measurement theory gives strong constraints on the form of the negentropic entanglement. The key observation is that if the density matrix is proportional to the unit matrix, standard measurement theory says nothing about the outcome of the measurement and the entanglement can be preserved; otherwise the reduction occurs to one of the states involved. This situation could correspond to negentropic 2-particle entanglement. For several subsystems each subsystem-complement pair would have a similar density matrix. There is also a connection with dark matter identified as phases with non-standard value h_eff = n×h of Planck constant: n defines the dimension of the density matrix. Thus dark matter at magnetic flux quanta would make living matter living.

    In the 2-particle case the entanglement coefficients form a unitary matrix, typically involved with quantum computing systems. The DNA-cell membrane system is indeed assumed to form a topological quantum computer in TGD framework. The braiding of magnetic flux tubes connecting nucleotides with lipids of the cell membrane defines the topological quantum computer program, and its time evolution is induced by the flow of lipids forming a 2-D liquid crystal. This flow can be induced by nearby events and also by nerve pulses.

    Sidestep: Actually pairs of flux tubes are involved to make high temperature superconductivity possible, with the members of Cooper pairs at flux tubes with same or opposite directions of spins depending on the direction of the magnetic field, and thus in spin S=0 or S=1 state. For a large value of Planck constant h_eff = n×h the spin-spin interaction energy is large and could correspond in living matter to energies of visible light.


  3. Negentropy Maximization Principle (NMP) is the basic variational principle of TGD inspired theory of consciousness. NMP states that the gain of negentropic entanglement is maximal in state function reduction so that negentropic entanglement can be stable.

  4. NMP guarantees that during evolution by quantum jumps recreating the Universe (and the sub-Universes assignable to causal diamonds (CDs)) the information resources of the Universe increase. Just to irritate skeptics I have spoken about "Akashic records". Akashic records would form books in a universal library and could be read by interaction free quantum measurement preserving the entanglement but generating secondary state function reductions, which provide conscious information about the Akashic records and define also a model of self.

    Sidestep: Self can be identified as a sequence of state function reductions for which only the first quantum jump is non-trivial at the second boundary of CD, whereas the other quantum jumps induce a change of the superposition of CDs at the opposite boundary (and of the states at them). Essentially a discretized counterpart of unitary time development would be in question. This allows one to understand how the arrow of psychological time emerges and why the contents of sensory experience are about so narrow a time interval. An act of free will corresponds to the first state function reduction at the opposite boundary and thus involves a change of the arrow of psychological time at some level of the self hierarchy: this prediction is consistent with Libet's findings that conscious decision implies neural activity initiated before the decision ("before" with respect to geometric time, not subjective time).
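The number-theoretic entropy of point 1 above can be sketched in a few lines. Under the stated assumption of rational probabilities one sees directly how the modified Shannon formula turns negative; the choice p=2 and the four-fold degenerate probabilities are illustrative:

```python
from fractions import Fraction
import math

def p_adic_norm(q, p):
    """|q|_p = p^(-k), where p^k is the exact power of p dividing the rational q."""
    k = 0
    num, den = q.numerator, q.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return float(p)**(-k)

def number_theoretic_entropy(probs, p):
    """Shannon formula with |P|_p replacing P inside the logarithm."""
    return -sum(float(P) * math.log(p_adic_norm(P, p)) for P in probs)

# Density matrix proportional to the unit matrix: four probabilities 1/4.
# |1/4|_2 = 4, so each term contributes -(1/4)*log(4), and the total is -log(4).
probs = [Fraction(1, 4)] * 4
print(number_theoretic_entropy(probs, 2))  # -1.386...: negative, i.e. negentropy
```

For comparison, the ordinary Shannon entropy of the same distribution is +log 4; the sign flip comes entirely from |1/4|_2 = 4 > 1 inside the logarithm.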


In this framework the phosphates could be seen as ends of magnetic flux tubes connecting DNA to the cell membrane and mediating negentropic entanglement with the cell membrane. The DNA as topological quantum computer vision conforms with the interpretation of the DNA-cell membrane system as "Akashic records". This role of the DNA-cell membrane system would have emerged already before the metabolic machinery, whose function would be to transfer the entanglement of nutrient molecules with some bigger system X to entanglement between biomolecules and X. Some intriguing numerical coincidences suggest that X could be the gravitational Mother Gaia, and that flux tubes mediating the gravitational interaction between nutrient molecules and the gravitational Mother Gaia could be in question. This brings in mind Penrose's proposal about the role of quantum gravity. TGD is indeed a theory of quantum gravity predicting that gravitation is quantal in astroscopic length scales.