
Wednesday, December 17, 2008

Antimatter as dark matter?

Intuitively, the scaling of Planck constant scales up quantum lengths, in particular the size of the causal diamond CD defined as the intersection of future and past directed light-cones. This looks trivial, but one must describe precisely what is involved in order to check internal consistency and also to understand how to model the quantum phase transitions changing Planck constant. It turns out that the back of the Big Book along which CDs are glued is analogous to a Josephson junction, which in the presence of dissipation leads to the separation of charges to different pages. This might relate to the generation of the matter antimatter asymmetry of the visible matter.

The first manner to understand the situation is to consider a CD with a fixed range of M4 coordinates. The scaling up of the covariant Kähler metric of CD by r² = (hbar/hbar0)² scales up the size of CD by r. Another manner to see the situation is by scaling up the linear M4 coordinates by r for the larger CD so that the M4 metric becomes the same for both CDs. The smaller CD is glued isometrically to the larger one along (M2 ∩ CD) ⊂ CD anywhere in the interior of the larger CD. What happens is non-trivial for the following reasons.
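In coordinates the equivalence of the two descriptions is just the substitution rule for the flat metric; a minimal sketch of the bookkeeping, containing nothing beyond what the paragraph above states:

    r^2 \eta_{\mu\nu} dx^\mu dx^\nu = \eta_{\mu\nu} dx'^\mu dx'^\nu , \qquad x'^\mu \equiv r x^\mu , \quad r = \hbar/\hbar_0 ,

so a CD with a fixed coordinate range and metric scaled by r² is isometric to a CD with the standard metric and linear size scaled by r.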

  1. The singular coverings and factor spaces are different, and M4 scaling is not a symmetry of the Kähler action, so that the preferred extremals in the two cases do not relate by a simple scaling. The interpretation is in terms of the coding of the radiative corrections in powers of hbar into the shape of the preferred extremals. This becomes clear from the representation of the Kähler action in which the M4 coordinates have the same range for the two CDs but the M4 metrics differ by the factor r².

  2. In common M4 coordinates the M4 gauge part Aa of the CP2 Kähler potential for the larger CD differs by a factor 1/r from that for the smaller CD. This guarantees the invariance of the four-momentum assignable to the Chern-Simons action in the phase transition changing hbar. The resulting discontinuity of Aa at M2 is analogous to a static voltage difference between the two CDs, and M2 could be seen as an analog of a Josephson junction. In the absence of dissipation (expected in quantum criticality) the Kähler voltage could generate oscillatory fermion, em, and Z0 Josephson currents between the two CDs. Since the Kähler gauge potential couples to quarks and leptons with opposite signs, the currents would run in opposite directions for quarks and leptons as well as for matter and antimatter. In the presence of dissipation the currents would be ohmic and could force quarks and leptons, and matter and antimatter, to different pages of the Big Book, so that quarks inside hadrons would have a nonstandard value of Planck constant.

  3. The discontinuities of Au and Af allow one to assign electric and magnetic Kähler point charges QK^(e/m) with M1 ⊂ M2, having signs opposite to those assignable with δCD×CP2. It should be possible to identify physically M2, the line E1 representing the quantization axis of angular momentum, and the position of QK.

For details and background see the updated chapter Quantum Hall effect and Hierarchy of Planck Constants of "Physics in Many-Sheeted Space-time".

Tuesday, December 16, 2008

Is dark matter anyonic?

A year or two ago I proposed an explanation of FQHE, anyons, and fractionization of quantum numbers in terms of a hierarchy of Planck constants realized as a generalization of the imbedding space H=M4×CP2 to a book-like structure. The book-like structure applies separately to CP2 and to causal diamonds (CD ⊂ M4) defined as intersections of future and past directed light-cones. The pages of the Big Book correspond to singular coverings and factor spaces of CD (CP2) glued along a 2-D subspace of CD (CP2) and are labeled by the values of the Planck constants assignable to CD and CP2 and appearing in Lie algebra commutation relations. The observed Planck constant hbar, whose square defines the scale of the M4 metric, corresponds to the ratio of these Planck constants. The key observation is that a fractional filling factor results if hbar is scaled up by a rational number.

In the new chapter Quantum Hall effect and Hierarchy of Planck Constants of "p-Adic Length Scale Hypothesis and Hierarchy of Planck Constants" I discussed this idea in more detail. The outcome is a rather detailed view about anyons on one hand, and about the Kähler structure of the generalized imbedding space on the other hand.

In previous postings and in the chapter Quantum Astrophysics of "Physics in Many-Sheeted Space-time" I have considered the idea that dark matter is in an anyonic phase in astrophysical scales. Among other things this leads to an explanation for both the successes and the partial failures of Bohr orbitology in astrophysical length scales. In the following I briefly sum up some key points of the vision that anyonization and the associated charge fractionization are universal aspects of dark matter identified as quantum coherent phases with a large value of Planck constant.

Charge fractionization is a fundamental piece of quantum TGD and should be an extremely general phenomenon and a basic characteristic of dark matter, known to contribute 95 per cent of the matter of the Universe.

  1. In the TGD framework the scaling hbar = m·hbar0 implies the scaling of the unit of angular momentum for an m-fold covering of CD only if the many-particle state is a Zm singlet. Zm singletness for many-particle states of course allows non-singletness for single-particle states. For factor spaces of CD the scaling hbar → hbar/m is compensated by the scaling l → ml for Lz = l·hbar, guaranteeing invariance under rotations by multiples of 2π/m. Again one can pose the invariance condition on many-particle states but not on individual particles, so that a genuine physical effect is in question (a small numerical illustration follows after this list).

  2. There is an analogy with the Z3 singletness holding true for many-quark states, and one cannot completely exclude the possibility that quarks are actually fractionally charged leptons with an m=3 covering of CP2 reducing the value of Planck constant, so that quarks would be anyonic dark matter with a smaller Planck constant, and the impossibility to observe quarks directly would reduce to the impossibility for them to exist at our space-time sheet. Confinement would in this picture relate to the fractionization requiring that the 2-surface associated with a quark must surround the tip of CD. Whether this option really works remains an open question. In any case, TGD anyons are quite generally confined around the tip of CD.

  3. Quite generally, one expects that dark matter and its anyonic forms emerge in situations where the density of a plasma-like state of matter is very high, so that an N-fold covering of CD reduces the density of matter by a factor 1/N at a given sheet of the covering and thus also the repulsive Coulomb energy. The plasma state resulting in QHE is one example of this. The interiors of neutron stars and black-hole-like structures are extreme examples, and I have proposed that black holes are dark matter with a gigantic value of gravitational Planck constant implying that black hole entropy (which is proportional to 1/hbar) is of the same order of magnitude as the entropy assignable to the spin of an elementary particle. The confinement of matter inside a black hole could have an interpretation in terms of macroscopic anyonic 2-surfaces containing the topologically condensed elementary particles. This conforms with the TGD inspired model for the final state of a star, inspiring the conjecture that even ordinary stars could possess an onion-like structure with thin layers with radii given by the p-adic length scale hypothesis.

    The idea about the hierarchy of Planck constants was inspired by the finding that planetary orbits can be regarded as Bohr orbits: the explanation was that visible matter has condensed around dark matter at spherical cells or tubular structures around planetary orbits. This led to the proposal that the planetary system has formed through this kind of condensation process around spherical shells or flux tubes surrounding planetary orbits and containing dark matter.

    The question why dark matter would concentrate around flux tubes surrounding planetary orbits was not answered. The answer could be that dark matter is anyonic matter at partonic 2-surfaces, whose light-like orbits define the basic geometric objects of quantum TGD. These partonic 2-surfaces could contain a central spherical anyonic 2-surface connected by radial flux tubes to the flux tubes surrounding the orbits of planets and other massive objects of the solar system, to form connected anyonic surfaces analogous to elementary particles.

    If factor spaces appear in M4 degrees of freedom, they give rise to Zn ⊂ Ga symmetries. In astrophysical systems the large value of hbar necessarily requires a large value of na for CD coverings, as the considerations (in particular the model for dark graviton emission and detection) force one to conclude. The same conclusion follows also from the absence of evidence for exact orbifold-type symmetries in M4 degrees of freedom for dark matter in astrophysical scales.

  4. The model of DNA as a topological quantum computer assumes that DNA nucleotides are connected by magnetic flux tubes to the lipids of the cell membrane. In this case, p-adically scaled down u and d quarks and their antiquarks are assumed to be associated with the ends of the flux tubes and to provide a representation of DNA nucleotides. Quantum Hall states would be associated with partonic 2-surfaces assignable to the lipid layers of the cell and nuclear membranes and also to the endoplasmic reticulum filling the cell interior, making it a macroscopic quantum system and explaining also its stability. The entire system formed in this manner would be a single extremely complex anyonic surface, and the coherent behavior of a living system would result from the fusion of the anyonic 2-surfaces associated with cells to larger anyonic surfaces giving rise to organs and organisms and maybe even larger macroscopically quantum coherent connected systems.

    In living matter one must consider the possibility that small values of na correspond to factor spaces of CD (consider as an example the aromatic cycles with Zn symmetry, n = 5 or n = 6, appearing in the key molecules of life). Large hbar would require CP2 factor spaces with a large value of nb, so that the integers characterizing the charges of anyonic particles would be shifted by a large integer. This is not in accordance with naive ideas about stability. One can also argue that various anomalous effects, such as IQHE with filling factor equal to an integer multiple of nb, should have been observed in living matter.

    A more attractive option is that both CD and CP2 are replaced with singular coverings. Spin and charge fractionization takes place, but the effects are small if na, nb, and na/nb are all large. An interesting possibility is that the ends of the flux tubes assumed to connect DNA nucleotides to the lipids of various membranes carry, instead of u and d quarks and their antiquarks, fractionally charged electrons and neutrinos and their antiparticles having nb=3 and a large value of na. Systems such as snowflakes could correspond to large-hbar zoom-ups of molecular systems having a subgroup of the rotation group as a symmetry group in the standard sense of the word.

    The model of dark graviton de-coherence allows one to conclude that the fractionization of Planck constant has an interpretation as a transition to chaos in the sense that fundamental frequencies are replaced with sub-harmonics corresponding to the denominator s of hbar/hbar0 = r/s. The more digits are needed to represent r/s, the higher the complexity of the system. Period doubling bifurcations leading to chaos represent a special case of this. Living matter is indeed a system at the boundary of chaos (or rather, complexity) and order, and larger values of nb would give rise to complexity having weak charge and spin fractionization effects as a signature.

  5. Coverings alone are enough to produce a rational-valued spectrum for hbar, and one must keep in mind that the applications of the theory do not allow one to decide whether singular factor spaces are really needed at all.
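The following minimal Python sketch illustrates the bookkeeping of item 1 above for an m-fold covering. It is an illustration of the stated Zm singlet condition only; the covering picture itself is of course the hypothesis of the text, and all names here are mine:

    import cmath, math

    # On an m-fold covering of CD the angle coordinate runs over [0, 2*pi*m).
    # Single-valued waves psi_k(phi) = exp(i*k*phi/m) then carry L_z = (k/m)*hbar,
    # i.e. fractional angular momentum in units of hbar, and pick up the phase
    # exp(2*pi*i*k/m) under an ordinary 2*pi rotation. A many-particle state is
    # invariant under 2*pi rotations exactly when sum(k_i) is divisible by m:
    # the Z_m singlet condition of item 1.
    m = 3                  # order of the covering
    ks = [1, 1, 1]         # single-particle labels, each carrying L_z = k/m in hbar units

    def phase_under_2pi(k, m):
        """Phase picked up by psi_k under a rotation by 2*pi."""
        return cmath.exp(2j * math.pi * k / m)

    total = 1
    for k in ks:
        total *= phase_under_2pi(k, m)
    print([f"{k}/{m} hbar" for k in ks])                  # fractional single-particle L_z
    print(f"phase of many-particle state: {total:.3f}")   # ~1+0j for a Z_m singlet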

For details see the new chapter Quantum Hall effect and Hierarchy of Planck Constants of "p-Adic Length Scale Hypothesis and Hierarchy of Planck Constants".

Sunday, December 14, 2008

Revised vision about quantum astrophysics

I had earlier deduced a formula for the quantized Planck constant from the requirement that it represents an algebraic homomorphism. Two options, for which the Planck constants were inverses of each other, were possible. As usual, I chose the wrong one! The development of a detailed model for the fractional quantum Hall effect fixed the choice on the basis of physical arguments. The next task is to go through all applications and make the needed modifications. I started from Quantum Astrophysics. I glue the abstract below.
The vision that the quantum dynamics for dark matter is behind the formation of the visible structures suggests that the formation of the astrophysical structures could be understood as a consequence of gravitational Bohr rules. The origin of these rules has remained a little bit mysterious until the discovery that the hierarchy of Planck constants relates very closely to anyons and fractionization of quantum numbers.

  1. A key element is the notion of a partonic 2-surface, which for large values of Planck constant can have astrophysical size. This surface contains dark matter in an anyonic many-particle state if it surrounds the tip of the so-called causal diamond (the intersection of future and past directed light-cones). Also the flux tubes surrounding the orbits of planets and other astrophysical objects and containing dark matter would be connected by radial flux tubes to the central anyonic 2-surface, so that the entire system would be anyonic and quantum coherent in astrophysical scales. Visible matter is condensed around these dark matter structures.

  2. Since space-times are 4-surfaces in H=M4×CP2 (or rather, its generalization to a book like structure), gravitational Bohr rules can be formulated in a manner which is general coordinate invariant and Lorentz invariant.

  3. The value of the parameter v0 appearing in the gravitational Planck constant varies, and this leads to a weakened form of the Equivalence Principle stating that v0 is the same for a given connected anyonic 2-surface, which can have very complex topology. In the case of the solar system the inner planets would be connected to an anyonic surface assignable to the Sun, and the outer planets, with a different value of v0, to an anyonic surface assignable to the Sun and the inner planets as a whole. If one accepts the ruler-and-compass hypothesis for the allowed values of Planck constant, very powerful predictions follow.

This general conceptual framework is applied to build simple models in some concrete examples.

  1. Concerning Bohr orbitology in astrophysical length scales, the basic observation is that in the case of a straight cosmic string creating a gravitational potential of the form v1²/r, Bohr quantization does not pose any conditions on the radii of the circular orbits, so that a continuous mass distribution is possible. This situation is obviously exceptional. If one however accepts the TGD based vision that the very early cosmology was cosmic string dominated and that elementary particles were generated in the decay of cosmic strings, this situation might have prevailed at very early times. If so, the differentiation of a continuous density of ordinary matter to form the observed astrophysical structures would correspond to an approach to a stationary situation governed by Bohr rules for dark matter, and in the first approximation one could neglect the intermediate stages.

  2. This general picture is applied by considering some simple models for astrophysical systems involving planar structures. There are several universal predictions. The velocity spectrum is universal, and only the Bohr radii depend on the choice of mass distribution. The inclusion of the cosmic string implies that the system associated with the central mass is finite. Quite generally, the dark parts of astrophysical objects have a shell-like structure, like atoms, as do ring-like structures.

  3. The p-adic length scale hypothesis provides a way to obtain a realistic model for the central objects, meaning a structure consisting of shells coming as half octaves of the basic radius: this obviously relates to the Titius-Bode law. Also a simple model for planetary rings is obtained. Bohr orbits do not follow cosmic expansion, which is obtained only in the average sense if phase transitions reducing the value of the basic parameter v0 occur at preferred values of cosmic time. This explains why v0 has different values, and also the decomposition of the planetary system into outer and inner planets with different values of v0.
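As an orientation to the scales involved, here is a back-of-envelope Python sketch of gravitational Bohr orbits. It assumes the Nottale-type rule hbar_gr = GMm/v0, giving r_n = n²GM/v0² and the universal velocity spectrum v_n = v0/n mentioned above, together with Nottale's inner-planet value v0 ≈ 144.7 km/s; the post quotes no numbers here, so treat both as assumptions:

    # Gravitational Bohr orbits in the solar system, r_n = n^2 * GM/v0^2.
    GM_sun = 1.327e20          # m^3/s^2
    AU     = 1.496e11          # m
    v0     = 1.447e5           # m/s, Nottale's value for the inner planets (assumption)

    r1 = GM_sun / v0**2        # basic Bohr radius
    for n, name in [(3, "Mercury"), (4, "Venus"), (5, "Earth")]:
        print(f"n={n}: r = {n*n*r1/AU:.2f} AU, v = {v0/n/1e3:.1f} km/s  ({name})")
    # n=3 gives ~0.38 AU (Mercury at 0.39) and n=5 gives ~1.06 AU (Earth at 1.00);
    # Venus at n=4 comes out at ~0.68 AU against the actual 0.72 AU.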

The TGD Universe is quantum critical, and quantum criticality corresponds very naturally to what has been identified as the transition region to quantum chaos.

  1. The basic formulation of quantum TGD is consistent with what has been learned from the properties of quantum chaotic systems and quantum chaotic scattering. Wave functions are concentrated around Bohr orbits in the limit of quantum chaos, which is just what the dark matter picture assumes.

  2. The model for the emission and detection of dark gravitons allows one to conclude that the transition to chaos via the generation of sub-harmonics of the fundamental frequency, spoiling the original exact periodicity, corresponds to a sequence of phase transitions in which Planck constant transforms from an integer to a rational number whose denominator increases as chaos is approached. This gives a precise characterization for the phase transitions leading to quantum chaos in general (see the sketch after this list).

  3. In this framework the chaotic motion of an astrophysical object becomes the counterpart of quantum chaotic scattering, and the description in terms of classical chaos is predicted to fail. By the Equivalence Principle the value of the mass of the object does not matter at all, so that the motion of sufficiently light objects in the solar system might be understandable only as quantum chaotic scattering. The motion of gravitationally unbound comets, the rings of Saturn and Jupiter, and the collisions of galactic structures known to exhibit cart-wheel like structures define possible applications.
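A minimal sketch of the sub-harmonic bookkeeping of item 2, with period doubling appearing as the special case of denominators 2^k (illustration only; frequency units are arbitrary):

    from fractions import Fraction

    # When hbar/hbar0 = r/s, the fundamental frequency f0 acquires sub-harmonics
    # at f0/s, so the denominator s measures how far the system has moved toward
    # chaos; a period-doubling cascade corresponds to s = 2, 4, 8, ...
    f0 = 1.0
    for ratio in [Fraction(1), Fraction(3, 2), Fraction(3, 4), Fraction(5, 8), Fraction(7, 16)]:
        s = ratio.denominator
        print(f"hbar/hbar0 = {ratio}: lowest sub-harmonic f0/{s} = {f0/s:.4f}")
    # The denominators 1, 2, 4, 8, 16 give an ever longer period, i.e. increasing
    # complexity, matching the transition-to-chaos interpretation in the text.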

The description of gravitational radiation provides a stringent test for the idea about a dark matter hierarchy with arbitrarily large values of Planck constant. In accordance with quantum classical correspondence, one can take the consistency with classical formulas as a constraint allowing one to deduce information about how dark gravitons interact with ordinary matter. The standard facts about gravitational radiation are discussed first, and then the TGD based view about the situation is sketched.

For details and background see the updated chapter Quantum Astrophysics of "Physics in Many-Sheeted Space-time".

Thursday, December 11, 2008

New URL for my homepage

Note: The URL of my home page has changed to http://tgd.wippiespace.com/public_html/index.html. A few weeks after the discovery of the CDF anomaly, and after I had informed in physics blogs that TGD predicted the new physics explaining this anomaly as well as a long list of other anomalies already in 1990 (the article is published in International Journal of Theoretical Physics), Helsinki University informed me that the old URL would not be available after 10.12. With the help of some friendly souls the date was changed to 31.12. Otherwise TGD would have disappeared from the web totally, since for some reason they are unable to redirect visitors to the new URL after the page has been removed! Please update the link, since the old link will not work next year.

About dark matter and CDF anomaly

Tommaso Dorigo told in his posting about a talk by Nima Arkani-Hamed relating to dark matter and the CDF anomaly. Nima and others are beginning to realize what I realized three years ago. Dark matter is not just some neutral, extremely weakly interacting particle: there are a lot of dark particles and they can also be charged.

This is still a rather ugly idea, since it forces one to introduce an additional gauge group having the standard model gauge group as a subgroup. In the TGD framework the hierarchy of Planck constants, realized in terms of the book-like structure of the generalized 8-D imbedding space containing space-times as 4-surfaces, realizes this much more elegantly, since darkness is relative: all matter at pages different from our page is dark from our perspective, since local interaction vertices are not possible. The gauge group is just the universal standard model gauge group, having a purely number theoretical interpretation.

I glue my response to Tommaso Dorigo's blog also here.

Amusing: just this is what I have been talking about for years, but in much more elegant form and in much more detail, with applications ranging from the quantum Hall effect to astrophysics to cosmology to quantum biology.

Much of the honor goes to Laurent Nottale, who noticed that inner and outer planetary orbits can be seen as Bohr orbits with a gigantic value of Planck constant. The TGD explanation is in terms of the condensation of visible matter around 2-D surfaces defining anyonic systems consisting of dark matter with a very large Planck constant and therefore in a macroscopically quantum coherent phase. This would be the basic mechanism for the formation of planetary systems (see my blog).

This finding and various biological anomalies led to the generalization of the 8-D imbedding space of TGD to a book-like structure with pages labeled by different values of Planck constant (this is an oversimplification), containing space-times as 4-surfaces. Typically the light-like 3-surfaces (the basic objects of the TGD Universe) are at one particular page, but tunneling is possible by leakage through the back of the book.

We would live at one particular page, and the matter at the other pages would be dark relative to us. It can consist of just ordinary particles if stability conditions allow this (an anyonic phase is highly suggestive). There are no local interaction vertices between particles belonging to different pages. This explains darkness.

Particles can leak between different pages and it is even possible to photograph dark matter. This provides a possible explanation for various strange findings of Peter Gariaev about the interaction of DNA with visible, IR and UV light. There is a long list of other anomalies in living matter finding an explanation in this framework. In living matter this kind of interaction would take place routinely in the model of quantum biology based on dark matter. One fascinating implication is the phase transition changing the value of Planck constant and scaling up or down the quantum scales, typically proportional to hbar: this provides a fundamental control mechanism of cellular biology, where phase transitions changing the size scale would occur very frequently.

About the CDF anomaly and related anomalies: TGD predicts that both leptons and quarks have colored excitations. Color octet excitations of leptons plus the p-adic length scale hypothesis explain the CDF anomaly quantitatively: the model predicts the mass of the lightest excitation (the charged tau-pion with mass mtau), and the masses of the excitations proposed by CDF come as 2×mtau, 4×mtau, 8×mtau (neutral tau-pions), in accordance with the proposal of the CDF group. The model also provides a mechanism producing the muon jets and predicts a correct order of magnitude for the production cross section. Also very importantly, if colored excitations of leptons are present only at pages having a nonstandard Planck constant, there is no contribution to intermediate boson decay widths from decays to colored leptons.
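For orientation, the mass tower quoted above evaluates as follows with the PDG tau mass; the identification with the CDF states is of course the hypothesis of the post, not established physics:

    # Neutral tau-pion tower at 2, 4, 8 times the tau mass.
    m_tau = 1.777  # GeV, PDG value
    for k in (2, 4, 8):
        print(f"{k} x m_tau = {k*m_tau:.2f} GeV")   # 3.55, 7.11, 14.22 GeV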

During the years many other similar anomalies have been found. Electropions made themselves visible already in the seventies in heavy ion collisions. About this I published two papers in International Journal of Theoretical Physics (1990, 1992). The orthopositronium decay rate anomaly has an interpretation in terms of electropion production. The gamma rays with energies at the electron rest mass coming from galactic nuclei have an interpretation as decay products of dark electropions. I have also discussed the Karmen anomaly as the first evidence for colored excitations of the muon. A year ago evidence for the mu-pion emerged. For references see my earlier blog postings and also the material at my homepage.

This approach to dark matter differs from Nima's in three respects. It came three years earlier (as becomes clear by looking at old postings in my blog and the links to the books and articles at my home page; there are also publications in CASYS proceedings). It is much more elegant, since just the standard model gauge group is postulated (actually this gauge group follows as a prediction from the number theoretic vision about TGD). And it implies a profound generalization of quantum theory itself.

This theory is however a crackpot theory according to the crowd opinion. Dear Anonymous, before telling me not to fill this blog with spam, tell me exactly what makes TGD a crackpot theory. If you bother to go to my home page and read, you will find that it cannot be the content. What is it then? I am really interested. Perhaps also some others are.

Note: The URL of my home page has changed to http://tgd.wippiespace.com/public_html/index.html, since a few weeks after the discovery of the CDF anomaly Helsinki University informed me that the old URL would not be available after 10.12. With the help of some friendly souls the date was changed to 31.12. Otherwise TGD would have disappeared from the web totally, since for some reason they are unable to redirect visitors to the new URL after the page has been removed!

For details and background see the updated chapter Recent Status of Leptohadron Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy", and the article New evidence for colored leptons.

Tuesday, December 09, 2008

Quantum Hall effect and Hierarchy of Planck Constants

I have already earlier proposed an explanation of FQHE, anyons, and fractionization of quantum numbers in terms of a hierarchy of Planck constants realized as a generalization of the imbedding space H=M4×CP2 to a book-like structure. The book-like structure applies separately to CP2 and to causal diamonds (CD ⊂ M4) defined as intersections of future and past directed light-cones. The pages of the Big Book correspond to singular coverings and factor spaces of CD (CP2) glued along a 2-D subspace of CD (CP2) and are labeled by the values of the Planck constants assignable to CD and CP2 and appearing in Lie algebra commutation relations. The observed Planck constant hbar, whose square defines the scale of the M4 metric, corresponds to the ratio of these Planck constants. The key observation is that a fractional filling factor results if hbar is scaled up by a rational number.

In the new chapter Quantum Hall effect and Hierarchy of Planck Constants of "p-Adic Length Scale Hypothesis and Hierarchy of Planck Constants" I try to formulate this idea more precisely. The outcome is a rather detailed view about anyons on one hand, and about the Kähler structure of the generalized imbedding space on the other hand.

  1. A fundamental role is played by the assumption that the Kähler gauge potential of CP2 contains a gauge part with no physical implications in the context of gauge theories, but contributing to physics in the TGD framework, since U(1) gauge transformations are representations of symplectic transformations of CP2. Also in the case of CD it makes sense to speak about a Kähler gauge potential. The gauge part codes for the Planck constants of CD and CP2 and leads to the identification of anyons as states associated with partonic 2-surfaces surrounding the tip of CD, and to the fractionization of quantum numbers. Explicit formulas relating fractionized charges to the coefficients characterizing the gauge parts of the Kähler gauge potentials of CD and CP2 are proposed based on some empirical input.

  2. One important implication is that Poincare and Lorentz invariance are broken inside a given CD, although they remain exact symmetries at the level of the geometry of the world of classical worlds (WCW). The interpretation is as a breaking of symmetries forced by the selection of quantization axes.

  3. Anyons would basically correspond to matter at 2-dimensional "partonic" surfaces of macroscopic size surrounding the tip of the light-cone boundary of CD, and could be regarded as gigantic elementary particle states with very large quantum numbers, confined around the tip of CD by charge fractionization. Charge fractionization and anyons would be basic characteristics of dark matter (dark only in the relative sense). Hence it is not surprising that anyons would have applications going far beyond condensed matter physics. Anyonic dark matter concentrated at 2-dimensional surfaces would play a key role in the physics of stars and black holes, and also in the formation of the planetary system via the condensation of ordinary matter around dark matter. This assumption was the basic starting point leading to the discovery of the hierarchy of Planck constants. In living matter membrane-like structures would represent a key example of anyonic systems, as the model of DNA as a topological quantum computer indeed assumes.

  4. One of the basic questions has been whether TGD forces the hierarchy of Planck constants realized in terms of the generalized imbedding space or not. The condition that the choice of quantization axes has a geometric correlate at the imbedding space level, motivated by quantum classical correspondence, of course forces the hierarchy: this has been clear from the beginning. It is now clear that a first principle description of anyons requires the hierarchy in the TGD Universe. The hierarchy also sheds new light on the huge vacuum degeneracy of TGD and reduces it dramatically at pages for which CD corresponds to a non-trivial covering or factor space, which suggests that the mathematical existence of the theory necessitates the hierarchy of Planck constants. Also the proposed manifestation of the Equivalence Principle at the level of symplectic fusion algebras, as a duality between descriptions relying on the symplectic structures of CD and CP2, forces the hierarchy of Planck constants.

For details see the new chapter Quantum Hall effect and Hierarchy of Planck Constants of "p-Adic Length Scale Hypothesis and Hierarchy of Planck Constants".

Monday, December 08, 2008

About top quark mass again

In his latest blog posting Tommaso Dorigo summarizes the latest measurement of the top quark mass by CDF. The top quark is experimentally in a unique position, since toponium does not exist and the top quark mass is that of a free top. Therefore the top quark mass provides a stringent test for the TGD mass calculations based on p-adic thermodynamics.
  1. The prediction for the top quark mass depends on the second order contributions to the electron mass and the top mass, parameterized by numbers Ye and Yt varying in the interval [0,1). This contribution is of order one per cent. Once Ye is fixed, the CP2 size (and mass scale) is fixed completely from the electron mass.
  2. The prediction for top quark mass is 167.8 GeV for Yt=Ye=0 (vanishing second order corrections) and 169.1 GeV for Yt=1 and Ye=0 (maximal possible mass for top). The prediction is reduced for Ye>0 since CP2 mass scale is reduced.
  3. The experimental estimate for mt remained for a long time somewhat higher than the prediction of TGD. The previous experimental average value was m(t)=169.1 GeV, with the allowed range being [164.7, 175.5] GeV (see the blog posting of Tommaso Dorigo). The fine tuning Ye=0, Yt=1 giving 169.1 GeV is somewhat unnatural.
  4. The most recent value obtained by CDF, reported in detail by Tommaso Dorigo, is mt = 165.1 ± 3.3 ± 3.1 GeV. This is consistent with the minimal prediction obtained for Ye=Yt=0, and the prediction increases for Yt>0. Clearly, TGD passes the stringent test posed by the top quark mass.
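A quick consistency check of the numbers quoted above, with the statistical and systematic errors added in quadrature (a sketch; the quadrature combination is my convention, not taken from the post):

    import math

    # TGD prediction band [167.8, 169.1] GeV against CDF's 165.1 +/- 3.3 +/- 3.1 GeV.
    m_meas, stat, syst = 165.1, 3.3, 3.1
    sigma = math.hypot(stat, syst)                     # ~4.5 GeV combined error
    for label, m_pred in [("Ye=Yt=0", 167.8), ("Ye=0, Yt=1", 169.1)]:
        pull = (m_pred - m_meas) / sigma
        print(f"{label}: prediction {m_pred} GeV, pull = {pull:+.2f} sigma")
    # Both ends of the predicted band lie within one sigma of the measurement.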
For details see the chapters p-Adic Mass Calculations: Elementary Particle Masses (Table 3) and p-Adic Mass Calculations: Hadron Masses (Table 1) of "p-Adic Length Scale Hypothesis and Hierarchy of Planck Constants".

Thursday, November 27, 2008

Could lepto-hadrons correspond to dark matter?

It has been proposed that the particles produced in the CDF anomaly might be decay products of dark matter particles. In the TGD framework the leptohadron hypothesis explains successfully the basic quantitative and qualitative facts about the CDF anomaly and relates it to a bundle of other anomalies (as the previous postings should demonstrate). The question is whether there are compelling reasons for identifying leptohadrons as dark matter in the TGD sense.

Consider first the experimental side. The proposed identification of cosmic strings (in the TGD sense) as the ultimate source of both visible and dark matter does not exclude the possibility that a considerable portion of topologically condensed cosmic strings has decayed to some light particles. In particular, this could be the situation in galactic nuclei.

The idea that lepto-hadrons might have something to do with dark matter has popped up now and then during the last decade, but for some reason I have not taken it seriously. The situation changed towards the end of the year 2003. There exist now detailed maps of the dark matter in the center of the galaxy, and it has been found that the density of dark matter correlates strongly with the intensity of monochromatic photons with energy equal to the rest mass of the electron.

The only explanation for the radiation is that some yet unidentified particle of mass very nearly equal to 2me decays to an electron-positron pair. The electron and positron are almost at rest, and this implies a high rate for the annihilation to a pair of gamma rays. A natural identification for the particle in question would be as a lepto-pion (or rather, electro-pion). By their low mass lepto-pions, just like ordinary pions, would be produced in high abundance in lepto-hadronic strong reactions, and therefore the intensity of the monochromatic photons resulting from their decays would serve as a measure for the density of the lepto-hadronic matter. Also the presence of lepto-pionic condensates can be considered.

These findings force one to take seriously the identification of the dark matter as lepto-hadrons. This is however not the only possibility. The TGD based model for tetra-neutrons is based on the hypothesis that mesons made of scaled down versions of quarks corresponding to the Mersenne prime M127 (ordinary quarks correspond to k=107) and having masses around one MeV could correspond to the color electric flux tubes binding the neutrons to form a tetra-neutron. The same force would also be relevant for the understanding of alpha particles. Of course, also now the identification as dark matter in the TGD sense can be considered. One implication would be that strong interactions would become weak in higher orders and guarantee the convergence of the perturbative QCD type theory.

There are also good theoretical arguments for why lepto-hadrons and also exotic quarks should be dark matter in the sense of having a non-standard value of Planck constant.

  1. Since particles with different Planck constant correspond to different pages of the book like structure defining the generalization of the imbedding space, the decays of intermediate gauge bosons to colored excitations of leptons would not occur and would thus not contribute to their decay widths.

  2. In the case of electro-pions the large value of the coupling parameter Z1Z2αem > 1, combined with the hypothesis that a phase transition increasing Planck constant occurs as the perturbative QFT like description fails, would predict that electro-pions represent dark matter. Indeed, the power series expansion of the exp(iS) term might well fail to converge in this case, since S is proportional to Z1Z2αem. For τ-pion production one has Z1=-Z2=1, and in this case one can also consider the possibility that τ-pions are not dark in the sense of having a large Planck constant. Contrary to the original expectations, darkness does not affect the lowest order prediction for the production cross section of the lepto-pion.

For details and background see the updated chapter Recent Status of Leptohadron Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy", and the article New evidence for colored leptons.

P.S. The CDF anomaly was quite a nice birthday gift, and the successful explanation of the anomaly eventually led to a Christmas gift from Finnish colleagues. Thank you very much. The gift was a message telling me that I can no longer use the computer of Helsinki University for my homepage (this has been the only support that I have obtained from Helsinki University for years). Thank you again and Good Christmas!

Wednesday, November 26, 2008

Quantum representations of fundamental group of knot complement: a complete set of knot invariants?

I visited Helsinki and spent one day reading some popular stuff in the physics library. Among other things I found a popular article in New Scientist summarizing briefly the recent situation in the field of knot invariants. To get some idea about what knot invariants are see this, this, and this.

This field is of interest to me because braids, links, and knots are closely related and because so called number theoretical braids have become the fundamental structure of quantum TGD. One reason is that they define geometric correlates for the notion of finite measurement resolution. There are many other reasons. Number theoretical braids are assigned to the incoming and outgoing lines of generalized Feynman diagrams so that topological QFT becomes part of quantum TGD.

The reading of the article inspired what looks like an attractive idea. Maybe a trivial one: any knot specialist could immediately tell. The fundamental group of the complement of a knot, call it G, is what is known as a complete knot invariant in the sense that it fails to distinguish only between a knot and its mirror image. The question is whether one could define a braided version of the fundamental group and perhaps define new Jones polynomial like knot invariants as quantum traces of the unitary quantum group representation matrices for G. If so, one could have a complete set of quantum invariants.

At least at first glimpse this seems to be possible. One can assign to a braid a knot, and more generally links, by joining the upper and lower ends of the strands suitably. A knot is obtained by joining the upper end of the n:th strand to the lower end of the (n+1):th strand cyclically, assuming that the added connecting strands form a trivial braid having no braiding with the original braid. The other extreme is the N-link for an N-braid, obtained just by connecting the end points of the strands. I do not remember how much non-uniqueness this representation involves. In any case, this kind of representation always exists for a given knot.

The following argument describes how the quantum representation for G could be constructed.

  1. Let g be an element of G represented as a loop linked with the braid associated with the knot. Assume that the loop is un-knotted, so that no new information having nothing to do with the knot (link) is brought in. Cut this loop in the region outside the braid from which the knot (link) is obtained, and join the ends to the upper and lower ends of the braid in such a manner that no new linking or knotting results. The new braid strand extends the N-braid to an (N+1)-braid. One can also connect the ends of the new strand by a homotopically trivial un-knotted strand to get a closed loop. The outcome is a 2-link in the case of a knot and an (n+1)-link in the case of an n-link.

  2. This assigns to the knot (link) and the corresponding N-braid an (N+1)-braid, and one can define a braid/link/knot invariant as a quantum trace. Actually, any link invariant obtained as a braid invariant defines a representation for a given element of G. In this manner all elements of G (there is an infinite number of them) are represented as links and define new quantum invariants for the original knot. Since the quantum phase is a root of unity, many elements of G are expected to correspond to the same invariant.

  3. The interpretation in terms of topological quantum field theory (TQFT) suggests that one could add arbitrary numbers of loops representing elements of G, so that an infinite number of Jones invariants would result, telling how these many-particle states feel the presence of the braid defining the knot (or link). If all braids containing the braid defining the knot (link) as a sub-braid are allowed, a lot of irrelevant information is loaded into the system, so it seems natural to assume that the added loops are unknotted and mutually unlinked. The triviality of the braid associated with the homotopy loops might imply that only a single copy is needed for each element of G: the loops would effectively represent topological fermions. If also topological bosons are needed, a super-symmetric arithmetic TQFT is what comes to mind first.

  4. A large, perhaps even infinite, number of braids would define braid/link/knot invariants for a given braid. This approach would conform with category theoretical thinking and would be in the spirit of the physicist's manner of getting information about a physical system by perturbing it: in this case by adding loops representing elements of G.

Sunday, November 23, 2008

Estimate for electro-pion production cross section in heavy ion collisions

I have described in earlier postings the model explaining the CDF anomaly as evidence for colored excitations of leptons (one of the basic predictions of TGD distinguishing it from the standard model) forming bound states identified as leptopions: the τ-pion in this case. Fifteen years ago I ended up with this kind of model as an explanation of the anomalous electron-positron pair production in heavy ion collisions near the Coulomb wall: electron-positron pairs would have resulted from electro-pions. The difficulty of this model was that the total production cross section was roughly an order of magnitude smaller than the reported one using the maximal value of the impact parameter, which looked reasonable at that time.

The work with the CDF anomaly led to a generalization and modification of the original leptopion model, and it is important to check that the modified model can also reproduce the cross section for the production of electro-pions. The maximal value of the impact parameter allowing this turns out to be essentially 1 Angstrom, corresponding to a photon energy of 8.1 keV: this X-ray energy has the same scale as the rest energy difference between exotic and ordinary variants of nuclei predicted by TGD and discussed in the previous posting, which suggests a connection. Note that the atomic radius would in the TGD framework represent a fundamental length scale of also nuclear physics, realized as the size scale of the "field bodies" associated with nuclei and implied by the topological quantization of classical fields in the TGD Universe. The following piece of text summarizes the result of the calculation. For details the interested reader can consult the links at the end.

The numerical estimate for the lepto-pion production cross section (giving an estimate for the cross section for the production of electron-positron pairs) is carried out for thorium with (Z=90, A=232). The value of the collision velocity of the incoming nucleus in the rest frame of the second nucleus is taken as β = 0.1. From the width Δv/v = 0.2 of the velocity distribution in the same frame, the upper bound γ ≤ 1+δ, δ ≈ 2×10^-3, for the Lorentz boost factor of the electro-pion in the cm system is deduced. The cutoff is necessary because energy conservation is not coded into the structure of the model.

As expected, the singular contribution from the cone vcm cos(θ) = β, vcm = 2v/(1+v²), gives the dominating contribution to the cross section. This contribution is proportional to the value of bmax² at the limit φ = 0. The cutoff radius is taken to be bmax = 150 × γcm hbar/m(πe) = 1.04 Angstrom. The numerical estimate for the cross section using the parameter values listed comes out as σ = 5.5 mb, to be compared with the rough experimental estimate of about 5 mb. The interpretation would be that the space-time sheet associated with the colliding nuclei during the collision has this transversal size in the cm system. At this space-time sheet the electric and magnetic fields of the nuclei interfere.
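The kinematics of the singular cone quoted above is easy to evaluate; a minimal sketch with β = v = 0.1 as above (natural units, c = 1):

    import math

    # Singular cone: v_cm * cos(theta) = beta with v_cm = 2v/(1+v^2).
    v = 0.1
    v_cm = 2*v / (1 + v**2)                            # ~0.198
    theta = math.degrees(math.acos(v / v_cm))
    print(f"v_cm = {v_cm:.3f}, cone opening angle = {theta:.1f} deg")   # ~59.7 deg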

From this one can cautiously conclude that the lepto-pion model is consistent with both electro-pion production and τ-pion production in proton-antiproton collisions. One can of course criticize the large value of the impact parameter, and a good justification for 1 Angstrom should be found. One could also worry about the singular character of the amplitude, making the integration of the total cross section a somewhat risky business using the rather meager numerical facilities available. The rigorous method to calculate the contribution near the singularity relies on stepwise halving of the increment Δθ as one approaches the singularity. The calculation gives a twenty times smaller result than that with a constant value of Δθ. Hence it seems that one can trust the result of the calculation at least at the order of magnitude level.

The figure (see the links at the end) gives the differential production cross section for γ1 = 1.0319. Obviously the differential cross section is strongly concentrated at the cone due to the singularity of the production amplitude for fixed impact parameter b.

The important conclusion is that the same model can reproduce the value of the production cross section both for electro-pions, explaining the old electron-positron anomaly of heavy ion collisions, and for τ-pions, explaining the CDF anomaly of proton-antiproton collisions at cm energy sqrt(s) = 1.96 TeV, with essentially the same and rather reasonable assumptions (do not however forget the large maximal value of the impact parameter!).

In the case of electro-pions one must notice that, depending on the situation, the final states are gamma pairs for an electro-pion with mass very nearly equal to twice the electron mass. In the case of the neutral τ-pion the strong decay to three p-adically scaled down versions of the τ-pion proceeds faster than, or at least at a rate comparable to, the decay to a gamma pair. For the higher mass variants of the electro-pion for which there is evidence (for instance, one with mass 1.6 MeV) the final states are dominated by electron-positron pairs. This is true if the primary decay products are electro-baryons of the form (say) eex = e8 ν8 νc,8, resulting via electro-strong decays instead of electrons and having a slightly larger mass than the electron. Otherwise the decay to a gamma pair would dominate also the decays of the higher mass states. A small magnetic moment type coupling between e, eex, and the electro-gluon field, made possible by the color octet character of colored leptons, induces the mixing of e and eex so that eex transforms to e by the emission of a photon. The anomalous magnetic moment of the electron poses restrictions on the color magnetic coupling.

For details and background see the updated (and still being updated) chapter Recent Status of Leptohadron Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Tuesday, November 18, 2008

GSI anomaly

Jester wrote a fantastic posting titled Hitchhiker's guide to ghosts and spooks in particle physics, summarizing quite a bundle of anomalies of particle physics and also one of nuclear physics, known as the GSI anomaly. The abstract of the article Observation of Non-Exponential Orbital Electron Capture Decays of Hydrogen-Like 140Pr and 142Pm Ions describing the anomaly is here.

We report on time-modulated two-body weak decays observed in the orbital electron capture of hydrogen-like 140Pr59+ and 142Pm60+ ions coasting in an ion storage ring. Using non-destructive single ion, time-resolved Schottky mass spectrometry we found that the expected exponential decay is modulated in time with a modulation period of about 7 seconds for both systems. Tentatively this observation is attributed to the coherent superposition of finite mass eigenstates of the electron neutrinos from the weak decay into a two-body final state.

This brings to mind the nuclear decay rate anomalies which I discussed earlier in the posting Tritium beta decay anomaly and variations in the rates of radioactive processes. These variations in decay rates are on the scale of a year, and the decay rate variation correlates with the distance from the Sun. Also solar flares seem to induce decay rate variations.

The TGD based explanation relies on the nuclear string model, in which nuclei are connected by color flux tubes having exotic variants of quark and antiquark at their ends (TGD predicts a fractal hierarchy of QCD like physics). These flux tubes can also be charged: the possible charges are ±1, 0. This means a rich spectrum of exotic states and a lot of new low energy nuclear physics. The energy scale corresponds to the Coulomb interaction energy αem·m, where m is the mass scale of the exotic quark. This means an energy scale of 10 keV for a MeV mass scale. The well-known but poorly understood X-ray bursts from the Sun during solar flares in the wavelength range 1-8 Angstrom correspond to energies in the range 1.6-12.4 keV (three octaves in good approximation) and might relate to this new nuclear physics; they might in turn excite nuclei from the ground state to these excited states, and the small admixture of exotic nuclei with slightly different nuclear decay rates could cause the effective variation of the decay rate.
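An order-of-magnitude check of the two energy scales quoted above (standard constants; the MeV mass scale for the exotic quark is the post's assumption):

    # Coulombic energy scale alpha_em * m for an exotic quark mass scale of 1 MeV,
    # and the photon energies of the 1-8 Angstrom solar flare band.
    alpha = 1/137.036
    m_exotic_keV = 1000.0
    print(f"alpha_em * m ~ {alpha*m_exotic_keV:.1f} keV")         # ~7.3 keV, i.e. the ~10 keV scale

    hc_keV_A = 12.398                                             # h*c in keV*Angstrom
    for lam in (1.0, 8.0):
        print(f"lambda = {lam} A -> E = {hc_keV_A/lam:.2f} keV")  # 12.40 and 1.55 keV,
    # i.e. the 1.6-12.4 keV band quoted above.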

The question is whether there could be a flux of X rays in the time scale of 7 seconds causing the rate fluctuation by the same mechanism also in the GSI experiment. For instance, could this flux relate to synchrotron radiation? Maybe not. In any case, the prediction is what might be called X ray nuclear physics, and artificial X ray irradiation of nuclei would be an easy manner to kill or prove the hypothesis.

One can also imagine another possibility.

  1. The first guess is that the transitions between ordinary and exotic states of the ion are induced by the emission of an exotic W boson between a nucleon and an exotic quark, so that the charge of the color bond is changed. In the standard model the objection would be that classical W fields do not make sense in the length scale in question. The basic prediction deriving from the induced field concept (classical ew gauge fields correspond to the projection of the CP2 spinor curvature to the space-time surface) is however the existence of classical long range gauge fields, both ew and color. A classical W field can induce charge entanglement in all length scales, and one of the control mechanisms of TGD inspired quantum biology relies on the remote control of charge densities in this manner.
  2. In the approximation that one has a two-state system, this interaction can be modelled by using as interaction Hamiltonian a hermitian non-diagonal matrix V, which can be written as Vσx, where σx is the Pauli sigma matrix. If this process occurs coherently in time scales longer than hbar/V, an oscillation with frequency ω = V/hbar results. Since weak interactions are in question, a 7 second modulation period for the rate might make sense (a minimal numerical check follows below).
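A minimal check of item 2 (a sketch only, in hbar = 1 units): for H = Vσx the amplitude oscillates at ω = V/hbar, so the occupation probability, and hence the decay rate, is modulated with half the amplitude period. This is the sense in which a 14 second oscillation gives the 7 second rate modulation.

    import numpy as np

    # Two-state system with H = V*sigma_x; the exact propagator is
    # exp(-i*V*t*sigma_x) = cos(Vt)*I - i*sin(Vt)*sigma_x,
    # so the probability of remaining in the initial state is cos^2(Vt).
    V = 2*np.pi / 14.0                       # chosen so that 2*pi/omega = 14 s
    for t in np.linspace(0.0, 14.0, 5):      # 0, 3.5, 7, 10.5, 14 s
        p_stay = np.cos(V*t)**2
        print(f"t = {t:4.1f} s: P(stay) = {p_stay:.3f}")
    # P(stay) has period 7 s while the amplitude has period 14 s.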

The hypothesis can be tested quantitatively.

  1. The weak interaction Coulomb potential energy is of the form

    V(r)/hbar = αW exp(-mW r)/r ,

    where r is the distance between the proton center of mass and the end of the color flux tube, and is therefore of the order of the proton Compton length rp, so that one can write

    r = x × rp ,

    where x should be of order unity but below it.

  2. The frequency ω = 2π/τ = V/hbar must correspond to 14 seconds, the oscillation period, which is twice the modulation period of the reaction rate. Taking the W boson Compton time tW as the time unit, this condition can be written as

    αw exp(-y)/y = 2π × tW/τ ,

    y = x rp/rW = x mW/mp ∼ 80 × x ,   αw = αem/sin²θW .

  3. This gives the condition

    exp(-y)/y = 2π × (tp/τ) × sin²θW/(80 × α) ,

    allowing anyone possessing MATLAB and the skills given by a first year course in calculus to solve y, since the right hand side is known. Feeding in the proton Compton length 1.321×10^-15 m and sin²θW = 0.23, one obtains that the distance between the flux tube end and the proton cm is x = 0.6446 times the proton Compton length, which compares favorably with the guess that x is of order unity but below it (see the numerical sketch after this list). One must however notice that the oscillation period is exponentially sensitive to the value of x. For instance, if the charge entanglement were between nucleons, x > 1 would hold true and the time scale would be enormous. Hence the simple model requires new physics and predicts correctly the period of the oscillation under very reasonable assumptions.

  4. One could criticize this by saying that the masses of the two states differ by an amount of order 10 keV or so. This does not however affect the argument, since the mass corresponds to the diagonal non-interaction part of the Hamiltonian, contributing only rapidly oscillating phases, whereas the interaction potential induces an oscillating mixing, as is easy to see in the interaction picture.

  5. If one believes in the hierarchy of Planck constants and p-adically scaled variants of weak interaction physics, charge entanglement would be possible in much longer length scales, and its time scale raises the question whether qubits could be realized using protons and neutrons for quantum computation purposes. I have also proposed that charge entanglement could serve as a mechanism of biocontrol, allowing one to induce charge density gradients from a distance, in turn acting as switches inducing biological functions.
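Here is a minimal Python sketch of the numerics in items 2 and 3 above. It keeps the 2π from ω = 2π/τ explicit in the condition; with it, a simple bisection reproduces the x = 0.6446 quoted in item 3 (constants are standard values, with mW/mp rounded to 80 as in the text):

    import math

    alpha       = 1/137.036        # fine structure constant
    sin2_thetaW = 0.23             # sin^2(theta_W)
    lambda_p    = 1.321e-15        # proton Compton length [m]
    c           = 2.998e8          # speed of light [m/s]
    tau         = 14.0             # oscillation period [s], twice the 7 s modulation
    mW_over_mp  = 80.0             # rounded m_W/m_p ratio used in the text

    t_p = lambda_p / c             # proton Compton time [s]

    # Condition exp(-y)/y = 2*pi*(t_p/tau)*sin^2(theta_W)/(80*alpha), with y = 80*x.
    rhs = 2*math.pi * (t_p/tau) * sin2_thetaW / (mW_over_mp * alpha)

    lo, hi = 1.0, 200.0            # exp(-y)/y decreases monotonically on this range
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if math.exp(-mid)/mid > rhs:
            lo = mid
        else:
            hi = mid
    y = 0.5*(lo + hi)
    print(f"y = {y:.2f}, x = {y/mW_over_mp:.4f}")   # x = 0.6446, as in the text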

So: it happened again! Again I have given a good reason for my learned critics to argue that TGD explains everything, so that I am a crackpot, and so on. Well... after a first feeling of deep shame I dare to defend myself. In the case of the standard model explanatory power has not been regarded as an argument against the theory, but my case is of course different, since I do not have any academic position and my fate is to live in Finland (still no seminar, colloquium, popular journal article, or public comment about the CDF anomaly by any academic theoretical physicist in Finland!). And if my name were Feynman, this little argument would be an instant classic.

In fact it occurred to me that my critics (usually those brave "Anonymous" of blogs, last time in Resonaances, whose contributions as a rule contain zero bits of information) could go through the argument and publicly demonstrate what they believe to be the fatal error. Maybe we could even make a bet. If the critic does not find an error, he pays 1000 euros. If he finds the error, I pay it. In this case Anonymous should however reveal his name. Some third party, revealing also his name, would serve as a judge!

For background see the chapters TGD and Nuclear Physics (periodic nuclear rate variations on the scale of a year) and Nuclear String Hypothesis (GSI anomaly) of "p-Adic length scale Hypothesis and Dark Matter Hierarchy".

Tunnelling nanotubes: Life's secret network

There is an interesting article in New Scientist titled Tunnelling nanotubes: Life's secret network. These tubes are 50-200 nm thick: this scale range relates by a factor 100 to the scale range 0.5-2 nm associated with gap junctions. Nanotubes can connect cells to each other over distances of several cell diameters and make possible new kinds of communication between cells.

Magnetic flux tubes containing dark particles (ordinary particles with a large value of Planck constant) have gradually evolved into a basic structure of the TGD based quantum model of living matter. For instance, in the model of DNA as a topological quantum computer they define braidings coding for topological quantum computation programs. The matter inside the cell would form a complex web in which various biomolecules are connected by the flux tubes. The basic functions of the cell would rely on two mechanisms: the contraction and expansion of the flux tubes induced by a phase transition changing the value of Planck constant, and the reconnection of magnetic flux tubes changing the topology of this web.

Could these flux tubes serve as templates for the formation of nanotubes? The natural idea is that they do so for axons, DNA and amino acid sequences, and the other linear structures filling the living cell and populating living matter. The interested reader might find the articles at my homepage interesting.

Monday, November 17, 2008

Numerical estimate for the production cross section of tau-pion

I have spent more than one week developing a detailed model for leptopion production so that it applies to the high energy collisions of protons and antiprotons in CDF. The earlier model was constructed for the production of the electro-pion in heavy ion collisions in the vicinity of the Coulomb wall. After having identified several unclear points in the original formulation based on Born approximation inspired heuristics, I found a conceptually much more precise formulation of the model starting from heuristics inspired by, but not equivalent with, the eikonal approximation. The improved model predicts a perturbation theory in powers of the Coulomb potential of the colliding charges, and the previous prediction was, apart from a numerical factor, the lowest order prediction of the new model. Contrary to expectations, it turned out that the lowest order prediction does not depend on hbar, in accordance with the vision that lowest order cross sections correspond to classical theory and do not depend on hbar.

It also turned out to be possible to calculate the production amplitude using a very reasonable approximation, so that the numerics could be restricted to the integral over the phase space of the τ-pion. Errors are therefore under analytic control. I cannot of course exclude numerical factors of order unity which are not quite correct, since the calculation is really tedious and my calculational skills are not the best ones.

A brief article summarizing the details of the calculation of the τ-pion production cross section can be found at my homepage. Here is the abstract.

The article summarizes the quantum model for τ-pion production. Various alternatives generalizing the earlier model for electro-pion production are discussed, and a general formula for the differential cross section is deduced. Three alternatives inspired by the eikonal approximation generalize the earlier model, inspired by the Born approximation, to a perturbation series in the Coulombic interaction potential of the colliding charges. The requirement of manifest relativistic invariance for the formula of the differential cross section leaves only two options, call them I and II. The production cross section for the τ-pion is estimated and found to be consistent with the reported cross section of about 100 nb for option I under natural assumptions about the physical cutoff parameters (the maximal energy of the τ-pion in the center of mass system and the estimate for the maximal value of the impact parameter in the collision, which however turns out to be unimportant unless its value is very large). For option II the production cross section is too small by several orders of magnitude. Since the model involves only fundamental coupling constants, the result can be regarded as a further success of the τ-pion model of the CDF anomaly. Analytic expressions for the production amplitude are deduced in the Appendix as a Fourier transform of the inner product of the non-orthogonal magnetic and electric fields of the colliding charges in various kinematical situations. This allows one to reduce the numerical integrations to an integral over the phase space of the lepto-pion and gives tight analytic control over the numerics.

Other aspects of the model are discussed in the chapter Recent Status of Leptohadron Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". The chapter is still being updated, since I must recalculate the cross section for the production of electro-pions in heavy ion collisions to test the updated model, and also add plots of the differential cross sections, which in the case of the τ-pion are very strongly concentrated in the forward direction and in the production plane. The singularity concentrated on a conical surface is however absent in the relativistic situation. One might hope that the correct estimate for the order of magnitude (about 100 nb) of the reported cross section, under reasonable assumptions about the physical cutoff parameters (the maximum energy of the leptopion in the center of mass system and an impact parameter cutoff, which is however not significant unless very large impact parameters are allowed), might put bells ringing also in the ears of colleagues.

I might be overoptimistic, since even after correct predictions/explanations for all the basic findings, meaning

  • correct prediction for the lifetime of the lightest new particle in terms of fundamental constants,
  • explanation of the three states proposed by CDF group together with correct prediction for their masses differing by powers of two (p-adic length scale hypothesis),
  • an explanation for the emergence of jets in terms of the very special kinematics for the decays of leptopions at a given p-adic length scale to the lower p-adic scale, implying that the decay products are almost at rest,
  • a unified explanation for the anomalies suggesting the existence of also electropions and muo-pions,
  • explanation for the orthopositronium decay rate anomaly and the Karmen anomaly,
  • possible explanation for the anomaly in anomalous magnetic moment of muon, ...
not a single colleague has reported hearing the sound of ringing bells. I have already earlier half-seriously considered the possibility that my poor colleagues might be totally deaf. If this is the case, it of course changes everything and I must apologize for my impatience during these fifteen years.

The text below is not meant to describe the model but serves only as a sample possibly stimulating interest.

The estimate of the cross section involves some delicacies. The model has purely physical cutoffs which must be formulated in a precise manner.

  1. Since energy conservation is not coded into the model, some assumption about the maximal τ-pion energy in the cm system, expressed as a fraction ε of the proton's center of mass energy, is necessary. The maximal fraction corresponds to the condition m(πτ) ≤ m(πτ)γ1 ≤ ε mp γcm in the cm system, giving m(πτ)/(mp γcm) ≤ ε ≤ 1. γcm can be deduced from the center of mass energy of the proton as γcm = √s/(2mp), √s = 1.96 TeV. This gives 1.6×10^-2 < ε < 1 in a reasonable approximation. It is convenient to parameterize ε as

    ε = (1+δ) × m(πτ)/(mp γcm) .

    The coordinate system in which the calculations are carried out is taken to be the rest system of (say) the antiproton, so that one must perform a Lorentz boost to obtain upper and lower limits for the velocity of the τ-pion in this system. In this system the range of γ1 is fixed by the maximal cm velocity fixed by ε, and the upper/lower limit of γ1 corresponds to a direction parallel/opposite to the velocity of the proton.

  2. By Lorentz invariance the value of the impact parameter cutoff bmax should be expressible in terms of the τ-pion Compton length and the center of mass energy of the colliding proton, and the assumption is that bmax = γcm×hbar/m(πτ), where m(πτ) = 8m(τ) is assumed. The production cross section does not depend much on the precise choice of the impact parameter cutoff bmax unless it is unphysically large, in which case a bmax^2 proportionality is predicted (see the numerical sketch after this list).
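As a concrete illustration of these cutoffs, here is a minimal Python sketch (my own, not the MATLAB code used for the actual calculation; the mass values and the conversion constant are standard):

    m_p, m_tau = 0.938, 1.777        # proton and tau masses in GeV
    m_pi_tau = 8 * m_tau             # tau-pion mass, assumed to be 8*m(tau)
    sqrt_s = 1960.0                  # Tevatron center of mass energy in GeV
    hbar_c = 0.1973                  # GeV*fm, converts a Compton length to fm

    gamma_cm = sqrt_s / (2 * m_p)            # gamma_cm = sqrt(s)/(2*m_p)
    eps_min = m_pi_tau / (m_p * gamma_cm)    # lower bound of the fraction eps
    b_max = gamma_cm * hbar_c / m_pi_tau     # impact parameter cutoff in fm

    print(gamma_cm, eps_min, b_max)          # about 1045, 1.5e-2, 14.5 fm

The value eps_min ≈ 1.5×10^-2 agrees with the 1.6×10^-2 quoted above within the stated approximation.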

The numerical estimate for the production cross section involves some delicacies.

  1. The power series expansion of the integral of CUT1 using the partial fraction representation does not converge, since the roots c± are very large in the entire integration region. Instead the approximation A1 ≅ iBcos(ψ)/D, simplifying the calculations considerably, can be used. Also the value of b1L is rather small, and one can use the stationary phase approximation for CUT2. It turns out that the contribution of CUT2 is negligible as compared to that of CUT1.

  2. Since the situation is singular for θ = 0 and for φ = 0 and φ = π/2 (by symmetry it is enough to calculate the cross section only in this kinematical region), the cutoffs

    θ ∈ [ε1, (1-ε1)]×π ,
    φ ∈ [ε1, (1-ε1)]×π/2 ,
    ε1 = 10^-3

    are applied. The result of the calculation is not very sensitive to the value of the cutoff.

  3. Since the available numerical environment was rather primitive (MATLAB on a personal computer), the requirement of a reasonable calculation time restricted the number of intervals in the discretization of the three kinematical variables γ1, θ, φ to be below Nmax = 80. The result of the calculation did not depend appreciably on the number of intervals above N = 40 for the γ1 integral, and for the θ and φ integrals even N = 10 gave a good estimate. (A sketch of this discretization appears after this list.)
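To make the discretization concrete, here is a minimal Python sketch of this kind of brute force Riemann sum; the integrand is a pure placeholder, since the real CUT1 requires the full machinery of the article, and the γ1 range is likewise only illustrative:

    import numpy as np

    eps1, N = 1e-3, 40
    theta = np.linspace(eps1 * np.pi, (1 - eps1) * np.pi, N)
    phi = np.linspace(eps1 * np.pi / 2, (1 - eps1) * np.pi / 2, N)
    gamma1 = np.linspace(1.0, 18.0, N)   # illustrative range; fixed by eps in the model

    def integrand(g, th, ph):
        # placeholder for the differential cross section; NOT the real CUT1
        return np.sin(th) / (g**2 * (1.0 + np.cos(2 * ph)**2))

    G, T, P = np.meshgrid(gamma1, theta, phi, indexing="ij")
    cell = (gamma1[1] - gamma1[0]) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    sigma = np.sum(integrand(G, T, P)) * cell   # crude three-dimensional Riemann sum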

The calculations were carried out for the exp(iS) option, since in good approximation the estimate for the exp(iS)-1 model is obtained by a simple scaling. The exp(iS) model produces the correct order of magnitude for the cross section, whereas the exp(iS)-1 variant predicts a cross section which is smaller by several orders of magnitude, by a downwards αem^2 scaling. When I asked Tommaso Dorigo for an estimate of the production cross section in the discussion inspired by his first blog posting, he mentioned that the authors refer to a production cross section of about 100 nb, which looks to me suspiciously large (too large by three orders of magnitude) when compared with the production rate of muon pairs from b-bbar. δ = 1.5, which corresponds to a τ-pion energy of 36 GeV, gives the estimate σ = 351 nb. The energy is suspiciously high.

In fact, in a recent posting of Tommaso Dorigo a value of order .1 nb for the production cross section was mentioned. Electro-pions in heavy ion collisions are produced almost at rest and one has Δv/v ∼ .2, giving δ = ΔE/m(π) ∼ 2×10^-3. If one believes in fractal scaling, this should be at least the order of magnitude also in the case of the τ-pion. This would give the estimate σ ∼ 1 nb. For δ = ΔE/m(π) ∼ 10^-3 a cross section σ = .1 nb would result.

The plot for the differential production cross section is here (the scale of the earlier plot was erratic due to the above mentioned error).

For details and background see the updated (and still under updating) chapter Recent Status of Leptohadron Hypothesis.

Monday, November 03, 2008

Comparison of CDF model for CDF anomaly with TGD based model

On Monday morning a paper by the CDF collaboration appeared in the arXiv, and it is interesting to compare their model with the TGD based model (or rather, with the last one of the three models that one can imagine, corresponding to the production of an off mass shell k=107 τ-pion, the production of a k=107 τ-pion coherent state heated into a QCD plasma like state producing colored lepton jets, and the production of a k=103 on mass shell τ-pion with mass scaled up by a factor 8 from that of the k=107 τ-pion).

The paper proposes that three new particles are involved. The masses of the particles - christened h3, h2, and h1 - are assumed to be 3.6 GeV, 7.3 GeV, and 15 GeV. h1 is assumed to be pair produced and to decay to an h2 pair, each decaying to an h3 pair, each decaying to a τ pair.

The h3 mass is assumed to be 3.6 GeV and its lifetime 20×10^-12 seconds. The mass is the same as the TGD based prediction for the neutral τ-pion mass, whose lifetime however equals 1.12×10^-17 seconds (the γγ decay dominates). The correct prediction for the lifetime provides strong support for the identification of the long-lived state as a charged τ-pion with mass near the τ mass, so that the decay to μ and its antineutrino dominates. Hence the CDF model is not consistent with the leptohadronic model.

p-Adic length scale hypothesis predicts that allowed mass scales come as powers of √2, and these masses indeed come in good approximation as powers of 2. Several p-adic scales appear in low energy hadron physics for quarks, and this replaces the Gell-Mann formula for low-lying hadron masses. Therefore one can ask whether the proposed masses correspond to the neutral tau-pion with p = Mk = 2^k-1, k = 107, and its p-adically scaled up variants with p ∼ 2^k, k = 105 and k = 103 (also primes). The prediction for the masses would be 3.6 GeV, 7.2 GeV, 14.4 GeV.
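The power-of-2 pattern is easy to check, since the p-adic mass scale is proportional to 2^(-k/2), so that decreasing k by 2 doubles the mass. A two-line Python check (my own illustration):

    m_107 = 3.6  # GeV, the predicted neutral tau-pion mass for k = 107
    for k in (107, 105, 103):
        print(k, m_107 * 2 ** ((107 - k) / 2))  # 3.6, 7.2, 14.4 GeV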

This coincidence cannot of course be taken too seriously, since the powers of two in the CDF model have a rather mundane origin: they follow from the assumed production mechanism producing 8 τ-leptons from h1. One can however spend some time looking at whether it could be realized somehow by allowing p-adically scaled up variants of the τ-pion.

  1. The proposed model for the production of muon jets is based on the production of a k=103 neutral τ-pion (or several of them), having 8 times larger mass than the k=107 τ-pion, in the strong E·B background of the colliding proton and antiproton, and decaying via strong interactions to k=105 and k=107 τ-pions.
  2. The first step would be

    π0τ(103)→ π0τ(105)+π+τ(105)+π-τ(105).

    This step is not kinematically possible if the masses are obtained by exact scaling and if m(π0τ) < m(π+/-τ) holds true, as for the ordinary pion. p-Adic mass formulas do not however predict exact scaling. In case the reaction is not kinematically possible, it must be replaced with a reaction in which the second charged k=105 pion is virtual and decays weakly.

  3. Second step would consist of a scaled variant of the first step

    π0τ(105)→ π0τ(107)+π+τ(107)+π-τ(107),

    and the weak decays of the π+/-τ(105) with mass 2m(τ) to lepton pairs.

  4. The last step would involve the decays of both the charged and the neutral πτ(107). The signature of the mechanism would be anomalous gamma pairs with invariant masses 2^k×m(τ), k = 1, 2, 3, coming from the decays of the neutral τ-pions.

  5. The dimensionless four-pion coupling λ determines the decay rates for the neutral τ-pions appearing in the cascade. The rates are proportional to phase space volumes, which are rather small for kinematic reasons.

The total cross section for producing a single leptopion can be estimated using the quantum model for leptopion production. The production amplitude is essentially the Coulomb scattering amplitude for a given value of the impact parameter b of the colliding proton and antiproton, multiplied by the amplitude U(b,p) for producing an on mass shell k=103 lepto-pion with given four-momentum in the fields E and B, given essentially by the Fourier transform of E·B. The replacement of the actual motion with free motion should be a good approximation.

UV and IR cutoffs for the impact parameter appear in the model and are identifiable as appropriate p-adic length scales. The UV cutoff could correspond to the Compton size of the nucleon (k=107) and the IR cutoff to the size of the space-time sheets representing the topologically quantized electromagnetic fields of the colliding nucleons (perhaps k=113, corresponding to the nuclear p-adic length scale and the size of the color magnetic body of constituent quarks, or k=127 for the magnetic body of current quarks with mass scale of order MeV). If one has hbar/hbar0 = 2^7, one could also guess that the IR cutoff corresponds to the size of the dark em space-time sheet equal to 2^7 L(113) = L(127) (or 2^7 L(127) = L(141)), which corresponds to electron's p-adic length scale. These are of course rough guesses.
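The length scale guess can be checked from the relation L(k) ∝ 2^(k/2): multiplication by 2^n shifts k by 2n, so a factor 2^7 takes k = 113 to k = 127 and k = 127 to k = 141, as stated above. In Python (my own illustration):

    # L(k) proportional to 2^(k/2): scaling by 2^n means k -> k + 2n
    for k in (113, 127):
        print(k, "->", k + 2 * 7)   # 113 -> 127 and 127 -> 141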

Quantitatively, the jet-likeness of muons means that the additional muons are contained in a cone θ < 36.8 degrees around the initial muon direction. If the decay of π0τ(k) can occur to on mass shell π0τ(k+2) states, k = 103, 105, it is possible to understand the jets as a consequence of the decay kinematics forcing the pions resulting as decay products to be almost at rest.

  1. Suppose that the decays to three pions can take place as on mass shell decays, so that the pions are very nearly at rest. The distribution of the decay products (μ and its antineutrino) in the decays of π±τ(105) is spherically symmetric in the rest frame, and the energy and momentum of the muon are given by

    [E, p] = [ m(τ) + m^2(μ)/(4m(τ)) , m(τ) - m^2(μ)/(4m(τ)) ] .

    The boost factor γ = 1/√(1-v^2) to the rest system of the muon is γ = x + (4x)^-1 ∼ 18, x = m(τ)/m(μ).

  2. The momentum distribution for the μ+ coming from π+τ is spherically symmetric in the rest system of π+τ. In the rest system of μ- the momentum distribution is non-vanishing only when the angle θ between the momentum of μ+ and the direction of the velocity of μ- is below a maximum value given by tan(θmax) = 1, corresponding to a situation in which the momentum of μ+ is orthogonal to the momentum of μ- (the maximum transverse momentum equals m(μ)vγ and the longitudinal momentum becomes m(μ)vγ in the boost). This angle corresponds to 45 degrees and is not too far from 36.8 degrees.

  3. At the next step the energy and momentum of the muons resulting from the decays of π±τ(107) are

    [E, p] = [ m(τ)/2 + m^2(μ)/(2m(τ)) , m(τ)/2 - m^2(μ)/(2m(τ)) ] ,

    and the boost factor is γ1 = (x + x^-1)/2 ∼ 9, x = m(τ)/m(μ). θmax satisfies the condition tan(θmax) = γ1v1/(γv) ≅ 1/2, giving θmax ≅ 26.6 degrees. (A numerical check of this kinematics is given after this list.)
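The kinematics in items 1 and 3 is just the standard two-body decay formula E = (M^2 + m^2)/(2M), p = (M^2 - m^2)/(2M) for the muon in the rest frame of a parent of mass M. A minimal Python check (my own illustration; standard mass values assumed):

    import math

    m_tau, m_mu = 1.777, 0.1057   # GeV

    def two_body(M, m):
        # muon energy and momentum in the rest frame of a parent of mass M
        return (M**2 + m**2) / (2 * M), (M**2 - m**2) / (2 * M)

    x = m_tau / m_mu
    E1, p1 = two_body(2 * m_tau, m_mu)     # item 1: parent mass 2m(tau)
    E2, p2 = two_body(m_tau, m_mu)         # item 3: parent mass m(tau)
    print(E1 / m_mu, x + 1 / (4 * x))      # boost factor gamma, about 17-18
    print(E2 / m_mu, (x + 1 / x) / 2)      # boost factor gamma1, about 8-9
    print(math.degrees(math.atan(1.0)),
          math.degrees(math.atan(0.5)))    # 45.0 and 26.6 degrees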

If on mass shell decays are not possible, the situation changes, since either of the charged pions is off mass shell. In order to obtain a similar result, the virtual decay should occur dominantly via states near the on mass shell pion. Since the four-pion coupling is just a constant, this option does not seem to be realized.

Additional signatures of the model come from the very peculiar kinematics of lepto-pion production. The produced τ-pions are restricted to the scattering plane of the colliding charges and are produced very nearly at rest in the cm frame. In the rest frame of the target the produced τ-pions are concentrated on a cone with opening angle cos(θ) = β/vcm, where vcm = 2v/(1+v^2) and v is the velocity of the τ-pion in the rest system of the proton.
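A minimal sketch of the cone condition, assuming only the formulas quoted above (the sample values of β and v are hypothetical, chosen purely for illustration):

    import math

    def opening_angle(beta, v):
        # cos(theta) = beta/v_cm with v_cm = 2v/(1+v^2); units with c = 1
        v_cm = 2.0 * v / (1.0 + v * v)
        return math.degrees(math.acos(beta / v_cm))  # requires beta <= v_cm

    print(opening_angle(0.5, 0.9))  # about 60 degrees for these sample values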

For details and background see the chapter Recent Status of Leptohadron Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Selective amnesia

People are beginning to adopt the TGD based view about dark matter. Sean Carroll speculates in last week's New Scientist about dark photons and the importance of dark matter in chemistry and even biology.

This is of course what I have been busily doing since the beginning of 2005. I have developed a detailed model of quantum biology based on dark matter phases realized as a hierarchy with levels labeled by values of Planck constant (see the series of articles here and various books about TGD inspired quantum biology). To my great astonishment, Sean Carroll forgets to mention my work in his speculations, although he certainly knows it since I have visited his blog many times, as well as other blogs that Sean himself has visited. Maybe Sean should become a little bit worried: this kind of memory defect is not good in this profession.

I am afraid that this kind of selective amnesia might infect also other theoreticians visiting blogs, and I must ask myself whether something is wrong with me. Could it be that it is my blog and the blogs that I have visited which spread the disease? Since this might be the case, I must give a general warning. I have visited many blogs during the weekend and talked about the CDF anomaly. There have been a lot of visitors, since this might be the discovery of the century in particle physics. Therefore it could happen that quite many particle theoreticians will suffer similar memory defects in the near future as they produce models for the CDF anomaly and gradually end up with "Eureka! Colored leptons!". I am really very sorry if this turns out to be the case.

Below is a summary of Sean Carroll's speculations.

Mysterious dark matter could be shining with its own private kind of light. This "dark radiation" would be invisible to us, but could still have visible effects. Astronomers usually assume that dark matter particles barely interact with each other. Lotty Ackerman and colleagues at Caltech in Pasadena decided to test this assumption by supposing there is a force between dark matter particles that behaves in the same way as the electromagnetic force. That would imply a new form of radiation that is only accessible to dark matter.

Their calculations showed that it could have as much as 1 % of the strength of the electromagnetic force and not conflict with any observations. If the force is close to this strength, its effects might be detectable, as it should affect how dark matter clumps together.

"It might even help with some niggling problems we have now," says team member Sean Carroll. For example, it might explain why there are fewer dwarf galaxies than models predict. Carroll even speculates that more complex dark matter might exist, forming dark matter atoms with their own chemistry – and maybe biology.

Very familiar to me. Except that they have postponed the discovery of the hierarchy of Planck constants, which makes it unnecessary to introduce new gauge groups, which are really something extremely ugly.

More about CDF

The weekend has been rather busy and emotional, to say the least. Just a week or two ago I compared the situation in the world economy to that in theoretical particle physics and found many similarities (see Serious scientist syndrome and Two Richistans). I also speculated about the necessity of a New Deal, also in theoretical particle physics. Maybe this New Deal will be realized in particle physics much sooner than I ever dared to dream. I believe that the CDF anomaly will not respect the sociology of science, and everything I know about the anomaly (at my non-specialist level, forcing me to concentrate on bare essentials) is consistent with what TGD predicts. In particular, the correct prediction for the lifetime of the new particle without bringing in any new parameters is something which is really difficult to dismiss. Also the predictions for the masses of τ-pions are testable, since also τ-baryons should be there. Similar predictions follow for the electronic and muonic variants of leptohadron physics.

The response in blogs created mixed feelings in me. Many participants continued to behave as if I did not exist at all and responded only to the hand wavings of big names. There were also the usual arrogant comments telling that my theory does not predict anything! Still! After these 31 years and 15 books! Someone even censored out my posting! There were also people realizing how far reaching the implications of these successful predictions are. I am especially grateful to Lubos and Tommaso for adding a link to my blog.

I glue below two responses to blogs. I have taken the freedom to edit and add something, since the originals can be found in the blogs. The first one is to Tommaso Dorigo in his blog and tries to explain how the basic predictions come out. The second one is to Jester in the Resonaance blog and tries to make clear that the notion "predict" means much more than doing Monte Carlo calculations. Here I have attached some new arguments to the end of the response.

My hope is that I could make clear that theoretical physics is very conceptual stuff at the fundamental level, and that the conceptualization is far from being a funny word salad, since these words with precise meaning provide a higher level language with which to communicate complex ideas. This is something which quite too many theoretical physicists, restricting their activities to the mere application of methods, have forgotten. I am not of course referring to Tommaso and Jester here. I must however say that in my own country the situation in this respect is rather gloomy, to put it mildly.

Response to Tommaso Dorigo

Dear Tommaso,

the model without further calculations explains the following observations.

  1. Jets result by the same mechanism as in QCD. A coherent state of τ-pions is generated in the non-orthogonal E and B of the colliding protons, and is then heated to a QCD plasma like state decaying to colored lepton jets producing leptohadrons, in turn decaying to ordinary leptons. I have considered in detail the decay mechanism leading from colored excitations to the ground state here.

  2. Muons dominate if the masses of τ and τc (ν and νc) are near to each other, because there is very little phase space for τ final states. The near equality of the masses is motivated by the fact that this is true for ec and μc. In the case of the electron the small electron mass reduces the phase space. In the case of the neutrino the equality is not checked: it would predict that the charged electropion has mass nearly equal to that of the electron. I have very probably discussed this point in the above link: this would of course relate to the anomalous production of electron-positron pairs discovered already in the seventies but for some reason forgotten since then.

  3. The numbers of anomalous muons with opposite charges are the same, since a neutral taupion initiates the jets.

  4. One can also calculate the lifetimes of the neutral and charged τ-pions just by putting the appropriate masses into the standard formula for the lifetimes of ordinary pions. The charged pion decays weakly, and everything is calculable using the PCAC hypothesis (pion field proportional to the axial current); the neutral pion decays dominantly via the two-photon channel, and anomaly considerations fix the Lagrangian to the product of the τ-pion field and E·B with a coefficient involving only the axial coupling f(πτ) = x·m(πτ), the fine structure constant, and m(πτ) (see the decay width formula after this list). The prediction is correct if x is scaled down from that for the ordinary pion by a factor of .4. I would not like to sound like a teacher, but this result should really wake up everyone in the audience.

  5. The prediction for the neutral leptopion mass is 3.6 GeV, the same as in the paper of the CDF collaboration [13], which had appeared in the arXiv on Monday morning, as I learned from the blog of Tommaso. The masses suggested in the article were 3.6 GeV, 7.3 GeV, and 15 GeV. p-Adic length scale hypothesis predicts that allowed mass scales come as powers of √2, and these masses come in good approximation as powers of 2. Several p-adic scales appear in low energy hadron physics for quarks, and this replaces the Gell-Mann formula for low-lying hadron masses. Therefore one can ask whether these masses correspond to the neutral tau-pion with p = Mk = 2^k-1, k = 107, and its scaled up variants with p ∼ 2^k, k = 105 and k = 103 (also primes). The prediction for the masses would be 3.6 GeV, 7.2 GeV, 14.4 GeV.
  6. Also the total rate for virtual leptopion production is calculable using the product of the leptopion field and the instanton action density E·B. This requires a model for the collision. The simplest thing to do is to start with a free collision parameterized by impact parameter and velocity, and to integrate the differential cross section over impact parameter values up to an infrared cutoff, which must be posed in order to have a finite result. This was done in the case of lepto-electron production using a classical model for the orbits of the ions and the resulting E and B. In the case of the electropion the atomic size was the first guess for the cutoff. Now the τ-pion Compton length is the first guess. One can estimate from this the rate for the production of leptopions and, since this is the rate determining step, the total rate for the production of anomalous muons via the jet mechanism.
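Concerning item 4: for reference, the standard anomaly formula for the two-photon width of a neutral pion reads Γ(π → γγ) = α^2 m^3(π)/(64 π^3 f^2(π)); whether this is exactly the normalization used in the TGD calculation is my assumption, but it shows how f enters. Substituting f(πτ) = x·m(πτ) gives Γ = α^2 m(πτ)/(64 π^3 x^2), so the lifetime hbar/Γ is proportional to x^2/m(πτ), and scaling x down by a factor .4 shortens the lifetime by a factor .4^2 = .16.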

Response to Jester

Dear Jester,

a further comment about what it means to predict.

TGD predicts colored leptons from extremely general premises. TGD was born as a proposal for how to solve the problem of General Relativity due to the fact that energy is not well defined. Space-time as a 4-D surface predicts standard model symmetries for H=M4×CP2, with color not being a spin like quantum number but corresponding to CP2 partial waves. Leptohadron physics is just one of the many predicted deviations from the standard model leading to a plethora of testable predictions. TGD actually predicts an infinite number of colored excitations of quarks and leptons, so that we would be beginning to see only the tip of an iceberg.

A second example is the nuclear string model. Nucleons bind to strings, with the nucleons connected by color bonds having scaled variants of quark and antiquark at their ends. A fractal hierarchy of QCD like physics is also a prediction of the induced gauge field concept, leading to a geometrization of the known gauge fields and predicting extremely strong correlations between classical ew and color gauge fields, expressible in terms of four CP2 coordinates. In particular, an em field is accompanied by a classical color gauge field for non-vacuum extremals. Therefore em flux tubes must have a quark and an antiquark at their ends serving as the sources of the field.

Besides neutral bonds there can be charged color bonds. This predicts a large number of new nuclear states. Also for these there is empirical evidence: some of it came just some time ago, and some of it has been collected during the last 30 years by the Russian physicist Shnoll. Nuclear decay rates correlate with the distance from the Sun, and the explanation is that X rays from the Sun induce transitions from the ground states of nuclei to excited states containing charged color bonds. This is testable in the laboratory by irradiating nuclei with X rays.

TGD also predicts the spectrum of elementary particle masses with one per cent accuracy, besides the mass scales, from extremely general assumptions: super-conformal invariance (its generalization actually) and p-adic thermodynamics. Masses are exponentially sensitive to the integer k in the prime p ∼ 2^k, so that there is no hope of getting the masses correctly by fitting. The detailed calculations are in 4 chapters of the book containing the leptopion chapter. There is a large number of testable predictions: new exotic states, and the existence of scaled up variants of quarks and neutrinos, also of electrons, with mass scale scaled up by a power of the square root of 2.

One problem is that colleagues typically confuse prediction with mere numerics, which can be done by any gifted graduate student. Before you can concentrate on numerics or have general recipes for calculating Feynman graphs, a huge amount of conceptualization is needed, starting from ontological questions. What does the theory predict to exist? This is the first question, and it must be answered before detailed Monte Carlo calculations of particle distributions. Someone must do it, and it has been my fate.

In TGD this has led to a totally new ontology that I call zero energy ontology, which is consistent with the crossing symmetry of QFTs but provides a radically new view about quantum states. A new view about time has been necessary to understand the formulation of the M-matrix, which generalizes the notion of S-matrix and fuses thermodynamics with quantum theory, being essentially a real square root of the density matrix, analogous to the modulus of a wave function, multiplied by a unitary "phase" representing the S-matrix. Connes tensor product fixes the M-matrix highly uniquely, so that the task is to calculate. Also the notion of Feynman graph generalizes. The notion of measurement resolution becomes a key notion, and one could even say that its mathematical representation fixes the M-matrix. And so on.

This conceptualization period is accompanied (I first wrote "is followed", which is of course not true) by a period during which you gradually quantify your predictions, starting from simple yes/no predictions which could kill your theory. You also busily explain anomalies: if nothing else, this prevents them from being swept completely under the carpet by the mainstream. Unfortunately the electropion anomaly, discovered already in the seventies, suffered this fate.

I hasten to admit that TGD is far from precise Feynman rules that any grad student could apply. For instance, during the last weeks I have been working on an application of category theory in order to formulate precisely the generalized Feynman rules of TGD in terms of the N-point functions of a conformal QFT. Or rather, those of a symplecto-conformal QFT. Symplectic QFT is analogous to conformal QFT, and I managed to solve its N-point functions from associativity conditions in terms of the operad notion, giving an infinite hierarchy of discrete symplectic field algebras. Discreteness is a correlate for finite measurement resolution and is realized in terms of number theoretic braids, which also emerge from totally different premises. Very beautiful new mathematics making it possible to formulate the notion of finite measurement resolution emerges.

From this it is still a long way to practical calculations, since one must deduce the long length scale limit of the theory in order to use continuum mathematics, and specify precisely which of the very many candidates for the conformal field theories applies in a specific situation. Also the very notion of conformal field theory generalizes for light-like 3-surfaces.

The second problem is that the extreme arrogance of particle physics colleagues has made the communication of TGD impossible. It is of course the censors who suffer most from the censorship in the long run. As a consequence I am doomed to be 31 years ahead of the community, which is still trying to make sense of string models, refusing to realize that an extremely beautiful generalization obtained by replacing strings with light-like 3-surfaces exists and predicts among other things the space-time dimension correctly and deduces the standard model symmetries from number theory.