Wednesday, November 30, 2011

Duality between hadronic and partonic descriptions of hadron physics

I found a talk by Matthew Schwartz titled The Emergence of Jets at the Large Hadron Collider, belonging to the Monday Colloquium Series at Harvard. The talk told about the history of the notion of a jet and how it is applied at LHC. The notion of a jet is something between perturbative and non-perturbative QCD and is therefore not a precisely defined concept as one approaches the small mass limit for jets.

The talk inspired some questions relating to QCD and hadron physics in general. I am of course not competent to say anything interesting about jet algorithms. The hadronization process is however not well understood in the framework of QCD and uses phenomenological fragmentation functions. The description of jet formation in turn uses phenomenological quark distribution functions. TGD leads to rather detailed fresh ideas about what quarks, gluons, and hadrons are, and stringy and QFT like descriptions emerge as excellent candidates for low and high energy descriptions of hadrons. Low energies are the weakness of QCD and one can well ask whether QCD fails as a physical theory in the infrared. Could TGD do better in this respect?

Only a minor fraction of the rest energy of the proton is in the form of quarks and gluons. In TGD framework the remaining degrees of freedom would naturally correspond to color magnetic flux tubes carrying color magnetic energy, and in proton-proton collisions the color magnetic energy of the p-p system in the cm system is gigantic. The natural question is therefore what happens to the "color magnetic bodies" of the colliding protons and of their quarks in a proton-proton collision.

In the sequel I will develop a simple argument leading to a very concrete duality between two descriptions of hadron reactions, manifest at the level of generalized Feynman graphs. The first description is in terms of meson exchanges and applies naturally at long length scales. The second one is in terms of perturbative QCD applying at short length scales. The basic ingredients of the argument are the weak form of electric-magnetic duality and bosonic emergence leading to a rather concrete view about physical particles, generalized Feynman diagrams reducing to generalized braid diagrams in the framework of zero energy ontology (ZEO), and reconnection of Kähler magnetic flux tubes having an interpretation in terms of string diagrams and providing the mechanism of hadronization. Basically the prediction follows from the dual interpretations of generalized Feynman diagrams either as stringy diagrams (low energies) or as Feynman diagrams (high energies).

It must be emphasized that this duality is something completely new and a simple prediction of the notion of generalized Feynman diagram. The result is exact: no limits (such as large N limit) are needed.

Weak form of electric magnetic duality and bosonic emergence

The weak form of electric magnetic duality allows the identification of quark wormhole throats as Kähler magnetic monopoles with non-vanishing magnetic charges Qm. The closely related bosonic emergence effectively eliminates the fundamental BFF vertices from the theory.

  1. An elementary fermion corresponds to a single wormhole throat with Kähler magnetic charge. In topological condensation a wormhole throat is formed and the working hypothesis is that this second throat is Kähler magnetically neutral. The throats created in topological condensation (formation of a topological sum) are always homologically trivial since a purely local process is in question.

  2. In the absence of topological condensation physical leptons correspond to string like objects with opposite Kähler magnetic charges at their ends. A topologically condensed lepton carries also a neutralizing weak isospin carried by a neutrino pair at the throats of the neutralizing wormhole contact. The wormhole contact itself carries no Kähler magnetic flux. The neutralization scale for Qm and weak isospin could be either the weak length scale for both fermions and bosons, or alternatively the Compton length quite generally - the latter even for fermions, since it is enough that the weak isospin of weak bosons is neutralized in the weak scale. The alert reader has of course asked whether the weak isospin of the fermion must be neutralized at all if this is the case. Whether this really happens is not relevant for the following arguments.

  3. Whether a given quark is accompanied by a wormhole contact neutralizing its weak isospin is not quite clear: this need not be the case since the Compton length of weak bosons defines the range of weak interactions. Therefore one can consider the possibility that physical quarks have non-vanishing Qm and that only hadrons have Qm=0. Now the Kähler magnetic flux tubes would connect valence quarks. In the case of the proton one would have three of them. A proposal made about 31 years ago is that color hyper charge is proportional to Kähler magnetic charge. If so, then color confinement would require Kähler magnetic confinement.

  4. By bosonic emergence bosons correspond to wormhole contacts or pairs of them. Now the wormhole throats have opposite values of Qm but the contact itself carries a vanishing Kähler magnetic flux. Fermion and antifermion are accompanied by a neutralizing Kähler magnetic charge at the ends of their flux tubes, and a neutrino pair at the throats of the second wormhole contact neutralizes the weak charge of the boson.

The dual interpretations of generalized Feynman diagrams in terms of hadronic and partonic reaction vertices

Generalized Feynman diagrams are defined in the framework of zero energy ontology (ZEO). Bosonic emergence eliminates the fundamental BFF vertices and reduces generalized Feynman diagrams to generalized braid diagrams. This is essential for the dual interpretation of the qqg vertex as a meson emission vertex for a hadron. The key idea is the following.

  1. A topologically condensed hadron - say a proton - corresponds to a double sheeted structure: let us label the sheets by letters A and B. Suppose that the sheet A contains the wormhole throats of quarks carrying magnetic charges. These wormhole throats are connected by magnetically neutral wormhole contacts to sheet B, for which the wormhole throats carry vanishing magnetic charges.

  2. What happens when a hadronic quark emits a gluon is easiest to understand by considering first the annihilation of a topologically non-condensed charged lepton and antilepton to a photon - that is the L+Lbar → γ vertex. Lepton and antilepton are accompanied by flux tubes at different space-time sheets A and B and each has a single wormhole throat: one can speak of a pair of deformations of topologically condensed CP2 type vacuum extremals as a correlate for a single wormhole throat. At both ends of the flux tubes deformations of CP2 type extremals fuse via topological sum to form a pair of photon wormhole contacts carrying no Kähler magnetic flux. The condition that the resulting structure has the size of a weak gauge boson suggests that the weak scale defines also the size of leptons and quarks as magnetic flux tubes. Quarks can however carry a net Kähler magnetic charge (the ends of the flux tube need not have opposite values of Kähler magnetic charge).

  3. With some mental gymnastics the annihilation vertex L+Lbar → γ can be deformed to describe the photon emission vertex L → L+γ. The negative energy antilepton arrives from the future and the positive energy lepton from the past and they fuse to a virtual photon in the manner discussed.

  4. The qqg vertex requires further mental gymnastics but locally nothing is changed since the protonic quark emitting the gluon is connected by a color magnetic flux tube to another protonic quark in the case of an incoming proton (and possibly to a neutrino carrying wormhole contact with size given by the weak length scale). What happens is therefore essentially the same as above. The protonic quark has become part of the gluon at space-time sheet A but still has a flux tube connection to the proton. Besides this there appears a wormhole throat at space-time sheet B carrying quark quantum numbers: in the usual picture this quark would correspond to the quark after gluon emission, with the antiquark at the same space-time sheet associated with the gluon. Therefore one has a proton with one quark moving away inside the gluon at sheet A and a meson like entity at sheet B. The dual interpretation as the emission of a meson by the proton makes sense. This vertex does not correspond to the stringy vertex AB+CD → AD+BC in which strings touch at some point of the interior and recombine, but is something totally new and made possible by many-sheeted space-time. For a gauge boson the magnetically charged throats are at different space-time sheets; for a meson they are at the same space-time sheet and are connected by a Kähler magnetic flux tube.

  5. Obviously the interpretation as an emission of a meson like entity makes sense for any hadron like entity for which a quark or antiquark emits a gluon. This is what the duality of hadronic and partonic descriptions would mean. Note that bosonic emergence is an absolutely essential element of this duality. In QCD it is not possible to understand this duality at the level of Feynman diagrams.

Reconnection of color magnetic flux tubes

The reconnection of color magnetic flux tubes is the key mechanism of hadronization and is a slow process as compared to gluon emission by quarks.

  1. Reconnection vertices have interpretation in terms of stringy vertices AB+CD→ AD+BC for which interiors of strings serving as representatives of flux tubes touch. The first guess is that reconnection is responsible for the low energy dynamics of hadronic collisions.

  2. The reconnection process takes place for both the hadronic color magnetic flux tubes and those of quarks and gluons. For ordinary hadron physics hadrons are characterized by the Mersenne prime M_107. For M_89 hadron physics the reconnection process takes place in much shorter scales for hadronic flux tubes.

  3. Each quark is characterized by a p-adic length scale: this scale characterizes the length scale of the magnetic body of the quark. Therefore reconnection at the level of the magnetic bodies of quarks takes place in several time and length scales. For the top quark the size scale of the magnetic body is very small, as is also the reconnection time scale. In the case of u and d quarks with masses in the MeV range the size scale of the magnetic body would be of the order of the electron Compton length. This scale assigned to the quark is longer than the size scale of hadrons characterized by M_89. Classically this does not make sense, but in quantum theory the Uncertainty Principle predicts it from the smallness of the light quark masses as compared to the hadron mass (a short numerical sketch is given after this list). The large size of the color magnetic body of the quark could explain the strange finding about the charge radius of the proton.

  4. For instance, the formation of quark gluon plasma would involve reconnection process for the magnetic bodies of colliding protons or nuclei in short time scale due to the Lorentz contraction of nuclei in the direction of the collision axis. Quark-gluon plasma would correspond to a situation in which the magnetic fluxes are distributed in such a manner that the system cannot be decomposed to hadrons anymore but acts like a single coherent unit. Therefore quark-gluon plasma in TGD sense does not correspond to the thermal quark-gluon plasma in the naive QCD sense in which there are no long range correlations.

    Long range correlations and quantum coherence suggest that the viscosity to entropy ratio is low, as indeed observed. The earlier arguments suggest that the preferred extremals of Kähler action have an interpretation as perfect fluid flows. This means that a given space-time sheet allows a global time coordinate assignable to the flow lines of the flow and defined by a conserved isometry current defining a Beltrami flow. As a matter of fact, all conserved currents are predicted to define Beltrami flows. Classically a perfect fluid flow implies that viscosity, which is basically due to a mixing causing the loss of the Beltrami property, vanishes. Viscosity would be only due to the finite size of space-time sheets and the radiative corrections describable in terms of the fractal hierarchy of CDs within CDs. In quantum field theory radiative corrections indeed give rise to the absorptive parts of the scattering amplitudes.
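
As a quick sanity check of the Uncertainty Principle argument in point 3, the following minimal sketch (my own, in Python, with assumed current-quark masses) compares reduced Compton lengths with the roughly 1 fm hadronic size scale.

# Sketch: compare reduced Compton lengths hbar/(m c) with the ~1 fm hadron size.
hbar_c_MeV_fm = 197.327  # hbar*c in MeV*fm

masses_MeV = {
    "u quark (assumed 2.2 MeV)": 2.2,
    "d quark (assumed 4.7 MeV)": 4.7,
    "electron": 0.511,
    "proton": 938.3,
}

for name, m in masses_MeV.items():
    compton_fm = hbar_c_MeV_fm / m   # reduced Compton length in fm
    print(f"{name:26s} ~ {compton_fm:7.1f} fm")

# The light-quark Compton lengths come out at ~40-90 fm and the electron's at
# ~390 fm, far above the ~1 fm hadronic size scale, while the proton's is ~0.2 fm.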

Hadron-parton duality and TGD as a "square root" of the statistical QCD description

The main result is that generalized Feynman diagrams have dual interpretations as QCD like diagrams describing partonic reactions and stringy diagrams describing hadronic reactions so that these matrix elements can be taken between either hadronic states or partonic states. This duality is something completely new and distinguishes between QCD and TGD.

I have already earlier proposed this kind of duality, but based on group theoretical arguments inspired by what I call M8-M4× CP2 duality and by two hypotheses of old-fashioned hadron physics stating that vector currents are conserved and axial currents are partially conserved (CVC and PCAC). This duality suggests that the group SO(4)= SU(2)_L× SU(2)_R assignable to weak isospin degrees of freedom takes the role of the color group at long length scales and can be identified as the isometries of E4 ⊂ M8, just like SU(3) corresponds to the isometries of CP2.

Initial and final states correspond to positive and negative energy parts of zero energy states in ZEO. These can be regarded as either partonic or hadronic many particle states. The inner products between the positive energy parts of the partonic and hadronic state basis define the "square roots" of the parton distribution functions for hadrons. The inner products between the negative energy parts of the hadronic and partonic state basis define the "square roots" of the fragmentation functions to hadrons for partons. The M-matrix defining the time-like entanglement coefficients is representable as a product of a hermitian square root of a density matrix and the S-matrix, and it is not time reversal invariant; this partially justifies the use of a statistical description of partons in the QCD framework using distribution functions and fragmentation functions. Decoherence in the sum over quark intermediate states for the hadronic scattering amplitudes is essential for obtaining the standard description.
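
The "square root of the statistical description" can be illustrated with a finite-dimensional toy model (my own sketch, not the formalism of the text): if a unitary matrix U collects the inner products between a partonic and a hadronic state basis, U plays the role of the square root of the distribution functions and the statistical distributions correspond to |U_ij|^2, which are automatically normalized by unitarity.

# Toy model: a random unitary as the "square root" of distribution functions.
import numpy as np

rng = np.random.default_rng(0)
n = 4  # toy dimension for both state bases

# U_ij ~ <parton_i|hadron_j> from QR decomposition of a complex Gaussian matrix.
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(z)

distributions = np.abs(U) ** 2   # what survives after decoherence
print(np.round(distributions, 3))
print("column sums:", np.round(distributions.sum(axis=0), 6))  # each equals 1 by unitarity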

For details and background see the article Algebraic braids, sub-manifold braid theory, and generalized Feynman diagrams or the new chapter Generalized Feynman Diagrams as Generalized Braids of "Towards M-matrix".

Tuesday, November 29, 2011

Eternal inflation and TGD

The process leading to this posting was boosted by the irritation caused by the newest multiverse hype in New Scientist, which was commented on by Peter Woit. Also Lubos told about Brian Greene's The Fabric of the Cosmos IV, which is similar multiverse hype with Guth, Linde, Vilenkin, and Susskind as stars, but also a single voice of criticism was allowed (David Gross, who could not hide his disgust).

The message of the New Scientist article was that multiverse is now a generally accepted paradigm, that it follows unavoidably from modern physics, and that it has three strong pillars: dark energy, eternal inflation, and the string model landscape. Even LHC has demonstrated its correctness by finding no evidence for the standard SUSY. That was the prediction of superstring models, but then someone realized that there had been someone predicting that the multiverse predicts no supersymmetry! As a matter of fact, every single prediction inspired by superstring models went wrong, there are good reasons to expect that Higgs will not be found, and standard SUSY has been excluded. Besides this there is an increasing amount of evidence for new physics not predicted by standard TOEs. And one should not forget neutrino super-luminality. All this shakes the foundations of both superstring theory, where GUT is believed to be the low energy limit of the theory with Higgs fields playing a key role, and of inflationary scenarios, in which Higgs like scalar fields carrying the vacuum energy give rise to radiation and therefore also ordinary matter.

The three pillars of the multiverse become catastrophic weaknesses if the Higgs paradigm fails. Vacuum energy cannot correspond to Higgs, the scalar fields driving inflation are not there, and one cannot say anything about possible low energy limits of super string theories since even the basic language describing them is lost!

Maybe I am becoming an old angry man but I must confess that this kind of hype is simply too much for me. Why do colleagues who know what the real situation is not react to this bullshit? Are they so lazy that they allow physics to degenerate into show business without bothering to do anything? Or does a culture of Omerta prevail, as some participant in Peter Woit's blog suggested? Even if a man has seen a crime take place, he is not allowed to reveal it. If he does, he suffers vendetta. I have experienced the academic equivalent of vendetta: not for this reason but for having the courage to think with my own brain. Maybe laziness is a more plausible explanation.

But I do not have any right to condemn my colleagues if I am myself too lazy to do anything. My moral duty is to tell that this hype is nothing but unashamed lying. On the other hand, digging through a heap of shit is really depressing. Is there any hope of learning anything? I refuse to spend time in the superstring landscape, but should I take the trouble of comparing eternal inflation with TGD?

In this mixed mood I decided to refresh my views about how TGD based cosmology differs from the inflationary scenario. The pleasant surprise was that this comparison combined with new results about TGD inspired cosmology provided fresh insights into the relationship between TGD and the standard approach and showed how TGD cures the lethal diseases of eternal inflation. Very roughly: the replacement of the energy of the scalar field with magnetic energy replaces eternal inflation with a fractal quantum critical cosmology, allowing one to see more sharply the TGD counterparts of inflation and accelerating expansion as special cases of criticality. Hence it was not wasted time after all.

Wikipedia gives a nice overall summary of inflationary cosmology and I recommend it to the non-specialist physics reader as a manner to refresh his or her memory.

1. Brief summary of the inflationary scenario

The inflationary scenario relies very heavily on rather mechanical unification recipes based on GUTs. The standard model gauge group is extended to a larger group. This symmetry group breaks down to the standard model gauge group at the GUT scale, which happens to correspond to the CP2 size scale. Leptons and quarks are put into the same multiplet of the gauge group so that an enormous breaking of symmetries occurs, as is clear from the ratio of the top quark mass scale and the neutrino mass scale. These unifiers however want a simple model allowing one to calculate, so that neither aesthetics nor physics matters. The instability of the proton is one particular prediction. No decays of the proton in the predicted manner have been observed, but this has not troubled the gurus. As a matter of fact, even the Particle Data Tables tell that the proton is not stable! The lobbies of GUTs are masters of their profession!

One of the key features of the GUT approach is the prediction of Higgs like fields. They allow one to realize the symmetry breaking and to describe particle massivation. Higgs like scalar fields are also the key ingredient of the inflationary scenario and inflation goes down the drain if Higgs is not found at LHC. It is looking more and more probable that this is indeed the case. Inflation has an endless variety of variants and each suffers from some drawback. In this kind of situation one would expect that it is better to give up, but it has become a habit to say that inflation is more than a theory, it is a paradigm. When superstring models turned out to be a physical failure, string theorists did the same thing and claimed that superstring models are more like a calculus rather than a mere physical theory.

1.1 The problems that inflation was proposed to solve

The basic problems that inflation was proposed to solve are the magnetic monopole problem, the flatness problem, and the horizon problem. The cosmological principle is a formulation of the fact that cosmic microwave radiation is found to be isotropic and homogeneous to an excellent approximation. There are fluctuations in the CMB believed to be Gaussian, and the prediction for the spectrum of these fluctuations is an important prediction of inflationary scenarios.

  1. Consider first the horizon problem. The physical state inside the horizon is not causally correlated with that outside it. If the observer today receives signals from a region of the past which is much larger than the horizon, he should find that the universe is not isotropic and homogeneous. In particular, the temperature of the microwave radiation should fluctuate wildly. This is not the case and one should explain this.

    The basic idea is that the potential energy density of the scalar field implies exponential expansion in the sense that the "radius" of the Universe increases at an exponential rate with respect to cosmological time. This kind of Universe looks locally like de Sitter Universe. This fast expansion smooths out any inhomogeneities and anisotropies inside the horizon. The Universe of the past observed by a given observer is contained within the horizon of the past so that it looks isotropic and homogeneous.

  2. GUTs predict a high density of magnetic monopoles during the primordial period as singularities of non-abelian gauge fields. Magnetic monopoles have however not been detected and one should be able to explain this. The idea is very simple. If the Universe suffers an exponential expansion, the density of magnetic monopoles gets so diluted that they become effectively non-existent.

  3. The flatness problem means that the curvature scalar of 3-space, defined as a hyper-surface with constant value of the cosmological time parameter (proper time in the local rest system), is vanishing to an excellent approximation. de Sitter Universe indeed predicts a flat 3-space for a critical mass density. The contribution of known elementary particles to the mass density is however much below the critical mass density, so that one must postulate additional forms of energy. Dark matter and dark energy fit the bill. Dark energy is very much analogous to the vacuum energy of the Higgs like scalar fields in the inflationary scenario, but the energy scale of dark energy is 27 orders of magnitude smaller than that of inflation, about 10^-3 eV.

1.2 The evolution of the inflationary models

The inflationary models gradually became more realistic.

  1. Alan Guth was the first to realize that the decay of a false (unstable) vacuum in the early universe could solve the problem posed by magnetic monopoles. What would happen would be the analog of super-cooling in thermodynamics. In super-cooling the phase transition to the stable thermodynamical phase does not occur at the critical temperature, and cooling leads to a generation of bubbles of the stable phase which expand with light velocity.

    The unstable super-cooled phase would locally correspond to exponentially expanding de-Sitter cosmology with a non-vanishing cosmological constant and high energy density assignable to the scalar field. The exponential expansion would lead to a dilution of the magnetic monopoles and domain walls. The false vacuum corresponds to a value of Higgs field for which the symmetry is not broken but energy is far from minimum. Quantum tunneling would generate regions of true vacuum with a lower energy and expanding with a velocity of light. The natural hope would be that the energy of the false vacuum would generate radiation inducing reheating. Guth however realized that nucleation does not generate radiation. The collisions of bubbles do so but the rapid expansion masks this effect.

  2. A very attractive idea is that the energy of the scalar field transforms to radiation and produces in this manner what we identify as matter and radiation. To realize this dream the notion of slow-roll inflation was proposed. The idea was that the bubbles were not formed at all but that the scalar field gradually rolled down along an almost flat hill. This gives rise to an exponential inflation in good approximation. At the final stage the slope of the potential would become so steep that reheating would take place and the energy of the scalar field would transform to radiation. This requires a highly artificial shape of the potential energy. There is also a fine tuning problem: the predictions depend very sensitively on the details of the potential, so that strictly speaking there are no predictions anymore. The inflaton should also have a small mass and represent a new kind of particle.

  3. The tiny quantum fluctuations of the inflaton field have been identified as the seed of all structures observed in the recent Universe. These density fluctuations are visible also as fluctuations in the temperature of the cosmic microwave background, and these fluctuations have become an important field of study (WMAP).

  4. In the hybrid model of inflation there are two scalar fields. The first one gives rise to slow-roll inflation and the second one puts an end to the inflationary period when the first one has reached a critical value, by decaying to radiation. It is of course possible to imagine an endless number of speculative variants of inflation, and the Wikipedia article summarizes some of them.

  5. In eternal inflation the quantum fluctuations of the scalar field generate regions which expand faster than the surrounding regions and gradually begin to dominate. This means eternal inflation in the sense of a continual creation of Universes. This is the basic idea behind multiverse thinking. Again one must notice that scalar fields are essential: in their absence the whole vision falls down like a house of cards.

The basic criticism of Penrose against inflation is that it actually requires very specific initial conditions and that the idea that the uniformity of the early Universe results from a thermalization process is somehow fundamentally wrong. Of course, the necessity to assume a scalar field and a potential energy with a very weird shape, whose details affect dramatically the observed Universe, has also been criticized.

2. Comparison with TGD inspired cosmology

It is good to start by asking what are the empirical facts and how TGD can explain them.

2.1 What about magnetic monopoles in TGD Universe?

Also TGD predicts magnetic monopoles. CP2 has a non-trivial second homology and the second geodesic sphere represents a non-trivial element of homology. The induced Kähler magnetic field can be a monopole field, and cosmic strings are objects for which the transversal section of the string carries a monopole flux. The very early cosmology is dominated by cosmic strings carrying magnetic monopole fluxes. The monopoles do not however disappear anywhere. Elementary particles themselves are string like objects carrying magnetic charges at their ends, identifiable as wormhole throats at which the signature of the induced metric changes. For fermions the second end of the string carries a neutrino pair neutralizing the weak isospin. Also color confinement could involve magnetic confinement. These monopoles are indeed seen: they are essential for both the screening of weak interactions and for color confinement!

2.2 The origin of cosmological principle

The isotropy and homogeneity of the cosmic microwave radiation is a fact, as are also the fluctuations in its temperature as well as the anomalies in the fluctuation spectrum suggesting the presence of large scale structures. Inflationary scenarios predict that the fluctuations correspond to those of a nearly scale invariant Gaussian random field. The observed spectral index measuring the deviation from exact scaling invariance is consistent with the predictions of inflationary scenarios.

Isotropy and homogeneity reduce to what is known as the cosmological principle. In general relativity one has only local Lorentz invariance as an approximate symmetry. For Robertson-Walker cosmologies with sub-critical mass density one has Lorentz invariance, but this is due to the assumption of the cosmological principle - it is not a prediction of the theory. In inflationary scenarios the goal is to reduce the cosmological principle to thermodynamics, but the fine tuning problem is the fatal failure of this approach.

In TGD framework the cosmological principle reduces to sub-manifold gravity in H=M4× CP2 predicting a global Poincare invariance reducing to Lorentz invariance for the causal diamonds. This represents an extremely important distinction between TGD and GRT. This is however not quite enough since Poincare symmetries treat entire partonic 2-surfaces at the end of a CD as points rather than acting on a single point of space-time. More is required, and one expects that also now a finite horizon radius in the very early Universe would destroy the isotropy and homogeneity of the 3 K radiation. The solution of the problem is simple: cosmic string dominated primordial cosmology has an infinite horizon size so that arbitrarily distant regions are correlated. Also the critical cosmology, which is determined apart from the parameter characterizing its duration by its imbeddability alone, has an infinite horizon size. The same applies to the asymptotic cosmology for which the curvature scalar is extremized.

The hierarchy of Planck constants and the fact that gravitational space-time sheets should possess a gigantic Planck constant suggest a quantum solution to the problem: quantum coherence in arbitrarily long length scales is present even in the recent day Universe. Whether and how these two views about isotropy and homogeneity are related by quantum classical correspondence is an interesting question to ponder in more detail.

2.3 Three-space is flat

The flatness of three-space is an empirical fact and can be deduced from the spectrum of the microwave radiation. Flatness does not however imply inflation, which is a much stronger assumption involving the questionable scalar fields and the weirdly shaped potential requiring fine tuning. The already mentioned critical cosmology is fixed apart from the value of only a single parameter characterizing its duration, and this would mean extremely powerful predictions since just the imbeddability would fix the space-time dynamics almost completely.

Exponentially expanding cosmologies with critical mass density do not allow an imbedding to M4× CP2. Cosmologies with critical or over-critical mass density and flat 3-space allow an imbedding, but the imbedding fails above some value of cosmic time. These imbeddings are very natural since the radial coordinate r corresponds to the coordinate r of the Lorentz invariant a=constant hyperboloid, so that the cosmological principle is satisfied.

Can one imbed exponentially expanding sub-critical cosmology? This cosmology has the line element

ds^2 = dt^2 - ds_3^2 ,

ds_3^2 = sinh^2(t) dΩ_3^2 ,

where dΩ_3^2 is the metric of the a=constant unit hyperboloid of M4+.

  1. The simplest imbedding is as a vacuum extremal to M4× S2, where S2 is the homologically trivial geodesic sphere of CP2. The imbedding using standard coordinates (a,r,θ,φ) of M4+ and spherical coordinates (Θ,Φ) for S2 is to a geodesic circle (the simplest possibility):

    Φ= f(a) , Θ=π/2 .

  2. Φ=f(a) is fixed from the condition

    a = sinh(t) ,

    giving

    g_aa = (dt/da)^2 = 1/cosh^2(t)

    and from the condition for g_aa as a component of the induced metric tensor

    g_aa = 1 - R^2 (df/da)^2 = (dt/da)^2 = 1/cosh^2(t) .

  3. This gives

    df/da = +/- (1/R) × tanh(t) ,

    giving f(a) = (cosh(t)-1)/R up to an integration constant (a short symbolic check of these steps is given below). Inflationary cosmology thus allows an imbedding, but this imbedding cannot have a flat 3-space and therefore cannot make sense in TGD framework.
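
A short symbolic check of steps 2 and 3 (my own sketch using sympy, in the notation of the formulas above):

# Symbolic verification of the sub-critical imbedding steps.
import sympy as sp

t, a, R = sp.symbols('t a R', positive=True)

# a = sinh(t)  =>  g_aa = (dt/da)^2 = 1/(1+a^2) = 1/cosh^2(t)
g_aa = sp.diff(sp.asinh(a), a)**2
print(sp.simplify((g_aa.subs(a, sp.sinh(t)) - 1/sp.cosh(t)**2).rewrite(sp.exp)))  # -> 0

# g_aa = 1 - R^2 (df/da)^2  =>  (df/da)^2 = (1 - g_aa)/R^2 = tanh^2(t)/R^2
df_da_sq = ((1 - g_aa)/R**2).subs(a, sp.sinh(t))
print(sp.simplify((df_da_sq - sp.tanh(t)**2/R**2).rewrite(sp.exp)))  # -> 0

# Integrating df/da over a (da = cosh(t) dt and tanh(t)*cosh(t) = sinh(t)):
f = sp.integrate(sp.sinh(t)/R, t)
print(sp.simplify(f - (sp.cosh(t) - 1)/R))  # -> 1/R, i.e. agreement up to an integration constant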

2.4 Replacement of the inflationary cosmology with critical cosmology

In TGD framework inflationary cosmology is replaced with critical cosmology. The vacuum extremal representing critical cosmology has a 2-D CP2 projection - in the simplest situation a geodesic sphere. The dependence of Φ on r and of Θ on a is fixed from the condition that one obtains a flat 3-metric:

a^2/(1+r^2) + R^2 sin^2(Θ) (dΦ/dr)^2 = a^2 .

This gives

sin(Θ) = +/- ka , dΦ/dr = +/- (1/kR) × r/(1+r^2)^(1/2) .

The imbedding fails for |ka| >1 and is unique apart from the parameter k characterizing the duration of the critical cosmology. The radius of the horizon is given by

r_H = ∫ (da/a) × [(1-R^2k^2)/(1-k^2a^2)]^(1/2)

and diverges. This tells that there are no horizons and therefore the cosmological principle is realized. The infinite horizon radius could be seen as a space-time correlate for quantum criticality implying long range correlations and allowing one to realize the cosmological principle. Therefore the thermal realization of the cosmological principle would be replaced with a quantum realization in TGD framework, predicting long range quantal correlations in all length scales. Obviously this realization is in a well-defined sense the diametrical opposite of the thermal realization. The dark matter hierarchy is expected to correspond to the microscopic realization of the cosmological principle generating the long range correlations.
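
The flatness condition and the logarithmic divergence of the horizon integral at small a can also be checked symbolically; the following is a minimal sketch (my own) using the expressions quoted above.

# Check of the critical cosmology imbedding with sympy.
import sympy as sp

a, r, R, k = sp.symbols('a r R k', positive=True)

sin_Theta = k*a
dPhi_dr = (1/(k*R)) * r / sp.sqrt(1 + r**2)

# Flatness condition: a^2/(1+r^2) + R^2 sin^2(Theta) (dPhi/dr)^2 = a^2
g_rr = a**2/(1 + r**2) + R**2 * sin_Theta**2 * dPhi_dr**2
print(sp.simplify(g_rr - a**2))      # -> 0, so the 3-metric is flat

# The horizon integrand behaves like sqrt(1 - R^2 k^2)/a for small a,
# so the integral over a diverges logarithmically at a -> 0.
integrand = (1/a) * sp.sqrt((1 - R**2*k**2)/(1 - k**2*a**2))
print(sp.limit(a*integrand, a, 0))   # -> sqrt(1 - R^2 k^2)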

This cosmology could describe the phase transition increasing the Planck constant associated with a magnetic flux tube, leading to its thickening. Magnetic flux would be conserved and the magnetic energy of the thickened portion would be reduced via its partial transformation to radiation giving rise to ordinary and dark matter.

2.5 Fractal hierarchy of cosmologies within cosmologies

Many-sheeted space-time leads to a fractal hierarchy of cosmologies within cosmologies. The zero energy realization is in terms of causal diamonds within causal diamonds, with a causal diamond identified as the intersection of future and past directed light-cones. The temporal distance between the tips of a CD is given as an integer multiple of the CP2 time in the most general case, and boosts of CDs are allowed. There are also other moduli associated with a CD and a discretization of the moduli parameters is strongly suggestive.

Critical cosmology corresponds to a negative value of "pressure" so that it also gives rise to accelerating expansion. This suggests strongly that both the inflationary period and the accelerating expansion period, which takes place much later than the inflationary period, correspond to critical cosmologies differing from each other by a scaling. Continuous cosmic expansion is replaced with a sequence of discrete expansion phases in which the Planck constant assignable to a magnetic flux quantum increases and implies its expansion. This liberates magnetic energy as radiation so that a continual creation of matter takes place in various scales.

This fractal hierarchy is the TGD counterpart of eternal inflation. This fractal hierarchy implies also that the TGD counterpart of the inflationary period is just a scaled-up variant in a hierarchy of critical cosmologies within critical cosmologies. Of course, also radiation and matter dominated phases as well as the asymptotic string dominated cosmology are expected to be present and correspond to cosmic evolutions within a given CD.

2.6 Vacuum energy density as magnetic energy of magnetic flux tubes and accelerating expansion

TGD allows also a microscopic view about cosmology based on the vision that the primordial period is dominated by cosmic strings which during cosmic evolution develop a 4-D M4 projection, meaning that the thickness of the M4 projection defining the thickness of the magnetic flux tube gradually increases. The magnetic tension corresponds to a negative pressure and can be seen as a microscopic cause of the accelerated expansion. Magnetic energy is in turn the counterpart of the vacuum energy assigned with the inflaton field. The gravitational Planck constant assignable to the flux tubes mediating gravitational interactions nowadays is gigantic and they are thus in a macroscopic quantum phase. This explains the cosmological principle at the quantum level.

Phase transitions inducing the boiling of the magnetic energy to ordinary matter are possible. What happens is that the flux tube suffers a phase transition increasing its radius. This however reduces the magnetic energy so that part of the magnetic energy must transform to ordinary matter. This would give rise to the formation of stars and galaxies. This process is the TGD counterpart of the reheating transforming the potential energy of the inflaton to radiation. The local expansion of the magnetic flux tube could be described in good approximation by critical cosmology since quantum criticality is in question.
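
The energy bookkeeping behind "thickening liberates magnetic energy" is simple to illustrate. The sketch below (my own, using the ordinary Maxwellian expression for magnetic energy as a stand-in for Kähler magnetic energy) shows that doubling the radius of a flux tube at constant flux frees three quarters of the energy stored in it.

# Flux-conserving thickening of a tube of radius rho and length L.
import math

mu0 = 4 * math.pi * 1e-7  # SI vacuum permeability

def tube_energy(flux, rho, L):
    B = flux / (math.pi * rho**2)              # flux conservation fixes B
    return (B**2 / (2 * mu0)) * math.pi * rho**2 * L

flux, L = 1.0, 1.0                             # arbitrary illustrative units
E1 = tube_energy(flux, rho=1.0, L=L)
E2 = tube_energy(flux, rho=2.0, L=L)           # radius doubled
print(E2 / E1)                                 # 0.25: three quarters of the energy is freed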

One can of course ask whether inflationary cosmology could describe the transition period and critical cosmology could correspond only to the outcome. This does not look like a very attractive idea since the CP2 projections of these cosmologies have dimensions D=1 and D=2 respectively.

In TGD framework the fluctuations of the cosmic microwave background correspond to mass density gradients assignable to the magnetic flux tubes. An interesting question is whether the flux tubes could reveal themselves as a fractal network of linear structures in CMB. The prediction is that galaxies are like pearls in a necklace: smaller cosmic strings around long cosmic strings. The model for the formation of stars and galaxies gives a more detailed view about this.

2.7 What is the counterpart of cosmological constant in TGD framework?

In TGD framework the cosmological constant emerges as one asks what might be the GRT limit of TGD. The space-time surface decomposes to regions with both Minkowskian and Euclidian signature of the induced metric, and the Euclidian regions have an interpretation as counterparts of generalized Feynman graphs. Also the GRT limit must allow space-time regions with Euclidian signature of the metric - in particular CP2 itself - and this requires a positive cosmological constant in these regions. The action principle is naturally Maxwell-Einstein action with a cosmological constant which is vanishing in Minkowskian regions and very large in Euclidian regions of space-time. Both the Reissner-Nordström metric and CP2 are solutions of the field equations, with deformations of CP2 representing the GRT counterparts of Feynman graphs. The average value of the cosmological constant is very small and of correct order of magnitude since only the Euclidian regions contribute to the spatial average. This picture is consistent with the microscopic picture based on the identification of the density of magnetic energy as vacuum energy, since Euclidian particle like regions are created as magnetic energy transforms to radiation.

For details and background see the articles Do we really understand the solar system? and Inflation and TGD, and the chapter TGD and Astrophysics.

Monday, November 28, 2011

Nuclear physics objections against Rossi's reactor

The reading of Rossi's paper and the Wikipedia article led me to consider in more detail various objections against Rossi's reactor: the Coulomb barrier, the lack of gamma rays, the lack of an explanation for the origin of the extra energy, the lack of the expected radioactivity after fusing a proton with 58Ni (production of a neutrino and a positron in the beta decay of 59Cu), the unexplained occurrence of 11 per cent iron in the spent fuel, the 10 per cent copper in the spent fuel strangely having the same isotopic ratios as natural copper, and the lack of any unstable copper isotopes in the spent fuel, as if the reactor only produced stable isotopes.

Could natural isotope ratios be determined by cold fusion?

The presence of Cu in natural isotope ratios and the absence of unstable copper isotopes of course raise the question whether the copper is just added there. Also the presence of iron is strange. Could one have an alternative explanation for these strange coincidences?

  1. Whether unstable isotopes of Cu are present or not depends on how fast ^A Cu, A<63, decays by neutron emission: this decay is expected to be fast since it proceeds by strong interactions. I do not know enough about the detailed decay rates to be able to say anything about this.

  2. Why would the isotope ratios be the same as for naturally occurring copper isotopes? The simplest explanation would be that the fusion cascades of two stable Ni isotopes determine the ratio of naturally occurring Cu isotopes, so that cold fusion would be responsible for their production. As a matter of fact, the TGD based model combined with what is claimed about bio-fusion led to the proposal that stable isotopes are produced in interstellar space by cold fusion and that this process might even dominate over the production in stellar interiors. This would solve among other things also the well-known lithium problem. The implications of the ability to produce technologically important elements artificially at low temperatures are obvious.

Could standard nuclear physics view about cold fusion allow to overcome the objections?

Consider now whether one could answer the objections in the standard nuclear physics framework, taking it as a model for the cold fusion processes.

  1. By inspecting stable nuclides one learns that there are two fusion cascades. In the first cascade the isotopes of copper would be produced in a cascade starting with 58Ni + n → 59Cu and stopping at 63Cu. All isotopes ^A Cu, A ∈ {55,...,62}, are unstable with lifetimes shorter than one day. The second fusion cascade begins from 63Ni and stops at 65Cu.

  2. The first cascade involves five cold fusions and four weak decays of Cu. The second cascade involves two cold fusions and one weak decay of Cu. The time taken by the two cascades would be the same if there is a single slow step involved having the same duration in both. The only candidates for the slow step would be the fusion of the stable Ni isotope with the neutron or the fusion producing the stable Cu isotope. If the fusion time is long and the same irrespective of the neutron number of the stable isotope, one could understand the result. Of course, this kind of coincidence does not look plausible.

  3. ^(A-5)Fe could be produced via the alpha decay ^A Cu → ^(A-4)Co + α followed by ^(A-4)Co → ^(A-5)Fe + p (a bookkeeping check of this chain is sketched below).
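
A minimal bookkeeping check of the proposed chain (my own sketch; the copper mass number A = 63 is chosen only for illustration):

# Charge (Z) and mass number (A) balance for Cu -> Co + alpha -> Fe + p.
Z = {"Cu": 29, "Co": 27, "Fe": 26, "alpha": 2, "p": 1}

def balanced(parent, daughters):
    # parent and daughters are (symbol, mass number) pairs
    dZ = Z[parent[0]] - sum(Z[s] for s, _ in daughters)
    dA = parent[1] - sum(A for _, A in daughters)
    return dZ == 0 and dA == 0

A = 63  # illustrative copper mass number
print(balanced(("Cu", A), [("Co", A - 4), ("alpha", 4)]))  # True
print(balanced(("Co", A - 4), [("Fe", A - 5), ("p", 1)]))  # True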

Could TGD view about cold fusion allow to overcome the objections?

The claimed absence of positrons from beta decays and the absence of gamma rays are strong objections against the assumption that standard nuclear physics is enough. In TGD framework it is possible to ask whether the postulated fusion cascades really occur, and whether instead weak interactions in a dark phase of nuclear matter, with a range of the order of the atomic length scale, are responsible for the process because weak bosons would be effectively massless below the atomic length scale. For the TGD inspired model of cold fusion see this and this.

  1. The nuclear string model assumes that nucleons form nuclear strings with the nucleons connected by color bonds having a quark and an antiquark at their ends. Color bonds could also be charged, and this predicts a new kind of internal structure for nuclei. Suppose that the space-time sheets mediating weak interactions between the color bonds and nucleons correspond to such a large value of Planck constant that the weak interaction length scale is scaled up to the atomic length scale. The generalization of this hypothesis combined with the p-adic length scale hypothesis is actually a standard piece of TGD inspired quantum biology (see this).

  2. The energy scale of the color bond excitations of the exotic nuclei would be measured in keVs. One could even consider the possibility that the energy liberated in cold fusion would correspond to this energy scale. In particular, the photons emitted would be in the keV range, corresponding to a wavelength of the order of the atomic length scale rather than in the MeV range (a short numerical check is given after this list). This would resolve the gamma ray objection.

  3. Could the fusion process 58Ni + n actually lead to a generation of the Ni nucleus 59Ni with one additional positively charged color bond? Could the fusion cascade only generate exotic Ni nuclei with charged color bonds, which would transform to stable Cu by an internal dark W boson exchange transferring the positive charge of the color bond to a neutron and thus transforming it to a proton? This would not produce any positrons. This cascade might dominate over the one suggested by standard nuclear physics since the rates for beta decays could be much slower than the rate for the direct generation of Ni isotopes with positively charged color bonds.

  4. In this case also the direct alpha decay of Ni with charged color bond to Fe with charged color bond decaying to ordinary Fe by positron emission can be imagined besides the proposed mechanism producing Fe.

  5. If one assumes that this process is responsible for producing the natural isotope ratios, one could overcome the basic objections against Rossi's reactor.
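
The claim in point 2 that keV photons correspond to atomic rather than nuclear length scales is easy to check numerically (my own sketch):

# Photon wavelength lambda = h*c/E for energies in the keV range.
h_c_eV_nm = 1239.84  # h*c in eV*nm

for E_keV in (1, 5, 10):
    lam_nm = h_c_eV_nm / (E_keV * 1e3)
    print(f"{E_keV:2d} keV photon: wavelength ~ {lam_nm:.2f} nm")

# 1 keV -> ~1.2 nm and 10 keV -> ~0.12 nm, i.e. of the order of atomic sizes,
# to be contrasted with the ~10^-3 nm wavelengths of MeV gamma rays.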

The presence of em radiation in the keV range would be a testable basic signature of the new nuclear physics, as would be effects of X-ray irradiation on measured nuclear decay and reaction rates due to the fact that color bonds are excited. As a matter of fact, it is known that X-ray bursts from the Sun in the keV range have effects on the measured nuclear decay rates, and I have proposed that the proposed exotic nuclear physics in the keV range is responsible for the effect. Quite generally, the excitations of color bonds would couple nuclear physics with atomic physics, and I have proposed that the anomalies of water could involve a classical Z0 force in atomic length scales. Also the low compressibility of the condensed matter phase could involve a classical Z0 force. The possible connections with sono-luminescence and the claimed sonofusion are also obvious (see this).

Sunday, November 27, 2011

Cold fusion irritates again

Lubos has been raging several times about the cold fusion gadget of Andrea Rossi and has now returned to the topic again. The claim of Rossi and the physicist Focardi is that a cold fusion reaction of H and Ni producing Cu takes place in the presence of some "additives" (a palladium catalyst, as in many cold fusion experiments, gathering Ni at its surface?).

Lubos of course "knows" beforehand that the gadget cannot work: the Coulomb barrier. Since Lubos is a true believer in naive textbook wisdom, he simply refuses to consider the possibility that the physics we learned during student days might not be quite right. Personally I do not believe or disbelieve cold fusion: I just take it seriously, as any person calling himself a scientist should do. I have been developing for more than 15 years ideas about a possible explanation of cold fusion in TGD framework. The most convincing idea is that a large value of Planck constant associated with the nuclei could be involved, scaling up the range of weak interactions from 10^-17 meters to the atomic size scale and also scaling up the size of the nucleus to the atomic size scale, so that the nucleus and even quarks would look like constant charge densities instead of point-like charges. Therefore the Coulomb potential would be smoothed out and the wall would become much lower (see this and this).
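
To get a feeling for the size of the scaling needed, here is a rough estimate (my own sketch; the 10^-10 m atomic scale is taken as the target range):

# How much must hbar be scaled to stretch the W boson Compton length to atomic size?
hbar_c_MeV_fm = 197.327   # hbar*c in MeV*fm
m_W_GeV = 80.4            # W boson mass

compton_W_m = (hbar_c_MeV_fm / (m_W_GeV * 1e3)) * 1e-15   # reduced Compton length in m
atomic_scale_m = 1e-10
r = atomic_scale_m / compton_W_m
print(f"W Compton length ~ {compton_W_m:.1e} m, required scaling r ~ {r:.1e}")
# r comes out of the order of 4*10^7: a huge but finite scaling of hbar.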

One must say in honor of Lubos that at this time he had detailed arguments about what goes wrong with the reactor of Rossi: this is in complete contrast with the usual arguments of skeptics which as a rule purposefully avoid saying anything about the actual content and concentrate on ridiculing the target. The reason is of course that standard skeptic is just a soldier who has got the list of targets to be destroyed and as a good soldier does his best to achieve the goal. Thinking is not what a good soldier is expected to do since the professors in the consultive board take care of this and give orders to those doing the dirty job.

As a theoretician I have learned the standard arguments used to debunk TGD: the logic is circular, the text is mere word salad, everything is just cheap numerology, too many self references, colleagues have not recognized my work, the work has not been published in respected journals, and so on. The additional killer arguments state that I have used certain words which are taboos and already for this reason I am a complete crackpot. Examples of bad words are "water memory", "homeopathy", "cold fusion", "crop circles", "quantum biology", "quantum consciousness". There is of course no mention of the fact that I have always emphasized that I am a skeptic, not a believer or disbeliever, and only ask the question "What if...." and try to answer it in TGD framework. Intellectual honesty does not belong to the virtues of skeptics, who are for modern science what the jesuits were for the catholic church. Indeed, as Loyola said: the purpose sanctifies the deeds.

Lubos has real arguments but they suffer from a strong negative emotional background coloring, so that one cannot trust the rationality of the reasoning. The core of the arguments of Lubos is the following.

  1. The water inside the reactor is heated to a temperature of 100.1 C. This is slightly above 100 C defining the nominal value of the boiling point temperature at normal pressure. The problem is that if the pressure is somewhat higher, the boiling point increases and it could happen that no evaporation of the water takes place. If this is the case, the whole energy fed into the reactor could go to the heating of the water. The input power is indeed somewhat higher than the power needed to heat the water to this temperature without boiling, so that this possibility must be taken seriously and the question is whether the water is indeed evaporated.

    Comments:

    1. This looks really dangerous. Rossi uses water only as a passive agent gathering the energy assumed to be produced in the fusion of hydrogen and nickel to copper. This would allow one to assume that the water fed in is at a lower temperature and also that the water at the outlet is below the boiling point. Just by measuring the temperature at the outlet one can check whether the outgoing water has a temperature higher than it would have if all the input energy went to its heating (a back-of-the-envelope estimate of the powers involved is given after this list).
    2. This is only one particular demonstration, and it might be that in other demonstrations the situation is different. As a matter of fact, from an excellent video interview of Nobelist Brian Josephson one learns that there are also demonstrations in which the water is only heated, so that the argument of Lubos does not bite there. The gadget of Rossi is already used to heat a university building. The reason for the evaporation is probably that it provides an effective manner to collect the produced energy. Also by reading the Nyteknik report one learns that the energy production is directly measured rather than being based on the assumption that evaporation occurs.

  2. Is the water evaporated or not? This is the question posed by Lubos. The demonstration shows explicitly that there is a flow of vapor from the outlet. As Rossi explains, there is some condensation. Lubos claims that the flow of about 2 liters of vapor per second resulting from the evaporation of 2 ml of water per second should produce a much more dramatic visual effect: more vapor and with a faster flow velocity. Lubos claims that water just drops from the tube and part of it spontaneously evaporates. This is what Lubos wants to see and I have no doubt that he is seeing it. Strong belief can move mountains! Or at least can create the impression that they are indeed moving!;-)

    Comments:

    1. I do not see what Lubos sees, but I am not able to tell how many liters of vapor per second come out. Therefore the visual demonstration as such is not enough.
    2. I wonder why Rossi has not added a flow meter measuring the amount of vapor going through the tube. A second possibility is to let the vapor condense back to water in the tube by using a heat exchanger. This would allow one to calculate the energy gained without making the assumption that all that comes out is vapor. It might be that in some experiments this is done. In fact, the gadget of Rossi has been used to heat the university building, but even this is not a real proof.
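
To see why the heating-versus-evaporation question matters quantitatively, here is a back-of-the-envelope sketch (my own numbers; the 2 ml/s water flow is the figure quoted above and the 20 C inlet temperature is an assumption):

# Power needed to heat versus evaporate a 2 ml/s water flow, and the resulting steam volume.
c_water = 4.18e3    # J/(kg K), specific heat of water
L_vap = 2.26e6      # J/kg, latent heat of vaporization at 100 C
flow_kg_s = 2e-3    # 2 ml of water per second ~ 2 g/s

P_heat = flow_kg_s * c_water * (100.1 - 20.0)   # heat from an assumed 20 C inlet to 100.1 C
P_evap = flow_kg_s * L_vap                      # additionally evaporate all of it
print(f"heating only : ~{P_heat:.0f} W")
print(f"evaporation  : ~{P_evap:.0f} W on top of that")

# Steam volume if all of the 2 ml/s really is evaporated (ideal gas at 100 C, 1 atm):
R_gas, T, p, M = 8.314, 373.15, 1.013e5, 0.018
V_dot = (flow_kg_s / M) * R_gas * T / p
print(f"steam flow   : ~{V_dot*1e3:.1f} liters/s")
# Evaporation takes several kW beyond mere heating and yields a few liters of
# steam per second, which is why the appearance of the outlet became the point
# of contention.
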
To sum up, Lubos in his eagerness to debunk forgets that he is concentrating on a single demonstration and ignoring other demonstrations and also the published report to which his arguments do not apply. I remain however a skeptic (I mean a real skeptic; the skepticism of Lubos and, sad to say, of quite too many skeptics has nothing to do with a real skeptic attitude). Rossi should give information about the details of his invention, and quantitative tests really measuring the heat produced should be carried out and published. Presumably the financial aspects related to the invention explain the secrecy in a situation in which patenting is difficult.

Saturday, November 26, 2011

The origin of cosmic rays and cosmic rays as direct support for the hierarchy of Planck constants

The origin of cosmic rays remains still one of the mysteries of astrophysics and cosmology. The recent finding of a superbubble emitting cosmic rays might cast some light on the problem.

1. What has been found?

The following is the abstract of the article published in Science.

The origin of Galactic cosmic rays is a century-long puzzle. Indirect evidence points to their acceleration by supernova shockwaves, but we know little of their escape from the shock and their evolution through the turbulent medium surrounding massive stars. Gamma rays can probe their spreading through the ambient gas and radiation fields. The Fermi Large Area Telescope (LAT) has observed the star-forming region of Cygnus X. The 1- to 100-gigaelectronvolt images reveal a 50-parsec-wide cocoon of freshly accelerated cosmic rays that flood the cavities carved by the stellar winds and ionization fronts from young stellar clusters. It provides an example to study the youth of cosmic rays in a superbubble environment before they merge into the older Galactic population.

The usual thinking is that cosmic rays are not born in states with ultrahigh energies but are boosted to high energies by some mechanism. For instance, supernova explosions could accelerate them. Shock waves could serve as an acceleration mechanism. Cosmic rays could also result from the decays of heavy dark matter particles.

The story began when astronomers detected a mysterious source of cosmic rays in the direction of the constellation Cygnus X. Supernovae happen often in dense clouds of gas and dust, where stars between 10 to 50 solar masses are born and die. If supernovae are responsible for the acceleration of cosmic rays, it seems that these regions could also generate cosmic rays. Cygnus X is therefore a natural candidate to study. It need not however be the source of cosmic rays since magnetic fields could deflect the cosmic rays from their original direction. Therefore Isabelle Grenier and her colleagues decided to study, not cosmic rays as such, but gamma rays created when cosmic rays interact with the matter around them, since gamma rays are not deflected by magnetic fields. The Fermi gamma-ray space telescope was directed toward Cygnus X. This led to the discovery of a superbubble with a diameter of more than 100 light years. The superbubble contains a bright region which looks like a duck. The spectrum of these gamma rays implies that the cosmic rays are energetic and freshly accelerated so that they must be close to their sources.

The important conclusions are that cosmic rays are created in regions in which stars are born and gain their energies by some acceleration mechanism. The standard identification of the acceleration mechanism is shock waves created by supernovae, but one can imagine also other mechanisms.

2. Cosmic rays in TGD Universe?

In TGD framework one can imagine several mechanisms producing cosmic rays. According to the vision discussed already earlier, both ordinary and dark matter would be produced from dark energy identified as Kähler magnetic energy, producing cosmic rays as a by-product. What causes the transformation of dark energy to matter was not discussed earlier, but a local phase transition increasing the value of the Planck constant of the magnetic flux tube could be the mechanism. A possible acceleration mechanism would be acceleration in an electric field along the magnetic flux tube. Another mechanism is a supernova explosion rapidly scaling up the size of the closed magnetic flux tubes associated with the star by an hbar increasing phase transition preserving the Kähler magnetic energy of the flux tube, and accelerating the highly energetic dark matter at the flux tubes radially: some of the particles moving along the flux tubes would leak out and give rise to cosmic rays and the associated gamma rays.

2.1 The mechanism transforming dark energy to dark matter and cosmic rays

Consider first the mechanism transforming dark energy to dark matter.

  1. The recent model for the formation of stars and also galaxies is based on the identification of magnetic flux tubes as carriers of mostly dark energy identified as Kähler magnetic energy giving rise to a negative "pressure" as magnetic tension and explaining the accelerated expansion of the Universe. Stars and galaxies would be born as bubbles of ordinary matter generated inside magnetic flux tubes. Inside these bubbles dark energy would transform to dark and ordinary matter. Kähler magnetic flux tubes are characterized by the value of Planck constant, and for the flux tubes mediating gravitational interactions its value is gigantic. For a star of mass M its value for the flux tubes mediating self-gravitation would be hbar_gr = GM^2/v_0, v_0 < 1 (v_0 is a parameter having an interpretation as a velocity).

  2. One possible mechanism liberating Kähler magnetic energy as cosmic rays would be the increase of the Planck constant for the magnetic flux tube occurring locally and scaling up quantal distances. Assume that the radius of the flux tube is this kind of quantum distance. Suppose that the scaling hbar → r×hbar implies that the radius of the flux tube scales up as r^n, n=1/2 or n=1 (n=1/2 turns out to be the sensible option). The Kähler magnetic field would then scale as 1/r^(2n), so that the magnetic flux would remain invariant as it should, and the Kähler magnetic energy would be reduced as 1/r^(2n). For both options Kähler magnetic energy would be liberated (a simple check of this bookkeeping is sketched after this list). The liberated Kähler magnetic energy must go somewhere, and the natural assumption is that it transforms to particles giving rise to the matter responsible for the formation of the star.

    Could these particles include also cosmic rays? This would conform with the observation that stellar nurseries could also be the birth places of cosmic rays. One must of course remember that there are many kinds of cosmic rays. For instance, this mechanism could produce ultra high energy cosmic rays having nothing to do with the 1-100 GeV cosmic rays studied in the present case.

  3. The simplest assumption is that the thickening of the magnetic flux tubes during cosmic evolution is based on phase transitions increasing the value of Planck constant in a step-wise manner. This is not a new idea: I have proposed that the entire cosmic expansion at the level of space-time sheets corresponds to this kind of phase transitions. The increase of Planck constant by a factor of two is a good guess since it would increase the size scale by two. In fact, the Expanding Earth hypothesis, which has no standard physics realization, finds a beautiful realization in this framework. Also the periods of accelerating expansion could be identified as these phase transition periods.

  4. For the values of gravitational Planck constant assignable to the space-time sheets mediating gravitational interactions, the Planck length, scaling like r^(1/2), would scale up to the black hole horizon radius. For the n=1/2 option the proposal would imply that magnetic flux tubes having primordially an M^4 projection with radius of order Planck length would scale up to the black hole horizon radius if the gravitational Planck constant has the value GM^2/v_0, v_0 < 1, assignable to a star. Obviously this evolutionary scenario is consistent with what is known about the relationship between the masses and radii of stars.
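
The bookkeeping in item 2 above can be checked with a few lines of code. The sketch below is minimal and assumes only flux conservation and the stated scaling of the flux tube radius as r^n under hbar → r×hbar; the unit choices and the illustrative value r = 2 are assumptions of the example, not TGD inputs.

```python
import math

# Minimal check of the hbar -> r*hbar bookkeeping for a magnetic flux tube,
# assuming only flux conservation and the radius scaling R -> r^n * R
# (n = 1/2 or 1). The units and the value r = 2 are purely illustrative.
def scaled_tube(B, R, L, r, n):
    """Return (flux, magnetic energy) of the tube after hbar -> r*hbar."""
    R_new = R * r**n                    # radius scales as r^n
    B_new = B / r**(2 * n)              # flux B*pi*R^2 conserved  =>  B ~ 1/r^(2n)
    flux = B_new * math.pi * R_new**2
    energy = 0.5 * B_new**2 * math.pi * R_new**2 * L   # energy density B^2/2 times volume
    return flux, energy

B, R, L = 1.0, 1.0, 1.0
flux0 = B * math.pi * R**2
E0 = 0.5 * B**2 * math.pi * R**2 * L
for n in (0.5, 1.0):
    flux, E = scaled_tube(B, R, L, r=2.0, n=n)
    print(f"n={n}: flux conserved: {math.isclose(flux, flux0)}, "
          f"E/E0 = {E / E0:.3f} (expected 1/r^(2n) = {2.0**(-2 * n):.3f})")
```

For both exponents the flux stays fixed while the Kähler magnetic energy drops as 1/r^(2n); the difference is the energy available for particle production.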

2.2. What is the precise mechanism transforming dark energy to matter?

What is the precise mechanism transforming the dark magnetic energy to ordinary or dark matter? This is not clear, but the mechanism could produce very heavy exotic particles not yet observed in the laboratory, which in turn decay to very energetic ordinary hadrons giving rise to the cosmic ray spectrum. I have considered a mechanism for the production of ultrahigh energy cosmic rays based on the decays of hadrons of scaled-up copies of ordinary hadron physics. In this case no acceleration mechanism would be necessary. Cosmic rays lose their energy in interstellar space. If they correspond to a large value of Planck constant, the situation would change and the rate of energy loss could be very slow. The above described experimental finding about Cygnus X however suggests that acceleration takes place for the ordinary cosmic rays with relatively low energies. This of course does not exclude particle decays as the primary production mechanism of very high energy cosmic rays. In any case, dark magnetic energy transforming to matter gives rise to both stars and high energy cosmic rays in the TGD based proposal.

2.3. What is the acceleration mechanism?

How are cosmic rays created by this general process giving rise to the formation of stars?

  1. Cosmic rays could be identified as newly created matter leaking out from the system. Even in the absence of accelerating fields, the particles created in the boiling of dark energy to matter and moving along magnetic flux tubes would move essentially like free particles, whereas in the orthogonal directions they would feel a 1/ρ gravitational force. For large values of hbar this could explain very high energy cosmic rays. The recent findings about the gamma ray spectrum however suggest that acceleration is involved for cosmic rays with energies of 1-100 GeV.

  2. One possible alternative acceleration mechanism relies on motion along magnetic flux tubes deformed in such a manner that there is an electric field orthogonal to the magnetic field, with the field lines of both fields rotating around the direction of the flux tube. The simplest imbeddings of constant magnetic fields allow deformations carrying also an electric field, and one can expect the existence of preferred extremals with a similar structure. The electric field would induce an acceleration along the flux tube. If the flux tube corresponds to a large non-standard value of Planck constant, the dissipation rate would be low and the acceleration mechanism would be very effective.

    A similar mechanism might even explain the observations of ultrahigh energy electrons associated with lightnings at the surface of Earth: they should not be there because dissipation in the atmosphere should not allow free acceleration in the radial electric field of Earth.

    Here one must be very cautious: the findings are based on a model in which gamma rays are generated in collisions of cosmic rays with matter. If cosmic rays travel along magnetic flux tubes with a gigantic value of Planck constant, they should dissipate extremely slowly and no gamma rays would be generated. Hence the gamma rays must be produced by the collisions of cosmic rays which have leaked out from the magnetic flux tubes. If the flux tubes are closed (say, associated with the star), the leakage must indeed take place if the cosmic rays are to travel to Earth.

  3. There could be a connection with supernovae, although it would not be based on shock waves. Also supernova expansion could be accompanied by a phase transition increasing the value of Planck constant. Suppose that Kähler magnetic energy is conserved in the process. This is the case if the lengths of the magnetic flux tubes scale by r and the radii by r^(1/2). The closed flux tubes associated with the supernova would expand and the size scale of the flux tubes would increase by a factor r. The fast radial scaling of the flux tubes would accelerate the dark matter at the flux tubes radially.

    Cosmic rays having the ordinary value of Planck constant could be created when some of the dark matter leaks out from the magnetic flux tubes as their expanding motion in the radial direction accelerates or slows down. High energy dark particles moving along the flux tubes would leak out in the tangential direction. Gamma rays would be generated as the resulting particles interact with the environment. The energies of the cosmic rays would not be the outcome of the acceleration process: only their leakage would be caused by it, so that the mechanism differs in a decisive manner from the mechanism involving shock waves.

  4. The energy scale of cosmic rays - let us take it to be about E = 100 GeV for definiteness - gives an order of magnitude estimate for the Planck constant of dark matter at the Kähler magnetic flux tubes if one assumes that supernovae are producing the cosmic rays. Assume that the electromagnetic field equals the induced Kähler field (the CP2 projection of the space-time surface belongs to a homologically non-trivial geodesic sphere). Assume that E equals the cyclotron energy scale given by E_c = hbar eB/m_e in the non-relativistic situation and by E_c = (hbar eB)^(1/2) in the relativistic situation. The situation is now relativistic for both proton and electron, and in this limit the cyclotron energy scale does not depend on the mass of the charged particle at all. This means that the same value of hbar produces the same energy for both electron and proton.

    1. The magnetic field of a pulsar can be estimated from the knowledge of how much the field lines are pulled together and from the conservation of magnetic flux: a rough estimate is B = 10^8 Tesla, and this value will be used also now. This field is 2×10^12 B_E, where B_E = 0.5 Gauss is the nominal value of the Earth's magnetic field.

    2. The cyclotron frequency of the electron in the Earth's magnetic field is f_c(e) = 6×10^5 Hz in a good approximation and corresponds to the cyclotron energy E_c = 10^-14 (f_c/Hz) eV from the approximate correspondence 1 eV ↔ 10^14 Hz true for E = hf. For the ordinary value of Planck constant the electron's cyclotron energy in the supernova magnetic field B_S = 10^8 Tesla would be E_c = 2×10^-2 (f_c/Hz) eV (f_c still denoting the cyclotron frequency in the Earth's field), that is of order 10^4 eV and thus much below the energy scale E = 100 GeV.

    3. The required scaling hbar → r×hbar of Planck constant is obtained from the condition E_c = E; in the case of the electron one can write

      r = (E/E_c)^2 × (B_E/B_S) × hbar eB_E/m_e^2 .

      Here E_c denotes the electron's cyclotron energy in the Earth's magnetic field for the ordinary value of hbar. The dimensionless parameter hbar eB_E/m_e^2 = 1.2×10^-14 follows from m_e = 0.5 MeV. The estimate gives r ∼ 2×10^12 (a numerical cross-check is sketched below). Values of Planck constant of this order of magnitude and even larger ones appear in the TGD inspired model of the brain, but in that case the magnetic field is the Earth's magnetic field and the large thickness of the flux tube makes it possible to satisfy the quantization of magnetic flux in which the scaled-up hbar defines the unit.

    To sum up, large values of Planck constant would be absolutely essential, making possible high energy cosmic rays, and just the presence of high energy cosmic rays could be seen as experimental support for the hierarchy of Planck constants. The acceleration mechanisms of cosmic rays are poorly understood, and the TGD option predicts that there is no acceleration mechanism to search for.
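
The order of magnitude r ∼ 2×10^12 can be cross-checked numerically. The minimal sketch below assumes the relativistic cyclotron energy scale E_c ∼ (hbar_eff eB)^(1/2) c with O(1) numerical factors dropped, and uses only the values B_S = 10^8 Tesla, B_E = 0.5 Gauss and E = 100 GeV quoted above.

```python
import math

# Order-of-magnitude cross-check of the required scaling hbar -> r*hbar,
# assuming the relativistic cyclotron energy scale E_c ~ sqrt(hbar_eff*e*B)*c
# (O(1) numerical factors dropped) and the field/energy values quoted above.
hbar = 1.055e-34          # J s
e    = 1.602e-19          # C
c    = 2.998e8            # m/s
eV   = 1.602e-19          # J

B_E = 0.5e-4              # Earth's field, 0.5 Gauss in Tesla
B_S = 1.0e8               # assumed supernova-strength field, Tesla
E   = 100e9 * eV          # target cosmic ray energy scale, 100 GeV

print("B_S/B_E    = %.1e" % (B_S / B_E))            # ~2e12, the ratio quoted above

Ec = math.sqrt(hbar * e * B_S) * c                   # scale for the ordinary hbar
print("E_c(hbar)  = %.0f keV" % (Ec / eV / 1e3))     # ~80 keV, far below 100 GeV

r = (E / Ec) ** 2                                    # E_c grows like sqrt(r)
print("required r = %.1e" % r)                       # ~2e12
```

Up to O(1) factors this reproduces both the field ratio 2×10^12 and the estimate r ∼ 2×10^12 obtained above.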

For details and background see the article Do we really understand the solar system? and the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-time".

Thursday, November 24, 2011

Three year old CDF anomaly is here again!

The CDF anomaly - or multimuon mystery - emerged about three years ago; the preprint was sent to arXiv.org more or less on my birthday, so that I regarded it with full reason as a birthday present. The reason was that its explanation provided further support for the existence of color octet excitations of charged leptons having the same p-adic mass scale for the lowest mass state. In this case color excited tau leptons would have been in question. For color excited electrons evidence had emerged already in the seventies in heavy ion collisions, and for color excited muons a year or so before the CDF anomaly.

For a series of posts about the CDF anomaly by Tommaso Dorigo see this. To get a handle on the series of my own postings see this. For the TGD based explanation of the CDF anomaly and earlier similar anomalies see this.

This anomaly had really bad luck. Such new light particles are not possible in the standard model: intermediate gauge boson decay widths simply leave no room for them. The average theorist of course refuses to accept observations which are in conflict with the theory, so all these anomalies have been forgotten. As a crackpot I however respect experimental and empirical facts more than the orders of the hegemony. The assumption that these particles are dark matter particles in the TGD sense resolves the paradox. Being a dark matter particle in TGD Universe means having a non-standard value of Planck constant coming as an integer multiple of the ordinary Planck constant. Only particles with the same value of Planck constant can appear in the same vertex, so that intermediate gauge boson decay widths do not pose any restrictions.

I have applied the general format of the model of the CDF anomaly in several contexts (see this and this). One of the common aspects of the applications is that pion-like states of leptohadron physics or of a scaled variant of hadron physics seem to appear in several mass scales coming as octaves of the lowest one. Also in the TGD based model for light hadrons quarks appear in several p-adic mass scales, and more precise considerations show that it is not at all clear whether also the ordinary pion could have these higher octaves: their production and detection is not at all easy, but would provide a brilliant confirmation of the p-adic length scale hypothesis.

The three year old CDF anomaly generated a lot of debate, as did its slightly more than half a year old brother last spring (having an interpretation in terms of the pion of M89 hadron physics and involving a very similar basic mechanism). The D0 collaboration however reported that they do not see anything. Tommaso Dorigo demonstrated in his blog posting that we can indeed trust D0: it did not have any chance of observing anything! The acceptance criteria for events made it impossible to observe the anomaly. No-one was of course interested in such minor details, and the CDF anomaly was safely buried in the sands of time.

By the way, a delicate change of the relevant parameters is a very effective manner to kill undesired anomalies, and the people calling themselves skeptics are masters in applying this technique. One can speak of a time-honored tradition here. Water memory is one of the victims of this method, and no serious scientist dares to say this word aloud although anyone with eyes can see the concrete images taken by professional scientists demonstrating its reality (see this). Do not believe a scientist before you know who pays for him!

To my surprise the situation changed this morning. Tommaso Dorigo reported that CDF has done a much simpler analysis using a larger sample of data. This bloody, disgusting anomaly is still there! The reader interested in details is encouraged to visit Tommaso's blog.

The hegemony is living through difficult times: no sign of the standard model Higgs or its counterparts in various SUSYs, no sign of standard SUSY, no sign of super-stringy new physics like mini black holes and large extra dimensions. Just anomalies having no explanation in the theories that the hegemony is ready to consider and, even worse, having an elegant explanation in the TGD framework. The most irritating stone in the shoe is the accumulating evidence for the scaled-up variant of hadron physics predicted by TGD. There is also this nasty neutrino superluminality, elegantly explained and almost predicted by TGD. And now this revived CDF anomaly which should have been buried already! The rope is getting tighter and tighter!

How easy it would have been to just explore this wonderful Universe with an open mind! Nature does not seem to care in the least about the attempts of Big Science Bosses to tell it how it should behave. But by all means, my Dear Professors and Leaders of Units of Excellence: try! By beating your brilliant heads against the wall sufficiently many times you might finally learn what those contemptible crackpots learned a long time ago just by using simple logic and accepting simple facts!

Sunday, November 20, 2011

Algebraic braids, sub-manifold braid theory, and generalized Feynman diagrams

Ulla sent me a link to an article by Sam Nelson about a very interesting notion, new to me, known as algebraic knots, which has initiated a revolution in knot theory. This notion was introduced in 1996 by Louis Kauffmann, so it is already a 15 year old concept. While reading the article I realized that this notion fits perfectly the needs of TGD and leads to progress in attempts to articulate more precisely what generalized Feynman diagrams are.

The outcome was an article in which I summarize briefly the vision about generalized Feynman diagrams, introduce the notion of algebraic knot, and after that discuss in more detail how the notion of algebraic knot could be applied to generalized Feynman diagrams. The algebraic structures kei, quandle, rack, and biquandle and their algebraic modifications as such are not enough (a toy illustration of the quandle notion is sketched below). The lines of Feynman graphs are replaced by braids, and in vertices braid strands redistribute. This poses several challenges: the crossings associated with braiding and the crossings occurring in non-planar Feynman diagrams should be integrated into a more general notion; braids are replaced with sub-manifold braids; braids of braids ... of braids are possible; the redistribution of braid strands in vertices should be algebraized. In the following I try to abstract the basic operations which should be algebraized in the case of generalized Feynman diagrams.
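
For readers unfamiliar with these structures, a minimal sketch of the quandle notion may help: on any group the conjugation operation x ▷ y = y⁻¹xy satisfies the quandle axioms, and it is colorings of knot and braid diagrams by such structures that give quandles their role in knot theory. The choice of the group S3 below is purely illustrative and has nothing TGD-specific about it.

```python
from itertools import permutations

# Conjugation quandle sketch: on any group, x |> y = y^-1 * x * y satisfies the
# quandle axioms (idempotency, right-invertibility, right self-distributivity).
# The group S3, realized as permutations of (0,1,2), is an illustrative choice.
def compose(p, q):                      # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def op(x, y):                           # x |> y = y^-1 o x o y
    return compose(inverse(y), compose(x, y))

G = list(permutations(range(3)))
assert all(op(x, x) == x for x in G)                         # x |> x = x
assert all(len({op(x, y) for x in G}) == len(G) for y in G)  # x -> x |> y is a bijection
assert all(op(op(x, y), z) == op(op(x, z), op(y, z))
           for x in G for y in G for z in G)                 # right self-distributivity
print("conjugation quandle axioms hold for S3")
```

The axioms mirror the Reidemeister moves, which is why such structures are natural book-keeping devices for braid crossings.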

One should also be able to concretely identify braids and 2-braids (string world sheets) as well as partonic 2-surfaces, and I have discussed several identifications during the last years. Legendrian braids turn out to be very natural candidates for braids and their duals for the partonic 2-surfaces. String world sheets in turn could correspond to the analogs of Lagrangian sub-manifolds or to minimal surfaces of the space-time surface satisfying the weak form of electric-magnetic duality. The latter option turns out to be more plausible. Finite measurement resolution would be realized as symplectic invariance with respect to the subgroup of the symplectic group leaving the end points of braid strands invariant. In accordance with the general vision, TGD as almost topological QFT would mean a symplectic QFT. The identification of braids, partonic 2-surfaces and string world sheets - if correct - would solve quantum TGD explicitly at the string world sheet level, in other words in finite measurement resolution.

Irrespective of whether the algebraic knots are needed, the natural question is what generalized Feynman diagrams are. It seems that the basic building bricks can be identified so that one can write rather explicit Feynman rules already now. Of course, the rules are still far from something to be burned into the spine of the first year graduate student.

For details and background see the article Algebraic braids, sub-manifold braid theory, and generalized Feynman diagrams or the new chapter Generalized Feynman Diagrams as Generalized Braids of "Towards M-matrix".

Saturday, November 19, 2011

The most recent revelations about Higgs

As the readers of tabloids must have already noticed, the latest combination of ATLAS and CMS searches for the Higgs boson has been released, and the results are practically identical with those anticipated by Philip Gibbs. Jester, Peter Woit, and Tommaso Dorigo have all commented on the results.

There are no big surprises. The Higgs is now excluded above 141 GeV. The second figure in Tommaso's blog shows the observed and predicted distribution of the so-called local p-value as a function of the Higgs boson mass. Assuming that the Higgs exists, the predicted local p-value has an enormous downwards peak above 140 GeV. The observed distribution also has a small downwards peak at 140 GeV, but it is not clear to me whether this really signals the presence of a neutral particle: in TGD Universe it would be the neutral M89 pion with a mass of 139 GeV, whereas the charged M89 pions would have mass 144 GeV.

Here I am angry with myself because of my sloppiness: my first MATLAB estimate for the M89 pion mass, performed when the CDF anomaly came, was based on the approximation that there is no electromagnetic splitting, and I scaled the mass of the charged pion and got 144 GeV. I realized my error only about a month or two ago. In any case, the prediction is that there should be charged companions of the neutral M89 pion at 144 GeV besides the 139 GeV neutral M89 pion.

The second downwards peak in the p-value distribution is at 120 GeV. My proposal is that it corresponds to the M89 spion (see this). This requires some explanation. The basic assumption is that squarks and quarks have the same p-adic mass scales and perhaps even identical masses, and that shadronization via the exchange of almost massless gluinos takes place much faster than the electro-weak decays of squarks to quarks and electro-weak gauginos. This prevents the events with missing energy (lightest SUSY particle) predicted by standard SUSY but not observed. The missing missing energy has already led to models based on R-parity breaking requiring the non-conservation of either lepton number or baryon number. I have discussed shadronization here and will not go into further details.

A considerable mixing between pion and spion, due to the large value of the color coupling strength at low energies, takes place and makes the mass squared matrix non-diagonal. It must be diagonalized, and in the case of ordinary light hadrons the second mass squared eigenvalue is negative, meaning tachyonicity (a toy illustration of this is sketched below). The pragmatic conclusion, which might horrify a colleague appreciating aesthetic values (I am ready to discuss this;-)), is that the tachyonic state is absent from the spectrum and SUSY would effectively disappear for light hadrons. In the case of charmonium states the mixing is weaker since α_s is weaker, and both states would be non-tachyonic. The mysterious X and Y bosons would be in good approximation scharmonium states (see this).
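
A toy numerical version of the mixing argument: take a symmetric 2×2 mass squared matrix with a common diagonal entry m^2 and an off-diagonal mixing term Δ generated by the strong coupling. The eigenvalues are m^2 ± Δ, so large enough mixing drives one of them negative. The numbers below are purely illustrative, not fits to TGD mass formulas.

```python
import numpy as np

# Toy (pion, spion) mass-squared matrix with off-diagonal mixing Delta.
# Eigenvalues are m2 +/- Delta: strong mixing (large alpha_s) can make one
# eigenvalue negative, i.e. tachyonic, removing that state from the spectrum.
# All numbers are illustrative.
def mass_squared_eigenvalues(m2, delta):
    M2 = np.array([[m2, delta],
                   [delta, m2]])
    return np.linalg.eigvalsh(M2)

print(mass_squared_eigenvalues(m2=1.0, delta=0.3))   # weak mixing: [0.7, 1.3], both physical
print(mass_squared_eigenvalues(m2=1.0, delta=1.5))   # strong mixing: [-0.5, 2.5], one tachyonic
```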

I agree with Jester, who compares the situation to that in the former Soviet Union when the secretary general had not appeared in public for a long time and the working class and lower party officials were asking whether he is sick, unconscious, or dead, and if dead, how long he has been dead. My guess is that the Higgs is not there and that the evil particle physics hegemony might already know it but does not inform ordinary folks;-).

M89 hadron physics has survived. One of the really dramatic signatures of M89 would be the production of jets coming in multiples of three due to the decay of M89 quarks to a quark and a quark pair. In the case of the M89 proton this would yield at least nine jets (see this). The production of an M89 proton pair would produce at least 18 jets: something so sensational that it would probably revive the speculations about mini black holes at LHC!

Nanopoulos et al (see the posting of Lubos) have taken the ultrahigh jet multiplicities seriously and proposed an explanation in terms of pair production of gluinos: the problem of the model is of course the missing missing energy. Eight would be the lower bound for the jet number, whereas the decay of the M89 proton predicts at least 9 jets. Lubos indeed speaks of nona-jets. The estimate for the gluino mass by Nanopoulos et al is 518 GeV. By direct scaling the mass of the M89 proton is 489 GeV.

I am quite too much of a theoretician with my head in the clouds. Therefore I have not considered the practical implications of discovering a scaled-up copy of hadron physics instead of the Higgs at LHC. The two big competing projects for the next collider are currently CLIC (Compact Linear Collider) and ILC (International Linear Collider).

  1. The assumption motivating these projects is that the Higgs (and possibly also SUSY) will be found at LHC. These colliders would allow one to perform high precision measurements related to the Higgs (and SUSY). For this reason one uses electron-positron collisions, and the highest energy achieved at CLIC (ILC) would be 3 TeV (1 TeV), as compared to the 14 TeV to be achieved at LHC. Electrons would lose their energy via bremsstrahlung if the collider were circular, so that a linear collider is the only possible option. The mass of the M89 proton is predicted to be around 0.5 TeV. This does not exclude the study of M89 hadron physics at these colliders: for instance, annihilation to a photon or Z^0 followed by the decay to a quark pair of M89 hadron physics hadronizing to M89 hadrons is possible.

  2. I am of course not a professional, but it seems to me that a better choice for the next collider would be a scaled up variant of LHC with a higher collision energy for protons. This would mean starting from scratch. Sorry! I swear that I did my best to tell! I began to talk about M89 hadron physics already 15 years ago but no-one wanted to listen;-)!

Adding the superluminal neutrinos to the soup, I can say that my life as an eternally unemployed academic pariah has changed into a captivating adventure! It is wonderful to witness the encounter of theory and reality, although theory is usually the one which does not survive the collision.

Friday, November 18, 2011

Are quantum states only figments of imagination?

The article The quantum state cannot be interpreted statistically by Pusey, Barrett and Rudolph has created a lot of turmoil after Nature's hype for it. Lubos reacted strongly. Also Sean Carroll commented on the article, in such a politically correct manner that it remained unclear what he was actually saying.

The starting point is the orthodox probabilistic interpretation stating that quantum states are only mathematical constructs allowing one to make predictions and therefore lack a genuine ontological status. The main point of the authors is that quantum states are real and that this destroys the probabilistic interpretation. They argue that if quantum states are only probabilistic constructs, one ends up in contradiction with quantum measurement theory. I share the belief that quantum states are real, but I cannot understand how this could destroy the probabilistic interpretation.

The argument

I scanned through the paper and indeed found it very difficult to understand how their argument could prove that the probabilistic interpretation of quantum theory is wrong.

  1. The authors assume that the probabilistic interpretation of quantum theory makes it possible to prepare the system in two different manners yielding two non-orthogonal states which cannot be distinguished by quantum measurements. The reason for having several preparation processes producing different states which cannot be distinguished by quantum measurements would be that the state preparation process is probabilistic. Why this should be the case I cannot understand: in fact, they use an argument based on classical probability, but quantum probability is not classical!

  2. The authors assume a pair of this kind of systems, giving rise to four indistinguishable states, and apply quantum measurement theory to show that one of these states is produced with vanishing probability in quantum measurement, so that the states are actually distinguishable. From this reductio ad absurdum they conclude that the probabilistic interpretation is wrong.

What goes wrong with the argument?

What could go wrong with this argument? I think that the problem is that the notions used are not defined precisely and classical and quantum probabilities are confused with each other. The following questions might help to clarify what I mean.

  1. What does one mean by the probabilistic interpretation? The idea that probability amplitudes can be coded by classical probabilities is certainly wrong: the probabilities defined by the coefficients in a quantum superposition correspond to probabilities only in the state basis used, there is an infinite number of such bases, and also the phases of the coefficients matter. A good example comes from classical physics: the relative phases of the Fourier components of a sound wave are very important for how the sound is heard; the mere intensities of the Fourier components are not enough. For instance, time reversed speech has the same power spectrum as ordinary speech but sounds totally different (a short numerical illustration is given after this list). The authors however use a plausibility argument based on classical probability to argue that the system can be prepared in two non-orthogonal states. This destroys the already foggy argument.

  2. What really happens in quantum measurement? What do state function reduction and state preparation really mean? Does state function reduction really occur? The authors do not bother to ponder these questions. The same can be said about any mainstream physicist who has a healthy dose of opportunism in his genes. State function reduction is in blatant conflict with the determinism of the Schrödinger equation, and this alone is an excellent reason to shut up and calculate. This is also what Lubos does, although "shut up" in the case of Lubos has only a symbolic meaning.

  3. If one begins to ponder what might happen in state function reduction and preparation, one soon ends up asking questions about the relationship between experienced time and the time of physicists and about free will, and eventually one becomes a consciousness theorist. The price paid is life-long unemployment as a pariah of the academic community, regarded as an intellectually retarded individual by the brahmins. I can tell this on the basis of personal experience.
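
The Fourier aside in item 1 is easy to verify numerically: a real signal and its time reversal have identical power spectra, yet they are completely different signals, because the information sits in the phases. A minimal sketch, with a synthetic chirp standing in for speech:

```python
import numpy as np

# A real signal and its time reversal have the same power spectrum (the DFT is
# conjugated up to a phase factor), yet the signals themselves differ: the
# phases of the Fourier coefficients carry information the magnitudes do not.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
x = np.sin(2 * np.pi * (5 + 40 * t) * t)        # toy "chirp" standing in for speech
x_rev = x[::-1]                                  # time reversed signal

P = np.abs(np.fft.rfft(x)) ** 2
P_rev = np.abs(np.fft.rfft(x_rev)) ** 2

print("power spectra equal:", np.allclose(P, P_rev))   # True
print("signals equal:      ", np.allclose(x, x_rev))   # False
```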

Why do I believe that quantum states are real?

Although I was not convinced by the argument of the authors, I agree with their main point that quantum states are indeed real.

  1. In TGD Universe quantum states are as real as anything can be. Mathematically quantum states correspond to modes of WCW ("world of classical worlds") spinor fields. They are definitely not fuzzy thought constructs of the theorist needed to predict probabilities (ironically enough, WCW spinor fields allow one to model Boolean cognition, among other things). And the all-important point is that this in no way excludes the quantum probabilistic interpretation. There is also ontological minimalism: WCW spinor fields are the fundamental objects; there is no physical reality behind them: they are the physical realities. More about this aspect in the next section.

  2. The non-determinism of state function reduction is the stone in the shoe of the mainstream physicist, and the claim that quantum states represent an outcome of probabilistic imagination is an attempt to get rid of this stone or at least to forget its painful presence. The idea that the Schrödinger equation would temporarily cease to be true, but in such a manner that the various conservation laws are obeyed, is of course nonsense, and the way to escape this conclusion is to give up the notion of objective reality altogether. This does not make sense physically. The alternative trick is to assume an infinitude of quantum parallel realities. This does not make sense mathematically. What is the preferred basis for these parallel realities, or are all bases allowed? The attempts to answer these questions make clear that the idea is absurd.

What is the anatomy of quantum jump?

What is the anatomy of the quantum jump in TGD Universe?

  1. In TGD Universe the quantum jump consists of a unitary process - I call it U - followed by a state function reduction. The unitary process acts on the initial prepared state by a universal unitary matrix U and generates a lot of entanglement. The quantum jump as a whole replaces the WCW spinor field with a new one, and the laws of physics determining the modes of the WCW spinor field are not given up temporarily in the quantum jump. One can say that the quantum jump replaces the entire time evolution with a new one, and this non-determinism is outside the realm of geometric time and state space. It is this non-determinism which corresponds to that associated with subjective time, identified as a sequence of quantum jumps. The reduction of the act of what we call free will to the quantum jump is an attractive starting point for quantum consciousness theorizing.
  2. Our view about the world is created by quantum jumps between quantum states, in accordance with the basic finding that conscious experience is always accompanied by change (visual consciousness disappears if saccadic motion is made impossible). Consciousness is between two worlds, not in the world.
  3. There is no need to assume a physical reality behind WCW spinor fields: they are the physical realities. Note however that I use the plural: the idea of a unique physical reality, introduced into physics by Galilei, must be given up. This does not require giving up the notion of objective reality completely. One only assumes that there is an infinitude of them, as any physical theory indeed predicts. The key aspect of consciousness is that it makes it possible to study these realities by simply staying conscious! Every quantum jump re-creates the Universe, and cosmic, biological and all other evolutions reduce to this endless re-creation. God as a clocksmith who built the clock and forgot the whole thing after that is replaced with the Divine identified as a moment of re-creation. The new ontology also liberates us from the jailhouse of materialism.
  4. What really happens in the state function reduction? The U process in general entangles the initial state with the environment and creates an enormously entangled universe. State function reduction proceeds for a given CD as a cascade of state function reductions downwards in the fractal hierarchy of CDs inside CDs inside... For a given subsystem the process stops when it is no longer possible to reduce the entanglement entropy by state function reduction (by the Negentropy Maximization Principle). If one accepts the notion of number theoretic entropy, making sense in the intersection of the real and p-adic worlds, the entanglement entropy can also be negative. This kind of entanglement is stable against state function reduction and is a carrier of conscious information. Hence the reduction process stops when the system is negentropic. The conjecture is that living matter resides in the intersection of the real and p-adic worlds (matter and cognition) and is therefore a carrier of negentropic entanglement.

Zero energy ontology unifies state function reduction and state preparation

Zero energy ontology (ZEO) brings in additional constraints and actually unifies state function reduction and preparation.

  1. Zero energy states are pairs of positive and negative energy states at the opposite boundaries of a causal diamond (CD), identified as the Cartesian product of CP2 and the intersection of future and past directed light cones of M^4. The counterparts of positive (negative) energy states in positive energy ontology are initial (final) states. In ZEO quantum jumps have a fractal structure: quantum jumps occur within quantum jumps. This corresponds also to a hierarchy of conscious entities: selves having sub-selves as mental images. The CD is the imbedding space correlate of the self.

  2. The arrow of time is broken already at the level of zero energy states and even at the single particle level: in standard ontology it is broken at the level of quantum dynamics defined by state function reductions. The positive energy part (initial state) of the zero energy state corresponds to a prepared state having, among other things, well-defined particle numbers. For a non-trivial M-matrix (and S-matrix) the negative energy part of the state (final state) cannot have these properties. In the state function reduction the negative energy part (final state) obtains these properties.

  3. The following question shows how a genuinely new idea immediately leads to science fictive considerations. Does ZEO imply that state preparation corresponds to state function reduction for the positive energy part of the state (initial state) and state function reduction to the same process for the negative energy part of the state (final state)? Could these processes be completely symmetrical? What would this symmetry imply? Can one imagine a sequence reduction-preparation-reduction-preparation-... acting alternately on the negative-positive-negative-... energy parts of the state? One would have a kind of time flip-flop: state function reduction at one end of the CD would generate a non-prepared state at the other end. The arrow of time would change in each reduction.

    A mystic might talk about the cycle of birth and rebirth as an eternal return to youth. Eternal return to youth would be nice, but it would make sense only if one forgets that the quantum jump involves the unitary process U. If one assumes that each quantum jump involves also U, the situation changes. Some processes in living matter - say spontaneous self-assembly - seem to have a reversed geometric arrow of time, and one might wonder whether this kind of flip-flop can occur in living matter. Here is a home exercise for the alert reader;-).