Tuesday, September 20, 2011

Speed of neutrino larger than speed of light?

The newest particle physics rumour is that the OPERA team at Gran Sasso, Italy, which detects neutrinos sent from CERN, has reported 6.1 sigma evidence that neutrinos move with a super-luminal speed. The total travel time is of the order of milliseconds and the deviation from the light travel time is of the order of nanoseconds, meaning Δc/c ≈ 10^{-6}, which is roughly 10^3 times larger than the uncertainty 4.5 × 10^{-9} in the measured value of the speed of light. If the result is true, it means a revolution in fundamental physics.

The result is not the first of its kind, and the often proposed interpretation is that neutrinos behave like tachyons. The following is the abstract of an article summarizing the earlier evidence that neutrinos can move faster than light.

From a mathematical point of view velocities can be larger than c. It has been shown that Lorentz transformations are easily extended in Minkowski space to address velocities beyond the speed of light. Energy and momentum conservation fixes the relation between masses and velocities larger than c, leading to the possible observation of negative mass squared particles from a standard reference frame. Current data on neutrino mass squared yield negative values, making neutrinos as possible candidates for having speed larger than c. In this paper, an original analysis of the SN1987A supernova data is proposed. It is shown that all the data measured in '87 by all the experiments are consistent with the quantistic description of neutrinos as combination of superluminal mass eigenstates. The well known enigma on the arrival times of the neutrino bursts detected at LSD, several hours earlier than at IMB, K2 and Baksan, is explained naturally. It is concluded that experimental evidence for superluminal neutrinos was recorded since the SN1987A explosion, and that data are quantitatively consistent with the introduction of tachyons in Einstein's equation.

Personally I cannot take tachyonic neutrinos seriously. I would not, however, choose the easy option and argue that the result is due to bad experimentation, as Lubos and Jester do. This kind of effect is actually one of the basic predictions of TGD and emerged more than 20 years ago. TGD also predicts several Hubble constants and explains why the distance between the Earth and the Moon seems to be increasing although the increase is only apparent. There are many other strange phenomena which find an explanation.

In the TGD Universe space-time is a many-sheeted 4-surface in the 8-D imbedding space M^4 × CP_2. Since the light-like geodesics of a space-time sheet are not light-like geodesics of Minkowski space, travelling from point A to point B along them in general takes longer than along imbedding space geodesics: the space-time sheet is bumpy and wiggly, so the path is longer. Each space-time sheet therefore corresponds to a different light velocity as determined from the travel time. The maximal signal velocity is reached only in the ideal situation in which the space-time geodesics are geodesics of Minkowski space.
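
The following toy calculation illustrates the purely kinematical effect (it is only an illustration with an arbitrary sinusoidal wiggle standing in for the bumpiness of a space-time sheet, not TGD dynamics): a signal moving locally at the maximal speed c but forced to follow a longer path arrives later, so the operationally determined light velocity is smaller than c.

    import numpy as np

    # Toy model: a signal moves locally at the maximal speed c but must follow
    # a wiggly path z = A*sin(k*x) from x = 0 to x = L.  The operationally
    # determined light velocity is the straight-line distance divided by the
    # travel time, i.e. c divided by the ratio of path length to L.
    c = 1.0                        # maximal signal velocity
    L = 1.0                        # straight-line (imbedding space geodesic) distance
    A, k = 0.01, 2 * np.pi * 50    # arbitrary illustrative amplitude and wavenumber

    dx = L / 200000
    x = np.arange(0.0, L, dx) + dx / 2                    # midpoints of the grid
    path = np.sum(np.sqrt(1.0 + (A * k * np.cos(k * x)) ** 2) * dx)
    c_eff = c * L / path                                  # effective light velocity

    print(f"path length = {path:.3f}, effective light velocity = {c_eff:.3f}")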

Robertson-Walker cosmology gives a good estimate for the light velocity in cosmological scales.

  1. One can use the relationship

    da/dt = g_{aa}^{-1/2}

    relating the curvature radius a of the RW cosmology (equal to the M^4 light-cone proper time; the light-like boundary of the cone corresponds to the moment of the Big Bang) and the cosmic time t appearing in the Robertson-Walker line element

    ds^2 = dt^2 - a^2 dσ_3^2.

  2. If one believes that Einstein's equations hold in long length scales, one obtains

    (8πG/3) × ρ = (g_{aa}^{-1} - 1)/a^2.

    One can solve g_{aa} from this equation and therefore get an estimate for the cosmological speed of light (in units of the maximal signal velocity; by item 1, dt = (g_{aa})^{1/2} da) as

    c_# = (g_{aa})^{1/2}.

  3. By plugging in the estimates

    a ≈ t ≈ 13.8 Gy (the actual value is around 10 Gy)

    ρ ≈ 5 m_p/m^3 (5 protons per cubic meter)

    G = 6.7 × 10^{-11} m^3 kg^{-1} s^{-2}

    one obtains the estimate

    (g_{aa})^{1/2} ≈ 0.73.
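
The estimate is easy to check numerically. The following is a minimal sketch of the arithmetic, assuming that a is expressed in seconds (as light-cone proper time) so that the equation of item 2 is dimensionally consistent; the constants are the rough values quoted above, and the code is only a check, not part of the derivation.

    import math

    # Rough check of the estimate in item 3, assuming a is the light-cone proper
    # time expressed in seconds and rho is the mass density of 5 protons per m^3.
    G   = 6.7e-11              # gravitational constant, m^3 kg^-1 s^-2
    m_p = 1.67e-27             # proton mass, kg
    rho = 5 * m_p              # mass density, kg/m^3
    a   = 13.8e9 * 3.156e7     # 13.8 Gy expressed in seconds

    lhs  = (8 * math.pi * G / 3) * rho         # (8 pi G/3) rho, units 1/s^2
    g_aa = 1.0 / (1.0 + lhs * a ** 2)          # solve (8 pi G/3) rho = (g_aa^-1 - 1)/a^2
    print(f"(g_aa)^(1/2) = {math.sqrt(g_aa):.2f}")   # prints about 0.73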

What can we conclude from the result? The light velocity identified as the cosmic light velocity would be 27 per cent smaller than the maximal signal velocity. This could easily explain why neutrinos from SN1987A arrived a few hours earlier than photons: they just travelled along a different space-time sheet containing somewhat less matter. One could also understand the OPERA results.

If these findings survive, they will provide additional powerful empirical support for the notion of many-sheeted space-time. It is sad that TGD predictions must still be verified via accidental experimental findings; it would be much easier to verify TGD systematically. In any case, the Laws of Nature do not care about science policy, and I dare hope that the mighty powerholders of particle physics will sooner or later be forced to accept TGD as the most respectable known candidate for a theory unifying the standard model and General Relativity.

For background see the chapter TGD and GRT of the online book "Physics in Many-Sheeted Space-time" or the article Are neutrinos superluminal?.

9 Comments:

At 10:15 AM, Blogger ThePeSla said...

Welcome back,

This post I find very clear and again remarkably connected to notions of the frontier, especially of our alternative physics bloggers, and again part of the theme of my brief thoughts of last night. This confluence of thought shows that even in our trivial considerations, in the right context, some can take things to the logical deeper conclusions. (Then again, some cannot seem to do it in the academic and funding state of things. Yes, unifying the standard model too seems right on.)

I have long thought we can have what seem to be differences in the light speed invariant, and I suppose this idea connects to that of a many-sheeted, so to speak, hierarchy of Planck values. But at this point by super- or sub-luminal I mean the same thing. For we have to understand these things from a wider perspective than rigid classification and separation of the non-Euclidean geometries.

What about those conclusions early on that inside a quasar light seems to move at warp ten? Are we deep into the heart of visual illusions explained at last?

ThePeSla

 
At 10:08 PM, Blogger Ulla said...

Gosh, it is true. Unbelievable. But so many questions arise that my flood of questions seems minuscule :)

http://arxiv.org/abs/1109.4897

Baryonic contra leptonic matter?
c contra hbar?

This anomaly corresponds to a relative difference of the muon neutrino velocity with respect to the speed of light of (v-c)/c = (2.48 ± 0.28 (stat.) ± 0.30 (sys.)) × 10^{-5}.

It seems an excellent demonstration of the hierarchy.

So what is the true nature of LIGHT? A signal of what? And again the nature of time?

You will kill yourself with work now :) Relax.

 
At 4:51 AM, Anonymous matpitka@luukku.com said...

The key difference between TGD and special and general relativities is that abstract manifold geometry is replaced with sub-manifold geometry. TGD fuses the good aspects of special and general relativities: symmetries and hence conservation laws from special relativity and the geometric description of gravitation from GRT.

The variation of the operationally determined light-velocity is the simplest imaginable, purely kinematical implication. If the findings about neutrinos (and other anomalies discovered but forgotten during the years) are true, they will mean for TGD what the Michelson-Morley experiment meant for special relativity.

The visionary inside me regards the situation as settled; the professional within me does its best to remain skeptical.


By the way, Lubos has a good analysis of possible measurement errors in the neutrino experiment.

P.S. I think that the paradoxical looking finding related to quasars can be understood when one distinguishes between signal velocity and apparent signal velocity (when you turn around a lamp, the light spot on the wall can in principle move with a velocity higher than c, but this does not represent a signal).
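
To make the lamp example concrete (a standard illustration, nothing TGD-specific): a lamp rotating with angular velocity ω and illuminating a wall at distance R produces a spot moving with speed

    v_spot = ω R,

which exceeds c whenever R > c/ω, although no energy or information propagates along the wall at that speed.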

 
At 12:51 PM, Blogger Bert Morrien said...

About the measurement of neutrino speed.
Watching the presentation and scrutinizing the report, I was impressed by
the team's effort but less than happy with the data analysis section.
As a retired electronic engineer my inventive skills were triggered; the result is a simple alternative analysis method.
For each event, add the current detection probability to a running sum. If the TOF is correct the probability is valid: more than half of the events will add a higher probability to the sum and fewer than half a lower one, so eventually the sum is higher than if the TOF is wrong, in which case only half of the events add a higher value and half a lower one.
The correct TOF is found via an iteration procedure.
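
A rough sketch of the kind of scan described above, with made-up numbers only (a fictitious flat extraction waveform, fictitious event times and a fictitious grid of candidate TOF values); it simply sums the detection probabilities at each candidate TOF and keeps the maximum:

    import numpy as np

    # Sketch with made-up numbers, not the OPERA analysis: scan candidate TOF
    # values and keep the one that maximizes the summed detection probability
    # of the observed event times under an assumed normalized waveform w(t).
    rng = np.random.default_rng(0)

    def w(t, width=1.0e-5):
        # assumed flat, normalized "proton extraction" waveform of given width
        return np.where((t >= 0.0) & (t < width), 1.0 / width, 0.0)

    true_tof = 2.4e-3                                # fictitious time of flight, s
    emission = rng.uniform(0.0, 1.0e-5, size=10000)  # emission times inside the pulse
    events = emission + true_tof                     # detected arrival times

    candidates = np.linspace(2.39e-3, 2.41e-3, 401)  # candidate TOF grid, 50 ns steps
    scores = [np.sum(w(events - tof)) for tof in candidates]
    best = candidates[int(np.argmax(scores))]
    print(f"best TOF = {best:.6e} s (true value {true_tof:.6e} s)")

The standard maximum likelihood procedure sums log-probabilities instead of probabilities, but the idea of scanning the offset is the same.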

Comment please. Thanks
See also
http://home.tiscali.nl/b.morrien/FTL/OtherNeutrinoVelocityDataAnalysis.txt

 
At 7:33 PM, Anonymous matpitka@luukku.com said...

The statistical treatment of the neutrino pulse might of course contain an error: I think that the assumption is that the neutrino pulse must have the same shape as the proton pulse, whose duration of the order of a microsecond is much longer than a nanosecond. I am not competent to say anything about this which could be taken seriously: I am not a specialist.

What I can do is to look at the TGD interpretation of the possible super-luminality, and in the TGD framework it has nothing to do with tachyonicity but can be naturally understood in terms of the induced metric and spinor structure, which distinguish TGD from general and special relativities.

The generalizations of the Relativity Principle (Poincare invariance), the Equivalence Principle, and General Coordinate Invariance of course hold true.

 
At 1:30 PM, Blogger Bert Morrien said...

The error is clear.
Read http://static.arxiv.org/pdf/1109.4897.pdf
They did discard invalid PEWs, i.e. PEWs without an associated event, but they did not discard invalid samples in the valid PEWs, i.e. samples without an associated event. As a result, virtually all samples in the PEWs are invalid.
This can be tolerated if they are ignored in the maximum likelihood procedure.
However, the MLP assumes a valid PDF, and because the PDF is constructed by summing the PEWs, the PDF is not valid, as explained below.
The effect of summing is that all valid samples are buried under a massive amount of invalid samples, which makes the PDF no better than a PDF constructed only from invalid PEWs.
This is a monumental and bizarre error.

Why is it that none of these scientists noticed the missing probability data or stumbled over the rumble in the PDF used by the data analysis?

For a more formal proof:
http://home.tiscali.nl/b.morrien/FTL...easurement.txt

Bert Morrien, Eemnes,The Netherlands

 
At 7:35 PM, Anonymous matpitka@luukku.com said...

Dear Bert,

I tried to get to the link about the formal proof but the link failed.

I find it difficult to believe in a statistical error: two groups of metrologists, etc.! For an outsider - at least for me - it is very difficult to say anything about the procedures.

Usually the error is believed to be in the timing and distance measurements. I would bet on timing. On the other hand, standard model neutrinos interact so weakly that the neutrino pulse should have the same shape as the proton pulse.

The claim is world view changing and got support from earlier experiments. Therefore I believe that the experiment will be repeated and work will be done to find the error.

 
At 1:41 PM, Blogger Bert Morrien said...

First my apologies for my arrogant and futile "proof" that the TOF was wrong.
The culprit was my wrong interpretation of the effect of summing up the PEWs.
Summing up the PEWs to construct the PDF does indeed bury the timing information hidden in the PEWs under a "lot of noise".
However, the noise is random while the timing information is not.
Hence, after adding 10,000 PEWs we have amplified the timing information by a factor of 10,000, while half of the noise is added and the other half subtracted, so that the noise is not amplified at all, making "lot of noise" untrue.
As a technician I have been familiar with this principle for a long time; how I could stubbornly ignore my own knowledge in the last weeks is a riddle to me.
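
A small numerical illustration of the averaging principle appealed to above (with made-up numbers, nothing from the OPERA analysis): when N noisy copies of the same waveform are summed, the deterministic part grows like N while independent noise grows only like sqrt(N), so the signal-to-noise ratio improves by roughly sqrt(N).

    import numpy as np

    # Made-up numbers, only to illustrate the averaging principle: summing
    # N noisy copies of a fixed waveform amplifies the waveform by N but the
    # independent noise only by sqrt(N).
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 1000)
    signal = np.exp(-((t - 0.5) / 0.05) ** 2)   # fixed "timing" waveform
    noise_sigma = 5.0                           # per-copy noise, much larger than the signal
    N = 10000                                   # number of summed copies

    total = np.zeros_like(t)
    for _ in range(N):
        total += signal + rng.normal(0.0, noise_sigma, size=t.size)

    baseline = total[t < 0.3]                   # region with essentially no signal
    snr_single = signal.max() / noise_sigma
    snr_summed = (total.max() - baseline.mean()) / baseline.std()
    print(f"single-copy SNR = {snr_single:.2f}, summed SNR = {snr_summed:.1f}")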

===================

A question about possible bias.
The team performed lots of tests before the TOF was established.
After that, they checked for consistency of the TOF.
Did they ever check for bias after the TOF was established?
I cannot find anything of the sort in the report.

Now that the TOF is found, a possible check could be done as follows.
For each event, a small region in the corresponding PEW can be identified that must contain the extremely noisy time information corresponding to this event;
this region is replaced by noise, so that the time information is erased, assuming the TOF is correct.
Then the PDF is rebuilt and the maximum likelihood analysis is performed.
Now the event distribution is matched with the averaged PEW's.
Since only noise is available, the result should be zero.
Is this true?
If not, the nonzero result can be explained in two ways:
1. the time information was not removed and this check failed,
2. the time information was removed and the result is bias.
In both cases the TOF must be incorrect.

Does this make sense?

 
At 8:04 PM, Anonymous matpitka@luukku.com said...

Of course your argument could well make sense. I must however be honest and make clear that I really do not have the needed specialization in the technicalities of statistical methods, so I cannot tell! I even have difficulties with all these PEWs and PDFs, although TOF I can easily guess ;-).

I am just a statistically stupid theoretician who takes the highly refined data, assumes that it is ok (just to be able to apply his wonderful theory;-)), and tries to develop an explanation in a particular theoretical framework.

 
