The reactions to the potential discovery depend on whether the person can imagine some explanation for the finding or not. In the latter case the reaction is denial: most physics bloggers have chosen this option for understandable reasons. What else could they do? The six sigma statistics does not leave much room for objections, but there could of course be some very delicate systematic error involved. Lubos wrote quite an interesting piece about possible errors of this kind and classified them into timing errors, either at CERN or in Italy, and errors in the distance measurement.
The neutrinos are highly relativistic, having an average energy of 17 GeV, much larger than the neutrino mass scale of order .1 eV. The distance between CERN and Gran Sasso is roughly 730 km, which corresponds to a travel time of T = 2.4 milliseconds. The nasty neutrinos arrived at Gran Sasso ΔT = 60.7 ± 6.9 (stat.) ± 7.4 (sys.) ns before they should have done so. This time corresponds to a distance ΔL = 18 m. From this it is clear that the distance and timing measurements must be extremely accurate. The claimed distance precision is 20 cm (see this).
The experimentalists say that they have searched for all possible systematic errors they were able to imagine. The relative deviation of the neutrino speed from the speed of light is (v-c)/c = (2.48 ± 0.28 (stat.) ± 0.30 (sys.))×10^-5, which is much larger than the uncertainty in the value of the speed of light. The effect does not depend on the neutrino energy. A result of about six sigma is in question (for sigmas see this), so that it can be a statistical fluctuation with a probability of about 10^-9 in the case that there is no systematic error.
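As a sanity check, these figures can be reproduced with a few lines of arithmetic; a minimal sketch in Python, using only the baseline and timing values quoted above:

```python
# Consistency check of the OPERA figures quoted above.
c  = 2.998e8       # speed of light in vacuum, m/s
L  = 730.0e3       # CERN-Gran Sasso baseline, m (approximate)
dt = 60.7e-9       # measured early arrival, s

T = L / c          # light travel time over the baseline
print(f"T    = {T*1e3:.2f} ms")   # ~2.4 ms
print(f"dL   = {c*dt:.1f} m")     # ~18.2 m early-arrival distance
print(f"dc/c = {dt/T:.2e}")       # ~2.5e-5 relative velocity excess
```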
I have already written about the TGD based explanation of the effect, assuming that it is real. The tachyonic explanations of the finding fail because a different tachyonic mass is required to explain SN1987A and the recent anomaly and other similar anomalies. Tachyons are of course also in conflict with causality. I repeat the main points here and add some new points that have emerged in numerous email discussions and in the blog discussion at viXra log.
- It is sub-manifold geometry which allows one to fuse the good aspects of both special relativity (the existence of well-defined conserved quantities due to the isometries of the imbedding space) and general relativity (the geometrization of gravitation in terms of the induced metric). As an additional bonus one obtains a geometrization of the electro-weak and color interactions and of the standard model quantum numbers. The choice of the imbedding space is unique. The new element is the generalization of the notion of space-time: space-time identified as a 4-surface has a shape as seen from the perspective of the imbedding space M^4×CP_2. The study of the field equations leads among other things to the notion of many-sheeted space-time.
- In many-sheeted space-time the light velocity is assigned to the light-like geodesics of a space-time sheet rather than to the light-like geodesics of the imbedding space M^4×CP_2. The effective velocity determined from the time to travel from point A to point B is different along different space-time sheets, and therefore so is the signal velocity determined in this manner. The light-like geodesics of a space-time sheet correspond in the generic case to time-like curves of the imbedding space, so that the light velocity is reduced from the maximal signal velocity.
- For a Robertson-Walker cosmology imbedded as a 4-surface (this is crucial!) in M^4×CP_2 (see this), the light velocity would be about 73 per cent of the maximal one, which corresponds to light-like geodesics of the M^4 factor, as the simple estimate of the previous posting demonstrates.
This leaves a lot of room to explain various anomalies (problems with the determination of the Hubble constant, the apparent growth of the Moon-Earth distance indicated by laser-ranging measurements, ...). The effective velocity can depend on the scale of the space-time sheet along which the relativistic particles arrive (and thus on the distance, distinguishing between the OPERA experiment and SN1987A), it can depend on the character of the ultra-relativistic particle (photon, neutrino, electron, ...), etc. The effect is testable using other relativistic particles, say electrons.
- The energy independence of the result fits perfectly with the predictions of the model since the neutrinos are relativistic. There can be a dependence on the length scale, in other words on the distance scale, and this is needed to explain the SN1987A-CERN difference in Δc/c (a numerical check follows this list). The SN1987A neutrinos were also relativistic and travelled a distance L = cT = 168,000 light years, arriving about ΔT = 2-3 hours earlier than the photons (see this). This gives Δc/c = ΔT/T ≈ 1.4-2×10^-9, which is considerably smaller than for the recent experiment. Hence the tachyonic model fails, but a scale and particle dependent maximal signal velocity can explain the findings easily.
- The space-time sheet along which the particles propagate would most naturally correspond to a small deformation of a "massless extremal" ("topological light ray", see this) assignable to the particle in question. Many-sheeted space-time could act like a spectroscope, forcing each (free) particle type onto its own kind of "massless extremal". The effect is predicted to be present for any relativistic particle. A more detailed model requires a model for the propagation of the particles having as basic building bricks wormhole throats, at which the induced metric changes its signature from Minkowskian to Euclidian: the Euclidian regions have an interpretation in terms of lines of generalized Feynman graphs. The presence of a wormhole contact between two space-time sheets implies the presence of two wormhole throats carrying fermionic quantum numbers, and the massless extremal is deformed in the regions surrounding the wormhole throats. At this stage I am not able to construct a detailed model for deformed MEs carrying photons, neutrinos, or other relativistic particles.
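The SN1987A check promised above: a minimal sketch under the naive assumption that the neutrinos' early arrival reflects a uniform velocity excess over the whole journey:

```python
# SN1987A: dc/c = dT/T for a 2-3 hour early arrival over 168,000 light years.
HOURS_PER_YEAR = 8766.0                # 365.25 days * 24 h

T = 168_000 * HOURS_PER_YEAR           # light travel time, in hours
for dT in (2.0, 3.0):                  # early arrival of the neutrinos, hours
    print(f"dT = {dT:.0f} h -> dc/c = {dT/T:.1e}")
# ~1.4e-9 .. 2.0e-9, some four orders of magnitude below OPERA's ~2.5e-5:
# a single tachyonic neutrino mass cannot fit both observations.
```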
If I were a boss at CERN, I would suggest that the experiment be carried out with relativistic electrons, whose detection would be much easier and for which one could use a much shorter baseline. Two further questions (a rough estimate of the required timing precision follows them):
- Could one use both a photon and an electron signal simultaneously to eliminate the need to measure precisely the distance between points A and B?
- Can one imagine using mirrors for photons and relativistic electrons and comparing the times for A→B→A?
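For a sense of scale, here is a rough estimate of the timing resolution such a short-baseline electron experiment would need, assuming the same Δc/c as OPERA reports; the baselines are arbitrary illustrative choices:

```python
# Expected early arrival dt = (dc/c) * L / c for shorter baselines,
# assuming an OPERA-sized velocity excess dc/c ~ 2.5e-5.
c   = 2.998e8      # m/s
dcc = 2.5e-5       # relative velocity excess reported by OPERA

for L in (1e3, 10e3, 100e3):          # hypothetical baselines, m
    dt = dcc * L / c
    print(f"L = {L/1e3:5.0f} km -> dt = {dt*1e12:6.0f} ps")
# ~83 ps at 1 km: a shorter baseline trades the distance-survey
# problem for a harder timing problem.
```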
As a matter of fact, there is an old result by the electrical engineer Obolensky, which I have mentioned earlier (see this), stating that in circuits signals seem to travel at superluminal speed. The work continues the tradition initiated by Tesla, who started the study of what happens when relays in circuits are switched on or off.
- The experimental arrangement of Obolensky suggests that part of the circuit - the base of the so-called Obolensky triangle - behaves as a single coherent quantum unit in the sense that the interaction between the relays defining the ends of the base is instantaneous: the switching of a relay induces a simultaneous signal from both ends of the base.
- There are electromagnetic signals propagating with velocities c_0 (with values (271 ± 1.8)×10^6 m/s and (278 ± 2.2)×10^6 m/s) and c_1 (200.110×10^6 m/s): these velocities are referred to as Maxwellian velocities and they are below the light velocity in vacuum, c = 3×10^8 m/s. c_0 and c_1 would naturally correspond to light velocities affected by the interaction of light with the charges of the circuit.
- There is also a signal propagating with a velocity c_2 ((620 ± 2.7)×10^6 m/s), which is slightly more than twice the light velocity in vacuum (see the ratio check after this list). Does the identification c_2 = c_max, where c_max is the maximal signal velocity in M^4×CP_2, make sense? Could the light velocity c in vacuum correspond to a light velocity which has been reduced by the presence of matter from the value c# = .73 c_max in cosmic length scales to c# = .48 c_max? Note that this interpretation does not require that electrons propagate with a superluminal speed.
- If Obolensky's findings are true and interpreted correctly, simple electric circuits might allow the study of many-sheeted space-time in a garage!
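The ratio check mentioned above, using Obolensky's quoted velocities:

```python
# Do c2 = c_max and c = .48 c_max hold numerically?
c  = 2.998e8     # light velocity in vacuum, m/s
c2 = 620e6       # Obolensky's fastest signal, m/s

print(f"c2/c = {c2/c:.2f}")   # ~2.07: slightly more than twice c
print(f"c/c2 = {c/c2:.2f}")   # ~0.48: the quoted c# = .48 c_max
# Compare with c# = .73 c_max from the imbedded Robertson-Walker
# cosmology in cosmic length scales.
```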
To conclude, if the finding turns out to be true, it will mean for TGD what Michelson-Morley meant for special relativity.
For background see the chapter TGD and GRT of the online book "Physics in Many-Sheeted Space-time" or the article Are neutrinos superluminal?.
Addition: Those string theorists are simply incredible. Here is This Week's hype which appeared in New Scientist.
So if OPERA’s results hold up, they could provide support for the existence of sterile neutrinos, extra dimensions and perhaps string theory. Such theories could also explain why gravity is so weak compared with the other fundamental forces. The theoretical particles that mediate gravity, known as gravitons, may also be closed loops of string that leak off into the bulk. “If, in the end, nobody sees anything wrong and other people reproduce OPERA’s results, then I think it’s evidence for string theory, in that string theory is what makes extra dimensions credible in the first place,” Weiler says.
This is absolute nonsense. What is wrong with the physics community: why has lying become an everyday practice?
Addition: From the comment section of Peter Woit's blog I learned that M-theorists have already presented a direct modification of the TGD explanation for neutrino superluminality by replacing space-time surfaces with branes: the web is a very effective communication tool;-). My guess was that the "discovery" would take place within a week. I would not be surprised if neutrino superluminality became the last straw for drowning M-theory. Sad that we must still tolerate a decade of M-theoretic nonsense.
22 comments:
http://wbabin.net/historical/pappas.pdf Electronics & Wireless World, Dec. 1988, pp. 1162-1165: "Thirty six nanoseconds faster than light". P.T. Pappas & Alexis Guy Obolensky
Note also an earlier one, "Faster than light signals", from 1983 by Harold W. Milnes.
Phil also briefly mentioned: what if there are two different signals? One photon (c), and one neutrino (hbar?, mass?). Then must E=mc^2 be modified? What would induce the mass change? Is there an energy threshold, or a phase change? Is the constancy of energy and mass also challenged?
Too many questions remain open to conclude anything, but an opening has been reached?
Koelman also had a post, but a disappointing one...
Tommaso's post is back.
Thank you. I found the article just an hour ago myself and wrote a comment on the posting, only to find that you had also found the article.
The effect does not relate at all to masses or to E=mc^2 but to a different kinematics due to the assumption that space-time is a 4-surface in M^4×CP_2 rather than an abstract 4-manifold as in special and general relativity. This is one of the simplest implications of TGD.
What is needed is relativistic particles, be they electrons or neutrinos or photons. My guess is that this experiment and what follows it will be for TGD what Michelson-Morley was for special relativity.
There are no other natural explanations, and it is perfectly understandable that bloggers who do not have the slightest idea about the explanatory power of TGD just deny the effect. A few years will be needed before colleagues are mature enough to accept the unavoidable. We are living in the middle of the deepest crisis in the entire history of theoretical physics.
Today I reached an understanding, and this is the best that could happen. It doesn't violate Einstein at all; instead it strengthens his reasoning. And here is also an explanation for the non-locality. I should have time to write down my thoughts, which are only strengthened by this anomaly. Maybe you can then discuss them with me, or am I asking too much?
On Nature there is a very good discussion too. http://www.nature.com/news/2011/110922/full/news.2011.554.html
And the link to your thesis went back here.
It seems that the worst denial phase is changing to a reluctant acceptance of facts.
During the night Lubos has completely changed his attitude toward the breaking of Lorentz invariance: earlier he regarded it as ultimate crackpottism. Now Lubos says that the breaking is possible in string theory on the basis of non-commutative geometry and might explain the effect, and he bases his argument on Witten's authority.
For some strange reason Lubos has not realized that branes could allow one to modify the TGD based explanation without introducing a breaking of Lorentz invariance. No doubt braney string theorists will present the TGD based explanation as their own within a week.
Sascha Vongehr already proposed something like this in his blog: I sent a comment. I hope that the comment survives moderation: see
http://www.science20.com/alpha_meme/neutrinos_can_go_faster_light_without_violating_relativity-82950?nocache=1
Koelman said: "What have you smoked, Sascha?" Is Sascha utterly reductionist?
Kea has something wonderful too. Soon this anomaly will be solved. Not exactly in the way CERN waited for. A Big Grin.
Lubos is a skeptic (string model based explanations are of course an exception for Lubos) but has presented precise arguments for his skepticism (a systematic error in the GPS system, which is of course just a speculation).
Also Jester and Tommaso have presented good critical arguments and told about the physics involved.
Sascha has discussed a rather realistic approach and realized that no breaking of causality is needed if one expands one's mental horizons: an interesting view demonstrating open-mindedness.
Kea has decided that the extremely successful geometrization of physics must be thrown away as something old-fashioned. The same goes for symmetries and the very notion of space-time. What remains seems to be numerology with 3×3 matrices. To me this does not look particularly wonderful, although I am wondering.
The CERN video is up. http://cdsweb.cern.ch/record/1384486
Tommaso's TED talk: http://www.youtube.com/watch?v=CgJ3pV2hL_o
Neutrinos tell about TIME. Are neutrinos following the symmetry?
http://prola.aps.org/abstract/PR/v159/i5/p1089_1 from 1967. A productive scientist with many interesting papers.
Matti, maybe N.A. Kozyrev is your forerunner? Our bodies cannot travel through solid matter, but the current of aetheric energy certainly can - and this is one of the many things that Keely, Tesla, Kozyrev and others demonstrated. A star or planet must continually draw energy from its environment in order to "stay alive". Kozyrev had made very similar conclusions about our Sun back in the 1950s, concluding that stars act as "machines that convert the flow of time into heat and light".
Kozyrev's claimed discovery of signals proceeding backwards in time, as well as instantaneous "signals", fits with the TGD view about time.
Phase conjugate laser beams are one practical example of signals propagating backwards in time.
In zero energy ontology they correspond to a reversal of the arrow of geometric time, which has a realization in the mathematical properties of zero energy states.
Instantaneous "signals" correspond to the reaction of the sender and receiver as a single quantum coherent particle like system: this is possible because space-time sheets are correlates for particles.
I saw an article somewhere lately about information, not used, that travels backwards in time. Important for quantum computations.
David Schroeder says at vixra: neutrinos can take “shortcuts off the brane through large extra dimensions...
What brane? Neutrinos are clearly not ON OUR BRANES IN THIS INTERACTIVE WORLD. Do we measure them through wormholes?
http://blog.vixra.org/2011/09/19/can-neutrinos-be-superluminal/#comment-10870
Brannen on Kea's blog: What's unique about neutrinos is that they are the only particle whose speed we've accurately measured that **does not** participate in electromagnetic interactions.
About the same statement. We begin to see the world from a completely new corner. The post-Einsteinian era? Something is happening now with physics?
Photons-leptons? The graviton? What about the "magneton"? So few talk of the Higgs now.
Imaginary masses have also been discussed.
http://www.noncommutative.org/index.php/the-exceptional-map-and-mersenne-primes.html Look what happens! You should say something :)
I had thought that the number of Mersenne primes had been proven to be finite. I was wrong. If the number is finite, then there would be only a finite number of really fundamental p-adic length scales (note however the possibility of Gaussian Mersennes, whose number is probably much larger - or is it also finite?). This would be a very strong physical prediction in the TGD framework.
I tried to tell you that the (in)finitude of the Mersenne primes remains an open question. :)
I've been reading a bit of philosophy of math; so far it seems only two positions don't fail at first sight, namely fictionalism (http://plato.stanford.edu/entries/fictionalism-mathematics/#RevHer) and Platonic realism. Though only one of the positions might be true, there is no indication that they are mutually exclusive.
It would be very interesting to hear, from the fictionalist point of view, say, what would happen in the story of mathematics if the number of Mersenne primes were finite. Accepting the sentence "thermodynamics is the square of quantum physics" as given - which sounds important, somehow:
How many would there be, in addition to those already found, if any more? Is there a qualitative mathematical difference between finding Mersenne primes by human cognition only and with the help of computer calculation? Can the puzzle of mass and particles be definitively solved without finding all members of the finite set of Mersenne primes? And would the story of particle masses change with the finding of each Mersenne prime? Also in psychological time, which AFAIK according to the story emerges from thermodynamics, which would be, as said, the square of quantum theory. I'm trying to imagine what kind of plane QT×QT could be, or a page in the - how thin? - book of finite Mersenne primes.
This is already getting perhaps too mythological, so let's add a new plot: what if the proof of the finiteness of the Mersenne primes entailed that their exact number cannot be known - a kind of uncertainty principle - that the exact number of pages cannot be known, even if and when we jump to the last page, as curious readers - and Mersenne computers - often do, without first reading all the pages, including those with smaller page numbers?
What would be most enjoyable, in your opinion? :)
Am I annoying you with my comments? This is such an opportunity to learn physics.
We must try to remember that the most probable outcome of this is no statistical difference. But I think the principle has evolved further, and now we don't need to dwell on that.
Sabine Hossenfelder has an interesting note from 2007/07 on the Planck scale. http://backreaction.blogspot.com/2007/07/planck-scale.html
In many formulas c = 1, among them the Compton wavelength and the Planck length (the relation between the two given by the Planck constant). Now, with a superluminal speed this changes a little. Setting c = 1 also assumes a constant speed of light in vacuum, but actually it varies quite a bit (10-fold at least) downward, depending on other interactions such as gravity, as also told of on the viXra blog(?), and this should also bear on the Compton wavelength and the Planck ratio. http://www.wired.com/wiredscience/2011/09/galaxies-einstein-relativity/
Newton's constant G is dimensionful, which she points out today in http://backreaction.blogspot.com/2011/09/on-universal-length-appearing-in-theory.html
She seems to doubt GUT theory and the fundamentality of the Planck scale.
This just in:
http://www.rdmag.com/News/2011/09/General-Science-Physics-LHC-discovers-the-protons-big-ego/
To Santeri:
I *assume* - like an experimentalist does - that Mersenne primes characterize various copies of hadron physics (also their leptonic analogs). Also other p-adic length scales - those corresponding to certain powers of 2 - are proposed to be important and to characterize quark mass scales. It would be interesting if this number were finite, but there would be no practical consequences for physics in human length scales. Already the Mersenne prime next to M_127, characterizing the electron, corresponds to a completely super-astronomical length scale.
In the case of primes there is no other way than the "experimental" computer based approach, since there is no formula for the primes. Quantum computers will make the process of finding Mersennes and primes much more effective.
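For concreteness, the standard computer-based test for Mersenne primes is the Lucas-Lehmer test; a minimal sketch (naive trial division for the exponent, so only practical for small p):

```python
def lucas_lehmer(p: int) -> bool:
    """Return True if M_p = 2**p - 1 is prime (p must be an odd prime)."""
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):    # p - 2 squarings modulo M_p
        s = (s * s - 2) % m
    return s == 0

def is_prime(n: int) -> bool:
    """Naive trial division, adequate for small exponents."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Exponents of Mersenne primes below 200: 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127.
print([p for p in range(3, 200, 2) if is_prime(p) and lucas_lehmer(p)])
# M_127, mentioned above as characterizing the electron, passes the test.
```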
Matti and Santeri,
The infinite or finite aspect of such numbers, p-adic or otherwise, can seem a matter of relative point of view where the multiverse and many-worlds concepts apply.
Of course the reality of the physical case would be resolved in the universe if it is quantum-computed and thus more efficient.
But the quantum case is not the deepest explanation or level of what our reality is. The same holds for numbers and geometry.
Such things are quasi-finite (or quasi-infinite) in that they can be said to cut off or to continue - in any case I am not talking about some sort of postmodern relativism here, but about that which does make some of our reality scientific and actual.
This is a rather deep question and seems undecided, at least in terms of philosophy, when we reach these higher theories of new physics.
The PeSla
http://arxiv.org/PS_cache/arxiv/pdf/1002/1002.2441v1.pdf
Flavor sensitivity to θ_13 and sign(Δm^2_32) for neutrinos from solar WIMP annihilation.
He uses the sensitive mixing angle θ_13, Kea θ_12. The problem is the sensitivity: how to get the right-handed neutrinos interacting? Note the adiabatic neutrino.
He also talks of annihilation of photons and photon bremsstrahlung in other papers. What is that? Can it touch your Higgs scenario?
This makes the clue to DM leptonic; still, mass is fermionic (QCD), so the fermions are constraining and make for the finiteness?
I get support for this from Kea's blog. http://arxiv.org/PS_cache/arxiv/pdf/0911/0911.5012v1.pdf "Neutrinos as Dirac particles" (in part).
Then you see DM has to be the DM hierarchy; it cannot be only leptonic. The double worlds are needed. This is where Kea has gone wrong - what does she begin to see now? She did not like my comment about it :) But she still has a good model.
http://www.galaxyzooforum.org/index.php?topic=277933.msg562452#msg562452 also up and down, makes me ask???
What have I done? Sry.
http://www.galaxyzooforum.org/index.php?topic=277933.150