Sunday, September 02, 2012

New evidence for anomalies of radio-active decay rates

Lubos Motl reported on new evidence for periodic variations of nuclear decay rates, presented by Sturrock et al in their article Analysis of Gamma Radiation from a Radon Source: Indications of a Solar Influence. The abstract of the article summarizes the results.

This article presents an analysis of about 29,000 measurements of gamma radiation associated with the decay of radon in a sealed container at the Geological Survey of Israel (GSI) Laboratory in Jerusalem between 28 January 2007 and 10 May 2010. These measurements exhibit strong variations in time of year and time of day, which may be due in part to environmental influences. However, time-series analysis reveals a number of periodicities, including two at approximately 11.2 year^-1 and 12.5 year^-1. We have previously found these oscillations in nuclear-decay data acquired at the Brookhaven National Laboratory (BNL) and at the Physikalisch-Technische Bundesanstalt (PTB), and we have suggested that these oscillations are attributable to some form of solar radiation that has its origin in the deep solar interior. A curious property of the GSI data is that the annual oscillation is much stronger in daytime data than in nighttime data, but the opposite is true for all other oscillations. This may be a systematic effect but, if it is not, this property should help narrow the theoretical options for the mechanism responsible for decay-rate variability.

Quantitative summary of findings

The following gives a brief quantitative summary of the findings. Radioactive decays of nuclei have been analyzed in three earlier studies as well as in the recent one.

  1. The BNL data concern ^36Cl and ^32Si nuclei. A strong day-time variation on the time scale of a month was observed. Two frequency bands, ranging from 11.0 to 11.2 year^-1 and from 12.6 to 12.9 year^-1, were observed.
  2. The PTB data concern ^226Ra nuclei. Again a strong day-time variation was observed, with frequency bands ranging from 11.0 to 11.3 year^-1 and from 12.3 to 12.5 year^-1.
  3. The GSI data concern ^222Rn nuclei. Instead of a strong day-time variation, a strong night-time variation was observed; the annual oscillation was centered on mid-day. 2 year^-1 is the next strongest feature. Also a night-time feature with a peak at 17 hours was observed. There are further features at 12.5 year^-1, 11.2 year^-1 and 11.9 year^-1. All three data sets lead to oscillations in frequency bands ranging from 11.0 to 11.4 year^-1 and from 12.1 to 12.9 year^-1.
  4. Bellotti et al studied ^137Cs nuclei deep underground in Gran Sasso. No variations were detected.

Could exotic nuclear states explain the findings?

The TGD based new physics involved with the effect could relate to excitations of exotic nuclear states induced by em radiation arriving from the Sun. This would change the proportions of various excited nuclei with nearly the same ground state energy and thereby affect the average radioactive decay rates.

  1. The exotic nuclei emerge in the model of the nucleus as a nuclear string with nucleons connected by color flux tubes having a quark and an antiquark at their ends (see this). The excitations could also be involved with cold fusion. For normal nuclei the color flux tubes would be neutral, but one can also consider excitations for which a quark pair carries a net charge +/- e. This would give rise to a large number of nuclei with the same em charge and mass number but with anomalous proton and neutron numbers. If the energy differences for these excitations are in the keV range, they might represent a fine structure of nuclear levels not detected earlier.

    Could these exchanges take place also between different nuclei? For instance, could it be that in the collision of deuterium nuclei one nucleus is neutralized by the exchange of a scaled-down W boson, so that the Coulomb wall disappears and a cold nuclear reaction becomes possible? It seems that the range of this scaled variant of the weak interaction is far too short. The M_127 variant of the weak interaction, with a W boson mass very near to the electron mass, could make this mechanism possible.

  2. The exchange of weak bosons could be responsible for generating these excitations: in this case two neutral color bonds would become charged with opposite charges. If one takes seriously the indications for a new 38 MeV particle (see this), one can even consider a scaled variant of weak interaction physics with a weak interaction length scale near the hadronic length scale (see this). E(38) could be a scaled-down Z boson with a mass of about 38 MeV.
Em radiation from the Sun inducing transitions of ordinary nuclei to their exotic counterparts could be responsible for the variation of the radioactive decay rates. Of course, exotic nuclei in the above sense are only one option, and the following argument applies quite generally.

Kinetic model for the evolution of the number of excited nuclei

A simple model for the evolution of the number of excited nuclei is as follows:

dN/dt= kJ-k1N for t∈ [t0,t1] ,

dN/dt= -k1N for t∈ [t1,t0+T] .

J denotes the flux of incoming radiation and N the number of excited nuclei. t0 corresponds to the time of sunrise and t1 to the time of sunset, and T is 24 hours in the approximation that the Sun rises at the same time every morning. The time evolution of N(t) is given by

N(t) = kJ/k1 + (N(t0) - kJ/k1) exp[-k1(t-t0)] for t∈ [t0,t1] ,

N(t)= N(t1)exp[-k1(t-t1)] for t∈ [t1,t0+T] .
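The piecewise solution above is easy to check numerically. The following sketch (parameter values are purely illustrative, not fitted to any of the data sets) integrates the rate equation directly with a forward Euler step and compares the result with the closed-form expression:

```python
import math

def excited_population(t, t0, t1, k, k1, J, N0):
    """Analytic solution: dN/dt = kJ - k1*N for t0 <= t < t1 (daytime),
    dN/dt = -k1*N for t1 <= t < t0 + T (nighttime)."""
    Nasy = k * J / k1                       # asymptotic daytime value kJ/k1
    if t <= t1:                             # daytime: relaxation toward kJ/k1
        return Nasy + (N0 - Nasy) * math.exp(-k1 * (t - t0))
    N1 = Nasy + (N0 - Nasy) * math.exp(-k1 * (t1 - t0))
    return N1 * math.exp(-k1 * (t - t1))    # nighttime: pure decay from N(t1)

def euler(t0, t1, T, k, k1, J, N0, steps=200000):
    """Direct numerical integration of the same piecewise ODE over one day."""
    dt = T / steps
    N, t = N0, t0
    for _ in range(steps):
        source = k * J if t < t1 else 0.0   # radiation switched off at sunset
        N += (source - k1 * N) * dt
        t += dt
    return N

t0, t1, T = 6.0, 18.0, 24.0                 # sunrise 6:00, sunset 18:00, hours
k, k1, J, N0 = 1.0, 0.3, 10.0, 0.0          # illustrative rate parameters
analytic = excited_population(t0 + T, t0, t1, k, k1, J, N0)
numeric = euler(t0, t1, T, k, k1, J, N0)
print(analytic, numeric)                    # should agree to a few decimals
```

The same run also makes the qualitative point of the next section visible: for large k1 the population saturates quickly after sunrise and dies quickly after sunset, while for small k1 it peaks at sunset and decays slowly through the night.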

Explanation for the basic features of the data

The model can explain the qualitative features of the data rather naturally.

  1. The period of 1 year obviously correlates with the distance from the Sun. A .5 year period correlates with the fact that the distance from the Sun is minimal twice during a year. The day-time/night-time difference can be explained by the fact that at night-time the em radiation does not penetrate the Earth. This also explains why Gran Sasso, deep underground, observes nothing.

  2. The large long-time-scale variation in the day-time data for BNL and PTB seems to be in apparent contrast with that in the night-time data at GSI. It is however possible to understand the difference.

    1. If the rate parameter k1 is large, one can understand why variations are strong at day-time at BNL and PTB. For a large value of k1, N(t) increases rapidly to its asymptotic value Nmax = kJ/k1 and stays there during the day, so that day-time variations due to the solar distance are large. At night-time N(t) rapidly decreases to zero, so that the night-time variation due to the variation of the solar distance is small.

    2. For GSI the strong variation is associated with the night-time data. This can be understood in terms of a small value of k1, which can indeed be smaller for ^222Rn than for the nuclei used in the other studies. During daytime N(t) slowly increases to its maximum N(t1) and decreases slowly during night-time. Since N(t1) depends on the time of the year, the night-time variation is large.

    3. The variations on time scales of roughly a month should be due to variations in the intensity of the incoming radiation. The explanation suggested in the article is that the dynamics of the solar core has these periodicities, manifested also as periodicities of the emission of radiation at the frequencies involved. These photons would naturally correspond to photons emitted in transitions between excited states of nuclei in the solar core or possibly in the solar corona, which has a temperature of about 300 eV. One could in fact think that the mysterious heating of the solar corona to a temperature of 3 million K could be due to the exotic excitations of nuclei by radiation coming from the Sun. At this temperature the maximum of the black body distribution with respect to frequency corresponds to an energy of about .85 keV, consistent with the proposal that the energy scale for the excitations is keV.

    4. The difference of the frequencies 12.49 year^-1 and 11.39 year^-1 is in good approximation 1 year^-1, which suggests a modulation of the average frequency with a period of one year due to the rotation of the Earth around the Sun. The average frequency is 11.94 year^-1, that is roughly 1/month. The explanation proposed in the article is in terms of the rotation velocity of the inner core, which would be smaller than but of the same order of magnitude as that of the outer core (frequency range from 13.7 to 14.7 year^-1). It is however not plausible that keV photons could propagate from the inner core of the Sun unless they are dark in the TGD sense. In the TGD framework it would be natural to assign the frequency band to the solar corona.

Can one assign the observed frequency band to the rotation of solar corona?

The rotation frequency band assignable to the photosphere is too high by about Δf = 3 year^-1 as compared to that appearing in the decay rate variation. Could one understand this discrepancy?

  1. One must distinguish between the sidereal rotation frequency fS measured in the rest system of the Sun and the rotation frequency observed from the Earth, which orbits the Sun with frequency f = 1 year^-1: these frequencies are related by fE = fS - f, giving the frequency range 12.7 to 13.7 year^-1. This is still too high by about Δf = 2 year^-1.

  2. Could the corona rotate more slowly than the photosphere? The measurements by Mehta give the value range 22-26.5 days, meaning that the coronal sidereal frequency fC would be in the range 14.0-16.6 year^-1. The range of frequencies observed at Earth would be 13-15.6 year^-1, too high by about Δf = 2 year^-1.

    If I have understood correctly, the coronal rotational velocity is determined by using sunspots as markers and therefore refers to the magnetic field rather than to the gas in the corona. Could the rotation frequency of the gas in the corona be about Δf = 2 year^-1 lower than that of the magnetic spots?
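The conversion between a rotation period and the frequency observed from the orbiting Earth, used above, can be sketched in a few lines (a minimal illustration; the 22-26.5 day range is the one quoted from Mehta's measurements):

```python
# Convert a solar rotation period (in days, measured in the Sun's rest
# frame) to a rotation frequency in year^-1, both in the Sun's frame and
# as observed from the Earth (subtract the orbital frequency, 1 year^-1).
YEAR_DAYS = 365.25

def rotation_frequencies(period_days):
    f_sun = YEAR_DAYS / period_days    # cycles per year in the Sun's rest frame
    f_earth = f_sun - 1.0              # frequency observed from the Earth
    return f_sun, f_earth

for period in (22.0, 26.5):
    f_sun, f_earth = rotation_frequencies(period)
    print(f"{period:5.1f} d -> {f_sun:5.2f} year^-1 (Sun), {f_earth:5.2f} year^-1 (Earth)")
```

This reproduces the quoted bands: 22-26.5 days corresponds to roughly 13.8-16.6 year^-1 in the Sun's frame and roughly 12.8-15.6 year^-1 as seen from the Earth.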

One can develop a theoretical argument to understand the rotational periods of the photosphere and the corona and why they could differ by about Δf = 2 year^-1.
  1. Suppose that one can distinguish between the rotation frequencies of magnetic fields (the magnetic body in many-sheeted space-time) and of the gas. Suppose that the photosphere (briefly 'P') and the corona (briefly 'C') can be treated in the first approximation as rigid spherical shells, thus having moment of inertia I = (2/3)mR^2 around the rotational axis. The angular momentum per unit mass is dL/dm = (2/3)R^2ω. Suppose that the value of dL/dm is the same for the photosphere and the corona. If the rotation velocity of the magnetic fields determined from magnetic spots is the same as the rotation velocity of the gas in the corona, this implies fC/fP = (RS/RC)^2, where RS is the solar radius, identifiable as the radius of the photosphere. The scaling of 13 year^-1 down to 11 year^-1 would require RC/RS ≈ 1.09. This radius should correspond to the hottest part of the corona, at a temperature of about 1-2 million K.

    The inner solar corona extends up to (4/3)RS (see this), which would give an average radius of the inner coronal shell of about 1.17RS. The constancy of dL/dm(R) would give a differential rotation with frequency varying as 1/R^2. If the frequency band reflects the presence of differential rotation, one has Rmax/Rmin ≈ (fmax/fmin)^(1/2) ≈ (15/13)^(1/2) ≈ 1.07.

  2. One can understand why the angular momentum density per mass is constant if one accepts a generalization of the Bohr quantization of planetary orbits, originally proposed by Nottale and based on the notion of the gravitational Planck constant hbar_gr. One has hbar_gr = GMm/v0, assigned with the flux sheets mediating the gravitational interaction between the Sun and a planet or some other astrophysical object near the Sun. The dependence on the solar mass and the planetary mass is fixed by the Equivalence Principle. v0 has dimensions of velocity and therefore naturally satisfies v0 < c. For the three inner planets one has v0/c ≈ 2^-11. Angular momentum quantization gives mR^2ω = n × hbar_gr, giving R^2ω = nGM/v0, so that the angular momentum per mass is integer valued. For the inner planets n has the values 3, 4, 5.

  3. One could argue that for the photosphere and the corona, regarded as rigid bodies, a similar quantization holds true, with the same value of n since the radii are so near to each other. Also v0 should be larger. Consider first the photosphere. One can apply the angular momentum quantization condition to the photosphere approximated as a spherical shell and a rigid body. I ω_P = nGmM/v0P for n=1 gives (2/3)R^2ω_P = GM/v0P. For v0P = c one would obtain ω_P/ω_E = (3/2)(RE/R)^2(v0/v0P). For R_P = .0046491 RE (the solar radius) this gives ω_P/ω_E ≈ 12.466 for the value v0/c = 4.6 × 10^-4 used by Nottale (see this): I have often used the approximate nominal value v0/c = 2^-11, but now this approximation is too rough. Taking into account the frequency shift due to the Earth's orbital motion one obtains ω_P/ω_E ≈ 11.466, which is consistent with the lower bound of the observed frequency band and would correspond to Rmax. The value v0P = v0C = c looks unrealistic if interpreted as a physical velocity of some kind; the increase of RC however allows one to reduce the value of v0C, so that it seems possible to understand the situation quantitatively.

    If one wants to generalize this argument to differential rotation, one must decompose the system into spherical shells or more general elements rotating at different velocities and having different values of hbar_gr assignable to the flux tubes connecting them to the Sun and mediating the gravitational interaction. This decomposition must be physical.
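The purely geometric part of the argument - constancy of dL/dm = (2/3)R^2ω forcing f ∝ 1/R^2 - can be checked directly (the frequency values below are the illustrative ones from the text, not fits):

```python
import math

# If dL/dm = (2/3) * R**2 * omega is the same for two shells, then
# f_C / f_P = (R_P / R_C)**2, i.e. R_outer / R_inner = sqrt(f_inner / f_outer).
def radius_ratio(f_outer_shell, f_inner_shell):
    return math.sqrt(f_inner_shell / f_outer_shell)

# Scaling the photospheric 13 year^-1 down to the coronal 11 year^-1:
print(radius_ratio(11.0, 13.0))   # ~1.09, the R_C/R_S quoted in the text

# Width of the observed band interpreted as differential rotation:
print(radius_ratio(13.0, 15.0))   # ~1.07, i.e. R_max/R_min
```

Both numbers agree with the estimates quoted above, and the second stays comfortably inside the inner corona, which extends to (4/3)R_S.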

For the background see the chapter Nuclear string hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". See also the article.


At 12:03 AM, Blogger Ulla said...

A study involving Robert Nemiroff and Giovanni Amelino-Camelia, published in June in the journal Physical Review Letters, threatens to set theoretical physicists back several decades by scrapping a whole class of theories that attempt to reconcile Einstein's theory with quantum mechanics.

"Originally we were looking for something else, but were struck when two of the highest energy photons from this detected gamma-ray burst appeared within a single millisecond," Nemiroff said. When the physicists looked at the data more closely, they found a third gamma ray photon within a millisecond of the other two.

Computer models showed it was very unlikely that the photons would have been emitted by different gamma ray bursts, or the same burst at different times. Consequently, "it seemed very likely to us that these three photons traveled across much of the universe together without dispersing," Nemiroff said. Despite having slightly different energies (and thus, different wavelengths), the three photons stayed in extremely close company for the duration of their marathon trek to Earth. "But nothing that we know can un-disperse gamma-ray photons," Nemiroff said. "So we then conclude that these photons were not dispersed. So if they were not dispersed, then the universe left them alone. So if the universe was made of Planck-scale quantum foam, according to some theories, it would not have left these photons alone. So those types of Planck-scale quantum foams don't exist."
The photons' near-simultaneous arrival indicates that space-time is smooth, as Einstein suggested, rather than pixelated as some modern theories require.

To prove that Planck-scale pixels don't exist, the researchers would have to rule out the possibility that the pixels dispersed the photons in ways that don't depend in a straightforward way on the photons' wavelengths, Amelino-Camelia said. The pixels could exert more subtle "quadratic" influences, for example, or could have an effect called birefringence that depends on the polarization of the light particles. Nemiroff and his colleagues would have to rule out those and other possibilities. To prove the photon trio wasn't a fluke, the results would then require independent confirmation; a second set of simultaneous gamma-ray photons with properties similar to the first must be observed.

If all this is accomplished, Amelino-Camelia said, "at least for some approaches to the quantum-gravity problem, it will indeed be a case of going back to the drawing board."

At 3:21 AM, Anonymous Matti Pitkanen said...

The finding has implications for theories assuming discretization or quantum foam at the Planck length scale. Loop quantum gravity is an example of this kind of theory. It made the prediction that photons would propagate with different velocities depending on frequency. This prediction turned out to be wrong.

Competitors seem to disappear from the arena one by one. When is it my turn? ;-) Will Higgs be my fate?

At 6:02 AM, Blogger Ulla said...

What would you then do? There is a change to the lighter, I saw it :)

Penrose talked of an anyonic condensation phase in his consciousness talk. Helper, this guy, and Hameroff of course.


Microtubules - Electric Oscillating Structures in Living Cells (Google Workshop on Quantum Biology)

Clarifying the Tubulin bit/qubit - Defending the Penrose-Hameroff Orch OR Model

Classical and Quantum Information in DNA

At 9:56 AM, Blogger ThePeSla said...

Matti, I see you posted on this bit of evidence too... seeing Lubos before yours I've already entered a comment there (if it is not moderated negatively)

But we have suspected such things for awhile now in the various new alternative physics speculations.


At 8:24 PM, Anonymous Matti Pitkanen said...

Many indications for new nuclear physics have emerged over the years. The first indications were discovered already by the pioneers of quantum theory as a temperature dependence of nuclear decay rates. Varying radioactive decay rates and cold fusion belong to these indications. There are also books about bio-fusion.

The textbook dogma is however that the worlds of nuclear and atomic physics are completely isolated. Facts do not matter when people have a dogma. Although nuclear physics was born before the era of computers and modern data analysis, its empirical rules are taken as God-given.

The existence of keV excitations - small energies as compared to the MeV scale of ordinary nuclear excitations - would mean a completely new branch of nuclear physics. It would also suggest a new technology based on a direct interaction between atomic and nuclear physics, since atomic physics also produces keV energies for nuclei around Z ≥ 10.

Energy production based on cold fusion would be one application. The artificial generation of valuable minerals - modern civilization desperately needs metals - would be a second important technology and would realize the ancient dream of the alchemists without massive nuclear reactors and high temperatures.

How fast the revolution takes place depends on how long it takes to beat the paralyzing power of dogmatic Big Science. In the thirties this kind of revolution could have taken place in a decade or two. Now the situation is totally different. Modern does not always mean progressive!

At 10:42 AM, Blogger Ulla said...

Sabine has a new idea about gravity: she suggests that the Planck constant can vanish at high temperature.

At 12:30 PM, Blogger Ulla said...

This seems interesting. E8, octonions, Kähler etc.

At 10:56 PM, Anonymous Matti Pitkanen said...

To Ulla:

Thank you for the link to Sabine's article. It is interesting to compare her approach to my own.

a) Sabine introduces a varying Planck constant by making it a field - a quantum field, it seems. This is something very different from what I am proposing. In my case the minimal vision relies on the many-valuedness of the normal derivatives of imbedding space coordinates as functions of canonical momentum densities at partonic 2-surfaces. This is essentially due to the vacuum degeneracy of Kähler action, which is certainly the key feature of TGD distinguishing it from standard QFT dynamics.

The *effective* Planck constant is an integer multiple of the ordinary one and has as its geometric correlate the poly-sheeted space-time with different sheets corresponding to the same values of canonical momentum densities.

Warning: this multi-sheetedness, familiar from the Riemann surfaces of many-valued analytic functions, is not the same thing as many-sheetedness! In string theory stacks of branes represent the geometric counterpart of this. At partonic 2-surfaces at the boundaries of causal diamonds the sheets would indeed coincide, just as the branes in a stack become infinitesimally near to each other.

b) Sabine's motivation for introducing a dynamical Planck constant is the idea that in the short length scale limit the Planck constant would approach zero, and in this sense the theory would become classical. The hope is that this would resolve the problems of quantum gravity due to divergences reflecting short distance dynamics. Also black holes and related problems are short distance phenomena. This would be a taming of quantum gravitation at short distances.

In the TGD framework sub-manifold gravity could be seen as an alternative manner to solve the problem. In the TGD Universe black holes are excluded because they are not imbeddable classically: the basic blackhole solutions are imbeddable only down to some critical radius.

Space-time as a 4-surface - sub-manifold gravitation - could be seen as a TGD based solution to the same problem (and to many other problems plaguing GRT and GRT based cosmology, the most important of them being the problems with the definition of energy-momentum as a Noether charge, which is simply lost).

Wormhole throats representing elementary particles at the basic level replace black hole horizons, and blackhole interiors are replaced with Euclidian regions of space-time having an interpretation as lines of generalized Feynman diagrams.

At 11:03 PM, Anonymous Matti Pitkanen said...

To Ulla:

Still a comment on Sabine's posting.

As practically all people in the field, Sabine takes the Planck length, proportional to sqrt(G), as something microscopic and fundamental. It is ironic that the gravitational constant G characterizes dynamics at very *long* length scales. Hence Planck length mysticism has no real justification, and it is quite possible that all this sweat, blood, and tears have been shed in vain!

a) In the TGD framework the CP_2 size R replaces the Planck length as a genuine geometric length. The Planck length is only a formal quantity with dimensions of length, deduced essentially as sqrt(G), which is just a quantity with dimensions of length (in units for which c=1, hbar=1 holds true). Indeed, the study of preferred extremals leads to Einstein's equations with cosmological constant as a consistency condition: G and Lambda are now predictions rather than inputs! This is a very important distinction!! I wish I could deduce their values in terms of R and hbar: G = kR^2/hbar, with k a numerical constant, is what dimensional analysis gives. For large hbar, G goes to zero.

b) Does the fact that G = kR^2/hbar goes to zero for large hbar mean that gravitation becomes very weak for large hbar - rather than for small hbar, as in Sabine's vision?

The original idea behind the hierarchy of Planck constants was that coupling strengths scale as 1/hbar, and this makes coupling strengths small so that the perturbation series converges. This would be true for a given sheet of the resulting poly-sheeted space-time.

Could the scaling of the Planck constant, zooming up the size of microscopic systems, tame both gauge interactions and quantum gravity for a given sheet of the *multi-sheeted* (!!) structure? Is this indeed the fundamental reason for the hierarchy of Planck constants and dark matter? Why has no one asked why Nature should need dark matter? ;-)

Clearly, my own proposal for the taming of gravity is the diametrical opposite of Sabine's proposal. The distinction comes from the fact that we have different motivations.

a) Sabine wants classical theory at short length scales and wants therefore hbar-->0 limit.

b) I want the perturbation theory to converge rapidly, and since the perturbative expansion is in powers of the gauge coupling strength proportional to 1/hbar, the limit hbar ---> infinity makes higher perturbative corrections small.

At 11:25 PM, Blogger Ulla said...

This is something my female brain could not handle :)

I at once compared to the behavior of the second Law and thought this could be some version of entropic gravity.

Also the very different behavior of hbar struck me. But Sabine's approach would mean invisibility? So she makes her spacetime sheets DARK?

What happens if G is replaced by your equation? Then would this all be a hbar ratio? It would also have implications for time travel through a wormhole which opens up? This would mean tachyons are possible?

Why is so little talked about G? The G=hbar=c is no good practical alliance?

At 12:45 AM, Anonymous Matti Pitkanen said...

Sabine's varying Planck constant is completely different from what I proposed. It has nothing to do with dark matter nor with entropic gravity, which by the way seems to be buried and forgotten ;-). The funeral expenses were however millions of euros from the EU ;-).

I am considering only what happens to G ∝ 1/hbar when hbar has a spectrum coming as integer multiples. The standard value of hbar gives the standard G. This has nothing to do with time travel or tachyons ;-).

It makes sense to ask whether Sabine with her hbar --> 0 or I with my hbar ---> infinity am right. In my own somewhat-more-than-opinion, the idea that small hbar means classicality is wrong at the level of QFT perturbation theory, although at the level of the algebra of observables it is very natural.

The reason is that the real expansion parameter in perturbation theory is alpha = g^2/(4*pi*hbar) and thus proportional to 1/hbar rather than hbar. This strange looking situation is due to the use of propagators in the expansion.

Indeed, tree diagrams in QFT (they correspond to hbar = infinity!) give what is interpreted as the classical approximation, and loops give the genuine quantum corrections in powers of alpha. What explains this strange situation is the non-analyticity in hbar. This makes the limit hbar --> 0 very delicate, since perturbation theory ceases to converge. It is not even clear whether this limit makes sense at all.
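The 1/hbar scaling of the coupling strength is easy to make concrete with the electromagnetic case (a toy illustration only, using the standard definition of the fine-structure constant; the hbar -> n*hbar scaling is the hierarchy discussed above, not established physics):

```python
import math

# The electromagnetic coupling strength alpha = e^2/(4*pi*eps0*hbar*c)
# is proportional to 1/hbar, so scaling hbar -> n*hbar shrinks alpha by 1/n.
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c = 2.99792458e8           # speed of light, m/s

def alpha(hbar_eff):
    return e**2 / (4 * math.pi * eps0 * hbar_eff * c)

print(1 / alpha(hbar))         # ~137.036 for the standard hbar
for n in (1, 2, 4):
    print(n, alpha(n * hbar))  # alpha drops as 1/n
```

So a perturbative expansion in powers of alpha converges faster, not slower, as the effective hbar grows.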

At 6:05 AM, Blogger Ulla said...

Verlinde gets his share. Must look for his latest talks; maybe he explains how DM and ordinary matter differ.

3.4.12 Kavli

At 8:31 PM, Anonymous matti Pitkanen said...

Entropic gravity is an example of hype physics. It is easy to demonstrate where things go wrong conceptually: I did this in a little article a couple of years ago:

There is considerable evidence from the behavior of neutrons in the gravitational field of the Earth that they obey the Schrödinger equation. Entropic gravity does not allow this and is therefore experimentally excluded.

It is of course possible that entropy and, more generally, thermodynamics have a correlate or analogy at the level of space-time geometry.

This is however something different from Verlinde's primitive dimensional analysis.

If you are a name - that is, have made significant contributions a few decades ago - you can manage to create temporary hype, enough for getting billions of euros. Decision makers make their decisions on rather irrational grounds nowadays, and lobbying is an important part of this process.

The "iltalypsy" (evening milking, that is, milking one extra round) performed by string theorists on this Russian billionaire is a second example of what has happened to theoretical physics. Sad.

At 10:29 AM, Blogger Ulla said...

While still employing the metric of curved spacetime that Einstein used in his field equations, the researchers argue the presence of dark matter and dark energy - which scientists believe accounts for at least 95 percent of the universe - requires a new set of gravitational field equations that take into account a new type of energy caused by the non-uniform distribution of matter in the universe. This new energy can be both positive and negative, and the total over spacetime is conserved.

At 7:20 PM, Anonymous Matti Pitkanen said...

I saw the article. This is one more attempt to tinker with Einstein's equations in order to understand dark matter. Who could count how many attempts of this kind have been made? The basic conceptual problem, of which these authors too are unaware, is that one cannot speak of global conservation of energy in GRT: for some reason this simple fact is not realized.

In the long discussion at viXra.org only some participants - in particular Lubos - understood this completely non-trivial fact.

The conclusion from the fact that the Noether currents associated with translations, supposed to define four-momentum currents, vanish is that Einstein's equations cannot follow from a variational principle. This is what indeed happens in TGD: they emerge as mere consistency conditions for preferred extremals.

It is amusing that Einstein originally derived his equations just from the condition of local conservation! Once again Einstein's mathematically uneducated intuition was nearer to the truth! Math is an extremely powerful tool but also an extremely dangerous one.

