Wednesday, July 27, 2011

SUSY according to TGD

Standard SUSY seems to be dead. TGD predicts a different variant of SUSY (for the basic phenomenology see this). Super-conformal invariance for light-like 3-surfaces is the basic symmetry. In D=8 one does not need Majorana spinors, so that quark and lepton numbers, assigned to different chiralities of imbedding space spinors, are conserved separately. The many-fermion states assignable to partonic 2-surfaces give representations of SUSY with a large value of N, but this SUSY is badly broken by the geometry of CP2. The right-handed neutrino generates the least broken SUSY. The R-parity associated with nuR is broken since right- and left-handed neutrinos mix.

Although SUSY is not needed to stabilize the Higgs mass in TGD, the anomaly of the muonic g-2 requires SUSY. The following strongly symmetry-inspired picture allows rather precise predictions for sfermion masses.

  1. In TGD's p-adic thermodynamics based SUSY the mass formulas are the same for particles and sparticles; only the p-adic length scale is different. This resolves the extremely problematic massivation issue of supersymmetric QFTs.

  2. Ordinary leptons are characterized by Mersennes or Gaussian Mersennes: (M127, MG,113, M107) for (e, μ, τ). If also sleptons correspond to Mersennes or Gaussian Mersennes, then (selectron, smuon, stau) should correspond to (M89, MG,79, M61) if one assumes that the selectron corresponds to M89. This is of course a prediction assuming additional number-theoretic symmetry.

    Selectron mass would be 250 GeV and smuon mass 13.9 TeV. The g-2 anomaly for the muon suggests that the mass of the selectron should not be much above .1 TeV, and M89 indeed fits the bill. Valence quarks correspond to primes not smaller than the Gaussian Mersenne k=113, which suggests that squarks have k ≥ 79 so that squark masses should be above 13 TeV. If sneutrinos correspond to the Gaussian Mersenne k=167, then sneutrinos could have mass below the electron mass scale. The selectron would remain the only experimental signature of TGD SUSY at this moment.

  3. One decay channel for the selectron would be to electron + sZ or neutrino + sW. sZ/sW (spartner of a weak boson) would eventually decay to a possibly virtual Z + neutrino / W + neutrino: that is, a weak gauge boson plus missing energy. The neutralino and chargino need not decay in the detection volume. The lower bound of 46 GeV for the neutralino mass comes from intermediate gauge boson decay widths. Hence this option is not excluded by experimental facts.
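As a quick check of the mass estimates in points 2 and 3, the p-adic scaling can be sketched in a few lines: p-adic mass scales go as sqrt(p) ≈ 2^(k/2), so a sparticle with p-adic index k_s is heavier than its fermion partner (index k_f) by a factor 2^((k_f - k_s)/2). The pairings follow the text (electron 127 → selectron 89, muon 113 → smuon 79); the helper function is only illustrative.

```python
# p-adic mass scaling: m(sparticle) = m(fermion) * 2^((k_fermion - k_sfermion)/2),
# since the p-adic mass scale is proportional to sqrt(p) with p ~ 2^k.

def scaled_mass_gev(m_fermion_gev, k_fermion, k_sfermion):
    """Scale a fermion mass to its sparticle's p-adic length scale."""
    return m_fermion_gev * 2 ** ((k_fermion - k_sfermion) / 2)

M_E, M_MU = 0.000511, 0.10566  # electron and muon masses in GeV

print(scaled_mass_gev(M_E, 127, 89))   # ~268 GeV, of order the quoted 250 GeV
print(scaled_mass_gev(M_MU, 113, 79))  # ~13850 GeV, i.e. the quoted 13.9 TeV
```

The exponents 19 = (127-89)/2 and 17 = (113-79)/2 reproduce the 250 GeV and 13.9 TeV figures quoted above to within the expected order-unity factors.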

The muonic g-2 anomaly is an excellent test for this vision. A poor man's calculation (see this), suitably modifying the MSSM calculation, gives a value consistent with data if the mass of sW is twice the mass of the W boson and sneutrinos are light in the W boson mass scale. The result does not, in lowest order, depend appreciably on the mass of the muonic sneutrino. The 250 GeV selectron remains the prediction testable at LHC.

The basic differences between TGD and MSSM and related approaches deserve to be noticed (for the experimental aspects of MSSM see this). If Higgses and Higgsinos are absent from the spectrum, SUSY in the TGD sense does not introduce flavor-changing neutral currents (the FCNC problem plaguing MSSM-type approaches). In the MSSM approach the mass spectrum of superpartners can only be guessed using various constraints; in a typical scenario the masses of sfermions are assumed to be the same at the GUT unification scale, so that at long length scales the mass spectrum of sfermions is inverted from that of fermions, with stop and stau being the lightest superpartners. In the TGD framework p-adic thermodynamics and the topological explanation of the family replication phenomenon change the situation completely, and the spectrum of sfermions is very naturally qualitatively similar to that of fermions (genus-generation correspondence is the SUSY-invariant answer to Rabi's famous question "Who ordered them?"!). This is essential for the explanation of the g-2 anomaly, for instance. Note that experimental searches concentrating on finding the production of stop or stau pairs are bound to fail in the TGD Universe.

Another key difference is that in TGD the huge number of parameters of MSSM is replaced with a single parameter: the universal coupling characterizing the decay

sparticle → particle + right-handed neutrino,

which by its universality is very "gravitational". The gravitational character suggests that it is small, so that SUSY would not be badly broken, meaning for instance that sparticles are rather long-lived and R-parity is a rather good symmetry.

One can try to fix the coupling by requiring that the decay rate of the sfermion is proportional to the gravitational constant G or, equivalently, to the square of the CP2 radius

R ≈ 10^(7/2) × (G/hbar0)^(1/2).

The sfermion-fermion-neutrino vertex, coupling to each other the same fermion M4 chiralities, involves the gradient of the sfermion field. The Yukawa coupling - call it L - would have the dimension of length. For massive fermions in M4 it would reduce to a dimensionless coupling g between different M4 chiralities. In the equal mass case g would be proportional to L(m1+m2)/hbar, where mi are the masses of the fermions.

  1. For the simplest option L is expressible in terms of CP2 geometry alone and corresponds to

    L = kR.

    k is a numerical constant of order unity. hbar0 denotes the standard value of Planck constant, whose multiple the effective value of Planck constant is in the TGD Universe in dark matter sectors. The decay rate of the sfermion would be proportional to k^2 R^2 (M/hbar)^3 ≈ k^2 × 10^7 × (G/hbar0) × (M/hbar)^3,

    where M is the mass scale characterizing the phase space volume for the decays of the sfermion: M is the mass of the sfermion multiplied by a dimensionless factor depending on mass ratios. The decay rate is extremely low, so that R-parity conservation would be an excellent approximate symmetry. In cosmology this could mean that zinos and photinos would decay by an exchange of sfermions rather than directly, and could give rise to a dark matter like phase as in MSSM.

  2. The second option carries also information about the Kähler action: apart from a numerical constant of order unity one would have k = αK. The Kähler coupling strength αK = gK^2/(4π hbar0) ≈ 1/137 is the fundamental dimensionless coupling of TGD, analogous to a critical temperature.

  3. For the option which "knows" nothing about CP2 geometry, the length scale would be proportional to the Schwarzschild radius

    L = kGM.

    In this case the decay rate would be proportional to k^2 G^2 M^2 (M/hbar)^3 and extremely low.

  4. For the purely kinematic option, which one cannot call "gravitational" since it "knows" only about the sfermion mass and Planck constant, one would have

    L = k × hbar/M.

    The decay rate would be proportional to the naive order of magnitude guess k^2 (M/hbar) and fast, unlike in all the "gravitational" cases. R-parity would be badly broken. Again the option k ∝ αK can be considered.
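To make the "extremely low" versus "fast" contrast concrete, here is a back-of-the-envelope comparison of the three dimensionally distinct options in natural units. The values k = 1 and M = 250 GeV (the selectron estimate above) are assumptions; only the orders of magnitude are meaningful.

```python
# Order-of-magnitude comparison of the decay-rate options listed above,
# in natural units (hbar = c = 1, energies in GeV). Assumptions: k = 1,
# sfermion mass scale M = 250 GeV (the selectron estimate).

M_PL = 1.22e19          # Planck mass in GeV
HBAR_GEV_S = 6.58e-25   # hbar in GeV*s, converts a rate to a lifetime
G = 1.0 / M_PL**2       # Newton's constant in natural units

def lifetime_s(rate_gev):
    """Lifetime in seconds for a decay rate given in GeV."""
    return HBAR_GEV_S / rate_gev

M = 250.0  # GeV

rate_cp2 = 1e7 * G * M**3         # option 1: L = kR with R^2 ~ 10^7 G/hbar0
rate_schwarzschild = G**2 * M**5  # option 3: L = kGM
rate_kinematic = M                # option 4: L = k hbar/M

for name, rate in [("CP2 radius", rate_cp2),
                   ("Schwarzschild", rate_schwarzschild),
                   ("kinematic", rate_kinematic)]:
    print(f"{name}: rate ~ {rate:.1e} GeV, lifetime ~ {lifetime_s(rate):.1e} s")
```

With these assumptions the CP2-radius option gives a lifetime of order a second, the Schwarzschild option an astronomically long one, and the kinematic option an essentially instantaneous decay, matching the qualitative statements in the text.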

Note that also in mSUGRA the gravitational sector in short length scales determines the MSSM parameters via flavor-blind interactions, and also the breaking of SUSY via the breaking of local SUSY in short scales.

In my opinion the success of TGD, using only simple scaling and symmetry arguments instead of a heavy computational approach, demonstrates how important it is to base models on a genuine theory. Blind phenomenology can be completely misleading when basic principles are replaced with ad hoc assumptions. Here of course the problem is that superstrings and M-theory can provide no principles helping the phenomenologist.

Addition: Tommaso Dorigo mentions the eprint Supersymmetry and Dark Matter in Light of LHC 2010 and XENON 100 Data as one of the first reactions of SUSY specialists to the 2010 LHC data presented at Europhysics 2011. The message is that one can find simple enough variants of MSSM which can cope with the experimental constraints (the number of parameters of MSSM is more than a hundred). The authors are however cheating a little bit: they have dropped from their fit the really painful muonic g-2 anomaly requiring a light wino or zino mass scale and/or some light sleptons, say a light sneutrino. Taking this constraint into the fit *very* probably kills it. If not, the authors would not have forgotten to mention it!;-)

For TGD SUSY see appropriate section in the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

16 Comments:

At 11:51 PM, Blogger Ulla said...

http://indico2.lal.in2p3.fr/indico/getFile.py/access?contribId=35&sessionId=13&resId=0&materialId=slides&confId=1507

Higgs Hunting Theory summary 30.7.

 
At 3:57 AM, Anonymous matpitka@luukku.com said...

Thank you. Experimental input has been extremely inspiring. Eagerly waiting new results.

Since results are formulated as tests of existing theories, there is no other way than to overcome the repugnance towards the competing models and at least learn about their many problems. It is amazing how hopelessly artificial the visions about the mass spectrum of sparticles are, because some key ideas - such as the p-adic length scale hypothesis and thermodynamics and the topological interpretation of the family replication phenomenon - are lacking. Theoreticians are led to a completely wrong track.

In this manner I have learned also a lot about TGD SUSY during EP 2011 and after it. A bird's eye view of the particle spectrum is what is needed, and I think I might now understand also the sparticle portion of it.

 
At 4:23 AM, Blogger Ulla said...

I would need more knowledge of the background to be able to interpret the results.
1. The doubts about Monte Carlo simulations. They say the method is much improved, but still they needed another method for really rare events.
2. The W- and Z-bosonic behavior. The background curve is evenly plotted, with the 95% CI also quite even, but this cannot be the case? There should be narrow and broad parts. Sometimes bars are used and they show this behavior.
3. At really low energies the plot is very uncertain? This is not shown at all, and one gets the impression it is as certain as the rest.
4. Gibbs showed his second-last plot with a collapse first, then a reconciling phase and last a 'jump' up. Does this tell anything essential?

Still the Higgs well is there, but the singularity is dead? As I understand it the well is also impossible. If the well is made of bosons, how can then adding more bosons ever make them pop up? The 'agglutination' (braiding) becomes even less probable? Here Kea (and y) are on the right track?

A link on thermodynamic quantum arithmetics that solves some problems of von Neumann? http://arxiv.org/ftp/arxiv/papers/1107/1107.3924.pdf

 
At 8:24 AM, Anonymous matpitka@luukku.com said...

I explained in the previous posting my understanding of these plots. It is always a comparison of data with the expected. Now a Higgsless model is compared with the standard model having Higgs.

There are always many events having an interpretation in terms of decay products of Higgs even if there is no Higgs, and their number can be predicted: one obtains a band since fluctuations are present. The n-sigma bands represent these kinds of events, assuming that there is no Higgs, normalized with the prediction of the theory with Higgs. n characterizes the largest fluctuation in the band.

If the observations are not consistent with this interpretation something interesting is involved. If the experimental curve is above the 5-sigma band, then it is decided that Higgs is found, provided that the statistics is good enough - in other words sufficiently many 1/fb:s have been collected. If it is below, then also something strange happens: for some reason the production of final states is much smaller than in the theory with Higgs.

This is like throwing darts. Only darts which hit the target are counted, and one can calculate the number of hits per area and express it using 1/fb as a unit. When sufficiently many 1/fb:s are collected (actually much less is enough;-)), one can reliably deduce the probabilities of the scores 10, 9, 8, ..., 1 just by counting their numbers and dividing by the total number of hits. If the participant is blind or possesses motor skills comparable to those of mine;-), the number of n:s (n=1,...,10) should be proportional to the ratio of the area of the n-ring to the total area.


If some participant is able to score big numbers much more often than the blind man or me, we conclude that he or she is very good at throwing darts, which in turn suggests that he or she is spending far too much time in the local pub. Higgs has revealed itself;-).

Tommaso Dorigo has a nice posting about dangers of a blind application of this approach.

 
At 12:32 PM, Blogger ThePeSla said...

Matti,

Of course if we throw a dart at the number line the chances are zero for all practical purposes that we do not find a transcendental number.

In my post today I think I found an area where your theories may apply to distance measure as the diagonal of matrices and of cubic structures.

See Hexacubes.
Of course from my view limiting the numbers to ten or eleven things is not good enough to transcend standard motion spaces. Clearly, (2^6 - 1)x 2^5 in the count of + or - 32 could be interpreted as 11D - 5D representations of continuous algebras of such vaguely defined dimension of the Clifford type.

In my hexaplex and hexanex count differences some binary representations occur on many scales where the count of each is equivalent. I am open to a better term for this. Note that Kea's relation to the zeta function considerations at 1/2 is very relevant. But how this can be a good future and of shown importance in society for the careers of those involved may take some time- the next century maybe as we are in a sort of dark ages awhile.

ThePeSla

 
At 2:00 PM, Blogger Ulla said...

Frasca points out that the strong coupling is not solved, and the Wjj bump points in that direction too. There may be a possibility. Of an almost 6 sigma, 2 are left, maybe due to counteracting forces (goldboson?). He did not however like my comment about this. http://marcofrasca.wordpress.com/2011/08/03/whats-going-on-with-higgs-particle

Strong coupling means heavy Higgs, but now we have hints at light Higgs (many particles -state)?

 
At 7:10 PM, Anonymous matpitka@luukku.com said...

Dear Pesla,

my point was not that one obtains rationals as estimates for probabilities in all experiments, but an attempt to illustrate the basic idea behind the talk about sigmas.

 
At 7:24 PM, Anonymous matpitka@luukku.com said...

Ulla,

concerning Higgs, my bet is that it will not be found. A copy of hadron physics will be found instead. The 145 GeV bump is of course the central question mark here. It will take one year to get finally rid of Higgs.

After this theoreticians will begin to beat their heads against the technicolored wall, and after the discovery of sufficiently many hadron-like states and grave objections against technicolor, someone will propose a TGD-like theory. They will of course forget to mention who actually developed the idea in the first place. "Rediscovery", or maybe the more ambitious "discovery", of p-adic thermodynamics will be the great challenge, but they might be able to achieve even that.

This sounds cynical, but I have had so many opportunities to see how the average career builder behaves ethically.

By the way, I did my best to find out what the Finnish media has said about the historical Europhysics 2011 conference. Not a single comment. Finnish particle physicists have been completely silent about the matter. An optimist could hope that they stop thinking only during July, which is the month for summer vacations, but I am afraid that this is wishful thinking.

In the seventies, in the days of Laurikainen, the discovery of a new particle was a big media event deserving a colloquium. The only Finnish comment about Europhysics 2011 that I found on the web was by Syksy Räsänen, who is a cosmologist. The academic assembly line is on, but in Finland theoretical particle physics - defined as a genuine intellectual discipline - has been dead and buried for a long time.

 
At 7:53 AM, Blogger ThePeSla said...

Hi Matti,

I hope you did not take my standard comment as a challenge- rather it points toward a confirmation of where some deep arithmetic applies.

I would post a challenge if I had to but would that be objective as science and dishonest science to want to not find new breakthroughs - I do not want to remain in the dark?

So I have failed to communicate or connect to what amounts so far to our less than what is generally considered elegant recursive ideas when there is no other way to compute things.

From a philosophic perspective I offer you a metaphor from the television show Nightline- Beyond Belief. You have taken up the issue of consciousness and certainly we do have strange experiences like out of body and life flashing by before death. The conclusions were that there are yet unexplained phenomena which may not be resolved short of what is beyond.

So the metaphor is not so much what our consciousness is but the age old question if we have a soul and what is its nature.

BTW I no longer think that the key to physics is about particles like some bloggers repeat as a party line. There is much we need to clear up about numbers and topological fields and how say they describe these brain processes of which our central problem is applying measure to the linear aspects of field ideas which can be done somewhere beyond the current logic of the rationals as per finally a science magazine article on it last week- beyond the transfinites.

Why would there be a limitation to the first few Mersenne primes?

Such a question I feel applies to the diagonal of orthogons to the center if any at least and there one just might apply an intelligible p-adic theory more recursive than pi as powers of hypervolumes.

I guess it is too hard to research this, invention or discovery Ulla, as we are doing now- so I will stand down until the climate changes more.

ThePeSla

 
At 8:02 AM, Blogger ThePeSla said...

See:

http://www.newscientist.com/article/mg21128231.400-ultimate-logic-to-infinity-and-beyond.html?page=1

link from my post a few back and with conclusions I intuited earlier for some time now. I do not think these problems will take another century.

ThePeSla

 
At 9:42 AM, Blogger ThePeSla said...

"An immediate consequences of this results is that the observation of the Higgs particle
may be not enough to understand the proper mechanism of mass generation."

Well, Ulla
I am not impressed with this paper from your supplied link save maybe the quote above rings true- so the issue is what we mean by SUSY indeed, but their conclusions come as a surprise perhaps only to them.

ThePeSla

 
At 7:45 PM, Anonymous matpitka@luukku.com said...

To Pesla,

the question about the soul is fascinating. Already the word is so beautiful, and as an admirer of Chopin's music, full of soul, I would like to believe that also Chopin's soul exists in some sense;-).

If one however identifies consciousness in terms of quantum jumps - a moment of consciousness as something between two realities - it is not possible to assign anything conserved to conscious entities. How could it be, if the basic aspect of consciousness is recreation, change!

Conscious experience could however be analogous to an endless stream: this would be a weaker assumption than the assumption of a soul, which requires something - say a personal trait - to be conserved. We identify our periods of sleep as periods of unconsciousness, but it could be just that we do not remember anything about these periods.


Concerning Mersennes, the basic reason for considering only rather few of them is the p-adic length scale hypothesis. The p-adic length scale is proportional to sqrt(p), p prime. For Mersennes one has p = 2^n - 1, n some prime (not any prime!), and there are very, very few Mersenne primes. For n=127 you get the electron length scale.

The next Mersenne after M_127 you can find from a list on the web (I do not remember the value of n), and the corresponding p-adic length scale is completely super-astrophysical. Particles with a mass scale corresponding to this Mersenne might exist, but they would be extremely light, massless for all practical purposes. In any case, for humans it is impossible to do any quantum physics in this length scale. The needed momentum resolution would be completely beyond our technology.

The beauty of the p-adic length scale hypothesis is that the p-adic length scales increase exponentially. In cosmological scales you have only a few hundred relevant p-adic length scales. The predictions for length and mass scales are also exponentially sensitive to the value of k in p ≈ 2^k, so that the hypothesis is easily testable. A superstring hypist would say "even super-testable", as one of them did some years ago while proposing some test for superstrings (already forgotten, of course;-)).
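The scarcity of Mersenne primes is easy to verify directly; a small sketch using the standard Lucas-Lehmer test lists all Mersenne prime exponents up to 700 and shows that the next exponent after 127 is 521.

```python
def is_mersenne_prime(p):
    """Lucas-Lehmer test: is M_p = 2^p - 1 prime? (p must itself be prime)."""
    if p == 2:
        return True  # M_2 = 3 is prime; the Lucas-Lehmer loop needs odd p
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

exponents = [p for p in primes_up_to(700) if is_mersenne_prime(p)]
print(exponents)  # [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607]
```

Only 14 Mersenne primes below 2^700: the exponential thinning of the sequence is what makes the length scale hypothesis so restrictive.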

 
At 7:55 PM, Anonymous matpitka@luukku.com said...

To Ulla:

Conventional Higgs has *two* functions.

a) All components of Higgs except one become longitudinal polarizations of weak bosons W,Z.

b) The remaining component develops vacuum expectation value and gives mass to the gauge bosons and also fermions.


The deviation of the Higgs field from the vacuum expectation, which is constant, remains in the theory as a dynamical degree of freedom, and this is the Higgs which has been searched for.


In TGD Higgs has only *one* basic function: Higgs gives an additional polarization state for gauge bosons, not only for W and Z but also for the photon. Since all Higgs components disappear, one cannot speak about a Higgs vacuum expectation, and mass generation must be based on some other mechanism. There is no Higgs to be searched for.

p-Adic thermodynamics for conformal weight proportional to mass squared provides the *microscopic* mechanism for massivation without a Higgs vacuum expectation. There is also an additional contribution which is important for gauge bosons but small for fermions.

I wish I could stop some colleague and shout to him "MICROSCOPIC MECHANISM OF MASSIVATION, DO YOU HEAR: MICROSCOPIC!", but because I am a zombie in the academic world, I cannot do this: the colleague just does not have any sensory percepts of me. We live in different worlds;-).

 
At 8:23 PM, Anonymous matpitka@luukku.com said...

To Pesla:

The New Scientist article about infinity is excellent. The continuum hypothesis and the axiom of choice are just the stuff which the average physicist does not bother to take seriously.

Cantor predicts an infinite hierarchy of infinities; countable infinity and the number of points of the real line are only particular examples of infinities in an infinite series of them.

The continuum hypothesis states that there is no additional Cantorian infinity between countable infinity and the number of real points. The article tells that the problems of set theory relate to the continuum hypothesis and the axiom of choice.

I was surprised to find that it was already Goedel who first proposed an infinite ladder of infinities, tailored in such a manner that the continuum hypothesis and the axiom of choice held true. Goedel's L world was the mathematicians' GUT. The article talks about the newest construction of this kind, which also manages to keep these axioms.

My own approach to infinite primes/integers and rationals is different and is based on the notion of divisibility rather than the successor axiom (every integer has a successor). Divisibility is the decisive notion instead of the number of elements in the set, and it gives rise to a more refined notion of infinity than Cantor's approach based on cardinality: two sets have the same number of elements if there is a bijection between them.

An infinite hierarchy of infinite rationals emerges also now, but in a number-theoretical sense. I am not sure whether these different levels of infinities are countable: a mathematician could deduce the answer without difficulty by using the basic definitions, which can be expressed in terms of a hierarchy obtained by considering first the set of finite subsets of integers, after that the set of finite subsets of this set, and so on.

Maybe the mathematician would tell that this hierarchy of infinities is still between countable infinity and the continuum, so that the notion of divisibility would break the continuum hypothesis by making it possible to distinguish between infinities not distinguishable in the Cantorian sense.

The ladder of infinite primes/integers/rationals obtained by repeated second quantization of an arithmetic supersymmetric QFT, with bosons and fermions labelled by finite primes, is something new. What is amazing is the connection with quantum physics: maybe the counterpart of the L-world of Goedel is after all the world according to the ultimate theory of physics!


Then comes the question about reals. Different number fields (reals and various p-adics) are glued together along common numbers, rationals in particular, to form a larger structure. Each point of the real axis is replaced with an infinite-dimensional space: the set of real units obtained as ratios of infinite rationals at some level of infinity.

This structure has an enormous representative power, and one could dream that the world of classical worlds and the spinor fields in this space could allow a representation in terms of these real units. Brahman-Atman Identity would be realized: the structure of a single space-time point, invisible to ordinary physics, would represent the world of classical worlds! A single space-time point would be the Platonia!

 
At 3:09 PM, Anonymous Anonymous said...

The holy number of the coiled kundalini serpentine (which in Finnish language has the simple name "Luonto"), 3½, is the sum of ½, 1 and 2. In other words, the sum of One, Divisibility axiom represented by halving it into ½ and successor axiom represented by 2.

As to these two sets of infinities categorized by either axiom, what happens when the Axiom of Choice is applied? When one creates sets picking from and consisting of both kinds of infinities? CP-sets of infinities (after Cantor and Pitkänen infinities ;) (which also rhymes nicely with the speed of light c and prime p, and what else?)?

As to Mersenne primes, M127, with 39 digits in the decimal system, is by the way the 12th Mersenne prime and the biggest one computed by hand, in 1876, by Lucas. From the 13th Mp to the currently biggest known, the 47th Mp, they have been calculated with computers and lately with shared processing capacity provided by the Internet. The 13th Mersenne is M521 and has 157 digits.

Now, AFAIK it is still at least formally an open question whether the set of Mersenne primes is finite or infinite, i.e. are there bigger ones than the currently known biggest Mersenne prime? Is there any proof beyond the physical actualization of computation results?

 
At 10:08 PM, Anonymous matpitka@luukku.com said...

Dear Anonymous:

I am not sure whether the following rule of thumb holds true. If you consider the set formed by all finite subsets of a countable set, you get a countable set. If *all* subsets (also infinite ones) are allowed, the set is not countable. If the continuum hypothesis holds true, it has at least as many elements as the real line.

2-adic integers are a good example. Consider first all 2-adic numbers with a *finite* number of non-vanishing bits (finite as real numbers). You get a countably infinite set, since you can map these bit sequences to natural numbers in an obvious manner.

Consider next all possible bit sequences: most of them have an infinite number of bits. These numbers form naturally the 2-adic continuum with 2-adic topology and differentiability. 2-adics can be mapped to the real continuum in a simple manner: the canonical identification allows one to do this continuously. The cardinality of these bit sequences is the same as for the reals, as the rule of thumb would predict.
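The canonical identification just mentioned can be sketched for truncated expansions; the map sends a p-adic expansion sum(b_n p^n) to the real number sum(b_n p^-n). The function name is my own choice for illustration.

```python
def canonical_identification(digits, p=2):
    """Map a truncated p-adic expansion sum(b_n * p^n) to the real number
    sum(b_n * p^-n); `digits` lists the p-adic digits b_0, b_1, b_2, ..."""
    return sum(b * p ** (-n) for n, b in enumerate(digits))

# The 2-adic integer ...1111 (all bits set, i.e. -1 as a 2-adic number)
# maps to 1 + 1/2 + 1/4 + ... -> 2 in the real topology:
print(canonical_identification([1] * 20))  # ~1.999998
```

Note how a 2-adically small tail (high powers of 2) maps to a real-small tail, which is what makes the identification continuous.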

The hierarchy of infinite integers is based on a number-theoretical view of infinity, and it would seem that these infinities are between the countable infinity and the infinity defining the number of points of the real axis. This reflects the fact that number-theoretic infinity is a much more refined notion than Cantorian infinity.

One must however be very cautious in making comparisons: an infinite integer is defined purely algebraically. It is not defined as the cardinality of some set. There is of course also the notion of infinity associated with ordinals.

I have the impression that it is conjectured, but not proven, that the number of Mersenne primes is infinite. What is not known is whether Hilbert's conjecture, stating that the Combinatorial Hierarchy consisting of Mersenne primes defined recursively as

M(n+1)= 2^{M(n)}-1

M(1)= 2

is infinite.

This series begins with 2, 3, 7, 127, 2^127-1, all of which are primes; one does not know whether the still higher members are primes, since classical computers do not have enough computational capacity to check this.
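The first members of this recursion can be checked with a few lines (the primality test is plain trial division, adequate only for the small members):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# The recursion M(n+1) = 2^M(n) - 1 with M(1) = 2: first four members.
seq = [2]
for _ in range(3):
    seq.append(2 ** seq[-1] - 1)

print(seq)                            # [2, 3, 7, 127]
print(all(is_prime(m) for m in seq))  # True
# The next member, 2^127 - 1, is a known 39-digit prime (Lucas, 1876); the
# primality of the member after that, 2^(2^127-1) - 1, is beyond any
# known computation.
```
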

I have proposed that all members of the hierarchy could correspond to codes. The code associated with M_7 = 127 would allow a representation using 6 bits: the 64 codons would correspond to the genetic codons. M_127 would correspond to what I call the memetic code: its codons would allow a representation as sequences of 126 bits.

Note that M_127 corresponds to the electron, and the corresponding secondary p-adic time scale is .1 seconds, the fundamental time scale in living matter defining among other things the alpha rhythm.

 
