
Thursday, December 31, 2015

Solution of the Ni62 mystery associated with Rossi's E-Cat

In my blog a reader calling himself Axil made a highly interesting comment. He reported that in the cold fusion ashes from Rossi's E-Cat there is a 100 micrometer sized block containing almost pure Ni62. This is one of the stable Ni isotopes but not the lightest one, Ni58, whose isotope fraction is 67.8 per cent. Axil gave a link providing additional information and I dare to take the freedom to attach it here.

The Ni62 finding looks really mysterious. One interesting observation is that the 100 micrometer size of the Ni62 block corresponds to the secondary p-adic length scale for W bosons. Something deep? Let us however forget this clue for a moment.

One can imagine all kinds of exotic solutions, but I guess that it is the reaction kinetics - "dark fusion + subsequent ordinary fusion" repeated again and again - which leads to a fixed point, namely enrichment by the Ni62 isotope. This is like iteration. This guess seems to work!

  1. The reaction kinetics in the simplest case involves three elements.

    1. The addition of protons to stable isotopes of Ni. One can add N=1,2,... protons to a stable isotope of Ni to get the dark nuclear string NiX + N protons. As such these are not stable because of Coulomb repulsion.

    2. The allowed additional stabilising reactions are dark W boson exchanges, which transfer charge between separate dark nuclear strings at flux tubes. Ordinary beta decays are very slow processes, since the outgoing W boson decaying to electron and neutrino is very massive, so one can forget them. Therefore dark variants of nuclei decaying by beta decay are effectively stable.

    3. The generation of dark nuclei and their transformation to ordinary nuclei occurs repeatedly, the decay products serving as the starting point for the next round. One starts from the stable isotopes NiX, X=58, 60, 61, 62, 64, and adds protons, some of which can transform to neutrons by dark W exchange. The process produces from the isotope NiX heavier isotopes NiY, Y = X+1, X+2,..., plus isotopes of Zn with Z=30 instead of Z=28, which are beta stable in the time scale considered. Let us forget them.

  2. The key observation is that this iterative kinetics necessarily increases the mass number!! The first guess is that starting from, say, X=58 one unavoidably ends up with the most massive stable isotope of Ni! The problem is however that Ni62 is not the heaviest stable isotope of Ni: Ni64 is!! Why does the sequence not continue up to Ni64?

    The problem can be solved. The step Ni62 + p → Cu63 leads to the lightest stable isotope of copper. No beta transitions occur anymore and the iteration stops! It works! (A minimal numerical sketch of the iteration appears after this list.)

  3. But how are such huge pieces of Ni62 possible? If dark W bosons are effectively massless only below the atomic length scale - the minimal requirement - one cannot expect pieces to be much larger than the atomic length scale. The situation changes if the Planck constant for dark weak interactions is so large that the scaled up weak scale corresponds to the secondary p-adic length scale. This requires heff/h ≈ 2^45 ≈ 3.5 × 10^13. The values of Planck constant in the TGD inspired model of living matter are of this order of magnitude and imply that 10 Hz EEG photons have energies in the visible and UV range and can transform to ordinary photons identifiable as bio-photons, ideal for the control of biomolecular transitions! 100 micrometers in turn is the size scale of a large neuron! So large a value of heff/h would also help to understand why the large breaking of parity symmetry realized as chiral selection is possible in cellular length scales.

Clearly, this kind of fixed point dynamics is a unique feature of the proposed dark fusion dynamics and provides an easily testable prediction of the TGD based model. Natural isotope fractions are not produced. Rather, the heaviest stable isotope dominates, unless there is a lighter stable isotope which gives rise to a stable isotope of the next element by the addition of a proton.
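The iteration and the length scale check can be spelled out in a few lines of Python. This is only a sketch under the assumptions stated above: the stability table is restricted to Ni and Cu isotopes, and a dark W exchange is assumed to convert the added proton to a neutron whenever the addition does not yield a stable Cu isotope. The inputs hbar·c = 197.327 MeV·fm and m_W = 80.4 GeV are standard values.

```python
# Sketch of the proposed fixed point iteration: add a proton to a Ni isotope;
# if the result is not a stable Cu isotope, a dark W exchange converts the
# proton to a neutron (A -> A+1 at fixed Z); stop once the proton addition
# yields a stable Cu isotope.
STABLE = {
    (28, 58), (28, 60), (28, 61), (28, 62), (28, 64),  # stable Ni isotopes
    (29, 63), (29, 65),                                # stable Cu isotopes
}

def iterate(A, Z=28, max_steps=20):
    for _ in range(max_steps):
        if (Z + 1, A + 1) in STABLE:   # e.g. Ni62 + p -> Cu63: stable, stop
            return (Z + 1, A + 1)
        A += 1                         # p -> n by dark W exchange: NiA -> Ni(A+1)
    return (Z, A)

for A0 in (58, 60, 61):
    print(f"Ni{A0} -> Cu{iterate(A0)[1]}")  # always ends at Cu63, i.e. via Ni62

# Consistency check of the size claim: the W Compton length scaled by 2^45.
hbar_c = 197.327e6 * 1e-15   # eV*m
m_W = 80.4e9                 # eV
print(f"scaled W Compton length: {2**45 * hbar_c / m_W:.1e} m")  # ~8.6e-5 m
```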

See the article Cold Fusion Again or the chapter with the same title.

For a summary of earlier postings see Links to the latest progress in TGD.

Wednesday, December 30, 2015

How to design your own light saber?

Designing light sabers seems to be the latest craze. My grandchildren are actually proud owners of light sabers, but it did not occur to me to ask whether they built them with their own hands (they are Jedi masters in lego building).

The recipe for a light saber depends on what Universe you choose to live in. Don Lincoln suggests a construction in a rather conventional Universe - just standard physics at relatively low energies - M-theorists would call this kind of Universe in a belittling tone "just low energy phenomenology".

Lincoln rejects the idea that a light saber consists of a laser light beam because he wants it to have a fixed length as in the Star Wars movies. A light saber should be able to melt metals and therefore he proposes that the light could be generated by a hot plasma consisting of charged particles, which would radiate and make the saber visible. Plasmas are however rather rarefied and one cannot produce much damage using even plasma torches. One can however have a plasma cutter: plasma cutters work best when the material being cut is a conductor.

The plasma must be inside some kind of container with the shape of a saber. Nuclear fusion research uses magnetic fields to confine plasmas. Lincoln however finds that the power needed to melt metal is beyond existing technologies. The plasma should be extremely hot. A further problem is that you could burn your fingers!

Also Bee has taken the challenge. Bee wants to live in a world of General Relativity governed by some kind of GUT allowing magnetic monopoles to exist. The saber would emit magnetic monopoles, which would somehow do the damage at the other end. I did not quite catch the idea of how this would occur. In order to make the saber glow you must feed electrically charged particles into the strong magnetic field of the saber. The problem is that GUT magnetic monopoles have masses which are a considerable fraction of the Planck mass, so that the practical design is probably not feasible in the near future.

My proposal for a light saber can be built only in the TGD Universe, as you could have guessed and very probably did. I happened to develop the recipe for a light saber completely accidentally while developing a model for cold fusion. I had never thought that TGD could be used to build weapons for Star Wars. The recipe for the light saber goes as follows.

  1. Take the TGD Universe and a vessel of water. Take care that the water begins to bubble. You could create the needed cavitation by sound waves, by building a little private waterfall, or by boiling the water. You get bubbles of vapor, which compress and re-explode (sonoluminescence is based on this, and one indeed observes gamma rays in sonoluminescence - something totally unexpected, suggesting nuclear fusion).

    In the compression to a very small volume protons compress along radial monopole flux tubes to an ultrahigh density - something like 1000 times the ordinary density of water in the direction of the flux tube. The outcome is an ultradense 1-D solid associated with the monopole flux tube.

    Remark: You could also use laser beams as in laser induced cold fusion (don't forget the TGD Universe!). Also now ultradense 1-D matter is created at monopole flux tubes in the direction of the laser beam, a Coulomb explosion occurs, and the nuclei leak out along the flux tubes, just as in the case of the bubble explosion.

    Dark atomic nuclei with large Planck constant heff/h ∼ 10^3 are formed and the process liberates dark nuclear binding energy, proportional to 1/heff by a simple dimensional guess. The energy is in the keV range (X rays) rather than in the MeV range (gamma rays) as for ordinary nuclei. (A numerical check of this scaling appears after this list.)

    At the flux tubes you have dark nuclei containing not only protons but also neutrons, if you have weak bosons with large enough heff. These weak bosons must be effectively massless below the atomic length scale. If so, they allow protons to transform rapidly to neutrons by dark W exchange with other nuclei. Neutronization also helps to overcome the Coulomb wall, which is still present and would prevent dark nuclear fusion. Besides laser induced fusion this process actually happens in sonofusion, bubble fusion, and cavitation induced fusion, and even in electrolysis assisted fusion, in which the electric field also generates bubbles and dark nuclei.

  2. The dark nuclei however tend to escape the system along magnetic flux tubes carrying monopole fluxes.

    Remark: In the TGD Universe there are monopole fluxes but no actual monopoles as in GUT based Universes. The reason is that CP2 has a non-trivial topology: there exist non-contractible two-surfaces, but they are not boundaries of holes, in which case one would have a monopole.

    Monopole flux tubes provide the desired container for the dark nuclei, which do not give rise to hot plasma.

    Remark: There is an analogy with the magnetic confinement in hot fusion: the reason for the failure of this confinement is that monopole fluxes are not used, so that pinches destroying the confinement can occur. Also hot fusion would become possible if colleagues would not stubbornly refuse to live in the TGD Universe.

    Positively charged dark nuclei interact and generate dark photons, and some fraction of them leaks out by transforming to ordinary photons. This can make the light saber visible. As a matter of fact, in TGD inspired quantum biology biophotons are ordinary photons resulting from dark photons at magnetic flux tubes, with an energy spectrum in the visible and UV and thus ideal for inducing molecular transitions.

  3. In order to do damage to the enemy you must be able to transform the dark nuclei to ordinary ones, so that the enemy finds himself in the midst of a nuclear explosion and, if this is not enough, is killed by radiation sickness.

    Remark: LeClair claims that he got radiation sickness in his experiments, which he claims produce a lot of ordinary nuclei by cavitation induced fusion.

    This process liberates a huge energy but only at the enemy's end. The liberated nuclear energy is higher than in ordinary nuclear fission or fusion, since the binding energy of ordinary nuclei would be about a thousand times higher than that of dark nuclei (taking the experimental data about laser induced fusion seriously). The liberated energy is essentially the total binding energy of the ordinary nuclei generated: much more than in ordinary nuclear fusion or fission. And the best of all: you need not worry about burning your fingers with hot plasmas!

    How to achieve all this destruction? Assume that the target is metal and therefore a conductor (Yes yes yes! Your enemy has probably left his metallic armour home. But let me still continue). Provide the metal with a surface charge density by putting it into an electric field. You could combine the saber with a slowly varying longitudinal electric field in the direction of the beam. At the other side of the conductor the charge density is negative and attracts the positively charged dark nuclei at the flux tubes, and some fraction of them decides to return back to the visible world and decays to ordinary nuclei, producing gigantic energies and melting the metallic armour.

    Amazingly, this process was claimed to occur aeons ago by Brown, but serious scientists have not taken the claims seriously. Brown's gas emitted in electrolysis (one manner to generate dark nuclei by bubble fusion in bubbles produced by the electric field) is claimed to melt metals although its temperature is measured using 100 degrees Celsius as a natural unit. Also LeClair claims that his cavitation induced fusion generates nuclei of all elements at an aluminium target and claims that any metal can be used. Corrosion of metals might have a similar origin, and electric power plants might have secretly served as dark nuclear power plants, since the water coming to the turbine cavitates!

  4. The problem (or is it really a problem, my daughter - the mother of these two young owners of light sabers - might ask) is that Brown's gas is claimed to have no effect on living matter! Biosystems somehow avoid the transformation of dark nuclei to ordinary ones. Biomatter would make ideal Jedi fighters, and without the metallic armour even better ones! It would seem that the light saber could be used for technological purposes like melting metals. A neutron bomb destroys all living beings but leaves buildings intact. The light saber would destroy everything consisting of metal but spare the living creatures. Thinking twice, this is after all a good thing!

    The ability of living matter to survive as Jedi fighters makes sense from the Darwinistic point of view if living matter generates dark nuclei, as the reported occurrence of biofusion suggests. Living matter should have developed tools to avoid the uncontrolled transformation of dark nuclei to ordinary ones generating an ultrahigh temperature evaporating living matter instantaneously. Since/if biomolecules are dielectrics, they do not generate electronic surface charges in an external electric field, and the dark nuclei remain in their own dark world unless they are needed for useful purposes like building the shell of a hen egg (Ca is needed for this, as also for the buildup of bones)! Warning: do not apply your brand new light saber to living matter: this argument need not be quite watertight! DNA molecules carry a constant negative charge per unit length!

    Remark: TGD inspired quantum biology indeed predicts that dark nuclear fusion occurs in living matter routinely. The starting point is Pollack's findings about the generation of negatively charged regions (exclusion zones), explained as resulting when part of the protons become dark, go to the magnetic flux tubes, and form dark nuclei. Dark nuclear fusion is the outcome.

    Remark: There might be a connection to Lincoln's design of the light saber based on plasma cutters. Charge separation occurs also in plasmas and could also lead to the generation of dark nuclei at magnetic flux tubes. Plasma cutters work best for conductors! Could the transformation of dark nuclei to ordinary ones occur at the metal end?
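As a back-of-the-envelope check of the 1/heff scaling invoked in step 1, one can divide a typical ordinary nuclear binding energy by the two values of heff/h appearing in this posting; the 7 MeV per nucleon figure is a standard textbook value, not something specific to the model.

```python
# 1/heff scaling of the nuclear binding energy: ordinary binding is a few MeV
# per nucleon, the dark binding is obtained by dividing with heff/h.
E_ordinary = 7e6  # eV per nucleon, typical ordinary nuclear binding energy

for heff_per_h in (1e3, 1e6):
    E_dark = E_ordinary / heff_per_h
    print(f"heff/h = {heff_per_h:.0e}: dark binding ~ {E_dark:.0f} eV per nucleon")
# heff/h = 1e3 gives the keV range (X rays) of step 1 of the recipe;
# heff/h = 1e6 gives the eV range (visible/UV, biophotons).
```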

In case you encounter some technical problems in the building of your own light saber, see the article Cold Fusion Again.

For a summary of earlier postings see Links to the latest progress in TGD.

Does electrolysis involve dark matter and new physics?

Over the years I have tried many times to understand what happens in electrolysis, and every time I have been forced to admit that I do not! A very embarrassing observation. I have tried to gain wisdom from an old chemistry book with 1000 pages again and again, but always in vain. This is especially embarrassing because a unified theory builder to be taken seriously is expected to build brave new brane worlds in 11 or 12 dimensions to possibly explain a possibly detected particle at mass 750 GeV at LHC, instead of trying to understand age old little problems solved aeons ago. The wow-coefficient of chemistry is zero as compared to the awesome 10^500 of M-theory.

Energetics has been my personal problem (besides funding). I learn from the chemistry book that an electric field - say a voltage of 2 V over 1 mm - splits molecules into ions. The bond energies of molecules are in the few eV range. For instance, the O-H bond has 5 eV energy. A V = 2 V/mm electric field corresponds to an electrostatic energy gain E = eVd ∼ 2 × 10^-7 eV for a unit charge moving from one end of a bond of length d ∼ 1 Angstrom to the other. This is an incredibly small energy and to my understanding should have absolutely no effect on the state of the molecule. Except that it has!
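Spelled out numerically (the only assumption is that 1 Angstrom is the relevant bond length):

```python
# Energy gained by a unit charge over one bond length in a 2 V/mm field,
# compared to the ~5 eV O-H bond energy.
field = 2.0 / 1e-3               # V/m (2 V over 1 mm)
bond_length = 1e-10              # m  (~1 Angstrom)
gain_eV = field * bond_length    # energy in eV gained by charge e over the bond
print(f"gain over one bond: {gain_eV:.1e} eV")          # ~2e-7 eV
print(f"ratio to 5 eV bond energy: {gain_eV / 5:.1e}")  # ~4e-8
```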

A heretic thought: could it be that chemists have just accepted this fact (very reasonable!) and built their models as mathematical parameterizations without any attempt to understand what really happens? Could the infinite vanity of theoretical physicists have prevented them from lowering themselves to the intellectual level of chemists and from seeing that electrolysis is not understood at all?

In order for this kind of energy to have so drastic an effect as splitting a molecule to pieces, the system molecule + yet unidentified "something" must be in a critical state - something at the top of a hill, so that even the slightest perturbation makes it fall down. The technical term is criticality, or even quantum criticality.

  1. Biological systems are critical systems extremely sensitive to small changes. Here criticality means criticality against molecular ionization - charge separation basically. Also in electrolysis this criticality is present. Both DNA and the cell are negatively charged. Inside cells there are various kinds of ions. In the TGD Universe all matter is quantum critical.

  2. Charge separation occurs also in Pollack's experiments, in which the fourth phase of water is generated. This phase contains negatively charged regions with an effective H1.5O stoichiometry (a hydrogen bonded state of two water molecules which has lost a proton). The positive charge associated with the lost protons has gone outside these regions.

What produces quantum criticality against charge separation? What is this unidentified "something" besides the system? The magnetic body carrying dark matter! This is the answer in the TGD Universe. The TGD inspired model assumes that the protons transform to dark protons at dark magnetic flux tubes possibly carrying monopole flux. If these protons form dark nuclei, the liberated dark nuclear energy can split further O-H bonds and transform protons to the dark phase. The energy needed is about 5 eV and is in the nuclear binding energy scale scaling as 1/heff (like the inverse of distance), if the size scale of dark protons, proportional to heff/h, is 1 nm. One would have heff/h ≈ 10^6: the size scale of DNA codons - not an accident in the TGD Universe. The liberated dark nuclear energy can ionize other molecules such as KOH, NaOH, HCl, Ca(OH)2, CaO,...
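The estimate heff/h ≈ 10^6 follows directly from scaling the proton Compton length up to 1 nm; a minimal sketch, with the proton Compton wavelength as the only physical input:

```python
# heff/h from the requirement that the dark proton size equals the proton
# Compton length scaled by heff/h, set to ~1 nm (the DNA codon scale).
lambda_p = 2.1e-16   # m, proton Compton wavelength hbar/(m_p c)
dark_size = 1e-9     # m, assumed dark proton size

heff_per_h = dark_size / lambda_p
print(f"heff/h ~ {heff_per_h:.1e}")   # ~5e6, i.e. of order 10^6

# A binding energy scaled down from the MeV range by the same factor lands in
# the eV range, i.e. the scale of the ~5 eV O-H bond energy.
print(f"dark binding ~ {5e6 / heff_per_h:.1f} eV")
```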

An entire spectrum of values of heff/h is possible. For laser pulse induced fusion (see the article) assumed to induce longitudinal compression one would have heff/h ≈ 10^3. Dark nuclear physics with non-standard values of Planck constant would be a crucial element of electrolysis. Condensed matter physics and nuclear physics would not live in totally separate compartments, and dark matter and ordinary matter would interact! How humiliating for theoreticians! I do not hear the derisive laughter of superstring theoreticians anymore!

Ordinary electrolysis would thus produce dark nuclei. The problem is that most of them would leak out from the system along dark flux tubes, and the potentially available nuclear energy is lost! As are various elements so badly needed by the modern techno-society! For instance, in the splitting of water to hydrogen, the flux tubes assignable to the beam containing hydrogen would take the dark nuclei away. Could one transform dark nuclei to ordinary ones?

  1. If this beam collides with, say, a metal target, some fraction of the dark nuclei could transform to ordinary nuclei and liberate a really huge energy: the difference between the nuclear binding energies of the initial and final states would be essentially that of the final state, unlike in ordinary nuclear fusion.

  2. In particular, electrodes could induce the transformation of the dark nuclei to ordinary ones. Even in the experiments of Pons and Fleischmann the role of the porous Pd target could be secondary: it would be only a target allowing the dark nuclei produced by bubble fusion to transform to ordinary nuclei, and the large surface area would help in this respect. The same applies to Rossi's E-Cat.

  3. So called Brown's gas generated in the splitting of water is claimed to be able to melt metals although its temperature is relatively low - around 100 Celsius. The claim is of course not taken seriously by "serious" scientists, as the Wikipedia article so clearly demonstrates. It could be however understood if the melting is caused by the transformation of dark nuclei to ordinary ones. The corrosion of a metallic surface in the presence of cavitating water would also be due to the dark nuclear energy. Not all of the energy would be used to produce corrosive effects, and I have in some discussions been told that in electric plants an anomalous production of energy assignable to corrosive effects in the turbine has been observed. Electric plants could have served secretly as dark nuclear plants! Unfortunately, I do not have a reference to this claim. Cavitating water is also claimed to affect aluminium disks corrosively in LeClair's experiments (the LeClair effect is discussed here): LeClair might have reinvented Brown's gas!

    But why metals? The surface of a metal in an external electric field carries a negative surface charge density of conduction electrons. Could it be that they attract the positively charged dark nuclei from the magnetic flux tubes back to the visible world and help them to transform back to ordinary nuclei? Conductors in electric fields would thus help to transform dark nuclei to ordinary matter.

  4. Brown's gas is reported to have no effect on living matter. Why? If living matter uses dark nuclear physics as a basic tool, it should have developed tools to avoid the transformation of dark nuclei to ordinary nuclei in an uncontrollable manner. What aspect of quantum biophysics could make this possible? Negentropy Maximization Principle, defining the basic variational principle of the TGD inspired theory of consciousness, could be the general reason preventing this transformation (see this). The negentropy characterizing negentropic entanglement, serving as a measure for the potentially conscious information assignable to non-standard values of heff, would be reduced if heff is reduced. But how to understand this at a more detailed level? Could the fact that bio-molecules are mostly insulators rather than electronic conductors explain this?

See the article Cold Fusion Again or the chapter with the same title.

For a summary of earlier postings see Links to the latest progress in TGD.

Monday, December 28, 2015

Cold Fusion Again

A reader calling himself "Axil" made a very interesting comment to an earlier blog posting about cold fusion and gave several links to works of which I had not been aware. This inspired a fresh look at cold fusion from the TGD perspective.

Over the years I have developed two models of cold fusion, and the decision to build a more quantitative model led to a model combining these models together. The basic idea of the resulting TGD based model of cold fusion is that cold fusion occurs in two steps. First dark nuclei (large heff = n × h) with much lower binding energy than ordinary nuclei are formed at magnetic flux tubes possibly carrying monopole flux. These nuclei can leak out of the system along magnetic flux tubes. Under some circumstances these dark nuclei can transform to ordinary nuclei and give rise to detectable fusion products.

An essential additional condition is that the dark protons can decay to neutrons rapidly enough by exchanges of dark weak bosons effectively massless below the atomic length scale. This makes it possible to overcome the Coulomb wall and explains why the final state nuclei are stable and the decay to ordinary nuclei does not yield only protons. Thus it seems that the model inspired by Pollack's findings, combined with the TGD variant of the Widom-Larsen model, could explain the existing data nicely. In particular, the model explains the strange findings about cold fusion - in particular the fact that only stable nuclei are produced and that the composition of the produced nuclei is the naturally occurring one - and suggests that also ordinary nuclear reactions might have a more fundamental description in terms of a similar model. This would mean the replacement of the existing models of nuclear physics with a much deeper and simpler theory.

After this unifying step I am convinced that cold fusion is real. There is no doubt about it. Not only because TGD explains all the peculiar observations interpreted by skeptics as a sign of fraud, but also because the success of the model gives very strong support for the p-adic length scale hypothesis, for the dark matter hierarchy with dark matter residing at magnetic flux tubes, and also for dark variants of weak interaction physics. Also a precise quantitative view about the large parity breaking in living matter leading to chiral selection emerges. Also a concrete suggestion emerges for making cold fusion an effective manner to produce energy and new elements by transmutation: the problem of the old approaches is that the dark nuclei leak out from the system along magnetic flux tubes.

In the article and new chapter (see the links below) I describe the steps leading to the TGD inspired model of cold fusion, combining the earlier TGD variant of the Widom-Larsen model with the model inspired by the TGD inspired model of Pollack's fourth phase of water, using as input data the findings from laser pulse induced cold fusion discovered by Leif Holmlid and collaborators. I consider briefly also alternative options (models assuming surface plasmon polariton and heavy electron). After that I apply the TGD inspired model to some cases (Pons-Fleischmann effect, bubble fusion, and the LeClair effect).

For details see the article Cold Fusion Again or the new chapter with the same title.

For a summary of earlier postings see Links to the latest progress in TGD.

Saturday, December 19, 2015

M89 hadron physics is there and maybe also MG,79 hadron physics!

I have been busily building an overall view about the relation of M89 hadron physics to the various bumps. The basic prediction is an entire spectroscopy of mesons and baryons with masses scaled up by the factor 2^9 = 512 from those of the ordinary hadron physics assignable to M107. There is also MG,79 ("G" for "Gaussian") with scaling factor 2^14.

After a few days it has become clear that there are indications for bumps at the masses of all low lying mesons of M89 physics! The pion around 68 GeV, η at 274 GeV decaying to 137 GeV gamma pairs detected by the Fermi telescope, the kaon assignable to the 250 GeV bump reported by ATLAS, scaled up variants of η(1405) and η(1500) with masses 702.5 GeV (the bump at 700 GeV) and 750 GeV (the bump the community has been talking about last week)! Also an excess in the production of dijets above 500 GeV dijet mass has been reported and could relate to the decays of η'(958) with scaled up mass of 479 GeV. A digamma bump should also be detected.

According to Lubos, an excess of tb pairs in the range 200-600 GeV, serving as a signature of the production of charged M89 mesons, has also been detected.

Hence it seems that at least M89 hadron physics is there! Things are developing much faster than I dared to dream! The hypothesis should be easy to test, since the masses seem to obey the naive scaling. A smoking gun evidence would be the detection of the production of pairs of M89 nucleons with masses predicted by naive scaling to be around 470 GeV. This would give rise to dijets above 940 GeV cm energy with jets having the total quantum numbers of ordinary nucleons. Each M89 nucleon consisting of 3 quarks of M89 hadron physics could also transform to ordinary quarks producing 3 ordinary hadron jets.

What about MG,79 hadron physics? Tommaso Dorigo told about indications for a neutral di-boson bump at 2 TeV. The mass of the MG,79 pion is predicted to be 2.16 TeV by a direct scaling of the mass 135 MeV of the ordinary neutral pion!
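The naive scalings are easy to tabulate. The sketch below uses standard PDG masses for the ordinary mesons and the scaling factors 2^((107-89)/2) = 512 and 2^((107-79)/2) = 2^14 quoted above, so the outputs differ by a few percent from the rounded values in the text.

```python
# Naive p-adic scaling of ordinary (M107) hadron masses to M89 and M_G,79
# hadron physics: the mass scales by 2**((107 - k)/2) for target prime index k.
ordinary_GeV = {"pi0": 0.135, "eta": 0.548, "K": 0.494,
                "eta'": 0.958, "nucleon": 0.939}

for name, m in ordinary_GeV.items():
    print(f"M89 {name}: {m * 2**((107 - 89) / 2):.0f} GeV")   # factor 512

m_pi79 = 0.135 * 2**((107 - 79) / 2)   # factor 2**14 = 16384
print(f"M_G,79 pi0: {m_pi79 / 1000:.2f} TeV")
```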

For a more detailed summary see the previous posting and the article Indications for the new physics predicted by TGD.

For a summary of earlier postings see Links to the latest progress in TGD.

Tewari's space-energy generator two decades later

The N-machine of DePalma and Tewari's space-energy generator are basically similar free energy devices, which I have considered as examples of possible new physics with technological importance. Tewari's space-energy generator corresponds to a rotating cylinder with a metal disk attached to it and rotating with it. The energy of rotation is reported to transform to electric energy with a COP of order 2.5. There is some energy source and the challenge is to identify it.

I constructed the first model for Tewari's space-energy generator in the TGD framework about two decades ago. The models of the N-machine and of Tewari's space-energy generator rely on simple observations about rotating magnetic systems. One might hope to extract energy from the system by connecting it to the ground via a load, so that the charge can flow through it and dielectric breakdown is avoided. The Coulomb energy is however rather small, so this is not a promising idea. The model for rotating magnetic systems involved a transfer of energy between the magnetic body of the rotating system and the system utilizing the energy by negative energy signals, but I was not able to propose any detailed model for how the energy is generated at the magnetic body.

About two decades later I saw an article about Tewari's space-energy generator. The title of the article was India permits free energy technology despite threats from UK, US, Saudi Arabia. The title certainly tells something about the attitudes of those who make decisions about energy.

New elements

It is interesting to reconsider Tewari's space-energy generator using a conceptual framework considerably more refined than that of twenty years ago.

  1. At the time of the first writing I knew nothing about dark matter assigned with the hierarchy of Planck constants heff = n × h and its applications to biological systems and free energy systems.

  2. I had not formulated Zero Energy Ontology (ZEO) at that time. I had discovered the notion of negative energy signals, but I did not have a formulation of consciousness theory as a generalization of quantum measurement theory in which causal diamonds (CDs) are in a key role and state function reductions take place at either boundary of a CD. When the first reduction to the opposite boundary takes place, the arrow of time changes. Neither had I realized that conscious entities - selves - can be identified as state function reduction sequences at the same boundary of a CD, so that the presence of negative energy signals transferring information or energy can be seen as a signature of macroscopic quantum jumps changing the arrow of time. The period of Zeno effect - repeated reductions with no effect on the state of the system - is replaced with a period of consciousness for the system. Sensory input corresponds to the boundary where the state changes and which also itself is shifted (the distance between the tips of the CD increases). Together with the Negentropy Maximization Principle (NMP) this makes it possible to understand fundamental aspects of consciousness.

  3. Gradually the general ideas became more detailed as I tried to understand the strange effects related to rotating magnetic systems. I identified the magnetic body of the rotating system as a carrier of dark matter cyclotron Bose-Einstein condensates carrying angular momentum as spin and orbital angular momentum. The coherent transfer of both energy and angular momentum from the magnetic body to the rotating system was proposed as a manner for the rotating magnetic system to gain angular momentum. One could understand the generation of angular momentum by assuming that the net angular momentum vanishes, but the origin of the rotational energy remained unclear.

  4. Concerning the origin of energy in Tewari's space-energy generator, the most conservative view is that the negative Coulomb interaction energy between the rotating system and the magnetic body serves as the source of energy. The Coulomb energy is however rather small.

    Charge could also be fed to the system and transferred outside it, so that the Coulomb energy begins to increase and eventually a dielectric breakdown could take place. If the charge fed to the system does not rotate with it, a radial Lorentz force is generated. If the charge is of the same sign as the charge already generated, it is transferred outside by Coulomb repulsion. This however tends to reduce the Coulomb energy, which is undesirable.

    If the incoming charge rotates with the system, it experiences no Lorentz force. In this case the mechanism driving the charge outside the system could be based on the preservation of the E = -v × B condition: this involves new physics, since the condition implies a vacuum charge density. This option looks more plausible.

  5. How are the charges transferred to the magnetic body, and how could the negative Coulomb energy be generated?
    It is easy to see that the vacuum charge density is negative if the direction of rotation is clockwise and the magnetic field points "upwards" (B>0, ω<0) in a right-handed coordinate frame. In this case protons could be transferred to the magnetic body and the negative Coulomb energy would increase in magnitude. The preferred direction of rotation means a large parity breaking, possibly related to the large breaking of parity in living matter. By energy conservation the negative Coulomb energy must be compensated by some form of energy, say that associated with the rotational motion of the system and of the dark matter at the magnetic body with opposite angular momentum, or by an external load utilizing the Coulomb energy (a wire connecting the rotating system to the load and back would be enough).

  6. An attractive idea is that the matter at the magnetic body is dark and thus makes possible macroscopic quantum coherence. In this respect an especially interesting effect is the generation of what Pollack calls the fourth phase of water. This phase consists of negatively charged regions - exclusion zones (EZs) - with positive charge outside them. The TGD inspired proposal is that the phase is formed as protons from the hydrogen bonded water molecule pairs inside EZs are transferred to the dark magnetic flux tubes outside the EZs having a large value of heff = n × h, and form dark proton sequences identifiable as dark nuclear strings - dark nuclei. Some strange findings motivate the hypothesis that these dark proton strings form a fundamental representation of the genetic code. Already earlier, ordinary nuclei were identified as nuclear strings.

    Besides the Coulomb energy, also the dark nuclear binding energy liberated in the formation of dark nuclei could be usable energy. This energy could also stabilize the flux tubes against Coulomb repulsion, as it does in the case of ordinary nuclei. If the nuclear binding energy scales like 1/distance, it would be of the order of the energy of bio-photons for dark nuclei of atomic size - that is, in the energy range of visible and UV light. The liberated energy could be utilized. Hence the Coulombic binding energy need not be the only source of energy. For a slow enough feed the system is expected to keep its original state: it must transfer the fed positive charge to the magnetic body, so that the positive charge of the magnetic body increases, new dark nuclei are formed, and the Coulomb interaction energy increases in magnitude.

Could this dark proton phase be formed in the case of Tewari's space-energy generator? It is an experimental fact that the rotating Faraday disk becomes charged. The sign of the charge however depends on the direction of the rotation. This means large parity breaking. Does the Pollack effect occur only for the rotation direction for which the generated charge - the vacuum charge in the above model - is negative? Or can the dark nuclei form also at the flux tubes inside the Faraday disk? If dark nuclei are formed, the liberated dark nuclear energy could go to the rotational energy of the rotating magnetic system. In principle it is possible that the dark nuclei transform to ordinary nuclei. If this happens, huge nuclear energies are liberated. I have proposed that this could explain the claimed bio-fusion (an amusing accident is that Tewari is a nuclear engineer!). In the sequel Tewari's space-energy generator is considered from this point of view.

An updated model for Tewari's space-energy generator

One can formulate an explicit model for the situation.

  1. Assume a cylinder of radius R (with area S = π R^2) and length L rotating with angular frequency ω and carrying a constant magnetic field B, whose flux arrives along one or more cylindrical magnetic walls. Assume that by the conservation of magnetic flux the return flux has the same value of magnetic field, so that the total area of the return flux tubes is the same as the area of the cylinder: Sret = S = π R^2.

  2. The condition E = v × eB, determining the radial electric field accompanying the longitudinal magnetic field in a system rotating with velocity v = ω × ρ, could be interpreted in terms of mechanical equilibrium. One could see the condition as a generalization of the Faraday law for linear motion, following from Lorentz invariance, to rotational motion. This generalization does not however follow from Maxwell's equations. A further interpretation, natural in the TGD framework, is that the electric field is obtained automatically when one puts the 3-surface into rotational motion, so that the induced gauge potential A(ρ,φ) is replaced with A(ρ,φ-ωt).

    What is remarkable is that the electric field is no longer sourceless, so that one obtains a vacuum charge density whose sign depends on the direction of rotation. The interpretation is that some fraction of protons or electrons is transferred outside the rotating cylinder to the cylindrical magnetic walls. Assume that protons are transferred outside the rotating cylinder to the magnetic flux tubes carrying the return magnetic flux and are transformed to dark matter with a large value heff = n × h of Planck constant. This requires quantum criticality in some sense.

  3. Assume that the dark protons form dark proton sequences identifiable as dark nuclei with a binding energy which scales like 1/distance, and therefore like 1/heff. For a scaled up nucleon size of about a = 1 Angstrom one would have heff/h = a/λp ≈ 10^5. The binding energy per nucleon would scale from its typical value of about 1 MeV down to 10 eV. An attractive assumption is that the range of biophoton energies, covering visible and UV wavelengths, covers the binding energy range. The binding energy is liberated as dark photons with energies of visible and UV photons and can provide energy for the rotating system.

  4. A more quantitative estimate is obtained from the expression E = ωeBρ for the electric field. The charge density is ρc = ωeB. The number of elementary charges per unit length is

    dN/dl = (ω/c)(Φ/Φ0) = (ω/c)(S/S0) ,

    Φ = ∫ eB dS = eBS .

    Φ/Φ0 is the magnetic flux expressed using the flux quantum Φ0, which corresponds to the area S0 = π lB^2/2, where the magnetic length lB is given by lB = (hbar c/eB)^(1/2). One has lB ≈ 26 nm × (Tesla/B)^(1/2).

  5. Consider a fraction of the cylinder with length a, which corresponds to the scaled up nucleon size defining the length of one unit in the dark nucleon string. The total number of nucleons at the cylindrical return flux quanta per nucleon length is

    Δ N = (dN/dl) a = (ω a/c)(S/S0) .

    The total area per single charge at the return flux tubes, using S0 as the unit, is

    Δ S/S0 = (Sret/S0)(1/Δ N) = (c/ω a) = (c/ω λp)(h/heff) .

    This gives

    Δ S/S = (c/ω λp)(h/heff)(S0/S) .

    One must have Δ S/S < 1 (the number of protons at a dark flux tube must be larger than one). This gives a lower bound on the rotation frequency:

    fmin = (c/2πλp)(h/heff)(S0/S) .

  6. Consider as an example a cylinder of radius R = 1 meter carrying a magnetic field of 1 Tesla, and assume heff/h = 10^6 giving nm sized dark protons suggested to be important in biology. From S0/S = 2.62 × 10^-16 and (c/2πλp) = 2.3 × 10^23 Hz one obtains fmin ≈ 25 Hz. A large enough value of Planck constant helps to lower the minimal rotation frequency. Rather small numbers of dark protons are involved, so that the power liberated by the formation of dark nuclei remains rather small. One Watt would require 6.24 × 10^18 eV/s. A cylinder with a radius of 1 m and a length of 10 meters would liberate a total energy of about 10^9 eV (of order 10^-10 Joule).
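The estimate can be evaluated numerically; the sketch below works in SI units with R = 1 m, B = 1 Tesla and heff/h = 10^6 as above. The prefactor conventions in S0 are somewhat loose, so only the order of magnitude is meaningful, and the result lands in the 10-100 Hz range containing the quoted value.

```python
# Order-of-magnitude evaluation of f_min for R = 1 m, B = 1 Tesla, heff/h = 10^6.
import math

hbar, c, e = 1.0546e-34, 2.998e8, 1.602e-19
m_p = 1.6726e-27                   # proton mass, kg
R, B, heff_per_h = 1.0, 1.0, 1e6

l_B = math.sqrt(hbar / (e * B))    # magnetic length, ~26 nm at 1 Tesla
S0 = math.pi * l_B**2 / 2          # area corresponding to one flux quantum
S = math.pi * R**2                 # cylinder cross section
lam_p = hbar / (m_p * c)           # proton Compton length, ~2.1e-16 m

f_min = (c / (2 * math.pi * lam_p)) / heff_per_h * (S0 / S)
print(f"l_B = {l_B * 1e9:.0f} nm, S0/S = {S0 / S:.1e}, f_min ~ {f_min:.0f} Hz")
```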

A continual production of energy requires a continual feed of positive charge to the cylinder implying a continual feed to the magnetic flux tubes in steady state.
  1. The rate defining step is the transfer of charge to the flux tubes. This is probably a quantum process. The feed dNp/dt of positive charge cannot exceed the rate of this process. The power produced would be

    P = I Δ Ed/e = (dNp/dt) Δ Ed ,

    where Δ Ed = (h/heff) Δ E is the binding energy per dark nucleon and Δ E ∼ 1 MeV is the ordinary binding energy per nucleon. This assumes that the binding energy scales as 1/length.

  2. One obtains Δ Ed ∼ 1 eV for heff/h = 10^6. For Δ Ed = 1 eV this would give P = (I/1 A) × (10^6 × h/heff) W, from the fact that I = 1 Ampere corresponds to a current of dNp/dt = 6.24 × 10^18 charge carriers per second. A continual transformation of energy to electric energy could be achieved if the liberated energy does not go to an accelerated rotation of the cylinder but only to the compensation of dissipative effects. One should also have a model for the transfer of energy from the flux tubes to the rotating system and to the energy of the current. This step is expected to involve dissipative losses.

  3. The power for ohmic losses is given by Ploss = UI = I^2 R (here the external load is included), and in a steady state one has P = Ploss, giving the voltage

    U = IR = (10^6 × h/heff) V .

    This is a rather small number. One can of course ask whether supra currents could help in the situation.

  4. Note that this process would generate an increasing voltage between the flux tubes and the cylinder, which as such could serve as a source of electrostatic energy. This would happen even without the occurrence of dark fusion. It would not however yield excess energy.
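The steady state numbers above are a one-liner to verify; the sketch assumes I = 1 Ampere, Δ E = 1 MeV and heff/h = 10^6 as in the text.

```python
# Steady state power and voltage from the estimate above.
e = 1.602e-19            # C
I = 1.0                  # A
dE_ordinary = 1e6        # eV, ordinary binding energy per nucleon
heff_per_h = 1e6

dNp_dt = I / e                        # ~6.24e18 charge carriers per second
dE_dark = dE_ordinary / heff_per_h    # 1 eV per dark nucleon
P = dNp_dt * dE_dark * e              # Watts (eV/s converted to J/s)
U = P / I                             # steady state: P = U*I
print(f"dNp/dt = {dNp_dt:.2e} /s, P = {P:.1f} W, U = {U:.1f} V")
```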

One can also consider the possibility of ordinary cold nuclear fusion. Could one induce the transformation of the dark nuclei located at the magnetic flux tubes to ordinary nuclei, thus liberating a binding energy of about 1 MeV per nucleon? This would be equivalent with cold nuclear fusion, and evidence for it has been found in living matter and in systems involving the splitting of water (Geesink). A possible mechanism would rely on bringing negative charge to the rotating system. This would increase the Coulombic attraction between the dark nuclei at the flux tubes and the cylinder and could bring them to the cylinder, where they would transform to ordinary matter and liberate the nuclear binding energy. This kind of possibility would mean a technological revolution.

One can of course ask whether supra currents could help the situation. Note that the proposed model might quite generally apply to the modelling of rotating magnetic systems, and it suggests that a continual current through the system might make possible a continual production of energy.

See the article Tewari's space-energy generator two decades later or the chapter About Strange Effects Related to Rotating Magnetic Systems of "TGD and Fringe Physics".

For a summary of earlier postings see Links to the latest progress in TGD.

Friday, December 18, 2015

Indications for the new physics predicted by TGD


The recently reported 750 GeV bump at LHC seems to be more important than I thought originally. This bump is only one instance of the potential anomalies of the standard model which TGD could explain. TGD indeed predicts a lot of new physics at the LHC energy scale. For this reason I decided to write a more organized version of the earlier posting.

  1. TGD suggests the existence of two scaled up copies of the ordinary hadron physics, which is labelled by the Mersenne prime M107 = 2^107 - 1. The first copy would correspond to M89, with the mass spectrum of ordinary hadrons scaled by the factor 2^9 = 512, and the second one to the Gaussian Mersenne MG,79 = (1+i)^79 - 1, with the mass spectrum of ordinary hadrons scaled by 2^14. The signature of this new physics is the existence of an entire hadronic spectroscopy of new states rather than just a couple of exotic elementary particles. If this new physics is there, it is eventually bound to become visible as more information is gathered.

  2. TGD also suggests the existence of copies of various gauge bosons analogous to the higher fermion generations, which are assigned to the genus g = 0, 1, 2 of the boundary topology of the partonic 2-surface: the genus is that of the partonic 2-surface whose light-like orbit is the surface at which the induced metric changes its signature from Minkowskian to Euclidian. Copies of gauge bosons (electroweak bosons and gluons) and Higgs correspond to octet representations of the dynamical "generation color" group SU(3) assignable to the 3 fermion generations. The 3 gauge bosons with vanishing "color" are expected to be the lightest ones: for them the opposite throats of the wormhole contact have the same genus. The orthogonality of the charge matrices for the bosons implies that the couplings of these gauge bosons (gluons and electroweak bosons) to fermions break universality, meaning that they depend on the fermion generation. There are indications for this breaking of universality. TGD differs from the minimal supersymmetric extension of the standard model in that the charged Higgses are eaten by the weak gauge bosons, so that only the neutral Higgses remain.

    One can ask whether the three lightest copies of weak and color physics for the various boson families could correspond to M89, MG,79 and M61.

  3. TGD SUSY is not N=1. Instead, the superpartner of a particle is obtained by adding a right-handed neutrino or antineutrino, or a pair of them, to the state. In the quark sector one obtains leptoquark like states, and the recent indications for the breaking of lepton universality have indeed been explained in terms of leptoquarks, which have the quantum numbers of bound states of a quark and a right-handed neutrino.
During the last years several indications for the new physics suggested by TGD have emerged. Recently the first LHC Run 2 results were announced and there was a live webcast.
  1. The great news was the evidence for a two photon bump at 750 GeV, about which there had been rumors. Lubos told earlier about indications for a diphoton bump around 700 GeV. This mass differs only a few percent from the naive scaling estimate for the masses of the ρ and ω mesons of M89 hadron physics, for which the masses for the simplest option are obtained from the masses of these mesons in the ordinary M107 hadron physics by using the p-adic length scale hypothesis, that is by scaling with the factor 2^((107-89)/2) = 512.

    There is however a problem: these mesons do not decay to gamma pairs! The effective interaction Lagrangian for the photon and ρ is the product of the Maxwell action with the divergence of the ρ vector field. ρ is massive. Could the divergence be non-vanishing, and could the large mass of ρ make the decay rate high enough? No. The problem is that the divergence should vanish for on mass shell states also for massive ρ. Also off mass shell states with unphysical polarization of ρ near the resonance are excluded, since the propagator should eliminate time-like polarizations in the amplitude. A scalar, pseudoscalar, or spin 2 resonance is the only option.

    If the scaling factor is the naive 512, so that the M89 pion would have mass about 70 GeV, there are several meson candidates with relative angular momentum L=1 for quarks, assignable to string degrees of freedom, in the energy region considered. Inspection of the experimental meson spectrum shows that there are quite many resonances with the desired quantum numbers (see the sketch after this list). The scaled up variants of the neutral scalar mesons η(1405) and η(1475) consisting of a quark pair would have masses 702.5 GeV and 737.5 GeV and could explain both the 700 GeV and the 750 GeV bump. There are also neutral exotic mesons, which cannot be quark pairs but pairs of quark pairs: f0(400), f0(980), f2(1270), f0(1370), f0(1500), f2(1430), f2(1565), f2(1640), f?(1710) (the subscript tells the total spin and the number inside the brackets gives the mass in MeV) would have naively scaled up masses 200, 490, 635, 685, 750, 715, 782.5, 820, 855 GeV. The charged exotic meson a0(1450) scales up to a 725 GeV state.

  2. There is a further mystery to be solved. Matt Strassler emphasizes the mysterious fact that the possible particle behind the bump does not seem to decay to jets: only the 2-photon state is observed.

    The situation might of course change when more data are analyzed. Jester in fact reports that a 1 sigma evidence for Zγ decays has been observed around 730 GeV. The best fit to the bump has a rather large width, which means that there must be many other decay channels than the digamma channel. If they are strong, as in the TGD model, one can argue that they should have been observed.

    It is as if the particle did not have any direct decay modes to quarks, gluons and other elementary particles. If the particle consists of quarks of M89 hadron physics, it could decay to mesons of M89 hadron physics, but we cannot directly observe them. Is this enough to explain the absence of ordinary hadron jets: are M89 jets somehow smoothed out as they decay to ordinary hadrons? Or is something more required? Could they decay to M89 hadrons leaking out from the reaction volume before a transition to ordinary hadrons?

    The TGD inspired idea that M89 hadrons are produced at RHIC in heavy ion collisions, and in proton heavy ion collisions at LHC, as dark variants with a large value of heff = n × h and with Compton lengths scaled up to hadron size or even nuclear size, conforms with the finding that the decay of string like objects, identifiable as M89 hadrons in the TGD framework, explains the unexpected properties of what was expected to be a simple quark gluon plasma analogous to blackbody radiation. Could dark M89 eta mesons decaying only via digamma annihilation to ordinary particles be in question? Large heff states are produced at quantum criticality (they are responsible for quantal long range correlations), and the criticality would correspond to the phase transition from the confined to the de-confined phase (at criticality confinement occurs in the same or a larger scale but with a much longer Compton wavelength!). They have lifetimes which are scaled up by the factor heff/h: could this imply the leak out? Note that in TGD inspired biology dark EEG photons would have energies in the bio-photon energy range (visible and UV) and would be exactly analogous to dark M89 hadrons.

  3. Lubos mentions in his posting several excesses which could be assigned to the above mentioned states. The bump at 750 GeV could correspond to a scaled up copy of η(1475) or - less probably - of f0(1500). Also the bump structure around 700 GeV, for which there are indications, could be explained as a scaled up copy of η(1405) with mass 702.5 GeV or - less plausibly - of f0(1370) with mass around 685 GeV. Lubos mentions also a 662 GeV bump. If it turns out that there are several resonances in the 700 GeV region (and also elsewhere), then the only reasonable explanation relies on hadron like states, since one cannot expect a large number of Higgs like elementary particles. One can of course ask why the exotic states should be seen first.

  4. Remarkably, for the somewhat ad hoc scaling factor 2 × 512 ∼ 10^3 one does not have any candidates, so that the M89 neutral pion should have the naively predicted mass around 67.5 GeV. The old Aleph anomaly had mass 55 GeV. This anomaly did not survive. I found from my old writings that Delphi and L3 have also observed a 4-jet anomaly with dijet invariant mass about 68 GeV: the M89 pion? There is indeed an article about the search for charged Higgs bosons in L3, telling about an excess in cs̄τν̄τ production identified in terms of H+H- pair production and suggesting a charged Higgs mass of 68 GeV. The TGD based interpretation would be in terms of the pair production of charged M89 pions.

    The gammas in the 130-140 GeV range detected by the Fermi telescope were the motivation for assuming that the M89 pion has a mass twice the naively scaled up mass. The digammas could have been produced in the annihilation of a state with mass 260 GeV. The particle would be the counterpart of the ordinary η meson η(548), with scaled up mass 274 GeV, thus decaying to two gammas with energies 137 GeV. Also a scaled up eta prime should be there. An excess in the production of two-jets above 500 GeV dijet mass has indeed been reported and could relate to the decays of η'(958) with scaled up mass of 479 GeV! Also a digamma bump should be detected.

  5. What about the M89 kaon? It would have a scaled up mass of 250 GeV and could also decay to a digamma. There are indications for a Higgs like state with mass of 250 GeV from ATLAS! It would decay to 125 GeV photons - the energy happens to be equal to the Higgs mass. There are thus indications for the pion, the kaon, all three scaled up η mesons, and η', with the predicted masses! The lowest lying M89 meson spectroscopy could have been already seen!

  6. Lubos tells that ATLAS sees a charged boson excess manifesting via decay to tb in the range 200-600 GeV. Here Lubos takes the artistic freedom to talk about a charged Higgs boson excess, since Lubos still believes in standard SUSY predicting several copies of Higgs doublets. TGD does not allow them. In the TGD framework the excess could be due to the presence of charged M89 mesons: pion, kaon, ρ, ω.

  7. A smoking gun evidence would be the detection of the production of pairs of M89 nucleons with masses predicted by naive scaling to be around 470 GeV. This would give rise to dijets above 940 GeV cm energy with jets having the total quantum numbers of ordinary nucleons. Each M89 nucleon consisting of 3 quarks of M89 hadron physics could also transform to ordinary quarks producing 3 ordinary hadron jets.
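The candidate assignments in the list are easy to reproduce; the sketch below applies the naive factor 512 to the PDG masses (the values quoted in the text correspond to a rounded factor of 500, hence the few percent differences).

```python
# Scaled up masses of the L=1 and exotic meson candidates of M89 hadron physics.
candidates_MeV = {"eta(1405)": 1405, "eta(1475)": 1475, "f0(1370)": 1370,
                  "f0(1500)": 1500, "f2(1430)": 1430, "f2(1565)": 1565,
                  "a0(1450)": 1450}

for name, m in candidates_MeV.items():
    print(f"M89 {name}: {m * 512 / 1000:.0f} GeV")
# e.g. eta(1405) -> 719 GeV (text: 702.5 GeV), f0(1500) -> 768 GeV (text: 750 GeV)
```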

Is there any evidence for MG,79 hadron physics? Tommaso Dorigo told about indications for a neutral di-boson bump at 2 TeV. The mass of the MG,79 pion is predicted to be 2.16 TeV by a direct scaling of the mass 135 MeV of the ordinary neutral pion!

What about higher generations of gauge bosons?

  1. There has been also a rumour about a bump at 4 TeV. By scaling the Higgs mass 125 GeV by 32 one obtains 4 TeV (see the sketch after this list)! Maybe the Higgs is there, but in a different sense than in standard SUSY! Could a copy of weak physics with scaled up gauge boson and Higgs masses be waiting for us? The Higgs would be a second generation Higgs associated with the second generation of weak bosons, analogous to that for fermions predicted by TGD. Actually one would have an octet associated with the dynamical "generation color" symmetry SU(3), but the neutral members of the octet are expected to be the lightest states. This Higgs would also have only a neutral member after massivation and would differ from the SUSY Higgs also in this respect. The weak boson masses scaled up by the factor 32 from 80.4 GeV for W and 91 GeV for Z would be 2.6 TeV and 2.9 TeV respectively. Lubos mentions also a 2.9 TeV dilepton event: a decay of the second generation Z0?!

  2. There is already evidence for second generation gauge bosons from the evidence for the breaking of lepton universality. The couplings of the second generation weak bosons depend on the fermion generation because their charge matrices must be orthogonal to those of the ordinary weak bosons. The outcome is a breaking of universality in both the lepton and the quark sector. An alternative explanation would be in terms of leptoquarks, which in the TGD framework are superpartners of quarks identifiable as pairs of right-handed neutrinos and quarks.
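For completeness, the factor 32 scaling of the weak sector quoted in the first item, 32 = 2^((89-79)/2), assuming that the second generation weak bosons correspond to MG,79:

```python
# Second generation weak boson and Higgs masses from the factor 32 scaling.
factor = 2**((89 - 79) / 2)   # = 32
for name, m_GeV in (("W", 80.4), ("Z", 91.0), ("Higgs", 125.0)):
    print(f"2nd generation {name}: {m_GeV * factor / 1000:.1f} TeV")
# W -> 2.6 TeV, Z -> 2.9 TeV, Higgs -> 4.0 TeV, matching the bumps mentioned.
```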

We are living in exciting times! If TGD is right, experimenters and theorists are forced to change their paradigm completely. Instead of desperately trying to identify the elementary particles predicted by already excluded theories like SUSY, they must realize that there is an entire zoo of hadron resonances whose existence and masses are predicted by the scaled up hadron physics. Finding a needle in a haystack is difficult. In the present situation one does not even know what one is searching for! Accepting the TGD framework one would know precisely what to search for. The enormous institutional inertia of the present day particle physics community will not make the paradigm shift easy. The difficult problem is how to communicate bi-directionally with the elite of particle physics theorists, which refuses to take seriously anyone coming from outside the circles.

See the article New indications for the new physics predicted by TGD and chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

For a summary of earlier postings see Links to the latest progress in TGD.

Monday, December 14, 2015

Newest LHC Run 2 rumours

Today the first LHC Run 2 results will be announced, and there will be a live webcast. Rumours are already circulating.

  • Jester is tweeting a rumor about a two-photon bump at 750 GeV: this differs by only a few per cent from the naive scaling estimate for the mass of M89 ρ and ω! There is however a problem: these mesons do not decay to gamma pairs! The effective interaction Lagrangian for the photon and ρ is the product of the Maxwell action with the divergence of the ρ vector field. ρ is massive. Could the divergence be non-vanishing, and could the large mass of ρ make the decay rate high enough? No. The problem is that the divergence should vanish for on mass shell states also for a massive ρ. Also off mass shell states with unphysical polarization of ρ near the resonance are excluded, since the propagator should eliminate time-like polarizations in the amplitude. A scalar, pseudoscalar, or spin 2 resonance is the only option.

    If the scaling factor is the naive 512, so that the M89 pion would have a mass of about 70 GeV, there are several meson candidates with relative angular momentum L=1 for quarks, assignable to string degrees of freedom, in the energy region considered. Inspection of the experimental meson spectrum shows that the resonances in question, a0(1450), f0(1500), f2(1430), f2(1565) (the subscript tells the total spin), would have naively scaled-up masses of 725, 750, 715, and 782.5 GeV (see the numerical sketch at the end of this list). Also the states around 700 GeV could be explained. For the scaling factor 2×512 one does not find any candidates, so that the M89 pion would have a mass around 70 GeV. There have been indications of a bump in this region. The only interpretation for the claimed bump around 140 GeV, for which there are some indications, would be as a p-adically scaled-up state. If there turn out to be several resonances in the 700 GeV region (and also elsewhere), then the only reasonable explanation relies on hadron-like states, since one cannot expect a large number of Higgs-like elementary particles.

  • Tommaso Dorigo tells about indications for a di-boson bump at 2 TeV (see this). The particle should be neutral. Amusingly, by scaling the electron mass from the Mersenne prime M127 to the Gaussian Mersenne M79 one obtains in good accuracy 2 TeV. Unfortunately, neither the electron nor the selectron (electron plus right-handed neutrino or antineutrino in TGD) is neutral. Strange!

    One should check from the p-adic mass calculations whether a quark-antiquark state of the lowest generation (U or D type quark pair; recall the genus-generation correspondence as the TGD explanation of the family replication phenomenon) could have a mass equal to the electron mass.

    There has also been a rumour about a bump at 4 TeV, which brings to mind the electro-pion, a bound state of color octet excitations of the electron with mass very precisely 2 times the electron mass, for which evidence was found already in the seventies but forgotten because a light exotic state does not conform with the weak boson decay widths.


  • Lubos tells that ATLAS sees a charged boson excess manifesting via decay to tb in the range 0.2-0.6 TeV. Here Lubos takes the artistic freedom to talk about a charged Higgs boson excess, since he still believes in standard SUSY, which predicts several copies of Higgs doublets. In the TGD framework the excess could be due to the presence of charged M89 mesons: pion, kaon, ρ, ω.


  • One must however notice that by scaling the Higgs mass 125 GeV by 32 one obtains 4 TeV! Maybe the Higgs is there, but in a different sense than in standard SUSY! Could a copy of weak physics with scaled-up gauge boson and Higgs masses be waiting for us? This Higgs would be a second generation Higgs associated with a second generation of weak bosons, analogous to the higher fermion generations predicted by TGD. Actually one would have an octet associated with the dynamical "generation color" symmetry SU(3), but the neutral members of the octet are expected to be the lightest states. This Higgs would have only a neutral member after massivation and would differ from the SUSY Higgs also in this respect. The weak boson masses scaled up by the factor 32 from 80.4 GeV for W and 91 GeV for Z would be 2.6 TeV and 2.9 TeV respectively. Lubos mentions also a 2.9 TeV dilepton event: a decay of the second generation Z0?!

    There is already evidence for second generation gauge bosons coming from the indications for a breaking of lepton universality. The couplings of the second generation weak bosons depend on fermion generation, because their charge matrices must be orthogonal to those of the ordinary weak bosons. The outcome is a breaking of universality in both the lepton and quark sector. An alternative explanation would be in terms of leptoquarks, which in the TGD framework are superpartners of quarks identifiable as pairs of right-handed neutrinos and quarks.
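The numerical sketch promised above: a naive scaling of the quoted L=1 meson candidates by the exact factor 512 (the rounded values in the text correspond to a factor of roughly 500). A minimal Python version:

# Ordinary meson masses in MeV (PDG ballpark values).
candidates = {"a0(1450)": 1450, "f0(1500)": 1500, "f2(1430)": 1430, "f2(1565)": 1565}
for name, mass_mev in candidates.items():
    print(f"{name:9s} -> {mass_mev * 512 / 1000:6.1f} GeV")
# 742.4, 768.0, 732.2, 801.3 GeV: in the ballpark of the claimed 700-800 GeV region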

We are living in exciting times! If TGD is right, experimenters and theorists are forced to change their paradigm completely. Instead of desperately trying to identify elementary particles predicted by already excluded theories like SUSY, they must realize that there is an entire zoo of resonances predicted by scaled-up hadron physics. The enormous institutional inertia of the present-day particle physics community will not make the paradigm shift easy.

For a more organised summary see New indications for the new physics predicted by TGD.

For a summary of earlier postings see Links to the latest progress in TGD.

Wednesday, December 09, 2015

Is non-associative physics and language possible only in many-sheeted space-time?

In Thinking Allowed Original there was a very interesting link added by Ulla about the possibility of non-associative quantum mechanics.

Also I have been forced to consider this possibility.

  1. The 8-D imbedding space of TGD has an octonionic tangent space structure, and octonions are non-associative (a concrete demonstration follows this list). Octonionic quantum theory however has serious mathematical difficulties, since the operators of a Hilbert space are by definition associative. The representation of, say, the octonionic multiplication table by matrices is possible but is not faithful, since it misses the non-associativity. More concretely, the so-called associators associated with triplets of representation matrices vanish. One should somehow transcend standard quantum theory if one wants non-associative physics.

  2. Associativity therefore seems to be fundamental in quantum theory as we understand it today. Associativity is indeed a fundamental and highly non-trivial constraint on the correlation functions of conformal field theories. In the TGD framework classical physics is an exact part of quantum theory, so that quantum classical correspondence suggests that associativity could play a highly non-trivial role in classical TGD.

    The conjecture is that the associativity requirement fixes the dynamics of space-time sheets - the preferred extremals of Kähler action - more or less uniquely. One can endow the tangent space of the 8-D imbedding space H=M4× CP2 at a given point with an octonionic structure: the 8 vectors of the tangent space basis obey the octonionic multiplication table.

    Space-time realized as an n-D surface in the 8-D H must be either associative or co-associative, depending on whether the tangent space basis or the normal space basis is associative. The maximal dimension of the space-time surface is predicted to be the observed dimension D=4, and the tangent space or the normal space allows a quaternionic basis.

  3. There are also other conjectures (see this) about what the preferred extremals of Kähler action defining space-time surfaces are.
    1. A very general conjecture states that the strong form of holography allows one to determine the space-time surfaces from the knowledge of partonic 2-surfaces and 2-D string world sheets.
    2. A second conjecture involves quaternion analyticity and a generalization of the complex structure to a quaternionic structure involving a generalization of the Cauchy-Riemann conditions.
    3. M8-M4× CP2 duality, stating that space-time surfaces can be regarded as surfaces in either M8 or M4× CP2, is a further conjecture.
    4. Twistorial considerations select M4× CP2 as a completely unique choice, since M4 and CP2 are the only spaces allowing a twistor space with Kähler structure. The conjecture is that the preferred extremals can be identified as base spaces of 6-D sub-manifolds of the product CP3× SU(3)/U(1)× U(1) of the twistor spaces of M4 and CP2 having the property that it makes sense to speak about an induced twistor structure.
    The "super(optimistic)" conjecture is that all these conjectures are equivalent.
One must of course be very cautious in order not to draw too strong conclusions. Above, one considers quantum physics at the level of a single space-time sheet. What about many-sheeted space-time? Could non-associative physics emerge in TGD via many-sheeted space-time? To answer this question one must first understand what non-associativity means.

  1. In a non-associative situation brackets matter: A(BC) is different from (AB)C. From schooldays, or at least from the first year calculus course, one recalls the algorithm: when calculating an expression involving brackets, one first finds the innermost brackets and calculates what is inside them, then proceeds to the next innermost brackets, and so on. In computer programs the realization of command sequences involving brackets is called parsing, and compilers perform it. Parsing involves the decomposition of a program into modules calling modules calling.... Quite generally, the analysis of linguistic expressions involves parsing. Bells start to ring as one realizes that parsings form a hierarchy, as do the space-time sheets!

  2. More concretely, there is a hierarchy of brackets and there is also a hierarchy of space-time sheets, perhaps labelled by p-adic primes. B and C inside brackets form (BC), something analogous to a bound state or a chemical compound. In TGD this something could correspond to a gluing of the space-time sheets B and C to the same larger space-time sheet. More concretely, (BC) could correspond to a braided pair of flux tubes B and C inside a larger flux tube, whose presence is expressed by the brackets (..). As one forms A(BC), one puts the flux tube A and the flux tube (BC) containing the braided flux tubes B and C inside a larger flux tube. For (AB)C one puts the flux tube (AB) containing the braided flux tubes A and B, together with the tube C, inside a larger flux tube. The outcomes are obviously different (see the toy illustration after this list).

  3. Non-associativity in this sense would be a key signature of many-sheeted space-time. It should show itself in, say, molecular chemistry, where putting on the same sheet could mean formation of the chemical compound AB from A and B. Another highly interesting possibility is a hierarchy of braids formed from flux tubes: braids can form braids, which in turn can form braids, and so on. Flux tubes inside flux tubes inside... Maybe this more refined breaking of associativity could underlie the possible non-associativity of biochemistry: biomolecules looking exactly the same would differ in a subtle manner.

  4. What about the quantum theory level? Non-associativity at the level of quantum theory could correspond to the breaking of associativity for the correlation functions of n fields, if the fields are not associated with the same space-time sheet but with space-time sheets labelled by different p-adic primes. At the QFT limit of TGD giving the standard model and GRT, the sheets are lumped together to a single piece of Minkowski space, and all physical effects making possible non-associativity in the proposed sense are lost. Language would thus be possible only in the TGD Universe! My nasty alter ego wants to say something now - my sincere apologies: in the superstring Universe communication, at least of TGD, has indeed turned out to be impossible! If the superstringy universe allows communications at all, they must be uni-directional!
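The toy illustration promised above: representing flux tubes inside flux tubes as nested tuples makes the difference between the two parsings explicit. This is of course only a structural analogy, not a physical model.

def enclose(*tubes):
    # Put the given tubes inside one larger flux tube (one bracket level).
    return tuple(tubes)

A, B, C = "A", "B", "C"
left = enclose(enclose(A, B), C)    # (AB)C: A and B braided first
right = enclose(A, enclose(B, C))   # A(BC): B and C braided first

print(left)           # (('A', 'B'), 'C')
print(right)          # ('A', ('B', 'C'))
print(left == right)  # False: the two parsings are structurally different objects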

Non-associativity is an essentially linguistic phenomenon and relates therefore to cognition. p-Adic physics, labelled by p-adic primes and fusing with real physics to form adelic physics, is identified as the physics of cognition in the TGD framework.
  1. Could the many-sheeted space-time of TGD provide the geometric realization of language-like structures? Could sentences and more complex structures have many-sheeted space-time structures as geometrical correlates? p-Adic physics as the physics of cognition suggests that p-adic primes label the sheets in the parsing hierarchy. Could bio-chemistry, with the hierarchy of magnetic flux tubes added, realize the parsing hierarchies?

  2. DNA is a language and might provide a key example of a parsing hierarchy. The mystery is that human DNA and the DNAs of the simplest creatures do not differ much. Our cousins have almost identical DNA with us. Why do we differ so much? Could the number of parsing levels be the reason, with p-adic primes labelling the space-time sheets? Could our DNA language be much more structured than that of our cousins? At the level of concrete language the linguistic expressions of our cousins are indeed simple signals rather than the extremely complex sentences of an old-fashioned German professor forming a single lecture each. Could these parsing hierarchies realize themselves physically as braiding hierarchies of magnetic flux tubes and, more abstractly, as the parsing hierarchies of social structures? Indeed, I have proposed that the presence of collective levels of consciousness, having a hierarchy of magnetic bodies as space-time correlates, distinguishes us from our cousins, so that this explanation is consistent with the more quantitative one relying on language.

  3. I have also proposed that the intronic portion of DNA is crucial for understanding why we differ so much from our cousins (see this and this). How does this view relate to the above proposal? In the simplest model for DNA as a topological quantum computer, introns would be connected by flux tubes to the lipids of the nuclear and cell membranes. This would make possible topological quantum computations, with the braiding of flux tubes defining the topological quantum computer program.

    Ordinary computer programs rely on a computer language. The same should be true of quantum computer programs realized as braidings. Now the hierarchical structure of parsings would correspond to that of braidings: one would have braids, braids of braids, etc. This kind of structure is also directly visible as the multiply coiled structure of DNA. The braids beginning from the intronic portion of DNA would form braided flux tubes inside larger braided flux tubes inside..., defining the parsing of the topological quantum computer program.

    The higher the number of parsing levels, the higher the position in the evolutionary hierarchy. Each braiding would define one particular fundamental program module, and braiding such braided flux tubes further would give a program calling these programs as sub-programs.

  4. The phonemes of language would have no meaning to us (at our level of the self hierarchy), but the words formed from phonemes, involving at the basic level the braiding of "phoneme flux tubes", would have. Sentences and their substructures would in turn involve braiding of "word flux tubes". Spoken language would correspond to a temporal sequence of braidings of flux tubes at various hierarchy levels.

  5. The difference between us and our cousins (or other organisms) would not be at the level of visible DNA but at the level of the magnetic body. Magnetic bodies would serve as correlates also for social structures and the associated collective levels of consciousness. The degree of braiding would define the level in the evolutionary hierarchy. This is of course the basic vision of TGD inspired quantum biology and quantum bio-chemistry, in which the double formed by organism and environment is completed to a triple by adding the magnetic body.

The p-adic hierarchy is not the only hierarchy in the TGD Universe: there is also the hierarchy of Planck constants heff=n× h, giving rise to a hierarchy of intelligences. What is the relationship between these hierarchies?
  1. I have proposed that speech and music are fundamental aspects of conscious intelligence and that DNA realizes what I call bio-harmonies in a quite concrete sense (see this and this): DNA codons would correspond to 3-chords. DNA would both talk and sing. Both language and music are highly structured. Could the relation of the heff hierarchy to language be the same as the relation of music to speech?

  2. Are both musical and linguistic parsing hierarchies present? Are they somehow dual? What does parsing mean for music? How could musical sounds combine to form the analog of two braided strands? Depending on the situation, we hear music both as separate notes and as chords, as separate notes fuse in our mind to a larger unit, like phonemes fuse to a word.

    Could chords played by a single instrument correspond to braidings of flux tubes at the same level? Could the duality between linguistic and musical intelligence (analogous to that between a function and its Fourier transform) be very concrete and detailed, and reflect itself also in the possibility to interpret DNA codons both as three letter words and as 3-chords (see this)?

See the article Is non-associative physics and language possible only in many-sheeted space-time? or the new chapter Is non-associative physics and language possible only in many-sheeted space-time? of "Towards M-matrix".

For a summary of earlier postings see Links to the latest progress in TGD.

Why trust a theory?

I read in the same morning two opposite views about Joe Polchinski's talk "String theory to the rescue" at the ongoing Munich workshop "Why trust a theory?". Polchinski himself did not deliver the talk in person.

The first view is by Peter Woit. Woit uses citations from Polchinski's own text to reveal rather convincingly what the situation in the field is. After having used 38 years to develop a unified theory starting as a generalization of the superstring model, I feel that I have some background to express my opinion too. I cannot but agree with Woit.

The second view is by Lubos Motl and tells mostly about the mindscape of a fanatic who has lost the connection with reality. The earlier postings demonstrate this in many other areas of life: the previous posting and earlier postings give a good idea of how profound this loss of contact with reality is.

What is remarkable is that both Polchinski and Lubos are very intelligent persons according to the standard measures. This shows how little intelligence matters when egos enter the game. In the case of Polchinski it is easy to understand the situation: his lifework is about superstrings, and it is certainly extremely difficult to admit that the model which served as inspiration failed. Of course, mathematical aspects of his and others' work can be used by other researchers in the future despite the fact that the superstring model was not the theory: strings and holography are in a well-defined sense also key elements in TGD, but they emerge rather than being the starting point. The superstring community has all the technical skills needed to start doing real physics in the TGD framework.

In the case of Lubos I find myself wondering why on Earth a relatively young person does not realize that he could do something useful instead of wasting his time as an eternal superstring fan and producing hate talk about people recognizing environmental and climate problems, about women, "leftists", communists, blacks, moslems, etc. Lubos could do better: I would not spend time talking about Lubos unless he now and then produced interesting pieces of text about particle physics.

For a summary of earlier postings see Links to the latest progress in TGD.

Monday, December 07, 2015

Do I hear M89 bells ringing?

The p-adic length scale hypothesis is one of the cornerstones of TGD. It leads to stunning predictions. For instance, QCD should have several scaled variants with different mass scales, and most plausibly they are associated with p-adic primes which correspond to Mersenne primes or Gaussian Mersennes (Mersenne primes for complex integers).

At LHC two candidates for scaled variants of hadron physics are suggestive: M89 hadron physics and MG,79 hadron physics, with mass scales differing by the factor 2^5=32. As the first guess one can just multiply the mass spectrum of the mesons of ordinary hadron physics by the factor 2^((107-89)/2)=512 or 2^((107-79)/2)=2^14 to get an estimate for the meson masses of the new physics. For M89 however an additional p-adic scaling by a factor 2 is needed if one identifies its pion in terms of the 135 GeV bump detected by the Fermi telescope. The alternative would be 67.5 GeV: I do not know whether even this can be excluded. If these predictions are correct, a new Golden Age of physics is waiting for us to realise that it is there.

Adam Falkowski has the best particle physics rumours. According to Lubos he tells this time via Twitter (@Resonaances) about indications for a bump at about 700 GeV decaying to two photons. According to Lubos, there are earlier indications for a bump at 662 GeV and also for a bump around 130-135 GeV observed by the Fermi telescope: also I have talked a lot about this. 135 GeV is rather precisely 2^10 times the neutral pion mass. By direct scaling the corresponding neutral ρ and ω meson masses would be 770 and 782 GeV, differing by about 10 per cent from 700 GeV. Should I hear M89 bells ringing?
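A minimal numerical sketch of these scalings, assuming that the 135 GeV bump fixes the M89 scale to 2^10 times the ordinary meson masses (with the exact factor 2^10 = 1024 the ρ and ω come out slightly above the rounded 770 and 782 GeV quoted above):

ordinary = {"pi0": 0.135, "rho": 0.775, "omega": 0.783}  # GeV, ordinary meson masses
for name, mass_gev in ordinary.items():
    print(f"M89 {name:5s}: {mass_gev * 2**10:6.1f} GeV")
# pi0 -> 138.2 GeV, rho -> 793.6 GeV, omega -> 801.8 GeV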

What will happen if these bumps turn out to be real particles? The first attempted interpretations will certainly be in terms of standard SUSY, which in the TGD framework is replaced with a variant in which the addition of a right-handed neutrino or antineutrino line, or both, to the orbit of a partonic 2-surface generates sparticles. Squarks are predicted to have the quantum numbers of leptoquarks, and there is a bunch of anomalies which have an interpretation in terms of leptoquarks, something which no-one expected. It will be a long and bitter debate comparable to the string wars.

The TGD model for the observations about heavy ion collisions at RHIC, and later about proton-heavy nucleus collisions at LHC, assumes that the produced M89 mesons are dark with heff/h > 512, so that they would have Compton lengths of order nucleon size although the mass would be much higher than the proton mass (see the estimate below). This implies quantum coherence in a scale at least 512 times longer than otherwise. In perturbative QCD quarks and gluons are however treated using a kinematic approach reducing nuclei to independent partons, so that the description of collisions is in terms of quark and gluon collisions. The quantum coherence implied by large heff suggests a different, holistic description. There is just a collision or not: one cannot classify the collisions by the number of parton-parton collisions.
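A quick check of the Compton length claim; the sketch below assumes the M89 pion mass of about 69 GeV obtained by scaling the ordinary pion mass by 512.

HBARC = 197.3             # MeV*fm, hbar*c
m_pion_m89 = 135.0 * 512  # MeV, ~69 GeV

compton = HBARC / m_pion_m89
print(f"ordinary Compton length: {compton:.4f} fm")         # ~0.003 fm
print(f"dark, heff/h = 512     : {512 * compton:.2f} fm")   # ~1.5 fm, of order nucleon size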

The surprising finding from RHIC is that this seems to be the case! Collisions cannot be described by classifying them by the number of parton collisions that have taken place. This is like spending an evening at a party and remembering only that one was at the party, but nothing about the people one met.

For a summary of earlier postings see Links to the latest progress in TGD.

Sunday, December 06, 2015

What does Fermilab's Holometer experiment have to do with Quantum Gravity?

Bee told in a rather critical tone about an article titled "Search for Space-Time Correlations from the Planck Scale with the Fermilab Holometer" reporting a Fermilab experiment. The claim of Craig Hogan, who leads the experimental group, is that the experiment is able to demonstrate the absence of quantum gravity effects. The claim is based on a dimensional estimate for the transversal fluctuations of the distances between mirrors reflecting light. The fluctuations of the distances between the mirrors would be visible as a variation of the interference pattern, and the correlations of the fluctuations between distant mirrors could be interpreted as correlations forced by gravitational holography. No correlations were detected, and the brave conclusion was that the predicted quantum gravitational effects are absent.

Although no quantitative theory for the effect exists, the effect is expected to be extremely small and non-detectable. Hogan however has a different opinion, based on his view about gravitational holography, which is not shared by workers in the field (such as Lenny Susskind). The argument seems to go as follows (I am not a specialist, so there might be inaccuracies).

One has a volume of size R, and the area of its surface gives a bound on the entanglement entropy, implying that fluctuations must be correlated. A very naive dimensional order of magnitude estimate would suggest that the transversal fluctuation of the distance between mirrors (due to the fluctuations of the space-time metric) would be given by ⟨Δx^2⟩ ∼ (R/lP) × lP^2 = R lP. For macroscopic R this could be a measurable number. This estimate is of course ad hoc, involves a very special view about holography, and also Planck length scale mysticism is involved. There is no theory behind it, as Bee correctly emphasizes. Therefore the correct conclusion from the experiment would have been that the formula used is very probably wrong.

The reason why I took the trouble of writing about this is that I want to try to understand what is involved and maybe make some progress in relating TGD based holography to the GRT inspired holography.

  1. The argument of Hogan involves an assumption which seems to be made routinely by quantum holographists: the 2-D surface involved with holography is the outer boundary of a macroscopic system, and the bulk corresponds to its interior. This would make the correlation effect large for large R, if one takes the dimensional estimate seriously. The special role of outer boundaries is natural in the AdS/CFT framework.

  2. In the TGD framework outer boundaries do not have any special role. For the strong form of holography (SH) the surfaces involved are string world sheets and partonic 2-surfaces serving as "genes", from which one can construct space-time surfaces as preferred extremals by using an infinite number of conditions implying the vanishing of the classical Noether charges for a sub-algebra of the super-symplectic algebra.

    For the weak form of holography one would have 3-surfaces defined by the light-like orbits of partonic 2-surfaces: at these 3-surfaces the signature of the induced metric changes from Minkowskian to Euclidian, and they have partonic 2-surfaces as their ends at the light-like boundaries of causal diamonds (CDs). For SH one has at the boundary of CD fermionic strings and partonic 2-surfaces. Strings serve as geometric correlates for entanglement, and SH suggests a map between geometric parameters, say string length, and information theoretic parameters such as entanglement entropy.

  3. The typical size of the partonic 2-surfaces is the CP2 scale, about 10^4 Planck lengths, for the ordinary value of Planck constant. The naive scaling law for the area of partonic 2-surfaces would be A ∝ heff^2, heff=n×h. An alternative form of the scaling law would be A ∝ heff. The CD size scale T would scale as heff and the p-adic length scale as its square root (the diffused distance R satisfies R ∼ Lp ∝ T^(1/2) in diffusion; the p-adic length scale would be analogous to R).

  4. The most natural identification of the entanglement entropy would be as the entanglement entropy assignable to the union of the partonic 2-surfaces for which the light-like 3-surface representing a generalized Feynman diagram is connected. Entanglement would be between the ends of strings beginning from different partonic 2-surfaces. There is no bound on the entanglement entropy associated with a given Minkowski 3-volume coming from the area of its outer boundary, since the interior can contain a very large number of partonic 2-surfaces contributing to the area and thus to the entropy. As a consequence, the correlations between fluctuations are expected to be weak.

  5. Just for fun, one can feed numbers into the proposed dimensional estimate, which of course does not make sense now. For R of the order of the CP2 size it would predict a completely negligible effect for the ordinary value of Planck constant: this entropy could be interpreted as the entropy assignable to a single partonic 2-surface. The same is true if R corresponds to the Compton scale of an elementary particle.
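In numbers, assuming a 40 m arm length for the macroscopic case (an illustrative figure, not taken from the paper) and a CP2 size of about 10^4 Planck lengths for the microscopic one:

import math

L_PLANCK = 1.616e-35  # m

for label, R in (("R = 40 m (macroscopic arm)", 40.0),
                 ("R = 1e4 Planck lengths (CP2 size)", 1e4 * L_PLANCK)):
    dx = math.sqrt(R * L_PLANCK)  # dx ~ sqrt(R * l_P) from <dx^2> ~ R * l_P
    print(f"{label}: dx ~ {dx:.1e} m")
# ~2.5e-17 m versus ~1.6e-33 m: potentially measurable versus utterly negligible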
This argument should demonstrate how sensitive the quantitative estimates are to the detailed view about what holography really means. A loose enough definition of holography can produce an endless number of nonsense formulas, and it is quite possible that the AdS/CFT modelled holography in GRT is completely wrong.

The difference between TGD based and GRT inspired holographies is forced by the new view about space-time allowing also Euclidian space-time regions, and by the new view about General Coordinate Invariance implying SH. This brings in a natural identification of the 2-surfaces serving as holograms. In the GRT framework these surfaces are identified in an ad hoc manner as the outer surfaces of an arbitrarily chosen 3-volume.

After writing the above comments I realized that around 2008 I wrote about a proposal of Hogan. The motivation came from an email of Jack Sarfatti. I learned that the gravitational detectors in the GEO600 experiment had been plagued by unidentified noise in the frequency range 300-1500 Hz. Craig J. Hogan had proposed an explanation in terms of a holographic Universe. By reading the paper I learned that the assumptions needed might be consistent with those of quantum TGD. Light-like 3-surfaces as basic objects, holography, and effective 2-dimensionality are some of the terms appearing repeatedly in the article. The model contained some unacceptable features, such as Planck length as a minimal wavelength, in obvious conflict with Lorentz invariance.

Having written the above comments I got interested again in the explanation of the reported noise. It might be real, although Hogan's explanation is not plausible to me. In the light of the afterwisdom generated during 7 years it is clear that the diffraction analog serving as the starting point of Hogan's model cannot be justified in the TGD framework. Fortunately, diffraction can be replaced by diffusion, which emerges very naturally in the TGD framework and finally allows one to understand how the Planck length emerges in TGD, where the CP2 size is the fundamental length parameter.

  1. One could give up the diffraction picture and begin directly from the formula Δx = (lP L)^(1/2). This would also allow one to avoid the problems with Lorentz invariance generated by the idea about a minimum wavelength. One would give up the interpretation of lP as a wavelength, so that the formula would be just a dimensional analytic guess and therefore unsatisfactory.

  2. Could one assign Δx to the randomness of the light-like orbit of the wormhole contact/partonic 2-surface/fermionic line at it? Δx would represent the randomness of the transversal coordinate of the light-like parton orbit. This randomness could also be assigned to the light-like curves defining fermion lines at the orbits of partonic 2-surfaces. Diffusion would provide the physical analogy rather than diffraction.

    T=L/c would correspond to time, and Δx^2 would be analogous to the mean square distance ⟨r^2⟩ = DT, with D = c^2 tP, diffused during time T. This would also conform qualitatively with the basic idea of p-adic thermodynamics. One would also find the long sought interpretation of the Planck length as a diffusion constant in the TGD framework, where the CP2 length scale is the fundamental length scale (see the numerical sketch after this list).

  3. Why would the noise appear in a certain frequency range? A possible explanation is that large Planck constants are involved. The ratio of the frequency fhigh of the laser beam to the relatively low frequencies fl in the frequency range of the noise would correspond to the values of Planck constant involved: heff/h = fhigh/fl? Maybe the low frequencies could correspond to bunches of dark low energy photons with total energy equal to that of a laser photon. Dark photons could relate to the long range correlations inside the laser beam.

    The presence of large values of Planck constant strongly suggests quantum criticality, which should relate to the long range coherence of the laser beam. Could one associate the long range correlations of the laser beam with quantum criticality realized as a spectrum of Planck constants?
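The numerical sketch promised above. The 1064 nm laser wavelength and the 600 m GEO600 arm length are illustrative assumptions, not fixed by the text:

import math

L_PLANCK = 1.616e-35  # m
C = 2.998e8           # m/s

f_high = C / 1064e-9  # laser frequency for an assumed 1064 nm beam, ~2.8e14 Hz
for f_low in (300.0, 1500.0):
    print(f"f_low = {f_low:6.0f} Hz -> heff/h ~ {f_high / f_low:.1e}")
# ~9.4e11 and ~1.9e11

# Diffusion estimate dx = sqrt(l_P * L) for an assumed arm length L:
L_arm = 600.0  # m
print(f"dx ~ {math.sqrt(L_PLANCK * L_arm):.1e} m")  # ~1.0e-16 m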

How does this relate to the recent experimental finding reporting no fluctuations? I am not an experimentalist, but the experimental situations look very much the same. The simplest explanation is that the frequency range studied in the Fermilab experiment does not correspond to the frequencies made possible by the available spectrum of Planck constants. If I have understood correctly, the range corresponds to considerably higher frequencies than the range 300-1500 Hz of the noise detected in the original experiments.

I do not know whether the noise reported in the motivating article has been eliminated. I hope not! It is unclear how this model relates to Hogan's later model, for which the correlations implied by holography, as he interprets it, were not found. Certainly the idea that waves with Planck wavelength would be amplified to observable noise does not make sense in the TGD framework. It is the diffusion of fermion lines in the transversal degrees of freedom of the light-like random orbits of partonic 2-surfaces, serving as a signature of the non-pointlikeness of the fundamental objects, which would become visible as noise.

For the TGD based model for the noise claimed in the GEO600 experiment see the article Quantum fluctuations in geometry as a new kind of noise? or the chapter More about TGD and Cosmology of "Physics in Many-Sheeted Space-time".

For a summary of earlier postings see Links to the latest progress in TGD.