Tuesday, April 26, 2005

Cold fusion and phase transitions increasing hbar

I started to read Tadahiko Mizuno's book "Nuclear Transmutation: The Reality of Cold Fusion". A few weeks ago I proposed a rather detailed model for the process in terms of a phase transition in which the value of hbar increases. Since then the view about these phase transitions in terms of inclusions of von Neumann algebras has become much more precise (the postings during the last weeks are mostly about the evolution of the ideas related to von Neumann algebras).

The immediate observation was that the model contained a little bug. Standard nuclear physics predicts that neutrons and tritium should be produced in equal amounts in cold fusion proceeding via D+D--> ... reactions, since the rates for the ^3He + neutron and ^3H + proton channels are predicted to be identical to a good approximation. The detected flux of neutrons is however smaller by several orders of magnitude, as for instance Mizuno demonstrated (any theoretician should read the book to learn how incredibly tortuous the path leading to a successful measurement really is). I did not realize this constraint in the first version of the model, but it turned out that the model explains it naturally and circumvents also other objections. Due to the enormous importance of the reality of cold fusion both for the world view and for future energy technology, I glue below the key argument of the revised model appearing also in the earlier longer posting "ORMEs, cold fusion, sonofusion, sono-luminescence". My sincere hope is that the physics community would finally begin to return to reality from the Nethernetherland of M-theory and realize that TGD not only provides an elegant unification of fundamental interactions but already now predicts new technologies.

1. What makes possible cold fusion?

I have proposed that cold fusion might be based on the Trojan horse mechanism in which incoming and target nuclei feed their em gauge fluxes to different space-time sheets so that the electromagnetic Coulomb wall disappears. If part of the Palladium nuclei are "partially dark", this is achieved. Another mechanism could be the de-localization of protons to a volume larger than the nuclear volume, induced by the increase of hbar. This means that the reaction environment differs dramatically from that appearing in usual nuclear reactions, and the standard objections against cold fusion would not apply anymore. Actually this mechanism implies the first one since ordinary and exotic protons do not interact appreciably.

2. Objections against cold fusion

The following arguments are from an excellent review article by Storms.
  • Coulomb wall requires an application of higher energy. Now the electromagnetic Coulomb wall disappears. In TGD framework the classical Z^0 force defines a second candidate for a Coulomb wall, but according to the model for neutrino screening the screening is highly local and could overcome the problem. Of course, one must re-evaluate the earlier models in light of the possibility that also neutrons might be delocalized in some length scale.
  • If a nuclear reaction should occur, the immediate release of energy cannot be communicated to the lattice in the time available. In the present case the time scale is however multiplied by the factor r = hbar_s/hbar and the situation obviously changes.
  • When such an energy is released under normal conditions, energetic particles are emitted along with various kinds of radiation, only a few of which are seen in the various CANR (Chemically Assisted Nuclear Reactions) studies. In addition, gamma emission must accompany helium production, and neutrons and tritium must be produced in equal amounts in any fusion reaction. None of these conditions is observed during the claimed CANR effect, no matter how carefully or how often they have been sought. The large value of hbar, implying a small value of the fine structure constant, would explain the small gamma emission rate (a toy numerical illustration follows this list). If only protons form the quantum coherent state, then fusion reactions do not involve neutrons, and this could explain the anomalously low production of neutrons and tritium.
  • The claimed nuclear transmutation reactions (reported to occur also in living matter) are very difficult to understand in the standard nuclear physics framework. The model allows them since the protons of different nuclei can re-arrange in many different manners when the dark matter state decays back to the normal state.
  • Many attempts to calculate fusion rates based on conventional models fail to support the claimed rates within PdD (Palladium-Deuterium). The atoms are simply too far apart. This objection also fails for obvious reasons.
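The two hbar-dependent points above can be illustrated with a minimal toy calculation. It assumes, purely for illustration, that the gamma emission rate is simply proportional to alpha and that the scaling factor r = hbar_s/hbar has some large value; neither assumption is part of the argument above, the sketch only shows how the numbers move.

alpha = 1 / 137.036        # ordinary fine structure constant
r = 2**11                  # hypothetical hbar_s/hbar, an illustrative value only

alpha_s = alpha / r        # alpha = e^2/(4*pi*hbar*c) scales as 1/r
gamma_suppression = alpha_s / alpha   # ~1/r for a rate taken to be linear in alpha
time_scale_factor = r      # time available for transferring energy to the lattice grows by r

print(f"alpha_s = {alpha_s:.3e}")
print(f"gamma emission suppressed by a factor ~ {gamma_suppression:.1e}")
print(f"time scale multiplied by {time_scale_factor}")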
3. Mechanism of cold fusion

One can deduce a more detailed model for cold fusion from observations, which are discussed systematically in the article of Storms and in the references discussed therein.
  • A critical phenomenon is in question. The average D/Pd ratio must be in the interval (.85,.90). The current must be over-critical and must flow for a time longer than a critical time. The effect occurs only in a small fraction of samples. D at the surface of the cathode is found to be important, and the activity tends to concentrate in patches. The generation of fractures leads to the loss of the anomalous energy production, and even the shaking of the sample can have the same effect. The addition of even a small amount of H_2O to the electrolyte (protons to the cathode) stops the anomalous energy production. All these findings support the view that the patches correspond to a macroscopic quantum phase involving delocalized nuclear protons. The added ordinary protons and the fractures could serve as seeds for a phase transition leading back to the ordinary phase.
  • When D_2O is used as the electrolyte, the process occurs when PdD acts as a cathode but does not seem to occur when it is used as an anode. This suggests that the basic reaction is between the ordinary deuterium D = pn of the electrolyte and the exotic nuclei of the cathode. Denote by p_ex the exotic proton and by D_ex = np_ex the exotic deuterium at the cathode. For ordinary nuclei fusions to tritium and ^3He occur with approximately identical rates. The first reaction produces a neutron and ^3He via D+D--> n+^3He, whereas the second produces a proton and tritium via D+D--> p+^3H. The prediction is thus that one neutron should be produced per each tritium nucleus. Tritium can be observed through its beta decay to ^3He, and the neutron flux is several orders of magnitude smaller than the tritium flux, as found for instance by Tadahiko Mizuno and his collaborators. Hence the reaction producing ^3He cannot occur significantly in cold fusion, which means a conflict with the basic predictions of standard nuclear physics. The explanation is that the proton in the target deuterium D_ex is in the exotic state with a large Compton length, and the production of ^3He occurs very slowly since p_ex and p correspond to different space-time sheets. Since the neutrons and the proton of the D from the electrolyte are in the ordinary state, the Coulomb barrier is absent and tritium production can occur. The mechanism also explains why cold fusion producing ^3He and neutrons does not occur when water is used instead of heavy water.
  • Also more complex reactions between D and Pd nuclei whose protons are in the exotic state can occur since the Coulomb wall is absent. These can lead to reactions transforming the nuclear charge of Pd and thus to nuclear transmutations. Also ^4He, which has been observed, can be produced in reactions such as D+D_ex--> ^4He.
  • Gamma rays, which should be produced in most nuclear reactions such as ^4He production to guarantee momentum conservation, are not observed. The explanation is that the recoil momentum goes to the macroscopic quantum phase and eventually heats the electrolyte system. This obviously provides the mechanism, difficult to imagine in the standard nuclear physics framework, by which the liberated nuclear energy is transferred to the electrolyte.
  • The proposed reaction mechanism explains why neutrons are not produced in amounts consistent with the anomalous energy production. The addition of water to the electrolyte however induces neutron bursts. A possible mechanism is the production of neutrons in the phase transition p_ex--> p: the reaction D_ex--> p+n could occur as the proton contracts back to its ordinary size in such a manner that it misses the neutron. This however requires an energy of 2.23 MeV if the rest masses of D_ex and D are the same. Also D_ex+D_ex--> n+^3He could be induced by the phase transition to ordinary matter when the p_ex transformed to p does not combine with its previous neutron partner to form D but recombines with D_ex to form ^3He_ex--> ^3He so that a free neutron is left over.
      The reader interested in a more detailed model can consult the chapter Quantum Coherent Dark Matter and Bio-Systems as Macroscopic Quantum Systems of "Genes, Memes, Qualia, and Semitrance" and the chapter TGD and Nuclear Physics of "TGD and p-Adic Numbers".

      Monday, April 25, 2005

      Quaternions, Octonions, and Hyperfinite Type II_1 Factors

Quaternions and octonions, as well as their hyper counterparts obtained as subspaces of complexified quaternions and octonions, are central elements of the number theoretic vision about TGD. The latest progress in understanding hyper-finite II_1 factors relates to the question of how one could understand quaternions and octonions and their possible quantum counterparts in this framework. The quantum counterparts of quaternions and octonions were proposed around 1999 and would be directly relevant to the construction of vertices as quantum quaternionic (at least) multiplication and co-multiplication.

The notion of a quaternionic or octonionic Hilbert space is not attractive (there are problems with the notions of orthogonality, hermiticity, and tensor product). A more natural idea is that the Hilbert space is hyper-Kähler in the sense that the operator algebra allows a representation of the quaternion units. The fact that the unit would have trace 1 would automatically imply that for inclusions N subset M a quantum quaternion algebra would appear. The fact that for the Jones index M:N=4 inclusion the hyper-finite II_1 factor is nothing but an infinite Connes tensor product of the quaternionic algebra represented as 2x2 matrices indeed suggests that the hyper-Kähler property and its quantum version are inherent properties of the II_1 factor. The choice of a preferred Kähler structure is necessary and corresponds to the choice of the preferred octonionic imaginary unit at the space-time level. A hyper-Kähler structure in the tangent space of the configuration space makes it a hyper-Kähler manifold with vanishing Einstein tensor and Ricci scalar. If this were not the case, the constant curvature property would imply an infinite Ricci scalar and perturbative divergences in the configuration space functional integral coming from perturbations of the metric determinant. This indeed happens in the case of loop spaces, which is a very strong objection against string models.

The octonionic structure is more intricate since non-associativity cannot be realized by linear operators. The * operation is however anti-linear, and the classical Cayley-Dickson construction uses it to build up a hierarchy of algebras by adding imaginary units one by one. It is possible to extend any *-algebra (in particular a von Neumann algebra) by adding the * operation, call it J, as an imaginary unit to the algebra by posing 3 constraints. The restriction of the construction to the quaternionic units of the von Neumann algebra gives rise to quantum octonions. J acts as the time reversal T/CP operation, and superpositions of states created by linear operators A and antilinear operators AJ are not allowed in general since they would break fermion number conservation. Only states created by operators of the form A or JA are allowed, and J would transform the vacuum to a new one and give rise to negative energy states in TGD framework. One might perhaps say that A and JA create bra and ket type states. At the space-time level the correlates for this are hyper-quaternionic and co-hyper-quaternionic space-time surfaces obeying different variants of Kähler calibration. For the first one the value of the Kähler action for a space-time region inside which the action density is of fixed sign is as near as possible to zero, and for the second one as far as possible from zero. The fact that CP and T are broken supports the view that these dual and non-equivalent dynamics correspond to opposite time orientations of the space-time sheets.
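The claim that the quaternionic algebra represented as 2x2 matrices has a unit of trace 1 can be made concrete with a minimal sketch in which the normalized trace Tr(X)/2 stands in for the trace of the II_1 factor; the matrices below are one standard choice of representation.

import numpy as np

# Quaternion units as 2x2 complex matrices; tr(X) = Tr(X)/2 plays the
# role of the II_1 trace, so that the unit has trace 1.
one = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = qi @ qj

tr = lambda X: np.trace(X) / 2

assert np.allclose(qi @ qi, -one) and np.allclose(qj @ qj, -one)
assert np.allclose(qk @ qk, -one) and np.allclose(qi @ qj @ qk, -one)
print("tr(1) =", tr(one).real, " tr(i) =", tr(qi), " tr(j) =", tr(qj))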
To sum up, hyper-finite factors of type II_1 seem to have a very close relationship with quantum TGD, and it remains to be seen how much they catch of the axiomatics needed to fix TGD completely. Even M^4xCP_2 might emerge from the extension of the von Neumann algebra by *, since the M^8 <--> M^4xCP_2 duality, which provides a number theoretic realization of compactification as wave-particle duality in the cotangent bundle of the configuration space, is made possible by the choice of the preferred octonionic imaginary unit. Could the requirement that the hyper-finite II_1 factor has a classical space-time representation be enough to fix the theory? For more details see the chapter Was von Neumann Right After All? of TGD. Matti Pitkanen

      Sunday, April 24, 2005

      The world of classical worlds is zero-dimensional

Also this day meant considerable progress in the understanding of quantum TGD in terms of hyper-finite type II_1 factors. An interesting question is how much besides II_1 factors, their inclusions, and the generalized Feynman diagrammatics is needed to end up with TGD. In particular, do the crucial number theoretical ideas follow from the framework of von Neumann algebras?

The world of classical worlds is zero-dimensional

The configuration space CH corresponds to the space of 3-surfaces of the 8-D imbedding space, the world of classical worlds. Its tangent space corresponds to the sub-space of gamma matrices of the CH Clifford algebra having von Neumann dimension Tr(Id)=1, and thus has dimension D = 2log_2(Tr(Id)) = 2log_2(1) = 0, obtained by generalizing the formula applying in finite dimensions. A rather paradoxical looking result indeed, but understandable since the trace of the projector to this sub-space is infinitely smaller than the trace of the identity, which equals one. The generalization of the notion of number by allowing infinite primes, integers and rationals however led already earlier to the result that there is an infinite variety of numbers which differ from each other by multiplication with numbers which are units in the real sense but have varying p-adic norms. A single space-time point becomes infinitely structured in the sense of number theory. What more could a number mystic dream of than algebraic holography: a single space-time point containing in its structure the configuration space of all classical worlds! The loop becomes closed: from the point to the infinite-dimensional space of classical worlds which turns out to be the point! Brahman=Atman taken to the extreme!

Configuration space tangent space as logarithm of quantum plane

The identification of the tangent space of the configuration space as a subspace of gamma matrices allows a natural imbedding to the Clifford algebra, and the tangent space can be regarded as a log_2(M:N)<=2-dimensional module for the various sub-factors. One can say that the quantum dimension of the configuration space as an N module is never larger than 2. Note that the dimension of the configuration space as an N module occurs in the proposed formula for hbar.

State space has quantum dimension D<=8

The bosonic and fermionic sectors of the state space correspond both to II_1 sectors by super-symmetry: thus the dimension is D<=4 for both quark and lepton sectors (it is essential that the couplings to the Kähler form of CP_2 are different) and the total quantum dimension is d=4log_2(M:N)<=8 as an N module. Hence the quantum counterpart of the imbedding space seems to be in question. The quantum version of D=8 Bott periodicity probably holds true so that by the self-referential property of von Neumann algebras the imbedding space dimension is unique.

How effective 2-dimensionality can be consistent with 4-dimensionality

The understanding of how effective 2-dimensionality can be consistent with space-time dimension D=4 also improved considerably. The classical non-determinism of Kähler action and the cognitive interpretation are decisive in this respect. At the basic level the physics is 2-dimensional, but the classical non-determinism making possible cognitive states means that additional dimensions emerge as the two parameters appearing in the direct integrals of II_1 factors. These parameters correspond to the two light-like Hamilton-Jacobi coordinates labelling the partonic 2-surfaces appearing in all known classical solutions of field equations.
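The dimension book-keeping of the preceding subsections can be tabulated explicitly. The short sketch below only evaluates the formulas quoted above, M:N = 4cos^2(pi/n), the N-module dimension log_2(M:N) <= 2, and the total quantum dimension 4log_2(M:N) <= 8; the chosen values of n are arbitrary.

import numpy as np

for n in (3, 4, 5, 6, 10, 100):
    beta = 4 * np.cos(np.pi / n) ** 2          # Jones index M:N = 4cos^2(pi/n)
    print(f"n={n:4d}  M:N={beta:.4f}  log2(M:N)={np.log2(beta):.4f}"
          f"  4*log2(M:N)={4 * np.log2(beta):.4f}")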
Returning to effective 2-dimensionality, there is a somewhat different manner to state it: the same 3-surface X^3 can correspond to a large number of space-time surfaces X^4(X^3) by classical non-determinism, and the "position" of the partonic 2-surface characterizes this non-determinism and defines the third spatial coordinate as a dynamical coordinate. This is also one further manner to see the emergence of time: classical non-determinism makes possible time-like entanglement and cognitive states. Without classical non-determinism TGD would reduce to a string model. If string models were correct, our Universe would not be able to form self representations and there would be no string theorists. Thus the mere existence of string theorists proves that they are wrong;-)! For more details see the chapter Was von Neumann Right After All? of TGD. Matti Pitkanen

      Saturday, April 23, 2005

      Cognitive entanglement as Connes tensor product

In the construction of the cognitive hierarchy of S-matrices that I discussed a few days ago, the lowest level N represents matter and the higher levels give cognitive representations. The ordinary tensor product S x S and its tensor powers define a hierarchy of S-matrices, and the two-sided projections of these S-matrices in turn define entanglement coefficients for positive and negative energy states at the various levels of the hierarchy. The following arguments show that the cognitive tensor product restricted to projections of the S-matrix corresponds to the so-called Connes tensor product appearing naturally in the hierarchy of Jones inclusions. A slight generalization of the earlier scenario predicting matter-mind type transitions is forced by this identification, and a beautiful interpretation for these transitions in terms of space-time correlates emerges.

      1. Connes tensor product

Connes has introduced a variant of the tensor product allowing one to express the union cup_i M_i of the inclusion hierarchy M_i as an infinite tensor product M x_N M x_N M x_N... The Connes tensor product x_N differs from the standard tensor product and is obtained by requiring that in the Connes tensor product of Hilbert spaces H_1 and H_2 the condition an x_N b = a x_N nb holds true for all n in N. The Connes tensor product forces one to replace ordinary statistics with braid statistics. The physical interpretation proposed by Connes is that this tensor product could make sense when N represents observables common to the Hilbert spaces H_1 and H_2. Below it will be found that TGD suggests a quite different interpretation. The Connes tensor product makes sense also for finite-dimensional right and left modules. Consider the space M_{pxn} of pxn matrices and the space M_{nxq} of nxq matrices, on which the nxn matrix algebra M_{nxn} acts as a right resp. left multiplier. The tensor product x_N for these matrices is the ordinary matrix product of m_{pxn} and m_{nxq} and belongs to M_{pxq}, so that the dimension of the tensor product space is pxq, much lower than pxqxn^2, and does not depend on n. For a Jones inclusion N takes the role of M_{nxn}, and since M can be regarded as a beta=M:N-dimensional N-module, the tensor product can be said to give sqrt{beta} x sqrt{beta}-dimensional matrices with N-valued entries. In particular, the inclusion sequence is an infinite tensor product of sqrt{beta} x sqrt{beta}-dimensional matrices.
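The finite-dimensional case can be checked directly. The sketch below, with arbitrary values of p, n and q, verifies that the x_N product of a pxn and an nxq matrix is just the matrix product, that the resulting space has dimension pxq independently of n, and that the middle N-linearity (a n) x_N b = a x_N (n b) is nothing but associativity of the matrix product.

import numpy as np

p, n, q = 3, 5, 2
rng = np.random.default_rng(0)
a = rng.standard_normal((p, n))    # right module over N = M_{nxn}
b = rng.standard_normal((n, q))    # left module over N = M_{nxn}
m = rng.standard_normal((n, n))    # an element of N

ab = a @ b                          # the Connes tensor product a x_N b
assert ab.shape == (p, q)           # dimension p*q, independent of n
assert np.allclose((a @ m) @ b, a @ (m @ b))   # (a n) x_N b = a x_N (n b)
print("dimension of the tensor product space:", p * q)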

      2. Does Connes tensor product generate cognitive entanglement?

One can wonder why the entanglement coefficients between positive and negative energy states should be restricted to projections of the S-matrix. The obvious guess is that this gives rise to an entanglement equivalent with the Connes tensor product, so that the action of N on the initial state is equivalent with its action on the final state. This indeed seems to be the case. The basic symmetry of the Connes tensor product translates to the possibility of moving an operator creating particles in the initial state to the final state by conjugating it: this is nothing but the crossing symmetry characterizing the S-matrix. Thus the Connes tensor product generates zero energy states providing a hierarchy of cognitive representations.

      3. Do transitions between different levels of cognitive hierarchy occur?

      The following arguments suggest that the proposed hierarchy of cognitive representations is not exhaustive.
      • Only the (2^n)th tensor powers of S appear in the cognitive hierarchy as it is constructed. The Connes tensor product representation of cup_i M_i would however suggest that all powers of S appear.
      • There is no reason to restrict the states to positive energy states in TGD Universe. In fact, the states of the entire Universe have zero energy. Thus much more general zero energy states are possible in TGD framework than those for which entanglement is given by a projection of S-matrix, and they occur already at the lowest level of the hierarchy.
      On the basis of these observations there is no reason to exclude transitions between different levels of the cognitive hierarchy, transforming an ordinary tensor product of positive and negative energy states with vanishing conserved quantum numbers to a Connes tensor product involving only the projection of the S-matrix as entanglement coefficients. These transitions would give rise to S-matrices connecting different levels and would thus fill the gaps in the spectrum of allowed tensor powers of S.

      4. Space-time correlates for the matter-to-mind transitions

The scatterings in question would represent a kind of matter-to-mind transition, enlightenment, or transition to a Buddha state. At the space-time level zero energy matter would correspond to positive and negative energy states with a space-like separation, whereas cognitive states would correspond to positive and negative energy states with a time-like separation. Because complete classical determinism fails, time-like entanglement makes sense, but since determinism is not completely lost, the entanglement could be of a very special kind only, and the S-matrix could appear as entanglement coefficients. Light-like causal determinants (CDs), identifiable as orbits of both space-like partonic 2-surfaces and light-like stringy surfaces, can be said to represent both matter and mind. According to an earlier proposal, light-like CDs would correspond to both programs and computers for topological quantum computation, and the matter-mind transformation would also be involved with the realization of the genetic code both as cognitive and as material structures. This would support the view that the stringy 2-surfaces in the foliation of the space-time surface are time-like in the interior of the space-time sheet (or more generally, outside the light-like causal determinants), and that the light-like causal determinants correspond to the critical line between matter and mind. For more details see the chapter Was von Neumann Right After All? of TGD.

      Thursday, April 21, 2005

      Electronic alchemy becoming established science!

Some time ago I wrote about mono-atomic elements, something which not a single academic scientist would take seriously publicly, since doing so would mean a loss of all academic respectability. Since I cannot do anything about my self-destructive trait of being intellectually honest and taking seriously even the claims of people who do not possess academic merits but happen to have a brain with an open mind, I proposed a decade ago a model for the mono-atomic elements in terms of what I called electronic alchemy. The idea was that the valence electrons of these elements could drop to larger space-time sheets and form a kind of super-conducting state. As if this were not enough for an academic suicide, a week or two ago I went on to generalize this model and proposed that mono-atomic elements could be manifestations of "partially dark matter" with a large value of Planck constant hbar (see earlier postings and the previous link). My self-destructive behavior mode continued. The need to formulate more precisely the theoretical basis for the quantization of hbar forced me to improve my understanding of the inclusions of type II_1 factors of von Neumann algebras, which had originally inspired the hypothesis about the quantization of hbar. This led to a beautiful general picture about the construction of the S-matrix in TGD framework and a fascinating generalization of quantum theory to describe also the dynamics of cognitive representations in terms of inclusion hierarchies of II_1 factors: this means nothing less than Feynman rules for a TGD inspired theory of consciousness! Of course, the extremely hostile and highly un-intellectual attitude of skeptics stimulates fear in anyone possessing an amygdala, and I am not an exception. Therefore it was a very pleasant surprise to receive an email telling about an article A new kind of alchemy published in New Scientist.

      1. Clusters of atoms mimic atoms

The article tells that during the last two decades growing evidence for a new kind of chemistry has been emerging. Groups of atoms seem to be able to mimic the chemical behavior of a single atom. For instance, clusters of 8, 20, 40, 58, or 92 sodium atoms mimic the behavior of noble gas atoms. By using oxygen to strip away electrons one by one from clusters of Al atoms it is possible to make the cluster mimic an entire series of atoms. For aluminium, cluster ions made of 13, 23, and 37 atoms plus an extra electron are chemically inert. The proposed explanation is that the valence electrons form a kind of mini-conductor with electrons delocalized in the volume of the cluster. The electronic analog of the nuclear shell model predicts that full electron shells define stable configurations analogous to magic nuclei. The model explains the numbers of atoms in the chemically inert Al and Ca clusters and generalizes the notion of valence to the level of the cluster so that the cluster behaves like a single super-atom.
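The electron counting behind this explanation is easy to check against the numbers quoted above: sodium contributes one valence electron per atom, so the quoted cluster sizes coincide with the quoted shell closings, and a 13-atom aluminium cluster with one extra electron has 13*3 + 1 = 40 valence electrons, again a closed shell. The small sketch below does nothing more than this counting.

shell_closings = {8, 20, 40, 58, 92}      # shell closings quoted above

na_clusters = [8, 20, 40, 58, 92]         # Na: 1 valence electron per atom
print([(N, N in shell_closings) for N in na_clusters])

al13_anion = 13 * 3 + 1                   # Al: 3 valence electrons per atom, plus the extra electron
print("Al_13^- valence electrons:", al13_anion, al13_anion in shell_closings)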

      2. TGD based model

My own explanation for the mono-atomic elements was that the valence electrons drop to a larger space-time sheet and behave as a super-conductor. I did not realize at that time that the shell model could provide an obvious manner to make the model quantitative. The electronic shell model as such is of course not the full story. The open question is whether standard physics really allows this kind of de-localization of electrons. A fascinating possibility is that the dropped electrons might correspond to a large value of hbar increasing the Compton lengths of the electrons. One cannot exclude the possibility that this mechanism might be at work even in the case of ordinary conduction electrons.
      • The interaction strength of an electron with the atom is characterized by k=Z*alpha (hbar=c=1), in complete analogy with the gravitational case where one has k=GM_1M_2.
      • Generalizing the formula for the gravitational Planck constant hbar_gr given by hbar_gr/hbar= GM_1M_2/v_0, one would obtain hbar_s/hbar= Z*alpha/v_0, v_0=about 4.8*10^{-4}. Now I hear a critical voice saying that k=Z*alpha for Na (Z=11) does not satisfy the proposed criterion k>1 for the phase transition increasing hbar to occur. Despite this I continue my argument.
      • The Compton length l_e=about 2.4*10^{-12} m of the electron would be scaled up by a factor of about 15.2*Z. For Z=11 (Na) this would scale the electron Compton length to about 4 Angstroms, and the atomic cluster contained within a single electron could contain up to 64 atoms (a numerical check is sketched below). This is not a bad estimate. Electrons with this Compton wave length would be naturally delocalized in the volume of the cluster, as assumed in the model.
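The numbers in the last item can be checked directly. In the sketch below the only input that does not come from the text is the assumed atomic radius of about 1 Angstrom, used to turn the scaled Compton length into a rough count of atoms fitting inside it.

alpha = 1 / 137.036
v0 = 4.8e-4                     # as quoted above
l_e = 2.426e-12                 # electron Compton length in meters

scale_per_Z = alpha / v0        # ~15.2
Z = 11                          # sodium
l_scaled = l_e * scale_per_Z * Z

r_atom = 1.0e-10                # assumed atomic radius, roughly 1 Angstrom
n_atoms = (l_scaled / r_atom) ** 3

print(f"scaling factor per unit charge: {scale_per_Z:.1f}")
print(f"scaled Compton length: {l_scaled * 1e10:.1f} Angstrom")
print(f"rough number of atoms inside: {n_atoms:.0f}")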
      Matti Pitkanen

      Wednesday, April 20, 2005

      Feynman diagrams as higher level particles and their scattering as dynamics of self consciousness

The hierarchy of inclusions of hyper-finite factors of type II_1 as a counterpart for the many-sheeted space-time leads inevitably to the idea that this hierarchy corresponds to a hierarchy of generalized Feynman diagrams for which the Feynman diagrams at a given level become particles at the next level. Accepting this idea, one is led to ask what kind of quantum states these Feynman diagrams correspond to, how one could describe the interactions of these higher level particles, what the interpretation of these higher level states is, and whether they can be detected. In the following M_n denotes a II_1 factor in the hierarchy of Jones inclusions M_0 subset M_1 subset... (for the notations and background see the earlier postings).

      1. Higher level Feynman diagrams

The lines of a Feynman diagram in M_{n+1} are geodesic lines representing orbits of M_n, and these lines meet at vertices and scatter. The evolution along the lines is determined by Delta_{M_{n+1}}. These lines contain within themselves M_n Feynman diagrams with a similar structure, and the hierarchy continues down to the lowest level, at which ordinary elementary particles are encountered. For instance, the generalized Feynman diagrams at the second level are ribbon diagrams obtained by thickening the ordinary diagrams in the new time direction. The interpretation as ribbon diagrams, crucial for topological quantum computation and suggested to be realizable in terms of zero energy states, is natural. At each level a new time parameter is introduced so that the dimension of the diagram can be arbitrarily high. The dynamics is not that of ordinary surfaces but the dynamics induced by Delta_{M_n}.
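The evolution A --> Delta^{it} A Delta^{-it} along the lines can be imitated in a finite-dimensional cartoon in which an arbitrary positive matrix stands in for the modular operator Delta; in the actual II_1 setting Delta comes from Tomita-Takesaki theory, so the sketch below is only meant to show what kind of unitary rotation the automorphism performs.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
Delta = X @ X.T + np.eye(3)                 # positive definite stand-in for Delta
A = rng.standard_normal((3, 3))             # an "observable"

w, V = np.linalg.eigh(Delta)                # Delta = V diag(w) V^T, w > 0

def flow(A, t):
    # A -> Delta^{it} A Delta^{-it}; Delta^{it} is unitary since Delta > 0
    U = V @ np.diag(np.exp(1j * t * np.log(w))) @ V.T
    return U @ A @ U.conj().T

A_t = flow(A, 0.7)
print(np.allclose(np.trace(A_t), np.trace(A)))   # the trace is invariant under the flow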

      2. Quantum states defined by higher level Feynman diagrams

The intuitive picture is that the higher level quantum states correspond to the self-reflective aspect of existence and must provide representations of the quantum dynamics of the lower levels in their own structure. This dynamics is characterized by the S-matrix whose elements have a representation in terms of Feynman diagrams.
      • These states correspond to zero energy states in which initial states have "positive energies" and final states have "negative energies". The net conserved quantum numbers of initial and final state partons compensate each other. Gravitational energies, and more generally gravitational quantum numbers defined as absolute values of the net quantum numbers of initial and final states do not vanish. One can say that thoughts have gravitational mass but no inertial mass.
      • States in sub-spaces of positive and negative energy states are entangled with entanglement coefficients given by S-matrix at the level below.
      To make this more concrete, consider first the simplest non-trivial case. In this case the particles can be characterized as ordinary Feynman diagrams, or more precisely as scattering events, so that the state is characterized by S_1 = P_{in}SP_{out}, where S is the S-matrix and P_{in} resp. P_{out} is the projection to a subspace of initial resp. final states. An entangled state with the projection of the S-matrix giving the entanglement coefficients is in question. The larger the domains of the projectors P_{in} and P_{out}, the higher the representative capacity of the state. The norm of the non-normalized state S_1 is Trace(S_1 S_1^dagger), which is smaller than or equal to one for II_1 factors and equals one in the limit S_1=S. Hence, by the II_1 property, the state always entangles an infinite number of states and can in principle code the entire S-matrix into its entanglement coefficients.
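The bound Trace(S_1 S_1^dagger) <= 1 is already visible in a finite-dimensional cartoon in which the normalized trace Tr(X)/N stands in for the II_1 trace; the dimensions and projections chosen below are arbitrary.

import numpy as np

N = 8
rng = np.random.default_rng(1)
Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
S, _ = np.linalg.qr(Z)                       # a random unitary playing the role of S

def proj(k):                                 # projection onto the first k basis states
    P = np.zeros((N, N))
    P[:k, :k] = np.eye(k)
    return P

tr = lambda X: np.trace(X).real / N          # normalized trace, tr(1) = 1

P_in, P_out = proj(5), proj(3)
S1 = P_in @ S @ P_out
print(tr(S1 @ S1.conj().T))                  # smaller than one
print(tr(S @ S.conj().T))                    # equals one in the limit S_1 = S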

      3. The interaction of M_n Feynman diagrams at the second level of the hierarchy

      What constraints can one pose on the higher level reactions? How do Feynman diagrams interact? Consider first the scattering at the second level of the hierarchy (M_1), the first level M_0 being assigned to the interactions of ordinary matter.
      • Conservation laws pose constraints on the scattering at level M_1. The Feynman diagrams can transform to new Feynman diagrams only in such a manner that the net quantum numbers are conserved separately for the initial positive energy states and the final negative energy states of the diagram. The simplest assumption is that positive energy matter and negative energy matter know nothing about each other and effectively live in separate worlds. The scattering matrix for Feynman diagram like states would thus be simply the tensor product S x S^dagger, where S is the S-matrix characterizing the lowest level interactions and x denotes the tensor product (a toy illustration follows this list). Reductionism would be realized in the sense that, apart from the new elements brought in by Delta_{M_n} defining the single particle free dynamics, the lowest level would determine in principle everything occurring at the higher levels, providing representations about representations about... of what occurs at the basic level. The lowest level would represent the physical world and the higher levels the theory about it.
      • The description of hadronic reactions in terms of partons serves as a guideline when one tries to understand the higher level Feynman diagrams. The fusion of hadronic space-time sheets corresponds to the vertices of M_1. In the vertex the analog of a parton plasma is formed by a process known as parton fragmentation. This means that the partonic Feynman diagrams belonging to disjoint copies of M_0 find themselves inside the same copy of M_0. The standard description would apply to the scattering of the initial resp. final state partons.
      • After the scattering of the partons, hadronization takes place. The analog of hadronization in the present case is the organization of the initial and final state partons into groups I_i and F_i such that the net conserved quantum numbers are the same for I_i and F_i. These conditions can be satisfied if the interactions in the plasma phase occur only between particles belonging to the clusters labelled by the index i. Otherwise only single particle states in M_1 would be produced in the reactions in the generic case. The cluster decomposition of the S-matrix into a direct sum of terms corresponding to partitions of the initial state particles into clusters which do not interact with each other obviously corresponds to the "hadronization". Therefore no new dynamics need be introduced.
      • One cannot avoid the question whether the parton picture of hadrons indeed corresponds to a higher level physics of this kind. This would require that hadronic space-time sheets carry the net quantum numbers of hadrons. The net quantum numbers associated with the initial state partons would then be naturally identical with the net quantum numbers of the hadron. Partons and their negative energy conjugates would in this picture provide a representation of the hadron about the hadron. This kind of interpretation of partons would make understandable why they cannot be observed directly. A possible objection is that the net gravitational mass of the hadron would be three times the gravitational mass deduced from the inertial mass of the hadron if the partons feed their gravitational fluxes to the space-time sheet carrying Earth's gravitational field.
      • This picture could also relate to the suggested duality between the string and parton pictures. In the parton picture a hadron is formed from partons represented by space-like 2-surfaces X^2_i connected by join along boundaries bonds. In the string picture the partonic 2-surfaces are replaced with string orbits. If one puts positive and negative energy particles at the ends of a string diagram one indeed obtains a higher level representation of the hadron. If these pictures are dual, then also in the parton picture positive and negative energies should compensate each other. Interestingly, the light-like 3-D causal determinants identified as orbits of partons could be interpreted as orbits of light-like string world sheets with the "time" coordinate varying in a space-like direction.
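The tensor product S x S^dagger appearing in the first item of this list can be written down explicitly in a toy model: for any unitary S the matrix built this way is again unitary, so no dynamics beyond the lowest-level S-matrix is introduced. The dimension chosen below is arbitrary.

import numpy as np

n = 4
rng = np.random.default_rng(2)
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S, _ = np.linalg.qr(Z)                       # lowest-level S-matrix, a random unitary

S_level1 = np.kron(S, S.conj().T)            # S x S^dagger for the diagram-like states
print(np.allclose(S_level1 @ S_level1.conj().T, np.eye(n * n)))   # unitarity is preserved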

      4. Scattering of Feynman diagrams at the higher levels of hierarchy

      This picture generalizes to the description of higher level Feynman diagrams.
      • Assume that the higher level vertices have a recursive structure allowing one to reduce the Feynman diagrams to ordinary Feynman diagrams by a procedure consisting of a finite number of steps.
      • The lines of diagrams are classified as incoming or outgoing lines according to whether the time orientation of the line is positive or negative. The time orientation is associated with the time parameter t_n characterizing the automorphism Delta_{M_n}^{it_n}. The incoming and outgoing net quantum numbers compensate each other. These quantum numbers are basically the quantum numbers of the state at the lowest level of the hierarchy.
      • In the vertices the M_{n+1} particles fuse and the M_n particles form the analog of a quark gluon plasma. The initial and final state particles of the M_n Feynman diagram scatter independently and the S-matrix S_{n+1} describing the process is the tensor product S_n x S_n^dagger (x denotes the tensor product; see the sketch after this list). By the clustering property of the S-matrix, this scattering occurs only within groups formed by the incoming and outgoing M_n particles, and each outgoing M_{n+1} line contains an irreducible M_n diagram. By continuing the recursion one finally ends up with ordinary Feynman diagrams.
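Iterating the rule S_{n+1} = S_n x S_n^dagger of the last item gives a concrete picture of the recursion: the matrix dimension grows as d^(2^n), in accordance with the appearance of the (2^n)th tensor powers of S in the cognitive hierarchy. The starting dimension below is arbitrary.

import numpy as np

d = 2
rng = np.random.default_rng(3)
Z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
S, _ = np.linalg.qr(Z)                        # ordinary S-matrix at the lowest level

for level in range(1, 4):
    S = np.kron(S, S.conj().T)                # S_{n+1} = S_n x S_n^dagger
    print(f"level {level}: dimension {S.shape[0]} = {d}^{2**level}")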

      5. A connection with TGD inspired theory of consciousness

      The implications of this picture for TGD inspired theory of consciousness are rather breathtaking.
      • The hierarchy of self representations and the reduction of their quantum dynamics to the dynamics of the material world, apart from the effects brought in by the automorphisms Delta_{M_n} determining the free propagation of thoughts, would mean a concrete calculable theory for the quantum dynamics of cognition. My sincere hope is however that no one would ever christen these states "particles of self consciousness". These states are not conscious; consciousness would be in the quantum jump between these states.
      • Cognitive representations would possess "gravitational" charges, in particular gravitational mass, so that thoughts could be put into "gravitational scale". I have proposed that "gravitational" charges correspond to classical charges characterizing the systems at space-time level as opposed to quantum charges.
      • As found, even hadrons could form self representations usually assigned to the human brain. This is certainly something that a neuroscientist would not propose, but it conforms with the basic prediction of TGD inspired theory of consciousness about an infinite self hierarchy involving cognitive representations at all levels of the hierarchy (see for instance the chapter Time, Space-time, and Consciousness of "Genes, Memes, Qualia,...").
      • The TGD inspired model of topological quantum computation in terms of zero energy cognitive states inspired the proposal that the appearance of a representation and its negative energy conjugate could relate very intimately to the fact that DNA appears as double helices of a strand and its conjugate. This could also relate to the fact that binary structures are common in living matter.
      • One is forced to consider a stronger characterization of dark matter as matter at the higher levels of the hierarchy with vanishing net inertial quantum numbers but non-vanishing "gravitational" quantum numbers. We would detect dark matter via its "gravitational" charges. We would also experience it directly since our thoughts would be dark matter! The cosmological estimates for the proportions of dark matter and dark energy would also give an estimate for the gravitational mass of thoughts in the Universe: if this interpretation is correct, the encounters with UFOs and aliens cease to be material for news!
      For more details see the new chapter "Was von Neumann Right After All?" of TGD. Matti Pitkanen

      Monday, April 18, 2005

      Feynman diagrams within Feynman diagrams and reflective levels of consciousness

Here is the little step forward of this day, made in understanding the role of Jones inclusions of hyper-finite factors of type II_1 as a key element in the construction of the quantum counterpart of the many-sheeted space-time. It is possible to assign to a given Jones inclusion N subset M an entire hierarchy of Jones inclusions M_0 subset M_1 subset M_2..., M_0=N, M_1=M. A natural interpretation for these inclusions would be as a sequence of topological condensations.

This sequence also defines a hierarchy of Feynman diagrams inside Feynman diagrams. The factor M containing the Feynman diagram having as its lines the unitary orbits of N under Delta_M (which defines a canonical automorphism of the II_1 factor) becomes a parton in M_1, and its unitary orbits under Delta_{M_1} define the lines of Feynman diagrams in M_1. The outcome is a hierarchy of Feynman diagrams within Feynman diagrams, a fractal structure for which many particle scattering events at a given level become particles at the next level. The particles at the next level represent the dynamics at the lower level: they have the property of "being about", representing perhaps the most crucial element of conscious experience. Since net conserved quantum numbers can vanish for a system in TGD Universe, this kind of hierarchy indeed allows a realization as zero energy states. Crossing symmetry can be understood in terms of this picture and has been applied to construct a model for the S-matrix in the high energy limit.

The quantum image of the orbit of a parton has dimension log_2(M:N)+1 <= 3. Two subsequent inclusions form a natural basic unit since the bipartite diagrams classifying Jones inclusions are duals of each other by black-white duality. In this double inclusion a two-parameter family of deformations of the M counterpart of a partonic 2-surface is formed and has quantum dimension log_2(M:N)+2 <= 4. One might perhaps say that quantum space-time corresponds to a double inclusion and that further inclusions bring in N-parameter families of space-time surfaces. For more details see the new chapter Was von Neumann Right After All? Matti Pitkanen

      Saturday, April 16, 2005

      Yes, von Neumann was right!

      I already told about the progress in understanding quantum TGD in terms of von Neumann algebras, in particular hyper-finite factors of type II_1. I have now worked out the first draft, and I dare to say that the resulting picture is incredibly elegant, allowing a concrete and precise formulation of what non-commutative space-time means. I attach here the abstract of the new chapter Was von Neumann Right After All?.
      The work with TGD inspired model for quantum computation led to the realization that von Neumann algebras, in particular hyper-finite factors of type II_1 could provide the mathematics needed to develop a more explicit view about the construction of S-matrix. In this chapter I will discuss various aspects of type II_1 factors and their physical interpretation in TGD framework.

      1. Philosophical ideas behind von Neumann algebras

The goal of von Neumann was to generalize the algebra of quantum mechanical observables. The basic ideas behind the von Neumann algebra are dictated by physics. The algebra elements allow Hermitian conjugation and observables correspond to Hermitian operators. A measurable function of an operator belongs to the algebra. The predictions of quantum theory are expressible in terms of traces of observables. The highly non-trivial requirement of von Neumann was that identical a priori probabilities for the detection of the states of an infinite state system must make sense. Since quantum mechanical expectation values are expressible in terms of operator traces, this requires that the unit operator has unit trace. In the finite-dimensional case it is easy to build observables out of minimal projections to 1-dimensional eigen spaces of observables. In the infinite-dimensional case the probability of a projection to a 1-dimensional sub-space vanishes if each state is equally probable. The notion of observable must thus be modified by excluding 1-dimensional minimal projections and allowing only projections whose trace would be infinite if one used the straightforward generalization of the matrix algebra trace as the dimension of the projection. The definitions adopted by von Neumann allow also more general algebras than the type II_1 algebras, for which traces are never larger than one. Type I_n algebras correspond to finite-dimensional matrix algebras with finite traces, whereas I_infty does not allow bounded traces. For algebras of type III traces are always infinite and the notion of trace becomes useless.
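The trace book-keeping described above can be imitated by approximating the hyper-finite II_1 factor with NxN matrix algebras carrying the normalized trace Tr(X)/N: the unit has trace 1, a projection has trace equal to the fraction of dimensions it keeps, and a minimal rank-1 projection has trace 1/N, which vanishes as N grows. The sketch below only spells this out.

import numpy as np

def normalized_trace(X):
    return np.trace(X).real / X.shape[0]

for N in (2, 8, 64, 1024):
    unit = np.eye(N)
    half = np.diag([1.0] * (N // 2) + [0.0] * (N - N // 2))   # projection keeping half the dimensions
    minimal = np.zeros((N, N))
    minimal[0, 0] = 1.0                                       # rank-1 ("minimal") projection
    print(N, normalized_trace(unit), normalized_trace(half), normalized_trace(minimal))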

      2. von Neumann, Dirac, and Feynman

The association of the algebras of type I with standard quantum mechanics allowed one to unify matrix mechanics with wave mechanics. Because of the finiteness of the traces von Neumann regarded the factors of type II_1 as fundamental and the factors of type III as pathological. The highly pragmatic and successful approach of Dirac based on the notion of the delta function, plus the emergence of Feynman graphs and the functional integral, meant that the von Neumann approach was forgotten to a large extent. Algebras of type II_1 have emerged only much later in conformal and topological quantum field theories, allowing one to deduce invariants of knots, links and 3-manifolds. Also algebraic structures known as bi-algebras, Hopf algebras, and ribbon algebras relate closely to type II_1 factors. In topological quantum computation based on braids and the corresponding topological S-matrices they play an especially important role.

      3. Factors of type II_1 and quantum TGD

There are good reasons to believe that hyper-finite (ideal for numerical approximations) von Neumann algebras of type II_1 are of direct relevance for TGD.

3.1 Equivalence of generalized loop diagrams with tree diagrams

The work with bi-algebras led to the proposal that the generalized Feynman diagrams of TGD at the space-time level satisfy a generalization of the duality of old-fashioned string models. Generalized Feynman diagrams containing loops are equivalent with tree diagrams, so that they could be interpreted as representing computations or analytic continuations. This symmetry can be formulated as a condition on algebraic structures generalizing bi-algebras. The new element is the possibility of vacuum lines having a natural counterpart at the level of bi-algebras and braid diagrams. At the space-time level they correspond to vacuum extremals.

3.2 Inclusions of hyper-finite II_1 factors as a basic framework to formulate quantum TGD

The basic facts about von Neumann factors of type II_1 suggest a more concrete view about the general mathematical framework needed.
      • The effective 2-dimensionality of the construction of quantum states and the configuration space geometry in the quantum TGD framework makes hyper-finite factors of type II_1 very natural as operator algebras of the state space. Indeed, the elements of the conformal algebras are labelled by discrete numbers and also the modes of the induced spinor fields are labelled by a discrete label, which guarantees that the tangent space of the configuration space is a separable Hilbert space and the Clifford algebra is thus a hyper-finite type II_1 factor. The same holds true also at the level of the configuration space degrees of freedom, so that the bosonic degrees of freedom correspond to a factor of type I_infty unless super-symmetry reduces it to a factor of type II_1.
      • Four-momenta relate to the positions of the tips of the future and past directed light cones appearing naturally in the construction of the S-matrix. In fact, the configuration space of 3-surfaces can be regarded as a union of big-bang/big-crunch type configuration spaces obtained as a union of light-cones parameterized by the positions of their tips. The algebras of observables associated with bounded regions of M^4 are hyper-finite and of type III_1. The algebras of observables in the space spanned by the tips of these light-cones are not needed in the construction of the S-matrix, so that there are good hopes of avoiding the infinities coming from infinite traces.
      • The many-sheeted space-time concept forces one to refine the notion of sub-system. Jones inclusions N subset M for factors of type II_1 define in a generic manner the imbedding of interacting sub-systems into a universal II_1 factor, which now corresponds naturally to the infinite Clifford algebra of the tangent space of the configuration space of 3-surfaces and contains the interaction as an M:N-dimensional analog of a tensor factor. Topological condensation of a space-time sheet to a larger space-time sheet, formation of bound states by the generation of join along boundaries bonds, interaction vertices in which the space-time surface branches like a line of a Feynman diagram: all these situations could be described by a Jones inclusion characterized by the Jones index M:N, assigning to the inclusion also a minimal conformal field theory, and a conformal theory with k=1 Kac-Moody symmetry for M:N=4. The M:N=4 option need not be realized physically as a quantum field theory but as a string like theory, whereas the limit D=4-epsilon--> 4 could correspond to the M:N--> 4 limit. An entire hierarchy of conformal field theories is thus predicted besides quantum field theory.
3.3 Generalized Feynman diagrams are realized at the level of M as quantum space-time surfaces

The key idea is that the generalized Feynman diagrams realized in terms of space-time sheets have counterparts at the level of M, identifiable as the Clifford algebra associated with the entire space-time surface X^4. A 4-D Feynman diagram as a part of the space-time surface is mapped to its beta=M:N<=4-dimensional quantum counterpart.
      • von Neumann algebras allow a universal unitary automorphism A--> Delta^{it}A Delta^{-it}, fixed apart from inner automorphisms, and the time evolution of the partonic 2-surfaces defining a 3-D light-like causal determinant corresponds to the automorphism N_i--> Delta^{it}N_i Delta^{-it}, performing a time dependent unitary rotation of N_i along the line. At the configuration space level however a sum over the allowed values of t appears and should give rise to the TGD counterpart of the propagator as the analog of the stringy propagator INT_0^t exp(iL_0 t')dt'. Number theoretical constraints from p-adicization suggest a quantization of t as t=SUM_i n_i y_i>0, where z_i=1/2+iy_i are non-trivial zeros of the Riemann zeta (a small numerical illustration follows this list).
      • At the space-time level the "ends" of the orbits of partonic 2-surfaces coincide at the vertices, so that also their images N_i subset M coincide. The condition N_i= N_j=...=N, where the sub-factors N at different vertices differ only by an automorphism, poses stringent conditions on the values t_i, and Bohr quantization at the level of M results. Vertices can be obtained as vacuum expectations of the operators creating the states associated with the incoming lines (crossing symmetry is automatic).
      • The equivalence of loop diagrams with tree diagrams would be due to the possibility to move the ends of the internal lines along the lines of the diagram so that only diagrams containing 3-vertices and self energy loops remain. Self energy loops are trivial if the product associated with the fusion vertex and the co-product associated with the annihilation vertex compensate each other. The possibility to assign a quantum group or Kac-Moody group to the diagram gives good hopes of realizing the product and co-product. Octonionic triality would be an essential prerequisite for transforming N-vertices to 3-vertices. The equivalence allows one to develop an argument proving the unitarity of the S-matrix.
      • A formulation using category theoretical language suggests itself. The category of space-time sheets has as its most important arrow topological condensation via the formation of wormhole contacts. This category is mapped to the category of II_1 sub-factors of the configuration space Clifford algebra having inclusion as the basic arrow. Space-time sheets are mapped to the category of Feynman diagrams in M with lines defined by the unitary rotations of N_i induced by Delta^{it}.
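The quantization rule of the first item of this list can be illustrated numerically. In the sketch below only the imaginary parts of the first three non-trivial zeros of zeta are used, the integer coefficients n_i run over a small range, and the eigenvalue L_0 is an arbitrary illustrative choice; the propagator is the elementary integral INT_0^t exp(iL_0 t')dt' = (exp(iL_0 t)-1)/(iL_0).

import numpy as np
from itertools import product

y = np.array([14.134725, 21.022040, 25.010858])   # Im parts of the first zeros of zeta
L0 = 2.0                                           # illustrative L_0 eigenvalue

# allowed values t = sum_i n_i*y_i > 0 with small non-negative integers n_i
ts = sorted({float(np.dot(n, y)) for n in product(range(3), repeat=3)} - {0.0})

for t in ts[:5]:
    prop = (np.exp(1j * L0 * t) - 1) / (1j * L0)   # INT_0^t exp(i*L_0*t') dt'
    print(f"t = {t:9.4f}   propagator = {prop.real:+.4f}{prop.imag:+.4f}i")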
3.4 Is hbar dynamical?

The work with topological quantum computation inspired the hypothesis that hbar might be dynamical and that its values might relate in a simple manner to the logarithms of the Beraha numbers giving the Jones indices M:N. The model for the evolution of hbar implied that hbar is infinite for the minimal value M:N=1 of the Jones index. The construction of a model explaining the strange finding that planetary orbits seem to correspond to a gigantic value of the "gravitational" Planck constant led to the hypothesis that when the system becomes non-perturbative, so that the perturbative expansion in terms of the parameter k=alpha Q_1Q_2 ceases to converge, a phase transition increasing the value of hbar to hbar_s= k*hbar/v_0, where v_0=4.8*10^{-4} is the ratio of the Planck length to the CP_2 length, occurs. This involves also a transition to a macroscopic quantum phase since Compton lengths and times increase dramatically. Dark matter would correspond to ordinary matter with a large value of hbar, which is conformally confined in the sense that the sum of the complex super-canonical conformal weights (related in a simple manner to the complex zeros of Riemann zeta) is real for a many-particle state behaving like a single quantum coherent unit.

The value of hbar for M:N=1 is large but not infinite, and thus in conflict with the original proposal. A more refined suggestion is that the evolution of hbar as a function of M:N=4cos^2(pi/n) can be interpreted as a renormalization group evolution for the phase resolution. The earlier identification is replaced by a linear renormalization group equation for 1/hbar allowing as its solutions the earlier solution plus an arbitrary integration constant. Hence 1/hbar can approach a finite value 1/hbar(3)= v_0/(k*hbar(n-->infty)) in the limit n--> 3. The evolution equation gives a concrete view about how the various charges should be imbedded by the Jones inclusion into the larger algebra so that the value of hbar appearing in commutators evolves in the required manner.

The dependence of hbar on the parameters of the interacting systems means that it is associated with the interface of the interacting systems. Instead of being an absolute constant of nature, hbar becomes something characterizing the interaction between two systems, the "position" of the II_1 factor N inside M. The interface could correspond to wormhole contacts, join along boundaries bonds, light-like causal determinants, etc. This property of hbar is consistent with the fact that the vacuum functional, expressible as an exponent of Kähler action, does not depend at all on hbar. For more details see the new chapter Was von Neumann Right After All? Matti Pitkanen

      Thursday, April 14, 2005

      Towards automaticized publishing

      A new era in science publishing is at its dawn. You can find on the web programs that produce publications automatically. If you are interested in an effective boosting of your career, and perhaps even in extending your competence to entirely new branches of science, say to become a competent M-theorist, I recommend making a visit here. The publication generator has already been able to produce a paper accepted for publication without review! As a matter of fact, on the basis of my own experiences about hep-th during the last years I have a strong feeling that this novel method of producing publications was discovered long ago by ingenious M-theorists, but they have kept it as an "industrial secret" for obvious reasons. We are living in fascinating times! Matti Pitkanen

      Monday, April 11, 2005

      Was von Neumann right after all?

      The work with the TGD inspired model for topological quantum computation led to the realization that von Neumann algebras, in particular hyper-finite factors of type II_1, seem to provide the mathematics needed to develop a more explicit view about the construction of the S-matrix. I have already discussed a vision for how to achieve this. In this chapter I will discuss in a more explicit manner various fascinating aspects of type II_1 factors and their physical interpretation in TGD framework.

      Philosophical ideas behind von Neumann algebras

The goal of von Neumann was to generalize the algebra of quantum mechanical observables. The basic ideas behind the von Neumann algebra are dictated by physics. The algebra elements allow Hermitian conjugation and observables correspond to Hermitian operators. A measurable function of an operator belongs to the algebra. The predictions of quantum theory are expressible in terms of traces of observables. The density matrix defining the expectations of observables in an ensemble is the basic example. The highly non-trivial requirement of von Neumann was that identical a priori probabilities for the detection of the states of an infinite state system must make sense. Since quantum mechanical expectation values are expressible in terms of operator traces, this requires that the unit operator has unit trace. In the finite-dimensional case it is easy to build observables out of minimal projections to 1-dimensional eigen spaces of observables. In the infinite-dimensional case the probability of a projection to a 1-dimensional sub-space vanishes if each state is equally probable. The notion of observable must thus be modified by excluding 1-dimensional minimal projections and allowing only projections whose trace would be infinite if one used the straightforward generalization of the matrix algebra trace as the dimension of the projection. The non-trivial implication of the fact that traces of projections are never larger than one is that the eigen spaces of the density matrix must be infinite-dimensional for non-vanishing projection probabilities. Quantum measurements can lead with a finite probability only to mixed states with a density matrix which is a projection operator to an infinite-dimensional subspace. The simple von Neumann algebras for which the unit operator has unit trace are known as factors of type II_1. The definitions adopted by von Neumann allow however more general algebras. Type I_n algebras correspond to finite-dimensional matrix algebras with finite traces, whereas I_infty does not allow bounded traces. For algebras of type III traces are always infinite and the notion of trace becomes useless (it might however be possible to assign to the trace a number theoretic interpretation, say as an infinite prime having unit norm in any finite-p p-adic topology).

      von Neumann, Dirac, and Feynman

The association of algebras of type I with standard quantum mechanics allowed the unification of matrix mechanics with wave mechanics. Note however that the assumption of a continuous momentum state basis is in conflict with separability, but the particle-in-a-box idealization allows one to circumvent this problem (the notion of space-time sheet brings the box into physics as something completely real). Because of the finiteness of the traces, von Neumann regarded the factors of type II_1 as fundamental and the factors of type III as pathological. The highly pragmatic and successful approach of Dirac based on the notion of the delta function, plus the emergence of Feynman graphs and the functional integral, meant that the von Neumann approach was largely forgotten. Algebras of type II_1 have emerged only much later in conformal and topological quantum field theories, allowing one to deduce invariants of knots, links and 3-manifolds. Also algebraic structures known as bi-algebras, Hopf algebras, and ribbon algebras relate closely to type II_1 factors. In topological quantum computation based on braids and the corresponding topological S-matrices they play an especially important role. In axiomatic quantum field theory defined in Minkowski space the algebras of observables associated with bounded space-time regions correspond quite generally to the hyper-finite factor of type III_1. One can criticize the idea of identical a priori probabilities, but the assumption could also be justified by the finiteness of quantum theory. Indeed, it is traces which produce the infinities of quantum field theories. The regularization procedures used to eliminate the divergences might actually be a manner of transforming the type III_1 algebra of quantum field theories into a type II_1 algebra.

      Factors of type II_1 and quantum TGD

For me personally the realization that the TGD Universe is tailored for topological quantum computation also led to the realization that hyper-finite (ideal for numerical approximations) von Neumann algebras of type II_1 have direct relevance for TGD.

1. Equivalence of generalized loop diagrams with tree diagrams

The work with bi-algebras led to the proposal that the generalized Feynman diagrams of TGD at the space-time level satisfy a generalization of the duality of old-fashioned string models. Generalized Feynman diagrams containing loops are equivalent with tree diagrams, so that they could be interpreted as representing computations or analytic continuations. This symmetry can be formulated as a condition on algebraic structures generalizing bi-algebras. The new element is the possibility of vacuum lines, which have a natural counterpart at the level of bi-algebras and braid diagrams. At the space-time level they correspond to vacuum extremals.

2. Generalized Feynman diagrams and basic properties of hyper-finite II_1 factors

The basic facts about von Neumann factors of type II_1 suggest a more concrete view about the general mathematical framework needed.
• The effective 2-dimensionality of the construction of quantum states and configuration space geometry in the quantum TGD framework makes hyper-finite factors of type II_1 very natural as operator algebras of the state space. Indeed, the elements of the conformal algebras are labelled by discrete numbers, and also the modes of the induced spinor fields are labelled by a discrete label, which guarantees that the tangent space of the configuration space is a separable Hilbert space and the Clifford algebra is thus a hyper-finite type II_1 factor. The same holds true also at the level of configuration space degrees of freedom, so that bosonic degrees of freedom correspond to a factor of type I_infty unless super-symmetry reduces it to a factor of type II_1.
• Four-momenta relate to the positions of the tips of the future and past directed light cones appearing naturally in the construction of the S-matrix. In fact, the configuration space of 3-surfaces can be regarded as a union of big-bang/big-crunch type configuration spaces, obtained as a union of light-cones parameterized by the positions of their tips. The algebras of observables associated with bounded regions of M^4 are hyper-finite and of type III_1. The algebras of observables in the space spanned by the tips of these light-cones are not needed in the construction of the S-matrix, so that there are good hopes of avoiding infinities coming from infinite traces.
• The many-sheeted space-time concept forces one to refine the notion of sub-system. Jones inclusions N \subset M for factors of type II_1 define in a generic manner the imbedding of interacting sub-systems into a universal II_1 factor, which now corresponds naturally to the infinite Clifford algebra of the tangent space of the configuration space of 3-surfaces and contains the interaction as an M:N-dimensional analog of a tensor factor. Topological condensation of a space-time sheet to a larger space-time sheet, formation of bound states by the generation of join along boundaries bonds, interaction vertices in which the space-time surface branches like a line of a Feynman diagram: all these situations could be described by a Jones inclusion characterized by the Jones index M:N, which assigns to the inclusion also a minimal conformal field theory, and a conformal theory with k=1 Kac-Moody symmetry for M:N=4 (the allowed values of the index are recalled after this list). The M:N=4 option need not be realized physically and might relate to the fact that dimensional regularization works only in D=4-epsilon.
• The construction of generalized Feynman diagrams requires the identification of the counterparts of propagators as unitary evolutions of single particle systems along 3-D light-like causal determinants representing the lines of generalized Feynman diagrams as orbits of partons. von Neumann algebras allow a universal unitary automorphism Delta^{it}, fixed apart from inner automorphisms, and this automorphism is an extremely natural candidate for this unitary evolution (see the formula after this list). Only the value of the parameter t would remain open.
• The vertices must be constructed as overlaps of the lines (light-like 3-D CDs) entering the vertex, which is a 2-D partonic surface. Jones inclusions might define universal vertices. A simultaneous imbedding of all lines into a factor M of type II_1 is required, and the vertex can be obtained as a vacuum expectation of the product of the operators defining the state. The only non-uniqueness problem is that the imbeddings are fixed only up to inner automorphisms. The algebraic hologram idea is realized in the sense that the operator algebras of all other lines are imbeddable into the operator algebra of a given line. The triviality of loops for generalized Feynman diagrams gives strong conditions on the vertices.
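The two operator-algebraic facts invoked in the last two items are standard results (Jones' index theorem and Tomita-Takesaki theory); only their interpretation as vertex and propagator building blocks is the TGD proposal made above. In the notation used there:

```latex
% Jones: the index of an inclusion N \subset M of II_1 factors is quantized,
% M:N = 4cos^2(pi/n) for n = 3,4,5,... (values 1, 2, 2.618..., 3, ...) or M:N >= 4
\mathcal{M}:\mathcal{N} \;\in\; \left\{\, 4\cos^{2}\!\left(\frac{\pi}{n}\right) : n = 3,4,5,\dots \right\} \cup [4,\infty)

% Tomita-Takesaki: the modular operator \Delta generates a one-parameter
% automorphism group, canonical up to inner automorphisms
\sigma_{t}(x) \;=\; \Delta^{it}\, x\, \Delta^{-it}, \qquad x \in \mathcal{M},\ t \in \mathbb{R}
```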
3. Is hbar dynamical?

The work with topological quantum computation inspired the hypothesis that hbar might be dynamical, and that its values might relate in a simple manner to the logarithms of the Beraha numbers giving the Jones indices M:N. The model for the evolution of hbar implied that hbar is infinite for the minimal value M:N=1 of the Jones index. The construction of a model explaining the strange finding that planetary orbits seem to correspond to a gigantic value of "gravitational" Planck constant led to the hypothesis that when the system becomes non-perturbative, so that the perturbative expansion in terms of the parameter k = alpha*Q_1*Q_2 ceases to converge, a phase transition increasing the value of hbar to hbar_s = k*hbar/v_0 occurs, where v_0 is the ratio of the Planck length to the CP_2 length. This involves also a transition to a macroscopic quantum phase, since Compton lengths and times increase dramatically. Dark matter would correspond to ordinary matter with a large value of hbar, conformally confined in the sense that the sum of the complex super-canonical conformal weights (related in a simple manner to the complex zeros of Riemann Zeta) is real for a many-particle state behaving like a single quantum coherent unit.

The value of hbar for M:N=1 is large but not infinite, and thus in conflict with the original proposal. A more refined suggestion is that the evolution of hbar as a function of M:N = 4cos^2(pi/n) can be interpreted as a renormalization group evolution for the phase resolution. The earlier identification is replaced by a linear renormalization group equation for 1/hbar allowing as its solutions the earlier solution plus an arbitrary integration constant. Hence 1/hbar can approach a finite value 1/hbar(3) = v_0/(k*hbar(n --> infty)) at the limit n --> 3. The evolution equation gives a concrete view about how various charges should be imbedded by the Jones inclusion into the larger algebra so that the value of hbar appearing in commutators evolves in the required manner.

The dependence of hbar on the parameters of the interacting systems means that it is associated with the interface between the interacting systems. Instead of being an absolute constant of nature, hbar becomes something characterizing the interaction between two systems, the "position" of the II_1 factor N inside M. The interface could correspond to wormhole contacts, join along boundaries bonds, light-like causal determinants, etc. This property of hbar is consistent with the fact that the vacuum functional, expressible as an exponent of Kähler action, does not depend on hbar at all.

If this vision is correct, and the evidence for its correctness is growing steadily, the conclusion is that the struggle with the infinities of quantum field theories, which started in the days of Dirac and for which M-theory represented the catastrophic grand finale, has been due solely to bad mathematics. If the pragmatic colleagues had believed von Neumann, the landscape of theoretical physics might look quite different now.

Matti Pitkänen

      Saturday, April 02, 2005

      ORMEs, cold fusion, sonofusion, sono-luminescence and quantum coherent dark matter

Parodies of string models are nowadays difficult to distinguish from real articles. I glue here an abstract of a parody of a standard string model paper sent by Nobelist Sheldon Glashow to hep-th on April Fools' Day. With minor modifications this would look like any of those seminal works about the landscape flowing into hep-th nowadays. Unfortunately, the paper itself has been removed by the administrators.
Particle masses from a Calabi-Yau
Authors: S.L. Glashow
Comments: 89 pages (sorry)
Report-no: BU-4/2005

We identify the unique Calabi-Yau background that leads to a realistic heterotic Standard Model at low energies. A dual G2 compactification of M-theory and a Calabi-Yau four-fold compactification of F-theory is constructed. A self-mirror discrete symmetry protects the stability of the proton, stabilizes all moduli, and implies realistic neutrino mass matrices, in agreement with our recent results. In order to avoid abstract rubbish, we calculate the masses of elementary quarks and leptons. After including first 7 loops, worldsheet instantons and D3-brane instantons, the muon/electron mass ratio turns out to be approximately 206.76826535236246, for example. The low-energy electroweak theory by Glashow et al. with three fundamental Higgs bosons is a trivial consequence of the model. The lightest supersymmetric particle is predicted to be a neutralino at 200.504 GeV. There have been 63.8 e-foldings of inflation. We speculate that the backgrounds without the Glashow self-mirror discrete symmetry are exponentially suppressed in the Hartle-Hawking state, and our vacuum is therefore a unique SUSY-breaking four-dimensional cosmology that arises from string theory. Finally, we argue that our discoveries make string theory safe. Permanently safe. We also write down the most general bootstrap conditions for string/M-theory, and show that the two-dimensional worldsheet conformal field theories and the AdS/CFT models represent two large classes of solutions of our conditions.
Sheldon Glashow was one of the few who two decades ago did his best to warn about what might happen when all intellectual resources are forced to work on a single idea with no empirical support. Now the worst has happened. Just for fun, and knowing that I had nothing to lose, I too stated twenty years ago, in my application for a research post at the Institute of Theoretical Physics of Helsinki University, what the failures of string theory are, and I was right, and for obvious reasons. I am now waiting for my colleagues to come tap me on the shoulder, praise my far-sightedness, and apologize ;-)! Back to business. I continued to work with the claimed properties of ORMEs, and irrespective of whether ORMEs are a fraud or not, I now have a beautiful model for what happens in the phase transition increasing the value of hbar and leading to the formation of conformally confined blocks of protons and electrons. Conservation laws and the Uncertainty Principle give decisive constraints on the model for this phase transition. The crucial new observation is that the phase transition to dark matter can occur at the level of protons and electrons and would thus increase their Compton lengths by a factor hbar_s/hbar.

      Input from conservation laws

The conservation of angular momentum and energy-momentum poses strong conditions on what can happen when the Planck constant increases.

1. Angular momentum quantization

hbar_s becomes a new unit of angular momentum. Unless the system possesses a vanishing angular momentum, something analogous to spontaneous magnetization must happen, with the magnetized region becoming the basic unit. If Hudson's claim about high rotational states of nuclei is true, this mechanism might explain them. Of course, the large unit of angular momentum is a very precise experimental signature.

2. Scaling of Compton length and time

Energy and momentum conservation and the Uncertainty Principle imply that the Compton length and Compton time must increase by a factor hbar_s/hbar. In particular, the space-time sheets defining the Compton lengths of particles increase in size by this factor. In the case of an atomic nucleus the generalization of the basic formula would give

r = hbar_s/hbar = Z^2*alpha/v_0, v_0 = 4.6*10^{-4},

where v_0 is proportional to the ratio of the Planck length and the CP_2 length in the TGD Universe. For Z=46 (Pd) the formula gives r of about 3.36*10^4, so that the proton Compton length would become roughly 10^{-11} meters (a short numerical check is given at the end of this section). This size scale is smaller than the atomic size of order 10^{-10} meters, which would explain why the phase transition does not occur under the usual circumstances. For Z=79 (Gold) one has r of about 9.9*10^{4}, so that (rather remarkably!) the Compton length reaches atomic size. In the phase transition protonic space-time sheets would fuse together to form a larger join along boundaries condensate. There is a direct analogy with the transition to super-conductivity or super-fluidity, believed to involve an increase of the non-relativistic wavelength lambda = hbar/p defined by the three-momentum p, so that the volumes defined by this wavelength overlap for neighboring particles. This could in turn increase the net charge of the conformally confined sub-systems and hence also the parameter hbar_s, so that the Compton length would increase further. A kind of cascade-like process could occur. Nuclear protons would de-localize, whereas neutrons would remain localized in a nuclear volume. One could speak of conduction particles or even super-conducting particles, where particle could refer to a proton, a nucleus, or a block of nuclei. If the p-adic prime characterizing the resulting Compton space-time sheet is larger than that of the space-time sheets (flux tubes?) carrying Earth's gravitational field, only the gravitational mass of the neutrons would respond to it.

3. Objection

There is an objection against the proposed picture. Nuclei possess large em and Z^0 charges. If the naive criterion Z^2*alpha > 1 is correct, the transition should occur already for Z > Z_0 = 11, which corresponds to Mg. Why doesn't the phase transition occur for ordinary nuclei and delocalize their em charges to a volume of size of order r*L, where L is the ordinary nuclear size and r = hbar_s/hbar?

a) The manner to circumvent this paradox might be simple. The precise criterion for the occurrence of the phase transition is that the perturbation series in powers of Z^2*alpha fails to converge; Z^2*alpha = 1 is just a naive guess for when this occurs.

b) Also the assumption that the nucleons always form a single join along boundaries condensate inside the nucleus is too naive. Rather, several clusters of protons and neutrons are expected to be present, as the TGD inspired model for the nucleus indeed assumes.
In fact, the non-occurrence of this transition for ordinary nuclei would give an upper bound on the size of this kind of cluster in terms of Z and A-Z. Even the strange sounding claim of Hudson about high spin states of nuclei might make sense. The formation of high rotational states means that the nucleus behaves as a single coherent whole. Therefore join along boundaries condensates of the size of the entire nucleus are more probable for high spin states. A conceivable possibility is that also in the RHIC experiments with colliding Gold nuclei this kind of phase transition occurs, so that there might after all be a connection with the esoteric claims of Hudson.
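As a quick sanity check of the numbers quoted in this section, here is a small Python snippet. It is not part of the original argument; the fine structure constant, v_0 and the proton Compton length are the standard values assumed above.

```python
ALPHA = 1 / 137.036        # fine structure constant
V0 = 4.6e-4                # v_0 quoted in the text
PROTON_COMPTON = 1.32e-15  # proton Compton length h/(m_p c) in meters

def hbar_ratio(Z):
    """r = hbar_s/hbar = Z^2 * alpha / v_0 for a nucleus of charge Z."""
    return Z**2 * ALPHA / V0

for name, Z in [("Pd", 46), ("Au", 79)]:
    r = hbar_ratio(Z)
    print(f"{name}: r = {r:.3g}, scaled proton Compton length = {r * PROTON_COMPTON:.2g} m")

# Expected output (approximately):
#   Pd: r = 3.36e+04, scaled proton Compton length = 4.4e-11 m
#   Au: r = 9.9e+04,  scaled proton Compton length = 1.3e-10 m
# i.e. roughly 10^-11 m for Pd (below atomic size) and atomic size ~10^-10 m
# for Au, in agreement with the values quoted in the text.
```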

      Connection with cold fusion

The basic prediction of TGD is a hierarchy of fractally scaled variants of non-asymptotically free QCD like theories, and that color dynamics is fundamental even for our sensory qualia (visual colors identified as increments of color quantum numbers in the quantum jump). The model for ORMEs suggests that exotic protons obey a QCD like theory in the size scale of the atom. If this identification is correct, QCD like dynamics might some day be studied experimentally in atomic or even macroscopic length scales of the order of cell size, and there would be no need for ultra-expensive accelerators! The fact that Palladium is one of the "mono-atomic" elements used also in cold fusion experiments as a target material obviously sets bells ringing.

1. What makes cold fusion possible?

I have proposed that cold fusion might be based on a Trojan horse mechanism in which the incoming and target nuclei feed their em gauge fluxes to different space-time sheets, so that the electromagnetic Coulomb wall disappears. If part of the Palladium nuclei are "partially dark", this is achieved. A similar mechanism might be required also in the case of the classical Z^0 force, and one cannot exclude the possibility that also blobs of neutrons can spend some time in dark matter states. Note however that the claim of Hudson about a 4/9 reduction of weight does not support this directly. Another mechanism could be the de-localization of protons to a volume larger than the nuclear volume, induced by the increase of hbar. This means that the reaction environment differs dramatically from that appearing in the usual nuclear reactions, and the standard objections against cold fusion would not apply anymore.

2. Objections against cold fusion

The following arguments are from an excellent review article by Storms.

a) The Coulomb wall requires the application of higher energy. Now the electromagnetic Coulomb wall disappears. In the TGD framework the classical Z^0 force defines a second candidate for a Coulomb wall, but according to the model for neutrino screening discussed earlier the screening is highly local and could overcome the problem. Of course, one must re-evaluate the earlier models in light of the possibility that also neutrons might be delocalized in some length scale.

b) If a nuclear reaction should occur, the immediate release of energy can not be communicated to the lattice in the time available. In the recent case the time scale is however multiplied by the factor r = hbar_s/hbar and the situation obviously changes.

c) When such an energy is released under normal conditions, energetic particles are emitted along with various kinds of radiation, only a few of which are seen by the various CANR (Chemically Assisted Nuclear Reactions) studies. In addition, gamma emission must accompany helium production, and the production of neutrons and tritium, in equal amounts, must result from any fusion reaction. None of these conditions is observed during the claimed CANR effect, no matter how carefully or how often they have been sought. The large value of hbar, implying a small value of the fine structure constant, would explain the small gamma emission rate. If only protons form the quantum coherent state, then the fusion reactions do not involve neutrons, and this could explain the anomalously low production of neutrons and tritium.

d) The claimed nuclear transmutation reactions (reported to occur also in living matter) are very difficult to understand in the standard nuclear physics framework.
The model allows them, since protons of different nuclei can re-arrange in many different manners when the dark matter state decays back to normal.

e) Many attempts to calculate fusion rates based on conventional models fail to support the claimed rates within PdD (Palladium-Deuterium): the atoms are simply too far apart. This objection also fails, for obvious reasons.

3. Mechanism of cold fusion

One can deduce a more detailed model for cold fusion from the observations, which are discussed systematically in the article of Storms and in the references therein.

a) A critical phenomenon is in question. The average D/Pd ratio must be in the interval (.85, .90). The current must be over-critical and must flow for a time longer than a critical time. The effect occurs in a small fraction of samples. D at the surface of the cathode is found to be important, and the activity tends to concentrate in patches. The generation of fractures leads to the loss of the anomalous energy production; even the shaking of the sample can have the same effect. The addition of even a small amount of H_2O to the electrolyte (protons to the cathode) stops the anomalous energy production. All these findings support the view that the patches correspond to a macroscopic quantum phase involving delocalized nuclear protons. The added ordinary protons and the fractures could serve as seeds for a phase transition leading back to the ordinary phase.

b) When D_2O is used as the electrolyte, the process occurs when PdD acts as a cathode but does not seem to occur when it is used as an anode. This suggests that the basic reaction is between the ordinary deuterium D = pn of the electrolyte and the exotic nuclei of the cathode. Denote by p_ex the exotic proton and by D_ex = np_ex the exotic deuterium at the cathode. For ordinary nuclei, fusions to tritium and ^3He occur with approximately identical rates: the first reaction produces a neutron and ^3He via D+D --> n+^3He, whereas the second produces a proton and tritium via D+D --> p+^3H (the standard branches and their Q-values are recalled at the end of this subsection). The prediction is that one neutron should be produced for each tritium nucleus. Tritium can be observed through its beta decay to ^3He, and the measured neutron flux is several orders of magnitude smaller than the tritium flux, as found for instance by Tadahiko Misuno and his collaborators. Hence the reaction producing ^3He cannot occur at a significant rate in cold fusion, which is in conflict with the basic predictions of standard nuclear physics. The explanation is that the proton of the target deuterium D_ex is in the exotic state with a large Compton length, and the production of ^3He occurs very slowly since p_ex and p correspond to different space-time sheets. Since the neutrons and the proton of the D from the electrolyte are in the ordinary state, the Coulomb barrier is absent and tritium production can occur. The mechanism also explains why the cold fusion producing ^3He and neutrons does not occur when water is used instead of heavy water.

c) Also more complex reactions between D and Pd nuclei, for which the protons are in the exotic state, can occur. These can lead to reactions transforming the nuclear charge of Pd and thus to nuclear transmutations. Also ^4He, which has been observed, can be produced in reactions such as D+D_ex --> ^4He. The Z^0 Coulomb wall is not present in the standard model. In TGD the situation is the same if the model for neutrino screening is correct or if the delocalization occurs also for neutrons.
The reported occurrence of nuclear transmutations such as ^{23}Na+^{16}O --> ^{39}K in living matter (Kervran), allowing growing cells to regenerate the elements K, Mg, Ca, or Fe, could be understood in this model too.

d) Gamma rays, which should be produced in most nuclear reactions such as ^4He production to guarantee momentum conservation, are not observed. The explanation is that the recoil momentum goes to the macroscopic quantum phase and eventually heats the electrolyte system. This obviously provides the mechanism, difficult to imagine in the standard nuclear physics framework, by which the liberated nuclear energy is transferred to the electrolyte.

e) The proposed reaction mechanism explains why neutrons are not produced in amounts consistent with the anomalous energy production. The addition of water to the electrolyte however induces neutron bursts. A possible mechanism is the production of neutrons in the phase transition p_ex --> p: the decay D_ex --> p+n could occur as the proton contracts back to its ordinary size in such a manner that it misses the neutron. This however requires an energy of 2.23 MeV if the rest masses of D_ex and D are the same. Also D_ex+D_ex --> n+^3He could be induced by the phase transition to ordinary matter, when a p_ex transformed to p does not combine with its previous neutron partner to form D but recombines with a D_ex to form ^3He_ex --> ^3He, so that a free neutron is left.
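For reference, here are the standard nuclear physics numbers invoked above in items b) and e) and in the Kervran example; these are textbook values added only for convenience, not part of the original posting.

```latex
% The two conventional D+D fusion branches, occurring with comparable rates:
D + D \;\to\; n + {}^{3}\mathrm{He} + 3.27\ \mathrm{MeV}, \qquad
D + D \;\to\; p + {}^{3}\mathrm{H} + 4.03\ \mathrm{MeV}

% Binding energy of the deuteron (the ~2.23 MeV needed for D_ex -> p + n
% if the rest masses of D_ex and D are the same):
E_B(D) \approx 2.22\ \mathrm{MeV}

% Charge and mass number bookkeeping for the Kervran-type transmutation:
{}^{23}_{11}\mathrm{Na} + {}^{16}_{\;8}\mathrm{O} \;\to\; {}^{39}_{19}\mathrm{K},
\qquad 11 + 8 = 19, \quad 23 + 16 = 39
```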

      Connection with sonoluminescence and sonofusion

Sono-luminescence is a poorly understood phenomenon in which the compression of bubbles in a liquid leads to a very intense emission of photons and to the generation of temperatures so high that even nuclear fusion might become possible. I have discussed sono-luminescence from the point of view of the p-adic length scale hypothesis here. Sono-fusion is a second, closely related and poorly understood phenomenon.

In bubble compression the density of matter inside the bubble might become so high that the Compton lengths associated with possibly existing conformally confined phases inside nuclei could start to overlap, so that a delocalized phase of protons and/or neutrons could form and the em and Z^0 Coulomb walls could disappear. Nuclear fusion would occur, and the energy produced would explain the achieved high temperatures and the emission of photons. Thus the causal relation would be reversed from what it is usually believed to be. The same anomalies are predicted as in the case of cold fusion. Bubble compression brings to mind the "mini crunch" which occurs also in RHIC experiments, and p-adic fractality suggests that the analogy might be rather precise in that a magnetic flux tube structure carrying a Bose-Einstein condensate of possibly conformally confined protons, electrons and photons might form. The intense radiation of photons might be an analog of the thermal radiation from an evaporating black hole. The relevant p-adic scale is probably not smaller than 100 nm, and this would give a Hagedorn temperature around T_H ~ 10 eV for the ordinary Planck constant, much smaller than the fusion temperature. For hbar_s the Hagedorn temperature would be scaled up to r*T_H, r = hbar_s/hbar. For r = 10^5 this gives a temperature of order 1 MeV, roughly 10^{10} K, so that temperatures allowing nuclear fusion would be achieved.

Needless to say, the quantitative understanding of what happens in the formation of dark matter would have far reaching technological consequences, and it is a pity that learned colleagues still refuse to touch anything I have written. These superstring revolutions followed by a big crunch could have been avoided if TGD had been taken seriously already 20 years ago. A more detailed summary of the ideas described above can be found here. Matti Pitkanen