https://matpitka.blogspot.com/2012/01/

Thursday, January 26, 2012

Quantum p-adic deformations of space-time surfaces as a representation of finite measurement resolution?

A mathematically - and also physically - fascinating question is whether one could use quantum arithmetics as a tool to build quantum deformations of partonic 2-surfaces or even of space-time surfaces, and how one could achieve this. These quantum space-times would be commutative and therefore not like the non-commutative geometries assigned with quantum groups. Perhaps one could see them as commutative semiclassical counterparts of non-commutative quantum geometries, just as the commutative quantum groups (see this) could be seen as commutative counterparts of quantum groups.

As one tries to develop a new mathematical notion and interpret it, one tends to forget the motivations for the notion. It is however extremely important to remember why the new notion is needed.

  1. In the case of quantum arithmetics the Shnoll effect is one excellent experimental motivation. The understanding of canonical identification and the realization of number theoretical universality are also good motivations coming already from p-adic mass calculations. A further motivation comes from the need to solve a mathematical problem: canonical identification for ordinary p-adic numbers does not commute with symmetries.

  2. There are also good motivations for p-adic numbers. p-Adic numbers and quantum phases can be assigned to finite measurement resolution in length measurement and in angle measurement. This is with good reason, since finite measurement resolution means the loss of the ordering of the points of the real axis in short scales, and this is certainly one outcome of a finite measurement resolution. This is also assumed to relate to the fact that cognition organizes the world into objects defined by lumps of matter, and within a lump the ordering of points does not matter.

  3. Why would quantum deformations of partonic 2-surfaces (or more ambitiously: space-time surfaces) be needed? Could they serve as convenient representatives for partonic 2-surfaces (space-time surfaces) within finite measurement resolution?

    1. If this is accepted, there is no compelling need to assume that this kind of space-time surfaces are preferred extremals of Kähler action.

    2. The notion of quantum arithmetics and the interpretation of p-adic topology in terms of finite measurement resolution however suggest that they might obey field equations in preferred coordinates, not in the real differentiable structure but in what might be called the quantum p-adic differentiable structure associated with the prime p.

    3. Canonical identification would map these quantum p-adic partonic 2-surfaces (space-time surfaces) to their real counterparts in a unique and continuous manner, and the image would be the real space-time surface in finite measurement resolution (a small numerical sketch of the map is given after this list). It would be continuous but not differentiable and would of course not satisfy the field equations for Kähler action anymore. What is nice is that the inverse of the canonical identification, which is two-valued for a finite number of pinary digits, would not be needed in the correspondence.

    4. This description might be relevant also to quantum field theories (QFTs). One usually assumes that minima obey partial differential equations although the local interactions in QFTs are highly singular so that the quantum average field configuration might not even possess differentiable structure in the ordinary sense! Therefore quantum p-adicity might be more appropriate for the minima of effective action.

    The conclusion would be that commutative quantum deformations of space-time surfaces indeed have a useful function in TGD Universe.
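The canonical identification referred to above is easy to make completely concrete. The following is a minimal Python sketch assuming the simplest form of the map, I(sum_k a_k p^k) = sum_k a_k p^(-k), extended to rationals by I(m/n) = I(m)/I(n); the quantum deformation of the pinary digits is deliberately left out, and all names are only illustrative.

```python
def pinary_digits(m, p):
    """Digits a_k of the expansion m = sum_k a_k p^k with 0 <= a_k < p."""
    digits = []
    while m > 0:
        m, a = divmod(m, p)
        digits.append(a)
    return digits

def canonical_identification(m, p):
    """Map m = sum_k a_k p^k to its real image sum_k a_k p^(-k)."""
    return sum(a * p**(-k) for k, a in enumerate(pinary_digits(m, p)))

def canonical_identification_rational(m, n, p):
    """Extension to a rational r = m/n via I(m/n) = I(m)/I(n)."""
    return canonical_identification(m, p) / canonical_identification(n, p)

# p = 3: the map is continuous in the 3-adic sense but does not preserve the
# ordering of the real axis, which is the formal counterpart of the finite
# measurement resolution discussed in the text.
for m in (7, 8, 9, 10):
    print(m, canonical_identification(m, 3))
```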

Consider now in more detail the identification of the quantum deformations of space-time surfaces.

  1. Rationals are in the intersection of the real and p-adic number fields, and the representation of numbers as rationals r=m/n is the essence of quantum arithmetics. This means that m and n are expanded as series in powers of p, and the coefficients of the powers of p, which are smaller than p, are replaced by their quantum counterparts. This restriction is essential for the uniqueness of the map assigning quantum rationals to a given rational.

  2. One must also get quantum p-adics, and the idea is simple: if the pinary expansions of m and n in positive powers of p are allowed to become infinite, one obtains a continuum very much analogous to that of ordinary p-adic integers with exactly the same arithmetics. This continuum can be mapped to the reals by canonical identification. The possibility to work with numbers which are formally rationals is of utmost importance for achieving the correct map to the reals. It is possible to use the counterparts of ordinary pinary expansions in p-adic arithmetics.

  3. One can define quantum p-adic derivatives, and the rules are the familiar ones. Quantum p-adic variants of the field equations for Kähler action make sense.

    1. One can take a solution of the p-adic field equations and, by the commutativity of the map r=m/n → r_q=m_q/n_q with the arithmetic operations, replace the p-adic rationals with their quantum counterparts in the expressions of the quantum p-adic imbedding space coordinates h^k in terms of the space-time coordinates x^α.

    2. After this one can map the quantum p-adic surface to a continuous real surface by using the replacement p→ 1/p for every quantum rational. This space-time surface does not anymore satisfy the field equations since canonical identification is not even differentiable. This surface - or rather its quantum p-adic pre-image - would represent a space-time surface within measurement resolution. One can however map the induced metric and induced gauge fields to their real counterparts using canonical identification to get something which is continuous but non-differentiable.

  4. This construction works nicely if, in the preferred coordinates for the imbedding space and for the partonic (space-time) surface itself, the imbedding space coordinates are rational functions of the space-time coordinates with rational coefficients for the polynomials involved (also Taylor and Laurent series with rational coefficients could be considered as limits). This kind of assumption is very restrictive but in accordance with the fact that the measurement resolution is finite and that the representative for the space-time surface in finite measurement resolution is to some extent a convention. The use of rational coefficients for the polynomials implies that for polynomials of finite degree WCW reduces to a discrete set so that finite measurement resolution has indeed been realized quite concretely! (A toy sketch of this construction follows.)
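As a toy illustration of the last two items, the sketch below takes a single imbedding space coordinate h(x) given as a polynomial with rational coefficients, evaluates it exactly at rational points, and maps the values to their real counterparts with the canonical identification of the previous sketch (repeated here so that the snippet runs on its own). The quantum deformation of the digits is again omitted, so this only shows where the replacement p → 1/p acts in the construction; the coordinate and the coefficients are invented for illustration.

```python
from fractions import Fraction

def pinary_digits(m, p):
    digits = []
    while m > 0:
        m, a = divmod(m, p)
        digits.append(a)
    return digits

def I(r, p):
    """Canonical identification of a positive rational r, I(m/n) = I(m)/I(n)."""
    cid = lambda m: sum(a * p**(-k) for k, a in enumerate(pinary_digits(m, p)))
    return cid(r.numerator) / cid(r.denominator)

# A toy 'imbedding space coordinate' as a polynomial with rational coefficients:
# h(x) = 1/2 + (3/5) x + (2/7) x^2
coeffs = [Fraction(1, 2), Fraction(3, 5), Fraction(2, 7)]

def h(x):
    return sum(c * x**k for k, c in enumerate(coeffs))

p = 5
for x in (Fraction(1, 3), Fraction(2, 3), Fraction(1)):
    value = h(x)                    # exact rational value of the coordinate
    print(x, value, I(value, p))    # its real image within the finite measurement resolution
```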

Consider now how the notion of finite measurement resolution allows to circumvent the objections against the construction.

  1. Manifest general coordinate invariance (GCI) is lost because the expression for the space-time coordinates in terms of quantum rationals is not a general coordinate invariant notion unless one restricts the consideration to rational maps, and because the real counterpart of the quantum p-adic space-time surface depends on the choice of coordinates. The condition that the space-time surface is represented in terms of rational functions is a strong constraint but not enough to fix the choice of coordinates. Rational maps of both the imbedding space and the space-time surface produce new coordinates of the same kind provided the coefficients are rational.

  2. Different coordinate choices for the imbedding space and the space-time surface lead to different quantum p-adic space-time surfaces and real counterparts. This is an outcome of finite measurement resolution. Since one cannot order the space-time points below the measurement resolution, one cannot fix the space-time surface uniquely, nor uniquely fix the coordinates used. This implies the loss of manifest general coordinate invariance and also the non-uniqueness of the quantum real space-time surface. The choice of coordinates is analogous to a gauge choice, and the quantum real space-time surface preserves the information about the gauge.

For background see chapter Quantum Arithmetics of "Physics as Generalized Number Theory".

The anatomy of quantum jump in zero energy ontology

The understanding of the anatomy of quantum jump, identified as a moment of consciousness in the framework of zero energy ontology (ZEO), is gradually getting more detailed, and the following is a summary of the recent understanding. The general vision about quantum jump in zero energy ontology generalizes ordinary quantum measurement theory, bringing in also the selection of a maximal set of mutually commuting observables. Also the connection with the breaking of time reversal invariance at the level of zero energy states, as a necessary condition for the non-triviality of the U-matrix, is new.

  1. Quantum jump begins with unitary process U described by unitary matrix assigning to a given zero energy state a quantum superposition of zero energy states. This would represent the creative aspect of quantum jump - generation of superposition of alternatives.

  2. The next step is a cascade of state function reductions proceeding from long to short scales. It starts from some causal diamond (CD) and proceeds downwards to sub-CDs, to their sub-CDs, and so on. At a given step it induces a measurement of the quantum numbers of either the positive or the negative energy part of the quantum state. This step would represent the measurement aspect of quantum jump - selection among alternatives.

  3. The basic variational principle is the Negentropy Maximization Principle (NMP), stating that the reduction of entanglement entropy between two subsystems of a CD assigned to sub-CDs is maximal in a given quantum jump. Mathematically NMP is very similar to the second law although it states just the opposite, but for an individual quantum system rather than an ensemble. NMP actually implies the second law at the level of ensembles as a trivial consequence of the fact that the outcome of quantum jump is not deterministic.

    For the ordinary definition of entanglement entropy this leads to a pure state resulting from the measurement of the density matrix assignable to the pair of CDs. For hyper-finite factors of type II_1 (HFFs) state function reduction cannot give rise to a pure state, and in this case one can speak about quantum states defined modulo finite measurement resolution, and the notion of quantum spinor emerges naturally. One can assign a number theoretic entanglement entropy to entanglement characterized by rational (or even algebraic) entanglement probabilities, and this entropy can be negative. Negentropic entanglement can be stable, and even more negentropic entanglement can be generated in the state function reduction cascade.

The irreversibility is realized as a property of zero energy states (in ordinary positive energy ontology it is realized at the level of dynamics) and is necessary in order to obtain a non-trivial U-matrix. State function reduction should involve several parts. First of all it should select the density matrix, or rather its Hermitian square root. After this choice it should lead to a state which is prepared either at the upper or the lower boundary of the CD but not at both, since this would be in conflict with the counterpart of the determinism of quantum time evolution.

Generalization of S-matrix

ZEO forces the generalization of the S-matrix to a triplet formed by U-matrix, M-matrix, and S-matrix. The basic vision is that quantum theory is at the mathematical level a complex square root of thermodynamics. What happens in quantum jump was already discussed.

  1. The U-matrix has as its rows the M-matrices, which are matrices between the positive and negative energy parts of the zero energy state and correspond to the ordinary S-matrix. The M-matrix is a product of a Hermitian square root - call it H - of the density matrix ρ and a universal S-matrix S commuting with H: [S,H]=0. There is an infinite number of different Hermitian square roots H_i of density matrices, which are assumed to be orthogonal with respect to the inner product defined by the trace: Tr(H_iH_j)=0 for i≠j. Also the columns of the U-matrix are orthogonal. One can interpret the square roots of the density matrices as a Lie algebra acting as symmetries of the S-matrix (a numerical toy sketch of these conditions follows this list).

  2. One can consider a generalization of M-matrices so that they would be analogous to the elements of a Kac-Moody algebra. These M-matrices would involve all powers of S.

    1. The orthogonality with respect to the inner product defined by <A|B> = Tr(AB) requires the conditions Tr(H_1H_2S^n)=0 for n≠0, where the H_i are Hermitian matrices appearing as square roots of density matrices. H_1H_2 is Hermitian if the commutator [H_1,H_2] vanishes. It would be natural to assign the n:th power of S to the CD for which the scale is n times the CP2 scale.

    2. The trace - possibly a quantum trace for hyper-finite factors of type II_1 - is the analog of integration, and the formula would be a non-commutative analog of the identity ∮_{S^1} exp(inφ) dφ = 0 and pose an additional condition on the algebra of M-matrices. Since H=H_1H_2 commutes with the S-matrix, the trace can be expressed as the sum

      ∑_{i,j} h_i s_j(i) = ∑_{i,j} h_i(j) s_j

      of products of the corresponding eigenvalues, and the simplest condition is that one has either ∑_j s_j(i)=0 for each i or ∑_i h_i(j)=0 for each j.

    3. It might be that one must restrict the M-matrices to a Cartan algebra for a given U-matrix, and also this choice would be a process analogous to state function reduction. Since the density matrix becomes an observable in the TGD Universe, this choice could be seen as a direct counterpart for the choice of a maximal number of commuting observables, which would now be Hermitian square roots of density matrices. Therefore ZEO gives good hopes of reducing basic quantum measurement theory to an infinite-dimensional Lie algebra.
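As a purely finite-dimensional toy illustration of the conditions above, the following numpy sketch builds a unitary S and Hermitian matrices H_i commuting with S by diagonalizing everything in a common eigenbasis, normalizes the H_i to be trace-orthonormal, and checks [S,H_i]=0, Tr(H_iH_j)=δ_ij and M†M=ρ for M=H S. Nothing here pretends to capture the hyper-finite factor setting; it only makes the algebraic conditions concrete.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# Common eigenbasis: a random unitary from a QR decomposition.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# S: unitary and diagonal in this basis (random phases).
S = U @ np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n))) @ U.conj().T

# H_i: Hermitian "square roots of density matrices", diagonal in the same basis,
# with eigenvalue vectors made orthonormal so that Tr(H_i H_j) = delta_ij.
rows = np.linalg.qr(rng.normal(size=(n, 3)))[0].T   # 3 orthonormal rows in R^n
H = [U @ np.diag(r) @ U.conj().T for r in rows]

M = [Hi @ S for Hi in H]                            # M-matrices M_i = H_i S

print(np.allclose(S @ H[0] - H[0] @ S, 0))          # [S, H_i] = 0
print(np.round([[np.trace(Hi @ Hj).real for Hj in H] for Hi in H], 6))
print(np.allclose(M[0].conj().T @ M[0], H[0] @ H[0]))   # M^dagger M = rho = H^2
```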

Unitary process and choice of the density matrix

Consider first unitary process followed by the choice of the density matrix.

  1. There are two natural state bases for zero energy states. The states of these bases are prepared at the upper or the lower boundary of the CD respectively and correspond to the various M-matrices M^+_K and M^-_L. The U-process is simply a change of state basis, meaning a representation of the zero energy state M^{+/-}_K in the zero energy basis M^{-/+}_K, followed by a state preparation to a zero energy state M^{+/-}_K with the state at the second end fixed, in turn followed by a reduction to M^{-/+}_L, that is to its time reverse, which is of the same type as the initial zero energy state.

    The state function reduction to a given M-matrix M^{+/-}_K produces a state which is a superposition of states prepared at either the lower or the upper boundary of the CD. It does not yet produce a prepared state in the ordinary sense since it only selects the density matrix.

  2. The matrix elements of U-matrix are obtained by acting with the representation of identity matrix in the space of zero energy states as

    I = ∑_K |K+> <K+|

    on the zero energy state | K-> (the action on | K+> is trivial!) and gives

    U^+_{KL} = Tr(M^+_K M^+_L) .

    In the similar manner one has

    U^-_{KL} = (U^{+†})_{KL} = Tr(M^-_L M^-_K) = (U^+_{LK})* .

    These matrices are Hermitian conjugates of each other as matrices between states labelled by positive or negative energy states. The interpretation is that two unitary processes are possible, and they are time reversals of each other. The unitary process produces a new state only if its time arrow is different from that of the initial state. The probabilities for the transitions |K+> → |L-> are given by

    p_{KL} = |Tr(M^+_K M^+_L)|^2 .

State function preparation

Consider next the counterpart of the ordinary state preparation process.

  1. The ordinary state function reduction process can act either at the upper or the lower boundary of the CD, and its action is thus on the positive or negative energy part of the zero energy state. At the lower boundary of the CD this process selects one particular prepared state. At the upper boundary it selects one particular final state of the scattering process.

  2. Restrict for definiteness the consideration to the lower boundary of the CD. Denote also M_K by M. At the lower boundary of the CD the selection of the prepared state - that is, the preparation process - means the reduction

    ∑_{m+,n-} M^{+/-}_{m+ n-} |m+> |n->  →  ∑_{n-} M^{+/-}_{m+ n-} |m+> |n-> .

    The reduction probability is given by

    p_m = ∑_{n-} |M_{m+ n-}|^2 = ρ_{m+ m+} .

    For this state the lower boundary carries a prepared state with the quantum numbers of the state |m+>. For a density matrix which is the unit matrix (this option giving a pure state might not be possible) one has p_m=1. (A toy numerical check of these formulas is given below.)
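The following toy check, with a randomly chosen M-matrix normalized so that Tr(MM†)=1, only verifies the elementary linear algebra behind the formulas: the preparation probabilities p_m = ∑_{n-} |M_{m+ n-}|^2 are the diagonal elements of ρ = MM† and sum to one. It contains nothing TGD-specific.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 5)) + 1j * rng.normal(size=(3, 5))       # toy M_{m+ n-}
M /= np.sqrt(np.trace(M @ M.conj().T).real)                      # normalize Tr(M M^dagger) = 1

rho = M @ M.conj().T                  # density matrix for the positive energy part
p_m = np.sum(np.abs(M)**2, axis=1)    # p_m = sum_{n-} |M_{m+ n-}|^2

print(np.allclose(p_m, np.diag(rho).real))   # p_m = rho_{m+ m+}
print(np.isclose(np.sum(p_m), 1.0))          # the probabilities sum to one
```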

State function reduction process

The process which is the analog of measuring the final state of the scattering process is also needed and would mean state function reduction at the upper end of CD - to state | n-> now.

  1. It is impossible to reduce to an arbitrary state |m+> |n->, and the reduction at the upper end of the CD must mean a loss of preparation at the lower end of the CD so that one would have a kind of time flip-flop!

  2. The reduction probability for the process

    |m+> ≡ ∑_{n-} M_{m+ n-} |m+> |n->  →  |n-> ≡ ∑_{m+} M_{m+ n-} |m+> |n->

    would be

    p_{mn} = |M_{mn}|^2 .

    This is just what one would expect. The final outcome would be therefore a state of type | n-> and - this is very important- of the same type as the state from which the process began so that the next process is also of type U+ and one can say that a definite arrow of time prevails.

  3. Both the preparation and the reduction process also involve a cascade of state function reductions leading to a choice of state basis corresponding to eigenstates of density matrices between subsystems.

Can the arrow of geometric time change?

A highly interesting question is what happens if the first state preparation leading to a state |K+> is followed by a U-process of type U- rather than by the state function reduction process |K+> → |L->. Does this mean that the arrow of geometric time changes? Could this change of the arrow of geometric time take place in living matter? Could processes like molecular self-assembly be entropy producing processes but with a non-standard arrow of geometric time? Or are they processes in which negentropy increases by the fusion of negentropic parts to larger ones? Could the variability relate to the sleep-wake cycle and to the fact that during dreams we are often in our childhood and youth? Old people are often said to return to their childhood. Could this have more than a metaphoric meaning? Could biological death mean a return to childhood at the level of conscious experience? I have explained the recent views about the arrow of time here.

For background see new chapter Construction of Quantum Theory: More About Matrices of "Towards M-matrix" .

Tuesday, January 24, 2012

How it went?

Mark McWilliams requested some kind of summary about the development of TGD, and I decided to write an article about the history of TGD. I could not avoid telling also about turning points of my personal life since my work and life are to a high extent one and the same thing.

I have tried to represent the development chronologically but I must confess that I have forgotten precise dates so that the chronology is not exact. Very probably I have also forgotten many important ideas and many side tracks which led nowhere. Indeed, the study of the tables of contents of books and old blog postings and What's New articles at the homepage forces me to wonder how I can forget something so totally.

The article should help a novice to get an overall view about the basic ideas of TGD and their evolution during these 34 years. To myself a real surprise was to see how many deep ideas have emerged after 2005: one can really speak about a burst of new ideas. Most of them relate to the evolution of the mathematical aspects of TGD and to their physical interpretation, but also the experimental input from LHC, Fermilab, and elsewhere has played a decisive role in stimulating ideas about the interpretation of the theory.

Unavoidably the emphasis is on the latest ideas and there is of course the risk that some of them are not here to stay. Even during writing process some ideas developed into more concrete form. A good example is the vision about what happens in quantum jump and what the unitarity of U-matrix really means, how M-matrices generalize to form Kac-Moody type algebra, and how the notion of quantum jump in zero energy ontology (ZEO) reproduces the basic aspects of quantum measurement theory. Also a slight generalization of quantum arithmetics suggested itself during the preparation of the article.

I gave to the article a title which is easy to guess: "Evolution of TGD". It can be found at my homepage which is now living at webhotel with address http://tgdtheory.com/.

Note: The links of old postings to my homepage do not work anymore. Apologies. To get a link to work one can replace "http://tgd.wippiespace.com/" with "http://tgdtheory.com/", and if this does not work, with "http://tgdtheory.com/public_html/".

Thursday, January 19, 2012

Does 2-adic quantum arithmetics explain p-adic length scale hypothesis?

For p=2 quantum arithmetics looks singular at first glance. This is actually not the case since odd quantum integers are equal to their ordinary counterparts in this case. This applies also to powers of two interpreted as 2-adic integers; their real counterparts under canonical identification are their inverses.

Clearly, odd 2-adic quantum rationals are very special mathematically since they correspond to ordinary rationals. It is fair to call them "classical" rationals. This special role might relate to the fact that primes near powers of 2 are physically preferred. CDs with n=2^k would be in a unique position number theoretically. This would conform with the original - and as such wrong - hypothesis that only these time scales are possible for CDs. The preferred role of powers of two also supports the p-adic length scale hypothesis.

The discussion of the role of quantum arithmetics in the construction of generalized Feynman diagrams allows one to understand how, for quantum arithmetics based on a particular prime p, a particle mass squared - equal to the conformal weight in suitable mass units - divisible by p appears as an effective propagator pole for large values of p. In p-adic mass calculations the real mass squared is obtained by canonical identification from the p-adic one. The construction of generalized Feynman diagrams allows one to understand this strange sounding rule as a direct implication of the number theoretical universality realized in terms of quantum arithmetics.

Wednesday, January 18, 2012

Witten about mass gap

Witten has a nice talk about the mass gap problem in 3-D (mostly) and 4-D gauge theories, demonstrating how enormous his understanding and knowledge of mathematical physics is. Both Peter Woit and Kea have commented on it.

In the 3-D case the coupling strength g^2 has the dimension of inverse length and therefore it would not be surprising if a mass gap emerged. Witten argues that by adding a Chern-Simons term to the theory, the theory could reduce at long length scales to a non-trivial topological QFT in the IR limit. This would also be a nice manner to resolve the IR difficulties of 3-D gauge theories. Could one imagine an effective reduction to topological QFT at long length scales also in the 4-D case as a solution to IR divergences?

In D=4 the situation is much more difficult since the gauge coupling is dimensionless. My un-educated opinion is that the proper question is whether the theory actually exists mathematically, and my equally un-educated guess is that it does not - unless one brings in the mass scale somehow by hand. The standard Muenchausen trick to bring the scales into perturbation theory is via UV and IR cutoffs. This is going outside what one means with gauge theory strictly mathematically. In order to make progress, one must bring in new physics and mathematics. A rigorous mathematical formulation of 4-D gauge theory is not enough: it simply does not exist since something very important is missing.

TGD view about the mass gap problem

TGD is one proposal for what this new physics and mathematics could be. I do not try to re-explain in any detail what this new physics and mathematics might be since I have done this explaining for 6 years in this blog. The basic statement is however that the fundamental UV length scale must be present explicitly in the definition of the theory and must have concrete geometric interpretation rather than being a dimensional number like string tension. In TGD framework it corresponds to the "radius" of CP2, which is fixed from simple symmetry arguments as the only possible choice. This scale is not an outcome of some conceptually highly questionable procedure like spontaneous compactification, which has paralyzed theoretical physics for more than two decades and led to the landscape problem and the proposal to bring anthropic principle to physics - something extremely uninviting for anyone who has spent few minutes by trying to understand what one can say about consciousness as a physicist and mathematician.

In my rebellious view the mathematics of standard gauge theories is not enough.

  1. Quite a far reaching generalization is needed besides the replacement of the recent view about space-time with the identification of space-times as 4-surfaces.

  2. The usual positive energy ontology having its roots in Newtonian mechanics based on absolute time (Hamiltonian approach especially) must be replaced with zero energy ontology which is natural in the relativistic context.

  3. A further generalization is number theoretical universality requiring that the physics in different number fields must be unified to single coherent whole.

  4. In some of the latest postings I have explained how number theoretical universality would be realized in terms of quantum arithmetics - something also missing from ordinary gauge theory but for whose existence there are indications (quantum groups, inclusions of hyper-finite factors, and the Shnoll effect on the experimental side). In particular, the size scales of CDs coming as powers of 2 correspond to p=2 quantum arithmetics, which is very special in the sense that for odd integers it is just the ordinary arithmetics. For other primes p the corresponding p-adic length scale is in a preferred position since the states with mass squared proportional to p appear as almost massless states giving an almost-pole to the propagator, which is given in terms of the M2 momentum. I dare to hope that these observations finally answer the question why p-adic primes near powers of two are favored physically.

Further comments about mass gap

I cannot avoid the temptation to present some further comments related to how the mass gap - or rather a hierarchy of mass gaps defined by p-adic mass scales, in turn expressible in terms of the p-adic prime and the fundamental mass scale defined by the CP2 mass - emerges in TGD.

  1. One of the surprises of zero energy ontology was that all braid strands carrying fermion numbers - including those associated with virtual particles - are massless on-mass-shell states with possibly negative sign of energy, so that a wormhole contact can have space-like virtual net momentum. This leads to extremely powerful restrictions on loop integrals and guarantees finiteness; with certain additional natural assumptions deriving from ZEO the number of contributing diagrams is finite (discussed in recent postings: see this and this), which also guarantees algebraic universality (a sum of an infinite number of rationals (rational functions) need not be a rational (rational function)!).

  2. Quite recently I have finally learned to accept that for generalized Feynman diagrams the presence of a preferred M2⊂M4, having an interpretation in terms of the quantization axes of energy and spin, is unavoidable (see for instance this). Of course, also propagators for on-mass-shell massless states are literally infinite unless one restricts the momentum in the propagator to its M2 projection. There is an integral over different choices of M2 so that Poincare invariance is not lost. Also the number theoretic vision forces M2, with an interpretation as a commutative subspace of complexified octonions. The last posting about Very Special Relativity and TGD gave one additional justification for M2.

  3. Witten talks about 3-D gauge theories with emphasis on the Chern-Simons term and the idea that at long length scales one obtains a non-trivial topological QFT with a non-trivial mass gap. In the TGD framework effective 2-dimensionality - or the strong form of holography - follows from the strong form of General Coordinate Invariance, and for preferred extremals of Kähler action the action reduces to Chern-Simons terms if the weak form of electric-magnetic duality holds true at the space-like 3-D surfaces at the ends of the space-time sheet and at wormhole throats.

    The special feature of light-like 3-surfaces and boundaries of CDs is that they allow an extension of 2-D conformal invariance by their metric 2-dimensionality: this actually raises 4-D space-time and 4-D Minkowski space to a completely unique position mathematically. An extremely simple and profound discovery, whose communication has turned out to be impossible - I think that even my cat is able to understand its significance -: what is wrong with these bright-minded colleagues in their academies;-)? An interesting question raised by Witten's talk is whether also TGD, as an almost topological QFT in some sense, reduces to a topological QFT at long length scales for a given p-adic length scale. Exponential decrease of correlation functions as a function of distance might imply this, but what happens on the light-like boundaries of CDs?

  4. There are also open questions. For instance, should one assign a different M2 to each sub-CD of the CD or to each propagator line connecting the 3-vertices? One can be even more general and also consider local choices of M2 defined by an integrable distribution of M2⊂M4 defining the analog of a string world sheet.

The special role of M2⊂M4 in relation to mass gap

The special role of M2⊂M4 in the construction of generalized Feynman diagrams deserves additional comments.

  1. What is remarkable is that the gauge conditions generalize in the sense that it is the M2 momentum that appears in the gauge conditions, so that also the third polarization for gauge bosons creeps into the spectrum, and even the photon, gluons, and gravitons would receive a small mass given in terms of the IR cutoff defined by the largest causal diamond in the hierarchy of causal diamonds defining the experimental range of length scales about which the experimentalist can gain information. This is of course extremely natural from the viewpoint of the experimentalist.

  2. Physical particles are bound states of massless states with parallel M2 momenta assigned with the wormhole throats of the same wormhole contact. Also this brings in the all-important IR cutoff - and thus mass gap - not present in gauge theories. The size of the smallest CD gives the UV cutoff. As already mentioned, there is no breaking of Poincare invariance.

  3. The highly non-trivial question is how the p-adic mass calculations can be consistent with the masslessness of braid strands. How can a wormhole throat satisfy the stringy mass formula if it is massless? One of the latest realizations is that it is not the full M4 mass squared but the longitudinal M2 mass squared which is quantized by the stringy mass formula! The modified Dirac equation indeed strongly suggests that M2 momenta have integer valued components. I could not however decide whether only hyper-complex primes should be accepted: it now seems that integers coming as multiples of a given hyper-complex integer, whose modulus squared is prime, must be allowed. Particles would get longitudinal mass squared by p-adic thermodynamics and this mass would be the observed mass. A mass gap again, but only in longitudinal degrees of freedom.

  4. There is also experimental support for the necessity of introducing M2. In QCD one characterizes partons with M2 momentum, and this again brings into gauge theory - as a purely mathematical construct - something which really is not there! The great experimental question is whether Higgs exists or not. In TGD the Higgs mechanism is replaced by a microscopic mechanism based on p-adic thermodynamics and the identification of the mass squared as longitudinal mass squared (in a Lorentz invariant manner since one averages over different M2:s). The natural prediction is that instead of Higgs there is an entire M89 hadron physics to be discovered. If Higgs really is there, as some bloggers have already revealed to us;-), a profound re-interpretation of TGD is necessary.

New physics in non-perturbative sector

Asymptotically free gauge theories can handle the UV divergences by using the renormalization group approach, bringing in a scale analogous to the QCD Lambda. Lambda defines the IR scale identified as the length scale associated with hadronization and confinement. One gets rid of the UV scale altogether (but not of the mathematically tedious and ugly procedures removing UV infinities). In perturbative gauge theories IR however remains a source of difficulties since one really does not know how to calculate anything: the proposed expression for the IR scale is a non-analytic function of the coupling constant strength (expressible in terms of exp(-8π^2ℏ/g^2), I hope I remember correctly). Also the twistor approach is plagued by IR divergences. These difficulties are of course the reason for arranging a conference about the mass gap problem! I do not however believe that the mass gap is a mathematical problem within the framework of 4-D gauge theories. One must go outside the system.
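The non-analyticity referred to here is easy to check numerically: exp(-8π^2/g^2) vanishes at g=0 faster than any power of g, so all of its Taylor coefficients at g=0 vanish and a scale of this form is invisible to perturbation theory. A tiny illustration (the prefactor and the precise one-loop coefficient are deliberately left out, since only the qualitative behaviour matters here):

```python
import math

for g in (1.0, 0.8, 0.6, 0.5, 0.4):
    f = math.exp(-8 * math.pi**2 / g**2)
    # the exponential is eventually far smaller than any fixed power of g
    print(f"g={g:.1f}  exp(-8*pi^2/g^2)={f:.3e}  g^20={g**20:.3e}")
```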

In the TGD framework magnetic flux tubes are the concrete classical space-time correlate for the non-perturbative aspects of quantum theory and appear in all applications from primordial cosmology to biology to elementary particle physics. They are not present in gauge theories. They are obtained as deformations of what I call cosmic strings, which are Cartesian products of string world sheets in M2 with 2-D complex sub-manifolds of CP2. In this case one cannot anymore speak about space-time as a small deformation of Minkowski space. The quantized size scale of the complex sub-manifolds brings in the scale via the string tension. Wormhole contacts themselves are magnetic monopoles and thus homologically non-trivial surfaces of CP2 with quantized area, so that again the fundamental mass scale creeps in. Note that the Kähler action for the magnetic flux tubes and also for deformations of CP2 vacuum extremals contains a power of exp(-8π^2ℏ/g^2) giving rise to non-analyticity in g. In gauge theories the classical action for instantons would give rise to this kind of factor.

Note: The address of my homepage has changed and the links to my homepage from the earlier postings will fail. The cure of the problem is the replacement of tgd.wippiespace or tgdq.wippiespace in the address with tgdtheory.

Tuesday, January 17, 2012

Very Special relativity and preferred role of M2 for generalized Feynman graphs

The preferred role of M2 in the construction of generalized Feynman diagrams could be used as a criticism: Poincare invariance is lost. The first answer to the criticism is that one integrates over the choices of M2 so that Poincare invariance is not lost. One can however defend this assumption also from a different viewpoint. Actually Glashow and Cohen did this in their Very Special Relativity proposal! While scanning old files, I found an old text about the Very Special Relativity of Glashow and Cohen, and realized that it relates very closely to the special role of M2 in the construction of generalized Feynman diagrams. There is an article Very Special Relativity and TGD at my homepage but for some reason the text has disappeared from the book that contained it. I add the article more or less as such here.

Configuration space ("world of classical worlds", WCW) decomposes into a union of sub-configuration spaces associated with future and past light-cones and these in turn decompose to sub-sub-configuration spaces characterized by selection of quantization axes of spin and color quantum numbers. At this level Poincare and even Lorentz group are reduced. The possibility that this kind of breaking might be directly relevant for physics is discussed below.

One might think that Poincare symmetry is something thoroughly understood, but the Very Special Relativity proposed by nobelist Sheldon Glashow and Andrew Cohen suggests that this belief might be wrong. Glashow and Cohen propose that instead of the Poincare group, call it P, some subgroup of P might be physically more relevant than the whole of P. In order not to lose four-momentum one must assume that this group is obtained as a semi-direct product of some subgroup of the Lorentz group with translations. The smallest subgroup, call it L2, is a 2-dimensional Abelian group generated by K_x+J_y and K_y-J_x. Here K refers to Lorentz boosts and J to rotations. This group leaves invariant a light-like momentum in the z direction. By adding J_z, acting in L2 like rotations in a plane, one obtains L3, the maximal subgroup leaving invariant a light-like momentum in the z direction. By adding also K_z one obtains the scalings of the light-like momentum or, equivalently, the isotropy group L4 of a light-like ray.
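These subgroups are easy to exhibit with explicit 4×4 Lorentz generators. The minimal numpy sketch below uses the coordinate order (t, x, y, z) and right-handed rotations; with this sign convention K_x+J_y and K_y-J_x annihilate the light-like vector along the negative z-axis (with the opposite rotation sign it would be the positive z-axis), J_z annihilates it trivially, and K_z merely rescales it, in accordance with the description of L2, L3 and L4 above.

```python
import numpy as np

# Lorentz generators acting on (t, x, y, z).
Kx = np.zeros((4, 4)); Kx[0, 1] = Kx[1, 0] = 1        # boost along x
Ky = np.zeros((4, 4)); Ky[0, 2] = Ky[2, 0] = 1        # boost along y
Kz = np.zeros((4, 4)); Kz[0, 3] = Kz[3, 0] = 1        # boost along z
Jx = np.zeros((4, 4)); Jx[2, 3], Jx[3, 2] = -1, 1     # rotation about x
Jy = np.zeros((4, 4)); Jy[3, 1], Jy[1, 3] = -1, 1     # rotation about y
Jz = np.zeros((4, 4)); Jz[1, 2], Jz[2, 1] = -1, 1     # rotation about z

T1, T2 = Kx + Jy, Ky - Jx             # generators of L2
n = np.array([1.0, 0.0, 0.0, -1.0])   # a light-like momentum along the z-axis

print(np.allclose(T1 @ n, 0), np.allclose(T2 @ n, 0))   # L2 leaves n invariant
print(np.allclose(Jz @ n, 0))                           # adding Jz gives L3
print(np.allclose(Kz @ n, -n))                          # Kz only rescales n: L4 preserves the ray
```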

The reasons why Glashow and Cohen regard these groups as so interesting are the following.

  1. All kinematical tests of Lorentz invariance are consistent with the reduction of Lorentz invariance to these symmetries.

  2. The representations of group L3 are one-dimensional in both massive and massless case (the latter is familiar from massless representations of Poincare group where particle states are characterized by helicity). The mass is invariant only under the smaller group. This might allow to have left-handed massive neutrinos as well as massive fermions with spin dependent mass.

  3. The requirement of CP invariance extends all these reduced symmetry groups to the full Poincare group. The observed very small breaking of CP symmetry might correlate with a small breaking of Lorentz symmetry. Matter antimatter asymmetry might relate to the reduced Lorentz invariance.

The idea is highly interesting from TGD point of view. The groups L3 and L4 indeed play a very prominent role in TGD.

  1. The full Lorentz invariance is obtained in TGD only at the level of the entire configuration space ("world of classical worlds", WCW) which is union over sub-configuration spaces associated with what I call causal diamonds (see this). These sub-configuration spaces decompose further into a union of sub-sub-configuration spaces for which a choice of quantization axes of spin reflects itself at the level of generalized geometry of the imbedding space (quantum classical correspondence requires that the choice of quantization axes has imbedding space and space-time correlates) (see this). The construction of the geometry for these sub-worlds of classical worlds reduces to light-cone boundary so that the little group L3 leaving a given point of light-cone boundary invariant is in a special role in TGD framework.

  2. The selection of a preferred light-like momentum direction at light-cone boundary corresponds to the selection of quantization axis for angular momentum playing a key role in TGD view about hierarchy of Planck constants associated with a hierarchy of Jones inclusions implying a breaking of Lorentz invariance induced by the selection of quantization axis. The number theoretic vision about quantum TGD implies a selection of two preferred axes corresponding to time-like and space-like direction corresponding to real and preferred imaginary unit for hyper-octonions (see this). In both cases L4 emerges naturally.

  3. The TGD based identification of Kac-Moody symmetries as local isometries of the imbedding space acting on 3-D light-like orbits of partonic 2-surfaces involves a selection of a preferred light-like direction and thus the selection of L4.

  4. Also the so called massless extremals representing a precisely targeted propagation of patterns of classical gauge fields with light velocity along typically cylindrical tubes without a change in the shape involve L4. A very general solution ansatz to classical field equations involves a local decomposition of M4 to longitudinal and transversal spaces and selection of a light-like direction (see this).

  5. The parton model of hadrons assumes a preferred longitudinal direction of momentum, and mass squared decomposes naturally into longitudinal and transversal mass squared. Also p-adic mass calculations rely heavily on this picture, and the thermodynamical mass squared might be regarded as a longitudinal mass squared (see this). In the TGD framework the right handed covariantly constant neutrino generates a super-symmetry in CP2 degrees of freedom and it might be better to regard the left-handed neutrino mass as a longitudinal mass.

This list justifies my own hunch that Glashow and Cohen might have discovered something very important.

A reader interested in the background can consult the article Algebraic braids, sub-manifold braid theory, and generalized Feynman diagrams and the new chapter Generalized Feynman Diagrams as Generalized Braids of "Towards M-Matrix".

MY HOMEPAGE ADDRESS CHANGES!!

I accidentally learned that the host providing the computer storage is ending all this kind of activity. Accidentally, because the Finnish enterprises behind Wippie - Saunalahti and Elisa - had decided to do the whole thing in complete secrecy and had cut both phone and email connections related to wippie-space.

When trying to make contact, the only response was that my password is wrong. Many extremely frustrated and irritated people on the web told about this. The big bosses have however calculated that the people practically losing their whole life's work in this manner are a minority which they need not care about.

It was extremely difficult to find out what was happening and what I should do. It turned out that I cannot use the old addresses.

The old web addresses

tgd.wippiespace.com

and

tgdq.wippiespace.com

will be replaced by

tgdtheory.com.

For some time I will disappear from the web just as I did three years ago when Helsinki University decided to get rid of my web presence and also managed quite well.

I will do my best to update the web addresses appearing in the books and articles on my home page and also in viXra.org, but this will take time. Apologies. The new web page should be in use within a few days.

Any concrete ideas to help in the situation are welcome!! In particular, there should be ways to inform search engines about the change of the address.

Sunday, January 15, 2012

Number theoretical universality and quantum arithmetics, renormalization, and relation of TGD to N=4 SYM

In the previous posting I already discussed the proposal for how the twistorial construction could generalize to apply to generalized Feynman diagrams in the TGD framework. During the last few days I have made further progress in understanding the number theoretical aspects of the proposed construction.

In particular, I finally have a simple and general justification for the hypothesis that length scales coming as powers of two and p-adic length scales associated with p-adic primes near powers of 2 are very special. The explanation is extremely simple: quantum arithmetics is characterized by a prime p, and for p=2 all odd quantum integers are identical with ordinary integers so that only the powers of two, which are mapped to their inverses, distinguish 2-adic quantum arithmetics from the ordinary one.

I have also corrected some erroneous statements in the view about coupling constant evolution and compared the approach to that of N=4 SYM developed by Nima Arkani-Hamed and others. Therefore this posting contains some material overlapping with the previous posting.

Number theoretical universality

The construction of the amplitudes should be number theoretically universal, meaning that the amplitudes should make sense also in p-adic number fields, or perhaps in an adelic sense in the tensor product of p-adic number fields. Quantum arithmetics is characterized by a p-adic prime, and the canonical identification mapping p-adic amplitudes to real amplitudes is expected to make number theoretical universality possible.

This is achieved if the amplitudes are expressible in terms of quantum rationals and rational functions having quantum rationals as coefficients of the powers of the arguments. This would be achieved by simply mapping ordinary rationals to quantum rationals when they appear as coefficients of the polynomials appearing in rational functions.

Quantum rationals are characterized by a p-adic prime p, and a p-adic momentum with mass squared interpreted as a p-adic integer appears in the propagator. If the M2 mass squared is proportional to this p-adic prime p, the propagator behaves as 1/P^2 ∝ 1/p, which means that one has a pole-like contribution for these on-mass-shell longitudinal masses. p-Adic mass calculations indeed give a mass squared proportional to p. The real counterpart of the propagator in canonical identification is proportional to p. This would select all CDs characterized by n divisible by p as analogs of propagator poles. Note that the infrared singularity is moved, and the largest p-adic prime appearing as a divisor of the integer characterizing the largest CD indeed serves as a physical IR cutoff.
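The pole-like enhancement can be illustrated with the canonical identification I(m/n)=I(m)/I(n) used in the quantum arithmetics postings above, assuming that the "real counterpart of the propagator" simply means applying this map to the formally rational value 1/P^2. When P^2 = kp with k not divisible by p, one gets I(1/(kp)) = p/I(k), i.e. an enhancement by a factor p relative to I(1/k). The following toy check makes this explicit (the prime and the mass squared values are arbitrary stand-ins):

```python
from fractions import Fraction

def pinary_digits(m, p):
    digits = []
    while m > 0:
        m, a = divmod(m, p)
        digits.append(a)
    return digits

def I(r, p):
    """Canonical identification of a positive rational r, I(m/n) = I(m)/I(n)."""
    cid = lambda m: sum(a * p**(-k) for k, a in enumerate(pinary_digits(m, p)))
    return cid(r.numerator) / cid(r.denominator)

p = 107                                   # a stand-in for a large p-adic prime
for P2 in (3, 5, 3 * p, 5 * p):           # toy M^2 mass squared values, with and without a factor p
    print(P2, I(Fraction(1, P2), p))      # values with P2 proportional to p are larger by a factor p
```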

It would seem that one must allow different p-adic primes in the generalized Feynman diagram since physical particles are in general characterized by different p-adic primes. This would require the analog of tensor product for different quantum rationals analogous to adeles. These numbers would be mapped to real (or complex) numbers by canonical identification.

How to get only finite number of diagrams in a given IR and UV resolution?

In gauge theory one obtains an infinite number of diagrams. In zero energy ontology the all-important additional constraint comes from the on-mass-shell conditions at internal and external lines and from the requirement that the M2 momentum squared is quantized for super-conformal representations in terms of the stringy mass squared spectrum.

This condition alone does not however imply that the number of diagrams is finite. If the forward scattering diagram is non-vanishing, also scattering without on-mass-shell massive conditions on final state lines is possible. One can construct diagrams representing a repeated n→n scattering, and by combining these amplitudes with a non-forward scattering amplitude one obtains an infinite number of scattering diagrams with fixed initial and final states. Number theoretic universality however requires that the number of contributing diagrams must be finite unless some analytic miracle happens.

The finite number of diagrams could be achieved if one gives the vision about CDs within CDs a more concrete metric meaning. In the spirit of the Uncertainty Principle, the size scale of the CD, defined by the temporal distance between its tips, could correspond to a momentum scale defined as its inverse. A further condition would be that the sub-CDs and their Lorentz boosts are indeed within the CD and do not overlap. Obviously the number of diagrams representing repeated n→n forward scattering is finite if these assumptions are made. This would also suggest a scale hierarchy in powers of 2 for CDs: the reason is that a given CD with scale T=nT(CP2) can contain two non-overlapping sub-CDs with the same rest frame only if the sub-CD has a size scale smaller than nT(CP2)/2. This applies also to the Lorentz boosts of the sub-CDs.

Amplitudes would be constructed by labeling the CDs by an integer n defining their size scale. p-Adicity suggests that the factorization of n into primes must be important, and if the condition n=p holds true, a new resonance-like contribution appears corresponding to p-adic diagrams involving the propagator.

Should one allow all M2 momenta in the loops in all scales or should one restrict the M2 momenta to have a particular mass squared scale determined somehow by the size of the CD involved? If this kind of constraint is posed it must be posed in a mathematically elegant manner, and it is not clear how to do this.

Is this kind of constraint really necessary? Quantum arithmetics for the length scale characterized by the p-adic prime p would make M2 mass squared values divisible by p into almost-poles of the propagators, and this might be enough to effectively select the particular p and the corresponding momentum scale and CD scale. Consider only the Mersenne prime M127 = 2^127 - 1 as a concrete example.

How to realize the number theoretic universality?

One should be able to realize the p-adicity in some elegant manner. One must certainly allow different p-adic primes in the same diagram, and here an adelic structure seems unavoidable as a tensor product of amplitudes in different p-adic number fields or rather - their quantum arithmetic counterparts characterized by a preferred prime p and mapped to the reals by the substitution p→ 1/p. What does this demand?

  1. One must be able to glue amplitudes in different p-adic number fields together so that the lines in some cases must have a dual interpretation as lines of two p-adic number fields. It also seems that one must be able to assign a p-adic prime and the quantum arithmetics characterized by a given prime p to a given propagator line. This prime is probably not arbitrary, and it will be found that it should not be larger than the largest prime dividing the n characterizing the CD considered.

  2. Should one assign p-adic prime to a given vertex?

    1. Suppose first that bare 3-vertices reduce to algebraic numbers containing no rational factors. This would guarantee that they are the same in both the real and the p-adic sense. Propagators would however be quantum rationals and depend on p and have an almost-pole when the integer valued mass squared is proportional to p.

    2. The radiative corrections to the vertex would involve propagators and this suggests that they bring in the dependence on p giving rise to p-adic coupling constant evolution for the real counterparts of the amplitudes obtained by canonical identification.

      1. Should also vertices obey p-adic quantum arithmetics for some p? What about a vertex in which particles characterized by different p-adic primes enter? Which prime defines the vertex or should the vertex somehow be multi-p p-adic? It seems that vertex cannot contain any prime as such although it could depend on incoming p-adic primes in algebraic or transcendental manner.

      2. Could the radiative corrections sum up to algebraic number depending on the incoming p-adic primes? Or are the corrections transcendental as ordinary perturbation theory suggests and involve powers of π and logarithm of mass squared and basically logarithms of some primes requiring infinite-dimensional transcendental extension of p-adic numbers? If radiative corrections depend only on the logarithms of these primes p-adic coupling constant evolution would be obtained. The requirement that radiative vertex corrections vanish does not look physically plausible.

    3. Only CDs corresponding to integers m<n would be possible as sub-CDs. A geometrically attractive possibility is that a CD characterized by integer n allows only propagator lines which correspond to prime factors of integers not larger than the largest prime dividing n in their quantum arithmetics. Bare vertices in turn could contain only primes larger than the maximal prime dividing n. This would simplify the situation considerably. This could give rise to coupling constant evolution even in the case that the radiative corrections are vanishing, since the rational factors possibly present in vertices would drop away as n increases.

    4. Integers n=2^k give rise to an objection. They would allow only 2-adic propagators and vertices containing no powers of 2. For p=2 the quantum arithmetics reduces to ordinary arithmetics, and ordinary rationals correspond to p=2 apart from the fact that powers of 2 are mapped to their inverses in the canonical identification. This is not a problem and might relate to the fact that primes near powers of 2 are physically preferred. Indeed, the CDs with n=2^k would be in a unique position number theoretically. This would conform with the original - and as such wrong - hypothesis that only these time scales are possible for CDs. The preferred role of powers of two also supports the p-adic length scale hypothesis.

These observations give rather strong clues concerning the construction of the amplitudes. Consider a CD with time scale characterized by integer n.

  1. For a given CD all sub-CDs with m<n are allowed, and all p-adicities corresponding to the primes appearing as prime factors of a given m are possible. The values m=2^k are in a preferred position since p=2 quantum rationals not containing 2 reduce to ordinary rationals.

  2. The geometric condition that sub-CDs and their boosts remain inside CD and do not overlap together with momentum conservation and on-mass-shell conditions on internal lines implies that only a finite number of generalized Feynman diagrams are possible for given CD. This is essential for number theoretical universality. To each sub-CD one must assign its moduli spaces including its not-too-large boosts. Also the planes M2 associated with sub-CDs should be regarded as independent and one should integrate over their moduli.

  3. The construction of amplitudes with a given resolution would be a process involving a finite number of steps. The notion of renormalization group evolution suggests a generalization as a change of the amplitude induced by adding CDs with size smaller than smallest CDs and their boosts in a given resolution.

  4. It is not clear whether increase of the upper length scale interpreted as IR cutoff makes sense in the similar manner although physical intuition would encourage this expectation.

How to understand renormalization flow in twistor context?

In twistor context the notion of mass renormalization is not straightforward since everything is massless. In TGD framework p-adic mass scale hypothesis suggests a solution to the problem.

  1. At the fundamental level all elementary particles are massless and only their composites forming physical particles are massive.
  2. M2 mass squared is given by p-adic mass calculations and should correspond to the mass squared of the physical particle. There are contributions from magnetic flux tubes and in the case of baryons this contribution dominates.
  3. p-Adic physics discretizes the coupling constant flow. Once the p-adic length scale of the particle is fixed, its M2 momentum squared is fixed and masslessness takes care of the rest.

Consider now how the renormalization flow would emerge in this picture. At the level of generalized Feynman diagrams the change of the IR (UV) resolution scale means that the maximal size of the CDs involved increases (the minimal size decreases).

Concerning the question what CD scales should be allowed, the situation is not completely clear.

  1. The most general assumption allows integer multiples of CP2 scale and would guarantee that the products of hermitian matrices and powers of S-matrix commuting with them define Kac-Moody type algebra assignable to M-matrices. If one uses in renormalization group evolution equation CDs corresponding to integer multiples of CP2 length scale, the equation would become a difference equation for integer valued variable.

  2. p-Adicity would suggest that the scales of CDs come as prime multiples of CP2 scale. The proposed realization of p-adicity indeed puts CDs characterized by p-adic primes p in a special position since they correspond to the emergence of a vertex corresponding to p-adic prime p which depends on p in the sense that the radiative corrections to 3-vertex can give it a dependence on log(p). This requires infinite-D transcendental extension of p-adic numbers.

    As far as coupling constant evolution in the strict sense is considered, a natural looking choice is the evolution of vertices as a function of the p-adic primes of the particles arriving at the vertex, since radiative corrections are expected to depend on their logarithms.

  3. p-Adic length scale hypothesis would allow only p-adic length scales near powers of two. There are excellent reasons to expect that these scales are selected by a kind of evolutionary process favoring those scales for CDs for which particles are maximally stable. The fact that quantum arithmetics for p=2 reduces to ordinary arithmetics when the quantum integers do not contain 2 puts size scales coming as powers of 2 in a special position and also supports the p-adic length scale hypothesis.

Renormalization group equations are based on studying what an infinitesimal reduction of the UV resolution scale would mean. Now the change cannot be infinitesimal but must correspond to a change in the scale of the CD by one unit defined by the CP2 size scale.

  1. The decrease of the UV cutoff means the addition of new details represented as bare 3-vertices, that is truncated triangles having size below the earlier length scale resolution. The addition can be done inside the original CD or inside any sub-CD, taking care that the details remain inside the CD. The hope is that this addition of details allows a recursive definition. Typically the addition would involve attaching two sub-CDs to a propagator line or to two propagator lines and connecting them with a propagator. The vertex in question would correspond to a p-adic prime dividing the integer characterizing the sub-CDs. Also the increase of the shortest length scale makes sense and means just the deletion of the corresponding sub-CDs. Note that also the positions of the sub-CDs inside the CD matter since the number of allowed boosts depends on the position. This is an additional complication.

  2. The increase of the IR cutoff length means that the size of the largest CD increases. The physical interpretation would be in terms of the time scale in which one observes the process. If this time scale is too long, the process is not visible. For instance, the study of strong interactions between quarks requires a short enough CD scale. At long scales one only observes hadrons, and at even longer scales atomic nuclei and atoms.

  3. One could also allow the UV scale to depend on the particle. This scale should correspond to the p-adic mass scale assignable to the stable particle. In hadron physics this kind of renormalization is a standard operation.

Comparison with N=4 SYM

The ultimate hope is to formulate all these ideas using precise formulas. This goal is still far away but one can make trials. Let us first compare the above proposal to the formalism in N=4 SYM.

  1. In the construction of twistorial amplitudes the 4-D loop integrals are interpreted as residue integrals in complexified momentum space and reduce to residues at the poles. This is analogous to using "on mass shell states" defined by these poles. In the TGD framework the situation is different since one explicitly assigns massless on-mass-shell fermions to braid strands and allows the sign of the energy to be both positive and negative.

  2. The twistor formalism and the description of momentum and helicity in terms of the twistor (λ,μ) certainly make sense for any spin. The well-known complications relate to the necessity to use complex twistors for M4 signature: this would correspond to complexified space-time or momentum space. Also region momenta and the associated momentum twistors have TGD counterparts, so that the basic building bricks for defining the analogs of twistorial amplitudes exist.

An important special feature is that the gauge potential is replaced with its N=4 super version.

  1. This has some non-generic implications. In particular, gluon helicity -1 is obtained from the helicity +1 ground state by "adding" four fermionic super-generators, each shifting the helicity by 1/2 unit. This interpretation of the two helicities of a massless particle is not possible in N<4 theories nor in TGD, and whether this is something deep or not remains an open question.

  2. In the TGD framework it is natural to interpret all fermion modes associated with the partonic 2-surfaces (and the corresponding light-like 3-surfaces) as generators of supersymmetry, so that fermions rather than helicity +1 gauge bosons are the fundamental objects. The right-handed neutrino has a special role since it has no electroweak or color interactions and generates the SUSY with the smallest breaking.

  3. The N=2 SUSY generated by the right-handed neutrino and antineutrino is broken since the propagator for states containing three fermionic braid strands at the same wormhole throat behaves like 1/p^3: this is already an anyon-like state. The least broken SUSY is N=1 SUSY with the spartners of fermions being spin zero states. The proposal is that one could construct scattering amplitudes by using a generalized chiral superfield with N equal to the number of spinor modes acting on the ground state, which in the TGD case has vanishing helicity; for N=4 it has helicity +1. This would suggest that the analogs of twistorial amplitudes exist and could even have very similar formulas in terms of twistor variables.

  4. The all-loop integrand for scattering amplitudes in planar N=4 SYM relies on the BCFW recursion formula, which allows one to sew two amplitudes together using a single analog of a propagator line christened the BCFW bridge. Denote by Y(n,k,l) the n-particle amplitude with k positive helicity gluons and l loops. One can glue Y(nL,kL,lL) and Y(nR,kR,lR) by using the BCFW bridge and add an "entangled" removal of two external lines of the Y(n+2,k+1,l-1) amplitude, with n = nL+nR-2, k = kL+kR, l = lL+lR, to get the Y(n,k,l) amplitude recursively, starting from just the two amplitudes defining the 3-vertices. The procedure involves only a residue integral over Gl(k,n) of a quantity which is a Yangian invariant. The question is whether one could apply this procedure by replacing N=4 SUSY with SUSY in the TGD sense and generalizing the fundamental three-particle vertices appropriately by requiring that they are Yangian invariants. A toy counting of the terms generated by this recursion is sketched after this list.

  5. One can also make good guesses for the BCFW bridge and the entangled removal. By looking at the structure of the amplitudes obtained by the procedure from 3-amplitudes, one learns that one obtains tree diagrams in which some external lines are connected to give loops. The simplest situation would be that the BCFW bridge corresponds to the M2 fermion propagator for a given braid strand and the entangled removal corresponds to a short-circuiting of two external lines into an internal loop line. One would have just ordinary Feynman graphs but with the vertices given by Yangian invariants (note that there is a sum over loop corrections). It should be easy to kill this conjecture.
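
As an illustration of the bookkeeping in the recursion of point 4 above, the toy sketch below (Python) merely counts the terms feeding Y(n,k,l) when the quoted splitting rules n = nL+nR-2, k = kL+kR, l = lL+lR and the entangled removal from Y(n+2,k+1,l-1) are applied, starting from two seed 3-amplitudes. The k-labels of the seeds are an assumption made only to obtain a concrete counter; no amplitudes or Yangian invariants are computed.

    from functools import lru_cache

    # Seed 3-point amplitudes ("just two amplitudes defining the 3-vertices");
    # their k-labels are illustrative assumptions.
    SEEDS = {(3, 1, 0), (3, 2, 0)}

    @lru_cache(maxsize=None)
    def num_terms(n, k, l):
        """Count the terms feeding Y(n,k,l) under the splitting rules quoted in
        the text: bridge terms with n = nL+nR-2, k = kL+kR, l = lL+lR, plus the
        entangled removal of two legs of Y(n+2, k+1, l-1)."""
        if (n, k, l) in SEEDS:
            return 1
        if n < 3 or k < 0 or l < 0:
            return 0
        total = 0
        for nL in range(3, n):                  # BCFW-bridge terms
            nR = n + 2 - nL
            for kL in range(0, k + 1):
                for lL in range(0, l + 1):
                    total += num_terms(nL, kL, lL) * num_terms(nR, k - kL, l - lL)
        if l >= 1:                              # entangled-removal term
            total += num_terms(n + 2, k + 1, l - 1)
        return total

    print(num_terms(4, 2, 0), num_terms(5, 3, 0), num_terms(6, 4, 0), num_terms(4, 3, 1))
    # -> 1 2 5 9 with the toy seeds above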

A reader interested in the background can consult the article Algebraic braids, sub-manifold braid theory, and generalized Feynman diagrams and the new chapter Generalized Feynman Diagrams as Generalized Braids of "Towards M-Matrix".

Saturday, January 14, 2012

Proposal for a twistorial description of generalized Feynman graphs

Listening to the lectures of Nima Arkani-Hamed is always an inspiring experience, and so it was also this time. The first recorded lecture was mostly about the basic "philosophical" ideas behind the approach, and the second lecture continued the discussion of the key points of twistor kinematics which I should already have in my backbone but do not. The lectures again stimulated the feeling that generalized Feynman diagrammatics has all the elements needed to allow a twistorial description. It should be possible to interpret the diagrams as the analogs of twistorial diagrams.

A couple of new ideas emerged as a result of a concentrated effort to build a bridge to the twistorial approach.

  1. Generalized Feynman diagrams involve only massless states at wormhole throats, so that a twistorial description makes sense for the kinematical variables. One should identify the counterparts of the lines and vertices of the twistor diagrams constructed from planar polygons, and the counterparts of the region momenta.

  2. M2⊂ M4 appears as a central element of TGD based Feynman diagrammatics, and the M2 projection of the four-momentum appears in the propagator and also in the modified Dirac equation. I realized that p-adic mass calculations must give the thermal expectation value of the M2 mass squared. Since the throats are massless, this means that the transversal momentum squared equals the CP2 contribution plus the conformal weight contribution to the mass squared.

  3. It is not too surprising that a very beautiful interpretation in terms of the analogs of twistorial diagrams becomes possible. The idea is to interpret wormhole contacts as pairs of lines of twistor diagrams carrying on-mass-shell momenta. In this manner triangles with truncated apexes, with double lines representing the wormhole throats, become the basic objects in generalized Feynman diagrammatics. The somewhat mysterious region momenta of the twistor approach correspond to momentum exchanges at the wormhole contacts defining the vertices. A reasonable expectation is that the Yangian invariants used to construct the amplitudes of N=4 SUSY can be used as basic building bricks also now.

  4. The renormalization group is not understood in the usual twistor approach, and p-adic considerations together with the quantization of the size of the causal diamond (CD) suggest that the old proposal about the discretization of coupling constant evolution to p-adic length scale evolution makes sense. A very concrete realization of the evolution indeed suggests itself: the replacement of each triangle with a quantum superposition of amplitudes associated with triangles of smaller size scale contained within the original triangle, each characterized by the size scale of the CD containing it. In fact the incoming and outgoing particles of a vertex could be located at the light-like boundaries of the CD.

  5. The approach should also be number theoretically universal, and this suggests that the amplitudes should be expressible in terms of quantum rationals and of rational functions having quantum rationals as coefficients of the powers of their arguments. Quantum rationals are characterized by a p-adic prime p, and a p-adic momentum with mass squared interpreted as a p-adic integer appears in the propagator. This means that the propagator proportional to 1/P2 is proportional to 1/p when the mass squared is divisible by p, so that one has a pole-like contribution. The real counterpart of the propagator in canonical identification is proportional to p. This would select all CDs characterized by n divisible by p as analogs of poles.

What generalized Feynman diagrams could be?

Let us first list briefly how these generalized Feynman diagrams emerge and what they should be.

  1. Zero energy ontology and the closely related notion of causal diamond (CD) are absolutely essential for the whole approach. The U-matrix between zero energy states is unitary but does not correspond to the S-matrix. Rather, the U-matrix has as its orthonormal rows M-matrices, which are "complex" square roots of density matrices representable as a product of a Hermitian square root of a density matrix and a unitary and universal S-matrix commuting with it, so that the Lie algebra of these Hermitian matrices acts as symmetries of the S-matrix. One can allow all M-matrices obtained by allowing integer powers of the S-matrix and obtains the analog of a Kac-Moody algebra (see the toy sketch after this list). The powers of S correspond to CDs with the temporal distance between the tips coming as an integer multiple of the CP2 size scale. The goal is to construct the M-matrices, and these could be non-unitary because of the presence of the Hermitian square root of the density matrix.

  2. It is assumed that M-matrix elements can be constructed in terms of generalized Feynman diagrams. What generalized Feynman diagrams strictly speaking are is left open. The basic properties of generalized Feynman diagrams - in particular the property that only massless on-mass-shell states, but with both signs of energy, appear - however strongly suggest that they are much more like twistor diagrams and that the twistorial methods used to sum up Feynman diagrams apply.
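
A numerical toy model of the algebraic statement in point 1 - not of the actual M-matrices, whose construction is the open problem - can be written in a few lines: take a random unitary S, a positive Hermitian H commuting with it and normalized so that Tr(H^2) = 1, and check that M = H S^n gives the same density matrix for every integer power n. The dimension and the particular choice of H below are arbitrary illustrations.

    import numpy as np
    from numpy.linalg import qr, matrix_power

    rng = np.random.default_rng(1)
    dim = 4

    # Toy "S-matrix": a random unitary from a QR decomposition.
    S, _ = qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

    # Hermitian, positive matrix commuting with S: any real function of S works,
    # here H0 = 2*I + S + S^dagger with eigenvalues 2 + 2*cos(theta) >= 0.
    H0 = 2 * np.eye(dim) + S + S.conj().T
    H = H0 / np.sqrt(np.trace(H0 @ H0).real)    # normalize so that Tr(H^2) = 1

    for n in range(4):
        M = H @ matrix_power(S, n)              # candidate M-matrix: H * S^n
        rho = M.conj().T @ M                    # associated density matrix
        print(n,
              np.allclose(rho, H @ H),          # rho = H^2 independently of n
              np.allclose(H @ S, S @ H),        # H commutes with S
              np.isclose(np.trace(rho).real, 1.0))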

The lines of the generalized Feynman diagrams

Generalized Feynman diagrams are constructed solely from diagrams containing on-mass-shell massless particles in both external and internal lines. Masslessness could also mean masslessness in the M4× CP2 sense, and p-adic thermodynamics indeed suggests that this is true in some sense.

  1. For masslessness in the M4× CP2 sense the standard twistor description should fail for massive excitations having a mass scale of order 10^(-4) Planck masses. At the external lines massless states combine to form massive on-mass-shell particles. In the following this possible difficulty will be neglected. The stringy picture suggests that this problem cannot be fatal.

  2. A second possibility is that massless states form composites, which in the case of fermions have the mass spectrum determined by the CP2 Dirac operator, and that physical states correspond to states of super-conformal representations with the ground state weight determined by the sum of the vacuum conformal weight and the contribution of the CP2 mass squared. In this case one would have masslessness in the M4 sense for the building blocks, while the composite would be massless in the M4× CP2 sense, and the twistorial description would work.

  3. The third and most attractive option is based on the fact that it is the M2 momentum that appears in the propagators. The picture behind p-adic mass calculations is a string picture inspired by the hadronic string model, and in hadron physics one can assign M2 to the longitudinal parts of the parton momenta.

    One can therefore consider the possibility that the M2 momentum squared obeys p-adic thermodynamics. The M2 momentum appears also in the solutions of the modified Dirac equation, so that this identification looks physically very natural. The M2 momentum also naturally characterizes massless extremals (topological light rays) and is in this case massless. Therefore the throats could be massless, but the M2 momentum, identifiable as the physical momentum, would be predicted by p-adic thermodynamics and its p-adic norm could correspond to the scale of the CD.

    Mathematically this option is certainly the most attractive one, and it might also be physically acceptable since an integration over the moduli characterizing M2 is performed to get the full amplitude, so that there is no breaking of Poincare invariance.

There are also other complications.

  1. Massless wormhole throats carry magnetic charges and bind to form magnetically neutral composite particles consisting of wormhole contacts connected by magnetic flux tubes. The wormhole throat at the other end of the flux tube carries the opposite magnetic charge and a neutrino pair canceling the electroweak isospin of the physical particle. This complication is completely analogous to the appearance of color magnetic flux tubes in the TGD description of hadrons and will be neglected for the moment.

  2. Free fermions correspond to single wormhole throats and the ground state is massless for them. Topologically condensed fermions carry mass, and their ground state has developed mass by p-adic thermodynamics. The above considerations suggest that the correct interpretation of the p-adic thermal mass squared is as M2 mass squared and that the free fermions are still massless! Bosons are always pairs of wormhole throats. It is convenient to denote bosons and topologically condensed fermions by a pair of parallel lines very close to each other and a free fermion by a single line.

  3. Each wormhole throat carries a braid and braid strands are carriers of four-momentum.

    1. The four momenta are parallel and only the M2 projection of the momentum appears in the fermionic propagator. To obtain Lorentz invariance one must integrate over boosts of M2 and this corresponds to integrating over the moduli space of causal diamond (CD) inside which the generalized Feynman diagrams reside.

    2. Each line gives rise to a propagator. The sign of the energy for the wormhole throat can be negative so that one obtains also space-like momentum exchanges.

    3. It is not quite clear whether one can also allow purely bosonic braid strands. The dependence of the overall propagator factor on the longitudinal momentum is 1/p^(2n), so that throats carrying 1 or 2 fermionic strands (or a single purely bosonic strand) are in a preferred position, and braid strand numbers larger than 2 give rise to something different from an ordinary elementary particle. It is probably not an accident that the quantum phases q=exp(i2π/n) give rise to bosonic and fermionic statistics for n=1,2 and to braid statistics for n>2. States with n≥ 3 are expected to be anyonic. This also reduces the large supersymmetry generated by the fermionic oscillator operators at the partonic 2-surfaces effectively to N=1 SUSY.

In the following it will be assumed that all braid strands appearing in the lines are massless and have parallel four-momenta, that the M2 momentum squared is given by p-adic thermodynamics, and that the actual mass squared vanishes. It is also assumed that the M2 momenta of the throats of the wormhole contact are parallel, in accordance with the classical idea that the wormhole throats move in parallel. It is convenient to denote the wormhole contact graphically by a pair of parallel lines very close to each other.

Vertices

The following proposal for vertices neglects the fact that physical elementary particles are constructed from wormhole throat pairs connected by magnetic flux tubes. It is however easy to generalize the proposal to that case.

  1. Conservation of momentum holds at each vertex, but only for the total momentum assignable to the wormhole contact rather than for each throat. The latter condition would force all partons to have parallel massless four-momenta and the S-matrix would be more or less trivial. The conservation of four-momentum, the massless on-mass-shell conditions for the 4-momenta of the wormhole throats, and the on-mass-shell conditions for the M2 momentum squared given by the stringy mass squared spectrum are extremely powerful, and it is quite possible that one obtains only a finite number of diagrams in a given resolution defined by the largest and smallest causal diamonds.

  2. I have already earlier developed arguments strongly suggesting that only three-vertices are fundamental. The three-vertex at the level of wormhole throats means the gluing of the ends of the generalized lines along the 2-D partonic surfaces defining their ends, so that the diagrams are generalizations of Feynman diagrams rather than 4-D generalizations of string diagrams (in particular, a generalization of the trouser diagram does not describe particle decay). The vertex can be a BFF or BBB vertex, or a variant of this kind of vertex obtained by replacing some B:s and F:s with their superpartners obtained by adding a right-handed neutrino or antineutrino to the wormhole throat carrying fermion number. Massless on-mass-shell conditions hold true for the wormhole throats in internal lines, but the internal lines are not on mass shell as massive particles, unlike the external lines.

  3. What happens at the vertex is a momentum exchange between different wormhole throats regarded as braids with strands carrying parallel momenta. This momentum exchange in general corresponds to a non-vanishing mass squared and can be graphically described as a line connecting two vertices of a triangle defined by the particles entering the vertex. To each vertex of the triangle either a massless fermion line or a pair of lines describing a topologically condensed fermion or a boson enters. The lines connecting the vertices of the triangle carry the analogs of region momenta, which are in general massive, but the differences of two adjacent region momenta are massless. The outcome is nothing but the analog of a twistor diagram. 3-vertices are fundamental, so one would obtain only 3-gons, and the Feynman graph would be a collection of 3-gons such that from each apex an internal or external line emerges.

  4. A more detailed graphical description utilizes double lines. For FFB vertices with free fermions one would have a 4-gon containing a pair of vertices very near to each other corresponding to the outgoing boson wormhole contact described by a double line. This is obtained by truncating the bosonic vertex of the 3-gon and attaching the bosonic double line to it. For topologically condensed fermions and the BBB vertex one would have a 6-gon obtained by truncating all apices of a 3-gon.

Some comments about the diagrammatics are in order.

  1. On-mass-shell conditions and momentum conservation are extremely powerful, so that one has excellent reasons to expect that in a given resolution, defined by the largest and smallest CD involved, the number of contributing diagrams is finite (see the toy counting sketch after this list).

  2. The resulting diagrams are very much like twistor diagrams in N=4, D=4 SYM, for which also the three-vertex and its conjugate are the fundamental building bricks from which tree amplitudes are constructed: from tree amplitudes one in turn obtains loop amplitudes by using the recursion formulas. Since all momenta are massless, one can indeed use the twistor formalism. For topologically condensed fermions one just forms all possible diagrams consisting of 6-gons whose truncated apices are connected by double lines and takes care that n lines are taken to be incoming lines.

  3. The lines can cross, and this corresponds to the analog of a non-planar diagram. I have proposed a knot-theoretic description of this situation based on the generalized braiding matrix appearing in integrable QFTs defined in M2. By using a representation of the braiding operation which allows one to eliminate the crossings of the lines, one could transform all diagrams to planar diagrams, to which the existing construction recipes could be applied.

  4. The basic conjecture is that the basic building bricks are Yangian invariants. Not only the conformal group of M4 but also the super-conformal algebra should have an extension to a Yangian. This Yangian should be related to the symmetry algebra generated by the M-matrices and analogous to a Kac-Moody algebra. For this Yangian the points serving as vertices of the momentum polygon are replaced with partonic 2-surfaces.
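
The following toy sketch (Python) illustrates, in a crude integer discretization, the finiteness statement of point 1 above: once the component sizes are bounded - the bound standing in for the resolution defined by the largest and smallest CD - there are only finitely many ways to split a given M2 momentum into light-like M2 momenta with either sign of energy. The integer grid and the bound are pure illustration, not part of the actual proposal.

    from itertools import product

    def lightlike(e_max):
        """All light-like M2 momenta (p0, p1) with p0^2 = p1^2 and integer
        components bounded by e_max; both signs of energy are allowed."""
        vecs = set()
        for e in range(-e_max, e_max + 1):
            vecs.add((e, e))
            vecs.add((e, -e))
        return sorted(vecs)

    def count_splittings(P, n, e_max):
        """Count ordered ways to write P as a sum of n light-like momenta."""
        count = 0
        for combo in product(lightlike(e_max), repeat=n):
            if tuple(map(sum, zip(*combo))) == P:
                count += 1
        return count

    print(count_splittings((4, 0), 2, 6))   # time-like total momentum: 2 ways
    print(count_splittings((4, 4), 2, 6))   # light-like total momentum: 9 ways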

Generalization of the diagrammatics to apply to the physical particles

The previous discussion has neglected the fact that the physical particles are not wormhole contacts. Topologically condensed elementary fermions and bosons indeed correspond to magnetic flux tube pairs at different space-time sheets with wormhole contacts at their ends. How could one describe this situation in terms of the generalized Feynman diagrams?

The natural guess is that one just puts two copies of the diagrams above each other, so that the triangles are replaced with small cylinders whose cross section is given by the triangle and whose edges represent magnetic flux tubes. It is natural to allow momentum exchanges also at the other end of the cylinder: for ordinary elementary particles these ends carry only neutrino pairs, so that the contribution to interactions is screening at small momenta. Also momentum exchanges along the direction of the cylinder should be allowed and would correspond to the non-perturbative low energy degrees of freedom in the case of hadrons. This momentum exchange, assignable to the flux tube, would be between the truncated triangles rather than separately along the three vertical edges of the triangular cylinder.

Number theoretical universality and quantum arithmetics

The approach should also be number theoretically universal, meaning that the amplitudes should make sense also in p-adic number fields. Quantum arithmetics is characterized by a p-adic prime, and the canonical identification mapping p-adic amplitudes to real amplitudes is expected to make the universality possible.

This is achieved if the amplitudes are expressible in terms of quantum rationals and of rational functions having quantum rationals as coefficients of the powers of their arguments. This would be achieved by simply mapping the ordinary rationals appearing as coefficients of the polynomials defining the rational functions to quantum rationals.

Quantum rationals are characterized by a p-adic prime p, and a p-adic momentum with mass squared interpreted as a p-adic integer appears in the propagator. If the M2 mass squared is proportional to this p-adic prime p, the propagator behaves as 1/P2 ∝ 1/p, which means that one has a pole-like contribution for these on-mass-shell longitudinal masses. p-Adic mass calculations indeed give a mass squared proportional to p. The real counterpart of the propagator in canonical identification is proportional to p. This would select all CDs characterized by n divisible by p as analogs of poles.
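
A minimal sketch of the canonical identification in its simplest form, I(sum x_n p^n) = sum x_n p^(-n), makes the last statements concrete: p-adically 1/P2 ∝ 1/p has a large p-adic norm, and its image under I is proportional to p. The Python sketch below implements only this simplest form of the map for rationals; the quantum-arithmetic refinement of the map discussed in the text is not implemented.

    from fractions import Fraction

    def canonical_identification(x, p, digits=20):
        """Map a rational x, read p-adically as x = sum_n x_n p^n, to the real
        number I(x) = sum_n x_n p^(-n), truncated to a finite number of pinary
        digits. Simplest form of the map only."""
        x = Fraction(x)
        k = 0
        while x.numerator % p == 0:      # pull out the lowest power of p,
            x /= p                       # writing x = p^k * u with u a p-adic unit
            k += 1
        while x.denominator % p == 0:
            x *= p
            k -= 1
        image, u = Fraction(0), x
        for n in range(digits):
            # pinary digit u_n = u mod p via the modular inverse of the denominator
            digit = (u.numerator * pow(u.denominator, -1, p)) % p
            image += Fraction(digit, p ** n)        # x_n p^n  ->  x_n p^(-n)
            u = (u - digit) / p
        return float(image * Fraction(1, p) ** k)   # p^k -> p^(-k)

    p = 7
    print(canonical_identification(Fraction(1, p), p))      # 1/p maps to p = 7.0
    print(canonical_identification(p, p))                   # p maps to 1/p ~ 0.1429
    print(canonical_identification(Fraction(1, 1 - p), p))  # 1 + p + p^2 + ... maps to ~ p/(p-1)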

It would seem that one must allow different p-adic primes in a generalized Feynman diagram since physical particles are in general characterized by different p-adic primes. This would require an analog of the tensor product for different quantum rationals, analogous to adeles. These numbers would be mapped to real (or complex) numbers by canonical identification.

How to understand renormalization flow in twistor context?

In the twistor context the notion of mass renormalization is not straightforward since everything is massless. In the TGD framework the p-adic mass scale hypothesis suggests a solution to the problem.

  1. At the fundamental level all elementary particles are massless and only their composites forming physical particles are massive.

  2. M2 mass squared is given by p-adic mass calculations and should correspond to the mass squared of the physical particle. There are also contributions from magnetic flux tubes, and in the case of baryons this contribution dominates.

  3. p-Adic physics discretizes the coupling constant flow. Once the p-adic length scale of the particle is fixed, its M2 momentum squared is fixed and masslessness takes care of the rest.

Consider now how the renormalization flow would emerge in this picture. At the level of generalized Feynman diagrams the change of the IR (UV) resolution scale means that the maximal (minimal) size of the CDs involved increases (decreases).

Concerning the question what CD scales should be allowed, the situation is not completely clear.

  1. The most general assumption allows integer multiples of the CP2 scale and would guarantee that the products of Hermitian matrices and powers of the S-matrix commuting with them define a Kac-Moody type algebra assignable to M-matrices. If the CDs used in the renormalization group evolution equation correspond to integer multiples of the CP2 length scale, the equation becomes a difference equation for an integer-valued variable.

  2. p-Adicity would suggest that the scales of CDs come as prime multiples of CP2 scale.

  3. The p-adic length scale hypothesis would allow only p-adic length scales near powers of two. There are excellent reasons to expect that these scales are selected by a kind of evolutionary process favoring those CD scales for which particles are maximally stable.

Renormalization group equations are based on studying what an infinitesimal reduction of the UV resolution scale would mean. Now the change cannot be infinitesimal but must correspond to a change in the scale of the CD by one unit defined by the CP2 size scale.
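
The remark in point 1 of the preceding list, that the evolution equation becomes a difference equation for an integer-valued variable, can be written schematically as g(n+1) - g(n) = beta_n(g(n)), with n labelling the CD scale. The sketch below shows only this bookkeeping; the beta function used is a pure placeholder with no TGD content.

    # Discrete "coupling constant evolution": the scale variable is the integer n
    # labelling the CD size n*T(CP2), so instead of a continuous flow dg/dt = beta(g)
    # one iterates the difference equation g(n+1) - g(n) = beta_n(g(n)).
    def evolve(g0, beta, n_max):
        g = {1: g0}
        for n in range(1, n_max):
            g[n + 1] = g[n] + beta(n, g[n])
        return g

    # Placeholder beta function, chosen only to show the bookkeeping.
    flow = evolve(g0=0.1, beta=lambda n, g: -0.01 * g ** 2 / n, n_max=16)
    print({n: round(value, 5) for n, value in flow.items()})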

  1. The decrease of the UV cutoff means that the vertex amplitudes associated with the smallest truncated 3-polygons in the diagram are replaced with the sum of all amplitudes in which smaller polygons, down to the cutoff size and having 3 external legs, appear. The change of the total amplitude in these replacements defines the renormalization group equation. Conservation of four-momentum and on-mass-shell conditions suggest that only a finite number of terms is allowed.

  2. The increase of the IR cutoff means that the size of the largest CD increases. The physical interpretation would be in terms of the time scale in which one observes the process. If this time scale is too long, the process is not visible. For instance, the study of strong interactions between quarks requires a short enough CD scale. At long scales one only observes hadrons, and at even longer scales atomic nuclei and atoms.

  3. One tends to think of the diagrams as embedded in M2, allowing an identification as a 2-plane in Minkowski space-time. This in turn would suggest that the step increasing the UV resolution corresponds to the replacement of triangles with graphs consisting of smaller triangles contained by them and having no intersections. This interpretation is attractive but might not be needed. The essential conditions are momentum conservation and the on-mass-shell conditions.

  4. One could also allow the UV scale to depend on the particle. This scale should correspond to the p-adic mass scale assignable to the stable particle. In hadron physics this kind of renormalization is a standard operation.

A reader interested in the background can consult the article Algebraic braids, sub-manifold braid theory, and generalized Feynman diagrams and the new chapter Generalized Feynman Diagrams as Generalized Braids of "Towards M-Matrix".

Friday, January 13, 2012

Standing waves in TGD

The four-wave mechanism is one of the basic mechanisms encountered in laser physics. It is not obvious what the description of the four-wave mechanism is at the basic level in the TGD framework. In the Maxwellian approach one introduces a non-linear F^4 term in the Lagrangian: here F is the electromagnetic field strength. This approach must be replaced by something else at the TGD level if one wants a microscopic description.

One ends up with this description by first asking how to understand amplitude modulation. Even this is not enough. One must ask what the first-principles description of the linear superposition of fields is in the TGD framework. I have described this in an earlier posting. The basic idea is that superposition does not occur for fields but only for their quantal effects. Some of the implications are discussed here, here, and here.

Also the phenomena of amplitude modulation and four-wave interaction would be effects appearing as quantal reactions of charged particles to the presence of space-time sheets carrying fields. They need not be present for the induced gauge fields. One might perhaps even say that these effects appear only at the level of conscious perception involving quantum jumps but not at the level of classical fields. This would be direct evidence for the quantal character of conscious experience.

The summation of the effects of em fields can induce amplitude modulation. If charged particles have topological sum contacts to two space-time sheets carrying classical fields with different frequencies, the rate for quantum jumps is proportional to the modulus squared of the sum of the forces caused by these fields, and one obtains amplitude modulation visible as the difference and sum of the frequencies involved. In the case of massless extremals the sum and difference frequencies appear only if MEs corresponding to opposite directions of 3-momentum are present. This leads to an effect that would be regarded as being caused by a standing electromagnetic wave. MEs correspond to waves propagating in a single direction for a given sign of frequency, and in the TGD framework it is highly implausible that standing waves could be realized as classical gauge fields. Probably I have not been the only child who has experienced a standing wave in a rope as a mysterious and somehow otherworldly phenomenon.

A similar description applies to the four-wave mechanism. Four space-time sheets can give rise to sums of four frequencies appearing with both signs in the sum, and a temporally constant effect is obtained when the sum of the frequencies vanishes.
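
The claim that the sum and difference frequencies appear in the quantal response, although no classical field oscillates at them, is easy to check numerically by taking the quantum-jump rate to be proportional to |F1 + F2|^2 and looking at its spectrum. The frequencies and sampling below are arbitrary illustration values; with four fields the analogous quartic term contains a temporally constant piece exactly when the signed frequencies sum to zero.

    import numpy as np

    # Quantal response proportional to |F1 + F2|^2: its spectrum contains the
    # difference f2 - f1 and the sum f1 + f2 (plus 0, 2*f1, 2*f2), although
    # neither classical field oscillates at those frequencies.
    t = np.linspace(0.0, 10.0, 20000)
    f1, f2 = 7.0, 9.0
    F = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
    rate = F ** 2

    spectrum = np.abs(np.fft.rfft(rate))
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    peaks = sorted(freqs[np.argsort(spectrum)[-5:]])
    print([round(f, 1) for f in peaks])   # ~ [0.0, 2.0, 14.0, 16.0, 18.0]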

Wednesday, January 11, 2012

Quantum view about metabolism

Since the writing of the first version of the chapter discussing the quantum view about metabolism about a decade ago, several new ideas have emerged. Since 2007 evidence has been accumulating that photosynthesis involves quantum coherence in electronic degrees of freedom at the cell scale: this discovery is in blatant conflict with the predictions of standard quantum theory but conforms nicely with one of the oldest TGD based proposals, that living matter is a high Tc electronic superconductor, and with the identification of dark matter as phases with a large value of Planck constant.

Both the new ideas and the new experimental results motivated a rewriting of the chapter. Below is a brief summary of the new ideas that have emerged during the last years, of the questions raised by them, and of how they affect the quantum model of metabolism at the basic level. An especially interesting new result is a concrete and testable proposal for how living matter acts as a high Tc superconductor, which is bound to have practical implications. This proposal also allows one to understand basic aspects of biochemistry (why the number of valence bonds per molecule is maximized). Also a unification of three different views about how living matter manages to be a macroscopic quantum system emerges. This unification is only one example of the amazing convergence of the basic ideas of TGD that has taken place during the last years.

1. Three different views about living matter as a macroscopic quantum system

There are three different views about how a living system manages to be a macroscopic quantum system.

  1. The first vision is based on various kinds of superconductivity. Electronic superconductivity is assigned to the cell membrane and plays a key role in the model of the cell membrane as a Josephson junction. Furthermore, the effects of ELF em fields on the vertebrate brain (see this) suggest that biologically important ions form macroscopic quantum states, and cyclotron Bose-Einstein condensates of bosonic ions have been suggested. The TGD based view about atomic nuclei (see this) predicts exotic nuclei chemically equivalent to ordinary ones but being bosons rather than fermions. Also these exotic ions could form cyclotron Bose-Einstein condensates. A large value of Planck constant would guarantee that cyclotron energies, proportional to hbar, are above the thermal energy.

  2. A more precise view about hierarchy of Planck constants as an implication of the enormous vacuum degeneracy of Kähler action has emerged (see this). According to this view non-standard values of Planck constant are only effective.

    When the idea about the hierarchy of Planck constants emerged, I proposed that the favored values of Planck constant could come as powers of 2^11. This was just a first guess inspired partially by the observation that the mass ratio of proton and electron is 940/0.5 = 1880 ∼ 2^11. I managed to find indications supporting this hierarchy, and also this chapter contains traces of this idea. I later became skeptical, but one could actually imagine a mechanism implying this kind of hierarchy. Dark protons with, say, r = hbar/hbar0 = 1836 = 4× 3^3× 17 would correspond to approximately the same Compton length as ordinary electrons. It is natural to assign this value of hbar also to electrons, and this gives a Compton length of 44.6 Angstroms, not far from the p-adic length scale L(149) ≈ 50 Angstroms assigned to the lipid layer of the cell membrane. The condition that the dark proton corresponds to this Compton length gives r = 1836^2: the electron Compton length now becomes 8.1 μm, which corresponds to the cell size scale. One could continue the resulting hierarchy of Planck constants indefinitely (these scalings are checked numerically in the sketch after this list).

  3. The notion of negentropic entanglement, making sense for rational and even algebraic entanglement probabilities, has emerged as a possible characterizer of living matter (see this). Quantum arithmetics allows one to generalize the notion of rational number so that the p-adic-real correspondence mediated by canonical identification is fixed uniquely and is both continuous and respects symmetries. One implication is an explanation for the Shnoll effect, which could be important also in living matter.
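
The Compton length scalings quoted in point 2 above are easy to check numerically under the stated rule that the Compton length h/(mc) scales as lambda -> r·lambda with r = hbar/hbar0; the constants are standard SI values and the values of r are the ones quoted in the text.

    # SI constants: Planck constant, speed of light, electron and proton masses.
    h, c = 6.62607015e-34, 2.99792458e8
    m_e, m_p = 9.1093837015e-31, 1.67262192369e-27

    def compton(m, r=1):
        """Compton length h/(m c) scaled by r = hbar_eff/hbar_0."""
        return r * h / (m * c)

    print(compton(m_e) * 1e10)             # ordinary electron: ~0.024 Angstrom
    print(compton(m_p, 1836) * 1e10)       # dark proton, r = 1836: ~0.024 Angstrom
    print(compton(m_e, 1836) * 1e10)       # dark electron, r = 1836: ~44.6 Angstrom
    print(compton(m_p, 1836 ** 2) * 1e10)  # dark proton, r = 1836^2: ~44.6 Angstrom
    print(compton(m_e, 1836 ** 2) * 1e6)   # dark electron, r = 1836^2: ~8.2 micrometers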

This raises several questions.

  1. How do high Tc superconductivity based on dark electron pairs and negentropic entanglement relate to each other?

  2. Could it be that the electron pairs in valence bonds are the carriers of negentropic entanglement and that they generate the magnetic flux tubes, as parts of their magnetic bodies, serving in this manner as space-time correlates for macroscopic quantum coherence? This makes sense only if the valence electron pairs in living matter have spin 1. The Cooper pairs of high Tc superconductors are indeed known to have spin 1 (see this). If this view is correct, biological evolution would favor the maximization of covalent electron pairs, and this indeed seems to be the case (carbohydrates, fundamental biomolecules, phosphates having as many as 8 valence bonds!).

  3. Why would a large hbar make negentropic entanglement possible or even force it? Is there some purely number theoretic reason for this? For instance, could the p-adic prime p characterizing quantum arithmetics divide the integer n characterizing the Planck constant, or could prime-valued integers n be favored?

2. Genetic code and dark nucleon states

A new realization of the genetic code in terms of dark proton sequences identified as dark nucleons was discovered (see this and this).

  1. The states of the dark proton are in a natural one-to-one correspondence with DNA, RNA, tRNA, and amino-acids, and the vertebrate genetic code is realized in a natural manner. Dark nucleons realize DNA codons as entangled quark triplets. The effective chemical formula H1.5O for water in the atto-second time scale supports this view (see this). How does the notion of dark nucleon relate to the negentropic entanglement of electrons? Could dark electron pairs and dark nucleons correspond to the same value of Planck constant? Could both dark protons and dark electrons play a key role in metabolism?

  2. The simplest guess is that DNA strands are accompanied by dark nuclei with one dark proton per DNA nucleotide. The resulting positive charge would stabilize the system by partially neutralizing the negative charge density due to the phosphorylation (2 negative charges per nucleotide). Dark proton sequences could also be associated with other important biopolymers. If the spins of the dark protons are parallel, the dipole magnetic fields give rise to flux tubes connecting the protons, and one can assign to the large hbar protons a macroscopically quantum coherent phase.

  3. The natural guess would be that the dark nucleus realization of the genetic code induces the biological realization, as evolution assigns to dark nucleon sequences DNA, RNA, and amino-acid sequences with a 1-1 correlation between the dark nucleon state and the basic unit of the sequence. The dark realization of the genetic code suggests a totally new view about biological evolution as a process which is analogous to R&D in high-tech industry rather than being completely random (see this). Candidates for new genes could be tested at the dark matter level and, in case they work, they would be transcribed to their chemical equivalents.

3. New ideas related to metabolism

New ideas related to metabolism have also emerged at the same time as evidence for quantal aspects of photosynthesis has been emerging (see this, this, this, and this).

  1. Negentropic entanglement also leads to the idea of energy metabolism and negentropy transfer as different sides of the same coin. The model for DNA as a topological quantum computer in turn suggests that ADP → ATP and its reverse can be interpreted as a standardized reconnection process re-organizing the connections between distant molecules connected by magnetic flux tubes, with the ATP molecule acting as a relay. Metabolic energy would - or at least could - go to the re-organization of the flux tube connections and therefore of the negentropic quantum entanglement. The question is how to fuse this vision with the hypothesis about metabolic currencies as differences of zero point kinetic energies for space-time sheets.

  2. The radiation from the Sun defines the fundamental metabolic currency. Solar radiation cannot be said to be negentropic since negentropic entanglement is a 2-particle property. Solar photons could possess a large value of hbar or - more plausibly - undergo at the magnetic body of the living system a phase transition increasing the value of hbar. Could the absorption by electrons of large hbar photons arriving from the Sun or from the magnetic body generate spin 1 valence electron pairs or provide the metabolic energy needed to re-arrange the flux tube connections between distant molecules by the ADP+Pi → ATP process?

4. DNA as a topological quantum computer vision

The vision about DNA as a topological quantum computer (see this) has turned out to be very general, allowing one to imagine several concrete realizations. The essential element is the coding of DNA nucleotides, and one can imagine several options.

  1. The original proposal for the realization of DNA as a topological quantum computer is based on the representation of the DNA nucleotides A, T, C, G as the quarks u, d and their antiquarks and requires a scaled-up version of QCD (see this). This idea looks rather outlandish but could be justified by the strange findings of the mathematician Barbara Shipman about the honeybee dance (see this) and also by the p-adic length scale hierarchy and the hierarchy of Planck constants suggesting scaled variants of QCD-like physics also in the length scale range relevant to the living cell.

  2. The question whether one could use the spin 1 triplet and spin 0 singlet of the dark electron pair instead of quarks and their antiquarks to represent codons is rather obvious. The problem is that the S=0 state of the electron pair gives rise to a vanishing dipole field, so that a flux tube structure would not be possible. The generation of a flux tube structure along which supra currents can flow is however an essential element of the proposed mechanism of superconductivity.

  3. The DNA as topological quantum computer hypothesis led to the hypothesis that it is the O= :s (doubly bonded oxygen atoms) to which one must assign the flux tube pair responsible for the representation of the genetic code. Why would O= be in a special role? And why should one have a pair of flux tubes? Could this relate to the coding of nucleotides by electron pairs? If there are two parallel flux tubes, one obtains the tensor product 3× 3 = 5+3+1 of electron triplets at the ends of the flux tubes (see the sketch after this list). Could it be that A, T, C, and G are represented in terms of the 3 and 1 and that the breaking of rotational invariance implies a mixing of the singlet and the Sz=0 state of the triplet, so that nucleotides and their conjugates could correspond to the resulting two pairs related by reflection?

  4. ATP → ADP+Pi would correspond to the reconnection of the flux tubes of the flux tube pair with the hydrogen bonds associated with two water molecules. The flux tubes would split and end at water molecules containing a valence electron pair, so the negentropic entanglement might not be totally lost. The reverse process would create a flux tube connection labelled by the spin state equivalent of A, T, C, or G.
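
The decomposition 3×3 = 5+3+1 used in point 3 above can be verified by brute force: list the Sz values of the nine product states of two spin-1 triplets and peel off multiplets of total spin 2, 1 and 0. This is standard angular momentum bookkeeping, nothing TGD-specific.

    from collections import Counter
    from itertools import product

    # Two spin-1 triplets (the electron pairs at the two flux tube ends):
    # the 9 product states, labelled by (m1, m2) with m = -1, 0, +1.
    sz = Counter(m1 + m2 for m1, m2 in product((-1, 0, 1), repeat=2))
    print(dict(sz))                # {-2: 1, -1: 2, 0: 3, 1: 2, 2: 1}

    # Peel off multiplets: a remaining state with maximal Sz = S starts a
    # (2S+1)-dimensional multiplet of total spin S.
    multiplets = []
    while sum(sz.values()) > 0:
        S = max(m for m, count in sz.items() if count > 0)
        multiplets.append(2 * S + 1)
        for m in range(-S, S + 1):
            sz[m] -= 1
    print(multiplets)              # [5, 3, 1], i.e. 3 x 3 = 5 + 3 + 1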

5. Pessimistic generalization of the second law of thermodynamics

The possibility of negentropic entanglement raises the question about the fate of the second law of thermodynamics. The proposal for a generalization of the second law of thermodynamics (see this), based on the most pessimistic vision, is that entropy indeed increases also when negentropic entanglement is generated in state function reduction. If the generation of negentropic entanglement is accompanied by a compensating entropic entanglement, how is it generated? Or is the maximally pessimistic generalization really necessary? Is it implied automatically in time scales longer than the characteristic time scale associated with the causal diamonds serving as the basic correlates for conscious selves? One must apply an ensemble description in these time scales: does the non-determinism of the quantum jump imply the second law at the level of the ensemble automatically? If this argument is correct, the second law would cease to hold in time scales shorter than that characterizing the relevant CD. One might be able to answer these questions by trying to understand the situation in the case of metabolism.

During the writing I realized that the old chapter must be split into two pieces. The chapter of "Biosystems as Conscious Holograms" contains the updated material.