https://matpitka.blogspot.com/2012/03/

Sunday, March 25, 2012

Riemann zeta and quantum theory as square root of thermodynamics

Ulla mentioned in the comment section of the earlier posting an interview of Matthew Watkins. The pages of Matthew Watkins about all imaginable topics related to Riemann zeta are excellent and I can only warmly recommend them. I was actually in contact with him years ago, and there might also be a TGD inspired proposal for a strategy for proving the Riemann hypothesis at his pages.

The interview was very inspiring reading. MW has a very profound vision about what mathematics is and he is able to express it in an understandable manner. MW also tells about the recent work of Connes applying p-adics and adeles(!) to the problem. I would guess that these are old ideas; I have myself speculated about the connection with p-adics long ago.

MW tells in the interview about the thermodynamical interpretation of the zeta function. Zeta reduces to a product ζ(s) = ∏_p Z_p(s) of partition functions Z_p(s) = 1/(1-p^(-s)) over particles labelled by primes p. This relates very closely also to infinite primes, and one can talk about a Riemann gas with particle momenta/energies given by log(p). s is in general a complex number, and for the zeros of zeta one has s = 1/2+iy, where the imaginary part y is a non-rational number. At s=1 zeta diverges, and for Re(s) ≤ 1 the definition of zeta as a product fails. A physicist would interpret this as a phase transition taking place at the critical line Re(s)=1, so that one cannot talk about a Riemann gas anymore. Should one talk about a Riemann liquid? Or - anticipating what follows - about a quantum liquid? What could the vanishing of zeta mean physically? Certainly the thermodynamical interpretation as a sum of terms interpretable as thermodynamical probabilities (apart from normalization) fails.
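A minimal numerical sketch of this Riemann gas bookkeeping, assuming Python with the mpmath library (not part of the original posting): each prime contributes a bosonic partition function Z_p(s) = 1/(1-p^(-s)) with single-particle energy log(p), and for Re(s) > 1 the truncated Euler product approaches ζ(s).

```python
# Riemann gas sketch: zeta(s) as a product of single-particle partition functions.
from mpmath import mp, mpc, zeta, power, log

mp.dps = 25  # working precision

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def euler_product(s, pmax=10 ** 5):
    """Truncated product of the single-particle partition functions Z_p(s)."""
    prod = mp.mpf(1)
    for p in primes_up_to(pmax):
        prod *= 1 / (1 - power(p, -s))
    return prod

s = mpc(2, 1)  # a point with Re(s) > 1, where the product converges
print("truncated Euler product:", euler_product(s))
print("mpmath zeta            :", zeta(s))
print("first Riemann gas energies log(p):", [float(log(p)) for p in primes_up_to(12)])
```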

The basic problem with this interpretation is that it is only formal since the temperature parameter is complex. How could one overcome this problem?

A possible answer emerged as I read the interview.

  1. One could interpret the zeta function in the framework of TGD - or rather in zero energy ontology (ZEO) - in terms of a square root of thermodynamics! This would make possible a complex analog of temperature. Thermodynamical probabilities would be replaced with probability amplitudes.

  2. Thermodynamical probabilities would be replaced with complex probability amplitudes, and Riemann zeta would be the analog of the vacuum functional of TGD, which is a product of the exponent of the Kähler function - the Kähler action for the Euclidian regions of the space-time surface - and the exponent of the imaginary Kähler action coming from the Minkowskian regions of the space-time surface and defining a Morse function.

    The QFT picture, taking into account only the Minkowskian regions of space-time, would have only the exponent of this Morse function: the problem is that the path integral does not exist mathematically. The thermodynamics picture, taking into account only the Euclidian regions of space-time, would have only the exponent of the Kähler function and would lose the interference effects fundamental for QFT type systems.

    In quantum TGD both the Kähler function and the Morse function are present. Under rather general assumptions the imaginary and real parts of the exponent of the vacuum functional are proportional to each other and to the sum over the values of the Chern-Simons action for the 3-D wormhole throats and for the space-like 3-surfaces at the ends of CD. This is non-trivial.

  3. Zeros of zeta would in this case correspond to a situation in which the integral of the vacuum functional over the "world of classical worlds" (WCW) vanishes. The pole of ζ at s=1 would correspond to the divergence of the integral for the modulus squared of the Kähler function.

What could the vanishing of zeta mean if one accepts the interpretation of quantum theory as a square root of thermodynamics?

  1. What could the infinite value of zeta at s=1 mean? The interpretation in terms of a square root of thermodynamics suggests the following. In zero energy ontology the decomposition of the zeta function into ∏_p Z_p(s) corresponds to a product of single particle partition functions for which one can assign probabilities p^(-s)/Z_p(s) to single particle states. This does not make sense physically for complex values of s.

  2. In ZEO one can however assume that the complex numbers p^(-ns) define the entanglement coefficients for positive and negative energy states with energies n log(p) and -n log(p): n bosons with energy log(p), just as for black body radiation. The sum of the amplitudes over all combinations of these states, with bosons labelled by primes p, gives Riemann zeta, which vanishes at the critical line if RH holds.

  3. One can also look at the values of the thermodynamical probabilities given by |p^(-ns)|^2 = p^(-n) at the critical line. The sum over these gives for a given p the factor p/(p-1), and the product of all these factors gives ζ(1) = ∞. The thermodynamical partition function diverges. The physical interpretation is in terms of Bose-Einstein condensation. (A small numerical check of this bookkeeping follows the list below.)

  4. The vanishing of the trace of the matrix defined by the amplitudes and coding for the zeros of zeta is physically analogous to the statement ∫ Ψ dV = 0, which indeed holds for many systems such as the hydrogen atom. But what does this mean? Does it say that the zero energy state is orthogonal to the vacuum state defined by the unit matrix between positive and negative energy states? In any case, zeros and the pole of zeta would be aspects of one and the same thing in this interpretation. This is something genuinely new and an encouraging sign. Note that in the TGD based proposal for a strategy for proving the Riemann hypothesis, a similar condition states that a coherent state is orthogonal to a "false" tachyonic vacuum.

  5. RH would state in this framework that all zeros of ζ correspond to zero energy states for which the thermodynamical partition function diverges. Another manner to say this is that the system is critical. (Maximal) Quantum Criticality is indeed the key postulate about the TGD Universe and fixes the Kähler coupling strength characterizing the theory uniquely (plus possible other free parameters). Quantum Criticality guarantees that the Universe is maximally complex. Physics as generalized number theory would suggest that also number theory is quantum critical! When the sum over numbers proportional to probabilities diverges, the probabilities are considerably different from zero for an infinite number of states. At criticality the presence of fluctuations in all scales, implying fractality, indeed implies this. A more precise interpretation is in terms of Bose-Einstein condensation.

  6. The postulate that all zero energy states of the Riemann system are zeros of zeta and critical in the sense of being non-normalizable, combined with the fact that s=1 is the only pole of zeta, implies that all zeros correspond to Re(s)=1/2, so that RH follows from purely physical assumptions. The behavior at s=1 would be an essential element of the argument. Note that in ZEO the coherent state property is in accordance with energy conservation. In the case of coherent states of Cooper pairs the same applies to fermion number conservation.

    With this interpretation the condition would state orthogonality with respect to the coherent zero energy state characterized by s=0, which has a finite norm and does not represent Bose-Einstein condensation. This would give a connection with the proposal for a strategy for proving the Riemann Hypothesis by replacing eigenstates of energy with coherent states, and the two approaches could be unified. Note that in this approach conformal invariance for the spectrum of zeros of zeta is the axiom yielding RH; it could be seen as the counterpart of the fundamental role of conformal invariance in modern physics and is indeed very natural in the vision about physics as generalized number theory.
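As a small numerical check of the bookkeeping in item 3 above - a plain-Python sketch only - one can verify that on the critical line |p^(-ns)|^2 = p^(-n), that the geometric sum over n gives p/(p-1), and that the partial products of these factors over primes keep growing, mirroring the divergence of ζ(1) and the Bose-Einstein condensation analogy.

```python
def probability_sum(p, y, nmax=200):
    s = complex(0.5, y)                      # a point on the critical line
    return sum(abs(p ** (-n * s)) ** 2 for n in range(nmax))

p, y = 3, 14.134725                          # y: imaginary part of the first zero, used only as an example
print(probability_sum(p, y), "vs p/(p-1) =", p / (p - 1))

def primes_up_to(n):
    return [k for k in range(2, n + 1) if all(k % d for d in range(2, int(k ** 0.5) + 1))]

prod = 1.0
for q in primes_up_to(1000):
    prod *= q / (q - 1)
print("partial product over primes up to 1000:", prod)   # grows without bound as the cutoff increases
```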

Saturday, March 24, 2012

Problems with Higgs interpretation of 125 GeV signal

New Scientist had a very interesting piece with the title Is the LHC throwing away too much data?. The not so often mentioned fact is that the LHC produces enormous amounts of data, and only the data which could serve as signals of the expected kind of new physics is stored. One tries to find only the signatures of theories accepted by the hegemony as worth considering. If the correct theory is something other than expected, then the only signature of it is the failure to discover the signatures predicted by theories whose proponents have better social skills! No-one takes seriously this kind of indirect argument in favor of a theory like TGD, but this is the only argument that a theorist thrown out of the community can present. Somewhat frustrating!

Theoretically we know that some kind of new physics must emerge at the TeV scale. It is however becoming clear that the expected signatures of SUSY are not there: the missing energy is missing. Lubos however still fabulates about super-secret information according to which the discovery of the stop and sbottom will be reported within a few months. Also a long list of exotic and less exotic "predictions" of M-theory and F-theory has been killed (see the posting of Peter Woit about the sad situation in F-theory). Also Jester tells about new SUSY limits from ATLAS.

Could it be that particle physicists in their state of "knowing" have made a horrible mistake? The arrogance of particle physicists is a legend - to see what I mean, just look at some of the blog postings of Lubos Motl debunking the top experimentalist Anton Zeilinger - and arrogance is something which does not go without ultimate penalty.

There is however something genuine there: the 125 GeV signal interpreted tentatively as Higgs, but it seems that it might be something different from Higgs. The posting of Tommaso is titled The Say of the Week: Higgs Properties. The say of the week is from the article Reconstructing Higgs Boson Properties from the LHC and Tevatron Data by P.P. Giardino, K. Kannike, M. Raidal, and A. Strumia. The say deserves to be glued also here.

After fixing the Higgs boson mass to the best fit value m_h = 125 GeV, the SM does not have any free parameter left to vary. Therefore all the anomalies in the present data must be statistical fluctuations and disappear with more statistics. This interpretation is supported by the fact that the average of all data agrees with the SM prediction and the global χ^2 is good: 16 for 15 dof (we recall that with n >> 1 degrees of freedom one expects χ^2 = n ± n^(1/2)).

On the other hand, our best fit has a significantly lower χ^2 = 5.5 for 13 dof: a bigger reduction than what is typically obtained by adding two extra parameters (one expects Δχ^2 = -Δn ± (Δn)^(1/2) when adding Δn >> 1 parameters). The SM is disfavored at more than 95 per cent CL in this particular context, but of course we added the two parameters that allow to fit the two most apparent anomalies in the data, the γγ excess and the WW* deficit.

Only more data will tell if this is a trend, or if we are just fitting a statistical fluctuation.

What is found is that the best fit to the signal at 125 GeV is not in good agreement with the standard model. The Higgs to γγ decay rate is too strong - 4 times too strong in the fit of the paper - and the too weak signals in the WW* channel and in the quark-antiquark channels are the other problems.

It has been found that a fit assuming a top-phobic or even fermion-phobic Higgs is better than the standard model fit, but this would mean giving up the very idea of the Higgs expectation value as a mechanism of massivation! It is also reported that the MSSM, allowing the mass of the stop and the parameters of the MSSM as additional parameters, fails to help the situation. The conclusion saving the standard model Higgs/MSSM Higgs would be that the anomalies are due to statistical fluctuations and that more data is expected to remove them.
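As a quick sanity check of the χ^2 numbers quoted above from Giardino et al. - a sketch assuming scipy is available; it reproduces only the generic statistics, not the fit itself - one can compute the corresponding p-values:

```python
from scipy.stats import chi2

print("SM fit p-value        :", chi2.sf(16.0, 15))   # chi^2 = 16 for 15 dof: an unremarkable fit
print("best-fit p-value      :", chi2.sf(5.5, 13))    # chi^2 = 5.5 for 13 dof
delta_chi2 = 16.0 - 5.5
# Naive Delta-chi^2 test for the two added parameters:
print("SM disfavoured at CL ~", 1 - chi2.sf(delta_chi2, 2))   # above 95%, consistent with the quote
```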

In the TGD framework the simplest scenario does not predict Higgs at all, and the 125 GeV signal would correspond to a pion-like state of what I have dubbed M89 hadron physics. The basic prediction is that it decays to a γγ pair by the coupling to the instanton density. It would decay faster to u and d type quarks than to other quarks since it is a composite of their p-adically scaled up variants. To heavier quark pairs it decays only via gluon-gluon intermediate states so that the decay rates would be slower. This could be called a not fully-fledged q-phobia with q = t, b, c, s combined with a bad lepto-phobia. The claimed decays of the 125 GeV state to a b-bbar pair could be understood as taking place via gluon pairs. One must of course take also this claim with extreme caution.

During these two years of continual false alarms from experimentalists I have become rather cautious in making optimistic statements. If this were not the case I would proudly declare: Yes, this is what I have been saying all the time! The signal is there but it's not Higgs!

Friday, March 23, 2012

p-Adic homology and finite measurement resolution

Discretization in dimension D in terms of a pinary cutoff means division of the manifold into cube-like objects. What suggests itself is a homology theory defined by the measurement resolution and by the fluxes assigned to the induced Kähler form.

  1. One can introduce the decomposition of an n-D sub-manifold of the imbedding space into n-cubes by (n-1)-planes for which one of the coordinates equals its pinary cutoff. The construction works in both the real and the p-adic context. The hyperplanes in turn can be decomposed into (n-1)-cubes by (n-2)-planes assuming that an additional coordinate equals its pinary cutoff. One can continue this decomposition until one obtains only points, namely those points for which all coordinates are their own pinary cutoffs. In the case of partonic 2-surfaces these points define in a natural manner the ends of braid strands. Braid strands themselves could correspond to the curves for which two coordinates of a light-like 3-surface are their own pinary cutoffs. (A small sketch of this pinary-cutoff decomposition follows the list below.)

  2. The analogy with a homology theory defined by the decomposition of the space-time surface into cells of various dimensions is suggestive. In the p-adic context the identification of the boundaries of the regions corresponding to given pinary digits is not possible in a purely topological sense since p-adic numbers do not allow well-ordering. One could however identify the boundaries as sub-manifolds for which some number of coordinates are equal to their pinary cutoffs, or as inverse images of real boundaries. This might allow one to formulate homology theory in the p-adic context.

  3. The construction is especially interesting for the partonic 2-surfaces. There is a hierarchy in the sense that a square-like region with given first values of pinary digits decomposes into p square-like regions labelled by the values 0,...,p-1 of the next pinary digit. The lines defining the boundaries of the 2-D square-like regions with fixed pinary digits in a given resolution correspond to the situation in which either coordinate equals its pinary cutoff. These lines naturally define the edges of a graph having as its nodes the points for which the pinary cutoff of both coordinates equals the actual point.

  4. I have proposed earlier what I have called symplectic QFT involving a triangulation of the partonic 2-surface. The fluxes of the induced Kähler form over the triangles of the triangulation and the areas of these triangles define symplectic invariants, which are zero modes in the sense that they do not contribute to the line element of WCW, although the WCW metric depends on these zero modes as parameters. The physical interpretation is as non-quantum-fluctuating classical variables. The triangulation generalizes in an obvious manner to a quadrangulation defined by the pinary digits. This quadrangulation is fixed once the internal coordinates and the measurement accuracy are fixed. If one can identify physically preferred coordinates - say by requiring that the coordinates transform in a simple manner under isometries - the quadrangulation is highly unique.

  5. For 3-surfaces one obtains a decomposition into cube-like regions bounded by regions consisting of square-like regions, and the Kähler magnetic fluxes over the squares define symplectic invariants. Also the Kähler Chern-Simons invariant for the 3-cube defines an interesting almost symplectic invariant. A 4-surface decomposes in a similar manner into 4-cube-like regions, and now the instanton density for the 4-cube, reducing to Chern-Simons terms at the boundaries of the 4-cube, defines a symplectic invariant. For 4-surfaces the symplectic invariants reduce to Chern-Simons terms over 3-cubes, so that in this sense one would have holography. The resulting structure brings in mind lattice gauge theory, and effective 2-dimensionality suggests that partonic 2-surfaces are enough.
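A minimal illustration of the pinary-cutoff decomposition of item 1 above for the D = 2 case of the partonic 2-surface, assuming coordinates normalized to the unit square; the choice of p, N and the sample points is of course arbitrary, and exact rational arithmetic is used only to avoid floating-point artifacts.

```python
from fractions import Fraction

def pinary_cutoff(x, p, N):
    """Keep the first N pinary digits of x in [0, 1)."""
    x = Fraction(x)
    digits = []
    for _ in range(N):
        x *= p
        d = int(x)            # floor, since x is non-negative
        digits.append(d)
        x -= d
    return sum(Fraction(d, p ** (k + 1)) for k, d in enumerate(digits))

def cell_of(point, p, N):
    """Corner label of the square-like cell containing a 2-D point."""
    return tuple(pinary_cutoff(c, p, N) for c in point)

p, N = 3, 2
print(cell_of((Fraction(2, 5), Fraction(7, 10)), p, N))       # cell corner with N = 2 pinary digits
print(pinary_cutoff(Fraction(4, 9), p, N) == Fraction(4, 9))  # a point equal to its own cutoff: True
```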

The simplest realization of this homology theory in the p-adic context could be induced by canonical identification from real homology. The homology of a p-adic object would be the homology of its canonical image.

  1. Ordering of the points is essential in homology theory. In the p-adic context the canonical identification x = ∑ x_n p^n → ∑ x_n p^(-n), a map to the reals, induces this ordering, and also the boundary operation for p-adic homology can be induced. The points of a p-adic space would be represented by n-tuples of sequences of pinary digits for the n coordinates. p-Adic numbers decompose into disconnected sets characterized by the norm p^(-n) of the points in a given set. Canonical identification allows one to glue these sets together by inducing the real topology. The points p^n and (p-1)(1+p+p^2+...)p^(n+1), having p-adic norms p^(-n) and p^(-n-1), are mapped to the same real point p^(-n) under canonical identification, and therefore they can be said to define the endpoints of a continuous interval in the induced topology although they have different p-adic norms. Canonical identification induces real homology in the p-adic realm. This suggests that one should include canonical identification in the boundary operation so that the boundary operation would be a map from p-adicity to reality. (A numerical sketch of canonical identification follows this list.)

  2. Interior points of p-adic simplices would be p-adic points not equal to their pinary cutoffs defined by dropping the pinary digits corresponding to p^n, n > N. At the boundaries of the simplices at least one coordinate would have vanishing pinary digits for p^n, n > N. The analogs of (n-1)-simplices would be the p-adic point sets for which one of the coordinates has vanishing pinary digits for p^n, n > N. (n-k)-simplices would correspond to point sets for which k coordinates satisfy this condition. The formal sums and differences of these sets are assumed to make sense, and there is a natural grading.

  3. Could one identify the endpoints of braid strands in some natural manner in this cohomology? Points with n ≤ N pinary digits are closed elements of the cohomology and homologically equivalent with each other if the canonical image of the p-adic geometric object is connected, so that there is no manner to identify the ends of braid strands as special points unless the zeroth homology is non-trivial. It has been proposed earlier that strand ends correspond to singular points of a covering of the sphere or of a more general Riemann surface. At a singular point the branches of the covering would coincide.

    The obvious guess is that the singular points are associated with the covering characterized by the value of the Planck constant. As a matter of fact, the original assumption was that all points of the partonic 2-surface are singular in this sense. It would however be enough to make this assumption for the ends of braid strands only. The orbits of the braid strands and the string world sheets having braid strands as their boundaries would be the singular loci of the covering.
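A small numerical sketch of the canonical identification of item 1 above, assuming a p-adic integer is represented simply by its list of pinary digits [x_0, x_1, x_2, ...]; it shows that p^1 and (p-1)(1+p+p^2+...)p^2 map to the same real point, as stated there.

```python
def canonical_identification(digits, p):
    """Map sum_n x_n p^n (given by its digits) to the real number sum_n x_n p^(-n)."""
    return sum(d * p ** (-n) for n, d in enumerate(digits))

p = 3
a = [0, 1]                       # the p-adic number p^1
b = [0, 0] + [p - 1] * 40        # (p-1)(1 + p + p^2 + ...) p^2, with the infinite tail truncated
print(canonical_identification(a, p))   # 1/3
print(canonical_identification(b, p))   # approaches 1/3 from below, i.e. the same real point
```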

For background see the chapter Quantum Adeles of "Physics as Generalized Number Theory".

Thursday, March 22, 2012

Science and religion

The relationship between science and religion has been a topic of discussion recently. New Scientist has articles about the attempts of scientists to explain spirituality and religion (see for instance this and this). Also Bee has written about this under the title What can science do for you?, and this posting is a typo-free version of my comment to that posting with some additions.

What makes it so difficult for a scientist to understand spirituality is the failure to realize that genuine spirituality is not a method to achieve something. For a scientist life is an endless struggle to achieve some goal by applying some method: problem solving, fighting against colleagues, intriguing to get a research position or funding, etc.

It is natural that the scientific explanations of spirituality follow the same simple format. For a scientist it is difficult to believe that a person who becomes aware of the existence of higher levels of conscious existence does not calculate that it is good to have this experience since in a statistical sense it maximizes her personal happiness. Neither is this experience a result of some method to achieve relief from a fear of death or of life, or to achieve maximal pleasure. It is something completely spontaneous and makes you realize how extremely limited your everyday consciousness is and how hopelessly it is narrowed down by your ego.

What makes it so difficult for a member of a church to understand spirituality is that organized religions indeed teach that by applying some method, which includes blind belief in dogmas, the registered member of the community can get in contact with God. Even the idea of a single God is an example of how the greed for power tends to corrupt spirituality: gods as conscious entities above us, as we are above our neurons, are replaced with God - the ultimate conqueror and absolute ruler. And after all, spiritual experience is only the realization that higher levels of conscious existence and intelligence are there. This realization comes when one is able for a moment to get rid of the ego and live just in this moment. But there is no method to achieve it!

This view is by no means new. I first discovered it in the writings of Krishnamurti about 26 years ago as I tried to understand my own great experience. The writings of Krishnamurti are a blow to the face of anyone who has adopted the naive "scientific" view of reality, but I felt that Krishnamurti was basically right. I felt that he must have experienced something similar to what I had experienced, and I of course hoped to get those two magic weeks back. Certainly I hoped to find from the writings of Krishnamurti a method allowing me to achieve this, and I refused to believe when Krishnamurti said again and again that there is no method!

After these years it is easy to agree with Krishnamurti's view of egos as the source of the problems of society. The ego is a castle that we build in the hope of achieving safety. This ego isolates us, and in isolation fears multiply and we become paranoid. Coming out of the castle of the ego into the fresh air and meeting reality as it is, is the only solution to our problems. Isms cannot help us since they only help to build new castles. The bad news for the scientist is that there is no method to achieve this. At some moments we are able to just calmly observe our suffering without any kind of violence towards our mental images, and the miracle of re-creation takes place.

Hilbert p-adics, hierarchy of Planck constants, and finite measurement resolution

The hierarchy of Planck constants assigns to the N-fold coverings of the imbedding space points N-dimensional Hilbert spaces. The natural identification of these Hilbert spaces would be as Hilbert spaces assignable to space-time points or to points of partonic 2-surfaces. There is however an objection against this identification.

  1. The dimension of the local covering of the imbedding space for the hierarchy of Planck constants is constant for a given region of the space-time surface. The dimensions of the Hilbert spaces assignable to the coordinate values of a given point of the imbedding space are defined by the points themselves. The values of the 8 coordinates define the algebraic Hilbert space dimensions for the factors of an 8-fold Cartesian product, which can be integers, rationals, algebraic numbers or even transcendentals, and therefore they vary as one moves along the space-time surface.

  2. This dimension can correspond to the locally constant dimension for the hierarchy of Planck constants only if one brings in finite measurement resolution as a pinary cutoff on the pinary expansion of the coordinate, so that one obtains an ordinary integer-dimensional Hilbert space. The space-time surface decomposes into regions for which the points have the same pinary digits up to p^N in the p-adic case and down to p^(-N) in the real context. The points for which the cutoff is equal to the point itself would naturally define the ends of braid strands at partonic 2-surfaces at the boundaries of CDs.

  3. At the level of quantum states the pinary cutoff means that quantum states have vanishing projections to the direct summands of the Hilbert spaces assigned with the pinary digits p^n, n > N. With this interpretation the hierarchy of Planck constants would physically realize pinary digit representations for numbers with a pinary cutoff and would relate to the physics of cognition.

One of the basic challenges of quantum TGD is to find an elegant realization for the notion of finite measurement resolution. The notion of resolution involves the observer in an essential manner, and this suggests that cognition is involved. If p-adic physics is indeed the physics of cognition, the natural guess is that p-adic physics should provide the primary realization of this notion.

The simplest realization of finite measurement resolution would be just what one would expect it to be, except that this realization is most natural in the p-adic context. One can however define this notion also in the real context by using canonical identification to map p-adic geometric objects to real ones.

Does discretization define an analog of homology theory?

Discretization in dimension D in terms of a pinary cutoff means division of the manifold into cube-like objects. What suggests itself is a homology theory defined by the measurement resolution and by the fluxes assigned to the induced Kähler form.

  1. One can introduce the decomposition of an n-D sub-manifold of the imbedding space into n-cubes by (n-1)-planes for which one of the coordinates equals its pinary cutoff. The construction works in both the real and the p-adic context. The hyperplanes in turn can be decomposed into (n-1)-cubes by (n-2)-planes assuming that an additional coordinate equals its pinary cutoff. One can continue this decomposition until one obtains only points, namely those points for which all coordinates are their own pinary cutoffs. In the case of partonic 2-surfaces these points define in a natural manner the ends of braid strands. Braid strands themselves could correspond to the curves for which two coordinates of a light-like 3-surface are their own pinary cutoffs.

  2. The analogy with a homology theory defined by the decomposition of the space-time surface into cells of various dimensions is suggestive. In the p-adic context the identification of the boundaries of the regions corresponding to given pinary digits is not possible in a purely topological sense since p-adic numbers do not allow well-ordering. One could however identify the boundaries as sub-manifolds for which some number of coordinates are equal to their pinary cutoffs, or as inverse images of real boundaries. This might allow one to formulate homology theory in the p-adic context.

  3. The construction is especially interesting for the partonic 2-surfaces. There is a hierarchy in the sense that a square-like region with given first values of pinary digits decomposes into p square-like regions labelled by the values 0,...,p-1 of the next pinary digit. The lines defining the boundaries of the 2-D square-like regions with fixed pinary digits in a given resolution correspond to the situation in which either coordinate equals its pinary cutoff. These lines naturally define the edges of a graph having as its nodes the points for which the pinary cutoff of both coordinates equals the actual point.

  4. I have proposed earlier what I have called symplectic QFT involving a triangulation of the partonic 2-surface. The fluxes of the induced Kähler form over the triangles of the triangulation and the areas of these triangles define symplectic invariants, which are zero modes in the sense that they do not contribute to the line element of WCW, although the WCW metric depends on these zero modes as parameters. The physical interpretation is as non-quantum-fluctuating classical variables. The triangulation generalizes in an obvious manner to a quadrangulation defined by the pinary digits. This quadrangulation is fixed once the internal coordinates and the measurement accuracy are fixed. If one can identify physically preferred coordinates - say by requiring that the coordinates transform in a simple manner under isometries - the quadrangulation is highly unique.

  5. For 3-surfaces one obtains a decomposition into cube-like regions bounded by regions consisting of square-like regions, and the Kähler magnetic fluxes over the squares define symplectic invariants. Also the Kähler Chern-Simons invariant for the 3-cube defines an interesting almost symplectic invariant. A 4-surface decomposes in a similar manner into 4-cube-like regions, and now the instanton density for the 4-cube, reducing to Chern-Simons terms at the boundaries of the 4-cube, defines a symplectic invariant. For 4-surfaces the symplectic invariants reduce to Chern-Simons terms over 3-cubes, so that in this sense one would have holography. The resulting structure brings in mind lattice gauge theory, and effective 2-dimensionality suggests that partonic 2-surfaces are enough.

Does the notion of manifold in finite measurement resolution make sense?

A modification of the notion of manifold taking into account finite measurement resolution might be useful for the purposes of TGD.

  1. The chart pages of the manifold would be characterized by a finite measurement resolution and effectively reduce to discrete point sets. Discretization using a finite pinary cutoff would be the basic notion. Notions like topology, differential structure, complex structure, and metric should be defined only modulo finite measurement resolution. The precise realization of this notion is not quite obvious.

  2. Should one assume a metric and introduce geodesic coordinates as preferred local coordinates in order to achieve general coordinate invariance? The pinary cutoff would be imposed on the geodesic coordinates. Or could one use a subset of geodesic coordinates for δCD× CP2 as preferred coordinates for partonic 2-surfaces? Should one require that isometries leave distances invariant only within the resolution used?

  3. A rather natural approach to the notion of manifold is suggested by the p-adic variants of symplectic spaces based on the discretization of angle variables by phases in an algebraic extension of p-adic numbers containing the nth root of unity and its powers. One can also assign a p-adic continuum to each root of unity (see this). This approach is natural for compact symmetric Kähler manifolds such as S2 and CP2. For instance, CP2 allows a coordinatization in terms of two pairs (P_k, Q_k) of Darboux coordinates or using two pairs (ξ_k, ξ_k^*), k=1,2, of complex coordinates. The magnitudes of the complex coordinates would be treated in the manner already described and their phases would be described as roots of unity. In the natural quadrangulation defined by the pinary cutoff for |ξ_k| and by the roots of unity assigned with their phases, Kähler fluxes would be well-defined within the measurement resolution. For the light-cone boundary, metrically equivalent with S2, a similar coordinatization using complex coordinates (z, z*) is possible. The light-like radial coordinate r would appear only as a parameter in the induced metric, and the pinary cutoff would apply to it. (A small sketch of this discretization follows below.)
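A small sketch of the discretization described in the last item, assuming a single complex coordinate: the magnitude gets a pinary cutoff and the phase is replaced by the nearest n:th root of unity; the parameter values below are arbitrary illustrations.

```python
import cmath, math

def discretize(xi, p, N, n):
    """Pinary-cutoff magnitude times the nearest n-th root of unity."""
    r, phi = abs(xi), cmath.phase(xi) % (2 * math.pi)
    k = round(phi * n / (2 * math.pi)) % n           # index of the nearest root of unity
    r_cut = math.floor(r * p ** N) / p ** N          # pinary cutoff of the magnitude
    return r_cut * cmath.exp(2j * math.pi * k / n)

print(discretize(complex(0.37, 0.52), p=3, N=2, n=12))
```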

Hierarchy of finite measurement resolutions and hierarchy of p-adic normal Lie groups

The formulation of quantum TGD is almost completely in terms of various symmetry groups, and it would be highly desirable to formulate the notion of finite measurement resolution in terms of symmetries.

  1. In the p-adic context any Lie algebra g with p-adic integers as coefficients has a natural grading based on the p-adic norm of the coefficients, just like p-adic numbers have a grading in terms of their norm. The sub-algebra g_N with the norm of the coefficients not larger than p^(-N) is an ideal of the algebra since one has [g_M, g_N] ⊂ g_(M+N): this has of course a direct counterpart at the level of p-adic integers. g_N is a normal sub-algebra in the sense that one has [g, g_N] ⊂ g_N. The standard expansion of the adjoint action g g_N g^(-1) in terms of exponentials and commutators gives that the p-adic Lie group G_N = exp(tpg_N), where t is a p-adic integer, is a normal subgroup of G = exp(tpg). If this is indeed so, then also G/G_N is a group and could perhaps be interpreted as a Lie group of symmetries in finite measurement resolution. G_N in turn would represent the degrees of freedom not visible in the measurement resolution used and would have the role of a gauge group. (A small matrix check of this grading follows the list below.)

  2. The notion of finite measurement resolution would have a rather elegant and universal representation in terms of various symmetries such as isometries of the imbedding space, Kac-Moody symmetries assignable to light-like wormhole throats, symplectic symmetries of δCD× CP2, the non-local Yangian symmetry, and also general coordinate transformations. This representation would have a counterpart in the real context via the canonical identification I, in the sense that A→ B for p-adic geometric objects would correspond to I(A)→ I(B) for their images under canonical identification. It is rather remarkable that in a purely real context this kind of hierarchy of symmetries modulo finite measurement resolution does not exist. The interpretation would be that finite measurement resolution relates to cognition and therefore to p-adic physics.

  3. The matrix group G contains only elements of the form g = 1 + O(p^m), m ≥ 1, and therefore does not involve matrices with elements expressible in terms of roots of unity. These can be included by writing the elements of the p-adic Lie group as products of elements of the above mentioned G with elements of a discrete group whose elements are expressible in terms of roots of unity in an algebraic extension of p-adic numbers. For a p-adic prime p the p:th roots of unity are natural and strongly suggested by quantum arithmetics.
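A small matrix check of the grading [g_M, g_N] ⊂ g_(M+N) of item 1 above, assuming the Lie algebra is realized by matrices with integer entries (numpy is used only for convenience): entries divisible by p^M commutated with entries divisible by p^N give entries divisible by p^(M+N).

```python
import numpy as np

p, M, N = 3, 1, 2
rng = np.random.default_rng(0)
A = p ** M * rng.integers(-5, 6, size=(4, 4))   # an element of g_M
B = p ** N * rng.integers(-5, 6, size=(4, 4))   # an element of g_N
comm = A @ B - B @ A
print(bool(np.all(comm % p ** (M + N) == 0)))   # True: the commutator lies in g_(M+N)
```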

For background see the chapter Quantum Adeles of "Physics as Generalized Number Theory".

Thursday, March 15, 2012

ICARUS measures light velocity for neutrino

The ICARUS collaboration has replicated the measurement of the neutrino velocity. The abstract summarizes the outcome.

The CERN-SPS accelerator has been briefly operated in a new, lower intensity neutrino mode with about 10^12 p.o.t./pulse and with a beam structure made of four LHC-like extractions, each with a narrow width of about 3 ns, separated by 524 ns. This very tightly bunched beam structure represents a substantial progress with respect to the ordinary operation of the CNGS beam, since it allows a very accurate time-of-flight measurement of neutrinos from CERN to LNGS on an event-to-event basis. The ICARUS T600 detector has collected 7 beam-associated events, consistent with the CNGS delivered neutrino flux of 2.2×10^16 p.o.t. and in agreement with the well known characteristics of neutrino events in the LAr-TPC. The time of flight difference between the speed of light and the arriving neutrino LAr-TPC events has been analyzed. The result is compatible with the simultaneous arrival of all events with equal speed, the one of light. This is in a striking difference with the reported result of OPERA that claimed that high energy neutrinos from CERN should arrive at LNGS about 60 ns earlier than expected from luminal speed.

The TGD based explanation for the anomaly would not have been super-luminality but the dependence of the maximal signal velocity on the space-time sheet (see this): the geodesics in the induced metric are not geodesics of the 8-D imbedding space. In principle the time taken to move from point A (say CERN) to point B (say Gran Sasso) depends on the space-time sheets involved. One of these space-time sheets would be that assignable to the particle beam - a good guess is a "massless extremal": along this the velocity is in the simplest case (cylindrical "massless extremals") the maximal signal velocity in M4×CP2.

Other space-time sheets involved can be assigned to various systems such as the Earth, the Sun, and the Galaxy, and they contribute to the effect (see this). It is important to understand how the physics of a test particle depends on the presence of parallel space-time sheets. Simultaneous topological condensation on all the sheets is extremely probable, so that at the classical level the forces are summed. The same happens at the quantum level. The superposition of the various fields assignable to parallel space-time sheets is not possible in the TGD framework and is replaced with the superposition of their effects. This resolves one of the strongest objections against the notion of induced gauge field.
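For orientation, the size of the originally reported anomaly can be expressed as a fractional velocity excess; a back-of-the-envelope sketch, assuming the usual CERN-Gran Sasso baseline of roughly 730 km, gives a deviation of order 2.5×10^(-5), which is the magnitude any explanation based on a space-time sheet dependent maximal signal velocity would have to reproduce.

```python
c = 2.998e8        # m/s
L = 730e3          # m, approximate CNGS baseline (an assumed round number)
dt = 60e-9         # s, early arrival reported by OPERA
t_light = L / c
print("light time of flight      :", t_light, "s")
print("fractional velocity excess:", dt / t_light)
```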

The outcome of the ICARUS experiment is not able to kill this prediction since at this moment I am not able to fix the magnitude of the effect. It is really a pity that such a fantastic possibility to wake up the sleeping colleagues is lost. I feel like a parent in a nightmare seeing his child drown and being unable to do anything.

There are other well-established effects in which the dependence of the maximal signal velocity on the space-time sheet is visible: one such effect is the observed slow increase of the time spent by a light ray propagating to the Moon and back. The explanation is that the effect is not real but due to the change of the unit for velocity defined by the light velocity assignable to the distant stars. The maximal signal velocity in Robertson-Walker cosmology is gradually increasing, and the anomaly emerges as an apparent anomaly when one assumes that the natural coordinate system assignable to the solar system (Minkowski coordinates) is the natural coordinate system on cosmological scales. The size of the effect is predicted correctly. Since the cosmic signal velocity defining the unit increases, the local maximal signal velocity, which is constant, seems to be decreasing and the measured distance to the Moon seems to be increasing.

Wednesday, March 14, 2012

A proposal for microtubular memory code

In an article in the March 8 issue of the journal PLoS Computational Biology, physicists Travis Craddock and Jack Tuszynski of the University of Alberta, and anesthesiologist Stuart Hameroff of the University of Arizona propose a mechanism for encoding synaptic memory in microtubules, major components of the structural cytoskeleton within neurons. The self-explanatory title of the article is Cytoskeletal Signaling: Is Memory Encoded in Microtubule Lattices by CaMKII Phosphorylation?.

1. Basic ideas of the model

The hexagonal cylindrical lattice of the microtubule suggests the possibility of a lattice consisting of bits, and probably very many proposals have been made. One such idea is that a bit is represented in terms of the two basic conformations of tubulin molecules called α and β. The recent proposal is that a bit corresponds to the phosphorylation state of tubulin. Also a proposal that the bits form 6-bit bytes is considered: 64 different bytes are possible, which would suggest a connection with the genetic code.
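One way to see the 6-bit counting is the conventional 2-bits-per-nucleotide bookkeeping; the particular bit-to-base mapping in the sketch below is purely hypothetical and serves only to show that 6 bits enumerate exactly the 64 codons.

```python
BASES = "ATCG"

def bits_to_codon(b):
    """Map a 6-bit integer (0..63) to a 3-letter codon, 2 bits per nucleotide."""
    return "".join(BASES[(b >> (2 * i)) & 3] for i in range(3))

codons = {bits_to_codon(b) for b in range(64)}
print(len(codons))               # 64 distinct codons
print(bits_to_codon(0b101101))   # the codon assigned to one particular 6-bit byte
```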

The motivation for the identification of the byte is that the CaMKII enzyme has in the active state an insect-like structure: 6 + 6 legs, and the legs are either phosphorylated or not. This geometry is indeed very suggestive of a connection with 6 inputs and 6 outputs representing genetic codons representable as sequences of 6 bits. The geometry and electrostatics of CaMKII are complementary to the microtubular hexagonal lattice so that CaMKII could take care of the phosphorylation of microtubulins: at most 6 tubulins would be phosphorylated on one side. The Ca+2 or calmodulin flux flowing into the neuron interior during a nerve pulse is responsible for the self-phosphorylation of CaMKII: one can say that CaMKII itself takes care that it remains permanently phosphorylated. I am not sure whether this stable phosphorylation means complete phosphorylation.

It is however difficult to imagine how the Ca+2 and calmodulin fluxes could contain the information about the bit sequence and how this information could be coded in a standard manner into the phosphorylation pattern of the legs. The only possibility which looks natural is that phosphorylation is a random process and only the fraction of phosphorylated legs depends on the Ca+2 and calmodulin fluxes. Another possibility would be that the subsequent phosphorylation of the MT by a completely phosphorylated CaMKII manages to do it selectively, but it is very difficult to imagine how the information about the codon could be transferred to the phosphorylation state of the MT.

For these reasons my cautious conclusion is that phosphorylation or its absence cannot represent a bit. What has however been found is a mechanism of phosphorylation of MTs, and the question is what the function of this phosphorylation could be. Could this phosphorylation be related to memory but in a different manner? The 6+6 structure of CaMKII certainly suggests that an analog of the genetic code based on 6 bits might be present but realized in some other manner.

1.1 What does one mean by memory?

Before proceeding one must make clear what one means by memory in the present context. The articles of New Scientist, with - almost as a rule - sensationalistic titles, do not pay much attention to the fact that this kind of proposals are always based on philosophical assumptions which might be wrong.

  1. What does one mean by "memory" in the present context? The memory in question is behavioral memory. Conditioning producing a reflex-like reaction is a typical example of behavioral memory and need not have anything to do with conscious memory, such as episodic memory in which one literally re-lives an event of the past. Electric stimulation of some regions of the temporal lobes can indeed induce this kind of memories. The idea about coding would suggest the identification of this memory with a highly symbolic computer-like memory based on "carving in stone".

  2. The proposal is inspired by the idea of the brain or cell as a computer and can be criticized. There is no pressing need for coding since behavioral memory can be reduced to the formation of associations, and associative learning by computers is a standard example of this kind of behavioral memory. One can of course consider coding for declarative and verbal memories, and the genetic code provides an attractive candidate for a universal code. This kind of code might be behind the natural languages as a kind of molecular language.

  3. Behavioral memories can be defined as changes of behavior resulting from a continued stimulus. The understanding of behavioral memory relies on the notions of synaptic strength, synaptic plasticity, and long term potentiation. Synaptic strength tells how strongly the postsynaptic neuron responds to the nerve pulse pattern arriving along the pre-synaptic axon and mediated by a neurotransmitter over the synaptic gap. For instance, glutamate acts as an excitatory neurotransmitter by binding to a receptor. At the neuronal level long term potentiation means an increase of the synaptic strength so that the post-synaptic neuron becomes "more attentive" to the firing of the pre-synaptic neuron.

    Hebb's rules - not established laws of Nature, and plagued by exceptions - state that the effectiveness of synaptic receptors increases when the two neurons fire simultaneously: it is important to notice that these firings need not have any causal connection with each other. The simultaneous firing activates NMDA receptors in the post-synaptic neuron and generates a Ca+2 flux which correlates with the increase of the synaptic strength. NMDA obeys the same chemical formula C5H9NO4 as glutamate: in fact, glutamate and aspartate are the two acidic amino-acids. It is also known that the presence of CaMKII is necessary for the increase of the synaptic strengths.

  4. There is however an almost-paradox involved with this view of memory if it is assumed to explain all kinds of memories - in particular episodic memories. Long term conscious memories can be lifelong. Synaptic structures are however highly unstable since the synapses and the proteins involved are cycled. To my view this argument is somewhat naive. There could be a flow equilibrium: the flow pattern of a fluid in flow equilibrium can be stable although the fluid is replaced with new fluid all the time. The proposal of the authors is that memories are stored in some more stable structures and that microtubules are these more stable structures, making possible short term memories. Post-synaptic microtubules, which differ from presynaptic microtubules in several manners, are indeed stabilized by MAPs. The authors also propose that the thin filaments associated with the cytoskeleton are responsible for long term memories.

    The authors believe in computationalism and they apply the standard view about time, so their conclusion is that long term memories are stored elsewhere and remain able to regulate synaptic plasticity. In this framework the notion of a memory code is very natural.

1.2 LTP and synaptic plasticity

From Wikipedia one can read that synaptic plasticity means the possibility of changes in the function, location and/or number of post-synaptic receptors and ion channels. Synapses are indeed very dynamical, and synaptic receptors and channel proteins are transient, which does not seem to conform with the standard view about long term memory and indeed suggests that the stable structures are elsewhere.

Long term potentiation, briefly LTP, involves gene expression, protein synthesis and recruitment of new receptors or even synapses. The mechanism of LTP is believed to be the following. Glutamate from the pre-synaptic neuron binds to post-synaptic receptors, which leads to the opening of Ca+2 channels and an influx of Ca+2 ions into dendritic spines, shafts and the neuronal cell body. The inflow of Ca+2 induces activation of multiple enzymes including protein kinases A and C and CaMKII. These enzymes phosphorylate intra-neuronal molecules.

It is known that the presence of CaMKII is necessary for long term potentiation. This supports the proposal of the authors that microtubules are involved in an essential manner in memory storage and processing and in the regulation of synaptic plasticity. The observation about the correspondence between the geometries of CaMKII and the microtubular surface is rather impressive support for the role of MTs. In my opinion, the hypothesis about a memory code is however unnecessary.

1.3 Microtubules

Quite generally, microtubules (MTs) are basic structural elements of the cytoskeleton. They are rope-like polymers and can grow as long as 25 micrometers. They are highly dynamical. The standard view identifies their basic functions as the maintaining of cell structures, providing platforms for intracellular transport, forming the spindle during mitosis, etc.

Microtubules are extremely abundant in eukaryotic biology and in brain neurons. They are believed to connect the membrane and cytoskeletal levels of information processing together. MTs are the basic structural elements of axons, and MTs in axons and in dendrites/neuronal cell bodies are different. Dendrites contain antiparallel arrays of MTs interrupted and stabilized by microtubule associated proteins (MAPs) including MAP2. This difference between dendritic and axonal microtubules could be relevant for the understanding of neuronal information processing. Microtubules are also associated with the long neural pathways from sensory receptors, which seem to maximize their length.

For these reasons it would not be surprising if MTs played a key role in information processing at the neuronal level. Indeed, the more modern view tends to see microtubules as the nervous system of the cell, and the hexagonal lattice-like structure of microtubules strongly suggests information processing as a basic function of microtubules. Many information processing related functions have been proposed for microtubules. Microtubules have been suggested to play a role as cellular automatons, and also quantum coherence in the microtubular scale has been proposed.

The proposal of the article is that short term memory is realized in terms of a memory code at the level of MTs and that the intermediate filaments, which are much more stable, could be responsible for long term memory.

1.4 CaMKII enzyme

According to the proposal the key enzyme of memory would be Calcium/calmodulin-dependent protein kinase II: briefly CaMKII. Its presence is known to be necessary for long term potentiation.

In the passive state CaMKII has a snowflake shape. The activated kinase looks like a double-sided insect with six-legged kinase domains on both sides of a central domain. Activation means phosphorylation of the 6+6 legs of this "nano-insect". In the presence of a Ca+2 or calmodulin flux CaMKII self-activates, meaning self-phosphorylation, so that it remains permanently active.

There are however grave objections against the coding phosphate = 1, no phosphate = 0.

  1. Only the fluxes of Ca+2 and/or calmodulin matter, so it is very difficult to imagine any coding. One would expect that the fraction of phosphorylated legs depends on these fluxes in equilibrium, but it is very difficult to imagine how these fluxes could carry information about a specific pattern of phosphorylation of the legs. If all legs are phosphorylated, the coding to microtubular phosphorylation would require that 6 bits of information are fed in at this stage by telling which legs actually give their phosphates to tubulin. This does not look too plausible, but one must be very cautious in drawing too strong conclusions.

  2. Since metabolic energy is necessary for any information processing, the more plausible interpretation would be that phosphorylation makes the bit active. The bit itself would be represented in some other manner. The 6+6 leg structure of CaMKII is very suggestive of a connection with 6 incoming bits and 6 outgoing bits - possibly the same or conjugated. The interpretation in terms of a DNA codon and its conjugate is what comes first to mind.

One should not however throw away the child with the wash water. The highly interesting discovery discussed in the article is that the spatial dimensions, geometric shape, and electrostatic binding of the insect-like CaMKII and the hexagonal lattices of tubulin proteins in microtubules fit nicely together. The authors show how CaMKII kinase domains can collectively bind to and phosphorylate MTs. This alone could be an extremely important piece of information. There is no need to identify a bit with the phosphorylation state.

2. TGD view about the situation

The TGD based view about memory could have been developed by starting from the paradox related to long term memories: memories are long lasting, but the structures supposed to be responsible for their storage are short-lived. The TGD based solution of the paradox would be based on a new view about the relationship between geometric time and experienced time.

  1. According to this view the brain is 4-dimensional and the primary memories are in the time and place where the neural event took place for the first time. In principle there would be no need to store memories by "carving them in stone". To remember would be to see in the time direction: this view is indeed possible in zero energy ontology. Time-like entanglement and signaling to the geometric past using negative energy signals would be the basic mechanisms of memory.

  2. Stable memories require copies also for another reason. The negative energy signal to the geometric past is not expected to allow a precise targeting to one particular moment of time in the past. To circumvent the problem one must make the target large enough in the time direction. The strengthening of a memory would mean building up a large number of copies of the memory. These copies are produced in every conscious memory recall, and learning would be based on this mechanism. The neuronal mechanism would produce a large number of copies of the memory, and one can ask whether CaMKII indeed generates phosphorylated sections of the MT somehow essential for the representation of long term symbolic memories as names for experiences rather than the experiences themselves.

  3. Metabolism must also relate to conscious memory recall. Since negative energy signals are involved, there is a great temptation to assume that de-phosphorylation, liberating metabolic energy corresponding to the absorbed negative energy, accompanies memory recall. Large hbar for the photons involved would allow very low frequencies - expected to characterize the time span of memory recall - and make communications over very long time intervals possible. This would mean that the original memory representation is destroyed in the memory recall. This would conform with the spirit of the quantum no-cloning theorem. Several copies of the memory representation would be needed, and also a feed of metabolic energy to generate new copies. In this framework conscious memory recall would be a dynamical event rather than a stable bit sequence, in accordance with the vision about the quantum jump as a moment of consciousness.

2.1 Braiding and memory

This leaves a lot of freedom to construct more detailed models of symbolic memories.

  1. Braiding of magnetic flux tubes would make possible not only topological quantum computation but also a universal mechanism of long term memory. In the model of DNA as a topological quantum computer the flux tubes connect DNA nucleotides and lipids of the cell membrane. It turned out that the flux tubes carrying dark matter - identified as ordinary particles but with a non-standard value of Planck constant - could connect all kinds of biomolecules and that braiding and reconnection could serve as basic quantum mechanisms in the functioning of biomolecules. Flux tubes could also connect the tubulins of microtubules and the lipids of the axonal or dendritic membrane.

  2. Two kinds of braidings are present. The lipid flow defines a braiding in the time direction as the analog of a dance; moreover, the lipids are like dancers with threads from their shoes to a wall - now the microtubule surface - so that the dance induces a braiding of these threads, storing the dynamics of the dance into memory. The presence of both space-like and time-like braidings and the fact that they are in a well-defined sense dual have become a central idea of quantum TGD itself. Originally it was however discovered in the model of DNA as a topological quantum computer.

  3. Both active memory recall by sending a negative energy dark photon to the geometric past and spontaneous memory recall by receiving a positive energy photon from the geometric past require metabolic energy. Therefore the presence of phosphate in braid strands is necessary. The flux tubes defining braid strands can therefore be assumed to be active only if they have a phosphate at the other end. A more appropriate TGD based interpretation is that this makes possible negentropic entanglement, which is one of the basic predictions of the number theoretic vision about life. The high energy phosphate bond would thus be a signature of negentropic entanglement, which could serve as a correlate for the experience of understanding. One could relate the ATP-ADP process as a basic process of life directly to cognition. The presence of phosphate would tell that there is a magnetic flux tube - actually a pair of them - beginning from the molecule.

2.2 TGD inspired microtubular model of memory

The finding of the authors inspires a more detailed formulation of the vision for how memories could be realized at the microtubular level.

  1. The phosphorylation of tubulins would generate active braid strands, and their presence would make possible memory recall. Note that memories as such could be stored in the braiding in any case if the microtubule-lipid flux tubes are always present. Every nerve pulse pattern would induce a flow of lipids at the neuronal membrane if the membrane is in a phase corresponding to a 2-D liquid crystal. This flow pattern would be stored in the braiding of the flux tubes.

  2. In the model of DNA as a topological quantum computer one assigns to the braid strands connecting DNA nucleotides to lipids 4 different states representing the nucleotides A, T, C, G. In the original model A, T, C, G were mapped to four states defined by the quarks u, d and their antiquarks at the ends of braid strands. This proposal can of course be accused of being quite too science fictive. TGD however predicts the possibility of scaled up variants of QCD type physics even at the scale of living matter, and there are some indications for this.

    A more down-to-earth realization of the genetic code proposed quite recently is that braid states correspond to pairs of magnetic flux tubes. To the ends of both flux tubes one assigns an electron so that the electrons form spin triplet and spin singlet states defining 3+1 states representing A, T, C, G. This gives also a connection with electronic super-conductivity, which is a fundamental assumption in the model of nerve pulse based on Josephson currents: nerve pulse corresponds to a simple perturbation of the ground state in which all Josephson currents along the axon oscillate in the same phase. Mathematically the phase difference behaves like a gravitational pendulum (see the TGD inspired model for nerve pulse and the toy sketch after this list).

    The 6=2+2+2 legs could correspond to flux tube pairs and each flux tube pair would represent a DNA nucleotide in terms of the spin state of the electron pair. Phosphorylation would activate the braid strand by making possible negentropic entanglement and thus information storage and recall. This conforms with the fact of life that metabolic energy is needed for all kinds of information processing, including information storage.

  3. For this proposal LTP would mean the generation of active braid strands. The post-synaptic neuron would be in a "wake-up" state and would pay attention to the nerve pulse patterns arriving from the pre-synaptic neuron. This activation would be induced by simultaneous firing of post-synaptic and pre-synaptic neurons. As a consequence, the lipid flow would generate braidings providing memory representations and defining quantum computation like processes in the temporal domain.

  4. This does not yet explain why CaMKII is necessary for LTP. There is a strong temptation to regard the increase of the synaptic sensitivity as a property of the synaptic connection. One can imagine several mechanisms.

    1. For instance, active flux tube connections between presynaptic lipids and postsynaptic microtubuli could be generated by phosphorylation, and the flux tubes might increase the flow of glutamate between pre- and post-synaptic neurons and in this manner increase synaptic strength. Flux tubes might make possible a continual flow of dark particles between pre- and post-synaptic neurons. They could also make possible negentropic entanglement between the two neurons binding them into a single coherent quantum whole.

    2. The strength of this connection could be affected also by the presence of active braid strands making possible quantum memory and topological quantum computation. Also more complex processes assigned with LTP would become possible since microtubules might be seen as conscious intelligent structures able to modify their nearby environment.
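The toy sketch promised in item 2 of the list above: the Josephson phase difference φ is taken to obey the gravitational pendulum equation d²φ/dt² = -ω² sin(φ), exactly the analogy stated in the text. All parameter values below are illustrative assumptions, not values from the TGD nerve pulse model.

```python
import math

def josephson_pendulum(omega=1.0, phi0=0.1, dphi0=0.0, dt=0.001, t_max=50.0):
    """Integrate d^2(phi)/dt^2 = -omega^2 * sin(phi) with a semi-implicit Euler step.

    Only the textbook pendulum analogy for the Josephson phase difference;
    parameter values are illustrative assumptions."""
    steps = int(t_max / dt)
    phi, dphi = phi0, dphi0
    trajectory = []
    for i in range(steps):
        dphi += -omega**2 * math.sin(phi) * dt   # update angular velocity
        phi += dphi * dt                          # update phase
        trajectory.append((i * dt, phi))
    return trajectory

# A small perturbation of the ground state phi = 0 gives a nearly harmonic
# oscillation: the analog of Josephson currents oscillating in the same phase.
trajectory = josephson_pendulum(phi0=0.1)
print(trajectory[-1])
```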

For background see chapter Quantum Model for Memory of "TGD Inspired Theory of Consciousness".

Sunday, March 11, 2012

Intelligent neurons

Postings to particle physics blogs seem to have degenerated into boring n-sigma talk, which faithfully reflects the deep stagnation in particle physics theory. In biology and neuroscience the situation is totally different. For instance, this morning I found a highly interesting article about intelligent neurons.

The article gives a link to a Youtube video about a robot, which can move and has learned to avoid obstacles. The robot is controlled by neurons from a rat embryo, which have grown to form a densely connected population of about 100,000 neurons. Some kind of training is performed.

The neuron population has developed a sense of vision. We do not know whether the population sees as we do. We can also "see" using other senses such as the tactile sense, and this must be based on building a geometric symbolic representation of the environment from the sensory input. The neuron population utilizes the visual input in its motor activities, made possible by the coupling to a simple robot, and learns to avoid obstacles.

The interpretation in the framework of standard neuroscience is challenging. The neuron population behaves as if it were a conscious creature with intentions, goals, the ability to see, and the ability to move in a manner taking into account the constraints posed by the environment. But a neuroscientist who has read his text books is a materialist and not willing to admit that a neuron population could behave like a conscious creature. He tries to understand the situation on the basis of classical computation realizing some kind of unconscious general intelligence and relying on some mystical algorithm allowing to recognize patterns and adapt to almost any situation. In principle the neuron population should be replaceable by a general purpose computer program if computationalism works. This kind of extreme flexibility means that this program must be very, very long, perhaps too long to be fast enough and to be realizable using finite metabolic and material resources!

In the second experiment discussed in the posting the ferret's brain was rewired. The visual sensory input from the optic nerve was redirected to the brain regions where the input from the ears is normally processed. Despite this the brain region re-configured automatically to make sense of the visual data and allowed the ferret to see with 1/3 of normal vision. The structures presumably needed for seeing were automatically formed in the brain section used to process sound.

  1. The experiment excludes the idea that genes are responsible for the differences between auditory and visual regions of the brain. Neurons are the same neurons everywhere in the brain, and it is self-organization which is responsible for the specialization to see or hear - or rather, to process sensory inputs whatever they are. Top-down instructions at the brain level can be excluded since the ferret's brain does not know about the rewiring. Also the feedback teaching the neurons to do the "right" thing, as in the first experiment, was absent. It seems that the neuron population behaves as a conscious creature able to self-organize to utilize the input from sensory organs, be it visual or auditory.

  2. In ordinary neuroscience this seems to leave only one option. If neurons are assumed to be unconscious, the rewiring between neurons should somehow give rise to a symbolic representation of the geometry of the environment. The great mystery is how a mere wiring could give rise to various qualia such as color even in the normal situation. The idea that different wiring topologies could give rise to different qualia looks utterly implausible.

  3. The symbolic representation of the environment could use only geometric data. Whether it involves also qualia such as colors could be clarified by an experiment using human subjects, but this raises ethical and probably also practical issues. One could however test whether the ferret learns to recognize objects with different colors: for instance, a green object could cause a pleasant sensation and a red object an unpleasant sensation. A variant of this test might be possible even in the case of the neuronal robot.

What about the interpretation in the TGD framework? Both experiments conform with the general TGD inspired vision about the brain.

  1. For the simplest option the fundamental sensory qualia reside at the level of sensory receptors. This is not the only option but it is a very attractive one. I do not repeat here the arguments allowing to circumvent the objections against qualia at the sensory receptors: phantom limb, etc. The sensation of color involves quantum entanglement of the magnetic body, brain, and retina. In the latter experiment the auditory regions would entangle with the retina, and the prediction is that also now color qualia are present.

  2. The brain builds up symbolic representations by decomposing the sensory input into objects and giving them names. Sensory organs take care of the "coloring" of the resulting sensory map. Sensory feedback from neurons is needed since our percepts do not represent what is there but a caricature exaggerating the important features. The sensory feedback is carried by dark photons with a non-standard value of Planck constant propagating from neurons to the retina. We call them bio-photons when they leak out by transforming to ordinary photons.

  3. The neuronal lipid layer serves as the analog of a computer monitor screen, with each lipid carrying various basic attributes associated with the neuronal sensory experience, which remains unconscious to us. This explains grandma neurons able to recognize some particular sensory input. Also neurons possess various primary qualia, but sensory organs have specialized to produce sensory qualia at our level of the self hierarchy.

  4. Neurons are conscious creatures able to co-operate because they have a collective magnetic body controlling the neuron population, and they can therefore rapidly adapt to a changing environment as in the first experiment. The presence of the magnetic body of course distinguishes sharply between the TGD inspired and neuroscience explanations.

Thursday, March 08, 2012

Updated view about Quantum Adeles

I have been working during the last weeks with quantum adeles. This has involved several wrong tracks, and about five days ago a catastrophe splitting the chapter "Quantum Adeles" into two pieces entitled "Quantum Adeles" and "About Absolute Galois Group" took place; it simplified dramatically the view about what adeles are and led to the notion of quantum mathematics. At least now the situation seems to have settled down and I see no signs of possible new catastrophes. I glue the abstract of the re-incarnated "Quantum Adeles" below.

Quantum arithmetics provides a possible resolution of a long-lasting challenge of finding a mathematical justification for the canonical identification mapping p-adics to reals, which plays a key role in TGD - in particular in p-adic mass calculations. p-Adic numbers have pinary expansions ∑ a_n p^n satisfying a_n < p. Quantum arithmetics would allow the coefficients a_n of the powers p^n to be products of primes p_1 < p, whereas a_n < p holds for ordinary p-adic numbers. One could map this expansion to its quantum counterpart by replacing the a_n with their quantum counterparts and mapping the expansion to a real number by the canonical identification p → 1/p. This definition might be criticized as being essentially equivalent with ordinary p-adic numbers, since one can argue that the map of coefficients a_n to their quantum counterparts takes place only in the canonical identification map to reals.

One could however modify this recipe. Represent the integer n as a product of primes l and allow for the coefficients a_n all expansions consisting of primes p_1 < p, but give up the condition a_n < p. This would give a 1-to-many correspondence between ordinary p-adic numbers and their quantum counterparts.

It took time to realize that the l < p condition might be necessary, in which case the quantization in this sense - if present at all - could be associated with the canonical identification map to reals. It would correspond only to the process taking into account finite measurement resolution rather than a replacement of the p-adic number field with something new, hopefully a field. At this step one might perhaps allow l > p so that one would obtain several real images under the canonical identification.
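Purely as an illustration of the canonical identification p → 1/p acting on a pinary expansion, here is a minimal Python sketch. The digit list and the prime are arbitrary example values, not anything specific to the TGD applications, and the quantum replacement of the coefficients is not modelled.

```python
def canonical_identification(digits, p):
    """Map a p-adic pinary expansion sum a_n p^n (digits = [a_0, a_1, ...],
    each a_n < p) to the real number sum a_n p^(-n)."""
    return sum(a * p**(-n) for n, a in enumerate(digits))

# Example: the 5-adic expansion 3 + 1*5 + 4*5^2 maps to 3 + 1/5 + 4/25
print(canonical_identification([3, 1, 4], 5))  # 3.36
```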

This did not however mean giving up the idea of generalizing the number concept. One can replace the integer n with an n-dimensional Hilbert space and the sum + and product × with the direct sum ⊕ and tensor product ⊗, and introduce their co-operations, whose definition is highly non-trivial.

This procedure yields also Hilbert space variants of rationals, algebraic numbers, p-adic number fields, and even complex, quaternionic and octonionic algebraics. Also adeles can be replaced with their Hilbert space counterparts. Even more, one can replace the points of Hilbert spaces with Hilbert spaces and repeat this process, which is very similar to the construction of infinite primes having an interpretation in terms of repeated second quantization. This process could be the counterpart of the construction of nth order logics and one might speak of Hilbert or quantum mathematics. The construction would also generalize the notion of algebraic holography and provide a self-referential cognitive representation of mathematics.

This vision emerged from the connections with generalized Feynman diagrams, braids, and with the hierarchy of Planck constants realized in terms of coverings of the imbedding space. The Hilbert space generalization of the number concept seems to be extremely well suited for the purposes of TGD. For instance, generalized Feynman diagrams could be identifiable as arithmetic Feynman diagrams describing sequences of arithmetic operations and their co-operations. One could interpret ×q and +q and their co-algebra operations as 3-vertices for number theoretical Feynman diagrams describing algebraic identities X=Y having a natural interpretation in zero energy ontology. The two vertices have direct counterparts as the two kinds of basic topological vertices in quantum TGD (stringy vertices and vertices of Feynman diagrams). The definition of the co-operations would characterize quantum dynamics. Physical states would correspond to the Hilbert space states assignable to numbers. One prediction is that all loops can be eliminated from generalized Feynman diagrams and that the diagrams are in a projective sense invariant under permutations of incoming (outgoing) legs.

I glue also the abstract for the second chapter "About Absolute Galois Group", which came out from the catastrophe. The reason for the splitting was that the question whether the Absolute Galois group might be isomorphic with the analog of the Galois group assigned to quantum p-adics ceased to make sense.

The Absolute Galois Group, defined as the Galois group of algebraic numbers regarded as an extension of rationals, is a very difficult concept to define. The goal of the classical Langlands program is to understand the Galois group of algebraic numbers as an algebraic extension of rationals - the Absolute Galois Group (AGG) - through its representations. Invertible adeles - ideles - define GL1, which can be shown to be isomorphic with the Galois group of the maximal Abelian extension of rationals (MAGG), and the Langlands conjecture is that the representations for algebraic groups with matrix elements replaced with adeles provide information about AGG and algebraic geometry.

I have asked already earlier whether AGG could act as symmetries of quantum TGD. The basic idea was that AGG could be identified as a permutation group for a braid having an infinite number of strands. The notion of quantum adele leads to the interpretation of the analog of the Galois group for quantum adeles in terms of permutation groups assignable to finite l braids. One can also assign braid structures to infinite primes, and Galois groups have a lift to braid groups (see this).

Objects known as dessins d'enfant provide a geometric representation of AGG in terms of its action on algebraic Riemann surfaces allowing an interpretation also as algebraic surfaces in finite fields. This representation would make sense for algebraic partonic 2-surfaces, could be important in the intersection of real and p-adic worlds assigned with living matter in TGD inspired quantum biology, and would allow one to regard the quantum states of living matter as representations of AGG. Adeles would make these representations very concrete by bringing in cognition represented in terms of p-adics, and there is also a generalization to Hilbert adeles.

For details see the new chapters Quantum Adeles and About Absolute Galois Group of "Physics as Generalized Number Theory".

Quantum mathematics

The comment of Pesla to the previous posting contained something relating to the self-referentiality of consciousness and inspired a comment which in my opinion deserves the status of a posting. The comment summarizes the recent work to which I have associated the phrase "quantum adeles" but to which I would now prefer to assign the phrase "quantum mathematics".

In my view the self-referentiality of consciousness is the real "hard problem". The "hard problem" as it is usually understood is only a problem of the dualistic approach. My hunch is that the understanding of self-referentiality requires completely new mathematics with explicitly built-in self-referentiality. During the last weeks I have been writing and rewriting the chapter about quantum adeles and ended up proposing what this new mathematics might be. The latest draft is here.

1. Replacing numbers with Hilbert spaces and + and × with direct sum and tensor product

The idea is to start from arithmetic - + and × for natural numbers - and to generalize it.

  1. The key observation is that + and × have the direct sum and tensor product for Hilbert spaces as complete analogs: a natural number n has an interpretation as a Hilbert space dimension and can be mapped to an n-dimensional Hilbert space.

    So: replace natural numbers n with n-D Hilbert spaces at the first abstraction step. n+m and n×m go to the direct sum n⊕m and tensor product n⊗m of Hilbert spaces. You calculate with Hilbert spaces rather than numbers. This induces a calculation for Hilbert space states, and sum and product are like 3-particle vertices (a toy sketch of this bookkeeping follows after this list).

  2. At the second step construct integers (also negative) as pairs of Hilbert spaces (m,n), identifying (m⊕r,n⊕r) and (m,n). This gives what might be called negative dimensional Hilbert spaces! Then take these pairs and define rationals as Hilbert space pairs (m,n) of this kind with (m,n) equivalent to (k⊗m,k⊗n). This gives rise to what might be called m/n-dimensional Hilbert spaces!

  3. At the third step construct Hilbert space variants of algebraic extensions of rationals. A Hilbert space with dimension sqrt(2), say: this is a really nice trick. After that you can continue with p-adic number fields and even reals: one can indeed understand even what a π-dimensional Hilbert space could be!

    The essential element in this is that the direct sum decompositions and tensor products would have a genuine meaning: infinite-D Hilbert spaces associated with transcendentals would have different decompositions and would not be equivalent. Also in quantum physics decompositions into tensor products and direct sums (say representations of a symmetry group) have physical meaning: the abstract Hilbert space of infinite dimension is too rough a concept.

  4. Do the same for complex numbers, quaternions, and octonions, the imbedding space M4×CP2, etc. The objection is that the construction is not general coordinate invariant. In coordinates in which a point corresponds to integer valued coordinates one has a finite-D Hilbert space, and in coordinates in which the coordinates of the point correspond to transcendentals one has an infinite-D Hilbert space. This makes sense only if one interprets the situation in terms of cognitive representations for points. π is very difficult to represent cognitively since it has an infinite number of digits for which one cannot give a formula. "2" in turn is very simple to represent. This suggests an interpretation in terms of self-referentiality. The two worlds with different coordinatizations are not equivalent since they correspond to different cognitive contents.
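The toy sketch referred to in item 1 above: a minimal Python bookkeeping of the first two abstraction steps (natural numbers as Hilbert space dimensions with ⊕ and ⊗, then integers and rationals as pairs with the stated equivalences). Only dimensions are tracked, so this is a sketch of the construction rather than an implementation of actual Hilbert spaces; the class names are purely illustrative.

```python
from fractions import Fraction

class HilbertDim:
    """A natural number represented as the dimension of a Hilbert space."""
    def __init__(self, n):
        self.n = n
    def __add__(self, other):   # direct sum: dim(H ⊕ K) = dim H + dim K
        return HilbertDim(self.n + other.n)
    def __mul__(self, other):   # tensor product: dim(H ⊗ K) = dim H * dim K
        return HilbertDim(self.n * other.n)
    def __repr__(self):
        return f"HilbertDim({self.n})"

class HilbertPair:
    """Pair (m, n) of Hilbert space dimensions.

    With the additive equivalence (m⊕r, n⊕r) ~ (m, n) the pair represents
    the integer m - n; with the multiplicative equivalence (k⊗m, k⊗n) ~ (m, n)
    it represents the rational m/n."""
    def __init__(self, m, n):
        self.m, self.n = m, n
    def as_integer(self):
        return self.m - self.n
    def as_rational(self):
        return Fraction(self.m, self.n)

# 2 + 3 = 5 and 2 * 3 = 6 at the level of Hilbert space dimensions
print(HilbertDim(2) + HilbertDim(3), HilbertDim(2) * HilbertDim(3))
# The pair (2, 5) represents the integer -3 or the rational 2/5
print(HilbertPair(2, 5).as_integer(), HilbertPair(2, 5).as_rational())
```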

Replace also the coordinates of points of Hilbert spaces with Hilbert spaces again and again!

The second key observation is that one can do all this again, but at a new level. Replace the numbers defining the vectors of the Hilbert spaces (number sequences) assigned to numbers with Hilbert spaces! Continue ad infinitum by replacing points with Hilbert spaces again and again.

You get a sequence of abstractions, which would be analogous to a hierarchy of nth order logics. At the lowest level would be just predicate calculus: statements like 4=2². At the second level abstractions like y=x². At the next level collections of algebraic equations, etc.
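A minimal sketch of this iteration, assuming nothing beyond the text: each basis point of a Hilbert space is again replaced by a (finite-dimensional) Hilbert space, and the replacement is repeated to a chosen depth. The recursion depth plays the role of the order of logic in the analogy above; the nested-list representation is only an illustrative bookkeeping device.

```python
def iterate_hilbert(n, depth):
    """Represent an n-dimensional Hilbert space whose basis points are,
    at each further level, again replaced by n-dimensional Hilbert spaces.

    At depth 0 the structure is just the dimension n; at depth d it is a
    list of n copies of the depth d-1 structure."""
    if depth == 0:
        return n
    return [iterate_hilbert(n, depth - 1) for _ in range(n)]

# Depth 2 over a 2-dimensional space: points replaced by 2-D spaces,
# whose points are in turn replaced by 2-D spaces.
print(iterate_hilbert(2, 2))  # [[2, 2], [2, 2]]
```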

Connection with infinite primes and endless second quantization

This construction is structurally very similar to - if not equivalent with - the construction of infinite primes, which corresponds to repeated second quantization in quantum physics. There is also a close relationship to - maybe an equivalence with - what I have called algebraic holography or number theoretic Brahman=Atman identity. Numbers have an infinitely complex anatomy not visible to the physicist but necessary for understanding the self-referentiality of consciousness and allowing mathematical objects to be holograms coding for mathematics. Hilbert spaces would be the DNA of mathematics from which all mathematical structures would be built!

Generalized Feynman diagrams as mathematical formulas?

I did not mention that one can assign to the direct sum and tensor product their co-operations, and sequences of mathematical operations are very much like generalized Feynman diagrams. The co-product, for instance, would assign to an integer m all its factorizations into a product of two integers, with some amplitude for each factorization. Same for the co-sum. An operation and its co-operation would together give meaning to a 3-particle vertex. The amplitudes for the different factorizations must satisfy consistency conditions: associativity and distributivity might give constraints on the couplings to different channels - as a particle physicist might express it.
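A minimal sketch of the co-product just described: for an integer m it lists all ordered factorizations m = a × b and attaches an amplitude to each channel. The uniform amplitudes are an illustrative assumption only; as stated above, the actual amplitudes would be constrained by the consistency conditions.

```python
import math

def coproduct(m):
    """Return the channels (a, b, amplitude) with a * b = m.

    Uniform amplitudes normalized to unit total probability are an
    illustrative assumption; associativity and distributivity would
    constrain them further."""
    channels = [(a, m // a) for a in range(1, m + 1) if m % a == 0]
    amp = 1.0 / math.sqrt(len(channels))
    return [(a, b, amp) for a, b in channels]

# Co-product of 12: channels (1,12), (2,6), (3,4), (4,3), (6,2), (12,1)
for a, b, amp in coproduct(12):
    print(f"12 -> {a} x {b}, amplitude {amp:.3f}")
```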

The proposal is that quantum TGD is indeed quantum arithmetics with product and sum and their co-operations. Perhaps even something more general, since also quantum logic and quantum set theory could be included! Generalized Feynman diagrams would correspond to formulas and sequences of mathematical operations, with the stringy 3-vertex as a fusion of 3-surfaces corresponding to ⊕ and the Feynmannian 3-vertex as a gluing of 3-surfaces along their ends, which are partonic 2-surfaces, corresponding to ⊗! One implication is that all generalized Feynman diagrams would reduce to a canonical form without loops and the incoming/outgoing legs could be permuted. This is actually a generalization of the old fashioned string model duality symmetry that I proposed years ago but gave up as too "romantic": see this.

Friday, March 02, 2012

About the meaning of effective 2-dimensionality and strong form of holography

The answer to the questions of Hamed in the previous posting involved a lot of talk about effective 2-dimensionality. Effective two-dimensionality boils down to the statement that partonic 2-surfaces and the 4-D tangent space data - briefly 2-data - at them code for physics. What does this statement mean? The answer is not trivial, and one can actually consider two alternative answers, which turn out to be equivalent.

  1. The preferred extremal property for Kähler action assigns to both the space-like 3-surfaces at the boundaries of CDs and the light-like 3-surfaces defined by the orbits of wormhole throats, at which the signature of the induced metric changes, the same unique space-time surface, apart from delicacies due to the failure of the strict determinism of Kähler action. This implies that the partonic 2-surfaces and their tangent space data (briefly 2-data) at the boundaries of CDs and sub-CDs code for physics, in particular for the preferred extremal of Kähler action. This represents the strong form of holography following from the strong form of GCI.

  2. This statement has a very simple realization leading to the first option. The formulas for the WCW metric involve only data about the partonic 2-surfaces and their tangent space. As a matter of fact, these formulas emerged much before the realization of the strong form of general coordinate invariance and the role of light-like 3-surfaces and ZEO, as the only mathematically sensible expressions for the WCW metric.

    1. This suggests that there is a gauge symmetry in the sense that all space-like 3-surfaces with the same 2-data are gauge equivalent. The gauge symmetry would be a generalization of the ordinary 2-D conformal symmetry acting on the light-like boundary of CD, which is also metrically 2-D. This was the original interpretation of the situation and is still very attractive.

    2. But what about the dynamics of light-like wormhole throats? Does this dynamics possess gauge invariance too, or does the choice of space-like 3-surfaces fix the light-like wormhole orbits completely, as suggested by the idea that the preferred extremal of Kähler action, including the orbits of wormhole throats, is unique apart from the delicacies caused by the failure of the strict determinism of Kähler action? It seems that one can have gauge invariance due to generalized conformal symmetries associated with light-like 3-surfaces also now: this time it corresponds to the Kac-Moody symmetries assignable to the isometries of the imbedding space. Light-likeness is the key word and obviously puts space-time dimension four in a unique position mathematically (how many times I have said this in the hope that a colleague could manage to overcome his natural arrogance and listen to what I am saying!).

      This interpretation was originally suggested by the properties of the CP2 type vacuum extremals identified as a first order model for the lines of generalized Feynman diagrams: they have a light-like random curve as M4 projection. The dynamics is non-deterministic and one has a gauge invariance mathematically identical with conformal invariance restricted to the real line. Quite generally, the two kinds of conformal symmetries acting as gauge symmetries of the WCW metric give rise to gauge invariance for both space-like 3-surfaces and light-like wormhole throat orbits.

  3. There is however also another option. For this option the two time-like coordinates transversal to the space-like 3-surfaces and wormhole throat orbits respectively would define a deterministic dynamics locally, that is in small regions of the 3-surface. WCW would not contain all possible space-like 3-surfaces but only those 3-surfaces obeying a deterministic dynamics in the space-like directions transversal to the partonic 2-surfaces that it contains. An analogous statement would be true about the orbits of wormhole throats. This option does not seem too plausible as such.

    One can however consider an interpretation in terms of gauge fixing forcing deterministic dynamics, just as it does in gauge theories. Since conformal invariance is gauge invariance, one could fix the gauge by choosing one representative for both space-like and light-like 3-surfaces consistent with the 2-data without losing any information about physics. The two options would then be equivalent. This interpretation seems the most plausible one to me.

Thursday, March 01, 2012

Two little observations about quantum p-adics

The two little observations to be made require some background about quantum p-adics.

  1. The work with quantum p-adics leads to the notion of arithmetic Feynman diagrams, with +q and ×q representing the vertices of diagrams and having an interpretation in terms of direct sum and tensor product. These vertices correspond to the TGD counterparts of the stringy 3-vertex and the Feynman 3-vertex. If generalized Feynman diagrams satisfy the rules of quantum arithmetics, all loops can be eliminated by moves representing the basic rules of arithmetics, and in the canonical representation of a generalized Feynman diagram the diagrams are invariant under permutations of outgoing resp. incoming legs, with incoming legs involving only vertices and outgoing legs only co-vertices. Modifications are possible and would be due to braiding, meaning that the exchange of particles is not a mere permutation represented trivially. These symmetries are consistent with the prediction of zero energy ontology that virtual particles are pairs of on-mass-shell massless particles. The kinematic mass shell constraints indeed imply an enormous reduction in the number of allowed diagrams. This also means a far reaching generalization of the duality symmetry of the old fashioned hadronic string model. I proposed this idea years ago but gave it up as too "romantic".

  2. A beautiful connection with infinite primes emerges, and p-adic primes characterize collections of collections ... of quantum rationals which describe quantum dimensions of pairs of Hilbert spaces assignable to time-like and space-like braids ending at partonic 2-surfaces.

  3. The interpretation for the decomposition of quantum p-adic integers into quantum p-adic prime factors is in terms of a tensor product decomposition into quantum Hilbert spaces with quantum prime dimensions lq, and can be related to the singular covering spaces of the imbedding space allowing to describe the many-valuedness of the normal derivatives of imbedding space coordinates at the space-like ends of space-time sheets at the boundaries of CD and at light-like wormhole throats. The further direct sum decompositions corresponding to different quantum p-adic primes assignable to l > p and represented by various quantum primes lq projecting to l in turn have an interpretation in terms of p-adicity. The decomposition of n into primes corresponds to a braid with strands labeled by primes representing Hilbert space dimensions.

  4. This gives a connection with the hierarchy of Planck constants, dark matter, and quantum arithmetics. The strands of the braid labeled by l decompose into strands corresponding to the different sheets of the covering associated with the singular covering of the imbedding space: here one has however a quantum direct sum decomposition meaning that the particles are delocalized in the fiber of the covering.

    The conservation of number theoretic multiplicative momenta at the ×q vertex allows to deduce the selection rules telling what happens in vertices involving particles with different values of Planck constant. There are two options depending on whether r = hbar/hbar0 satisfies r=n or r=n+1, where n characterizes the Hilbert space dimension assignable to the covering of the imbedding space. For both options one can imagine a concrete phase transition leading from inanimate matter to living matter, interpreted in terms of phases with non-standard value of Planck constant.

Consider now the two little observations.

  1. The first little observation is that these selection rules mean a deviation from the earlier proposal that only particles with the same value of Planck constant can appear in a given vertex. This assumption explains why dark matter, identified as phases with non-standard value of Planck constant, decouples from ordinary matter at vertices. This explanation is however not lost, albeit it is weakened. If the ×q vertex contains two particles with r=n+1 for the r=n option (r=1 or 2 for the r=n+1 option), also the third particle has the ordinary value of Planck constant, so that ordinary matter effectively decouples from dark matter. For the +q vertex the decoupling of ordinary from dark matter occurs for the r=n+1 option but not for the r=n option. Hence r=n+1 could explain the virtual decoupling of dark and ordinary matter from each other.

  2. The second little observation relates to the inclusions of hyper-finite factors, which should relate closely to quantum p-adic primes because finite measurement resolution should be describable by HFFs. For the prime p=2 one obtains the quantum dimension 2q = 2cos(2π/n) in the most general case: n=p corresponds to p-adicity and more general values of n to n-adicity. The interesting observation concerns the quantum dimension [M:N] obtained as the quantum factor space M/N for a Jones inclusion of a hyper-finite factor of type II1, with N interpreted as an algebra creating states not distinguishable from each other in the measurement resolution used. This quantum dimension is 2q² and has an interpretation as the dimension of a 2×2 quantum matrix algebra. This observation suggests the existence of an infinite hierarchy of inclusions with [M:N] = pq² labelled by primes p. The integer n would correspond to n-adicity, meaning p-adicity for the factors of n.

For details see the new chapter Quantum Adeles of "Physics as Generalized Number Theory".