https://matpitka.blogspot.com/2013/04/

Sunday, April 28, 2013

Self or only a model of self?

Negentropic entanglement provides a model for associations as rules: a superposition of tensor product states defines a rule, with the entangled pairs defining its various instances. This generalizes to N-fold tensor products. Associations would be realized as N-neuron negentropic entanglement, stable against NMP. One could also think of realizing associative areas in terms of neurons whose inputs form an entangled tensor product; when sensory inputs are received, they form an analogous tensor product in the representative degrees of freedom.

Thus negentropic entanglement is necessary for binding mental images (having sub-CDs as correlates) into mental images representing spatial patterns. Negentropic entanglement in the time direction for these patterns (zero energy states) is in turn necessary to bind them into sequences of mental images representing abstract memories. A negentropically entangled sequence would be the quantal counterpart of the original association sequence introduced as a purely geometric concept.

This picture however challenges the identification of self as a quantum jump. Should the negentropically entangled sequences of mental images define selves, so that self would be something characterizing the zero energy state rather than something identified as a quantum jump? Could they define a model of self to be distinguished from self identified as a quantum jump? Or could one give up the notion of self altogether and be satisfied with the model of self? At this moment it seems that nothing is lost by assuming only the model of self.

By definition, negentropic entanglement tends to be preserved in quantum jumps, so that it represents information as an approximate invariant: this conforms with the idea of invariant representations and, quite generally, with the idea that invariants represent the useful information. There is however a problem involved. This information would not be conscious if the original view about conscious information as a change of information is accepted. Could one imagine a reading mechanism in which this information is read without changing the negentropically entangled state at all? This reading process would be analogous to deducing the state of a two-state system in interaction-free measurement, to be discussed below in more detail.

The assumption that the self model is a negentropically entangled system which does not change in state function reduction leads to a problem. If the conscious information about this kind of subself corresponds to a change of negentropy in a quantum jump, it seems impossible to get this information. One can however consider a generalization of so-called interaction-free measurement as a manner to deduce information about the self model. This information would be obtained as sequences of bits and might correspond to declarative, verbal memories rather than direct sensory experiences.


  1. The bomb testing problem of Elitzur and Vaidman gives a nice concrete description of what happens in an interaction-free measurement.

    The challenge is to find out whether the bomb is a dud or not. The bomb explodes if it receives a photon with a given energy. The simplest test would explode all the bombs. Interaction-free measurement allows one to perform the test by destroying only a small number of bombs, and at the idealized limit no bombs are destroyed.

    The system involves four lenses and two detectors C and D (see the illustration in the link). At the first lens the incoming photon beam splits into reflected and transmitted beams: the path travelled by the transmitted beam contains the bomb.

    1. The bomb absorbs the photon with a probability equal to the fraction of the photon beam going along the path containing the bomb (the transmitted beam). The other possibility is that this measurement process creates a state in which the photon travels along the other path (is reflected). This photon then passes through a second lens and ends up at detector C or D.

    2. If the bomb is a dud, the photon travels along both paths and interference at the lens leads the photon to detector D. If C detects the photon, we know that the bomb was not a dud, without exploding it. If D detects the photon, the bomb may or may not be a dud, and we repeat the experiment until either the bomb explodes, C detects the photon, or D keeps firing so consistently that we conclude the bomb is a dud. This arrangement can be refined so that at the ideal limit no explosions take place at all.
  2. The measurement of the bomb's state is interaction-free in the sense that the state function reduction performed by the absorber/bomb can eliminate the usual interaction with the photon beam: the photon then travels along the path not containing the bomb. One might say that state function reduction is an interaction which can eliminate the usual interaction, changing the history of the photon so that it travels along the path not containing the bomb. (A minimal numerical sketch of this interferometer logic follows below.)
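
To make the probabilities concrete, here is a minimal amplitude-level sketch of the standard Mach-Zehnder version of the Elitzur-Vaidman test (my illustration, using conventional 50/50 beam-splitter phase conventions, not a piece of the TGD formulation). A dud routes every photon to detector D, while a live bomb explodes in about half of the runs and fires the "dark" detector C in about a quarter of them, revealing a live bomb without detonation.

```python
import random
from math import sqrt

def run_once(bomb_is_live: bool) -> str:
    """One photon through a balanced Mach-Zehnder interferometer.
    Arm A is the transmitted path (where the bomb sits), arm B the reflected one.
    Returns 'explosion', 'C' or 'D'."""
    # After the first beam splitter: amplitude 1/sqrt(2) in arm A, i/sqrt(2) in arm B.
    a_A, a_B = 1 / sqrt(2), 1j / sqrt(2)

    if bomb_is_live:
        # The live bomb acts as a which-path measurement on arm A.
        if random.random() < abs(a_A) ** 2:
            return "explosion"
        # Photon is projected onto arm B (renormalized).
        a_A, a_B = 0.0, 1.0

    # Second beam splitter recombines the arms.
    amp_D = 1j / sqrt(2) * a_A + 1 / sqrt(2) * a_B   # "bright" port for a dud
    amp_C = 1 / sqrt(2) * a_A + 1j / sqrt(2) * a_B   # "dark" port for a dud
    p_D = abs(amp_D) ** 2 / (abs(amp_D) ** 2 + abs(amp_C) ** 2)
    return "D" if random.random() < p_D else "C"

if __name__ == "__main__":
    for live in (False, True):
        counts = {"explosion": 0, "C": 0, "D": 0}
        for _ in range(100000):
            counts[run_once(live)] += 1
        print("live bomb" if live else "dud", counts)
    # dud: only D fires; live bomb: ~50% explosions, ~25% C (bomb detected
    # without explosion), ~25% D (inconclusive, repeat the run).
```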

This picture is only a metaphorical representation of something much more general.
  1. In the TGD framework the photon paths branching at the lenses correspond to branching 3-surfaces, analogous to branching strings in string models, and the photon wave splits into a sum of waves travelling along the two paths.

  2. The bomb could of course be replaced with any two-state system absorbing photons in one state but not in the other, say an atom. One would now test in which state the atom is, gaining one bit of information in the optimal situation. A two-state atom could thus represent a bit, and one could in principle read the bit sequence formed by atoms (say in a row) by this method without any photon absorption, so that the row of atoms would remain in its original state.
One can imagine several applications if the information to be read in an interaction-free manner can be interpreted as bit sequences represented as states of two-state systems. A laser with its ground state and excited state would be an analogous many-particle quantum system. In the TGD framework the analog of a laser, consisting of two space-time sheets with different sizes and different zero point kinetic energies, would be the analogous system.

For instance, one can consider a model of memory recall with memories realized as negentropically entangled states such that each state represents a qubit.

  1. Reading a particular qubit of memory means sending a negative energy photon signal to the past, which can be absorbed in the reading process. The problem is however that the memory representation is changed in this process, since the two-state system returns to its ground state. This could be seen as an analog of the no-cloning theorem (the read thoughts define the clone). Interaction-free measurement could help to overcome the problem partially: at the ideal limit the memory would not be affected at all, so that the no-cloning theorem would be circumvented at this limit.

  2. A possible problem is that the analogs of detectors C and D for a given qubit are in the geometric past, and one must be able to decide whether it was C or D that absorbed the negative energy photon! Direct conscious experience should tell whether detector C or D fired: could this experience correspond to the visual qualia black/white or, more generally, to a pair of complementary colors?

  3. ZEO means that zero energy states have both imbedding space arrows of time, and these arrows appear alternately. This dichotomy would correspond to the sensory representation-motor action dichotomy and would suggest that there is no fundamental difference between memory recall and future prediction by the self model: they differ only in the direction of the signal.

  4. Since photon absorption is the basic process, the conscious experience about the qubit pattern could be a visual sensation or even some other kind of sensory qualia induced by the absorption of photons. The model for the lipids of the cell membrane as pixels of a sensory screen suggests that neuronal/cell membranes could define a digital self model at the length scale of neurons.

For details see the new chapter Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness or the article with the same title.

Thursday, April 25, 2013

A vision about quantum jump as a universal cognitive process

Jeff Hawkins has developed in his book "On Intelligence" a highly interesting and inspiring vision about the neo-cortex, one of the few serious attempts to build a unified view about what the brain does and how it does it. Since the key ideas of Hawkins have quantum analogs in the TGD framework, there is high motivation for developing a quantum variant of this vision. The vision of Hawkins is very general in the sense that all parts of the neo-cortex would run the same fundamental algorithm, which is essentially checking whether the sensory input can be interpreted in terms of standard mental images stored as memories. This process occurs at several abstraction levels and involves massive feedback. If it succeeds at all these levels, the sensory input is fully understood.

TGD suggests a generalization of this process. The quantum jump defining a moment of consciousness would be the fundamental algorithm realized in all scales defining an abstraction hierarchy. Negentropy Maximization Principle (NMP) would be the variational principle driving this process and in the optimal case would lead to an experience of understanding at all levels of the scale hierarchy, realized in terms of the generation of negentropic entanglement. The analogy of NMP with the second law strongly suggests a thermodynamical analogy, and the p-adic thermodynamics used in particle mass calculations might also be seen as an effective thermodynamics assignable to NMP. The quantum jump sequence would be realised as alternate reductions at the future and past boundaries of causal diamonds (CDs) carrying the positive and negative energy parts of zero energy states.

The anatomy of the quantum jump implies an alternating arrow of geometric time at the level of the imbedding space. This looks strange at first glance but allows one to interpret the growth of syntropy as the growth of entropy in the reversed direction of imbedding space time. As a matter of fact, one actually has a wave function in the moduli space of CDs, and in state function reductions a localisation of either boundary takes place; this gradually leads to the increase of the imbedding space geometric time and implies the alternating arrow for this time. The state function reduction at the positive energy boundary of the CD has an interpretation as a process leading to a sensory representation accompanied by a p-adic cognitive representation. The time reversal of this process has an interpretation as motor action, in accordance with Libet's findings. This duality holds true in various length scales for CDs. In the same manner p-adic space-time sheets define cognitive representations and their time reversals define intentions. It seems that selves (identified earlier as quantum jumps) could be assigned to negentropically entangled collections of sub-CDs, and negentropic entanglement would stabilize them.

One can understand the fundamental abstraction process as the generation of negentropic entanglement serving as a correlate for the experience of understanding. This process creates new mental images (sub-CDs) and binds them into longer sequences of mental images (accumulation of experience by the formation of longer quantum association sequences). The abstraction process also involves a reduction of the measurement resolution characterizing cognitive representations, defined in terms of discrete chart maps mapping a discrete set of rational points of real preferred extremals to their p-adic counterparts and allowing completion to a p-adic preferred extremal. The reversal of this abstraction process gives rise to improved resolution and adds details to the representation. The basic cognitive process has as its building bricks this abstraction process and its reversal.
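
As a purely illustrative sketch (my own, not part of the text) of what a finite-resolution map from rationals to their p-adic counterparts could look like: a rational with denominator prime to p has a unique p-adic expansion, and a pinary cutoff O(p^N) keeps only its first N digits, so that two rationals define the same representation whenever they agree modulo p^N.

```python
from fractions import Fraction

def padic_digits(q: Fraction, p: int, n: int) -> list:
    """First n digits of the p-adic expansion of a rational q whose denominator
    is not divisible by p, i.e. q mod p^n written in base p (cutoff O(p^n))."""
    if q.denominator % p == 0:
        raise ValueError("q must be a p-adic integer (denominator prime to p)")
    modulus = p ** n
    x = (q.numerator * pow(q.denominator, -1, modulus)) % modulus
    digits = []
    for _ in range(n):
        digits.append(x % p)
        x //= p
    return digits

if __name__ == "__main__":
    # -1/3 = 1 + 4 + 16 + ... 2-adically, so its digits alternate 1,0,1,0,...
    print(padic_digits(Fraction(-1, 3), p=2, n=8))
    print(padic_digits(Fraction(7, 5), p=3, n=6))
    # Two rationals map to the same finite-resolution representation exactly
    # when they differ by a multiple of p^n, i.e. are p-adically close.
```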


For details see the new chapter Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness or the article with the same title.

Monday, April 15, 2013

Riemann hypothesis and quasilattices

Freeman Dyson has presented a highly interesting speculation relating Riemann hypothesis to 1-dimensional quasicrystals (QCs). He discusses QCs and Riemann hypothesis briefly in his Einstein lecture.

Dyson begins from the defining property of a QC as a discrete set of points of Euclidian space for which the spectrum of wave vectors associated with the Fourier transform is also discrete. What this says is that a quasicrystal, like an ordinary crystal, creates a discrete diffraction spectrum. This presumably holds true also in higher dimensions than D=1, although Dyson considers mostly the D=1 case. Thus a QC and its dual would both correspond to discrete point sets. I will consider the consequences in the TGD framework below.

Dyson first considers QCs at a general level. He claims that QCs are possible only in dimensions D=1,2,3. I do not know whether this is really the case. In dimension D=3 the known QCs have icosahedral symmetry and there are only very few of them. In the 2-D case (Penrose tilings) there is n-fold symmetry, roughly one kind of QC associated with any regular polygon; Penrose tilings correspond to n=5. In the 1-D case there is no point group (subgroup of the rotation group), and this explains why the number of QCs is infinite. For instance, every PV number (a real algebraic integer larger than one, which is a root of a monic polynomial with integer coefficients whose other roots all have modulus smaller than unity) gives rise to a 1-D QC. The set of 1-D QCs is at least as rich a structure as the set of PV numbers and probably much richer.
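
As a concrete illustration (my own, not from Dyson's lecture): the golden ratio φ, a root of x^2 = x + 1 whose conjugate -1/φ has modulus smaller than one, is the simplest PV number, and the 1-D QC associated with it is the Fibonacci chain. The sketch below builds it from the substitution rule L → LS, S → L; only two spacings occur, arranged aperiodically, yet the diffraction spectrum of the resulting point set is discrete, which is exactly Dyson's defining property.

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio: root of x^2 = x + 1, the simplest PV number

def fibonacci_word(n_iterations: int) -> str:
    """Fibonacci substitution L -> LS, S -> L; the fixed point is aperiodic."""
    word = "L"
    for _ in range(n_iterations):
        word = "".join("LS" if c == "L" else "L" for c in word)
    return word

def fibonacci_chain(n_iterations: int) -> list:
    """1-D quasicrystal: points separated by long (PHI) and short (1) tiles."""
    points, x = [0.0], 0.0
    for c in fibonacci_word(n_iterations):
        x += PHI if c == "L" else 1.0
        points.append(x)
    return points

if __name__ == "__main__":
    pts = fibonacci_chain(10)
    print(len(pts), "points, first few spacings:",
          [round(b - a, 3) for a, b in zip(pts, pts[1:6])])
    # Only the two spacings 1 and PHI occur, in an aperiodic order; the Fourier
    # transform of this point set is nevertheless supported on a discrete
    # (countable, dense) set of Bragg peaks.
```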

Dyson suggests that the Riemann hypothesis and its generalisations might be proved by studying 1-D quasicrystals.

  1. If the Riemann Hypothesis is true, the spectrum of the Fourier transform of the distribution of zeros of Riemann zeta is discrete. The calculations of Andrew Odlyzko indeed demonstrate this numerically, which is of course not a proof. From Dyson's explanation I understand that the spectrum consists of sums of integer multiples n·log(p) of logarithms of primes, meaning that the non-vanishing Fourier components are, apart from an overall delta function (number of zeros), proportional to

    F(n) = ∑_{s_k} n^{-is_k} ≡ ζ_D(is_k) ,   s_k = 1/2 + iy_k ,

    where s_k are the zeros of zeta. ζ_D could be called the dual of zeta, with the summation over integers replaced by a summation over the zeros. For other "energies" than E = log(n) the Fourier transform would vanish. One can say that the zeros of Riemann zeta and the primes (or the p-adic "energy" spectrum) are dual. Dyson conjectures that each generalized zeta function (or rather, L-function) corresponds to one particular 1-D QC and that Riemann zeta corresponds to one very special 1-D QC. (A small numerical illustration of this duality follows below.)
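
The duality can be illustrated numerically (this is only an illustration, not a proof, and not Odlyzko's computation): summing exp(i·y_k·E) over the first few hundred zeros, the magnitude should stand out above the roughly sqrt(N)-sized background at energies E = m·log(p), the logarithms of prime powers. The sketch assumes the mpmath library for fetching the zeros and takes a little while to run.

```python
# Fourier transform of the distribution of zeta zeros, F(E) = sum_k exp(i*y_k*E).
# Peaks are expected at E = m*log(p) (logarithms of prime powers), standing out
# above the ~sqrt(N) fluctuation level coming from N zeros.
from math import log, sqrt
import cmath
from mpmath import zetazero

N = 200  # number of nontrivial zeros to use (more zeros -> sharper peaks)
zeros = [float(zetazero(k).imag) for k in range(1, N + 1)]

def F(E: float) -> complex:
    return sum(cmath.exp(1j * y * E) for y in zeros)

test_points = {
    "log 2": log(2), "log 3": log(3), "log 4": log(4), "log 5": log(5),
    "generic 0.8": 0.8, "generic 1.3": 1.3, "generic 1.5": 1.5,
}
print(f"background level ~ sqrt(N) = {sqrt(N):.1f}")
for name, E in test_points.items():
    print(f"|F({name})| = {abs(F(E)):.1f}")
```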

There are also intriguing connections with TGD, which inspire quaternionic generalization of Riemann Zeta and Riemann hypothesis.
  1. What is interesting is that the same "energy" spectrum (logarithms of positive integers) appears in an arithmetic quantum field theory assignable to what I call infinite primes. An infinite hierarchy of second quantizations of an ordinary arithmetic QFT is involved. At the lowest level the Fourier transform of the spectrum of the arithmetic QFT would consist of the zeros of zeta rotated by π/2! The algebraic extensions of rationals and the algebraic integers associated with them define an infinite series of infinite primes and also generalized zeta functions obtained by the generalization of the sum formula. This suggests a very deep connection between zeta functions, quantum physics, and quasicrystals. These zeta functions could correspond to 1-D QCs.

  2. The definition of a p-adic manifold (in the TGD framework) forces a discretisation of M4× CP2 having an interpretation in terms of finite measurement resolution. This discretization also induces a discretization of space-time surfaces by the induction of manifold structure. The discretisation of M4 (or E3) is achieved by crystal lattices, by QCs, and perhaps also by more general discrete structures. Could lattices and QCs be forced by the condition that the lattice-like structure defines a discrete distribution with a discrete spectrum? But why this?

  3. There is also another problem. Integration is a problematic notion in the p-adic context, and it has turned out that discretization is unavoidable and also natural in finite measurement resolution. The inverse of the Fourier transform however involves integration unless the spectrum of the Fourier transform is discrete, so that in both E3 and the corresponding momentum space integration reduces to a summation. This would be achieved if the discretisation is by a lattice or a QC, so that one would obtain the desired constraint on discretizations. Thus Riemann hypothesis has excellent mathematical motivations to be true in the TGD Universe!




  4. What could be the counterpart of Riemann zeta in the quaternionic case? A quaternionic analog of zeta suggests itself: formally one can define a quaternionic zeta using the same formula as for Riemann zeta.

    1. Riemann zeta characterizes ordinary integers, and s is in this case a complex number, an extension of the reals obtained by adding an imaginary unit. A naive generalization would be that quaternionic zeta characterizes Gaussian integers, so that s in the sum ζ(s) = ∑ n^{-s} should be replaced with a quaternion and n by a Gaussian integer. In an octonionic zeta s should be replaced with an octonion and n with a quaternionic integer. The sum is well-defined despite the non-commutativity of quaternions (non-associativity of octonions) if the powers n^{-s} are well-defined. Also the analytic continuation to the entire quaternionic/octonionic plane should make sense and could be performed stepwise, starting from the real axis for s, extending to the complex plane and then to the quaternionic plane. (A small numerical sketch of such quaternionic powers follows after this list.)

    2. Could the zeros s_k of the quaternionic zeta ζ_H(s) reside on the 3-D hyperplane Re(q) = 1/2, where Re(q) corresponds to the E4 time coordinate (one must also be able to continue to M4)? Could the duals of the zeros in turn correspond to the logarithms i·log(n), with n a Gaussian integer? The Fourier transform of the 3-D distribution defined by the zeros would in turn be proportional to the dual ζ_{D,H}(is_k) of ζ_H. The same applies to the octonionic zeta.

    3. The assumption that n is an ordinary integer in ζ_H would trivialize the situation. One would obtain the distribution of zeros of ordinary Riemann zeta on each line s = 1/2 + yI, with I any quaternionic imaginary unit, and the loci of the zeros would correspond to entire 2-spheres. The Fourier spectrum would not be discrete, since only the magnitudes of the quaternionic imaginary parts of the "momenta" would be imaginary parts of zeros of Riemann zeta while the direction of the momentum would be free. One would not avoid integration in the definition of the inverse Fourier transform, although the integrand would be constant in the angular degrees of freedom.
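
As a minimal sketch of how the formal definition in item 1 of the sub-list could be made concrete (my illustration; the ordering convention, the restriction to the first quadrant, and the truncation are assumptions, not part of the text): define n^{-s} = exp(-s·log n) with the Gaussian integer n embedded in the quaternions through its complex logarithm, and sum the truncated series for a quaternionic s whose real part is large enough for convergence.

```python
import cmath
import math
from typing import NamedTuple

class Quat(NamedTuple):
    """Quaternion w + x*i + y*j + z*k."""
    w: float
    x: float
    y: float
    z: float

    def __mul__(self, o):
        return Quat(
            self.w*o.w - self.x*o.x - self.y*o.y - self.z*o.z,
            self.w*o.x + self.x*o.w + self.y*o.z - self.z*o.y,
            self.w*o.y - self.x*o.z + self.y*o.w + self.z*o.x,
            self.w*o.z + self.x*o.y - self.y*o.x + self.z*o.w,
        )

def qexp(q: Quat) -> Quat:
    """Quaternion exponential: exp(w + v) = e^w (cos|v| + (v/|v|) sin|v|)."""
    vnorm = math.sqrt(q.x**2 + q.y**2 + q.z**2)
    scale = math.exp(q.w)
    if vnorm == 0.0:
        return Quat(scale, 0.0, 0.0, 0.0)
    s = scale * math.sin(vnorm) / vnorm
    return Quat(scale * math.cos(vnorm), s*q.x, s*q.y, s*q.z)

def gaussian_power(n: complex, s: Quat) -> Quat:
    """n^(-s) := exp(-s * log n) for a Gaussian integer n (principal branch).
    Note the ordering choice: exp(-s*log n) and exp(-(log n)*s) differ in general."""
    ln = cmath.log(n)                       # log n embedded as w + x*i
    log_q = Quat(ln.real, ln.imag, 0.0, 0.0)
    minus_s = Quat(-s.w, -s.x, -s.y, -s.z)
    return qexp(minus_s * log_q)

def zeta_H_partial(s: Quat, terms: int = 60) -> Quat:
    """Truncated sum over Gaussian integers n = a + bi in the first quadrant,
    purely as a formal illustration (convergence needs Re(s) large enough)."""
    total = Quat(0.0, 0.0, 0.0, 0.0)
    for a in range(1, terms):
        for b in range(0, terms):
            if a*a + b*b > terms*terms:
                break
            t = gaussian_power(complex(a, b), s)
            total = Quat(total.w + t.w, total.x + t.x, total.y + t.y, total.z + t.z)
    return total

if __name__ == "__main__":
    s = Quat(3.0, 0.5, 0.5, 0.0)            # a quaternionic argument with Re(s) = 3
    print(zeta_H_partial(s, terms=60))
```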

Thursday, April 04, 2013

AMS results as a support for lepto-hadron physics and M89 hadron physics?

The results of the AMS-02 experiment have been published. There is a paper, a live blog from CERN, and an article in the Economist. There is also a press release from CERN. Also Lubos has written a summary from the point of view of a SUSY fan who wants to see the findings as support for the discovery of the SUSY neutralino. More balanced and somewhat skeptical assessments paying attention to the hypeish features of the announcement come from Jester and Matt Strassler.

The abstract of the article is here.

A precision measurement by the Alpha Magnetic Spectrometer on the International Space Station of the positron fraction in primary cosmic rays in the energy range from 0.5 to 350 GeV based on 6.8 × 10^6 positron and electron events is presented. The very accurate data show that the positron fraction is steadily increasing from 10 to 250 GeV, but, from 20 to 250 GeV, the slope decreases by an order of magnitude. The positron fraction spectrum shows no fine structure, and the positron to electron ratio shows no observable anisotropy. Together, these features show the existence of new physical phenomena.

New physics has been observed. The findings confirm the earlier findings of Fermi and Pamela, which also showed a positron excess. The experimenters do not give data above 350 GeV but say that the flux of electrons does not change. The press release states that the data are consistent with dark matter particles annihilating into positron pairs. For instance, the flux of the particles is the same from all directions, which does not favor supernovae in the galactic plane as the source of the electron-positron pairs. According to the press release, AMS should be able to tell within the forthcoming months whether dark matter or something else is in question.

About the neutralino interpretation

Lubos trusts his mirror neurons and deduces from the body language of Samuel Ting that the flux drops abruptly above 350 GeV, as the neutralino interpretation predicts.

  1. The neutralino interpretation assumes that the positron pairs result from the decays χχ → e^+e^- and predicts a sharp cutoff above the mass scale of the neutralino, due to the reduction of the cosmic temperature below a critical value determined by the mass of the neutralino, which leads to the annihilation of the neutralinos (fermions). Not all neutralinos annihilate, and this would give rise to dark matter as a cosmic relic.


  2. According to the press release and to figure 5 of the article, the positron fraction settles to a small but constant value before 350 GeV. The dream of Lubos is that an abrupt cutoff takes place above 350 GeV: about this region we did not learn anything yet, because the measurement uncertainties are too high. From Lubos's dream I would intuit that the neutralino mass should be of the order of 350 GeV. The electron/positron flux is fitted as a sum of a diffuse background proportional to C_{e±} E^{-γ_{e±}} and a common-source contribution resulting from decays and parametrized as C_s E^{-γ_s} exp(-E/E_s), the same for electrons and positrons. The cutoff E_s is of the order of E_s = 700 GeV; the error bars are rather large. The factor exp(-E/E_s) does not vary too much in the range 1-350 GeV, so that the exponential is probably motivated by the possible interpretation in terms of a neutralino, for which a sharp cutoff is expected (a quick numerical check of this statement follows after this list). The mass of the neutralino should be of the order of E_s. The positron fraction represented in figure 5 of the article seems to approach a constant near 350 GeV. The weight of the common source is only 1 per cent of the diffuse electron flux.


  3. Lubos notices that in the neutralino scenario also a new interaction, mediated by a particle with mass of order 1 GeV, is needed to explain the decrease of the positron fraction above 1 GeV. It would seem that Lubos is trying to force the right foot into the left shoe. Maybe one could understand the low end of the spectrum solely in terms of a particle or particles with mass of order 10 GeV, and the upper end of the spectrum in terms of particles of M89 hadron physics.


  4. Jester lists several counter-arguments against the interpretation of the observations in terms of dark matter. The needed annihilation cross section must be two orders of magnitude higher than required for the dark matter to be a cosmic thermal relic; this holds true also for the neutralino scenario. A second problem is that the annihilation of neutralinos to quark pairs also predicts an antiproton excess, which has not been observed. One must tailor the couplings so that they favor leptons. It has also been argued that pulsars could explain the positron excess: the recent finding is that the flux is the same from all directions.
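
To see why the exponential factor in the fit mentioned in item 2 is so unconstrained at these energies, one can simply evaluate it. This small sketch uses only the E_s = 700 GeV order of magnitude quoted above; the other fit coefficients are not needed for the point.

```python
# The common-source term in the fit contains the factor exp(-E/E_s).
# With E_s = 700 GeV this factor changes only modestly over the measured
# range 1-350 GeV, so the data alone do not pin down a sharp cutoff.
from math import exp

E_s = 700.0  # GeV
for E in (1.0, 20.0, 100.0, 250.0, 350.0):
    print(f"E = {E:5.0f} GeV   exp(-E/E_s) = {exp(-E / E_s):.2f}")
# -> stays between 1.00 and 0.61 over the whole measured range.
```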

What could TGD interpretation be?

What can one say about the results in the TGD framework? The first idea that comes to mind is that the electron-positron pairs result from single-particle annihilations, but it seems that this option is not realistic. Fermion-antifermion annihilations are more natural and bring in a strong analogy with neutralinos, which would give rise to dark matter as a remnant remaining after annihilation in the cold dark matter scenario. An analogous scenario is obtained in the TGD Universe by replacing neutralinos with baryons of some dark and scaled-up variant of ordinary hadron physics or of leptohadron physics.

  1. The positron fraction increases from 10 to 250 GeV, with its slope decreasing between 20 GeV and 250 GeV by an order of magnitude. The observations suggest to my innocent mind a scale of order 10 GeV. The TGD inspired model for the already forgotten CDF anomaly, discussed in the chapter The recent status of leptohadron hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy", suggests the existence of τ pions with masses coming as the first three octaves of the basic mass, which is twice the mass of the τ lepton. I have proposed an interpretation of the positron excess observed by Fermi and Pamela, now confirmed by AMS, in terms of τ pions. The predicted masses of the three octaves of the τ pion would be 3.6 GeV, 7.2 GeV, and 14.4 GeV. Could the octaves of the τ pion explain the increase of the production rate up to 20 GeV and its gradual drop after that?

    There is a severe objection against this idea. The energy distribution of the τ pions dictates the width of the energy interval in which their decays contribute to the electron spectrum, and what suggests itself is that the decays of τ pions yield almost monochromatic peaks rather than the observed continuum extending to high energies. Any resonance should yield a similar distribution, and this suggests that the electron-positron pairs must be produced in two-particle annihilations of some particles.

    The annihilations of colored τ leptons with their antiparticles could however contribute to the spectrum of electron-positron pairs. Also the leptonic analogs of baryons could annihilate with their antiparticles into lepton pairs. For these two options the dark particles would be fermions, as the neutralino also is.


  2. Could colored τ leptons and τ-hadrons and their muonic and electronic counterparts really be dark matter? The particles might be dark matter in the TGD sense, that is, particles with a non-standard value of the effective Planck constant hbar_eff coming as an integer multiple of hbar. The existence of colored excitations of leptons and of pion-like states with mass in good approximation twice the mass of the lepton leads to difficulties with the decay widths of W and Z unless the colored leptons have a non-standard value of the effective Planck constant and therefore lack direct couplings to W and Z.

    A more general hypothesis would be that the hadrons of all the scaled-up variants of QCD-like physics (leptohadron physics and scaled variants of hadron physics) predicted by TGD correspond to a non-standard value of the effective Planck constant and to dark matter in the TGD sense. This would mean that these new scaled-up hadron physics would couple only very weakly to the standard physics.

  3. At the high energy end of the spectrum M89 hadron physics would naturally be involved, and also now the hadrons could be dark in the TGD sense. E_s might be interpreted as a temperature, which is in the energy range assigned to M89 hadron physics and would correspond to the mass of some M89 hadron. Fermions are natural candidates, and the annihilations of nucleons and anti-nucleons of M89 hadron physics could contribute to the spectrum of leptons at higher energies. The direct scaling of the M89 proton mass gives a mass of order 500 GeV, and this value is consistent with the limits 480 GeV and 1760 GeV for E_s.

  4. There could also be a relation to the observations of Fermi suggesting the annihilation of some bosonic state to gamma pairs with gamma energy around 135 GeV: this could be interpreted in terms of annihilations of an M89 pion with mass of 270 GeV (maybe an octave of a leptopion with mass 135 GeV, in turn an octave of a pion with mass 67.5 GeV). The simple scaling arithmetic behind these mass values is sketched below.
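
The mass values quoted in the list above all follow from simple scalings: a leptopion mass of about twice the charged lepton mass with p-adically allowed octaves (factors of two), and M89 masses obtained from ordinary (M107) hadron masses by the factor 2^((107-89)/2) = 512 implicit in "direct scaling". A back-of-the-envelope sketch of this arithmetic, assuming only these rules:

```python
# Arithmetic behind the mass estimates quoted above, assuming the p-adic mass
# scale goes like 2^(k/2) and that a leptopion mass is about twice the
# charged-lepton mass, with p-adically allowed octaves (factors of 2).
M_TAU = 1.777      # GeV, tau lepton mass
M_PROTON = 0.938   # GeV, proton mass

# Octaves of the tau-pion: ~3.6, 7.2, 14.4 GeV, as quoted in item 1.
tau_pion = 2 * M_TAU
print("tau-pion octaves [GeV]:", [round(tau_pion * 2**n, 1) for n in range(3)])

# M89 hadron physics: masses scale up from ordinary (M107) hadron physics
# by 2^((107-89)/2) = 2^9 = 512.
scale = 2 ** ((107 - 89) / 2)
print("M89 scale factor:", scale)
print("M89 proton mass estimate [GeV]:", round(scale * M_PROTON))  # ~480 GeV
print("M89 pion mass estimate  [GeV]:", round(scale * 0.135))      # ~69 GeV,
# close to the 67.5 GeV mentioned in item 4, whose octaves are 135 and 270 GeV.
```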

How to resolve the objections against dark matter as thermal relic?

The basic objection against dark matter scenarios is that dark matter particles as thermal relics annihilate also to quark pairs, so that an antiproton excess should also be observed. The TGD based vision could circumvent this objection as well.

  1. Cosmic evolution would be a sequence of phase transitions between hadron physics characterized by Mersenne primes Mn = 2^n - 1. The lowest Mersenne primes are M2=3, M3=7, M5=31, and M7=127; also M13, M17, M19, M31, M61, M89, and M107, the last one assignable to the ordinary hadron physics, are involved, and it might be possible to have also M127 (electrohadrons). There are also Gaussian Mersenne primes MG,n = (1+i)^n - 1: those labelled by n = 151, 157, 163, 167 span p-adic length scales in the biologically relevant range from 10 nm to 2.5 μm. (A short numerical check of these statements follows after this list.)

  2. The key point is that during a given period, characterised by Mn, the hadrons characterized by larger Mersenne primes would be absent. In particular, before the period of the ordinary hadrons only M89 hadrons were present, and they decayed to ordinary hadrons. Therefore no antiproton excess is expected - at least not by the mechanism producing it in the standard dark matter scenarios, where all dark and ordinary particles are present simultaneously.

  3. The second objection relates to the cross section, which must be two orders of magnitude larger than required by the cold dark matter scenarios. I am unable to say anything definite about this. The fact that both M89 hadrons and colored leptons are strongly interacting would increase the corresponding annihilation cross section, and leptohadrons could later decay to ordinary leptons.
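
The number theory quoted in item 1 above is easy to check numerically. The sketch below verifies which 2^n - 1 are prime for n up to 107, checks that (1+i)^n - 1 has a prime norm (a sufficient condition for being a Gaussian prime) for n = 151, 157, 163, 167, and lists the corresponding p-adic length scales, normalized as in the text so that L(151) is about 10 nm. The sympy dependency is assumed.

```python
# Mersenne primes 2^n - 1, Gaussian Mersennes (1+i)^n - 1, and the p-adic
# length scales quoted in item 1, normalized so that L(151) ~ 10 nm.
from sympy import isprime

mersenne_exponents = [n for n in range(2, 108) if isprime(2**n - 1)]
print("n with 2^n - 1 prime, n <= 107:", mersenne_exponents)
# -> 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107 (with 127 just above this range)

def gaussian_mersenne_norm(n: int) -> int:
    """Norm |(1+i)^n - 1|^2, computed with exact Gaussian-integer arithmetic."""
    a, b = 1, 0                     # start from 1 = 1 + 0i
    for _ in range(n):
        a, b = a - b, a + b         # multiply by (1 + i)
    a -= 1                          # subtract 1
    return a * a + b * b

for n in (151, 157, 163, 167):
    prime = isprime(gaussian_mersenne_norm(n))   # prime norm => Gaussian prime
    length_nm = 10 * 2 ** ((n - 151) / 2)        # p-adic length scale convention
    print(f"n={n}: Gaussian Mersenne prime: {prime}, L(n) ~ {length_nm:.0f} nm")
```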

Connection with strange cosmic ray events and strange observations at RHIC and LHC

The model could also allow one to understand the strange ultrahigh energy cosmic ray events (Centauros, etc.) suggesting the formation of a blob ("hot spot") of exotic matter in the atmosphere decaying to ordinary hadrons. In the center of mass system of the atmospheric particle and the incoming cosmic ray the cm energies are indeed of the order of the M89 mass scale. As suggested, these hot spots would be hot in the p-adic sense and correspond to a p-adic temperature assignable to M89. Also the strange events observed already at RHIC in heavy ion collisions and later at LHC in proton-heavy ion collisions, which are in conflict with perturbative QCD predicting the formation of a quark gluon plasma, could be understood as the formation of M89 hot spots (see this). The basic finding was that there were strong correlations: two particles tended to move either parallel or antiparallel, as if they had resulted from the decay of string like objects. The AdS/CFT inspired explanation was in terms of higher dimensional blackholes. The TGD explanation is more prosaic: string like objects (color magnetic flux tubes) dominating the low energy limit of M89 hadron physics were created.

The question whether M89 hadrons, or their cosmic relics, are dark in the TGD sense remains open. In the case of the colored variants of the ordinary leptons the decay widths of the weak bosons force this. It however seems that a coherent story about the physics in the TGD Universe is developing as more data emerges. This story is bound to remain a qualitative description: a quantitative approach would require a lot of collective theoretical work.

Also CDMS claims dark matter

Also CDMS (Cryogenic Dark Matter Search) reports new indications for dark matter particles: see the Nature blog article Another dark matter sign from a Minnesota mine. The experimenters have observed 3 events with an expected background of 0.7 events and claim that the mass of the dark matter particle is 8.6 GeV. This mass is much lighter than what has been expected: something above 350 GeV was suggested as an explanation of the AMS observations. The low mass is however consistent with the identification as the first octave of the tau-pion with mass about 7.2 GeV, for which the already forgotten CDF anomaly provided support years ago (as explained above, the p-adic length scale hypothesis allows octaves of the basic mass for the leptopion, which is in good approximation 2 times the mass of the charged lepton, that is 3.6 GeV). The particle must be dark in the TGD sense, in other words it must have a non-standard value of the effective Planck constant. Otherwise it would contribute to the decay widths of W and Z.