
Monday, June 03, 2013

More precise formulation of Negentropy Maximization Principle


Negentropy Maximization Principle (NMP) is assumed to be the variational principle telling what can happen in quantum jump: the information content of conscious experience for the entire system is maximized. In zero energy ontology (ZEO) the definition of NMP is far from trivial, and the recent progress - as I believe - in the understanding of the structure of quantum jump forces one to check carefully the details related to NMP. A very intimate connection between quantum criticality, life as something in the intersection of realities and p-adicities, the hierarchy of effective values of Planck constant, negentropic entanglement, and the p-adic view about cognition emerges. One also ends up with an argument for why the p-adic sector is necessary if one wants to speak about conscious information.


The anatomy of quantum jump in zero energy ontology (ZEO)

Zero energy ontology emerged around 2005 and has had profound consequences for the understanding of quantum TGD. The basic implication is that state function reductions occur at the opposite light-like boundaries of causal diamonds (CDs) forming a hierarchy, and produce zero energy states with opposite arrows of imbedding space time. Also concerning the identification of quantum jump as a moment of consciousness, ZEO encourages rather far-reaching conclusions. In ZEO the only difference between motor action and sensory representation on one hand, and intention and cognitive representation on the other, is that the arrows of imbedding space time are opposite for them. Furthermore, sensory perception followed by motor action corresponds to a basic structure in the sequence of state function reductions, and it seems that these processes occur fractally for CDs of various size scales.

  1. State function reduction can be performed at either boundary of CD but not at both simultaneously. State function reduction at either boundary is equivalent to state preparation giving rise to a state with well-defined quantum numbers (particle numbers, charges, four-momentum, etc.) at this boundary of CD. At the other boundary single particle quantum numbers are not well defined, although the total conserved quantum numbers at the boundaries are opposite by the zero energy property for every pair of positive and negative energy states in the superposition. State pairs with different total energy, fermion number, etc. at the other boundary are possible: for instance, the coherent states of super-conductor, for which fermion number is ill defined, are possible in zero energy ontology and do not break the super-selection rules.

  2. The basic objects coding for physics are the U-matrix, M-matrices and S-matrix. M-matrices correspond to orthogonal rows of the unitary U-matrix between zero energy states, and are expressible as products of a hermitian square root of a density matrix and of a unitary S-matrix which more or less corresponds to the ordinary S-matrix. One can say that quantum theory is formally a square root of thermodynamics. The thermodynamics in question would however relate more naturally to NMP than to the second law, which at the ensemble level and for ordinary entanglement can be seen as a consequence of NMP.

    The non-triviality of the M-matrix requires that for a given state reduced at, say, the "lower" boundary of CD there is an entire distribution of states at the "upper" boundary (a given initial state can lead to a continuum of final states). Even more, all size scales of CDs are possible since only the position of the "lower" boundary of CD is localized in quantum jump, whereas the location of the upper boundary of CD can vary, so that one has a distribution over CDs with different size scales and over their Lorentz boosts and translates.

  3. The quantum arrow of time follows from the asymmetry between the positive and negative energy parts of the state: one is prepared and the other corresponds to the superposition of the final states resulting when interactions are turned on. What is remarkable is that the arrow of time, at least at the imbedding space level, changes direction when the quantum jump occurs to the opposite boundary.

    This brings strongly to mind the old proposal of Fantappie that in living matter the arrow of time is not fixed and that entropy and its diametric opposite, syntropy, apply to the two arrows of imbedding space time. The arrow of subjective time assignable to the second law would hold true, but the increase of syntropy would be basically a reflection of the second law, since only the arrow of geometric time at the imbedding space level has changed sign. The arrow of geometric time at the space-time level, which a conscious observer experiences directly, could be always the same if quantum classical correspondence holds true in the sense that the arrow of time for zero energy states corresponds to the arrow of time for preferred extremals. The failure of strict determinism, making possible phenomena analogous to multifurcations, makes this possible.

  4. This picture differs radically from the standard view, and if quantum jump represents a fundamental algorithm, this variation of the arrow of geometric time from quantum jump to quantum jump should manifest itself in the functioning of the brain and of living organisms. The basic building brick in the functioning of the brain is the formation of a sensory representation followed by motor action. These processes look very much like temporal mirror images of each other, just as the state function reductions to the opposite boundaries of CD do. The fundamental process could correspond to a sequence of these two kinds of state function reductions for opposite boundaries of CDs, and maybe independently for CDs of different size scales in a "many-particle" state defined by a union of CDs.

How could the formation of cognitive and sensory representations relate to quantum jump?
  1. ZEO allows quantum jumps between different number fields, so that p-adic cognitive representations can be formed and intentional actions realized. How these quantum jumps are realized at the level of generalized Feynman diagrams is a non-trivial question: one possibility, suggested by the notion of adele combining reals and various p-adic number fields into a larger structure, is that the lines and vertices of generalized Feynman diagrams can correspond to different number fields.

    The formation of a cognitive representation could correspond to a quantum jump in which a real space-time sheet identified as a preferred extremal is mapped to its p-adic counterpart, or to a superposition of them with the property that the discretized versions of all p-adic counterparts are identical. In the latter case the chart map of the real preferred extremal would be quantal and correspond to a delocalized state in WCW. The p-adic chart mappings are not expected to take place always but only with some probabilities determined by the number theoretically universal U-matrix.

  2. A similar consideration applies to intentional actions realized as real chart maps for p-adically realized intentions. The natural interpretation of the process is as a time reversal of the cognitive map. The cognitive map would be generated from a real sensory representation, and intentional action would transform the time-reversed cognitive map to a real "motor" action identifiable as a time reversal of sensory perception. This would occur in various length scales in a fractal manner.

  3. The formation of superpositions of preferred extremals associated with discrete p-adic chart maps of real preferred extremals could be interpreted as an abstraction process. A similar abstraction could take place also in the mapping of a p-adic space-time surface to a superposition of real preferred extremals representing intentional action. The U-matrix should give also the probability amplitudes for these processes, and the intuitive idea is that the larger the number of common rational and algebraic points of the real and p-adic surfaces is, the higher the probability: the first guess is that the amplitude is proportional to the number of common points. On the other hand, a large number of common points means high measurement resolution, so that the number of different surfaces in the superposition tends to be smaller.

  4. One should not make any unnecessary assumptions about the order of the various kinds of quantum jumps. For the most general option real-to-p-adic and p-adic-to-real quantum jumps can follow any quantum jumps, and state function reductions to the opposite boundaries of CD can also occur at any time in any length scale. Also the resolution scale assignable to the cognitive representation should be determined probabilistically. Quantal probabilities should therefore apply to all aspects of quantum jump, and no ad hoc assumptions should be made. Very probably internal consistency allows only very few alternative scenarios. The assumption that the cascade beginning from a given CD continues downwards until it stops due to the emergence of negentropic entanglement looks like a rather natural constraint.

What happens in a single state function reduction?

State function reduction is a measurement of the density matrix. The condition that a measurement of the density matrix takes place implies standard measurement theory in both the real and p-adic sectors: the system ends up in an eigen-space of the density matrix. NMP is a stronger principle on the real side and implies state function reduction to a 1-D subspace - an eigenstate of the density matrix.

The resulting N-dimensional eigen-space has however rational entanglement probabilities p=1/N, so that one can say that it is in the intersection of realities and p-adicities. If the number theoretic variant of entanglement entropy is used as a measure for the amount of entropy carried by the entanglement, rather than by either entangled system, the state carries genuine information and is stable with respect to NMP if the p-adic prime p divides N. NMP allows only a single p-adic prime for the real → p-adic transition: this prime is the one whose power in the prime decomposition of N is the largest. Degeneracy means also criticality, so that ordinary quantum measurement theory for the density matrix favors criticality and NMP fixes the p-adic prime uniquely.
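A minimal numerical sketch of the number-theoretic entanglement entropy just described (the function names are my own, not from the text): for uniform probabilities P_k = 1/N one uses S_p = -∑_k P_k log |P_k|_p with the p-adic norm in place of the probability inside the logarithm, and S_p is negative - i.e. the state carries negentropy - exactly when p divides N.

```python
from math import log

def p_adic_norm(num, den, p):
    """p-adic norm |num/den|_p = p^(v_p(den) - v_p(num)),
    where v_p is the p-adic valuation (power of p dividing the integer)."""
    def valuation(n):
        v = 0
        while n % p == 0:
            n //= p
            v += 1
        return v
    return float(p) ** (valuation(den) - valuation(num))

def number_theoretic_entropy(N, p):
    """S_p = -sum_k P_k * log|P_k|_p for uniform probabilities P_k = 1/N."""
    return -sum((1.0 / N) * log(p_adic_norm(1, N, p)) for _ in range(N))

# For N = 12 = 2^2 * 3: S_2 = -2 log 2 < 0 (negentropy 2 log 2) and
# S_3 = -log 3 < 0, while for p = 5 (not dividing 12) S_5 = 0.
```

The negentropy is largest for the prime whose power in N is largest, matching the selection rule stated above.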

If one - contrary to the above conclusion - assumes that NMP holds true in the entire p-adic sector, NMP gives rise in the p-adic sector to a reduction of the negentropy in state function reduction if the original situation is negentropic and the eigen-spaces of the density matrix are 1-dimensional. This situation is avoided if one assumes that the state function reduction cascade in the real or genuinely p-adic sector occurs first (without NMP) and therefore gives rise to N-dimensional eigen-spaces. The state is negentropic and stable if the p-adic prime p divides N. Negentropy is generated.

The real state can be transformed to a p-adic one in quantum jump (defining a cognitive map) if the entanglement coefficients are rational, or belong to an algebraic extension of p-adic numbers in the case that an algebraic extension of p-adic numbers is allowed (number theoretic evolution gradually generates them). The density matrix can be expressed as a sum of projection operators multiplied by the probabilities for the projections to the corresponding sub-spaces. After the state function reduction cascade the probabilities are rational numbers of the form p=1/N.

Number theoretic entanglement entropy also makes it possible to avoid some objections related to fermionic and bosonic statistics. Fermionic and bosonic statistics require complete anti-symmetrization/symmetrization. This implies entanglement which cannot be reduced away. By looking at a symmetrized or antisymmetrized 2-particle state consisting of spin 1/2 fermions as the simplest example, one finds that the density matrix for either particle is simply proportional to the unit 2×2 matrix. This is stable under NMP based on number theoretic negentropy. One expects that the same result holds true in the general case. The interpretation would be that particle symmetrization/antisymmetrization carries negentropy.
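A small check of the two-fermion example above (the helper function and basis labels are my own illustration): tracing the antisymmetric spin singlet over either particle gives a reduced density matrix proportional to the 2×2 unit matrix, with degenerate eigenvalues 1/2, so the eigen-space dimension is N = 2 and the entanglement is negentropic for p = 2.

```python
from math import sqrt

def reduced_density_matrix(amplitudes):
    """Reduced density matrix of particle 1 for a two-spin state.
    amplitudes: dict {(s1, s2): coefficient} with real coefficients."""
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for (a, s), ca in amplitudes.items():
        for (b, t), cb in amplitudes.items():
            if s == t:  # trace over particle 2
                rho[a][b] += ca * cb
    return rho

# Antisymmetric spin singlet (|01> - |10>)/sqrt(2)
singlet = {(0, 1): 1 / sqrt(2), (1, 0): -1 / sqrt(2)}
rho = reduced_density_matrix(singlet)
# rho is (numerically) diag(1/2, 1/2): proportional to the 2x2 unit matrix
```

The degeneracy of the eigenvalues is what makes the entanglement irreducible under measurement of the density matrix, as the text argues.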

The degeneracy of the density matrix is of course not a generic phenomenon, and one can argue that it corresponds to some very special kind of physics. The identification of space-time correlates for the hierarchy of effective values hbareff=n×hbar of Planck constant as n-furcations of space-time sheets strongly suggests the identification of this physics in terms of this hierarchy. Hence quantum criticality, the essence of life as something in the rational intersection of realities and p-adicities, the hierarchy of effective values of hbar, negentropic quantum entanglement, and the possibility to make real-p-adic transitions and thus cognition and intentionality would be very intimately related. This is a highly satisfactory outcome, since these ideas have been rather loosely related hitherto.

What happens in quantum jump?

Suppose that everything can be reduced to what happens for a given CD characterized by a scale. There are at least two questions to be answered.

  1. There are two processes involved: state function reduction, and the quantum jump transforming a real state to a p-adic state (matter to cognition) and vice versa (intention to action). Do these transitions occur independently or not? Does the ordering of the processes matter? The proposed view about state function reduction strongly suggests that the p-adic ↔ real transition (if possible at all) can occur at any time without affecting the outcome of the state function reduction.

  2. The state function reduction cascade in turn consists of two different kinds of state function reductions. The M-matrix characterizing the zero energy state is a product of the square root of a density matrix and of a unitary S-matrix, and the first step means the measurement of the projection operator. It defines a density matrix for both the upper and lower boundary of CD, and these density matrices are essentially the same.

    1. At the first step a measurement of the density matrix between the positive and negative energy parts of the quantum state takes place for CD. One can regard both the lower and the upper boundary as an eigenstate of the density matrix in the absence of negentropic entanglement. The measurement is thus completely symmetric with respect to the boundaries of CD. In the real sector this leads to a 1-D eigen-space of the density matrix if NMP holds true. In the intersection of the real and p-adic sectors this need not be the case if the eigenvalues of the density matrix have degeneracy. The zero energy state becomes stable against further state function reductions! Interactions with the external world can of course destroy the stability sooner or later. An interesting question is whether so called higher states of consciousness relate to this kind of states.

    2. If the first step gave rise to a 1-D eigen-space of the density matrix, a state function reduction cascade follows at either the upper or the lower boundary of CD, proceeding from long to short scales. A given step divides a sub-system into two systems, and the sub-system-complement pair which produces the maximum negentropy gain is subject to quantum measurement. The process stops at a given subsystem if the resulting eigen-space is 1-D or has negentropic entanglement (the p-adic prime p divides the dimension N of the eigen-space in the intersection of reality and p-adicity).

For details and background see the section "Updates since 2012" of chapter "Negentropy Maximization Principle"
of "TGD Inspired Theory of Consciousness".

Friday, May 24, 2013

How could a sequence of quantum jumps give rise to the experience of a continuous flow of time?

I am trying to improve my understanding of the relationship between subjective and geometric time. Subjective time corresponds to a sequence of quantum jumps at a given level of the hierarchy of selves having causal diamonds (CDs) as correlates. Geometric time is the fourth space-time coordinate and has real and p-adic variants. This raises several questions.

  1. How do the subjective times at various levels of the hierarchy relate to each other? Should/could one somehow map sequences of quantum jumps at various levels to real or p-adic time values in order to compare them - as quantum classical correspondence indeed suggests?

  2. Subjective existence corresponds to a sequence of moments of consciousness: state function reductions at the opposite boundaries of CDs. State function reduction localizes either boundary, but the second boundary is in a quantum superposition of several locations and size scales for CD. We however experience time as a continuous flow. Is this a problem or not? One could argue that it is not possible to be conscious about being unconscious, so that gaps would not be experienced. But is it this simple? We are indeed able to experience the gap in sensory consciousness caused by sleeping over night (this does not mean we have been unconscious: we just do not remember).

  3. Subjective time is certainly not metricizable, whereas geometric time is and defines a continuum. But are moments of consciousness well-ordered as the values of the real variant of geometric time are? This relates closely to the relationship of subjective time to geometric time. Certainly subjective time does not allow any continuous measure in the real sense as geometric time does. One can however map moments of consciousness to integers.

    1. It would seem natural to be able to say about two moments of consciousness - call them A and B - whether A is before B or vice versa. Moments of consciousness would be well-ordered and could be mapped to integers ordered as real numbers. But is this always the case? There is experimental evidence that the consciously experienced time ordering does not always correspond to the physical one. This was observed already by Libet (I have tried to understand these findings for the first time here).

    2. What about p-adic integers as labels for moments of consciousness, as suggested by the vision about p-adic space-time sheets as correlates for cognition and intention (as time reversal of cognition)? Given p-adic integers m and n, one can only say whether the p-adic norm of m is larger than, smaller than, or equal to that of n. One can say that p-adic integers are only weakly ordered.

p-Adic integers form a continuum in p-adic topology. Could one map the infinite sequence of quantum jumps already occurred to p-adic integers and in this manner to a p-adic continuum instead of the real one? Could the p-adic cognitive representations make it possible to achieve this? If so, the experience of a continuous flow of time could be due to the p-adic topology of the cognitive representation for the sequence of quantum jumps!
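The weak ordering and the p-adic notion of nearness used above can be made concrete with a few lines of code (the function is my own sketch): the p-adic norm of an integer depends only on the power of p dividing it, so 2-adically 8 is nearer to 0 than 3 is, and n and n + k·p^N are close for large N even though they are far apart as reals.

```python
def p_adic_norm_int(n, p):
    """|n|_p = p^(-v), where p^v is the largest power of p dividing n (|0|_p = 0)."""
    if n == 0:
        return 0.0
    v, n = 0, abs(n)
    while n % p == 0:
        n //= p
        v += 1
    return float(p) ** (-v)

# Weak ordering: |8|_2 = 1/8 < |3|_2 = 1, although 8 > 3 as real numbers.
# Nearness: |(5 + 7*2**10) - 5|_2 = 2**-10, so 5 and 5 + 7168 are
# 2-adically very close although very far apart in the real sense.
```

Only the norms can be compared; there is no "less than" between two p-adic integers themselves, which is the weak ordering the text refers to.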

Could p-adic integers label moments of consciousness and explain why we experience conscious flow of time?

The following arguments give a more precise formulation for the idea that p-adic integers might label the sequence of quantum jumps at the level of conscious experience, or rather of reflective consciousness involving various representations realized as "Akashic records" and read consciously by interaction-free measurements (assuming that they make sense in TGD: NMP considerably modifies the standard quantum measurement theory).

  1. Most p-adic integers, expressible as n = ∑_k n_k p^k, are infinite in the real sense, and in p-adic topology they form a continuum. Suppose that the infinite sequence of moments of consciousness that have already taken place can be labelled by p-adic integers, and look at what the outcome might be.

  2. This sounds very strange in the ears of a real analyst but is true: the integers n and n + k·p^N are, for large N, very near to each other p-adically. In the real sense they are very far apart. This makes it possible to fill the gaps between, say, the integers n=1 and n=2 by p-adic integers which are very large in the real sense.

  3. The p-adic correlate of the sequence of discrete quantum jumps/moments of consciousness would define a p-adic continuum, which in turn can be mapped to the real continuum by canonical identification. This map of the sequence of moments of consciousness to the p-adic continuum would be nice, but maybe tricky for anyone accustomed to thinking in terms of real topology!

This raises two questions.

  1. p-Adic integers are not well-ordered. Could one induce the well-ordering of real time to the p-adic context by mapping the p-adic time axis to the real one in a continuous manner, and in this manner achieve a mapping of moments of consciousness to the real time axis?

  2. Could the canonical identification map ∑_k n_k p^k → ∑_k n_k p^-k (or its appropriate modification) allow one to map p-adic integers to real numbers and in this manner induce the real well-ordering on the p-adic side? The problem is that a real number with a finite pinary expansion has a second, infinite expansion (1=.9999... is an example using the decimal expansion), so that two p-adic time values correspond to any real time value with a finite number of pinary digits. Should one restrict the consideration to integers with a finite number of pinary digits (finite measurement resolution) and select either branch? Could the two branches correspond to real time coordinates assignable to the opposite boundaries of CD defining two conscious selves in this scale?
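The canonical identification map ∑_k n_k p^k → ∑_k n_k p^-k can be sketched numerically for ordinary (finite) integers; the function name is my own:

```python
def canonical_identification(n, p):
    """Map the p-adic integer n = sum_k n_k p^k (here an ordinary finite
    integer) to the real number sum_k n_k p^(-k)."""
    x, k = 0.0, 0
    while n > 0:
        n, digit = divmod(n, p)  # peel off base-p digits n_k
        x += digit * float(p) ** (-k)
        k += 1
    return x

# 2-adically 6 = 0*1 + 1*2 + 1*4, which maps to 0 + 1/2 + 1/4 = 0.75
```

The map is continuous in the p-adic topology but, as noted above, not injective on the reals: finite and infinite pinary expansions of the same real number give two p-adic preimages.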

What happens when I type letters in wrong order?

One can speak about sensory and cognitive orderings of events, corresponding to reals and p-adics (for various values of the prime p, of course). The cognitive ordering of events would not be a well-ordering if cognition is p-adic. Is there any empirical support for this besides Libet's mysterious looking findings?

Maybe. For instance, as I am typing text I experience that I am typing the letters of the word in the correct order, but now and then it happens that the order is changed; even the order of syllables and sometimes even that of short words can change. It is probably easy to cook up a very mundane explanation in terms of neuroscience, or even of the electric circuits from keyboard to computer memory, or the computer itself. One can however also ask whether this could reflect the fact that the p-adic ordering of the intentions to type letters is not a well-ordering and does not always correspond to the real number based order of what happened?

In TGD Universe the writing process involves a sequence of transformations of a p-adically realized intention to type a letter into a real action (doing it). At the space-time level it is therefore a map from the p-adic realm to the real realm by a variant of canonical identification, crucial in the definition of the p-adic manifold concept assigning to a real preferred extremal of Kähler action a p-adic preferred extremal in finite measurement resolution (see this).

The variant of canonical identification in question defines chart maps from the real to the p-adic realm and vice versa, and is defined in such a manner that the rationals in a finite subset of rationals are mapped to themselves, defining the intersection of the real and p-adic realms.

  1. In the case of p-adic integers this subset is characterized by a cutoff telling the power of p below which p-adic integers and real integers correspond to each other as such. For the corresponding moments of consciousness (now intentions to type a letter) one has the same ordering in both realms. For integers containing higher powers of p a variant of canonical identification mapping p-adics to reals continuously is applied. In this case ordering anomalies can appear.

  2. Another pinary cutoff comes from physics: real preferred extremals are mapped to p-adic preferred extremals and vice versa. Without the cutoff the p-adic image of a real extremal would be continuous but non-differentiable, so that the field equations would not make sense. The cutoff tells the largest power of p up to which the variant of canonical identification is performed for p-adic integers. Also now ordering anomalies appear if one regards p-adic integers as ordinary integers.

  3. For the remaining integers the map is obtained by completing the discrete set of points to a preferred extremal of Kähler action on both the real and p-adic sides, so that physics enters the game. This assignment need not be unique, and the most natural manner to handle the non-uniqueness is to form a quantum superposition of all allowed completions with the same amplitude: this effective gauge invariance would be very natural from the point of view of finite resolution and conforms with the vision about inclusions of hyper-finite factors as a representation of finite measurement resolution giving rise to the analog of dynamical gauge symmetry.

Could the strange inconsistencies between the cognitive (sequence of intentions) and sensory (sequence of typed letters) time orderings reflect the fact that the ordering of p-adic integers as real integers is not the same as the ordering of their real images under canonical identification? Could it be possible to test this and perhaps deduce the prime p characterizing the p-adic topology of the cognitive representation in question?
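The ordering anomalies in question are easy to exhibit for small integers (a sketch with my own helper, assuming the simplest form of canonical identification with no cutoffs):

```python
def ci(n, p):
    """Canonical identification I(n) = sum_k n_k p^(-k) of the base-p digits n_k of n."""
    x, k = 0.0, 0
    while n > 0:
        n, d = divmod(n, p)
        x += d * float(p) ** (-k)
        k += 1
    return x

# Real ordering: 3 < 4, but the 2-adic images come in the opposite order:
# ci(3, 2) = 1 + 1/2 = 1.5, while ci(4, 2) = 0 + 0 + 1/4 = 0.25
anomaly = (3 < 4) and (ci(3, 2) > ci(4, 2))
```

For a sequence of intentions labelled 1, 2, 3, 4, ... the images would thus be realized in a scrambled order, which is the kind of effect the text proposes to look for in typing errors.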

Wednesday, May 22, 2013

About God theory of Bernard Haisch

I have found that the best manner to learn about TGD is to read books about other theories, and after many years at the border of basic survival I now have the opportunity to do this, thanks to some generous people making it possible.

Just now I have been reading Bernard Haisch's book "The God Theory". Haisch himself is an astrophysicist who might have become a priest. The book discusses the possibility of spirituality consistent with physics. It also discusses the Zero Point Energy (ZPE) hypothesis and the idea that inertia might emerge from vacuum fluctuations of various fields.

I agree in many respects with Haisch's vision about the possibility of building a bridge between fundamental physics and spirituality. The new view about spirituality requires that a lot of the horrendous stuff of religions (such as eternal purgatory, the sadistic God of the Old Testament killing his own son, blind belief in dogmas, etc.) is thrown away. Where I disagree with Haisch is the notion of ZPE, but I think that I understand why he wants ZPE. In TGD all that can be done using ZPE can be replaced with zero energy ontology (ZEO) to achieve the possibility of re-creation without breaking of conservation laws: without the ability to generate new sub-Universes God would be a rather powerless creature. I also disagree with the idea that inertia follows from zero point fields, although again I understand the underlying motivations of the proposal as relating to a genuine problem of General Relativity. This problem also inspired TGD.

Haisch lists three questions usually regarded as highly non-scientific. Is there really a God? What am I? What is my destiny? As I started to build a theory of consciousness, these questions began to make more and more sense also to me. One must however be ready to give up some dogmas, such as God as a sage with white hair and a long beard, the idea that we are nothing but our neurophysiology generating a brief flash of light in infinite darkness, and the belief that the heat death dictated by the second law is the eventual fate of the universe as a whole.

Putting Haisch in a box

When thinkers happen to encounter genuine thinking, they want to classify it in order to feel safe. For safety reasons some of us also debunk the new idea. The first classification is philosophical. I use three boxes for this purpose (safety reasons). The first box has the label "monism". It contains two smaller boxes. "Materialist" contains thinkers accepting only the third person view as an acceptable - objective - view about the world. To "Idealist" I assign those thinkers who accept only the first person view as fundamental. Most of my colleagues are happy to live in the box "Materialist". The second box has the label "Dualist" and contains thinkers accepting both first and third person views - also this box decomposes into smaller boxes depending on how closely the first and third person views are assumed to be related: if the correspondence is exactly 1-1, then the view reduces to materialism. To the third box - "Miscellaneous" - I put the others, and live also myself in this box.

Haisch performs the classification himself and completely voluntarily chooses the box "Idealist". Hence consciousness is the fundamental form of existence for him. In the TGD framework both first and third person perspectives are tolerated: consciousness is however in the quantum jump between quantum superpositions of objective realities identified as zero energy states, and does not define another world as it does in dualistic theories. As a matter of fact, in TGD several ontological levels are accepted: geometric existence at space-time and imbedding space levels in real and various p-adic versions, existence as zero energy states identified as spinor fields of the world of classical worlds (WCW), and subjective existence as quantum jumps.

Universe as God

Haisch postulates God as an infinite intelligence. We are God's eyes and ears through which God experiences her (no reference to gender here) own creation. Haisch's God is not the Newtonian clock-smith who creates a deterministic universe and then forgets it completely. This God is free to create universes whose laws she chooses freely using her infinite intelligence. This God is also somehow outside the realm of space-time.

The possibility of universes with different laws of physics inside each of them brings to mind inflationary cosmology, the multiverse, and the landscape of M-theory. Haisch indeed takes the inflationary scenario and the multiverse idea rather seriously and also talks about superstrings. The landscape of string theory is a catastrophe from the point of view of physics, but would fit with the idea about a God who can freely decide about the laws of physics within the limits of mathematical consistency. But what does mathematical consistency mean? Have M-theorists really thought about this?

What about TGD? In the TGD framework nothing prevents one from calling conscious selves gods, since free will is genuine and the essence of creation. Thus God is replaced with an infinite hierarchy of god-like entities. Nothing prevents one from calling the entire Universe God, which is re-creating itself in every quantum jump. This God has us as mental images, or to be more precise: as mental images of mental images of ..... of its mental images. The sequence could be rather long;-)!

Concerning the laws of physics, the situation in the TGD framework is the following. The surprising outcome already from the geometrization of loop spaces is that the geometry of the infinite-dimensional world of classical worlds (WCW) is expected to be unique if it consists of 4-D surfaces of some higher-dimensional space. This comes from mere mathematical existence requiring the WCW metric to have an infinite-dimensional group of isometries (a generalization of the various conformal symmetries of super string models). This means that also physics is unique just from its existence. As a matter of fact, in TGD there is no need to assume any physical existence behind mathematical existence, since consciousness is in quantum jumps. The space-time dimension and the choice of imbedding space are forced by very general mathematical conditions closely related to the structure of classical number fields. Four-dimensional Minkowski space and space-time dimension four are forced by the condition of maximal symmetries needed for the existence of WCW geometry.

Inflation is in the TGD framework replaced with quantum criticality making the Universe a maximally sensitive perceiver and motor instrument. Quantum criticality means the absence of scales (or actually a discrete hierarchy of them), and the flatness of 3-space (the 3-dimensional curvature scalar vanishes) is the correlate of quantum criticality in cosmology. The inflaton field producing matter via its decay is in the TGD framework replaced with monopole magnetic fluxes assignable to magnetic flux quanta, which near the Big Bang correspond to what I call cosmic strings. The decay of the magnetic energy of the flux quanta to particles produces matter and radiation. The basic difference to the string landscape is that the standard model symmetries apply in all these sub-cosmologies, although there are dynamical parameters distinguishing between them. Hence TGD is a highly predictive theory. Even God must bow to the laws of mathematics. TGD space-time is many-sheeted, and one has a Russian doll cosmology natural also in inflationary scenarios.

In superstring theory the landscape problem forces one to assume the anthropic principle: the fact that we exist becomes the basic guideline when we try to identify the particular universe in which we happen to live. In the TGD framework, Negentropy Maximization Principle (NMP) - stating that the conscious information gained in a quantum jump is maximal - implies evolution. Evolution gradually fine-tunes the values of various parameters so that they generate maximal intelligence. This implies that our existence indeed fixes the values of various parameters very precisely. Of course, there are some parameters, such as the Kähler coupling strength (analogous to a critical temperature), whose possible values are dictated by quantum criticality. Note that NMP challenges the second law as a universal law - at least a generalization is required in ZEO - and it is now clear that the recent view about the universe completely neglects the huge negentropy sources associated with the negentropic entanglement assignable to magnetic flux tubes carrying dark matter. In the human scale these resources - "Akashic records" - give rise to memories and plans for the future, ideas,...

The purpose of lives

Haisch adopts the vision of an endless sequence of reincarnations as a kind of "life-school" in which one transcends life by life to higher levels of consciousness - to an upper class in school (and sometimes to the same or even to a lower one).

This vision could have a rather concrete realization in the TGD framework. The average size scale of the personal causal diamonds (CDs) in their quantum superposition grows in a given quantum jump, and a biological death now and then does not stop this process. New sub-CDs also pop up, meaning the creation of new small sub-Universes which begin to evolve. Asymptotically the size of the personal CD approaches infinity - asymptotic Universe, asymptotic Godness;-)!

Biological death would not mean the end of consciousness, only a transformation to a new level: perhaps higher, perhaps the same, or maybe even lower. This depends on the karma - the law of action and reaction at the spiritual level, as Haisch puts it - that we have gathered by our deeds. Bad deeds reduce our level of consciousness, guaranteeing a return to a lower level in the hierarchy. This has a quite concrete quantum physical correlate: a reduction of the effective Planck constants reducing the quantal size scales of the magnetic flux tubes connecting us as bridges of attention to the rest of the world, and thus reducing the quantum coherence lengths and times characterizing us. It also reduces our long range goals from those dictated by a mission to short range goals dictated by opportunism.

What could happen in biological death?

"What is my fate?" is one of the questions of Haisch. A more concrete formulation for this question is "What happens to the magnetic body in biological death?". TGD framework provides the tools for a glass pearl game around this question.

It would not be too surprising if at least some upper layers of this onion-like structure were preserved. NMP might guarantee the approximate conservation of the entire magnetic body since its braiding serves as a correlate for negentropic entanglement defining "Akashic records", a kind of cumulative collective wisdom having as a counterpart Sheldrake's morphic fields defining among other things also species memory.

What does it mean that in the 4-D sense (contents of consciousness come from a 4-D imbedding space region: either boundary of a CD in a given scale) our biological body still exists as a sub-CD of the larger CD in which we continue to exist subjectively? Only the sensory input and motor output conscious to us have ceased in biological death.

Does my biological body continue its life in the reversed direction of imbedding space geometric time? The answer is negative if one relies on the assumption that the arrow of imbedding space time changes and that the folded bath towel argument for the arrow of 4-D time defined by thermodynamical entropy holds true: my body would continue by becoming older than it was at the moment of death. Not a very plausible or desirable scenario!

NMP requires that negentropic entanglement is generated at the moment of biological death and adds to the existing negentropic entanglement defining the "Akashic records" about the previous life, conserved in good approximation. What I painfully learned during my lifetime is not wasted! Attention directed at some target generates negentropic entanglement, whose correlate is the braiding of the magnetic flux tubes connecting the attending system to the attended one. Reconnection is the mechanism for building flux tube bridges between the systems.

The Tibetan Book of the Dead supports what NMP suggests: I direct my attention somewhere else from my biological body, which has become rather uninteresting. The new target of attention could be some brisk young life form which has not yet caught attention (almost anywhere on the planet, or even elsewhere but inside my personal CD: my magnetic body is big, with a size scale of - as I hope - about one hundred light years at least!). My new life would proceed in the opposite direction of imbedding space time (recall that two subsequent quantum jumps create zero energy states with opposite arrows of imbedding space geometric time). Maybe I remember the teachings of the Tibetan Book of the Dead and manage to direct my attention to a higher level in the self hierarchy, a larger CD, representing perhaps a collective level of consciousness.

If one takes fractality seriously, the death of civilizations and cultures could be a process analogous to biological death. It is difficult to avoid the feeling that this is something which could happen in the not so distant future. If this process corresponds to a quantum jump, NMP tells that negentropy is generated but does not exclude the possibility of a catastrophe in which even the entire species suffers extinction and some of our relatives, maybe bonobos, take the lead. The transition could also lead to a new, higher level of consciousness, with the prevailing materialistic world view being replaced with a new one. The individuals who have become aware of the need for a new world view, and of what it might be, could serve as seeds of the quantum phase transition.

ZPE or ZEO?

The laws of physics and conservation laws are the basic problem for Haisch and all those who want free will within the existing ontology of physics. Haisch is also a physicist, so the problem becomes even more difficult to circumvent! How can God re-create the reality without breaking the well-established conservation laws? Or are these laws just rules of a game that God has chosen to obey in this particular part of the multiverse? But would this lead to mere quantum randomness, and does statistical determinism mean a loss of genuine free will?

If I have guessed correctly, Haisch hopes that ZPE could help God over this problem, but in my opinion ZPE is mathematically hopelessly ill-defined and reflects the mathematical problems of quantum field theory rather than reality.

In the TGD framework ZPE is effectively replaced with ZEO - zero energy ontology instead of zero point energy. Zero energy states have vanishing total quantum numbers, so that re-creation can be carried out without breaking conservation laws, and the standard laws of physics remain true. One can assign to the positive (say) energy part of a zero energy state a conserved energy and other quantum numbers, and the positive and negative energy parts correspond to the initial and final states of a physical event in the usual positive energy ontology: no states - just events! Therefore there is room also for God in the TGD Universe. Together with re-creation as quantum jump one obtains maximal free will: in principle any zero energy state - even the vacuum - can be created. ZEO is also necessary for p-adic-real transitions representing the formation of thoughts and the realization of intentions as actions: these are essentially time reversals of each other in ZEO, as are also sensory perception and motor action, which generalize to completely universal concepts.

A possible test for ZEO would be the creation of zero energy states apparently breaking conservation laws in the framework of positive energy ontology. In cosmology the non-conservation of gravitational energy indeed takes place and can be understood in terms of ZEO: the energy and other quantum numbers are conserved only in the scale which corresponds to the spotlight of consciousness defined by one particular causal diamond (CD). Therefore also the consistency of the Poincare invariance of TGD with cosmology requires ZEO.

Are we continually creating tiny Universes as we transform our intentions, represented as p-adic space-time sheets, to actions, represented as real space-time sheets? Does the replacement of the personal CD with a larger one in quantum jump (perhaps increasing the effective value of Planck constant) involve also the generation of smaller sub-CDs representing mental images? Are our mental images these tiny Universes that we create?

How could one create a new sub-Universe in the laboratory? Quantum physicists would perhaps speak about generating long-lived enough quantum fluctuations creating matter from vacuum. I remember having seen a popular article about a planned experiment in which very intense laser beams would generate particle pairs from vacuum. Of course, the probability for generating a CD containing matter might be very small, but maybe for some selected CDs this is not the case!

The origin of inertia

Haisch and Rueda claim to have derived inertia, appearing as a mass parameter in Newton's equations, from vacuum energy - see this. The basic idea behind the derivation does not however make much sense to me. Here is a condensed form of the argument.

If one assumes that the quarks and electrons in such an object scatter this radiation, the semi-classical techniques of stochastic electrodynamics show that there will result a reaction force on that accelerating object having the form fr = −μa, where the μ parameter quantifies the strength of the scattering process. In order to maintain the state of acceleration, a motive force f must continuously be applied to balance this reaction force fr. Applying Newton’s third law to the region of contact between the agent and the object, f = −fr, we thus immediately arrive at f = μa, which is identical to Newton’s equation of motion.

I confess that I do not have the slightest idea what this statement might mean. The standard wisdom is that a particle to which no forces are applied does not suffer acceleration. Now it would suffer acceleration although the net force vanishes: f+fr=0.

The standard view is that in special relativity Poincare invariance combined with Noether's theorem makes it possible to assign to any system a conserved four-momentum and angular momentum. Given a variational principle coupling particles to fields, one automatically obtains the analog of Newton's equations, stating that the momentum lost/gained by the fields is gained/lost by the particles. Therefore in theories based on special relativity there are no problems.

In general relativity the situation however changes.

  1. First of all, space-time becomes curved and the symmetries behind Poincare invariance are lost. One cannot use Noether's theorem to deduce expressions for conserved quantities: this is an especially catastrophic outcome in quantum theory, where the conserved quantities interpreted as operators play a fundamental role. This was indeed the basic motivation of TGD: by replacing abstract space-time with a 4-D surface in a higher-D space possessing the symmetries of empty Minkowski space, one does not lose the classical conservation laws.

  2. There is also another, closely related problem. In the Newtonian approach to gravity, a test particle accelerated by gravitation experiences a genuine force. In general relativity the test particle however suffers no acceleration and feels no force. There seems to be no manner in which these pictures could be made consistent. Maybe Haisch and Rueda were thinking about this aspect when they made their attempt to derive inertia from vacuum energy in the general relativistic context.

    TGD provides a neat solution also to this problem. At the 4-D space-time level the orbit of a neutral test particle is indeed a geodesic line and the 4-D acceleration vanishes. At the 8-D imbedding space level the orbit of the test particle is not a geodesic line anymore, and it experiences a genuine 8-D acceleration, whose M4 part defines the Newtonian force. The CP2 part of the force is also present but can be neglected, since the scale of CP2 is so small (about 10^4 Planck lengths).
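    The submanifold-geometry fact behind this can be spelled out (a sketch in my own notation, not the author's). For a curve x^μ(s) on a surface embedded by coordinates h^k(x^μ) into the imbedding space, the ambient acceleration splits into a tangential part and a second fundamental form part:

    \[
    \frac{D^2 h^k}{ds^2}
      = h^k_{\ \mu}\,\frac{D^2 x^\mu}{ds^2}
      + H^k_{\ \mu\nu}\,\frac{dx^\mu}{ds}\,\frac{dx^\nu}{ds},
    \qquad h^k_{\ \mu} = \partial_\mu h^k ,
    \]

    where the covariant derivative in the first term is taken with respect to the induced metric and H^k_{μν} is the second fundamental form. For a geodesic of the induced metric the first term vanishes, but the second is generally nonzero: the 4-D acceleration is zero while the 8-D acceleration is not, and its M4 projection is what plays the role of the Newtonian force.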

Monday, May 20, 2013

Cold fusion is becoming real


Cold fusion is gradually becoming real: see Tommaso Dorigo's posting telling that Rossi's findings seem to be real (thanks to Ulla for the link). The preprint reporting the results of the tests is titled "Indication of anomalous heat energy production in a reactor device" and can be found from arXiv.

The history of cold fusion begins from the findings of Pons and Fleischmann around 1989. Cold fusion also has a prehistory: books were written about cold fusion in living matter before anyone spoke about cold fusion, but official science of course has never taken them seriously (see references in the chapter Nuclear String Hypothesis of "p-Adic length scale hypothesis and dark matter hierarchy").

Now cold fusion is studied behind the scenes, but in public the physics community refuses to take cold fusion seriously: it is against "the established laws of physics". And of course, the people doing hot fusion do not want to lose the impressive flow of research money to hot fusion research, which has continued decade after decade despite the absence of concrete results. Freeman Dyson has expressed his view about the situation here and here.

It takes an especially long time for super stringers, accustomed to thinking that physics has been reduced to Planck length scales, to accept that there might be some genuinely new nuclear physics there and that the reductionistic narrative about natural sciences as a march towards super string theory might be a great illusion. The comments of Lubos Motl about Rossi represent a perfect example of the gigantic arrogance of a person who believes that string theorists belong to a new species intellectually above all other human beings.

As a matter of fact, super string theory shares a lot with hot fusion. It is a failure as a theory just as hot fusion is a failure as a technological project, and its news to mankind is that the ultimate theory can neither predict nor explain, and that we must just accept this message as truth because the scientists who created it are so brilliant that they cannot be wrong.


Addition: The latest rant of Lubos about cold fusion appeared today. Now Lubos labels Tommaso Dorigo a crackpot, idiot, etc... The message is clear: everyone ready to consider the possibility that reductionism might not be quite correct is a lunatic. As usual, Lubos appeals to authority and attempts to label as swindlers the group which performed the tests. This is much more convenient than saying something about the contents. Lubos learns nothing: again and again he manages to be wrong, but nothing beats him down! Lubos is completely immune to facts.

For TGD based views about possible new physics behind cold fusion see this and this.

Sunday, May 19, 2013

Some critical questions relating to Maya and Akashic records

The discovery of representations as negentropically entangled states, defining approximate invariants under the quantum jump sequence and defining "Akashic records", and the idea about reading these representations using interaction free measurements, have meant a dramatic progress in the understanding of the TGD inspired theory of consciousness.

There are two basic objections against quantum theories of consciousness. How is it possible to have conscious information about something invariant under quantum jumps if only change is experienced consciously? The outcome of state function reduction in standard quantum theory is random: how can one understand freedom of choice and intentional behaviour in terms of state function reduction? NMP and the possibility of negentropic entanglement imply that TGD based quantum theory is not equivalent to the standard one, and this makes it possible to circumvent the objections.

There are however two further questions which I cannot yet answer. Can one really assume that the notion of interaction free measurement continues to make sense in the TGD framework? Could NMP make this notion exact, or does it make it impossible? Are the invariants, or at least their existence, experienced directly without interaction free measurement?

The experiments carried out to test whether the 40 Hz thalamocortical resonance is a correlate for conscious experience suggest that the resonance is present only when a new pattern is discovered, not when it has become a memory. The TGD inspired interpretation would be that the resonance accompanies negentropy gain and that quantum jump is necessary for conscious experience. However, the reports about higher states of consciousness (and also my own experiences) suggest that the invariants can be experienced directly when all thoughts (interaction free measurements) are eliminated. This experience cannot however be communicated: one understands but does not know what one understands. Therefore the original vision - that negentropic entanglement corresponds to conscious experience, an experience of pure understanding which is not communicable and which is in apparent contradiction with the basic hypothesis about quantum jump - would be correct after all!
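The interaction free measurement referred to throughout is the Elitzur-Vaidman scheme, and its probabilities follow from a two-line amplitude calculation. A minimal sketch with a standard 50/50 beam-splitter phase convention (conventions vary between textbooks; the probabilities below do not):

```python
# Elitzur-Vaidman "bomb tester" in a balanced Mach-Zehnder interferometer.
s = 1 / 2 ** 0.5

def bs(a, b):
    """50/50 beam splitter; the reflected amplitude picks up a factor i."""
    return s * (a + 1j * b), s * (1j * a + b)

# No bomb: the two arms interfere, detector D always fires and C never does.
u, l = bs(1, 0)          # photon enters one input port; (upper, lower) arms
c, d = bs(u, l)
assert abs(c) < 1e-12 and abs(abs(d) ** 2 - 1) < 1e-12

# Bomb in the lower arm: it absorbs the photon with probability |l|^2.
p_explode = abs(l) ** 2              # 1/2
c, d = bs(u, 0)                      # only the upper-arm amplitude survives
p_c, p_d = abs(c) ** 2, abs(d) ** 2  # 1/4 each
```

A click in the "dark" detector C occurs with probability 1/4 and reveals the bomb although no photon interacted with it: this non-destructive readout is what the proposal above borrows for reading the records.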

For details see the new chapter Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness or the article with the same title.

Monday, May 13, 2013

How to construct Akashic records and read them?

While reading a marvellous book "The Field" by Lynne McTaggart about evolution of ideas about the role of electromagnetic fields in biology and neuroscience, I became aware of two questions which I had not yet answered.

Realization of memory representations in terms of braided flux tubes

The first question is following: How various representations (sensory - , memory -,...) - "Akashic records" - are realized as negentropically entangled states?

Magnetic body should be the seat of memories in some sense.

  1. I have already earlier proposed this kind of realization, based on the observation that braiding in the time direction generates space-like braiding. Dancers on the parquet with their feet connected to the wall by threads illustrate the idea. When the dancers move on the parquet, their world lines define a time-like braiding in the 3-dimensional space-time assignable to the floor. Also the threads connecting the dancers to the wall get braided - or entangled, as one might also say. There is clearly a duality between time-like and space-like braidings: the running topological quantum computer program coded by the braiding in the time direction is stored as a space-like braiding defining a memory representation of what happened. Note that the same mechanism realizes also predictions and future plans as time reversed topological quantum computer programs in ZEO. CDs in various scales contain this kind of programs and their memory representations.

  2. I have also proposed that the geometric entanglement - braiding - of flux tubes defines a space-time correlate for quantum entanglement. In the case of topological quantum computation it would naturally be described by probabilities which are rational numbers (or perhaps even algebraic numbers in some algebraic extension of p-adic numbers, characterizing, together with the value of the p-adic prime, the evolutionary level of the system). Hence the notion of number theoretic negentropy makes sense, and one obtains a connection with topological quantum computation.

  3. The representation of memories in terms of space-like braiding of magnetic flux tubes connecting various systems would be universal, and not restricted to DNA-cell membrane system in which the flux tubes would connect DNA nucleotides or codons (this seems to be the more plausible option) with the lipids. One could indeed speak about Akashic records.

  4. The time reversals of these representations, defined by the zero energy states with the opposite arrow of imbedding space time, would define a representation for future predictions/plans in ZEO. For instance, the development of a seed to a full-grown organism could be coded in this manner in a time scale where the CD has a time scale of the order of the lifetime of the organism. Already Burr found evidence that the radiation field assignable to the seed has the same shape as the plant or animal (a salamander in his experiments). This energy field would naturally correspond to the magnetic body containing dark photon Bose-Einstein condensates. The Akashic records and their time reversals would naturally correspond to the morphic fields of Sheldrake: memories and future plans in time scales longer than the duration of the life cycle of an individual member of the species would be possible. Every scientist of course agrees that societies are busily predicting and planning their futures, but finds it very difficult to accept the idea that this could have some concrete quantum physical correlate.
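The number theoretic negentropy invoked in point 2 can be made concrete. A minimal sketch, assuming the p-adic-norm variant of Shannon entropy used in the TGD literature, S_p = -Σ P_k log |P_k|_p, which, unlike ordinary entropy, can be negative for rational probabilities:

```python
from fractions import Fraction
import math

def p_adic_valuation(n, p):
    """Exponent of the prime p in a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_norm(q, p):
    """|a/b|_p = p^(v_p(b) - v_p(a)) for a nonzero rational q = a/b."""
    q = Fraction(q)
    return p ** (p_adic_valuation(q.denominator, p)
                 - p_adic_valuation(q.numerator, p))

def number_theoretic_entropy(probs, p):
    """S_p = -sum P_k log|P_k|_p ; a negative value means negentropy."""
    return -sum(float(P) * math.log(p_adic_norm(P, p)) for P in probs)

# Maximally entangled pair with probabilities (1/2, 1/2) and prime p = 2:
S2 = number_theoretic_entropy([Fraction(1, 2)] * 2, 2)   # = -log 2 < 0
```

For the maximally entangled pair the result is -log 2: negative entropy, i.e. the entangled state carries information rather than losing it, which is the property the text relies on.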

How to construct and read conscious hologram?

While reading the book of McTaggart also a second question popped up: How the vision about brain as a conscious hologram is realized in the proposed conceptual framework?

The idea of a living system as a hologram has a strong empirical basis. One of the most dramatic demonstrations of the hologram like character of the brain was the discovery of Pietch that a salamander's brain can be sliced to pieces, shuffled like a deck of cards, and put back together. When the resulting stuff is returned to the head of the salamander, it recovers! This extreme robustness is very strong support for the non-local, hologram like storage of biological information. Ironically, Pietch tried to demonstrate that the theory of Karl Pribram about the brain as a hologram is wrong! In the TGD framework one can go even further and ask whether this robustness actually demonstrates that various representations (sensory, cognitive, memory, ...) are realized at the long magnetic flux loops and sheets of the magnetic body rather than in the brain.

The notion of conscious hologram is one of the key ideas of the TGD inspired theory of consciousness. Hitherto I have however not been able to find a really convincing concrete proposal for how the brain could be a hologram in the TGD Universe. The reading of memory and other representations by interaction free measurement leads to a natural proposal for what the hologram might be.

  1. Certainly the formation of the hologram must be closely related to the vision about universal Akashic records realized in terms of braided flux tubes and their non-destructive reading by interaction free measurement. Oversimplifying, for a given bit of the representation the photons scattered without interaction would kick either of the two detectors C and D associated with it to an excited state (see the Elitzur-Vaidman bomb tester). This process is very much like the absorption of photons by a photosensitive plate defining an ordinary hologram.

  2. The lipids of the cell membrane are good candidates for something in 1-1 correspondence with the basic units of this hologram (note the analogy with a computer screen - also a liquid crystal!). If one irradiates the laser like system formed by the detectors not only with the radiation scattered from the quantum Akashic records but with its superposition with a reference wave of the same frequency, one obtains a good candidate for a hologram. It would be defined by the excited quantum state consisting of the laser systems analogous to the detectors C and D. Any piece of the system should give an approximate representation of the memory, so that robustness of the representation is achieved.

  3. In the semiclassical treatment the probability that a given laser like detector is excited must be proportional to the modulus squared of the net field amplitude, which is a superposition of the reference wave and the scattered wave. Also, just as in the case of ordinary holograms, the irradiation of the laser like system by the negative energy counterpart of the reference wave (its phase conjugate, emitted in a state function reduction to the opposite boundary of CD) effectively generates the conjugate of the scattered wave, since only those parts of the system for which the probability to go to the excited state is high enough can return to the ground state with considerable probability. Note that this implies that the magnetic body contains geometric representations of the perceptive field, as indeed assumed. This is however not quite the classical hologram. Rather, the total number of absorbed negative energy phase conjugate photons for a given pixel defines the "real" picture. A given point of the hologram corresponds to an ensemble of laser like detectors, so that a statistically deterministic response is obtained as an ensemble average.
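The semiclassical rule in point 3 is just the ordinary holographic recording law. A toy calculation with illustrative complex amplitudes (my numbers, not the source's) shows how the cross terms in |E_ref + E_scat|^2 store the phase of the scattered wave and how phase-conjugate readout recovers it:

```python
import cmath

# Pixel excitation probability ~ |E_ref + E_scat|^2 (up to normalization).
E_ref = 1.0 + 0j                 # reference wave at one pixel
E_scat = 0.5 * cmath.exp(0.7j)   # scattered ("object") wave, phase 0.7 rad

t = abs(E_ref + E_scat) ** 2     # recorded response of the pixel

# |R + O|^2 = |R|^2 + |O|^2 + 2 Re(R conj(O)): the cross term carries the
# phase of the object wave, which a plain intensity record would lose.
cross = 2 * (E_ref * E_scat.conjugate()).real
assert abs(t - (abs(E_ref) ** 2 + abs(E_scat) ** 2 + cross)) < 1e-12

# Readout with the phase conjugate of the reference reproduces a term
# ~ |R|^2 conj(O): the time reversed (phase conjugate) object wave.
readout = t * E_ref.conjugate()
```

The phase-conjugate readout term is the classical analog of the negative energy photons of the text generating the conjugate of the scattered wave.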

How to realize this concretely?

  1. I have proposed that the lipids of the cell membrane could serve as pixels of sensory representations. They could indeed serve as the pixels of the conscious hologram. Each pixel should contain a large number of laser like "detectors" so that statistical determinism would be achieved.

  2. There should be a pair C and D of detectors such that either of them absorbs a photon in an interaction free measurement, so that a value of a bit is defined. Universality serves as a strong constraint as one tries to guess what C and D could be.

    1. The lipids at the two lipid layers of cell membrane could be in 1-1 correspondence with C and D. This option is not however universal.

    2. It is however quite possible that the magnetic fields involved are what I have called wormhole magnetic fields, which carry monopole flux and involve two space-time sheets carrying opposite net fluxes. As a matter of fact, all elementary particles correspond to flux quanta of wormhole magnetic fields. In this case the two sheets would naturally correspond to the detectors C and D, and in the simplest situation they would have the same Minkowski space projection. Universality of both detectors and holograms is achieved.
  3. The cyclotron Bose-Einstein condensates of charged particles at the magnetic flux tubes assignable to the lipids are good candidates for the laser like systems. There are however several options, since the magnetic flux tubes are closed and there are several manners to realize the closed flux tube structure.

    1. The DNA as topological quantum computer vision, and the view about the cell membrane as a sensory receptor communicating data to the magnetic body, which in turn sends control signals via DNA, suggest the following. Magnetic flux loops have a part connecting DNA with the nuclear or cell membrane (this would be the analog of the dipole of a dipole magnetic field) and a part which is long - even with the size scale of Earth - and corresponds to the magnetic field created by the DNA-cell membrane system. This picture applies both to the flux tubes of the ordinary magnetic field and to the flux tubes of the wormhole magnetic field.

    2. An assumption in accordance with the general role of the magnetic body is that the Akashic records reside at the short portions of the flux tubes connecting the lipids with the DNA codons: their braiding would define a basic example of universal representations in living matter. The laser like detectors would reside at the long portions of the flux tubes connecting the cell membrane and DNA. If wormhole magnetic fields are in question, the detectors C and D could correspond to the two parallel flux tubes carrying opposite monopole fluxes.

    3. Universality suggests that this picture allows many alternative realizations. In principle, the relative motion of any systems (partonic 2-surfaces with light-like orbits) connected by flux tubes could give rise to Akashic records. The lipids of the axonal membrane are excellent candidates for the pixels, and the flux tubes connecting the lipids to the microtubules would also define Akashic records, with the long parts of the flux tubes serving as the laser like system. The maximization of memory capacity would also explain why the neural pathways to the brain tend to maximize their lengths by connecting the right side of the body to the left hemisphere and vice versa.

  4. What remains still open is how to integrate the Josephson junctions defined by the lipid layers of the cell membrane into this picture.

For details see the new chapter Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness or the article with the same title.



Wednesday, May 08, 2013

Are dark photons behind biophotons?

After a few years with emphasis on particle physics, I have returned to the TGD inspired theory of consciousness and quantum biology. One reason is that no new results will emerge from the LHC for two years; another is the rather depressing situation in theoretical particle physics, which has continued for decades.

During the last years it has become more and more clear that biophotons are highly relevant for biology and neuroscience. In the TGD framework biophotons are decay products of dark photons, which are key players in the TGD inspired theory of consciousness and quantum biology. Therefore I felt that it is time to write a chapter about biophotons, taking into account also the quite recent progress in understanding the structure of quantum jump in zero energy ontology, which has led to a surprisingly detailed understanding of the basic structure of information processing in the brain. I attach below the abstract of the new chapter Are dark photons behind biophotons?, which can be found also as an article with the same title.


The TGD approach leads to the prediction that biophotons result when dark photons with a large value of effective Planck constant and a large wavelength transform to ordinary photons with the same energy. The collaboration with Lian Sidorov stimulated a more detailed look at what biophotons are. Also the recent progress in understanding the implications of the basic vision behind the TGD inspired theory of consciousness served as an additional motivation for a complementary treatment.

  1. The anatomy of quantum jump in zero energy ontology (ZEO) allows one to understand basic aspects of sensory and cognitive processing in the brain without ever mentioning the brain. The sensory perception - motor action cycle, with motor action allowing an interpretation as time reversed sensory perception, reflects directly the fact that state function reductions occur alternately at the two opposite boundaries of the causal diamond (which itself - or rather, the quantum superposition of CDs - changes in the process).

  2. Also the abstraction and de-abstraction processes in various scales, which are essential for neural processing, emerge already at the level of quantum jump. The formation of associations is one aspect of abstraction, since it combines different manners to experience the same object. Negentropic entanglement of two or more mental images (CDs) gives rise to rules in which the superposed n-particle states correspond to instances of the rule. Tensor product formation generating negentropic entanglement between new mental images and earlier ones generates longer sequences of memory mental images and gives rise to a negentropy gain generating the experience of understanding, of recognition - something which has positive emotional coloring. Quantum superposition of perceptively equivalent zero energy states in a given resolution gives rise to averaging. Increasing the abstraction level means a poorer resolution, so that insignificant details are not perceived.

  3. Various memory representations should be approximately invariant under the sequence of quantum jumps. Negentropic entanglement gives rise to this kind of stabilization. The assumption that the self model is a negentropically entangled system which does not change in state function reduction leads to a problem. If the conscious information about this kind of subself corresponds to the change of negentropy in quantum jump, it seems impossible to get this information. Quite generally, if a moment of consciousness corresponds to quantum jump and thus to change, how is it possible to have conscious information about a quantum state? Interaction free measurement however allows one to circumvent the problem: non-destructive reading of memories and future plans becomes possible in an arbitrarily good approximation.

    This memory reading mechanism can be formulated for both photons and phonons, and these two reading mechanisms could correspond to visual memories as imagination and auditory memories as internal speech. Therefore dark photons decaying to biophotons could be a crucial element of imagination, and the notion of bio-phonon could also make sense and even follow as a prediction. This would also suggest a correlation of biophoton emission with EEG, for which there is considerable evidence. The observation that biophotons seem to be associated only with the right hemisphere suggests that at least some parts of the right hemisphere prefer dark photons and are thus specialized to visual imagination: spatial relationships are the speciality of the right hemisphere. At least some parts of the left hemisphere might prefer dark photons in the IR energy range transforming to ordinary phonons in the ear, or dark phonons: the left hemisphere is indeed the verbal hemisphere specialized to linear linguistic cognition.

In the sequel I shall discuss biophotons in the TGD Universe as decay products of dark photons and propose among other things an explanation for the hyperbolic decay law in terms of quantum coherence and an echo like mechanism guaranteeing the replication of memory representations. Applications to biology, neuroscience, and consciousness are discussed, and also the possible role of biophotons in remote mental interactions is considered. Also the phenomenon of Taos hum is discussed as possible evidence for bio-phonons.

Saturday, May 04, 2013

Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness

Reading of two highly inspiring books led to a new chapter summing up the vision behind TGD inspired theory of consciousness and also generating some new ideas. The first book "On intelligence" is by Jeff Hawkins. The second book "Consciousness: the science of subjectivity" is by Antti Revonsuo. Below is the abstract of the new chapter.

Jeff Hawkins has developed a highly interesting and inspiring vision about the neo-cortex, one of the few serious attempts to build a unified view about what the brain does and how it does it. Since the key ideas of Hawkins have quantum analogs in the TGD framework, there is high motivation for developing a quantum variant of this vision. The vision of Hawkins is very general in the sense that all parts of the neo-cortex would run the same fundamental algorithm, which essentially checks whether the sensory input can be interpreted in terms of standard mental images stored as memories. This process occurs at several abstraction levels and involves massive feedback. If it succeeds at all these levels, the sensory input is fully understood.

TGD suggests a generalization of this process. Quantum jump defining the moment of consciousness would be the fundamental algorithm realized in all scales defining an abstraction hierarchy. Negentropy Maximization Principle (NMP) would be the variational principle driving this process and in the optimal case leading to an experience of understanding at all levels of the scale hierarchy, realized in terms of the generation of negentropic entanglement. The analogy of NMP with the second law strongly suggests a thermodynamical analogy, and the p-adic thermodynamics used in particle mass calculations might also be seen as an effective thermodynamics assignable to NMP.

In the following I will first discuss the ideas of Hawkins and then briefly summarize some relevant aspects of quantum TGD and TGD inspired theory of consciousness, in the hope that this could make the presentation comprehensible for a reader having no background in TGD (I hope I have achieved this). The presentation involves some new elements: the reduction of the old idea about motor action as time reversal of sensory perception to the anatomy of quantum jump in zero energy ontology (ZEO); interaction free measurement for photons and phonons as a non-destructive reading mechanism of memories and future plans (time reversed memories) represented 4-dimensionally as negentropically entangled states approximately invariant under quantum jumps (this resolves a basic objection against identifying quantum jump as a moment of consciousness), leading to the identification of analogs of imagination and internal speech as fundamental elements of cognition; and a more detailed quantum model for association and abstraction processes.

I will also compare various theories and philosophies of consciousness with the TGD approach, following the beautifully organized presentation of Revonsuo. Also anomalies of consciousness are briefly discussed. My hope is that this comparison makes explicit that the TGD based ontology of consciousness indeed circumvents the difficulties of monistic and dualistic approaches and also survives the basic objections that I have been able to invent hitherto.

For details see the new chapter Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness or the article with the same title.

Sunday, April 28, 2013

Self or only a model of self?

Negentropic entanglement provides a model for associations as rules in which a superposition of tensor product states defines the rule, with entanglement pairs defining its various instances. This generalizes to N-fold tensor products. Associations would be realized as N-neuron negentropic entanglement stable under NMP. One could also think of realizing associative areas in terms of neurons whose inputs form an entangled tensor product, and when sensory inputs are received they form an analogous tensor product in representative degrees of freedom.

Thus negentropic entanglement is necessary for binding mental images (having sub-CDs as correlates) to mental images representing spatial patterns. Negentropic entanglement in the time direction for these patterns (zero energy states) is in turn necessary to bind them to sequences of mental images representing abstract memories. A negentropically entangled sequence would be a quantal counterpart of the original association sequence introduced as a purely geometric concept.

This picture however challenges the identification of self as quantum jump. Should the negentropically entangled sequences of mental images define selves, so that self would be something characterizing the zero energy state rather than something identified as quantum jump? Could they define a model of self to be distinguished from self identified as quantum jump? Or could one give up the notion of self altogether and be satisfied with the model of self? At this moment it seems that nothing is lost by assuming only the model of self.

By definition negentropic entanglement tends to be preserved in quantum jumps, so that it represents information as an approximate invariant: this conforms with the idea of invariant representation and quite generally with the idea that invariants represent the useful information. There is however a problem involved. This information would not be conscious if the original view about conscious information as a change of information is accepted. Could one imagine a reading mechanism in which this information is read without changing the negentropically entangled state at all? This reading process would be analogous to deducing the state of a two-state system in interaction free measurement, to be discussed below in more detail.
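The number-theoretic negentropy underlying NMP can be made concrete with a small sketch. The formula S_p = -∑k Pk log(|Pk|_p), with |.|_p the p-adic norm, is the p-adic variant of Shannon entropy used in TGD for rational entanglement probabilities; the function names below are my own illustrative choices.

```python
from fractions import Fraction
import math

def padic_norm(q, p):
    """p-adic norm |q|_p = p**(-v), with v the p-adic valuation of the rational q."""
    num, den, v = q.numerator, q.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

def padic_entropy(probs, p):
    """Number-theoretic entropy S_p = -sum_k P_k log|P_k|_p for rational
    entanglement probabilities; a negative value signals negentropy."""
    return -sum(float(P) * math.log(float(padic_norm(P, p))) for P in probs)

# Maximally entangled two-state system: P_k = 1/2, 1/2.
probs = [Fraction(1, 2), Fraction(1, 2)]
print(padic_entropy(probs, 2))                             # -log 2 < 0
print(-sum(float(P) * math.log(float(P)) for P in probs))  # Shannon: +log 2
```

The 2-adic entropy is negative while the ordinary Shannon entropy of the same state is positive: the entangled state carries information rather than hiding it, which is why NMP can favor the generation and preservation of such entanglement.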

The assumption that the self model is a negentropically entangled system which does not change in state function reduction leads to a problem. If the conscious information about this kind of subself corresponds to the change of negentropy in quantum jump, it seems impossible to get this information. One can however consider a generalization of the so called interaction free measurement as a means to deduce information about the self model. This information would be obtained as sequences of bits and might correspond to declarative, verbal memories rather than direct sensory experiences.


  1. The bomb testing problem of Elitzur and Vaidman gives a nice concrete description of what happens in interaction free measurement.

    The challenge is to find out whether the bomb is a dud or not. The bomb explodes if it receives a photon with given energy. The simplest test would explode all bombs. Interaction free measurement makes it possible to perform the test by destroying only a small number of bombs, and at the idealized limit no bombs are destroyed.

    The system involves four lenses and two detectors C and D (see the illustration in the link). At the first lens the incoming photon beam splits into reflected and transmitted beams: the path travelled by the transmitted beam contains the bomb.

    1. The bomb absorbs the photon with a probability given by the fraction of the photon beam going to the path on which the bomb resides (transmitted through the lens). The other possibility is that this measurement process creates a state in which the photon travels along the other path (is reflected). This photon goes through a lens and ends up at detector C or D through a further lens.

    2. If the bomb is a dud, the photon travels through both paths and interference at the lens leads the photon to detector D. If C detects the photon, we know that the bomb was not a dud, without exploding it. If D detects the photon, the bomb was either a dud or not, and we can repeat the experiment: we stop if the bomb explodes or C detects the photon, and conclude that the bomb is a dud if the detector continues to be D. This arrangement can be refined so that at the ideal limit no explosions take place at all.
  2. The measurement of the bomb property is interaction free in the sense that the state function reduction performed by the absorber/bomb can eliminate the interaction: the photon then travels along the path not containing the bomb. One might say that state function reduction is an interaction which can eliminate the usual interaction with the photon beam. The state function reduction performed by the bomb can change the history of the photon so that it travels along the path not containing the bomb.
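The probabilities in the bomb test can be checked with a toy simulation. This is a sketch of the standard Mach-Zehnder arrangement (not of its TGD generalization), using the usual textbook conventions of 50/50 splitters and a reflection phase i; the bomb sits in one arm.

```python
import math
import random

def bomb_test(bomb_live, rng):
    """One run of the Elitzur-Vaidman bomb test on a Mach-Zehnder
    interferometer. Returns 'explode', 'C' or 'D'."""
    upper, lower = 1 / math.sqrt(2), 1j / math.sqrt(2)  # after first splitter
    if bomb_live:
        # The bomb measures the path: it absorbs with probability |lower|**2.
        if rng.random() < abs(lower) ** 2:
            return 'explode'
        upper, lower = 1.0, 0.0          # state collapses onto the free arm
    # The second splitter recombines the arms; |amp_C|**2 + |amp_D|**2 = 1.
    amp_C = (upper + 1j * lower) / math.sqrt(2)
    amp_D = (1j * upper + lower) / math.sqrt(2)
    return 'C' if rng.random() < abs(amp_C) ** 2 else 'D'

rng = random.Random(0)
runs = [bomb_test(True, rng) for _ in range(20000)]
print({k: runs.count(k) / len(runs) for k in ('explode', 'C', 'D')})
```

With a live bomb roughly half of the runs explode and a quarter end at C, certifying a live bomb without detonation; for a dud the amplitudes interfere so that the photon always reaches D, exactly as described above.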

This picture is only a metaphorical representation of something much more general.
  1. In the TGD framework the photon paths branching at lenses correspond to branching 3-surfaces analogous to branching strings in string models, and the photon wave splits into a sum of waves travelling along the two paths.

  2. The bomb could of course be replaced with any two-state system absorbing photons in one state but not in the other, say an atom. Now one would test which state the atom is in, gaining one bit of information in the optimal situation. A two-state atom could thus represent a bit, and one could in principle read the bit sequence formed by atoms (say in a row) by this method without any photon absorption, so that the row of atoms would remain in the original state.
One can imagine several applications if the information to be read in an interaction free manner can be interpreted as bit sequences represented as states of two-state systems. A laser with its ground state and excited state would be an analogous many particle quantum system. In the TGD framework the analog of a laser, consisting of two space-time sheets with different sizes and different zero point kinetic energies, would be the analogous system.

For instance, a model of memory recall with memories realized as negentropically entangled states such that each state represents a qubit can be considered.

  1. Reading a particular qubit of memory means sending a negative energy photon signal to the past, which can be absorbed in the reading process. The problem is however that the memory representation is changed in this process, since the two-state system returns to its ground state. This could be seen as an analog of the no-cloning theorem (the read thoughts define the clone). Interaction free measurement could help to overcome the problem partially: at the limit the memory would not be affected at all, so that the no-cloning theorem would be circumvented at this limit.

  2. A possible problem is that the analogs of detectors C and D for a given qubit are in the geometric past, and one must be able to decide whether it was C or D that absorbed the negative energy photon! Direct conscious experience should tell whether detector C or D fired: could this experience correspond to the visual quale black/white and more generally to a pair of complementary colors?

  3. ZEO means that zero energy states have both imbedding space arrows of time, and these arrows appear alternately. This dichotomy would correspond to the sensory representation-motor action dichotomy and would suggest that there is no fundamental difference between memory recall and future prediction by the self model: they differ only in the direction of the signal.

  4. Since photon absorption is the basic process, the conscious experience about the qubit pattern could be a visual sensation or even some other kind of sensory quale induced by the absorption of photons. The model for the lipids of the cell membrane as pixels of a sensory screen suggests that neuronal/cell membranes could define a digital self model at the length scale of neurons.

For details see the new chapter Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness or the article with the same title.

Thursday, April 25, 2013

A vision about quantum jump as a universal cognitive process

Jeff Hawkins has developed in his book "On Intelligence" a highly interesting and inspiring vision about the neo-cortex, one of the few serious attempts to build a unified view about what the brain does and how it does it. Since the key ideas of Hawkins have quantum analogs in the TGD framework, there is high motivation for developing a quantum variant of this vision. The vision of Hawkins is very general in the sense that all parts of the neo-cortex would run the same fundamental algorithm, which essentially checks whether the sensory input can be interpreted in terms of standard mental images stored as memories. This process occurs at several abstraction levels and involves massive feedback. If it succeeds at all these levels, the sensory input is fully understood.

TGD suggests a generalization of this process. Quantum jump defining the moment of consciousness would be the fundamental algorithm realized in all scales defining an abstraction hierarchy. Negentropy Maximization Principle (NMP) would be the variational principle driving this process and in the optimal case leading to an experience of understanding at all levels of the scale hierarchy, realized in terms of the generation of negentropic entanglement. The analogy of NMP with the second law strongly suggests a thermodynamical analogy, and the p-adic thermodynamics used in particle mass calculations might also be seen as an effective thermodynamics assignable to NMP. The quantum jump sequence would be realised as alternate reductions at the future and past boundaries of causal diamonds (CDs) carrying the positive and negative energy parts of zero energy states.

The anatomy of quantum jump implies an alternating arrow of geometric time at the level of imbedding space. This looks strange at first glance but makes it possible to interpret the growth of syntropy as the growth of entropy in the reversed direction of imbedding space time. As a matter of fact, one actually has a wave function in the moduli space of CDs, and in state function reductions a localisation of either boundary takes place, gradually leading to the increase of the imbedding space geometric time and implying the alternating arrow for this time. The state function reduction at the positive energy boundary of CD has an interpretation as a process leading to a sensory representation accompanied by a p-adic cognitive representation. The time reversal of this process has an interpretation as motor action, in accordance with Libet's findings. This duality holds true in various length scales for CDs. In the same manner p-adic space-time sheets define cognitive representations and their time reversals intentions. It seems that selves (identified earlier as quantum jumps) could be assigned to negentropically entangled collections of sub-CDs, and negentropic entanglement would stabilize them.

One can understand the fundamental abstraction process as the generation of negentropic entanglement serving as a correlate for the experience of understanding. This process creates new mental images (sub-CDs) and combines them into longer sequences of mental images (accumulation of experience by the formation of longer quantum association sequences). The abstraction process also involves a reduction of the measurement resolution characterizing cognitive representations, defined in terms of discrete chart maps mapping a discrete set of rational points of real preferred extremals to their p-adic counterparts allowing completion to a p-adic preferred extremal. The reversal of this abstraction process gives rise to improved resolution and adds details to the representation. The basic cognitive process has as its building bricks this abstraction process and its reversal.


For details see the new chapter Comparison of TGD Inspired Theory of Consciousness with Some Other Theories of Consciousness or the article with the same title.

Monday, April 15, 2013

Riemann hypothesis and quasilattices

Freeman Dyson has presented a highly interesting speculation related to Riemann hypothesis and 1-dimensional quasicrystals (QCs). He discusses QCs and Riemann hypothesis briefly in his Einstein lecture.

Dyson begins from the defining property of a QC as a discrete set of points of Euclidian space for which the spectrum of wave vectors associated with the Fourier transform is also discrete. What this says is that a quasicrystal, like an ordinary crystal, creates a discrete diffraction spectrum. This presumably holds true also in dimensions higher than D=1, although Dyson considers mostly the D=1 case. Thus a QC and its dual would both correspond to discrete point sets. I will consider the consequences in the TGD framework below.

Dyson first considers QCs at a general level. Dyson claims that QCs are possible only in dimensions D=1,2,3. I do not know whether this is really the case. In dimension D=3 the known QCs have icosahedral symmetry and there are only very few of them. In the 2-D case (Penrose tilings) there is n-fold symmetry, roughly one kind of QC associated with any regular polygon. Penrose tilings correspond to n=5. In the 1-D case there is no point group (subgroup of the rotation group), and this explains why the number of QCs is infinite. For instance, 1-D QCs are related to so-called PV numbers: algebraic integers larger than one which are roots of a polynomial with integer coefficients such that all the other roots have modulus smaller than unity. The structure of 1-D QCs is at least as rich as that of PV numbers and probably much richer.
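The connection between PV numbers and almost-periodic 1-D point sets can be illustrated with the simplest example, the golden ratio: its only Galois conjugate has modulus below one, so its powers approach integers (the Lucas numbers) exponentially fast - the kind of near-lattice behaviour behind Fibonacci-type 1-D quasicrystals.

```python
# The golden ratio phi = (1 + sqrt(5))/2 is the simplest PV number: its
# Galois conjugate (1 - sqrt(5))/2 ~ -0.618 has modulus < 1, so phi**n
# differs from the nearest integer (the Lucas number L_n) by |conjugate|**n.
phi = (1 + 5 ** 0.5) / 2
for n in (5, 10, 20):
    print(n, phi ** n, abs(phi ** n - round(phi ** n)))  # distance shrinks with n
```

Running this shows the distance to the nearest integer dropping by orders of magnitude as n grows, as the PV property guarantees.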

Dyson suggests that Riemann hypothesis and its generalisations might be proved by studying 1-D quasi-crystals.

  1. If Riemann Hypothesis is true, the spectrum for the Fourier transform of the distribution of zeros of Riemann zeta is discrete. The calculations of Andrew Odlyzko indeed demonstrate this numerically, which is of course not a proof. From Dyson's explanation I understand that the spectrum consists of integer multiples n log(p) of logarithms of primes, meaning that the non-vanishing Fourier components are, apart from an overall delta function (number of zeros), proportional to

    F(n) = ∑sk n^(-isk) ≡ ζD(i log(n)) , sk = 1/2 + iyk ,

    where the sk are zeros of zeta. ζD could be called the dual of zeta, with summation over integers replaced by summation over zeros. For other "energies" than E=log(n) the Fourier transform would vanish. One can say that the zeros of Riemann zeta and the primes (or the p-adic "energy" spectrum) are dual. Dyson conjectures that each generalized zeta function (or rather, L-function) corresponds to one particular 1-D QC and that Riemann zeta corresponds to one very special 1-D QC.
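The "energy" spectrum E = log(n) has a simple arithmetic interpretation: a state labeled by the integer n = ∏ p^m is a many-particle state whose single-particle modes are primes with energies log(p), and energies add under multiplication of integers. A minimal stdlib sketch (`factorize` and `energy` are my own illustrative names):

```python
import math

def factorize(n):
    """Prime factorization of n as a dict {p: multiplicity} (trial division)."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def energy(n):
    """Total 'energy' of the state labeled by n: each prime factor p is a
    single-particle mode of energy log(p), occupied m times."""
    return sum(m * math.log(p) for p, m in factorize(n).items())

# Energy is additive under the product of integers: E(12) = 2 log 2 + log 3.
print(energy(12), math.log(12))  # both equal log(12)
```

In this picture the duality just stated reads: the primes give the single-particle energies log(p), while the zeros of zeta give the dual spectrum.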

There are also intriguing connections with TGD, which inspire quaternionic generalization of Riemann Zeta and Riemann hypothesis.
  1. What is interesting is that the same "energy" spectrum (logarithms of positive integers) appears in an arithmetic quantum field theory assignable to what I call infinite primes. An infinite hierarchy of second quantizations of ordinary arithmetic QFT is involved. At the lowest level the Fourier transform of the spectrum of the arithmetic QFT would consist of the zeros of zeta rotated by π/2! The algebraic extensions of rationals and the algebraic integers associated with them define an infinite series of infinite primes and also generalized zeta functions obtained by generalizing the sum formula. This suggests a very deep connection between zeta functions, quantum physics, and quasicrystals. These zeta functions could correspond to 1-D QCs.

  2. The definition of a p-adic manifold (in the TGD framework) forces a discretisation of M4× CP2 having an interpretation in terms of finite measurement resolution. This discretization induces also a discretization of space-time surfaces by the induction of manifold structure. The discretisation of M4 (or E3) is achieved by crystal lattices, by QCs, and perhaps also by more general discrete structures. Could lattices and QCs be forced by the condition that the lattice like structures define discrete distributions with a discrete spectrum? But why this?

  3. There is also another problem. Integration is a problematic notion in the p-adic context, and it has turned out that discretization is unavoidable and also natural in finite measurement resolution. The inverse of the Fourier transform however involves integration unless the spectrum of the Fourier transform is discrete, so that in both E3 and the corresponding momentum space integration reduces to a summation. This would be achieved if the discretisation is by a lattice or a QC, so that one would obtain the desired constraint on discretizations. Thus Riemann hypothesis has excellent mathematical motivations to be true in the TGD Universe!




  4. What could be the counterpart of Riemann zeta in the quaternionic case? A quaternionic analog of zeta suggests itself: formally one can define quaternionic zeta using the same formula as for Riemann zeta.

    1. Riemann zeta characterizes ordinary integers, and s is in this case a complex number, an extension of reals obtained by adding an imaginary unit. A naive generalization would be that quaternionic zeta characterizes Gaussian integers, so that s in the sum ζ(s)=∑ n^(-s) should be replaced with a quaternion and n by a Gaussian integer. In octonionic zeta s should be replaced with an octonion and n with a quaternionic integer. The sum is well-defined despite the non-commutativity of quaternions (non-associativity of octonions) if the powers n^(-s) are well-defined. Also the analytic continuation to the entire quaternion/octonion plane should make sense and could be performed stepwise by starting from the real axis for s, extending to the complex plane and then to the quaternionic plane.

    2. Could the zeros sk of quaternionic zeta ζH(s) reside on the 3-D hyper-plane Re(q)=1/2, where Re(q) corresponds to the E4 time coordinate (one must also be able to continue to M4)? Could the duals of the zeros in turn correspond to logarithms i log(n), with n a Gaussian integer? The Fourier transform of the 3-D distribution defined by the zeros would in turn be proportional to the dual ζD,H(isk) of ζH. The same applies to the octonionic zeta.

    3. The assumption that n is an ordinary integer in ζH would trivialize the situation. One obtains the distribution of zeros of ordinary Riemann zeta on each line s = 1/2 + yI, with I any quaternionic imaginary unit, and the loci of zeros would correspond to entire 2-spheres. The Fourier spectrum would not be discrete, since only the magnitudes of the quaternionic imaginary parts of the "momenta" would be imaginary parts of zeros of Riemann zeta while the direction of the momentum would be free. One would not avoid integration in the definition of the inverse Fourier transform, although the integrand would be constant in the angular degrees of freedom.
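The formal definition via n^(-s) = exp(-s log n) can at least be evaluated numerically. The sketch below implements the quaternionic exponential and a partial sum of ζH(s) for ordinary integer n (the trivialized case of the last point; the Gaussian-integer version would additionally need a quaternionic logarithm of complex n). All names are illustrative.

```python
import math

def quat_exp(q):
    """exp of a quaternion q = (w, x, y, z): e^w (cos|v| + (v/|v|) sin|v|)."""
    w, x, y, z = q
    n = math.sqrt(x * x + y * y + z * z)
    c = math.exp(w)
    if n == 0.0:
        return (c, 0.0, 0.0, 0.0)
    s = c * math.sin(n) / n
    return (c * math.cos(n), s * x, s * y, s * z)

def quat_zeta(s, terms=2000):
    """Partial sum of zeta(s) = sum_n n**-s for quaternionic s,
    using n**-s = exp(-s log n) with ordinary integers n."""
    total = [0.0, 0.0, 0.0, 0.0]
    for n in range(1, terms + 1):
        ln = math.log(n)
        term = quat_exp(tuple(-c * ln for c in s))
        total = [a + b for a, b in zip(total, term)]
    return tuple(total)

# On the real axis the sum reduces to the ordinary zeta partial sum:
print(quat_zeta((2.0, 0.0, 0.0, 0.0))[0])  # approaches zeta(2) = pi**2/6
```

On any complex slice s = w + y I the sum reproduces the ordinary complex zeta partial sum, which is the numerical face of the 2-sphere degeneracy of the zeros noted in point 3.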

Thursday, April 04, 2013

AMS results as a support for lepto-hadron physics and M89 hadron physics?

The results of the AMS-02 experiment have been published. There is a paper, a live blog from CERN, and an article in the Economist. There is also a press release from CERN. Also Lubos has written a summary from the point of view of a SUSY fan who wants to see the findings as support for the discovery of the SUSY neutralino. More balanced and somewhat skeptical assessments paying attention to the hypeish features of the announcement come from Jester and Matt Strassler.

The abstract of the article is here.

A precision measurement by the Alpha Magnetic Spectrometer on the International Space Station of the positron fraction in primary cosmic rays in the energy range from 0.5 to 350 GeV based on 6.8 × 10^6 positron and electron events is presented. The very accurate data show that the positron fraction is steadily increasing from 10 to 250 GeV, but, from 20 to 250 GeV, the slope decreases by an order of magnitude. The positron fraction spectrum shows no fine structure, and the positron to electron ratio shows no observable anisotropy. Together, these features show the existence of new physical phenomena.

New physics has been observed. The findings confirm the earlier findings of Fermi and Pamela, which also showed a positron excess. The experimenters do not give data above 350 GeV but say that the flux of electrons does not change. The press release states that the data are consistent with dark matter particles annihilating to electron-positron pairs. For instance, the flux of the particles is the same everywhere, which does not favor supernovae in the galactic plane as the source of electron positron pairs. According to the press release, AMS should be able to tell within the forthcoming months whether dark matter or something else is in question.

About the neutralino interpretation

Lubos trusts his mirror neurons and deduces from the body language of Samuel Ting that the flux drops abruptly above 350 GeV, as the neutralino interpretation predicts.

  1. The neutralino interpretation assumes that the positron pairs result from the decays χχ→ e+e- and predicts a sharp cutoff above the mass scale of the neutralino, due to the reduction of the cosmic temperature below a critical value determined by the mass of the neutralino and leading to the annihilation of neutralinos (fermions). Not all neutralinos annihilate, and this would give rise to dark matter as a cosmic relic.


  2. According to the press release and figure 5 of the article, the positron fraction settles to a small but constant fraction before 350 GeV. The dream of Lubos is that an abrupt cutoff takes place above 350 GeV: about this region we did not learn anything yet because the measurement uncertainties are too high. From Lubos's dream I would intuit that the neutralino mass should be of the order 350 GeV. The electron/positron flux is fitted as a sum of a diffuse background proportional to Ce± E^(-γe±) and a contribution resulting from decays parametrized as Cs E^(-γs) exp(-E/Es) - the same for electron and positron. The cutoff Es is of order Es = 700 GeV: the error bars are rather large. The factor exp(-E/Es) does not vary too much in the range 1-350 GeV, so the exponential is probably motivated by the possible interpretation as neutralino, for which a sharp cutoff is expected. The mass of the neutralino should be of order Es. The positron fraction represented in figure 5 of the article seems to approach a constant near 350 GeV. The weight of the common source is only 1 per cent of the diffuse electron flux.


  3. Lubos notices that in the neutralino scenario also a new interaction mediated by a particle with mass of order 1 GeV is needed to explain the decrease of the positron fraction above 1 GeV. It would seem that Lubos is trying to force the right foot into the left shoe. Maybe one could understand the low end of the spectrum solely in terms of a particle or particles with mass of order 10 GeV and the upper end of the spectrum in terms of particles of M89 hadron physics.


  4. Jester lists several counter arguments against the interpretation of the observations in terms of dark matter. The needed annihilation cross section must be two orders of magnitude higher than required for the dark matter to be a cosmic thermal relic - this holds true also for the neutralino scenario. A second problem is that the annihilation of neutralinos to quark pairs predicts also an antiproton excess, which has not been observed. One must tailor the couplings so that they favor leptons. It has also been argued that pulsars could explain the positron excess: the recent finding is that the flux is the same from all directions.
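The fitted parametrization discussed in point 2 is easy to write down explicitly. The sketch below uses purely illustrative parameter values (only Es ~ 700 GeV is taken from the text; the normalizations and spectral indices are made up) just to show how a common, harder source term makes the positron fraction rise with energy.

```python
import math

def positron_fraction(E, Ce, ge, Cp, gp, Cs, gs, Es):
    """Diffuse power laws for e- and e+ plus a common source term
    C_s E**(-g_s) exp(-E/E_s) contributing equally to both species."""
    source = Cs * E ** -gs * math.exp(-E / Es)
    phi_em = Ce * E ** -ge + source   # electron flux
    phi_ep = Cp * E ** -gp + source   # positron flux
    return phi_ep / (phi_ep + phi_em)

# Illustrative parameters: the harder source spectrum (gs < ge, gp) makes
# the fraction grow with E until the exp(-E/Es) cutoff takes over.
for E in (10.0, 100.0, 300.0):
    print(E, positron_fraction(E, Ce=1.0, ge=3.2, Cp=0.6, gp=3.5,
                               Cs=0.01, gs=2.4, Es=700.0))
```

With these made-up values the fraction increases monotonically over the measured range, qualitatively reproducing the shape that the exponential cutoff is meant to terminate above Es.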

What could TGD interpretation be?

What can one say about the results in the TGD framework? The first idea that comes to mind is that the electron-positron pairs result from single particle annihilations, but it seems that this option is not realistic. Fermion-antifermion annihilations are more natural and bring in a strong analogy with neutralinos, which would give rise to dark matter as the remnant remaining after annihilation in the cold dark matter scenario. An analogous scenario is obtained in the TGD Universe by replacing neutralinos with baryons of some dark and scaled up variant of ordinary hadron physics or of leptohadron physics.

  1. The positron fraction increases from 10 to 250 GeV, with its slope decreasing between 20 GeV and 250 GeV by an order of magnitude. The observations suggest to my innocent mind a scale of order 10 GeV. The TGD inspired model for the already forgotten CDF anomaly, discussed in the chapter The recent status of leptohadron hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy", suggests the existence of τ pions with masses coming as the three first octaves of the basic mass, which is two times the mass of the τ lepton. I have proposed an interpretation of the positron excess observed by Fermi and Pamela, now confirmed by AMS, in terms of τ pions. The predicted masses of the three octaves of the τ pion would be 3.6 GeV, 7.2 GeV, and 14.4 GeV. Could the octaves of the τ pion explain the increase of the production rate up to 20 GeV and its gradual drop after that?

    There is a severe objection against this idea. The energy distribution of the τ pions dictates the width of the energy interval in which their decays contribute to the electron spectrum, and what suggests itself is that the decays of τ pions yield almost monochromatic peaks rather than the observed continuum extending to high energies. Any resonance should yield a similar distribution, and this suggests that the electron positron pairs must be produced in two particle annihilations of some particles.

    The annihilations of colored τ leptons with their antiparticles could however contribute to the spectrum of electron-positron pairs. Also the leptonic analogs of baryons could annihilate with their antiparticles to lepton pairs. For both options the dark particles would be fermions, as the neutralino also is.


  2. Could colored τ leptons and -hadrons and their muonic and electronic counterparts really be dark matter? The particles might be dark matter in the TGD sense - that is, particles with a non-standard value of effective Planck constant hbareff coming as an integer multiple of hbar. The existence of colored excitations of leptons and of pion like states with mass in good approximation twice the mass of the lepton leads to difficulties with the decay widths of W and Z, unless the colored leptons have a non-standard value of effective Planck constant and therefore lack direct couplings to W and Z.

    A more general hypothesis would be that the hadrons of all scaled up variants of QCD like physics (leptohadron physics and scaled variants of hadron physics) predicted by TGD correspond to a non-standard value of effective Planck constant and to dark matter in the TGD sense. This would mean that these new scaled up hadron physics would couple only very weakly to standard physics.

  3. At the high energy end of the spectrum M89 hadron physics would naturally be involved, and also now the hadrons could be dark in the TGD sense. Es might be interpreted as a temperature, which is in the energy range assigned to M89 hadron physics and corresponds to the mass of some M89 hadron. Fermions are natural candidates, and the annihilations of nucleons and anti-nucleons of M89 hadron physics could contribute to the spectrum of leptons at higher energies. The direct scaling of the M89 proton mass gives a mass of order 500 GeV, and this value is consistent with the limits 480 GeV and 1760 GeV for Es.

  4. There could also be a relation to the observations of Fermi suggesting annihilation of some bosonic states to gamma pairs with gamma energy around 135 GeV. These could be interpreted in terms of annihilations of an M89 pion with mass 270 GeV (maybe an octave of a leptopion with mass 135 GeV, in turn an octave of a pion with mass 67.5 GeV).

How to resolve the objections against dark matter as thermal relic?

The basic objection against dark matter scenarios is that dark matter particles as thermal relics annihilate also to quark pairs, so that an antiproton excess should also be observed. The TGD based vision could circumvent also this objection.

  1. Cosmic evolution would be a sequence of phase transitions between hadron physics characterized by Mersenne primes Mn = 2^n - 1. The lowest Mersenne primes are M2=3, M3=7, M5=31, and M7=127; M13, M17, M19, M31, M61, M89, and M107 - the last assignable to ordinary hadron physics - are involved, and it might be possible to have also M127 (electrohadrons). There are also Gaussian Mersenne primes MG,n = (1+i)^n - 1: those labelled by n = 151, 157, 163, 167 span p-adic length scales in the biologically relevant range from 10 nm to 2.5 μm.

  2. The key point is that during the period characterized by a given Mn, the hadrons characterized by larger Mersenne primes would be absent. In particular, before the period of ordinary hadrons only M89 hadrons were present, and they decayed to ordinary hadrons. Therefore no antiproton excess is expected - at least not by the mechanism producing it in the standard dark matter scenarios, where all dark and ordinary particles are present simultaneously.

  3. The second objection relates to the cross section, which must be two orders of magnitude larger than required by cold dark matter scenarios. I am unable to say anything definite about this. The fact that both M89 hadrons and colored leptons are strongly interacting would increase the corresponding annihilation cross section, and the leptohadrons could later decay to ordinary leptons.
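The Gaussian Mersenne primes mentioned in point 1 can be checked numerically: a Gaussian integer is a Gaussian prime whenever its norm is an ordinary prime, so it suffices to test the norm of (1+i)^n - 1. A hedged sketch using exact integer arithmetic and a probabilistic Miller-Rabin test (function names are mine, not standard terminology):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gaussian_mersenne_norm(n):
    """Norm of (1+i)^n - 1, computed exactly with integer arithmetic."""
    a, b = 1, 0                  # the Gaussian integer a + b*i, starting from 1
    for _ in range(n):
        a, b = a - b, a + b      # multiply by (1+i)
    return (a - 1) ** 2 + b ** 2

for n in (151, 157, 163, 167):
    print(n, is_probable_prime(gaussian_mersenne_norm(n)))  # all True
```

These four exponents indeed appear in the known list of Gaussian Mersenne primes, alongside e.g. n = 2 with (1+i)^2 - 1 = -1+2i of norm 5.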

Connection with strange cosmic ray events and strange observations at RHIC and LHC

The model could also allow one to understand the strange ultrahigh energy cosmic ray events (Centauro events, etc.) suggesting the formation of a blob ("hot spot") of exotic matter in the atmosphere decaying to ordinary hadrons. In the center of mass system of the atmospheric particle and the incoming cosmic ray the cm energies are indeed of the order of the M89 mass scale. As suggested, these hot spots would be hot in the p-adic sense and correspond to the p-adic temperature assignable to M89. Also the strange events observed already at RHIC in heavy ion collisions and later at LHC in proton-heavy ion collisions - in conflict with perturbative QCD predicting the formation of a quark gluon plasma - could be understood as the formation of M89 hot spots (see this). The basic finding was that there were strong correlations: two particles tended to move either parallel or antiparallel, as if they had resulted from the decay of string-like objects. The AdS/CFT inspired explanation was in terms of higher-dimensional black holes. The TGD explanation is more prosaic: string-like objects (color magnetic flux tubes) dominating the low energy limit of M89 hadron physics were created.

The question whether M89 hadrons, or their cosmic relics, are dark in the TGD sense remains open. In the case of colored variants of the ordinary leptons the decay widths of the weak bosons force this. It nevertheless seems that a coherent story about physics in the TGD Universe is developing as more data emerges. This story is bound to remain a qualitative description: a quantitative approach would require a lot of collective theoretical work.

Also CDMS claims dark matter

Also CDMS (Cryogenic Dark Matter Search) reports new indications for dark matter particles: see the Nature blog article Another dark matter sign from a Minnesota mine. The experimenters have observed 3 events with an expected background of 0.7 events and claim that the mass of the dark matter particle is 8.6 GeV. This mass is much lighter than expected: something above 350 GeV was suggested as an explanation of the AMS observations. The low mass is however consistent with the identification as the first octave of the tau-pion with mass about 7.2 GeV, for which the already forgotten CDF anomaly provided support years ago (as explained above, the p-adic length scale hypothesis allows octaves of the basic leptopion mass, which is in good approximation 2 times the mass of the charged lepton, that is 3.6 GeV). The particle must be dark in the TGD sense, in other words it must have a non-standard value of effective Planck constant. Otherwise it would contribute to the decay widths of W and Z.
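The octave arithmetic quoted above is easy to reproduce. A minimal check (m_tau is the standard PDG value; the doublings follow the p-adic octave rule stated in the text, and give 3.55 GeV and 7.11 GeV for the figures quoted as "about 3.6 GeV" and "about 7.2 GeV"):

```python
m_tau = 1.777                      # charged tau lepton mass in GeV (PDG)
m_leptopion = 2 * m_tau            # basic tau-pion mass, ~3.55 GeV
m_first_octave = 2 * m_leptopion   # first p-adic octave, ~7.11 GeV
print(f"{m_leptopion:.2f} GeV, {m_first_octave:.2f} GeV")  # -> 3.55 GeV, 7.11 GeV
```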