Tuesday, June 30, 2015

Gaussian Mersennes in cosmology, biology, nuclear, and particle physics

p-Adic length scale hypothesis states that primes slightly below powers of two are physically preferred. Mersenne primes Mn = 2^n - 1 obviously satisfy this condition optimally. The proposal generalizes to Gaussian Mersenne primes MG,n = (1+i)^n - 1. It is now possible to understand preferred p-adic primes as so-called ramified primes of an algebraic extension of rationals to which the parameters characterizing string world sheets and partonic 2-surfaces belong. Strong form of holography is crucial: space-time surfaces are constructible from these 2-surfaces. For the p-adic variants the construction should be easy thanks to the presence of pseudo-constants; in the real sector the continuation is very probably possible only in special cases. In the framework of consciousness theory the interpretation is that in this case imaginations (p-adic space-time surfaces) are realizable. Also the p-adic length scale hypothesis can be understood and generalizes: primes near powers of any prime are preferred.

The definition of the p-adic length scale is a convention to some degree.

  1. One possible definition for Lp is as the Compton length for the smallest mass possible in p-adic thermodynamics for a given prime, assuming that the first order contribution is non-vanishing.

  2. A second definition is the Compton length Lp,e for electron if it corresponded to the prime in question: in good approximation one has Lp = 5^(1/2) × Lp,e from p-adic mass calculations. If the p-adic length scale hypothesis is assumed (p ≈ 2^k) one has Lp,e = L(k,e) = 2^((k-127)/2) × Le, where Le is the electron Compton length (electron mass is .5 MeV). If one is interested in the Compton time T(k,e), one obtains it easily from the secondary Compton time of electron, .1 seconds (defining a fundamental biorhythm), as T(k,e) = 2^((k-2×127)/2) × .1 seconds. In the following, by p-adic length scale I will mean T(k,e) ≈ 5^(-1/2) × T(k).
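The scaling relations above fit into a few lines of code; a minimal sketch of my own (only ratios are checked, since the absolute normalization of Le is convention-dependent, as the text notes):

```python
def L(k, L_e=1.0):
    """p-adic length scale L(k,e) = 2^((k-127)/2) * Le, in units of Le."""
    return 2 ** ((k - 127) / 2) * L_e

def T(k, T_254=0.1):
    """p-adic time scale T(k,e) = 2^((k-2*127)/2) * 0.1 seconds."""
    return 2 ** ((k - 2 * 127) / 2) * T_254

# scales with k and k+2 differ by a factor of 2
assert L(241) / L(239) == 2.0
# k = 2*127 = 254 reproduces the 0.1 second biorhythm
assert abs(T(254) - 0.1) < 1e-15
```

The doubling between k = 239 and k = 241 is exactly the factor 2 quoted below for the two core radii of Earth.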

Mersenne primes Mn = 2^n - 1 are as near as possible to powers of two and are therefore of special interest.

  1. Mersenne primes corresponding to n∈{2, 3, 5, 7, 13, 17, 19, 31, 61} are out of reach of present-day accelerators.


  2. n=89 characterizes weak bosons and suggests a scaled up version of hadron physics which should be seen at LHC. There are already several indications for its existence.

  3. n=107 corresponds to hadron physics and tau lepton.

  4. n=127 corresponds to electron. Mersenne primes are clearly very rare and characterize many elementary particles as well as hadrons and weak bosons. The largest Mersenne prime which does not define a completely super-astrophysical p-adic length scale is M127, associated with electron.

Gaussian Mersennes (primes for complex integers) are much more abundant, and in the following I demonstrate that the corresponding p-adic time scales seem to define fundamental length scales of cosmology, astrophysics, biology, nuclear physics, and elementary particle physics. I have not previously checked the possible relevance of Gaussian Mersennes for cosmology and for the physics beyond standard model above LHC energies: there are as many as 10 Gaussian Mersennes besides 9 Mersennes above the LHC energy scale, suggesting a lot of new physics in sharp contrast with the GUT dogma that nothing interesting happens above the weak boson scale - perhaps copies of hadron physics or weak interaction physics. The list of Gaussian Mersennes is as follows.
  1. n∈{2, 3, 5, 7, 11, 19, 29, 47, 73} correspond to energies not accessible at LHC. n= 79 might define a new copy of hadron physics above TeV range - something which I have not considered seriously before. The scaled variants of pion and proton masses (from M107 hadron physics) are about 2.2 TeV and 16 TeV. Whether it is visible at LHC is an open question to me.

  2. n=113 corresponds to nuclear physics. The Gaussian Mersenne property, and the fact that Gaussian Mersennes seem to be highly relevant for life at cell nucleus length scales, inspires the question whether n=113 could give rise to something analogous to life and genetic code. I have indeed proposed a realization of genetic code and analogs of DNA, RNA, amino-acids and tRNA in terms of dark nucleon states.

  3. n= 151, 157, 163, 167 define 4 biologically important scales between cell membrane thickness and the cell nucleus size of 2.5 μm. This range contains the length scales relevant for DNA and its coiling.

  4. n=239, 241 define two scales L(e,239)= 1.96×10^3 km and L(e,241)= 3.93×10^3 km differing by a factor 2. Earth's radius is 6.3×10^3 km, the outer core has radius 3494 km, rather near to L(e,241), and the inner core radius is 1220 km, which is smaller than 1960 km but of the same order of magnitude. What is important is that Earth reveals the two-core structure suggested by the Gaussian Mersennes.

  5. n=283: L(283)= .8×10^10 km defines the size scale of a typical star system. The diameter of the solar system is about d= .9×10^10 km.

  6. n=353: L(353,e)= 2.1 Mly, which is the size scale of galaxies. Milky Way has a diameter of about .9 Mly.

  7. n=367 defines the size scale L(367,e)= 2.8×10^8 ly, which is the scale of big voids.

  8. n=379: The time scale T(379,e)= 1.79×10^10 years is slightly longer than the recently accepted age of the Universe, about T= 1.38×10^10 years, and the nominal value of the Hubble time 1/H= 1.4×10^10 years. The age of the Universe measured using the cosmological scale parameter a(t) equals the light-cone proper time for the light-cone assignable to the causal diamond, and this is shorter than t.
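The list of Gaussian Mersenne exponents quoted above is easy to verify numerically. The sketch below is my own check (not from the text): it computes the norm of MG,n = (1+i)^n - 1 with exact Gaussian-integer arithmetic and tests the norm for primality; a prime norm guarantees that (1+i)^n - 1 is a Gaussian prime.

```python
def is_prime(n):
    """Miller-Rabin with 12 fixed bases (deterministic below ~3.3e24,
    an extremely reliable probabilistic test beyond that)."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gm_norm(n):
    """Norm of (1+i)^n - 1, using exact integer arithmetic."""
    a, b = 1, 0                # start from 1 = (1+i)^0
    for _ in range(n):
        a, b = a - b, a + b    # multiply by (1+i)
    a -= 1                     # subtract 1
    return a * a + b * b

# exponents n <= 113 for which the norm of (1+i)^n - 1 is prime
gm_exponents = [n for n in range(2, 114) if is_prime(gm_norm(n))]
print(gm_exponents)   # [2, 3, 5, 7, 11, 19, 29, 47, 73, 79, 113]
```

The output reproduces the exponents listed in items 1 and 2 above; extending the range recovers 151, 157, 163, 167, 239, 241, 283, 353, 367, 379 as well.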

For me these observations are, without any exaggeration, shocking, and suggest that number theory is visible in the structure of the entire cosmos. The standard skeptic of course saves his peace of mind by labelling all this as numerology: human stupidity beats down even the brightest thinker. Putting it more diplomatically: only an understood fact is a fact. TGD indeed allows one to understand these facts.

For a summary of earlier postings see Links to the latest progress in TGD.

Wednesday, June 24, 2015

Transition from flat to hyperbolic geometry and q-deformation

In Thinking Allowed Original there was a link to a very interesting article with the title "Designing Curved Blocks of Quantum Space-Time...Or how to build quantum geometry from curved tetrahedra in loop quantum gravity" telling about the work of Etera Livine working at LPENSL (I let the reader find out what this means;-).

The idea of the article

The popular article mentions a highly interesting mathematical result relevant for TGD. The idea is to build 3-geometry - not by putting together flat tetrahedra or more general polyhedra along their boundaries - but by using curved hyperbolic tetrahedra (or more generally polyhedra) defined in 3-D hyperbolic space - the negative constant curvature space with Lorentz group acting as isometries - the cosmic time = constant section of standard cosmology.

As a special case one obtains a tessellation of 3-D hyperbolic space H3. This is a somewhat trivial outcome, so one performs a "twisting". Some words about tessellations/lattices/crystals are in order first.

  1. In the 2-D case you would glue triangles (say) together to get a curved surface. For instance, at the surface of sphere you would get a finite number of lattice like structures: the five Platonic solids - tetrahedron, cube, octahedron, icosahedron, and dodecahedron - which are finite geometries assignable to finite fields corresponding to p=2, 3, and 5 and defining the lowest approximation of p-adic numbers for these primes.


  2. In 2-D hyperbolic plane H2 one obtains hyperbolic tilings used by Escher (see this).

  3. One can also consider the decomposition of hyperbolic 3-space H3 to a lattice like structure - essentially a generalization of ordinary crystallography from flat 3-space E3 to H3. There are indications for a quantization of cosmic redshifts completely analogous to the quantization of positions of lattice cells, and my proposal is that they reflect the existence of a hyperbolic crystal lattice in which astrophysical objects replace atoms. Macroscopic gravitational quantum coherence due to the huge value of the gravitational Planck constant could make them possible.

Back to the article and its message. The defining condition for a tetrahedron - stating in the flat case that the sum of the 4 (area weighted) normal vectors vanishes - generalizes, and is formulated in the group SU(2) rather than in the group E3 (Euclidian 3-space). The popular article states that the deformation of the sum to a product of SU(2) elements is equivalent with a condition defining the classical q-deformation for a gauge group. If this is true, a connection between "quantum quantum mechanics" and hyperbolic geometries might therefore exist and would correspond to a transition from flat E3 to hyperbolic H3.
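The flat closure condition is easy to check numerically; a minimal sketch of my own (with an arbitrarily chosen example tetrahedron) verifying that the outward area-weighted face normals of a flat tetrahedron sum to zero - the condition whose SU(2) deformation the article describes:

```python
import numpy as np

# vertices of an example tetrahedron in flat E3
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
# faces listed with consistent outward orientation
faces = [(1, 2, 3), (0, 3, 2), (0, 1, 3), (0, 2, 1)]

total = np.zeros(3)
for i, j, k in faces:
    a, b, c = verts[i], verts[j], verts[k]
    total += 0.5 * np.cross(b - a, c - a)   # area-weighted outward normal

# closure: the four normal vectors sum to the zero vector
assert np.allclose(total, 0.0)
```

The article's deformation replaces this vanishing vector sum by the condition that a product of four SU(2) elements equals the identity.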

Let loop gravity skeptic talk first

This looks amazing, but it is better to remain skeptical since the work relates to loop quantum gravity and involves specific assumptions and different motivations.

  1. For instance, the hyperbolic geometry is motivated by attempts to obtain a quantum geometry producing a non-vanishing and negative cosmological constant by introducing it through fundamental quantization rules - rather than as a physical prediction - and using only algebraic conditions, which allow a representation as a tetrahedron of hyperbolic space. This is alarming to me.

  2. In loop quantum gravity one tries to quantize discrete geometry. Braids are essential for quantum groups unless one wants to introduce them independently. In loop gravity one considers strings defining 1-D structures, and the ordering of points representing particles at a string like entity might be imagined in this framework. I do not know enough loop gravity to decide whether this condition is realized in the framework motivating the article.

  3. In zero energy ontology hyperbolic geometry emerges in a totally different manner. One wants only a discretization of geometry to represent classically finite measurement resolution, and Lorentz invariance fixes it at the level of the moduli space of CDs. At space-time level discretization would occur for the parameters characterizing string world sheets and partonic 2-surfaces defining "space-time genes" in strong form of holography.

  4. One possible reason to worry is that H3 allows an infinite number of different lattice like structures (tessellations) with the analog of lattice cell defining a hyperbolic manifold. Thus the decomposition would be highly non-unique, and this poses practical problems if one wants to construct 3-geometries using polyhedron like objects as building bricks. The authors mention twisting: probably this is what would allow one to get also other 3-geometries than 3-D hyperbolic space. Could this resolve the non-uniqueness problem?

    I understand (on the basis of this) that a hyperbolic tetrahedron can be regarded as a hyperbolic 3-manifold and gives rise to a tessellation of hyperbolic space. Note that in the flat case a tetrahedral crystal is not possible. In any case, there is an infinite number of this kind of decompositions defined by discrete subgroups G of the Lorentz group and completely analogous to the decompositions of flat 3-space to lattice cells: now G replaces the discrete group of translations leaving the lattice unaffected. An additional complication from the point of view of loop quantum gravity in the hyperbolic case is that the topology of the hyperbolic manifold defining the lattice cell varies rather than being that of a ball as in the flat case (all Platonic solids are topologically balls).

The notion of finite measurement resolution

The notion of finite measurement resolution emerged first in TGD through the realization that von Neumann algebras known as hyper-finite factors of type II1 (perhaps also of type III1) emerge naturally in TGD framework. The spinors of the "world of classical worlds" (WCW), identifiable in terms of a fermionic Fock space, provide a canonical realization for them.

The inclusions of hyperfinite factors provide a natural description of finite measurement resolution, with the included factor defining the sub-algebra whose action generates states not distinguishable from the original ones. The inclusions are labelled by quantum phases coming as roots of unity and labelling also quantum groups. Hence the idea that quantum groups could allow one to describe the quantal aspects of finite measurement resolution, whereas discretization would define its classical aspects.

p-Adic sectors of TGD define a correlate for cognition in the TGD Universe, and a cognitive resolution is forced by number theory. Indeed, one cannot define the notion of angle in the p-adic context, but one can define phases in terms of algebraic extensions of p-adic numbers defined by roots of unity: hence a finite cognitive resolution is unavoidable and might have a correlate also at the level of real physics.

The discrete algebraic extensions of rationals forming a cognitive and evolutionary hierarchy induce extensions of p-adic numbers appearing in corresponding adeles and for them quantum groups should be a necessary ingredient of description. The following arguments support this view and make it more concrete.

Quantum groups and discretization as two manners to describe finite measurement resolution in TGD framework

What about quantum groups in TGD framework? I have also proposed that q-deformations could represent finite measurement resolution. There might be a connection between discretization and quantum groups as different aspects of finite measurement resolution. For instance, the quantum group SU(2)q allows only a finite number of representations (there is a maximum value for angular momentum): this conforms with a finite angular resolution implying a discretization in the angle variable. At the level of p-adic number fields the discretization of phases exp(iφ) as roots Un=exp(i2π/n) of unity is unavoidable for number theoretical reasons and makes possible discrete Fourier analysis for the algebraic extension.
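The truncation of SU(2)q representations can be made concrete with q-integers; a minimal sketch (using the standard formula [n]q = sin(nπ/r)/sin(π/r) for q = exp(iπ/r), a textbook fact rather than anything specific to the text):

```python
import math

def q_integer(n, r):
    """q-integer [n]_q = (q^n - q^-n)/(q - q^-1) for q = exp(i*pi/r)."""
    return math.sin(n * math.pi / r) / math.sin(math.pi / r)

r = 5
# [n]_q is nonzero for n = 1, ..., r-1 ...
assert all(abs(q_integer(n, r)) > 0.5 for n in range(1, r))
# ... but vanishes at n = r: the ladder of representations truncates,
# giving a finite set of representations, i.e. a maximal angular momentum
assert abs(q_integer(r, r)) < 1e-12
```

The vanishing of [r]_q is exactly the mechanism behind the finite number of SU(2)q representations mentioned above.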

There exists actually a much stronger hint that discretization and quantum groups are related to each other. This hint leads to a concrete proposal for how discretization is described in terms of the quantum group concept.

  1. In TGD discretization for the space-time surface is not by a discrete set of points but by a complex of 2-D surfaces consisting of string world sheets and partonic 2-surfaces. By their 2-dimensionality these 2-surfaces make possible braid statistics. This leads to what I have called "quantum quantum physics", as the permutation group defining the statistics is replaced with the braid group defining its infinite covering. Already fermion statistics replaces this group with its double covering. If braids are present, there is no need for "quantum quantum". If one forgets the humble braidy origins of the notion and begins to talk about quantum groups as an independent concept, the attribute "quantum quantum" becomes natural. Personally I am skeptical about this approach: it has not yielded anything hitherto.

  2. Braiding means that the R-matrix characterizing what happens in the permutation of nearby particles is no longer multiplication by +1 or -1 but a more complex operation realized as a gauge group action (no real change by gauge invariance). The gauge group could be the electroweak gauge group, for instance.

    What is so nice is that something very closely resembling the action of a quantum variant of a gauge group (say the electroweak gauge group) emerges. If the discretization is by the orbit of a discrete subgroup H of SL(2,C) defining the hyperbolic manifold SL(2,C)/H as the analog of a lattice cell, the action of the discrete subgroup H leaves the "lattice cell" invariant but could induce a gauge action on the state. The R-matrix defining the quantum group representation would define the action of braiding as a discrete group element in H. Yang-Baxter equations would give a constraint on the representation.

    This description looks especially natural in the p-adic sectors of TGD. Discretization of both ordinary and hyperbolic angles is unavoidable in p-adic sectors since only the phases which are roots of unity exist (p-adically angle is a non-existing notion): there is always a cutoff involved: only the phases Um=exp(i2π/m), m<r exist, and r should be a factor of the integer defining the value of Planck constant heff/h=n and the dimension of the algebraic extension of rational numbers used. In the same manner hyperbolic "phases" defined by the roots e^(1/mp) of e are subject to a cutoff (the very deep number theoretical fact is that e is an algebraic number (p:th root) p-adically, since e^p is an ordinary p-adic number!). The test for this conjecture is easy: check whether the reduction of representations of groups yields direct sums of representations of the corresponding quantum groups.

  3. In TGD framework H3 is identified as a light-cone proper time = constant surface, which is a 3-D hyperboloid of 4-D Minkowski space (necessary in zero energy ontology). Under some additional conditions a discrete subgroup G of SL(2,C) defines a tessellation of H3 representing finite measurement resolution. The tessellation corresponds to a discrete set of cosets of G. The right action of SL(2,C) on the cosets would define the analog of a gauge action and appear in the definition of the R-matrix.

    The original belief was that the discretization would have a continuous representation and the powerful quantum analog of a Lie algebra would become available. It is not however clear whether this is really possible, or whether it is needed, since the R-matrix would be defined by a map of the braid group to the subgroup of the Lorentz group or gauge group. The parameters defining the q-deformation are determined by the algebraic extension, and it is quite possible that there is more than one parameter.

  4. The relation to integrable quantum field theories in M2 is interesting. Particles are characterized by Lorentz boosts in SO(1,1) defining their 2-momenta besides discrete quantum numbers. The scattering reduces to a permutation of quantum numbers plus phase shifts. By the 2-particle irreducibility defining the integrability, the scattering matrix reduces to a 2-particle S-matrix depending on the boost parameters of the particles, and clearly generalizes the R-matrix as a physical permutation of particles having no momentum. Could this generalize to the 4-D context? Could one speak of the analog of this 2-particle S-matrix as having discrete Lorentz boosts hi in the sub-group H as arguments and representable as an element h(h1, h2) of H: is the ad hoc guess h = h1h2^(-1) trivial?

  5. The popular article says that one has q>1 in loop gravity. As found, in TGD the quantum deformation has at least two parameters in the case of SL(2,C). The first corresponds to the n:th root of unity (Un= exp(i2π/n)) and the second one to the n×p:th root of e^p. One could do without quantum groups, but they would provide an elegant representation of discrete coset spaces. They could also be a powerful tool as one considers algebraic extensions of rationals and the extensions of p-adic numbers induced by them.

    A concrete prediction follows for the unit of quantized cosmic redshifts if astrophysical objects form tessellations of H3 in cosmic scales. The basic unit appearing in the exponent defining the Lorentz boost would depend on the algebraic extension involved and on the p-adic prime defining effective p-adicity, and would be e^(1/np).
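The number theoretical fact used in the items above - that e^p is an ordinary p-adic number - follows from the p-adic convergence of the series for exp(p). A small sketch of my own, checking via Legendre's formula that the p-adic valuation of the terms p^n/n! grows without bound:

```python
def vp_factorial(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    s, q = 0, p
    while q <= n:
        s += n // q
        q *= p
    return s

p = 5
# p-adic valuation of the n:th term p^n/n! of the series for exp(p)
vals = [n - vp_factorial(n, p) for n in range(1, 60)]

# every term is a p-adic integer and the valuations tend to infinity,
# so the series converges p-adically: exp(p) is an ordinary p-adic number
assert all(v >= 1 for v in vals)
assert all(v >= n * (p - 2) / (p - 1) for n, v in zip(range(1, 60), vals))
```

The lower bound n(p-2)/(p-1) used in the last assertion follows directly from Legendre's formula, and makes the divergence of the valuations explicit.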

For details see the chapter Was von Neumann right after all? or the article Discretization and quantum group description as different aspects of finite measurement resolution.

For a summary of earlier postings see Links to the latest progress in TGD.

Criticality of Higgs: is Planck length dogmatics physically feasible?

While studying the materials related to the Convergence conference running during this week at Perimeter Institute, I ended up with a problem related to the fact that the mass Mh = 125.5 ± 0.24 GeV implies that Higgs as described by the standard model (no new physics at higher energies) is at the border of metastability and stability - one might say near criticality (see this and this) - and I decided to look from TGD perspective what is really involved.

Absolute stability would mean that the Higgs potential becomes zero at the Planck length scale, assumed to be the scale at which the QFT description fails: this would require Mh > 129.4 GeV, somewhat larger than the experimentally determined Higgs mass, in the standard model framework. Metastability means that a new deep minimum develops at large energies: the standard model Higgs vacuum does not anymore correspond to a minimum energy configuration and is near to a phase transition to the vacuum with lower vacuum energy. Strangely enough, Higgs is indeed in the metastable region in the absence of any new physics.

Since the vacuum expectation of Higgs is large at high energies, the potential is in a reasonable approximation of the form V = λh^4, where h is the vacuum expectation in the high energy scale considered and λ is a dimensionless running coupling parameter. Absolute stability would mean λ=0 at Planck scale. This condition cannot however hold true, as follows from the input provided by the top quark mass and the Higgs mass, to which λ at LHC energies is highly sensitive. Rather, the value of λ at Planck scale is small and negative: λ(MPl) = -0.0129 is the estimate, to be compared with λ(Mt) = 0.12577 at the top quark mass. This implies that the potential defining the energy density associated with the vacuum expectation value of Higgs becomes negative at high enough energies. The energy at which λ becomes negative is in the range 10^10-10^12 GeV, which is considerably lower than the Planck mass, about 10^19 GeV. This estimate of course assumes that there is no new physics involved.
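The sign change of λ can be illustrated with a toy one-loop integration; a rough sketch of my own keeping only the top Yukawa y and the strong coupling g3 (electroweak gauge and higher-loop contributions are dropped, so the crossing scale comes out much lower than the 10^10-10^12 GeV of the full calculation; only the qualitative behavior - λ driven negative by the -6y^4 term well below the Planck scale - survives this truncation):

```python
import math

def lambda_zero_scale(mu0=173.0, lam=0.126, y=0.94, g3=1.16, dt=1e-3):
    """Integrate toy one-loop RG equations upward from the top mass mu0 (GeV)
    and return the scale at which the quartic coupling lam turns negative."""
    k = 1.0 / (16.0 * math.pi ** 2)
    t, t_max = 0.0, math.log(1e19 / mu0)
    while t < t_max:
        dlam = k * (24 * lam ** 2 + 12 * lam * y ** 2 - 6 * y ** 4)
        dy = k * y * (4.5 * y ** 2 - 8 * g3 ** 2)
        dg3 = -k * 7 * g3 ** 3
        lam += dlam * dt
        y += dy * dt
        g3 += dg3 * dt
        t += dt
        if lam < 0:
            return mu0 * math.exp(t)   # scale in GeV where lambda < 0
    return None

mu_c = lambda_zero_scale()
assert mu_c is not None and mu_c < 1e19   # turns negative below Planck scale
```

The initial values are rough MS-bar-like numbers at the top mass; the point is only that the top Yukawa term inevitably drags λ negative far below 10^19 GeV.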

The plane defined by the top and Higgs masses can be decomposed to regions (see figure 5 of this), where the perturbative approach fails (λ too large), there is only a single minimum of the Higgs potential (stability), there is no minimum of the Higgs potential (λ<0, instability), or a new minimum with smaller energy is present (metastability). This metastability can lead to a transition to a lower energy state and could be relevant in early cosmology and also in future cosmology.

The value of λ turns out to be rather small at Planck mass. λ however vanishes and changes sign in a much lower energy range, 10^10-10^12 GeV. Is this a signal that something interesting takes place considerably below Planck scale? Could Planck length dogmatics be wrong? Is criticality only an artefact of standard model physics and as such a signal for new physics?

How could this relate to TGD? Planck length is one of the unchallenged notions of modern physics, but in TGD p-adic mass calculations force one to challenge this dogma. Planck length is replaced with CP2 length scale, which is roughly 10^4 times longer than Planck length and determined by the condition that electron corresponds to the largest Mersenne prime (M127) which does not define a completely super-astrophysical p-adic length scale, and by the condition that the electron mass comes out correctly. Also many other elementary particles correspond to Mersenne primes. In biologically relevant scales there are several (4) Gaussian Mersennes.

In CP2 length scale the QFT approximation to quantum TGD must fail, since the replacement of the many-sheeted space-time with GRT space-time with Minkowskian signature of the metric fails, and space-time regions with Euclidian signature of the induced metric, defining the lines of generalized Feynman diagrams, cannot anymore be approximated as lines of ordinary Feynman diagrams or twistor diagrams. From the electron mass formula and the electron mass of .5 MeV one deduces that the CP2 mass scale is 2.53×10^15 GeV - roughly three orders of magnitude above the 10^12 GeV obtained if no new physics emerges above TeV scale.

TGD "almost-predicts" several copies of hadron physics corresponding to Mersenne primes Mn, n=89, 61, 31,..., and these copies of hadron physics are expected to affect the evolution of λ and maybe raise the energy 10^12 GeV to about 10^15 GeV. For M31 the electronic p-adic mass scale happens to be 2.2×10^10 GeV. The decoupling of Higgs by the vanishing of λ could be natural at CP2 scale, since the very notion of Higgs vacuum expectation makes sense only at the QFT limit, becoming non-sensical in CP2 scale. In fact, the description of physics in terms of elementary particles belonging to three generations might fail above this scale. Standard model quantum numbers still make sense, but the notion of family replication becomes questionable, since in TGD framework the families correspond to different boundary topologies of wormhole throats and the relevant physics above this mass scale is inside the wormhole contacts: there would be only a single fermion generation below CP2 scale.

This raises questions. Could one interpret the strange criticality of the Higgs as a signal that CP2 mass scale is the fundamental mass scale and Newton's constant might be only a macroscopic parameter? This would add one more nail to the coffin of superstring theory and of all theories relying on Planck length scale dogmatics. One can also wonder whether the criticality might somehow relate to the quantum criticality of TGD Universe. My highly non-educated guess is that it is only an artefact of the standard model description. Note however that below CP2 scale the transition from the phase dominated by cosmic strings to a phase in which space-time sheets emerge, leading to the radiation dominated cosmology, would take place: this period would be the TGD counterpart of the inflationary period and would also involve a rapid expansion.

For a summary of earlier postings see Links to the latest progress in TGD.

Tuesday, June 23, 2015

Strings 2015 and Convergence: two very different conferences

There are two very different conferences going on. The first one is Strings 2015, commented on by Peter Woit. Superstrings live nowadays mostly in grant applications: 10 per cent of the titles of the lectures mention superstrings, and the rest are about quantum field theories in various dimensions, about applications of AdS/CFT, and about the latest fashion of taking seriously wormholes connecting black holes - a GRT adaptation of magnetic flux tubes accompanied by strings connecting partonic 2-surfaces - introduced by Maldacena and Susskind. Susskind introduced also p-adic numbers a couple of years ago. Also holography is due to Susskind but was introduced in TGD much earlier as an implication of general coordinate invariance in sub-manifold gravity.

Ashoke Sen is talking about how to save humankind from a catastrophe caused by an inflationary phase transition destroying ordinary matter: anthropically estimated to occur about once per lifetime of the Universe, from the fact that it has not yet occurred. This attempt to save humankind has mostly been taken as a joke, but Sen is right in that the worry is real if the inflationary multiverse scenario and GRT are right. The logic is correct but the premises might be wrong;-): this can be said quite generally about superstring theories.

This is definitely an application of superstring theory, but I would be delighted by more concrete applications: say deducing the standard model from superstring theory and maybe saying something about quantum biology and even consciousness, as one might expect a theory of everything to do. Unfortunately the only game in the town cannot do this. The spirit in Strings 2015 does not seem to be very high. Even Lubos did not bother to comment on the talks, which he said are "interesting", and asked whether someone in the conference might do this. It is clear that Strings 2015 is where the big old guys meet and refresh memories; it is not for young revolutionaries.

Another conference - Convergence - is held at Perimeter Institute at the same time - perhaps not an accident: see the comments of Peter Woit. Lubos has not commented on this conference - I expected the usual rant about intellectually inferior imbeciles. The spirit is totally different: it is admitted that theoretical physics has been on the wrong track for four decades, and people are now actively searching for the way out of the dead end. People are talking about a revolution taking place in the near future! Someone mentioned even consciousness. There are a lot of young participants present. I am highly optimistic. Things might begin to move again.

Thursday, June 18, 2015

About quantum cognition

The talks in the conference Towards Science of Consciousness 2015 held in Helsinki produced several pleasant surprises, which stimulated more precise views about TGD inspired theory of consciousness. Some of the pleasant surprises were related to quantum cognition. It is a pity that I lost most of the opening talk of Harald Atmanspacher (see this).

The general idea is to take the formalism of quantum theory and look whether it might allow one to construct testable formal models of cognition. Quantum superposition, entanglement, and non-commutativity are the most obvious notions to be considered. The problems related to quantum measurement are however present also now and relate to the basic questions about consciousness.

  1. For instance, non-commutativity of observables could relate to the order effects in cognitive measurements. Also the failure of classical probability, to which Bell inequalities relate, could have a testable quantum cognitive counterpart. This requires that one should be able to speak about the analogs of quantization axes for spin in cognition. Representation of Boolean logic statements as tensor products of qubits would resolve the problem, and in TGD framework the fermionic Fock state basis defines a Boolean algebra: fermions would be interpreted as quantum correlates of Boolean cognition.

  2. The idea about cognitive entanglement described by a density matrix was considered, and the change of the state basis was suggested to have an interpretation as a change of perspective. Here I was a little bit puzzled, since the speakers seemed to assume that the density matrix - rather than only its eigenvalues - has an independent meaning. This probably reflects my own assumption that a density matrix is always assignable to a system and its complement regarded as subsystems of a larger system in a pure state. The states are purifiable, as one says. This holds true in TGD but not in the general case.

  3. The possibility was considered that the quantum approach might allow one to describe this breaking of uniqueness in terms of entanglement - or more precisely in terms of the density matrix, which in TGD framework can be diagonalized and in cognitive state function reduction reduces in the generic case to a 1-D density matrix for one of the meanings. The situation would resemble that in hemispheric rivalry or for illusions in which two percepts appear as alternatives. One must of course be very cautious with this kind of models: spoken and written language do not obey strict rules. I must however admit that I failed to get the gist of the arguments completely.
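The identification of the fermionic Fock basis with a Boolean algebra, mentioned in the items above, can be made concrete; a toy sketch (the mapping of occupation numbers to truth values is my own illustration):

```python
from itertools import product

# Each fermionic Fock basis state |n1,...,nk>, with occupation numbers
# n_i in {0,1}, codes the truth values of k Boolean statements.
k = 3
fock_basis = list(product((0, 1), repeat=k))   # 2^k basis states
assert len(fock_basis) == 2 ** k

# Boolean connectives act bitwise on occupation numbers
def AND(s, t): return tuple(a & b for a, b in zip(s, t))
def OR(s, t):  return tuple(a | b for a, b in zip(s, t))
def NOT(s):    return tuple(1 - a for a in s)

s, t = (1, 0, 1), (1, 1, 0)
assert AND(s, t) == (1, 0, 0)
assert OR(s, t) == (1, 1, 1)
assert NOT(s) == (0, 1, 0)
```

Superpositions of these basis states would then be the quantum correlates of Boolean cognition that the text assigns to fermions.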

One particular application discussed in the conference was to a problem of linguistics.
  1. One builds composite words from simpler ones. The proposed rule in classical linguistics is that the composites are describable as unique functions of the building bricks. The building brick words can however have several meanings, and a meaning is fixed only after one tells to which category the concept to which the word refers belongs. Therefore also the composite word can have several meanings.

  2. If the words have several meanings, they belong to at least n=2 categories. The category associated with the word is like a spin with n=2 values, and one can formally treat the words as spins, kind of cognitive qubits. The category-word pairs - cognitive spins - serve as building bricks for composite words analogous to two-spin systems.

  3. A possible connection with Bell's inequalities emerges from the idea that if a word can belong to two categories it can be regarded as analogous to a spin with two values. If superpositions of the same word with different meanings make sense, the analogs of the choice of spin quantization axis and of the measurement of spin in a particular quantization direction make sense. A weaker condition is that the superpositions make sense only for the representations of the words. In TGD framework the representations would be in terms of fermionic Fock states defining a quantum Boolean algebra.

    1. Consider first a situation in which one has two spin measurement apparatuses A and A' with different spin quantization axes for the first particle, and similarly B and B' for the second particle. One can construct correlation functions for the products of spins s1 and s2 defined as the outcomes of measurements A and A', and s3 and s4 defined as the outcomes of B and B'. One obtains the pairs 13, 14, 23, 24.

    2. Bell inequalities give a criterion for the possibility to model the system classically. One begins from 4 CHSH inequalities, which follow as averages of inequalities holding always for individual measurement outcomes (example: -2≤ s1s3 + s1s4+s2s3- s2s4≤ 2), by assuming the classical probability concept implying that the probability distributions for sisj are simply marginal distributions of a joint probability distribution P(s1,s2,s3,s4). CHSH inequalities are necessary conditions for classical behavior. Fine's theorem states that these conditions are also sufficient. Bell inequalities follow from these and can be broken for quantum probabilities.

    3. Does this make sense in the case of cognitive spins? Are superpositions of meanings really possible? Are conscious meanings really analogous to Schrödinger cats? Or should one distinguish between a meaning and a cognitive representation? Experienced meanings are conscious experiences, and consciousness identified as state function reduction makes the world look classical in standard quantum measurement theory. I let the reader decide but present the TGD view below.
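The CHSH combination s1s3 + s1s4 + s2s3 - s2s4 discussed above can be illustrated numerically. The following sketch (my own toy example, not from the text) uses the standard singlet-state correlation E(a, b) = -cos(a - b) for measurement angles a, b, and the angle choices known to maximize the violation; it shows the quantum value reaching the Tsirelson bound 2√2, beyond the classical bound 2.

```python
import math

def E(a, b):
    # Quantum correlation for spin measurements along angles a and b
    # on a singlet pair: E(a, b) = -cos(a - b).
    return -math.cos(a - b)

# Angle choices maximizing the CHSH violation.
a, a2 = 0.0, math.pi / 2            # settings A, A' (outcomes s1, s2)
b, b2 = math.pi / 4, -math.pi / 4   # settings B, B' (outcomes s3, s4)

# CHSH combination corresponding to s1s3 + s1s4 + s2s3 - s2s4.
chsh = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

print(abs(chsh))      # 2*sqrt(2) ≈ 2.828: the Tsirelson bound
print(abs(chsh) > 2)  # True: the classical CHSH bound is violated
```

Any classical (joint-probability) model must keep |chsh| ≤ 2; the quantum value 2√2 is what rules out a classical description.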

What about quantum cognition in TGD framework? Does the notion of cognitive spin make sense? Do the notions of cognitive entanglement and cognitive measurement have sensible meaning? Does the superposition of meanings of words make sense or does it make sense for representations only?

  1. In TGD quantum measurement is a measurement of the density matrix defining the universal observable and leading to its eigenstate (or eigenspace when NE is present in the final state, meaning that degenerate eigenvalues of the density matrix are allowed). In the generic case the state basis is unique as the eigenstate basis of the density matrix, and cognitive measurement leads to a classical state.

    If the density matrix has degenerate eigenvalues, the situation changes since the state function reduction can take place to a sub-space instead of a ray of the state space. In this sub-space there is no preferred basis. Maybe "enlightened" states of consciousness could be identified as this kind of states carrying negentropy (the number theoretic Shannon entropy is negative for them), and these states are fundamental for the TGD inspired theory of consciousness. Note that p-adic negentropy is well-defined also for rational (or even algebraic) entanglement probabilities, but the condition that quantum measurement leads to an eigenstate of the density matrix allows only a projector as the density matrix for the outcome of the state function reduction. In any case, in the TGD Universe the outcome of quantum measurement could be an enlightened Schrödinger cat which is as much dead as alive.

    Entangled states could represent concepts or rules as superpositions of their instances consisting of pairs of states. For NE generated in state function reduction the density matrix would be a projector so that these pairs would appear with identical probabilities. The entanglement matrix would be unitary. This is interesting since unitary entanglement appears also in quantum computation. One can consider also the representation of associations in terms of entanglement - possibly a negentropic one.

  2. The mathematician inside me is impatiently raising his hand: he clearly wants to add something. The restriction to a particular extension of rationals - a central piece of the number theoretical vision about quantum TGD - implies that the density matrix need not allow diagonalization. The eigenstate basis would involve the algebraic extension defined by the characteristic polynomial of the density matrix: its roots define the needed extension, which could quite well be larger than the original extension. This would make the state stable against state function reduction.

    If this entanglement is algebraic, one can assign to it a negative number theoretic entropy. This negentropic entanglement is stable against NMP unless the algebraic extension associated with the parameters characterizing string world sheets and partonic 2-surfaces defining space-time genes is allowed to become larger in the state function reduction to the opposite boundary of CD generating a re-incarnated self and producing eigenstates involving algebraic numbers in a larger algebraic extension of rationals. Could this kind of extension be a eureka experience meaning a step forward in cognitive evolution?

    If this picture makes sense, one would have both the unitary NE with a density matrix which is a projector, and the algebraic NE for which the eigenvalues and eigenstates of the density matrix lie outside the algebraic extension associated with the space-time genes. Note that the unitary entanglement is "meditative" in the sense that any state basis is possible, and therefore in this state of consciousness it is not possible to make distinctions. This strongly brings in mind the koans of Zen Buddhism. The more general algebraic entanglement could represent abstractions as rules in which the state pairs in the superposition represent the various instances of the rule.

  3. Can one really have a superposition of meanings in the TGD framework, where the Boolean cognitive spin is represented as fermion number (1,0), spin, or weak isospin, and the fermionic Fock state basis defines a quantum Boolean algebra?

    In the case of fermion number the superselection rule demanding that the state is an eigenstate of fermion number implies that the cognitive spin has a unique quantization axis.

    For the weak isospin symmetry breaking occurs, and superpositions of states with different em charges (weak isospins) are not possible. Remarkably, the condition that spinor modes have a well-defined em charge implies in the generic case their localization to string world sheets at which the classical W fields carrying em charge vanish. This is essential also for the strong form of holography, and one can say that cognitive representations are 2-dimensional and cognition resides at string world sheets and their intersections with partonic 2-surfaces. The electroweak quantum cognitive spin would thus have a unique quantization axis.

    But what about ordinary spin? Does the presence of the Kähler magnetic field at flux tubes select a unique quantization direction for the cognitive spin as ordinary spin, so that it is not possible to experience a superposition of meanings? Or could the rotational invariance of meaning mean SU(2) gauge invariance allowing one to rotate a given spin to a fixed direction by performing an SU(2) gauge transformation affecting the gauge potential?

  4. A rather concrete linguistic analogy from TGD inspired biology relates to the representation of DNA, mRNA, amino-acids, and even tRNA in terms of dark proton triplets. One can decompose ordinary genetic codons to letters, but dark genetic codons, represented by entangled states of 3 linearly ordered quarks, do not allow reduction to a sequence of letters. It is interesting that some eastern written languages have words as basic symbols, whereas western written languages tend to have as basic units letters having no meaning as such. Could Eastern cognition and languages be more holistic in this rather concrete sense?

For details see the chapter p-Adic physics as physics of cognition and intention of "TGD Inspired Theory of Consciousness" or the article Impressions created by TSC 2015 conference.

For a summary of earlier postings see Links to the latest progress in TGD.

Aromatic rings as the lowest level in the molecular self hierarchy?

I had the opportunity to participate in the conference Towards a Science of Consciousness 2015 held in Helsinki June 8-13. Of special interest from the TGD point of view were the talks of Hameroff and Bandyopadhyay, who talked about aromatic rings (ARs) (see this).

I have also wondered whether ARs might play a key role, with motivations coming from several observations.

  1. In photosynthesis ARs are a central element in the energy harvesting system, and it is now known that quantum effects in longer length and time scales than expected are involved. This suggests that the ARs involved fuse to form a larger quantum system connected by flux tubes, and that electron pair currents flow along the flux tubes as supra currents.

    DNA codons involve ARs with delocalized pi electrons, neurotransmitters and psychoactive drugs involve them, and the 4 amino-acids phe, trp, tyr, and his involve them; they are all hydrophobic and tend to be associated with hydrophobic pockets. Phe and trp appear in the hydrophobic pockets of microtubules.

  2. The notion of self hierarchy suggests that at the molecular level ARs represent the basic selves. ARs would integrate to larger conscious entities by a reconnection of the flux tubes of their magnetic bodies (directing attention to each other!). One would obtain also linear structures such as DNA sequences in this manner. In proteins the four aromatic amino-acids would represent subselves possibly connected by flux tubes. In this manner one would obtain a concrete molecular realization of the self hierarchy allowing a precise identification of the basic conscious entities as aromatic rings lurking in hydrophobic pockets.

  3. A given AR would be accompanied by a magnetic flux tube, and the current around it would generate a magnetic field. The direction of the current would represent a bit (or perhaps even a qubit). In the case of microtubules the phe-trp dichotomy and the direction of the current would give rise to 4 states identifiable as a representation of the four genetic letters A, T, C, G. The current pathways proposed by Hameroff et al, consisting of sequences of current rings (see this), could define the counterparts of DNA sequences at the microtubule level.

    For B type microtubules 13 tubulins, which correspond to a single 2π rotation, would represent the basic unit followed by a gap. This unit could represent a pair of helical strands formed by flux tubes and ARs along them, completely analogous to the DNA double strand. This longitudinal strand would be formed by a reconnection of the magnetic flux tubes of the magnetic fields of ARs, and the reconnection occurring in two different manners at each step could give rise to braiding.

  4. The magnetic flux tubes associated with the magnetic fields of nearby aromatic rings could suffer reconnection, and in this manner a longitudinal flux tube pair carrying supra current could be generated by the mechanism of bio-superconductivity discussed earlier (see this) and working also for ordinary high Tc superconductivity. The interaction of the microtubule with frequencies in the kHz, GHz, and THz scales would induce longitudinal superconductivity as a transition to phase A from phase B, meaning the generation of long superconducting wires.

    This view suggests that also DNA is a superconductor in the longitudinal direction and that an oscillating AC voltage induces the superconductivity also now. Bandyopadhyay indeed first observed 8 AC resonance frequencies for DNA with frequency scales of GHz, THz, PHz, which suggests that dark photon signals or AC voltages at these frequencies induce DNA superconductivity. According to the model of DNA as a topological quantum computer, DNA is a superconductor also in the transversal degrees of freedom, meaning that there are flux tubes connecting DNA to a lipid layer of the nuclear or cell membrane (see this and this).

  5. Interestingly, the model of Hameroff et al for the helical pathway (see this) assumes that there are three aromatic rings per d=1 nm length along the microtubule. This number is the same as the number of DNA codons per unit length. It is however mentioned that the distance between the aromatic rings trp and phe in MT is about d=2 nm. Does this refer to an average distance, or is d=1 nm just an assumption? In the TGD framework the distance would scale as heff so that also a scaling of the DNA pathway by a factor 6 could be considered. In this case a single tubulin could correspond to a genetic codon.

    If d=1 nm is correct, these helical pathways might give rise to a representation of memetic codons representable as sequences of 21 genetic codons, meaning that there are 2^126 different memetic codons (see this). DNA would represent the lowest level of the hierarchy of consciousness and microtubules the next level. Note that each analog of DNA sequences corresponds to a different current pathway.

  6. What is especially interesting is that a codon and its conjugate always have altogether 3 aromatic cycles. Also phe and trp appearing in MTs have this property, as do tyr and his. Could these 3 cycles give rise to a 3-braid? The braid group B3 is a covering of the permutation group of 3 objects. Since B2 is the Abelian group of integers, B3 is the smallest braid group which can give rise to interesting topological quantum computation.

    B3 is also the knot group of the trefoil knot and the universal central extension of the modular group PSL(2,Z) (a discrete subgroup of the Lorentz group playing a key role in TGD since it defines part of the discrete moduli space for the CDs with the other boundary fixed (see this)). Quite generally, Bn is the mapping class group of a disk with n punctures, fundamental both in string models and in TGD, where the disk is replaced with a partonic 2-surface.
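The counting behind the 2^126 memetic codons mentioned in point 5 can be checked directly: a genetic codon is a triplet over a 4-letter alphabet and so carries 6 bits, and a memetic codon is a sequence of 21 genetic codons.

```python
# Each genetic codon is a triplet over {A, T, C, G}: 4^3 = 64 = 2^6 codons,
# i.e. 6 bits per codon.
codons = 4 ** 3
assert codons == 2 ** 6

# A memetic codon is a sequence of 21 genetic codons,
# so the number of distinct memetic codons is 64^21 = 2^126.
memetic = codons ** 21
print(memetic == 2 ** 126)   # True
```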

For details see the chapter Quantum model for nerve pulse of "TGD and EEG" or the article Impressions created by TSC 2015 conference.

For a summary of earlier postings see Links to the latest progress in TGD.

Two kinds of negentropic entanglements

The most general view is that negentropic entanglement NE corresponds to algebraic entanglement with entanglement coefficients in some algebraic extension of rationals. The condition that the outcome of state function reduction is an eigenspace of the density matrix fixes the density matrix of the final state to be a projector with identical eigenvalues defining the probabilities of the various states.

But what if the eigenvalues and thus also the eigenvectors of the density matrix, which are algebraic numbers, do not belong to the algebraic extensions involved? Can state function reduction occur at all, so that this kind of NE would be stable?

The following argument suggests that also a more general algebraic entanglement could be reasonably stable against NMP, namely the entanglement for which the eigenvalues and eigenvectors of the density matrix are outside the algebraic extension associated with the parameters characterizing string world sheets and partonic 2-surfaces as space-time genes.

The restriction to a particular extension of rationals - a central piece of the number theoretical vision about quantum TGD - implies that the density matrix need not allow diagonalization. The eigenstate basis would involve the algebraic extension defined by the characteristic polynomial of the density matrix: its roots define the needed extension, which could quite well be larger than the original extension. This would make the state stable against state function reduction.
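A toy illustration of this point (my own example, using only elementary arithmetic): a 2x2 density matrix with rational entries whose eigenvalues nevertheless lie outside the rationals, in the extension Q(sqrt(5)).

```python
from fractions import Fraction
import math

# Density matrix with rational entries: rho = [[3/5, 1/5], [1/5, 2/5]].
a, b, c = Fraction(3, 5), Fraction(1, 5), Fraction(2, 5)

trace = a + c                   # 1
det = a * c - b * b             # 1/5
# The eigenvalues solve the characteristic equation x^2 - trace*x + det = 0,
# so they are (trace ± sqrt(disc))/2 with disc = trace^2 - 4*det.
disc = trace * trace - 4 * det  # 1/5

# 1/5 has no rational square root (neither 1 nor 5 needs checking beyond
# perfect squares), so the eigenvalues (1 ± sqrt(1/5))/2 lie in Q(sqrt(5)),
# a strictly larger extension than the rationals containing the entries.
num, den = disc.numerator, disc.denominator
is_rational_square = (math.isqrt(num) ** 2 == num
                      and math.isqrt(den) ** 2 == den)
print(is_rational_square)   # False: diagonalization forces a larger extension

lam_plus = (1 + math.sqrt(1 / 5)) / 2   # ≈ 0.7236
lam_minus = (1 - math.sqrt(1 / 5)) / 2  # ≈ 0.2764
```

Both eigenvalues are positive and sum to 1, so this is a legitimate density matrix; diagonalizing it within the original (rational) extension is impossible.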

If this entanglement is algebraic, one can assign to it a negative number theoretic entropy. This negentropic entanglement is stable against NMP unless the algebraic extension associated with the parameters characterizing string world sheets and partonic 2-surfaces defining space-time genes is allowed to become larger in the state function reduction to the opposite boundary of CD generating a re-incarnated self and producing eigenstates involving algebraic numbers in a larger algebraic extension of rationals. Could this kind of extension be a eureka experience meaning a step forward in cognitive evolution?

If this picture makes sense, one would have both the unitary NE with a density matrix which is a projector, and the algebraic NE for which the eigenvalues and eigenstates of the density matrix lie outside the algebraic extension associated with the space-time genes. Note that the unitary entanglement is "meditative" in the sense that any state basis is possible, and therefore in this state of consciousness it is not possible to make distinctions. This strongly brings in mind the koans of Zen Buddhism and the enlightenment experience. The more general irreducible algebraic entanglement could represent abstractions as rules in which the state pairs in the superposition represent the various instances of the rule.
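The sign flip that makes negentropy possible can be sketched with the number-theoretic variant of Shannon entropy used in TGD, S_p = -Σ_k P_k log(|P_k|_p), with |·|_p the p-adic norm. The sketch below (a minimal toy computation under this definition) evaluates it for a projector density matrix over a 4-dimensional eigenspace, where each probability is 1/4.

```python
import math
from fractions import Fraction

def padic_norm(num, den, p):
    """p-adic norm |num/den|_p = p^(-v), where v is the p-adic valuation."""
    v = 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)

def padic_entropy(probs, p):
    # Number-theoretic Shannon entropy: S_p = -sum_k P_k * log(|P_k|_p).
    return -sum(P * math.log(padic_norm(P.numerator, P.denominator, p))
                for P in probs)

# Projector density matrix over a 4-dim eigenspace: P_k = 1/4 each.
probs = [Fraction(1, 4)] * 4

S2 = padic_entropy(probs, 2)   # |1/4|_2 = 4, so S2 = -log(4) < 0
print(S2 < 0)                  # True: negative entropy, positive negentropy

# Ordinary Shannon entropy is positive for comparison:
S = -sum(float(P) * math.log(float(P)) for P in probs)
print(S > 0)                   # True
```

The ordinary Shannon entropy of the same distribution is +log 4, while its 2-adic counterpart is -log 4: the entanglement carries information rather than measuring ignorance.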

For details see the chapter Negentropy Maximization Principle of "TGD Inspired Theory of Consciousness" or the article Impressions created by TSC 2015 conference.

For a summary of earlier postings see Links to the latest progress in TGD.

Tuesday, June 16, 2015

Impressions created by TSC 2015 conference

Towards a Science of Consciousness conference (TSC 2015) was held in Helsinki June 8-13. The conference was extremely interesting with a lot of excellent lectures, and it is clear that a breakthrough is taking place and quantum theories of consciousness are becoming a respected field of science.

In the article below my impressions about the conference are described. They reflect only my limited scope of interest, and not even that fully, since the number of presentations was so large that it was possible to listen to only a minor fraction of them.

From my point of view the most interesting presentations were related to the experimental findings about microtubules and also DNA. These findings allow a much more detailed view about the bio-molecular level of the self hierarchy, having at the lowest level molecules with aromatic cycles carrying supra currents of pi electron pairs creating the magnetic bodies of these basic selves. DNA, microtubules, and chlorophyll represent the basic biomolecules containing these aromatic cycles. Also neuro-active compounds (neurotransmitters, hallucinogens,…) involve them. The amino-acids phe, trp, tyr, and his would represent subselves (mental images) of proteins in this picture, so that the picture about the molecular self hierarchy is becoming very concrete.

In an earlier posting I already considered a TGD based model for the action of anesthetics.

My impressions about TSC 2015 are described in the article Impressions created by TSC 2015 conference.

Saturday, June 06, 2015

A model for anesthetic action

The mechanism of anesthetic action has remained a mystery although a lot of data exists. The Meyer-Overton correlation suggests that changes occurring at the lipid layers of the cell membrane are responsible for anesthesia, but this model fails. Another model assumes that the binding of anesthetics to membrane proteins is responsible for anesthetic effects, but also this model has problems. The hypothesis that anesthetics bind to the hydrophobic pockets of microtubules looks more promising.

The model should also explain the hyperpolarization of neuronal membranes, also taking place when consciousness is lost. An old finding of Becker is that the reduction or reversal of the voltage between the frontal brain and occipital regions correlates with the loss of consciousness. Microtubules and DNA are negatively charged, and the discovery of Pollack that the so-called fourth phase of water involves the generation of negatively charged regions could play a role in the model. Combining these inputs with TGD inspired theory of consciousness and quantum biology, one ends up with a microtubule based model explaining the basic aspects of anesthesia.

For details see the article TGD based model for anesthetic action or the chapter Quantum model for nerve pulse of "TGD and EEG".

For a summary of earlier postings see Links to the latest progress in TGD.

Wednesday, June 03, 2015

Quantitative model of high Tc super-conductivity and bio-super-conductivity

I have already earlier developed a rough model for high Tc superconductivity. The members of Cooper pairs are assigned with parallel flux tubes carrying fluxes which have either the same or opposite directions. An essential element of the model is the hierarchy of Planck constants defining a hierarchy of dark matters.

  1. In the case of ordinary high Tc superconductivity bound states of charge carriers at parallel short flux tubes become stable as the spin-spin interaction energy becomes higher than the thermal energy.

    The transition to superconductivity is known to occur in two steps, as if two competing mechanisms were at work. A possible interpretation is that at the higher critical temperature Cooper pairs become stable, but the flux tubes are stable only below a rather short scale: perhaps because the spin-flux interaction energy for current carriers is below the thermal energy. At the lower critical temperature the stability would be achieved and supra-currents can flow over long length scales.

  2. The phase transition to superconductivity is analogous to a percolation process in which flux tube pairs fuse by reconnection to form longer superconducting pairs at the lower critical temperature. This requires that the flux tubes carry antiparallel fluxes: this is in accordance with the antiferromagnetic character of high Tc superconductivity. The stability of flux tubes very probably correlates with the stability of Cooper pairs: the coherence length could dictate the typical length of the flux tube.

  3. A non-standard value of heff for the current carrying magnetic flux tubes is necessary, since otherwise the interaction energy of spin with the magnetic field associated with the flux tube is much below the thermal energy.
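The need for a non-standard heff in point 3 can be illustrated with a back-of-the-envelope estimate. The numbers are my own assumptions (B = 1 T, T = 300 K), as is the simplification that the interaction energy scales linearly with n = heff/h; the point is only the order of magnitude of the mismatch.

```python
# Compare the spin-magnetic interaction energy mu_B * B with the thermal
# energy k_B * T to estimate the minimal n = h_eff/h needed for stability.
MU_B = 5.788e-5   # Bohr magneton in eV/T
K_B = 8.617e-5    # Boltzmann constant in eV/K

B = 1.0           # magnetic field in tesla (assumed value)
T = 300.0         # temperature in kelvin (assumed value)

E_spin = MU_B * B            # ≈ 5.8e-5 eV
E_thermal = K_B * T          # ≈ 2.6e-2 eV

# Assuming the interaction energy scales as n = h_eff/h, stability against
# thermal fluctuations requires n of roughly this order:
n_min = E_thermal / E_spin   # ≈ 450
print(E_spin < E_thermal)    # True: ordinary h gives a sub-thermal coupling
```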

There are two energies involved.
  1. The spin-spin interaction energy should give rise to the formation of Cooper pairs with members at parallel flux tubes at the higher critical temperature. Both spin triplet and spin singlet pairs are possible, and also their mixture is possible.

  2. The interaction energy of spins with the magnetic fluxes, which can be parallel or antiparallel, also contributes to the gap energy of the Cooper pair and gives rise to a mixing of spin singlet and spin triplet. In the TGD based model of quantum biology antiparallel fluxes are of special importance, since U-shaped flux tubes serve as kinds of tentacles allowing magnetic bodies to form pairs of antiparallel flux tubes connecting them and carrying supra-currents. The possibility of parallel fluxes suggests that also ferromagnetic systems could allow superconductivity.

    One can wonder whether the interaction of spins with the magnetic field of the flux tube could give rise to a dark magnetization and generate analogs of the spin currents known to be coherent over long length scales and used for this reason in spintronics (see this). One can also ask whether the spin current carrying flux tubes could become stable at the lower critical temperature and make superconductivity possible via the formation of Cooper pairs. This option does not seem to be realistic.

In the article Quantitative model of high Tc super-conductivity and bio-super-conductivity the earlier flux tube model for high Tc superconductivity and bio-superconductivity is formulated in a more precise manner. The model leads to highly non-trivial and testable predictions.
  1. Also in the case of ordinary high Tc superconductivity a large value of heff=n× h is required.

  2. In the case of high Tc superconductivity two kinds of Cooper pairs, which in good approximation belong to the spin triplet representation, are predicted. The average spin of the states vanishes for antiparallel flux tubes. Also superconductivity associated with parallel flux tubes is predicted, which could mean that ferromagnetic systems can become superconducting.

  3. One ends up with the prediction that there should be a third critical temperature not lower than T**= 2T*/3, where T* is the higher critical temperature at which Cooper pairs identifiable as mixtures of Sz=± 1 pairs emerge. At the lower temperature Sz=0 states, which are mixtures of spin triplet and spin singlet states, emerge. At the temperature Tc the flux tubes carrying the two kinds of pairs become thermally stable by a percolation type process involving the reconnection of U-shaped flux tubes to longer flux tube pairs, and supra-currents can run over long length scales.

  4. The model applies also in the TGD inspired model of living matter. Now however the critical temperature for the phase transition in which long flux tubes stabilize is roughly by a factor 1/50 lower than that at which stable Cooper pairs emerge, and corresponds to the thermal energy at physiological temperatures, which also corresponds to the cell membrane potential. The higher energy corresponds to the scale of bio-photon energies (visible and UV range).

For details see the article Quantitative model of high Tc super-conductivity and bio-super-conductivity.

For a summary of earlier postings see Links to the latest progress in TGD.