Wednesday, February 29, 2012

Progress in number theoretic vision about TGD

During the last weeks I have been writing a new chapter Quantum Adeles. The key idea is the generalization of p-adic number fields to their quantum counterparts, and the key problem is what quantum p-adics and quantum adeles mean. A second key question is how these notions relate to various key ideas of quantum TGD proper. The new chapter gives the details: here I just list the basic ideas and results.

What are quantum p-adics and quantum adeles really?

What are quantum p-adics?

The first guess is that one obtains quantum p-adics from p-adic integers by first decomposing them into products of primes l and then expressing the primes l in all possible manners as power series of p, allowing the coefficients to also be larger than p but to contain only prime factors p1 < p. In the decomposition of the coefficients into primes p1 < p these primes are replaced with quantum primes assignable to p.

One could pose the additional condition that the coefficients are smaller than p^N and decompose into products of primes l < p^N mapped to quantum primes assigned with q = exp(i2π/p^N). The interpretation would be in terms of a pinary cutoff. For N=1 one would obtain the counterpart of p-adic numbers. For N>1 this correspondence assigns to an ordinary p-adic integer a larger number of quantum p-adic integers, and one can define a natural projection to the ordinary p-adic integer and its direct quantum counterpart with coefficients a_k < p in the pinary expansion, so that a covering space of p-adics results. One also expects that it is possible to assign what one could call a quantum Galois group to this covering, and the crazy guess is that it is isomorphic with the Absolute Galois Group defined as the Galois group of algebraic numbers as an extension of rationals.

One must admit that the details are not fully clear yet. For instance, one can consider quantum p-adics defined as power series of p^N with coefficients a_n < p^N expressed as products of quantum primes l < p^N. Even in the case that only the N=1 option works, the work has led to a surprisingly detailed understanding of the relationship between different pieces of TGD.
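
To make the first guess concrete, here is a minimal sketch (in Python) of the map for a single pinary coefficient in the simplest N=1 case. It assumes - this is my reading, not something fixed above - that the quantum counterpart of a prime l < p is the standard quantum-group integer [l]_q = (q^l - q^(-l))/(q - q^(-1)) with q = exp(i2π/p^N), and that the map is extended multiplicatively so that it respects the prime decomposition of the coefficient; the function name is only illustrative.

    import cmath

    def quantum_coefficient(a, p, N=1):
        """Quantum counterpart of a coefficient a whose prime factors are < p.

        Sketch only: each prime factor l of a is replaced by the quantum-group
        integer [l]_q = (q^l - q^(-l))/(q - q^(-1)) with q = exp(i*2*pi/p^N),
        and the replacements are multiplied together.
        """
        q = cmath.exp(2j * cmath.pi / p**N)

        def quantum(l):
            return (q**l - q**(-l)) / (q - q**(-1))

        result, m, d = 1, a, 2
        while d * d <= m:               # trial-division factorization of a
            while m % d == 0:
                result *= quantum(d)
                m //= d
            d += 1
        if m > 1:
            result *= quantum(m)
        return result

    # Example: the coefficient 6 = 2*3 for p = 5 maps to [2]_q*[3]_q with q = exp(i*2*pi/5).
    # print(quantum_coefficient(6, 5))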

This step is however not enough for quantum p-adics.

  1. The first additional idea is that one replaces p-adic integers with wave functions in the covering spaces associated with the prime factors l of the integer n. This delocalization would give genuine content to the attribute "quantum", just as it does in the case of the electron in the hydrogen atom.

    The natural physical interpretation of these wave functions would be as cognitive representations of the quantum states in the matter sector, so that momentum, spin and various internal quantum numbers would find a cognitive representation in the quantum Galois degrees of freedom.

    One could talk about self-reference: the unseen internal degrees of freedom associated with p-adic integers would make it possible to represent physical information. Also the ratios of infinite primes reducing to unity give rise to a similar but infinite-dimensional number theoretical anatomy of real numbers and lead to what I call the Brahman=Atman identity.

  2. The second additional idea is to replace numbers with sequences of arithmetic operations - quantum sum +q and quantum product ×q represented as fundamental 3-vertices - and to formulate the basic laws of arithmetic as symmetries of these vertices; natural additional symmetry conditions give rise to additional selection rules. These sequences of arithmetic operations with sets of integers as inputs and outputs are analogous to Feynman diagrams, and the factorization of integers into primes has the decomposition of a braid into braid strands as a direct correlate. One can also group the incoming integers into sub-groups, and the hierarchy of infinite primes describes this grouping.

A beautiful physical interpretation for the number theoretic Feynman diagrams emerges.

  1. The decomposition of the integers m and n of a quantum rational m/n into products of primes l corresponds to the decomposition of two braids into braid strands labeled by primes l. TGD predicts both time-like and space-like braids having their ends at partonic 2-surfaces. These two kinds of braids would naturally correspond to the two co-prime integers defining the quantum rational m/n.

  2. The two basic vertices +q and ×q correspond to the fusion vertex of stringy diagrams and the 3-vertex of Feynman diagrams: both vertices have TGD counterparts and correspond at the Hilbert space level to direct sum and tensor product. Note that the TGD inspired interpretation of +q (direct sum) is different from the string model interpretation (tensor product). The incoming and outgoing integers in the Feynman diagram correspond to Hilbert space dimensions, and the decomposition into prime factors corresponds to the decomposition of the Hilbert space into prime Hilbert spaces as tensor factors.

  3. Ordinary arithmetic operations have an interpretation as tensor product and direct sum, and one can formulate associativity, commutativity, and distributivity as well as product and sum as conditions on Feynman diagrams. These conditions imply that all loops can be transformed away by basic moves, so that the diagram reduces to one obtained by fusing the initial state by sums and products to a single line, which then decays to the outgoing states by co-sum and co-product. Also the incoming lines attaching to the same line can be permuted, and the permutation can only induce a phase factor. The conjecture that these rules hold true also for the generalized Feynman diagrams is obviously extremely powerful and consistent with the picture provided by zero energy ontology. Also a connection with the twistor approach is suggestive.

  4. Quantum adeles for ordinary rationals can be defined as Cartesian products of quantum p-adics and of reals or rationals. For algebraic extensions of rationals a similar definition applies, but allowing only those p-adic primes which do not split into a product of primes of the extension. Number theoretic evolution means an increasing dimension of the algebraic extension of rationals, so that an increasing number of p-adic primes drop out of the adele. This means a selective pressure under which only the fittest p-adic primes survive. The basic question is why Mersenne primes and some primes near powers of two are the survivors.

The connection with infinite primes

A beautiful connection with the hierarchy of infinite primes emerges.

  1. The simplest infinite primes at the lowest level of the hierarchy define two integers having no common prime divisors and thus a rational number, which has an interpretation in terms of time-like and space-like braids characterized by co-prime integers.

  2. Infinite primes at the lowest level code for algebraic extensions of rationals, so that the infinite primes which are survivors in the evolution dictate which p-adic primes manage to avoid splitting. Infinite primes coding for algebraic extensions have an interpretation as bound states, and the most stable bound states - and the p-adic primes able to resist the corresponding splitting pressures - survive.

    At the n:th level of the hierarchy infinite primes correspond to monic polynomials of n variables constructed from prime polynomials of n-1 variables constructed from.... The polynomials of a single variable are in 1-1 correspondence with ordered collections of n rationals. This collection corresponds to n pairs of time-like and space-like braids. Thus infinite primes code for collections of lower level infinite primes coding for... and eventually everything boils down to collections of rational coefficients of the monic polynomials coding for infinite primes at the lowest level of the hierarchy. In generalized Feynman diagrams this would correspond to groups of groups of ... of groups of integers assigned to incoming and outgoing lines.

  3. The physical interpretation is in terms of pairs of time-like and space-like braids having their ends at partonic 2-surfaces, with strands labelled by primes and defining an integer as their product: the rational is the ratio of these integers. From these basic braids one can form collections of braid pairs labelled by infinite primes at the second level of the hierarchy, and so on, and a beautiful connection with the earlier vision about infinite primes as coders of an infinite hierarchy of braids of braids of... emerges. Space-like and time-like braids play a key role in generalized Feynman diagrams and represent rationals, supporting the interpretation of generalized Feynman diagrams as arithmetic Feynman diagrams. The connection with many-sheeted space-time, in which sheets containing smaller sheets define higher level particles, emerges too.

  4. Number theoretic dynamics for ×q conserves the total numbers of prime factors, so that one can talk either about an infinite number of conserved number theoretic momenta coming as multiples of log(p), p prime, or about particle numbers assignable to the primes p: p^n corresponds to an n-boson state, and the finite parts of infinite primes correspond to states with fermion number one for each prime and arbitrary boson number. The infinite parts of infinite primes correspond to fermion number zero in each mode. The two braids could also correspond to braid strands with fermion numbers 0 and 1. The bosonic and fermionic excitations would naturally correspond to the generators of super-conformal algebras assignable to light-like and space-like 3-surfaces.

The interpretation of integers representing particles as Hilbert space dimensions

In the number theoretic dynamics particles are labeled by integers decomposing into primes interpreted as labels for braid strands. Both time-like and space-like braids appear. The interpretation of sum and product in terms of direct sum and tensor product implies that these integers must correspond to Hilbert space dimensions. Hilbert spaces indeed decompose into tensor products of prime-dimensional Hilbert spaces, which are stable against further decomposition.

A second natural decomposition appearing in representation theory is into direct sums. This decomposition would take place for prime-dimensional Hilbert spaces with dimension l, the summands having dimensions a_n p^n read off from the p-adic expansion of l. The replacement of a_n with a quantum integer would mean the decomposition of the summand into a tensor product of quantum Hilbert spaces with quantum prime dimensions and of a p^n-dimensional ordinary Hilbert space. This should relate to the finite measurement resolution.

With this interpretation the ×q vertex would correspond to tensor product and +q to direct sum. Tensor product automatically conserves the number theoretic multiplicative momentum defined by n in the sense that the outgoing Hilbert space is the tensor product of the incoming Hilbert spaces. For +q this conservation law is broken, as the toy sketch below illustrates.
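
The conservation statement is just bookkeeping of prime factors. The following toy sketch (my own illustration; the function names are hypothetical) records the multiset of prime factors - the "number theoretic momenta" log(p) - carried by a Hilbert space dimension and shows that they add up at the ×q vertex but not at the +q vertex.

    from collections import Counter

    def prime_factors(n):
        """Multiset of prime factors of n; each factor p carries a momentum log(p)."""
        f, d = Counter(), 2
        while d * d <= n:
            while n % d == 0:
                f[d] += 1
                n //= d
            d += 1
        if n > 1:
            f[n] += 1
        return f

    def product_vertex(m, n):
        """xq vertex: outgoing dimension m*n (tensor product); momenta are conserved."""
        return m * n, prime_factors(m) + prime_factors(n)

    def sum_vertex(m, n):
        """+q vertex: outgoing dimension m+n (direct sum); prime content is not conserved."""
        return m + n, prime_factors(m + n)

    # product_vertex(6, 10) -> (60, Counter({2: 2, 3: 1, 5: 1}))
    # sum_vertex(6, 10)     -> (16, Counter({2: 4}))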

Connection with the hierarchy of Planck constants, dark matter hierarchy, and living matter

The obvious question concerns the interpretation of the Hilbert spaces assignable to braid strands. The hierarchy of Planck constants interpreted in terms of a hierarchy of phases behaving like dark matter suggests the answer here.

  1. The enormous vacuum degeneracy of Kähler action implies that the normal derivatives of imbedding space coordinates, both at space-like 3-surfaces at the boundaries of CD and at light-like wormhole throats, are many-valued functions of the canonical momentum densities. Two directions are necessary by the strong form of holography implying effective 2-dimensionality, so that only partonic 2-surfaces and their tangent space data are needed instead of 3-surfaces. This implies that space-time surfaces can be regarded as surfaces in local singular coverings of the imbedding space. At partonic 2-surfaces the sheets of the coverings coincide.

  2. By the strong form of holography there are two integers characterizing the covering, and the obvious interpretation is in terms of the two integers characterizing infinite primes and the time-like and space-like braids decomposing into braids labelled by primes. The braid labelled by the prime l would naturally correspond to a braid strand and its copies at the l points of the covering. The state space defined by amplitudes in the n-fold covering would be n-dimensional and decompose into a tensor product of state spaces with prime dimension. These prime-dimensional state spaces would correspond to wave functions in prime-dimensional sub-coverings.

  3. Quantum primes are obtained as different sum decompositions of primes l and correspond to direct sum decompositions of the l-dimensional state space associated with the braid defined by the l-fold sub-covering. What suggests itself strongly is a symmetry breaking. This breaking would mean the geometric decomposition of the l strands into subsets with numbers of elements proportional to powers p^n of p. Could a_n p^n in the expression of l as ∑ a_k p^k correspond to a tensor product of an a_n-dimensional space with the finite field G(p,n)? Does this decomposition to state functions localized to sub-braids relate to symmetries and symmetry breaking somehow? Why would the a_n-dimensional Hilbert space be replaced with a tensor product of quantum-p_1-dimensional Hilbert spaces? The proper understanding of this issue is needed in order to have a more rigorous formulation of quantum p-adics.

  4. The number theoretical dynamics would therefore relate directly to the hierarchy of Planck constants. This would also dictate what happens to the Planck constants at the two vertices. There are two options.

    1. For the ×q vertex the outgoing particle would have a Planck constant which is the product of the incoming Planck constants, using the ordinary Planck constant as unit. For the +q vertex the Planck constant would be the sum. This stringy vertex would lead to the generation of particles with Planck constant larger than its minimum value. For ×q two incoming particles with ordinary Planck constant would give rise to a particle with ordinary Planck constant, just as one would expect for ordinary Feynman diagrams.

    2. Another possible scenario is the one in which the Planck constant is given by hbar/hbar0 = n-1. In this case particles with ordinary Planck constant fuse to particles with ordinary Planck constant in both vertices.

    For both options the feed of particles with a non-standard value of Planck constant into the system can lead to a fusion cascade generating dark matter particles with a very large value of Planck constant, as the toy sketch below illustrates. Large Planck constant means macroscopic quantum phases assumed to be crucial in TGD inspired biology. The obvious proposal is that inanimate matter transforms to living - and thus also to dark - matter by this kind of phase transition in the presence of a feed of particles - say photons - with a non-standard value of Planck constant.
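
The bookkeeping of the first option is trivial but worth spelling out. The toy lines below (my own illustration, not part of the text above) just iterate the rule "product at ×q, sum at +q, with the ordinary Planck constant as unit" and show how a feed of particles with non-standard Planck constant drives the value upwards while ordinary particles stay ordinary at ×q.

    def fuse_product(h1, h2):
        """xq vertex, option 1: Planck constants multiply (hbar0 = 1 as unit)."""
        return h1 * h2

    def fuse_sum(h1, h2):
        """+q vertex, option 1: Planck constants add."""
        return h1 + h2

    h = 1
    for _ in range(5):          # feed five particles with hbar = 2 via +q fusions
        h = fuse_sum(h, 2)
    print(h)                    # 11: the Planck constant grows step by step
    print(fuse_product(1, 1))   # 1: two ordinary particles give an ordinary one at xq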

Summary

The work with quantum p-adics and quantum adeles and the generalization of the number field concept to a quantum number field in the framework of zero energy ontology has led to amazingly deep connections between p-adic physics as physics of cognition, infinite primes, the hierarchy of Planck constants, the vacuum degeneracy of Kähler action, generalized Feynman diagrams, and braids. The physics of life would rely crucially on the p-adic physics of cognition. The optimist inside me even insists that the basic mathematical structures of TGD are now rather well understood. This fellow even uses the word "breakthrough" without blushing. I have of course continually admonished him for his reckless exaggerations, but in vain.

The skeptic inside me continues to ask how this construction could fail. A possible Achilles heel relates to the detailed definition of the notion of quantum p-adics. For N=1 it reduces essentially to the ordinary p-adic number field mapped to reals by a quantum variant of canonical identification. Therefore most of the general picture survives even for N=1. What would be lost are the wave functions in the space of quantum variants of a given prime and also the crazy conjecture that the quantum Galois group is isomorphic to the Absolute Galois Group.

For details see the new chapter Quantum Adeles of "Physics as Generalized Number Theory".

Thursday, February 23, 2012

Error in OPERA experiment?

The rumor mill in particle physics has gone rather wild. Lubos rumored just some time ago that CDF would provide additional sigmas for Higgs at 125 GeV. It turned out that CDF had found no evidence for the signal. Lubos also spread rumors about support for stop: not even indications had actually been found. The latest rumor is that the OPERA collaboration has detected two errors in the experiment suggesting super-luminality of neutrinos: the first error would be technical and the second one related to the analysis. I could not however make head or tail of the published popular pieces of text.

The reader is encouraged to see whether he/she can make any sense of the following cryptic piece of text (which is "popular" and should therefore be easy to understand!).

According to sources familiar with the experiment, the 60 nanoseconds discrepancy appears to come from a bad connection between a fiber optic cable that connects to the GPS receiver used to correct the timing of the neutrinos’ flight and an electronic card in a computer. After tightening the connection and then measuring the time it takes data to travel the length of the fiber, researchers found that the data arrive 60 nanoseconds earlier than assumed. Since this time is subtracted from the overall time of flight, it appears to explain the early arrival of the neutrinos. New data, however, will be needed to confirm this hypothesis.

Those bloggers who are unable to imagine modifications of Einstein's theory (see for instance this) do not hesitate to take the rumor as a final truth, and have no difficulty forgetting that also other experiments have seen indications of super-luminality. Sad to see that so many science bloggers behave like third rank politicians. This tends to give a totally wrong view of the people working in the field. My day was saved by Cosmic Variance, where the rumor was taken as a rumor and nothing else.

In TGD framework one has sub-manifold gravity and the operationally defined maximal signal velocity varies and can depend on particle species. I am however unable to fix the magnitude of the variation from the maximal signal velocity for photons quantitatively so that the possible neutrino sub-luminality cannot kill TGD whereas super-luminality can only support the notion of sub-manifold gravity. One must just wait and see.

Addition: There is now a New Scientist article about the possible measurement error. Two errors with opposite effects have been identified. The first error relates to a malfunctioning signal cable communicating the time measured by the CERN clock to the Gran Sasso clock: a 60 ns lapse in the signal transfer would mean that neutrinos seem to arrive 60 ns earlier than they should. For me this is not a question about whether Einstein was - or TGD is - wrong or right, and it is interesting to see what the final answer will be. No need for ranting;-)!

Addition: New Scientist contains another popular article with the title "Light's speed limit is safe for now". Is someone threatening someone? Why is a possible anomaly, which could have extremely far-reaching consequences allowing one to generalize Einstein's theory rather than destroy it, seen as a threat? How can people with this attitude make objective decisions? How many scientific decision makers and researchers have this defensive attitude?

Addition: Matt Strassler has an excellent blog posting about the situation in the OPERA experiment.

Monday, February 20, 2012

Progress in understanding of quantum p-adics

Quantum arithmetics is a notion which emerged as a possible resolution of a long-lived challenge: finding a mathematical justification for the canonical identification mapping p-adics to reals, which plays a key role in p-adic mass calculations. The model for the Shnoll effect was the bridge leading to the discovery of quantum arithmetics.

I have been gradually developing the notion of quantum p-adics and during the weekend made quite a step of progress in understanding the concept, and I dare say that the notion now rests on a sound basis.

  1. What quantum arithmetics suggests is a modification of p-adic numbers obtained by replacing p-adic pinary expansions with their quantum counterparts, allowing the coefficients of prime powers to be integers not divisible by p. A further important constraint is that the prime factors of the coefficients are smaller than p. If the coefficients are smaller than p, one obtains something reducing effectively to the ordinary p-adic number field.

  2. A further constraint is that quantum integers respect the decomposition of an integer into powers of primes. Quantum p-adic integers are to p-adic integers what the integers of an extension of a number field are to the number field, and one can indeed identify a Galois group Gp for each prime p and form the adelic counterpart of this group as the Cartesian product of all Gp:s.

  3. After various trials it turned out (this is what motivated this posting!) that quantum p-adics are indeed quantal in the sense that one can assign to a given quantum p-adic integer n a wave function at the orbit of the corresponding Galois group, which decomposes into the Galois groups of the prime factors of n.
    1. The basic conditions are that ×q and +q satisfy the basic associativity and distributivity laws. These conditions are extremely powerful and can be formulated in terms of number theoretic Feynman diagrams assignable to sequences of arithmetical operations and their co-algebra counterparts. This brings in physical insight.

    2. One can interpret ×q and +q and their co-algebra operations as 3-vertices of number theoretical Feynman diagrams describing algebraic identities X=Y having a natural interpretation in zero energy ontology. The two vertices have direct counterparts as the two kinds of basic topological vertices in quantum TGD, namely the stringy vertices in which a 3-surface splits and the vertices analogous to those of Feynman diagrams in which lines join along their 3-D ends. Only the latter vertices correspond to particle decays and fusions, whereas stringy vertices correspond to the splitting of a particle path into two paths and simultaneous propagation along both of them: this is by the way one of the fundamental differences between quantum TGD and string models. This plus the assumption that the Galois groups associated with primes define symmetries of the vertices allows one to deduce very precise information about the symmetries of the two kinds of vertices needed to satisfy associativity and distributivity, and actually fixes them highly uniquely, and therefore determines the corresponding zero energy states having collections of integers as counterparts of incoming positive energy (or negative energy) particles.

    3. Zero energy ontology leads naturally to zero energy states for which time reversal symmetry is broken in the sense that either the positive or the negative energy part corresponds to a single collection of integers as incoming lines. What is fascinating is that the prime decomposition of an integer corresponds to a decomposition of a braid into strands. C and P have interpretations as the formation of multiplicative and additive inverses of quantum integers, and CP=T changes the positive and negative energy parts of the number theoretic zero energy states.

    4. This gives strong support for the old conjecture that generalized Feynman diagrams have a number theoretic interpretation and allow moves transforming them to tree diagrams - this generalization of old-fashioned string duality is also an old romantic idea of quantum TGD, which I however gave up as too "romantic". I noticed the analogy of Feynman diagrams with algebraic expressions but failed to realize how extremely concrete the connection could be. What was left of the idea were some brief comments in Appendix A: Quantum Groups and Related Structures to one of the chapters of "Towards M-matrix".

      The moves for generalized Feynman diagrams would code for the associativity and distributivity of quantum arithmetics, and we have actually learned them in elementary school as the process of simplifying algebraic expressions (a cartoon of such a move is sketched after this list)! Also braidings with strands labeled by the primes dividing the integer emerge naturally, so that the connection with quantum TGD proper becomes very strong and consistent with the earlier conjecture, inspired by the construction of infinite primes, stating that transition amplitudes have a purely number theoretic meaning in ZEO.

  4. Canonical identification finds a fundamental role in the definition of the norm for both quantum p-adics and quantum adeles. The construction is also consistent with the notion of number theoretic entropy, which can also have negative values (this is what makes living systems living!).

  5. There are arguments suggesting that quantum p-adics form a field - one might say "quantum field" - so that also differential calculus and even integral calculus would make sense since quantum p-adics inherit almost well-ordering from reals via canonical identification.

  6. One can also generalize the construction to algebraic extensions of rationals. In this case the coefficients of quantum adeles are replaced by rationals in the extension, and only those p-adic number fields for which the p-adic prime does not split into a product of primes of the algebraic extension are kept in the quantum adele associated with the rationals. This construction gives a first argument in favor of the crazy conjecture that the Absolute Galois group (AGG) is isomorphic with the Galois group of quantum adeles.
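
The cartoon below (my own illustration, not the actual TGD diagrammatics) makes the elementary-school analogy mentioned above explicit: a "diagram" is a nested expression built from integers and the two 3-vertices, and one elementary move - distributivity, a*(b+c) -> a*b + a*c - rewrites the diagram without changing its value.

    # A 'diagram' is either an integer (external line) or a tuple
    # ('+', a, b) / ('*', a, b) standing for the +q / xq 3-vertex.

    def evaluate(diagram):
        """Value of the diagram; diagrams with the same value are related by moves."""
        if isinstance(diagram, int):
            return diagram
        op, a, b = diagram
        return evaluate(a) + evaluate(b) if op == '+' else evaluate(a) * evaluate(b)

    def distribute(diagram):
        """One elementary move, a*(b+c) -> a*b + a*c, applied recursively."""
        if isinstance(diagram, int):
            return diagram
        op, a, b = diagram[0], distribute(diagram[1]), distribute(diagram[2])
        if op == '*' and isinstance(b, tuple) and b[0] == '+':
            return ('+', distribute(('*', a, b[1])), distribute(('*', a, b[2])))
        return (op, a, b)

    d = ('*', 2, ('+', 3, 4))
    print(evaluate(d), evaluate(distribute(d)))   # 14 14: the move preserves the value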

To sum up, the vision about "Physics as generalized number theory" can also be transformed into "Number theory as quantum physics"!

For details see the new chapter Quantum Adeles of "Physics as Generalized Number Theory".

Saturday, February 18, 2012

The anatomy of state function reduction

In a comment to the previous posting Ulla gave a link to an interesting article by George Svetlichny describing an attempt to understand free will in terms of quantum measurement. After reading the article I found myself explaining once again to myself what state function reduction in the TGD framework really means.

The proposal of Svetlichny

The basic objection against assigning free will to state function reduction in the sense of wave mechanics is that state function reduction, from the point of view of an outsider, is like playing dice. One can of course argue that for an outsider any form of free will looks like throwing dice, since a causally effective experience of free will is accompanied by non-determinism. We simply cannot know what the experience possibly associated with the state function reduction is. The lesson is that we must carefully distinguish between two levels: the single particle level and the ensemble level - subjective and objective. When we can say that something is random, we are talking about ensembles, not about a single member of an ensemble.

The author takes the objection seriously and notices that quantum measurement means a division of the system into three parts: measured system, measuring system and external world, and argues that in some cases this division might not be unique. The choice of this division would have an interpretation as an act of free will. I leave it to the reader to decide whether this proposal is plausible or not.

TGD view about state function reduction

What can one say about the situation in the TGD framework? There are several differences as compared to the standard measurement "theory", which is just a set of ad hoc rules combined with the Born rule - the latter applies naturally also in the TGD framework and is something I do not regard as ad hoc in the infinite-D context.

I have considered the general anatomy of the quantum jump in zero energy ontology (ZEO) from a slightly different point of view here. In the sequel I will discuss the possible anatomy of the state function reduction part of the quantum jump.

  1. TGD ontology differs from the standard one. Space-time surfaces and quantum states as such are zombies in the TGD Universe: consciousness is in the quantum jump. Conscious experience is in the change of the state of the brain; the brain state as such is not conscious. Self means the integration of quantum jumps into higher level quantum jumps, and the hierarchy of quantum jumps and the hierarchy of selves can be identified in ZEO. It has the hierarchy of CDs and space-time sheets as geometrical correlates. In the TGD Universe brain and body are not conscious: rather, conscious experience is about brain and body, and this leads to the illusion caused by the assimilation with the target of sensory input: I am what I perceive.

  2. In the TGD framework one does not assume the division of the system into a product of measured system, measuring system, and external world before the measurement. Rather, these kinds of divisions are outcomes of the state function reduction, which is part of the quantum jump involving also the unitary process. Note that standard measurement theory is not able to say anything about the dynamics giving rise to this kind of division.

  3. The state function reduction cascade as a part of the quantum jump - this holistic view is one new element - proceeds in zero energy ontology (ZEO) from long to short length scales, CD→sub-CDs→..., and stops when the Negentropy Maximization Principle (NMP, defining the variational principle of consciousness, is also something new) does not allow the reduction of entanglement entropy for any pair of subsystems of a subsystem un-entangled with the external world. This is the case if the sub-system in question is such that in all divisions into two parts the parts are negentropically entangled or form an entangled bound state.

    For a given subsystem occurring in the cascade the splitting into an unentangled pair of measured and measuring system can take place if the entanglement between these subsystems is entropic. The splitting takes place for the pair with the largest entanglement entropy and defines the measuring and the measured system.

    Who measures who? This seems to be a matter of taste and one should not talk about measuring system as conscious entity in TGD Universe, where consciousness is in quantum jump.

  4. The factorization of an integer into primes is a rather precise number theoretical analogy for what happens, and the analogy might actually have a deeper mathematical meaning, since Hilbert spaces with prime dimension cannot be decomposed into tensor products. Any factorization of an integer into a product of primes corresponds to a cascade of state function reductions. At the first step the division into two integers takes place, and several alternative divisions are possible. The pair for which the reduction of entanglement entropy is largest is preferred. The resulting two integers can be further factorized into two integers, and the process continues and eventually stops when all factors are primes and no further factorization is possible.

    One could even assign to any decomposition n = rs the analogs of entanglement probabilities as p_1 = log(r)/log(n) and p_2 = log(s)/log(n). NMP would favor the divisions into factors r and s which are as near to each other as possible, so that p_1 and p_2 are as near to 1/2 as possible (a toy sketch of this cascade follows the list below).

    A negentropically entangled system is like a prime. Note however that these systems can still make an analog of state function reduction, which does not split them but increases the negentropy for all splittings of the system into two parts. This would be possible only in the intersection of the real and p-adic worlds, that is for living matter. My cautious proposal is that just this kind of systems - living systems - can experience free will: either in the analog of the state function reduction process increasing their negentropy or in the state function reduction process reducing their entanglement with the environment.

  5. In standard measurement theory the observer chooses the measured observables and the theory says nothing about this process. In TGD the measured observable is the density matrix for a pair formed by two entangled parts of a sub-system: the division for which the negentropy gain in the quantum measurement is maximal defines the pair. Therefore both the measurement axis and the pair representing the target of measurement and the measurer are selected in the quantum jump.

  6. Quantum measurement theory assumes that measurement correlates classical long range degrees of freedom with quantal degrees of freedom. One could say that the direction of the pointer of the measurement apparatus correlates faithfully with the value of the measured microscopic observable. This requires that the entanglement between microscopic and macroscopic systems is reduced.

    I have identified the "classical" degrees of freedom in TGD framework as zero modes which by definition do not contribute to the line-element of WCW although the WCW metric depends on zero modes as external parameters. The induced Kähler field represents an infinite number of zero modes whereas the Hamiltonians of the boundaries of CD define quantum fluctuating degrees of freedom.

    The reduction of the entanglement between zero modes and quantum fluctuating degrees of freedom is an essential part of the quantum measurement process. Also state function reductions between microscopic degrees of freedom are predicted to occur, and this kind of reductions lead to decoherence, so that one can apply a quantum statistical description and derive Boltzmann equations. Also state function reductions between different values of zero modes are possible, and one could perhaps assign "telepathic" effects to them.

    The differences with respect to the standard quantum measurement theory are that several kinds of state function reductions are possible and that the division into classical and quantum fluctuating degrees of freedom has a purely geometric meaning in the TGD framework.

  7. One can even imagine quantum parallel state function reduction cascades. This would make possible quantum parallel dissipation, which would be something new. My original proposal was that in hadronic physics this could make possible a state function reduction cascade proceeding in quark scales while hadronic scales would remain entangled so that one could apply statistical description to quarks as parts of a system, which is quantum coherent in hadronic length scale.

    This looks nice but...! It is a pity that eventually an objection pops up against every idea irrespective of how cute it looks. The p-adic primes associated with light quarks are larger than that associated with the hadron, so that quarks - or rather their magnetic bodies - are larger than the hadron's magnetic body. This looks strange at first but actually conforms with the Uncertainty Principle, and the observation that the charge radius of the proton is slightly smaller than predicted (see this) gives support for this picture. Geometrically the situation might change if quarks are highly relativistic and the color magnetic fields of quarks are dipole fields compressed to a cigar-like shape: Lorentz contraction could reduce the size scale of their magnetic bodies in the direction of their motion. [Note that the p-adic length scale hypothesis applies in the rest system of the particle, so that Lorentz contraction is in conflict with it]. The situation remains unsettled.
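
To make the factorization analogy of point 4 concrete, here is a toy sketch (my own illustration; the names and the greedy recursion are hypothetical) which, given an integer n, always performs the split n = r*s with the largest "entanglement entropy" computed from p_1 = log(r)/log(n) and p_2 = log(s)/log(n), and recurses until only primes - the analogs of systems that cannot be split further - remain.

    import math

    def reduction_cascade(n):
        """Toy analog of the state function reduction cascade as factorization.

        For each split n = r*s the 'entanglement probabilities' are taken to be
        p1 = log(r)/log(n) and p2 = log(s)/log(n); the split with the largest
        entropy -p1*log(p1) - p2*log(p2) is performed first, and the cascade
        stops at primes.
        """
        def splits(m):
            return [(r, m // r) for r in range(2, int(m**0.5) + 1) if m % r == 0]

        def entropy(r, s):
            p1 = math.log(r) / math.log(r * s)
            p2 = math.log(s) / math.log(r * s)
            return -(p1 * math.log(p1) + p2 * math.log(p2))

        pairs = splits(n)
        if not pairs:                   # n is prime: no further reduction
            return [n]
        r, s = max(pairs, key=lambda rs: entropy(*rs))
        return reduction_cascade(r) + reduction_cascade(s)

    # reduction_cascade(60) first splits 60 into the most balanced pair 6*10
    # and ends with the primes [2, 3, 2, 5].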

Further questions

There are many other interesting issues about which my understanding could be much better.

  1. In ZEO the choice of the quantization axes would fix the moduli of the causal diamond CD: the preferred time direction defined by the line connecting the tips of the CD, the spin quantization axis, etc. This choice certainly occurs. Does it reduce to the measurement of a density matrix for some decomposition of some subsystem into a pair? Or should one simply assume state function reductions also at this level, meaning a localization to a sector of WCW corresponding to a given CD? This would involve localization in the moduli space of CDs, selecting some boost of a CD with a fixed quantized proper time distance between its tips, fixed spin directions for positive and negative energy parts of zero energy states defined by light-like geodesics at its light-like boundary, preferred complex coordinates for CP2, etc.

  2. Zero energy states are characterized by the arrow of geometric time in the sense that either the positive or the negative energy parts of the states have well-defined particle numbers and single particle quantum numbers, but not both. State function reduction is possible only for the positive or the negative energy part of the state but not both. This should relate very closely to the fact that our sensory percepts defined by state function reductions are mostly about the upper or lower boundary of the CD. I have discussed this in a previous posting.

  3. In ZEO quantum jumps can also lead to the generation of new sub-Universes, sub-CDs carrying zero energy states. Quantum jumps can also involve phase transitions changing p-adic space-time sheets to real ones, and these could serve as quantum correlates for intentional actions. Also the reverse process changing matter to thoughts is possible. These possibilities are totally unimaginable in the quantum measurement theory for systems describable by wave mechanics.

  4. There is also the notion of finite measurement resolution described in terms of inclusions of hyperfinite factors at the quantum level and in terms of braids at the space-time level.

To summarize, a lot of theory building is needed in order to fuse all the new elements into a coherent framework. In this framework standard quantum measurement theory is only a collection of ad hoc rules and can catch only a small part of what really happens. Certainly, standard quantum measurement theory is far from being enough for the purposes of a consciousness theorist.

Friday, February 17, 2012

Views about free will

Now and then comes the day when you feel that you have said all that might possibly interest anyone somewhere in this vast Universe, and even an attempt to think about some problem creates a feeling of deep disgust. I try to escape this depressive mood by meandering around the web in the hope that some colleague or blogger might have written something original. Usually the outcome is a disappointment. Original and not obviously wrong thoughts are as rare as genuine anomalies.

This kind of cheerless wandering around the web led me to read some postings and articles about free will. Even some physicists have now accepted "free will" into their vocabulary. The fQXI conference about the nature of time, held on a boat sailing from Norway towards Copenhagen last summer, had inspired several blog postings. Also I wrote comments about the excellent lecture of David Eagleman about perceived time. This kind of sailing trips cost, and it is good if they induce interaction between people with different backgrounds: now at least physicists, neuroscientists, computer scientists and philosophers were solving both the problem of time and the problems caused by sea sickness at one blow.

I did not find it surprising that I did not find anything surprising in these postings. The common feature of all these articles is that quite too much is assumed. Sean Carroll as a decent reductionist makes especially strong assumptions. All the writers have managed to remain unaware of the dramatic distinctions between subjective time and the geometric time of the physicist. They also make the same error: in the process of trying to understand free will scientifically their first step is to carefully eliminate conscious mind from the picture. The outcome is free will as something effective and emergent, or free will as resulting from a deterministic but non-predictable/non-computable process. My humble question is: why on earth would something very complex or non-computable generate the sensation "I decide to do this!"?! Non-deterministic behavior serves as a correlate of free will, but non-predictable (but possibly deterministic) behavior does not imply the experience of free will.

Every writer grasps something essential but fails to see the essence of the problem and to connect it with many related problems like the puzzle of time and the basic paradox of quantum measurement theory. One should not however be too critical, since the position of the writers is unrewarding. Being names in the blogosphere, they have been invited to solve the problem of time with minimal background: this is like solving some deep problem in topology with the background given by a couple of hastily read Wikipedia articles.

I was a little bit disappointed but understood that I must also realize that understanding free will is as difficult as understanding the nature of time. It requires a lot of time and a flash of genius: a sea trip from Norway to Copenhagen with National Geographic Explorer - even in good company - need not be enough to spark this kind of flash. I have been trying for more than 15 years to communicate my own flash of genius relating to free will and the relationship between experienced time and the geometric time of the physicist, but it seems that this has been a waste of time. They must discover it themselves! Let us hope for better luck during the next cruise! In the following are some more detailed comments about the articles of the people who participated in the trip.

Sabine Hossenfelder: Free Will function

Sabine Hossenfelder has a posting titled "Free Will function". I agree with Sabine that the idea about emergent free will is self deception. Free will does not emerge from a deterministic microscopic dynamics.

The believers in emergence say that free will is an effective concept. Not real but useful. If the system in question is complex enough and behaves non-predictably as seen by an outsider, one can say that it has effective free will. But why would the impossibility of predicting a deterministic dynamics in practice generate the experience "I will do this!"? There is absolutely no justification for this belief.

A good objection against this identification comes from neuroscience and is described in the article The Brain on Trial by David Eagleman. People suffering from Tourette's syndrome, split brain patients, persons with side personalities, and patients with choreic motions behave from the point of view of an outsider as if they had free will. Using biblical language: they act as if possessed. They do not experience free will. Who wills? Who uses the biological body of the patient? The same questions can be asked when people who have committed mass murder become conscious and begin to wonder what these bloody hands have done. Who used these hands? Are we merely our brains and bodies? Who uses my biological body? What is this "me"? Is this particular biological body used only by a single intentional agent, by a single "me" only? I could continue by telling about the notion of magnetic body but will not do it here.

An acidic off-topic side remark: effective theories have become the basic curse of theoretical physics today. No one can seriously claim that string models say anything about the world of experimental physicists. But there is a loophole. By postulating an effective field theory approach one can build an entire landscape of effective theories. This is nonsense but it works. The only honest reaction would be to admit that string models are nice theories but not theories about the world we live in.

Sabine Hossenfelder suggests as a solution something that she calls the free will function. Sabine considers a machine spitting out the digits of π. This process is fully deterministic, but an outsider has no means of predicting what the next digit will be and what number the digit sequence represents unless he manages to get the program code. The proposal is that our brain has this kind of free will function. The strange assumption is that the inability to predict would in some mysterious manner generate the experience of free will. But Sabine as a physicist has learned that one must forget all subjective elements when doing science. In this mental framework the only conceivable goal of a theory of consciousness is to eliminate it. The fruitless searches for "consciousness modules" assumed to reside somewhere in the brain are fruits of similar "consciousness as function" thinking.

Sean Carroll: Free will as real as baseball

Also Sean Carroll has written about free will in his posting "Free will as real as baseball". Sean belongs to the effective theory camp and sees free will as a convenient tool of description just like baseball is seen by a reductionist as a convenient abstraction to describe the dynamics of a condensed matter system.

Sean makes two very strange claims.

  1. The first strange claim is that free will is inconsistent with the laws of physics. This is the case only if the experienced time and the geometric time of physicists are identified. They are not one and the same thing, as even a child realizes. Experienced time is irreversible and there is no subjective future. Geometric time is reversible and future and past are in the same position. In general relativity a 4-D space-time region becomes the basic entity instead of the time=constant snapshot which is the basic entity according to Newtonian thinking. Amusingly, all the writers except Scott Aaronson seem to belong to the species of Newtonians as far as their views about time are concerned. The first years of scientific education involve really heavy social conditioning, and it is really hard to unlearn even the obviously wrong beliefs.

  2. The second strange claim of Sean Carroll is that the physics is completely understood in the everyday realm! Really! Do we really understand the physics underlying living matter?! I cannot help it: this kind of loose textbook statement irritates me so much that suddenly the dull depressive mood is gone and I am full of adrenaline!

Interestingly, Sean Carroll notices the analogy of the poorly understood notion of free will with the poorly understood notion of time. The arrow of time is in conflict with microscopic reversibility, but - according to Sean Carroll - physicists do not see this as a problem, so that it is not a problem. Continuing in the same spirit: if billions of Chinese believe in communism, then Marxism is the only correct view about society and is indeed a law of Nature! The effective theory solution is simple: also the arrow of time somehow emerges. Exactly how? This we do not understand, but it does not matter.

This is self deception. One should admit this and really try to understand the second law. If one does this, the first observation is that Boltzmann's equations are deduced by assuming the occurrence of state function reductions in a time scale much shorter than the time scale of observations. State function reduction is what makes quantum physics non-deterministic at the level of a single quantum system - and also internally inconsistent: the determinism of the Schrödinger equation is in blatant conflict with state function reduction if one identifies experienced time with the geometric time of the physicist. One should be able to resolve this logical flaw, and this requires that the two times are different - something which of course even a child knows! If we have two times we have also two independent causalities: the causality of field equations and that of "free will". This would be the first step towards the solution.

Sean Carroll also presents what he calls the consequence argument. The argument begins with an innocent looking statement: our past is fixed. Therefore free will obeying field equations is impossible, since it would change both the future and the past. Wrong again: the assumption about a fixed past in the geometric sense need not be true. For the subjective past it is. Already Wheeler was led to propose that in state function reduction the geometric past changes: see Wheeler's delayed choice experiment. Maybe Wheeler's general relativistic background helped him to make this conceptual leap, which leads very near to the TGD view about the quantum jump.

In the TGD framework quantum states are superpositions of classical histories, and quantum jumps replace them with new ones so that also the average geometric past changes. The finding of Libet that in a volitional act neural activity begins a fraction of a second before the conscious decision supports the idea that we are replacing our geometric past with a new one all the (subjective) time.

Sean Carroll also notices the ethical aspect of the problem. If we really believe that free will is an illusion, we have no justification for moral rules. The criminal was doomed to perform his crime at the moment of the Big Bang, and we cannot therefore accuse him.

Of course, there could be something badly wrong in the brain of the mass murderer, and it has indeed become clear that our behavior correlates strongly with biology. This does not however mean that free choices are not possible. A brain disorder only changes the probabilities of the different outcomes of the choices. We have the experience of free will, as every reader can testify. This we must accept, and we must try to understand the physical correlates of this experience irrespective of whether the free will is real or not.

In fact, neuroscience has led to quite concrete progress in the understanding of the correlations between biology and behavior. This has also practical consequences. Many mass murderers have been victims of child abuse or have suffered from a brain tumor. This does not mean that we should allow mass murderers to continue with their rare hobby. We can however do our best to prevent child abuse. Also the degeneration of some regions of the frontal lobes can lead to highly asocial behaviors when stimuli usually inhibited are not inhibited anymore. One could say that there are competing free wills using the same biological body, and the one wanting to perform the mass murder wins.

These issues were discussed already in the times of Dostojevski and Turgeniev. The fashionable thinking was that we are nothing but physiology and that we can indeed forget the rules of morality. The people propagating this view and trying to live according to this philosophy were known as nihilists: they were mad but fully logical in their madness. Many people calling themselves skeptics today are surprisingly near to these views. Thank God, most of us are not too strict in our logic and follow our conscience rather than materialistic dogmas.

Scott Aaronson's view

G. Musser has summarized computer scientist Scott Aaronson's talk about free will.

Also Scott Aaronson studies the idea of reducing free will to behavior observed from outside. Aaronson's thought experiment considers a Turing-like test allowing one to decide whether you have free will. A computer model of you is built using all available data about the initial state of your brain: this of course assumes determinism, or at least quantum statistical determinism. If the computer is able to mimic your behavior faithfully, one can say that you have no free will. The proponent of effective free will might say that the longer the needed computer code is, the more effective free will you have. This kind of free-will-meter is of course not possible in practice except with some accuracy, so that the whole thing reduces to mere mimicry, a kind of parameter fit.

Aaronson presents the no-cloning theorem of quantum theory as a first-principles objection against the Turing test of the free-will-meter. Even in principle it is not possible to construct a complete copy of a brain state to make a complete simulation possible. This kind of machine would be successful in what Aaronson calls the Toddler test, but this would be a fake success. Any toddler says completely predictably "No" to any question. We however know that the toddler expresses by behaving irrationally that he/she has discovered his/her free will (but can this kind of free will be effective?)!

Aaronson brings in special relativity and notices that free will also means backward causation if it is to be consistent with the causality of field equations. From this it would be only a short step to the realization that the causality of free will could act in the space of quantum states defined as superpositions of solutions of classical field equations consistent with holography in the sense that a 3-D section determines the entire space-time - at least below a certain scale! The problem would have been solved! Scott makes a near miss!

To sum up, Aaronson dimly realizes that in general relativity - and in any 4-D Universe obeying general coordinate invariance - we live in a kind of block world consisting of 4-D blocks, but the other writers continue in the good old Newtonian style. In TGD zero energy ontology would realize the blocks as causal diamonds and would extend free will from a mere choice between given alternatives to the creation of new worlds. Sabine Hossenfelder realizes that emergence is self deception: I cannot but agree. Sean Carroll grasps the full meaning of the absence of free will at the level of moral issues. Eagleman describes real life situations, which should be highly valuable for anyone proposing in earnest a theory of consciousness. Also the lecture of Eagleman about perceived time was excellent. To me it seems that physicists and (quantum) computer scientists should be able to forget for a moment their formulas and the rhetoric that makes it possible to get rid of problems they cannot solve, and open their minds so that the problem could get settled.

Wednesday, February 15, 2012

Quantum Adeles as a Golden Road to Number Theoretical Universality?

Quantum arithmetics is a notion which emerged as a possible resolution of a long-lived challenge: finding a mathematical justification for the canonical identification mapping p-adics to reals, which plays a key role in p-adic mass calculations. The model for the Shnoll effect was the bridge leading to the discovery of quantum arithmetics.

  1. What quantum arithmetics suggests is a modification of p-adic numbers by replacing p-adic pinary expansions with their quantum counterparts allowing the coefficients of prime powers to be integers not divisible by p.

  2. A further constraint is that quantum integers respect the decomposition of an integer into powers of primes. Quantum p-adic integers are to p-adic integers what the integers of an extension of a number field are to the number field, and one can indeed identify a Galois group Gp for each prime p and form the adelic counterpart of this group as the Cartesian product of all Gp:s. After various trials it turned out that quantum p-adics are indeed quantal in the sense that one can assign to a given quantum p-adic integer n a wave function at the orbit of the corresponding Galois group, which decomposes into the Galois groups of the prime factors of n. The basic conditions are that ×q and +q satisfy the basic associativity and distributivity laws.

    One can interpret ×q and +q and their co-algebra operations as 3-vertices of number theoretical Feynman diagrams describing algebraic identities X=Y having a natural interpretation in zero energy ontology. The two vertices have direct counterparts as the two kinds of basic topological vertices in quantum TGD (stringy vertices and vertices of Feynman diagrams). This allows one to deduce very precise information about the symmetries of the vertices needed to satisfy associativity and distributivity, and actually fixes them highly uniquely, and therefore determines the corresponding zero energy states having collections of integers as counterparts of incoming positive energy (or negative energy) particles.

    This gives strong support for the old conjecture that generalized Feynman diagrams have a number theoretic interpretation and allow moves transforming them to tree diagrams - this generalization of old-fashioned string duality is also an old romantic idea of quantum TGD. The moves for generalized Feynman diagrams would code for the associativity and distributivity of quantum arithmetics. Also braidings with strands labelled by the primes dividing the integer emerge naturally, so that the connection with quantum TGD proper becomes very strong.

  3. Canonical identification finds a fundamental role in the definition of the norm for both quantum p-adics and quantum adeles.

  4. There are arguments suggesting that quantum p-adics form a field so that also differential calculus and even integral calculus would make sense since quantum p-adics inherit well-ordering from reals via canonical identification.

The ring of adeles is essentially the Cartesian product of the different p-adic number fields and the reals.

  1. The proposal is that adeles can be replaced with quantum adeles. Gp has a natural action on quantum adeles allowing one to construct representations of Gp. The norm for quantum adeles is the ordinary Hilbert space norm obtained by first mapping the quantum p-adic numbers in each factor of the quantum adele to reals by canonical identification (a toy sketch of this norm follows the list below).

  2. Also quantum adeles could form a field rather than only a ring, so that also differential calculus and even integral calculus could make sense. This would allow one to replace reals by quantum adeles and in this manner achieve number theoretical universality. The natural applications would be to quantum TGD, in particular to the construction of generalized Feynman graphs as amplitudes which have values in quantum adele valued function spaces associated with quantum adelic objects. Quantum p-adics and quantum adeles also suggest solutions to a number of nasty little inconsistencies which have plagued the p-adicization program.

  3. One must of course admit that quantum arithmetics is far from a polished mathematical notion. It would require a lot of work to see whether the dream of an associative and distributive function-field-like structure allowing one to construct differential and integral calculus is realized in terms of quantum p-adics and even in terms of quantum adeles. This would provide a realization of number theoretical universality.
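Item 3 of the previous list and item 1 of the list above both rely on canonical identification and on the Hilbert space norm built from it. The Python sketch below shows one way to read those definitions for non-negative integers with finite pinary expansions; the helper names are hypothetical, the quantum deformation is ignored, and the real factor of the adele is left out, so this is a minimal illustration rather than the actual construction.

    # Minimal sketch: canonical identification sum a_k p^k -> sum a_k p^(-k), and a toy
    # "adelic" norm obtained by mapping each p-adic component to a real number and taking
    # the Euclidean norm. Only finite pinary expansions of non-negative integers are treated.
    from math import sqrt

    def pinary_digits(n, p):
        """Digits a_k of n = sum a_k p^k, least significant first."""
        digits = []
        while n > 0:
            digits.append(n % p)
            n //= p
        return digits

    def canonical_identification(n, p):
        """Map sum a_k p^k to the real number sum a_k p^(-k)."""
        return sum(a * p**(-k) for k, a in enumerate(pinary_digits(n, p)))

    def adelic_norm(components):
        """Hilbert space norm of a toy adele given as {prime: p-adic component}."""
        return sqrt(sum(canonical_identification(n, p)**2 for p, n in components.items()))

    print(canonical_identification(17, 2))    # 17 = 1 + 2^4 maps to 1 + 2^(-4) = 1.0625
    print(adelic_norm({2: 17, 3: 10, 5: 7}))  # toy adele with components at p = 2, 3, 5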

Ordinary adeles serve as a fundamental technical tool in the Langlands correspondence. The goal of the classical Langlands program is to understand the Galois group of algebraic numbers as an algebraic extension of rationals - the Absolute Galois Group (AGG) - through its representations. Invertible adeles define Gl1, which can be shown to be isomorphic with the Galois group of the maximal Abelian extension of rationals (MAGG), and the Langlands conjecture is that the representations of algebraic groups with matrix elements replaced with adeles provide information about AGG and algebraic geometry.

The crazy question is whether quantum adeles could be isomorphic with algebraic numbers and whether the Galois group of quantum adeles could be isomorphic with AGG or with its commutator group. If so, AGG would naturally act as symmetries of quantum TGD. The connection with infinite primes leads to a proposal for what quantum p-adics and quantum adeles associated with algebraic extensions of rationals could be and provides support for the conjecture. The Galois group of the quantum p-adic prime p would be isomorphic with the ordinary Galois group permuting the factors in the representation of this prime as a product of primes of the algebraic extension in which the prime splits.

Objects known as dessins d'enfant provide a geometric representation of AGG in terms of its action on algebraic Riemann surfaces, allowing an interpretation also as algebraic surfaces in finite fields. This representation would make sense for algebraic partonic 2-surfaces, could be important in the intersection of the real and p-adic worlds assigned with living matter in TGD inspired quantum biology, and would allow one to regard the quantum states of living matter as representations of AGG. Quantum Adeles would make these representations very concrete by bringing in cognition represented in terms of quantum p-adics.

Quantum Adeles could make it possible to realize number theoretical universality in the TGD framework and would be essential in the construction of generalized Feynman diagrams as amplitudes in the tensor product of state spaces assignable to real and p-adic number fields. Canonical identification would allow the amplitudes to be mapped to reals and complex numbers. Quantum Adeles also provide a fresh view of the conjectured M8-M4×CP2 duality and of the two suggested realizations of the decomposition of space-time surfaces into associative/quaternionic and co-associative/co-quaternionic regions.

For details see the new chapter Quantum Adeles of "Physics as Generalized Number Theory".

Tuesday, February 14, 2012

No stop but maybe cold fusion

The rumors about the detection of the stop particle at the LHC have been circulating for some time. Here stop is understood in the sense of standard SUSY, which predicts R-parity conservation so that sparticles are produced only in pairs, and in which the stop is the lightest squark. Missing energy corresponding to the lightest - and thus stable - neutral sparticle is the basic decay signature of the stop in this sense.

For those who took these rumors as more than wishful thinking, the ATLAS collaboration produced a disappointment: the analysis of an integrated luminosity of 2.05/fb shows no significant excess. The new limits imply that the gluino mass must be above 650 GeV and the stop mass above 450 GeV.

Also in the TGD framework both Higgs and SUSY are sources of tension. It would be nice to have a computer program listing the predictions of the theory, but the situation is not so simple. Developing and interpreting the theory is a complex process requiring continual interaction with experiment and educated guesses. Even in the case of Higgs the situation in TGD is still not closed. Higgs is not needed in TGD and the no-Higgs option is the most elegant one: but does Nature think in the same manner as I happen to do just now?

SUSY in the TGD sense means that a sfermion is obtained by adding a right-handed neutrino to a wormhole throat carrying the quantum numbers of a fermion. R-parity as well as B and L are conserved and spartners are created in pairs. The simplest option is that the right-handed neutrino corresponds to a covariantly constant spinor in CP2 degrees of freedom. A more complex option, possibly allowed by super-conformal symmetry, is that right-handed neutrinos appear as color octets.

LHC tells us that sfermions and gluinos are very heavy (TeV mass scale) if they obey standard SUSY. The conclusion comes from the missing missing energy. This conclusion might be circumvented in TGD.

  1. Squarks are colored and interact strongly. This allows them to fuse together to form shadrons: say smesons formed from a squark pair. This could be the dominant decay channel, leading eventually to ordinary hadrons.

  2. For a covariantly constant right-handed neutrino this however leaves the decays of squarks to quarks and electroweak gauginos, proceeding at a rate fixed by electroweak gauge symmetry. The situation seems to be like that in standard SUSY. Gauginos would eventually decay and produce missing energy in the form of right-handed neutrinos, which mix with left-handed components. It might well be that LHC already kills this option, unless one assumes short enough p-adic length scales for squarks, which is of course possible.

  3. If the right-handed neutrino is in a color octet partial wave, the situation changes. Shadrons are the only final states by color confinement, and quarks and squarks could even have the same p-adic mass scale for both ordinary and M89 hadron physics. Fuel for speculations with this option comes from the so-called X and Y bosons, which are charmonium-like states that should not be there: are they scharmoniums? There are also two other anomalies, discussed in a previous posting, suggesting that mesons have what I call IR Regge trajectories with a mass scale of 38 MeV. These are very natural in the TGD framework, in which hadrons are accompanied by color magnetic flux tubes behaving like string-like objects and thus contributing to the hadron mass a stringy contribution with a small string tension. Is TGD SUSY needed to explain the X and Y bosons, or could IR Regge trajectories alone do the same (probably not)? This is the first thing to check. Quite often I find this endless questioning rather frustrating. Life would be so easy if I could just believe.

Blind believing makes things simple but eventually it leads to painful conflicts with facts. Lubos has been an especially strong believer in the stop rumours and it is a pity that he is wrong again with so much authority (big names such as Gell-Mann) behind his arguments;-).

This is a hard time for Lubos also otherwise;-): Lubos has used all the tools of bad rhetoric to attack cold fusion, but demonstrations continue to generate support for the effect. The progress of physics, understood as a reductionistic (and highly imperialistic;-)) enterprise proceeding to shorter and shorter length scales, has perhaps not been quite so successful as we have been taught. There are a lot of bridges of belief on the road of reductionism, and this particular bridge - the belief that there is no interaction between atomic and nuclear length scales - might be collapsing under the merciless pressure of cold fusion researchers, whom Lubos does not want to count as scientists at all. It might be that we do not understand nuclear physics properly, and this misunderstanding - if it continues - can have a profound impact on the future of our civilization.

Even worse, there will be a cold fusion colloquium - believe it or not - at CERN! On Thursday, March 22nd. I have written some postings earlier debunking the cold fusion debunkings of Lubos (see for instance this). I admit that I have to make a conscious effort to keep a fully serious face;-). Here is the rant of Lubos inspired by the cold fusion colloquium at CERN. Lubos is learning - or at least he should finally learn - that authority means absolutely nothing to Nature.

More evidence for IR Regge trajectories

The TGD based view about non-perturbative aspects of hadron physics (see this) relies on the notion of color magnetic flux tubes. These flux tubes are string-like objects, and it would not be surprising if the outcome were satellite states of hadrons with a string tension below the pion mass scale. One would have a kind of infrared Regge trajectory satisfying, in a reasonable approximation, a mass formula analogous to the stringy mass formula. What is amazing is that this phenomenon could allow a new interpretation of the claims of a Higgs signal at several masses (115 GeV by ATLAS, 125 GeV by ATLAS and CMS, and 145 GeV by CDF).

Consider first the mass formula for hadrons on the IR Regge trajectories.

  1. There are two options depending on whether mass squared or mass is assumed to be additive for the hadron and the flux tubes. p-Adic physics suggests that if the p-adic primes characterizing the flux tubes and the hadron proper are different, then mass is additive; if the p-adic prime is the same, mass squared is additive (a numerical sketch of both options follows right after this list).

  2. The simplest guess is that the IR stringy spectrum is universal in the sense that m_0 does not depend on the hadron at all. This is the case if the flux tubes in question correspond to hadronic space-time sheets characterized by the p-adic prime M107 in the case of ordinary hadron physics. This would give for the IR contribution to the mass the expression

    m = (m_0^2 + n m_1^2)^{1/2} .

  3. The net mass of the hadron results from the contribution of the "core" hadron and the stringy contribution. If mass squared is additive, one obtains

    m(H_n) = [m^2(H_0) + m_0^2 + n m_1^2]^{1/2} ,

    where H_0 denotes the hadron ground state and H_n its excitation assignable to the magnetic flux tube. For heavy hadrons this would give the approximate spectrum

    m(H_n) ≈ m(H_0) + [m_0^2 + n m_1^2]/(2 m(H_0)) .

    The mass unit for the excitations decreases with the mass of the hadron.

  4. If mass is additive, as one indeed expects since the p-adic primes characterizing heavy quarks are smaller than the hadronic p-adic prime, one obtains

    m(H_n) = m(H_0) + (m_0^2 + n m_1^2)^{1/2} .

    For m_0^2 >> n m_1^2 one has

    m(H_n) = m(H_0) + m_0 + n m_1^2/(2 m_0) .

    If the flux tubes of all hadrons correspond to the same p-adic prime, this would give a linear spectrum which is the same for all hadrons.
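The following small numerical sketch evaluates the two options above side by side. The parameter values are purely illustrative placeholders (a 938 MeV ground state and m_0 = m_1 = 38 MeV, the mass scale mentioned elsewhere in this posting); no fit to the satellite masses discussed below is attempted.

    # Toy evaluation of the two IR Regge trajectory mass formulas with placeholder parameters.
    from math import sqrt

    def mass_additive_in_mass_squared(m_H0, m0, m1, n):
        """m(H_n) = [m^2(H_0) + m0^2 + n*m1^2]^(1/2)  (same p-adic prime)."""
        return sqrt(m_H0**2 + m0**2 + n * m1**2)

    def mass_additive_in_mass(m_H0, m0, m1, n):
        """m(H_n) = m(H_0) + (m0^2 + n*m1^2)^(1/2)  (different p-adic primes)."""
        return m_H0 + sqrt(m0**2 + n * m1**2)

    m_H0, m0, m1 = 938.0, 38.0, 38.0   # MeV; illustrative values only
    for n in range(4):
        print(n,
              round(mass_additive_in_mass_squared(m_H0, m0, m1, n), 1),
              round(mass_additive_in_mass(m_H0, m0, m1, n), 1))

The sketch merely makes the two formulas concrete and easy to play with; realistic parameter values would have to come from a fit to the data discussed below.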

There is evidence for states of this kind.

  1. Tatischeff and Tomasi-Gustafsson claim the existence of states analogous to the ordinary pion with masses 60, 80, 100, 140, ... MeV. Also nucleons have satellite states of this kind.

  2. A second piece of evidence comes from two articles by Eef van Beveren and George Rupp. The first article is titled First indications of the existence of a 38 MeV light scalar boson. The second article has the title Material evidence of a 38 MeV boson. The basic observations are the following. The rate for the annihilation e+e- → uubar, assignable to the reaction e+e- → π+π-, has a small periodic oscillation with a period of 78 +/- 2 MeV and an amplitude of about 5 per cent. The rate for the annihilation e+e- → bbbar, assignable to the reaction e+e- → Υ π+π-, has similar oscillatory behavior with a period of 73 +/- 3 MeV and an amplitude of about 12.5 per cent. The rate for the annihilation ppbar → ccbar, assignable to the reaction ppbar → J/Ψ π+π-, has similar oscillatory behavior with a period of 79 +/- 5 MeV and an amplitude of 0.75 per cent.

    In these examples a universal Regge slope is consistent with the experimental findings and supports the additive mass formula and the assignment of IR Regge trajectories to hadronic flux tubes with a fixed p-adic length scale.

What does one obtain if one scales up the IR Regge trajectories to the M89 hadron physics, which replaces Higgs in the TGD framework?

  1. In the case of the M89 pion the mass differences 20 MeV and 40 MeV appearing in the IR Regge trajectories of the pion would scale up to 10 GeV and 20 GeV respectively. This would suggest a spectrum of pion-like states with masses 115, 125, 145, 165 GeV. What makes this interesting is that ATLAS reported last year evidence for a signal at 115 GeV taken as evidence for Higgs, and CDF reported before this a signal taken as evidence for Higgs around 145 GeV! 125 GeV is the mass of the most recent Higgs candidate. Could it be that all these reported signals have been genuine signals - not for Higgs - but for the M89 pion, the corresponding spion consisting of a squark pair, and their IR satellites?

  2. In the case of M89 hadron physics the naive scaling of the parameters m_0 and m_1 by the factor 512 would scale 38 MeV to about 19.5 GeV (a quick check of the arithmetic follows below).
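As a quick check of the scaling arithmetic quoted above (pure multiplication, assuming the scaling factor is exactly 512): 512 × 38 MeV = 19.456 GeV ≈ 19.5 GeV, 512 × 20 MeV = 10.24 GeV ≈ 10 GeV, and 512 × 40 MeV = 20.48 GeV ≈ 20 GeV. Spacings of 10 GeV and 20 GeV added to 115 GeV give 125 GeV and 145 GeV respectively, and a further 20 GeV gives 165 GeV, consistent with the sequence mentioned above.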

Tuesday, February 07, 2012

Indeed! Is it really Higgs?

Jester comments on the latest release of results from the LHC relating to the signal interpreted by all fashionable and well-informed physics bloggers as Higgs.

Additional support for a resonance at 125 GeV is emerging. What is new are two events interpreted as the fusion of two W bosons to Higgs. This is very nice. The only problem is that the predicted rate for these events is so small for the standard model Higgs that they would not have been observed. A second, not so pleasant, surprise is that Higgs candidates are indeed produced, but at twice the predicted rate.

Hitherto these signals, which are too strong to allow an interpretation as the standard model Higgs, have been explained by saying that both CMS and ATLAS have been "lucky". I warned already in the previous Higgs posting that if this good luck continues, it turns into a serious problem. And as Jester mentions, already now people are beginning to suspect that this Higgs is not quite the standard model Higgs. The next step will come sooner or later and will be a cautious proposal spoiling the euphoric mood of co-bloggers: perhaps it is not Higgs at all!

But things go slowly. Colleagues are rather conformist and remarkably slow as thinkers. There are even those who are still making bets for standard SUSY;-)! I can however hope that after this step colleagues would finally be psychologically mature enough to consider the TGD prediction of M89 hadron physics as an alternative to Higgs. Accepting this hypothesis as something worth testing would mean enormous progress on both the theoretical and the experimental side.

Thursday, February 02, 2012

One more good reason for p-adic cognition

One can present several justifications for why p-adic numbers are natural correlates of cognition and why p-adic topology is tailor-made for computation. One possible justification derives from the ultrametricity of the p-adic norm: the norm of a sum is never larger than the maximum of the norms of the summands.

If one evaluates functions of real arguments, a cutoff in the decimal or more general expansion of the arguments introduces a cumulative error, and in principle one must perform the calculation with more digits in the arguments than the cutoff requires and drop the surplus digits at the end of the calculation.

In the p-adic case the situation is different. The sum of the errors resulting from cutoffs is never p-adically larger than the largest individual error, so that errors do not accumulate and there is no need for surplus pinary digits in the arguments of the function. In practical computations this need not have great significance unless very many steps are involved, but in cognitive processing the situation might be different. A small numerical illustration follows below.
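Here is a small numerical illustration of the ultrametric property invoked above, assuming the standard p-adic norm |x|_p = p^(-v_p(x)) for rational x; this is a generic toy check rather than anything specific to TGD.

    # Check the ultrametric inequality |x + y|_p <= max(|x|_p, |y|_p) on random rationals.
    from fractions import Fraction
    import random

    def padic_valuation(n, p):
        """Largest k such that p^k divides the nonzero integer n."""
        k = 0
        while n % p == 0:
            n //= p
            k += 1
        return k

    def padic_norm(x, p):
        """p-adic norm |x|_p = p^(-v_p(x)) for rational x, with |0|_p = 0."""
        if x == 0:
            return Fraction(0)
        v = padic_valuation(x.numerator, p) - padic_valuation(x.denominator, p)
        return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

    p = 3
    for _ in range(1000):
        x = Fraction(random.randint(-500, 500), random.randint(1, 500))
        y = Fraction(random.randint(-500, 500), random.randint(1, 500))
        assert padic_norm(x + y, p) <= max(padic_norm(x, p), padic_norm(y, p))
    print("ultrametric inequality held in all sampled cases")

Because the norm of a sum never exceeds the larger of the two norms, the total error from many cutoffs is bounded by the largest single error, which is the point made in the paragraph above.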

Wednesday, February 01, 2012

Bullying as a national disease

We have a presidential election in Finland. The two main candidates are Sauli Niinistö and Pekka Haavisto. Niinistö can be said to represent the old world order in which economic values dictate everything. Haavisto is a representative of the new world order in which humanity, freedom, equality, and the environment represent the most important values. For me the choice between these options is easy, although I have nothing against Niinistö personally.

Haavisto crystallized something very essential about Finland as a nation when he said that bullying is the national disease of Finland. Teasing begins already in elementary schools, continues in various educational establishments, and eventually continues at workplaces. The web has also become an arena of bullying, providing completely new opportunities. Now and then someone has had enough. The two mass murders that took place in educational establishments a few years ago are just two sad examples of what "enough is enough" really means.

Personally I belong to the victims of academic bullying. The terror began 34 years ago and has continued since then. I have lost my academic human rights and have been unemployed most of the time since I began to write my thesis in 1977. I will remain so until I reach the age of 63 (only two years of this humiliation left!) and start to receive a minimal pension. I have done impressive life work: 15 books comprising about 12 thousand pages and a lot of articles. This does not mean anything, since so-called "evaluation by equals" (a direct translation of "vertaisarviointi"), the scientific equivalent of the inquisition, can be used to label my work as crackpottery. There are many people abroad and also in Finland who appreciate my work, and they have made attempts to inform about my work in Wikipedia, but (very probably Finnish) censors have reacted immediately and vandalized the attempts.

I have tried to understand what drives people to this kind of sadistic behavior, in which a human life is literally destroyed. As far as I know, these people are quite decent human beings as individuals. But as members of a collective they become sadistic beasts. Or some of them do. The others remain completely passive, and this is probably the core problem. We do not have the courage to say no when some sociopath starts the cruel game. I am of course just one of the many victims of this national hobby and I sincerely hope that Finland as a nation could heal from it. Haavisto is certainly experienced as a symbol of this healing and my sincere hope is that he wins.