
Thursday, August 02, 2007

p-Adic rigs and category theoretic definition of algebraic numbers

I have already described the idea of representing negative integers and even rationals as p-adic fractals. To gain additional understanding I decided to look at This Week's Finds (Week 102) by John Baez, to which Kea gave a link. Fascinating reading! Thanks, Kea!

The outcome was the realization that the notion of rig used to categorify the subset of algebraic numbers obtained as roots of polynomials with natural number coefficients generalizes trivially when natural numbers are replaced by p-adic integers. As a consequence one obtains a beautiful p-adicization of the generating function F(x) of a structure as a function which converges p-adically for any rational x = q divisible by a positive power of the prime p.

Effectively this generalization replaces the natural number coefficients of the polynomial defining the rig with arbitrary rationals, also negative ones, and all complex algebraic numbers find a category-theoretical representation as "cardinalities". These cardinalities have a dual interpretation as p-adic integers, which in general correspond to infinite real numbers but are mappable to real numbers by canonical identification and have a geometric representation as fractals, as discussed in the previous posting.

1. Mapping of objects to complex numbers and the notion of rig

The idea of the rig approach is to categorify the notion of cardinality in such a manner that one obtains a subset of the algebraic complex numbers as cardinalities in the category-theoretical sense. One can assign to an object a polynomial P with natural number coefficients, and the condition Z = P(Z) says that P acts as an isomorphism of the object. One can interpret the equation also in terms of complex numbers: the object is mapped to a complex number Z defining a root of the polynomial equation interpreted as an ordinary equation, and it does not matter which root is chosen. The complex number Z is interpreted as the "cardinality" of the object, but I do not really understand the motivation for this. The deep further result is that more general polynomial equations R(Z) = Q(Z) satisfied by the generalized cardinality Z also imply R(Z) = Q(Z) as an isomorphism. This means that algebra is mapped to isomorphisms.

I will try to reproduce what looks most essential in the explanation of John Baez and relate it to my own ideas, but take this as me talking to myself, and visit This Week's Finds to learn about this fascinating idea.

  1. Baez first considers the ways of putting a given structure on an n-element set. The set of these structures is denoted by F_n and their number by |F_n|. The generating function F(x) = ∑_n |F_n| x^n packs all this information into a single function.

    For instance, if the structure is a binary tree, this function is given by T(x) = ∑_{n>0} C_{n-1} x^n, where C_{n-1} is a Catalan number. One can show that T satisfies the formula

    T = x + T^2

    since any binary tree is either trivial or decomposes into a product of two binary trees emanating from the root. One can solve this second-order polynomial equation, and the power expansion of the solution gives the generating function.

  2. The great insight is that one can also work directly with structures. For instance, by starting from the isomorphism T = 1 + T^2, which applies to an object with cardinality x = 1, and substituting T^2 with (1+T^2)^2 repeatedly, one can deduce the amazing formula T^7(1) = T(1) mentioned by Kea, and this identity can be interpreted as an isomorphism of binary trees.

  3. This result can be generalized using the notion of a rig category (Marcelo Fiore and Tom Leinster, Objects of categories as complex numbers, available as math.CT/0212377). In a rig category one can add and multiply, but negatives are not defined as they are in a ring. The lack of subtraction and division is still a problem, and as I suggested in the previous posting, p-adic integers might resolve it.

    Whenever Z is an object of a rig category, one can equip it with an isomorphism Z = P(Z), where P(Z) is a polynomial with natural number coefficients, and one can assign to the object a "cardinality" as any root of the equation Z = P(Z). Note that a set with n elements corresponds to P(Z) = n. Thus a subset of the algebraic complex numbers receives a formal identification as cardinalities of sets. Furthermore, if the cardinality satisfies another equation Q(Z) = R(Z) such that neither polynomial is constant, then one can construct an isomorphism Q(Z) = R(Z). Isomorphisms correspond to equations, which is nice!

  4. All this is indeed nice, but there is something which is not as beautiful as it could be: why should we restrict ourselves to natural numbers as coefficients of P(Z)? Could it be possible to replace them with integers to obtain all complex algebraic numbers as cardinalities? Could it be possible to replace natural numbers with p-adic integers? Oops! I told it! All the tension of the drama is now lost! Sorry!
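The binary tree identity above can be checked numerically. The following is a minimal sketch of my own (not from Baez's text): it verifies that the complex root of T = 1 + T^2 satisfies T^7 = T, and that the fixed-point iteration of T = x + T^2 reproduces the Catalan numbers as coefficients of the generating function.

```python
import cmath

# Root of T = 1 + T^2, i.e. T^2 - T + 1 = 0: T = (1 + i*sqrt(3))/2,
# a primitive sixth root of unity, so T^6 = 1 and hence T^7 = T.
T = (1 + cmath.sqrt(-3)) / 2
assert abs(T - (1 + T**2)) < 1e-12   # the defining isomorphism
assert abs(T**7 - T) < 1e-12         # "seven trees in one" at the level of cardinalities

# Fixed-point iteration of T = x + T^2, truncated to degree 7; each pass gains
# one correct order, so eight passes suffice for the first six coefficients.
coeffs = [0.0, 1.0] + [0.0] * 6      # start with T = x
for _ in range(8):
    sq = [0.0] * 8                   # square the current truncation
    for i in range(8):
        for j in range(8 - i):
            sq[i + j] += coeffs[i] * coeffs[j]
    coeffs = [0.0, 1.0] + sq[2:]     # T <- x + T^2

print([round(c) for c in coeffs[1:7]])  # Catalan numbers C_0..C_5: [1, 1, 2, 5, 14, 42]
```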

2. p-Adic rigs and the Golden Object as a representation of p-adic -1

The notions of generating function and rig generalize to the p-adic context.

  1. The generating function F(x) defining the isomorphism Z in the rig formulation converges p-adically for any p-adic number x containing p as a factor, so the idea that all structures have p-adic counterparts is natural. In the real context the generating function typically diverges and must be defined by analytic continuation. Hence one might even argue that p-adic numbers are more natural than reals in the description of structures assignable to finite sets.

  2. In a rig one considers only polynomials P(Z) (Z corresponds to the generating function F) with natural number coefficients. Any p-adic integer can however be interpreted as a non-negative integer: a natural number if it is finite and a "super-natural" number if it is infinite. Hence one can generalize the notion of rig by replacing natural numbers with p-adic integers. The rig formalism would thus generalize to arbitrary polynomials with integer coefficients, so that all complex algebraic numbers could appear as cardinalities of category-theoretical objects. Even rational coefficients are allowed. This is highly natural number-theoretically.

  3. For instance, in the case of binary trees the solution to the isomorphism condition T = p + T^2 is the complex number T = [1 ± (1-4p)^{1/2}]/2. T(p) can also be interpreted as a p-adic number by expanding the square root in powers of p: this super-natural number can be mapped to a real number by the canonical identification, and one also obtains a set-theoretic representation of the category-theoretical object T(p) as a p-adic fractal. This interpretation of the cardinality is much more natural than the purely formal interpretation as a complex number. The argument applies completely generally. The case x = 1 discussed by Baez gives T = [1 ± (-3)^{1/2}]/2, which allows a p-adic representation if -3 ≡ p-3 is a square mod p. This is the case for p = 7, for instance.

  4. John Baez also poses the question about the category-theoretic realization of the Golden Object, his big dream. In this case one would have Z = G = -1 + G^2 = P(Z). The polynomial on the right-hand side does not conform with the notion of rig, since -1 is not a natural number. If one allows p-adic rigs, x = -1 can be interpreted as the p-adic integer (p-1)(1+p+...), which is positive, infinite and "super-natural", actually the largest possible p-adic integer in a well-defined sense. A further condition is that the Golden Mean converges as a p-adic number: this requires that sqrt(5) exists as a p-adic number. (1+4)^{1/2} certainly converges as a power series for p = 2, so that the Golden Object exists 2-adically. By using the quadratic reciprocity theorem of Euler, one finds that 5 is a square mod p only if p is a square mod 5. To decide whether a given p is Golden, it is enough to check whether p mod 5 is 1 or 4. For instance, p = 11, 19, 29, 31 (M5) are Golden. The Mersennes Mk, k = 3, 7, 127, and the Fermat primes are not Golden. One representation of the Golden Object as a p-adic fractal is the p-adic series expansion of [1 ± 5^{1/2}]/2, representable geometrically as a tree such that there are 0 < x_n + 1 ≤ p branches at each node at height n if the n:th p-adic digit is x_n. The "cognitive" p-adic representation in terms of wavelet spectra of classical fields is discussed in the previous posting.

  5. It would be interesting to know how the quantum dimensions of the quantum groups assignable to Jones inclusions relate to the generalized cardinalities. The root of unity property of the quantum phase (q^{n+1} = 1) suggests Q = Q^{n+1} = P(Q) as the relevant isomorphism. For Jones inclusions the cardinality q = exp(i2π/n) would not however be equal to the quantum dimension d(n) = 4cos^2(π/n).
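As a sanity check of the p-adic side of this argument, the square roots appearing above can be lifted with a Newton (Hensel) iteration. The sketch below is my own illustration, with a hypothetical helper `padic_sqrt`: it verifies T = p + T^2 mod 7^10 for T = [1 + sqrt(1-4p)]/2, and lists which of the primes mentioned above are Golden according to the criterion p mod 5 ∈ {1, 4}.

```python
def padic_sqrt(a, p, k):
    """Lift a square root of a mod p to a root mod p^k by Newton/Hensel iteration.
    Assumes p is odd; returns None if a is not a quadratic residue mod p."""
    r = next((x for x in range(p) if (x * x - a) % p == 0), None)
    if r is None:
        return None
    pk = p ** k
    for _ in range(k):  # Newton step r -> (r + a/r)/2, all mod p^k
        r = (r + a * pow(r, -1, pk)) % pk * pow(2, -1, pk) % pk
    return r

p, k = 7, 10
pk = p ** k
# sqrt(1-4p) = sqrt(-27) exists 7-adically since -27 = 1 mod 7 is a square mod 7
s = padic_sqrt((1 - 4 * p) % pk, p, k)
T = (1 + s) * pow(2, -1, pk) % pk   # T = [1 + sqrt(1-4p)]/2 mod p^k
assert (T - (p + T * T)) % pk == 0  # the isomorphism T = p + T^2 holds p-adically

# Golden primes among those mentioned in the text: p mod 5 must be 1 or 4
golden = [q for q in [3, 7, 11, 19, 29, 31, 127] if q % 5 in (1, 4)]
print(golden)  # [11, 19, 29, 31]
```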

For details see the chapter Category Theory, Quantum TGD, and TGD Inspired Theory of Consciousness of "TGD as Generalized Number Theory".

Does a set with -1 elements exist in some sense?

I find Kea's blog interesting because it allows me to get some grasp of the very different thinking styles of a mathematician and a physicist. For a mathematician it is very important that a result is obtained by strict use of axioms and deduction rules. A physicist (at least me: I dare to count myself as a physicist) is a cognitive opportunist: it does not matter whether the result is obtained by moving along axiomatically allowed paths or not; a new result is often more like the discovery of a new axiom, and the physicist is ever grateful to Gödel for justifying what sometimes admittedly degenerates into creative hand-waving. For a physicist, ideas form a kind of biosphere, and the fate of an individual idea depends on its ability to survive, which is determined by its ability to become generalized, its consistency with other ideas, and its ability to interact with other ideas to produce new ideas.

During the last few days we have had a little bit of discussion inspired by the problem of categorifying the basic number-theoretical structures. I have learned from Kea that sum and product are natural operations for objects of a category but that subtraction and division are problematic. I dimly realize that this relates to the fact that negative numbers and inverses of integers cannot be realized as the number of elements of any set. The naive physicist inside me asks immediately: why not go from statics to dynamics and take operations (arrows with direction) as objects: couldn't this allow one to define subtraction and division? Is the problem that the axiomatization of group theory requires something which the purest categorification does not give? Or are the numbers representable in terms of operations of finite groups not enough? In any case, cyclic groups would allow one to realize roots of unity as operations (Z2 would give -1).

I also wonder in my own simplistic manner why algebraic numbers might not somehow result via the representations of the permutation group of an infinite number of elements, which contains all finite groups and thus the Galois groups of algebraic extensions as subgroups. Why not take the elements of this group as objects of the basic category and continue by building the group algebra and hyper-finite factors of type II1 isomorphic to the spinors of the world of classical worlds, and... yes-yes-yes, I must stop!

This discussion led me to ask what the situation is in the case of p-adic numbers. Could it be possible to represent the negative and the inverse of a p-adic integer, and in fact any p-adic number, as a geometric object? In other words, does a set with -1 or 1/n elements exist? If this were true in some sense for all p-adic number fields, then all this wisdom combined might provide something analogous to the adelic representation of the norm of a rational number as the product of its p-adic norms.

Of course, this representation might not help to define p-adics or reals categorically, but it might help to understand how p-adic cognitive representations, defined as subsets of the rational intersections of real and p-adic space-time sheets, could represent a p-adic number as the number of points of a p-adic fractal having an infinite number of points in the real sense but a finite number in the p-adic sense. This would also give a fundamental cognitive role to p-adic fractals as cognitive representations of numbers.

1. How to construct a set with -1 elements?

The basic observation is that p-adic -1 has the representation

-1 = (p-1)/(1-p) = (p-1)(1 + p + p^2 + p^3 + ...)

In the real sense this series diverges (formally it equals -1), but as a p-adic number it converges and has p-adic norm equal to 1. One can also map this number to a real number by the canonical identification taking powers of p to their inverses: in this particular case one obtains p. As a matter of fact, any rational with p-adic norm equal to 1 has a similar power series representation.
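The claim that the canonical identification maps -1 to p is easy to verify numerically. A minimal sketch of my own (`canonical_identification` is an illustrative helper, not established notation):

```python
from fractions import Fraction

def canonical_identification(digits, p):
    """Map sum x_n p^n  ->  sum x_n p^(-n), taking powers of p to their inverses."""
    return sum(Fraction(x, p ** n) for n, x in enumerate(digits))

p, k = 5, 30
# p-adic digits of -1: since -1 = (p-1)(1 + p + p^2 + ...), every digit equals p-1
digits = [(p ** k - 1) // p ** n % p for n in range(k)]
assert digits == [p - 1] * k

# The canonical image of the truncated series approaches p as the cutoff k grows
image = canonical_identification(digits, p)
print(float(image))  # ≈ 5.0, i.e. the canonical image of 5-adic -1 is p = 5
```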

The idea is to represent a given p-adic number as the infinite number of points (in the real sense) of a p-adic fractal for which the p-adic topology is natural. Fractals of this kind can be constructed in a simple manner: more about this below. The construction allows one to represent any p-adic number as a fractal and to code the arithmetic operations into geometric operations on these fractals.

These representations - interpreted as cognitive representations defined by intersections of real and p-adic space-time sheets - are in practice approximate if real space-time sheets are assumed to have a finite size: this is due to the finite p-adic cutoff implied by the assumption, which means a finite resolution. One can however say that the p-adic space-time sheet itself, by its necessarily infinite size, could represent the idea of a given p-adic number faithfully.

This representation applies also to the p-adic counterparts of algebraic numbers in case they exist. For instance, roughly one half of p-adic numbers have a square root which is an ordinary p-adic number, and quite generally algebraic operations on p-adic numbers can give rise to p-adic numbers, so that also these could have a set-theoretic representation. For p mod 4 = 1 also sqrt(-1) exists: for p = 5, for instance, this is guaranteed by 2^2 = 4 = -1 mod 5, so that also the imaginary unit and complex numbers would have a fractal representation. Also many transcendentals possess this kind of representation. For instance, exp(xp) exists as a p-adic number if x has p-adic norm not larger than 1, and the same holds for log(1+xp).

Hence quite an impressive repertoire of p-adic counterparts of real numbers would have a representation as a p-adic fractal for some values of p. The adelic vision would suggest that by combining these representations one might be able to represent quite many real numbers. In the case of π I do not find any obvious p-adic representation (for instance, sin(π/6) = 1/2 does not help, since the p-adic variant of the Taylor expansion of π/6 = arcsin(1/2) does not converge p-adically for any value of p). It might be that there are very many transcendentals not allowing a fractal representation for any value of p.
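The convergence claim for exp(xp) can be checked by computing the p-adic valuation of the n:th term p^n/n! with Legendre's formula: the valuations grow without bound, which is exactly the p-adic convergence criterion. A small sketch of my own:

```python
def v_factorial(n, p):
    """Legendre's formula: v_p(n!) = sum over i >= 1 of floor(n / p^i)."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

p = 5
# p-adic valuation of the n:th term p^n/n! of exp(p): n - v_p(n!)
vals = [n - v_factorial(n, p) for n in range(60)]
print(vals[:10])  # [0, 1, 2, 3, 4, 4, 5, 6, 7, 8]

# The valuations drift to infinity, so the series converges 5-adically
assert min(vals[40:]) > max(vals[:5])
```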

2. Conditions on the fractal representations of p-adic numbers

Consider now the construction of the fractal representations in terms of rational intersections of real and p-adic space-time sheets. The question is what conditions are natural for this representation if it corresponds to a cognitive representation realized in the rational intersection of real and p-adic space-time sheets obeying the same algebraic equations.

  1. The pinary cutoff is the analog of the decimal cutoff but is obtained by dropping high positive rather than negative powers of p to get a finite real number: an example of a pinary cutoff is -1 = (p-1)(1+p+p^2+...) → (p-1)(1+p+p^2). This cutoff must reduce to a fractal cutoff, meaning a finite resolution due to the finite size of the real space-time sheet. In the real sense the p-adic fractal cutoff means not forgetting details below some scale but cutting out everything above some length scale. A physical analog would be forgetting all frequencies below some cutoff frequency in a Fourier expansion.

    The motivation comes from the fact that TGD inspired theory of consciousness assigns to a given biological body a field body or magnetic body containing dark matter with large hbar, quantum controlling the behavior of the biological body and identifying with it so strongly as to believe that everything ends with the biological death. This field body has an onion-like fractal structure and a size of at least the order of a light-life: at least 100 happy light years in my own case is my optimistic expectation. Of course, also larger onion layers could be present and would represent those levels of cognitive consciousness not depending on the sensory input from the biological body: some altered states of consciousness could relate to these levels. In any case, the larger the magnetic body, the better the numerical skills of the p-adic mathematician ;-).

  2. The lowest pinary digits of x = x_0 + x_1 p + x_2 p^2 + ..., x_n < p, must have the most reliable representation since they are the most significant ones. The representation must also be highly redundant to guarantee reliability. This requires repetition and periodicity. This is guaranteed if the representation is hologram-like, with segments of length p^n such that the digit x_n is represented again and again in all segments of length p^m, m > n.

  3. The TGD based physical constraint is that the representation must be realizable in terms of the induced classical fields assignable to the field body hierarchy of an intelligent system interested in the artistic expression of p-adic numbers using its own field body as an instrument. As a matter of fact, sensory and cognitive representations are realized at the field body in the TGD Universe, and EEG plays a fundamental role in building these representations. By p-adic fractality, fractal wavelets are the most natural candidate. The fundamental wavelet should represent the p different pinary digits, and its scaled-up variants would correspond to the various powers of p, so that the representation would reduce to a Fourier expansion of a classical field.

3. Concrete representation

Consider now a concrete candidate for a representation satisfying these constraints.

  1. Consider a p-adic number

    y = p^{n0} x, x = ∑_{n≥0} x_n p^n, x_0 > 0.

    If one has a representation for the p-adic unit x, the representation of y is obtained by a purely geometric fractal scaling of the representation by p^{n0}. Hence one can restrict the consideration to p-adic units.

  2. To construct the representation, take a real line starting from the origin and divide it into segments with lengths 1, p, p^2, .... In the TGD framework these scalings actually come as powers of p^{1/2}, but this is just a technical detail.

  3. It is natural to realize the representation in terms of periodic field patterns. One can use wavelets with a fractal spectrum p^n λ_0 of "wavelet lengths", where λ_0 is the fundamental wavelength. The fundamental wavelet should have p different patterns corresponding to the p values of the pinary digit. Periodicity guarantees the hologram-like character, enabling one to pick the n:th digit by studying the field pattern in the scale p^n anywhere inside the field body.

  4. Periodicity also guarantees that the intersections of p-adic and real space-time sheets can represent the values of the pinary digits. For instance, the wavelets could be such that in a given p-adic scale the number of rational points in the intersection of the real and p-adic space-time sheets equals x_n. In the limit of an infinite pinary expansion this would give a set-theoretic realization of any p-adic number, in which each pinary digit x_n corresponds to infinitely many copies of a set with x_n elements, and the fractal cutoff due to the finite size of the real space-time sheet would bring in a finite precision. Note however that the p-adic space-time sheet necessarily has an infinite size, and it is only the real-world realization of the representation which has finite accuracy.

  5. A concrete realization of this object would be an infinite tree with x_n + 1 ≤ p branches at each node at level n (x_n + 1 rather than x_n is needed in order to avoid the tree terminating at x_n = 0). In the 2-adic case -1 would be represented by an infinite binary tree. Negative powers of p correspond to the roots of the tree extending to a finite depth into the ground.
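The tree construction of the last item can be made concrete with a few lines of code (my own sketch; `branch_counts` is a hypothetical helper): the digits of 2-adic -1 are all 1, so the representing tree is the infinite binary tree, while a digit 0 still contributes one branch so that the tree never terminates.

```python
def branch_counts(y, p, depth):
    """Digits x_n of the p-adic integer y (mod p^depth) and the branch count
    x_n + 1 at each tree level, as in the fractal tree representation."""
    digits = [(y % p ** depth) // p ** n % p for n in range(depth)]
    return digits, [x + 1 for x in digits]

# 2-adic -1 has every digit equal to 1, so every node carries 1+1 = 2 branches:
# the representing tree is the infinite binary tree
digits, branches = branch_counts(-1, 2, 16)
assert digits == [1] * 16 and branches == [2] * 16

# Digit 0 still gives one branch, so the tree never splits off: e.g. 2-adic 4
digits4, branches4 = branch_counts(4, 2, 5)
print(digits4, branches4)  # [0, 0, 1, 0, 0] [1, 1, 2, 1, 1]
```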

For details see the chapter Category Theory, Quantum TGD, and TGD Inspired Theory of Consciousness of "TGD as Generalized Number Theory".

Tuesday, July 31, 2007

The problem of staggered fermions

In the QCD blog Life on the Lattice there is a summary of the Lattice 2007 conference. The talks of the second day also touched on the problem of "staggered fermions". The talk Why rooting fails by Michael Creutz gives an enjoyable summary of the problem.

In QCD not only the spinor fields in space-time are dynamical: also the spinor structure itself becomes dynamical if the space-time topology is non-trivial. In lattice QCD one effectively replaces space-time with a 4-torus. The non-trivial topology implies that there are 2^4 non-equivalent spinor structures, which means that one obtains a 16-fold degeneracy of fermions. In the case of a circle there would be a 2-fold degeneracy: the gamma matrix can be identified as γ or -γ: both choices anticommute to the metric, but they cannot be transformed to each other by a local Lorentz transformation. In lattice QCD this problem manifests itself in the lattice Dirac operator γ^k sin(a p_k)/a, where a defines the lattice spacing: sin(a p_k) can have both signs and vanishes at 16 different points of the Brillouin zone.
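The counting of the doublers can be illustrated with a toy computation. This is only a sketch of the standard doubling argument, not of any TGD construction: the naive lattice operator ~ sin(a p_k) vanishes at both p_k = 0 and p_k = π/a in each of the four directions, giving 2^4 = 16 species.

```python
import math

def doubler_count(dim, N=64):
    """Count zeros of the naive lattice Dirac operator ~ sin(a p_k) on a
    periodic momentum lattice p_k = 2*pi*n/(N*a): sin vanishes at p_k = 0 and
    p_k = pi/a in each direction, giving 2^dim fermion species."""
    zeros_per_dim = sum(
        1 for n in range(N) if abs(math.sin(2 * math.pi * n / N)) < 1e-12
    )
    return zeros_per_dim ** dim

print(doubler_count(1))  # 2: the circle case, gamma vs -gamma
print(doubler_count(4))  # 16: the "tastes" on the 4-torus
```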

The challenge is to get rid of this unphysical degeneracy ("tastes"). One could argue that the problem is an artifact of the periodic boundary conditions effectively replacing space-time with a four-torus, but the formalism of lattice gauge theory does not seem to agree with this view. From the lecture of Creutz one learns that in the perturbative approach the problem can be resolved by replacing the fermionic determinant with its 16:th root, but that the situation changes in the non-perturbative approach. This seems understandable, since small perturbations involve only the dynamics of the spinor fields, not the discrete dynamics of the spinor structure.

Why I decided to write this little comment is that this problem might not be a mere nasty technical detail but a clear signature of the fact that the notion of a completely free ordinary spinor structure is not physical. In the TGD framework the induced spinor fields are dynamical but not the spinor structure, which is induced from that of H = M^4×CP2. Whatever the TGD counterpart of lattice QCD might be, there is only a single spinor structure to be induced on the space-time sheet, so that the problem must disappear.

Note however that a standard spinor structure does not exist in CP2: a generalized spinor structure is obtained by coupling the two spinor chiralities to an odd multiple of the Kähler gauge potential: this is absolutely essential for reproducing the correct electroweak quantum numbers for the two H-chiralities of H-spinors identified as quark and lepton spinors (color is not a spin-like quantum number but corresponds to CP2 partial waves). Baryon and lepton numbers are separately conserved, since in TGD H-chirality remains an exact symmetry whereas M^4 chirality is broken from the beginning: in gauge field theories in M^4 one must break exact M^4-chiral invariance to make particles massive. Also electroweak symmetry breaking is coded into the geometry of CP2 at the level of the classical electroweak fields defined as projections of the CP2 spinor connection.

P. S. Tommaso Dorigo has a posting New Higgs limits with taus from D0, which contains a summary of the various production mechanisms for the Higgs. Direct production by gluon-gluon fusion to Higgs via a quark loop dominates in the standard model Universe. In the TGD Universe it could give only a small contribution, since the Higgs couplings to fermions could (but need not) be small (the Higgs coupling does not explain fermion mass). Thus associated production of WH and ZH pairs, for which the rates are smaller by a factor of order ten, could dominate.

Monday, July 30, 2007

Article of Arkady Khodolenko about quantum signatures of Solar system dynamics

Yesterday I learned from a message of Arkady Khodolenko on the blog of Kea that there is a paper by him titled Quantum signatures of Solar system dynamics. There was a comment about the paper today by Kea. After having worked for half a decade with ideas about quantization in astrophysical systems, it is nice to learn that the quantization in planetary systems is finally taken seriously also by mathematicians. In fact, we had interesting discussions with Arkady a couple of months ago and I got an opportunity to happily explain the basic ideas of TGD. Not such a crackpotty feeling anymore, although it would be nice to get recognition for having done so much massive pioneering work; it seems that I am on the wrong side of the fence.

The abstract of the article is here.

Let w(i) be the period of rotation of the i-th planet around the Sun (or w(j;i) the period of rotation of the j-th satellite around the i-th planet). From empirical observations it is known that the sum of n(i)w(i) = 0 (or the sum of n(j)w(j;i) = 0) for some integers n(i) (or n(j)) (some of which are allowed to be zero), different for different satellite systems. These conditions, known as resonance conditions, make the use of theories such as KAM difficult to implement. To a high degree of accuracy these periods can be described in terms of power law dependencies of the type w(i) = A c^i (or w(j;i) = A(i) m^i), with A, c (respectively, A(i), m) being some known empirical constants. Such power law dependencies are known in the literature as the Titius-Bode law of planetary/satellite motion. The resonances in the Solar system are similar to those encountered in old quantum mechanics. Although not widely known nowadays, applications of the methods of celestial mechanics to atomic physics were, in fact, highly successful. With such a success, the birth of new quantum mechanics is difficult to understand. In short, the rationale for its birth lies in the simplicity with which the same type of calculations are done using new methods capable of taking care of resonances. The solution of the quantization puzzle was found by Heisenberg. In this work new uses of Heisenberg's ideas are found. When superimposed with the equivalence principle of general relativity, they lead to a quantum mechanical treatment of the observed resonances in the Solar system. To test the correctness of our theoretical predictions, the number of allowed stable orbits for planets and for equatorial stable orbits of satellites of heavy planets is calculated, resulting in surprisingly good agreement with observational data.

Some comments about the article are in order.

  1. The emphasis of the article is on the rules satisfied by resonance frequencies. The vanishing of a sum of rotation frequencies with integer weights is one manner to end up with the quantization rules while allowing a classical interpretation in terms of KAM theory. In the TGD approach a genuine quantization of dark matter based on the hierarchy of Planck constants explains the rules. Integer multiples of a basic frequency scale implied by the resonance conditions fit nicely with the number-theoretical quantization rules requiring rationals.

  2. These resonance rules follow automatically from Bohr quantization for systems for which the values of various physical parameters are simple rationals with a suitable choice of units. This is of course true for hydrogen atom type systems, harmonic oscillators, etc. Also the known exoplanets demonstrate this quantization, many of them with an orbital radius corresponding to the ground state. In the TGD framework space-time surfaces are preferred extremals of Kähler action (not simply absolute minima, as assumed initially), so that Bohr orbitology is coded into the basic definition of the theory and the classical theory is a genuine part of quantum theory. Quantum criticality is a second essential element: in accordance with general ideas about quantum chaos, it implies that dark matter wave functions are concentrated around classical Bohr orbits.

  3. In the TGD framework the quantization of the gravitational Planck constant hbar_gr = GMm/v0, where v0 = 2^-11 is the most preferred value, is also an essential element. The form of the gravitational Planck constant follows from the Equivalence Principle. The dependence on both M and m is difficult to understand in the context of standard quantum field theory, but makes perfect sense if hbar_gr characterizes the gravitational "field body" mediating the gravitational interaction between the two systems. In principle each interaction corresponds to its own "field body" and hbar. This requires a profound generalization of the notion of imbedding space.

  4. Also certain harmonics and subharmonics of v0 appear, and a rational-valued spectrum is the most general one. Number-theoretically simple ruler-and-compass rationals (the corresponding quantum phases exp(i2π/n) are expressible in terms of square root operations applied to rationals) allow one to understand the quantization in the solar system. The quantization of hbar_gr also leads to number-theoretic predictions for the mass ratios of planets as ruler-and-compass rationals, typically holding true with an accuracy better than 10 per cent.

  5. Titius-Bode type rules (not necessarily only powers of 2) are discussed in the article as a manner of satisfying the resonance conditions. Titius-Bode rules also have an interpretation in terms of the p-adic length scale hypothesis favoring powers of 2, and quantization rules starting from a continuous distribution of matter around planets lead naturally to rules of this kind. These rules conform reasonably well with the hydrogen atom type Bohr quantization rules, but the predictions of the hydrogen atom quantization are more accurate. It is possible that a similar onion-like hierarchy in powers of two holds true also inside the Sun.

  6. Both general coordinate invariance and Poincare invariance pose a serious problem for Bohr rule based quantization in the General Relativity context. In the TGD framework space-times are 4-surfaces, so that the Minkowski coordinates of M^4×CP2 define preferred coordinates. An important prediction is that astrophysical quantum states cannot participate in the cosmological expansion except via quantum transitions in which the gravitational Planck constant changes. Masreliez observed years ago that the planetary system seems to shrink, and the explanation is that the distances in this approach correspond to the radial coordinate of the Robertson-Walker metric rather than a genuine distance. This picture allows one also to understand the different values of hbar_gr for inner and outer planets as a signature of the past dynamics of the solar system. Also the acceleration of cosmic expansion can be understood as being associated with the criticality accompanying a quantum phase transition increasing hbar_gr in cosmological length scales (large voids).

  7. Khodolenko proposes an Exclusion Principle for planetary systems. It is not needed in the picture where macroscopic quantization is a genuine physical effect associated with dark matter. Visible matter condenses around dark matter and makes its presence "visible" via approximate Bohr rules.
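Two of the numbers appearing in the list above are easy to check. The sketch below (my own illustration; the constants are standard textbook values, not taken from the article) first estimates the magnitude of hbar_gr = GMm/v0 for the Sun-Earth pair, assuming v0 = 2^-11 is expressed in units of the speed of light, and then compares the classic Titius-Bode rule a_n = 0.4 + 0.3·2^n AU with observed semi-major axes.

```python
# 1. hbar_gr = G*M*m/v0 for the Sun-Earth pair, with v0 = 2^-11 in units of c
GM_sun = 1.327e20        # G*M for the Sun, m^3/s^2
m_earth = 5.97e24        # kg
c = 2.998e8              # m/s
hbar = 1.055e-34         # J*s

v0 = c * 2 ** -11
hbar_gr = GM_sun * m_earth / v0
ratio = hbar_gr / hbar
print(f"hbar_gr ~ {hbar_gr:.2e} J*s, i.e. {ratio:.1e} times hbar")
assert ratio > 1e70      # gigantic compared to the ordinary Planck constant

# 2. Titius-Bode rule a_n = 0.4 + 0.3*2^n (AU) against observed semi-major axes
observed = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
            "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.54, "Uranus": 19.2}
exponents = {"Mercury": None, "Venus": 0, "Earth": 1, "Mars": 2,
             "Ceres": 3, "Jupiter": 4, "Saturn": 5, "Uranus": 6}

for body, n in exponents.items():
    predicted = 0.4 if n is None else 0.4 + 0.3 * 2 ** n
    error = abs(predicted - observed[body]) / observed[body]
    print(f"{body:8s} predicted {predicted:6.2f} AU, observed {observed[body]:6.2f} AU")
    assert error < 0.06  # the rule holds to a few per cent out to Uranus
```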

The chapters of Classical Physics in Many-sheeted Space-time describe the TGD based view about quantum astrophysics.

Allais effect and TGD

The Allais effect is a fascinating gravitational anomaly associated with solar eclipses. It was discovered originally by M. Allais, a Nobelist in the field of economics, and has been reproduced in several experiments, but not as a rule. The experimental arrangement uses a so-called paraconical pendulum, which differs from the Foucault pendulum in that the oscillation plane of the pendulum can rotate within certain limits, so that the motion occurs effectively on the surface of a sphere.

The articles Should the Laws of Gravitation Be Reconsidered: Part I, II, III? by Allais (here and here) and the summary article The Allais effect and my experiments with the paraconical pendulum 1954-1960 by Allais give a detailed summary of the experiments he performed.

A. Experimental findings of Allais

Consider first a brief summary of the findings of Allais.

  1. In the ideal situation (that is, in the absence of any forces other than the gravitation of Earth) the paraconical pendulum should behave like a Foucault pendulum. The oscillation plane of the paraconical pendulum however begins to rotate.

  2. Allais concludes from his experimental studies that the oscillation plane always approaches asymptotically a limiting plane, and that the effect is particularly spectacular during an eclipse. During a solar eclipse the limiting plane contains the line connecting Earth, Moon, and Sun. Allais explains this in terms of what he calls the anisotropy of space.

  3. Some experiments carried out during eclipses have reproduced the findings of Allais, some have not. In the experiment carried out by Jeveran and collaborators in Romania it was found that the period of oscillation of the pendulum changes by Δf/f ≈ 5×10^-4, which happens to correspond to the constant v_0 = 2^-11 appearing in the formula for the gravitational Planck constant.
  4. There is also a quite recent finding by Popescu and Olenici, which they interpret as a quantization of the plane of oscillation of the paraconical oscillator during a solar eclipse (see this).

B. TGD inspired model for Allais effect

The basic idea of the TGD based model is that Moon absorbs some fraction of the gravitational momentum flow of Sun and in this manner partially screens the gravitational force of Sun in a disk-like region having the size of Moon's cross section. Screening is expected to be strongest at the center of the disk. The predicted upper bound for the change of the oscillation frequency is slightly larger than the observed change, which is highly encouraging.

1. Constant external force as the cause of the effect

The conclusions of Allais motivate the assumption that quite generally there can be additional constant forces affecting the motion of the paraconical pendulum besides Earth's gravitation. This means the replacement g → g + Δg of the acceleration g due to Earth's gravitation. Δg can depend on time.

The system still obeys the same simple equations of motion as in the initial situation, the only change being that the direction and magnitude of the effective Earth's acceleration have changed so that the definition of vertical is modified. If Δg is not parallel to the oscillation plane in the original situation, a torque is induced and the oscillation plane begins to rotate. This picture requires that the friction in the rotational degree of freedom is considerably stronger than in the oscillatory degree of freedom: unfortunately I do not know what the situation is.

The behavior of the system in the absence of friction can be deduced from the conservation laws of energy and angular momentum in the direction of g + Δg.

2. What causes the effect in normal situations?

The gravitational accelerations caused by Sun and Moon come first to mind as causes of the effect. Equivalence Principle implies that only relative accelerations causing analogs of tidal forces can be in question. In the GRT picture these accelerations correspond to a geodesic deviation between the surface of Earth and its center. The general form of the tidal acceleration would thus be the difference of gravitational accelerations at these points:

Δg = -2GM[Δr/r^3 - 3(r·Δr) r/r^5].

Here r denotes the relative position of the pendulum with respect to Sun or Moon. Δr denotes the position vector of the pendulum measured with respect to the center of Earth defining the geodesic deviation. The contribution in the direction of Δr does not affect the direction of Earth's acceleration and therefore does not contribute to the torque. The second contribution corresponds to an acceleration in the direction of the vector r connecting the pendulum to Moon or Sun. The direction of this vector changes slowly.
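As a rough numerical check (a sketch assuming standard astronomical constants, which are not given in the text), the radial part of this tidal formula, of magnitude 2GM Δr/r^3, can be evaluated for Moon and Sun at the surface of Earth:

```python
# Radial tidal amplitude 2*G*M*dr/r^3 at Earth's surface (dr = R_E).
# All constants are standard values, assumed here rather than taken from the text.
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
R_E = 6.371e6     # Earth radius, m

dg_moon = 2 * G * 7.342e22 * R_E / 3.844e8**3   # Moon: mass in kg, distance in m
dg_sun  = 2 * G * 1.989e30 * R_E / 1.496e11**3  # Sun: mass in kg, distance in m

print(f"Moon {dg_moon:.2e} m/s^2, Sun {dg_sun:.2e} m/s^2, "
      f"Moon/Sun {dg_moon / dg_sun:.2f}")
```

The two amplitudes come out at about 1.1×10^-6 and 5.1×10^-7 m/s^2, so they indeed differ by roughly the factor of two discussed in the text.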

This would suggest that in the normal situation the tidal effect of Moon causes a gradually changing force mΔg creating a torque, which induces a rotation of the oscillation plane. Together with dissipation this leads to a situation in which the oscillation plane contains the vector Δg so that no torque is experienced. The limiting oscillation plane should rotate with the same period as Moon around Earth. Of course, if the effect is due to some force other than the gravitational forces of Sun and Earth, the paraconical oscillator would provide a manner to make this force visible and quantify its effects.

3. What happens during solar eclipse?

During a solar eclipse something exceptional must happen in order to account for the size of the effect. The finding of Allais that the limiting oscillation plane contains the line connecting Earth, Moon, and Sun implies that the anomalous acceleration Δg should be parallel to this line during the solar eclipse.

The simplest hypothesis is based on TGD based view about gravitational force as a flow of gravitational momentum in the radial direction.

  1. For stationary states the field equations of TGD for vacuum extremals state the conservation of the flow of gravitational momentum. Newton's equations suggest that planets and moons absorb a fraction of the gravitational momentum flow meeting them. The view that gravitation is mediated by gravitons corresponding to enormous values of the gravitational Planck constant in turn supports a Feynman diagrammatic view in which a description as momentum exchange makes sense and is consistent with the idea about absorption. If Moon absorbs part of this momentum, the region of Earth screened by Moon receives a reduced amount of gravitational momentum and the gravitational force of Sun on the pendulum is reduced in the shadow.

  2. Unless the Moon as a coherent whole acts as the absorber of gravitational four momentum, one expects that the screening depends on the distance travelled by the gravitational flux inside Moon. Hence the effect should be strongest in the center of the shadow and weaken as one approaches its boundaries.

  3. The opening angle of the shadow cone is given in a good approximation by ΔΘ = R_M/r_M. Since the distances of Moon and Earth from Sun differ so little, the screened region has the same size as Moon. This corresponds roughly to a disk with radius .27×R_E.

    The corresponding area is 7.3 per cent of the total transverse area of Earth. If total absorption occurs in the entire area, the total radial gravitational momentum received by Earth is in good approximation 92.7 per cent of normal during the eclipse, and the natural question is whether this effective repulsive radial force increases the orbital radius of Earth during the eclipse.

    More precisely, the deviation of the total amount of gravitational momentum absorbed during solar eclipse from its standard value is an integral of the flux of momentum over time:

    ΔP^k_gr = ∫ (dΔP^k_gr/dt)(S(t)) dt,

    (dΔP^k_gr/dt)(S(t)) = ∫_S(t) J^k_gr(t) dS.

    This prediction could kill the model, at least in its classical form. If one takes seriously the quantum model for astrophysical systems predicting that planetary orbits correspond to Bohr orbits with gravitational Planck constant equal to GMm/v_0, v_0 = 2^-11, there should be no effect on the orbital radius. The anomalous radial gravitational four-momentum could go to some other degrees of freedom at the surface of Earth.

  4. The rotation of the oscillation plane is largest if the plane of oscillation in the initial situation is as orthogonal as possible to the line connecting Moon, Earth and Sun. The effect vanishes when this line is in the initial plane of oscillation. This testable prediction might explain why some experiments have failed to reproduce the effect.

  5. The change of |g| to |g + Δg| induces a change of oscillation frequency given by

    Δf/f = g·Δg/g^2 = (Δg/g) cos(Θ).

    If the gravitational force of the Sun is screened, one has |g + Δg| > g and the oscillation frequency should increase. The upper bound for the effect is obtained from the gravitational acceleration of Sun at the surface of Earth, given by v_E^2/r_E ≈ 6.0×10^-4 g. One has

    |Δf|/f ≤ Δg/g = (v_E^2/r_E)/g ≈ 6.0×10^-4.

    The fact that the increase(!) of the frequency observed by Jeveran and collaborators is Δf/f ≈ 5×10^-4 supports the screening model. Unfortunately, I do not have access to the paper of Jeveran et al to find out whether the reported change of frequency, which corresponds to a 10 degree deviation from vertical, is consistent with the value of cos(Θ) in the experimental arrangement.

  6. One should also explain the recent finding by Popescu and Olenici, which they interpret as a quantization of the plane of oscillation of the paraconical oscillator during a solar eclipse (see this). A possible TGD based explanation would be in terms of a quantization of Δg and thus of the limiting oscillation plane. This quantization should reflect the quantization of the gravitational momentum flux received by Earth. The flux would be reduced in a stepwise manner during the solar eclipse as the distance traversed by the flux through Moon increases, and restored in a similar manner after the maximum of the eclipse.
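The numbers quoted in items 3 and 5 above are easy to reproduce (a sketch; Earth's orbital speed and distance are standard values assumed here, not taken from the text):

```python
# Back-of-envelope check of the figures of the screening model.
g   = 9.81        # m/s^2
v_E = 2.978e4     # Earth's orbital speed, m/s (standard value, an assumption)
r_E = 1.496e11    # Earth's orbital distance, m

# Sun's gravitational acceleration at Earth, in units of g -> bound on |df|/f:
bound = v_E**2 / r_E / g

# A Moon-sized shadow disk of radius 0.27*R_E covers (0.27)^2 of Earth's
# transverse cross section:
blocked = 0.27**2

# The constant v0 = 2^-11, to be compared with the observed df/f ~ 5e-4:
v0 = 2.0**-11

print(f"bound {bound:.2e}, blocked fraction {blocked:.3f}, "
      f"remaining {1 - blocked:.3f}, v0 {v0:.2e}")
```

This reproduces the bound of about 6.0×10^-4, the 7.3 per cent blocked area, and v_0 ≈ 4.9×10^-4, which is indeed close to the observed 5×10^-4.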

C. What kind of tidal effects are predicted?

If the model applies also in the case of Earth itself, new kinds of tidal effects (for normal tidal effects see this) are predicted due to the screening of the gravitational effects of Sun and Moon inside Earth. At the night-side the paraconical pendulum should experience the gravitation of Sun as screened. The same would apply to the "night-side" of Earth with respect to Moon.

Consider first the differences of accelerations in the direction of the line connecting Earth to Sun/Moon: these effects are not essential for tidal effects proper. The estimate for the ratio of the orders of magnitude of these accelerations is given by

|Δg_p(Sun)|/|Δg_p(Moon)| = (M_S/M_M)(r_M/r_S)^3 ≈ 2.17.

The order of magnitude follows from r(Moon) = .0026 AU and M_M/M_S = 3.7×10^-8. The effects caused by Sun are two times stronger. These effects are of the same order of magnitude and can be compensated by a variation of the pressure gradients of atmosphere and sea water.

The tangential accelerations are essential for tidal effects. The above estimate for the ratio of the contributions of Sun and Moon holds true also now and the tidal effects caused by Sun are stronger by a factor of two.

Consider now the new tidal effects caused by the screening.

  1. Tangential effects on the day-side of Earth are not affected (night-time and night-side are of course different notions in the case of Moon and Sun). At the night-side screening is predicted to reduce tidal effects, with a maximum reduction at the equator.

  2. A second class of new effects relates to the change of the normal component of the forces; these effects would be compensated by pressure changes corresponding to the change of the effective gravitational acceleration. The night-day variation of the atmospheric and sea pressures would be considerably larger than in the Newtonian model.

The intuitive expectation is that the screening is maximal when the gravitational momentum flux travels the longest path in the Earth's interior. The maximal difference of radial accelerations associated with opposite sides of Earth along the line of sight to Moon/Sun provides a convenient manner to distinguish between the Newtonian and TGD based models:

|Δg_p,N| = 4GM R_E/r^3,

|Δg_p,TGD| = 4GM/r^2.

The ratio of the effects predicted by TGD and Newtonian models would be

|Δg_p,TGD|/|Δg_p,N| = r/R_E,

r_M/R_E = 60.2, r_S/R_E = 2.34×10^4.
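For reference, the quoted ratios follow directly from the mean distances (standard values assumed):

```python
# Distance ratios appearing in the TGD/Newtonian comparison.
R_E = 6.371e6    # Earth radius, m
r_M = 3.844e8    # mean Earth-Moon distance, m
r_S = 1.496e11   # mean Earth-Sun distance, m

print(f"r_M/R_E = {r_M / R_E:.1f}, r_S/R_E = {r_S / R_E:.2e}")
```

This reproduces the quoted values 60.2 and 2.34×10^4 to within the rounding of the input distances.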

The amplitude for the oscillatory variation of the pressure gradient caused by Sun would be

Δ|grad p_S| = v_E^2/r_E ≈ 6.1×10^-4 g

and the pressure gradient would be reduced during night-time. The corresponding amplitude in the case of Moon is given by

Δ|grad p_S|/Δ|grad p_M| = (M_S/M_M)(r_M/r_S)^3 ≈ 2.17.

Δ |gradpM| is in a good approximation smaller by a factor of 1/2 and given by

Δ|grad p_M| = 2.8×10^-4 g.

Thus the contributions are of the same order of magnitude.

One can imagine two simple qualitative killer predictions.

  1. Solar eclipse should induce anomalous tidal effects induced by the screening in the shadow of the Moon.
  2. The comparison of solar and lunar eclipses might kill the scenario. The screening would imply that inside the shadow the tidal effects are of the same order of magnitude at both sides of Earth for the Sun-Earth-Moon configuration but weaker at the night-side for the Sun-Moon-Earth configuration.

D. An interesting co-incidence

The measured value Δf/f = 5×10^-4 is exactly equal to v_0 = 2^-11, which appears in the formula hbar_gr = GMm/v_0 for the favored values of the gravitational Planck constant. The prediction is Δf/f ≤ Δg/g ≈ 6×10^-4. Powers of 1/v_0 appear also as favored scalings of Planck constant in the TGD inspired quantum model of bio-systems based on dark matter (see this). This coincidence would suggest the quantization formula

g_S/g_E = (M_S/M_E) × (R_E/r_E)^2 = v_0

for the ratio of the gravitational accelerations caused by Sun and Earth on an object at the surface of Earth.
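The suggested coincidence can be checked numerically (standard solar and terrestrial parameters assumed; the result is only as accurate as these inputs):

```python
# Ratio of Sun's to Earth's gravitational acceleration at Earth's surface,
# compared with v0 = 2^-11. Constants are standard values (an assumption).
M_S = 1.989e30   # Sun mass, kg
M_E = 5.972e24   # Earth mass, kg
R_E = 6.371e6    # Earth radius, m
r_E = 1.496e11   # Earth-Sun distance, m

ratio = (M_S / M_E) * (R_E / r_E)**2
v0    = 2.0**-11

print(f"g_S/g_E = {ratio:.2e}, v0 = {v0:.2e}")
```

The ratio comes out at about 6.0×10^-4, the same order of magnitude as v_0 ≈ 4.9×10^-4, so the "coincidence" holds at the order-of-magnitude level rather than exactly.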

E. Summary of the predicted new effects

Let us sum up the basic predictions of the model.

  1. The first prediction is the gradual increase of the oscillation frequency of the paraconical pendulum by Δf/f ≤ 6×10^-4 to a maximum and back during night-time. Also a periodic variation of the frequency and a periodic rotation of the oscillation plane with a period coinciding with Moon's rotation period are predicted.

  2. A paraconical pendulum with an initial position corresponding to the resting position in the normal situation should begin to oscillate during a solar eclipse. This effect is testable by fixing the pendulum to the resting position and releasing it during the eclipse. The amplitude of the oscillation corresponds to the angle between g and g + Δg, given in a good approximation by

    sin[Θ(g, g + Δg)] = (Δg/g) sin[Θ(g, Δg)].

    An upper bound for the amplitude would be Θ ≤ 6×10^-4 radians, which corresponds to .03 degrees.

  3. Gravitational screening should cause a reduction of tidal effects at the "night-side" with respect to Moon/Sun. The reduction should be maximal at "midnight". This reduction, together with the fact that the tidal effects of Moon and Sun at the day-side are of the same order of magnitude, could explain some anomalies known to be associated with tidal effects. A further prediction is the day-night variation of the atmospheric and sea pressure gradients with an amplitude which is for Sun 6×10^-4 g and for Moon 1.3×10^-3 g.
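The angular amplitude bound of item 2 converts to degrees with a one-line check:

```python
import math

# Upper bound on the release amplitude; sin(x) ~ x in this regime.
theta_max = math.asin(6e-4)                       # radians
print(f"{math.degrees(theta_max):.3f} degrees")   # ~0.034 degrees
```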

To sum up, the predicted anomalous tidal effects and the failure of the explanation of the limiting oscillation plane in terms of stronger dissipation in the rotational degree of freedom could kill the model.

For details see the chapter The Relationship Between TGD and GRT of "Classical Physics in Many-Sheeted Space-Time".

Saturday, July 28, 2007

What does Equivalence Principle mean?

In GRT the Equivalence Principle is typically coded by the Einstein-YM action. What Equivalence Principle means in the TGD framework is not at all obvious, since this action does not make sense in TGD, where the fundamental action is the so-called Kähler action.

Basically the problem is about the relationship between inertial and gravitational four-momenta. The real achievement at the level of formal rigor is that inertial and gravitational four-momenta and their generalizations to color quantum numbers are well-defined as Noether charges associated with the curvature scalar. The most general option is that the cosmological constant and gravitational constant are by definition chosen so that gravitational and inertial four-momenta are identical. It however seems unnecessary to introduce a cosmological constant, and it also seems that one must accept a failure of Equivalence Principle, although it does not occur for ordinary matter. For instance, string like objects which are vacuum extremals but have gigantic gravitational mass are possible.

Here one must however be very cautious. It is the asymptotic behavior of the gravitational field created by a topologically condensed space-time sheet which matters experimentally, and it is not at all clear under what conditions the mass parameter characterizing the gravitational field equals the gravitational mass of the space-time sheet defined by the Einstein tensor. The reason is that one no longer has Einstein's field equations, which in the linear approximation identify the energy momentum tensor as the source of the gravitational field.

On the other hand, the covariant divergence of the Einstein tensor vanishes and the components of the Einstein tensor are essentially what one obtains by applying a d'Alembert type operator to the components of the metric. Hence it is natural to regard topologically condensed space-time sheets as sources of the gravitational field defined by the metric. If these sources correspond to gravitational charges of the topologically condensed space-time sheets then there are good hopes of obtaining Equivalence Principle at the level of the asymptotic behavior of the metric. That pseudo-Riemannian geometry codes for the dynamics of the gravitational field without any variational principle is highly non-trivial and means that Einstein's equations derived from the EYM action are only a manner to state Equivalence Principle.

There are good reasons to expect that small deformations of vacuum extremals, which are extremals of the curvature scalar, define the exterior metrics in the stationary situation. In this kind of situation the field equations state the conservation of various kinds of gravitational charges. For the simplest exterior metrics one can indeed cast the conservation laws in a form in which one has the Einstein tensor on the left hand side of the field equations and a term depending on geometric data on the right hand side. The optimistic conjecture is that this is the case more generally, but - as noticed - this is not necessary for the realization of Equivalence Principle in the sense of asymptotic behavior.

For details see the updated chapter The Relationship Between TGD and GRT of "Classical Physics in Many-Sheeted Space-Time".

Thursday, July 26, 2007

Comments about Higgs

Kea has talked about Higgs in her blog. I wrote my comments, but this morning's comment was so long that I preferred to use my own blog. I just glue the text below.

1. Mathematicians and Higgs

Kea mentioned that great mathematicians develop fancy frameworks trying to incorporate Higgs as something more fundamental. My feeling is that mathematicians take quite too passive an attitude to the problem by using their precious skills only to build complex mathematizations of the physicists' ideas. Why not try to assign new physics to these fascinating structures? For instance, I really wonder why a mathematician like Connes is satisfied with only reproducing standard model couplings using his wonderful mathematics and incredible mathematical insights and skills.

2. Is Higgs field something fundamental?

Like Kea, I see the standard model Higgs mechanism as a manner to parameterize the masses of particles, not much more and certainly not a "God particle".

I see however no general reason why scalar particles with non-vanishing electro-weak quantum numbers could not exist and contribute to the effective masses of particles by generating coherent states (by the way, there is an important delicacy involved: a scalar particle in the TGD framework is only an M4 scalar, not a CP2 scalar). Both gauge bosons and Higgs are bound states in the TGD framework, not fundamental fields. Elementary gauge bosons are understood as wormhole contacts connecting two space-time sheets, with light-like throats carrying fermion and antifermion quantum numbers: the wormhole contact is what binds. This picture is unavoidable if fermionic fields are free conformal fields inside throats.

3. Can Higgs expectation prevail in cosmic length scales?

The idea that Higgs vacuum expectation interpreted in terms of coherent state would pervade all space-time does not look very attractive to me either.

In TGD the situation is different: a Higgs expectation as a coherent state would be associated only with gauge boson space-time sheets. Elementary fermions correspond to single light-like wormhole throats associated with topologically condensed CP2 type vacuum extremals, and coherent states of Higgs do not make sense for them although they couple to Higgs in the sense of generalized Feynman diagrams.

p-Adic thermodynamics explains fermion masses fantastically well, and there is a nice geometric interpretation for this aspect of massivation: random light-like motion looks like motion with velocity v&lt;c in a given resolution, hence the average four-momentum is timelike. p-Adic thermodynamics describes this randomness. In the case of bosons the p-adic temperature T=1/n would be low and this contribution would be negligible as compared to the Higgs contribution. The prediction of the top quark mass is in the range allowed by its most recent value, about which Tommaso Dorigo told some time ago, and here is one of the killer predictions.

4. Quarks do not give the entire mass of hadron

As Kea mentions, an experimental fact is that quarks contribute only a small portion to the baryon mass, so that Higgs mechanism cannot be the whole story.

The TGD prediction for the quark contribution to the proton mass is 170 MeV. The rest would come from super-canonical bosons, which are particles having no electro-weak interactions and are electro-weakly dark. They are elementary bosons in the strict sense of the word. Their masses can be calculated from p-adic thermodynamics, and assuming the same topological mixing as for U quarks (natural since only modular degrees of freedom of partonic 2-surfaces are involved), one can understand hadron masses if one assumes proper contents of these particles for hadrons.

One can of course criticize. The super-canonical particle content of a hadron is deduced from the requirement that the hadron mass is predicted with an accuracy better than one per cent, and is not yet predicted from basic principles. Also the integers k labelling the p-adic length scales of quarks are deduced from the hadron mass and depend on the hadron. However, for neutrinos it is an experimental fact that several mass scales exist: something very strange when one recalls the standard textbook explanations about how incredibly weakly interacting neutrinos are.

5. The question

The question is whether one takes particle masses as God given or not. Or stating it somewhat differently: when are particle physicists ready to accept the p-adic mass scale of a particle as a new discrete dynamical degree of freedom? If they are ready for this, the mystery number 10^38 of particle physics reduces to number theory. Of course, those outside the quantum gravity and particle physics communities have enjoyed the beauties of fractals for quite a time: dare I hope that also particle theorists might some day consider the possibility that Planck scale is not the only fundamental scale?
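The "mystery number 10^38" presumably refers to the inverse of the proton's gravitational coupling strength; this reading is my assumption, but the order of magnitude is easy to verify:

```python
# Gravitational fine-structure constant of the proton, alpha_g = G*m_p^2/(hbar*c).
# Standard constants; the identification with the text's 10^38 is an assumption
# of this example.
G    = 6.674e-11    # m^3 kg^-1 s^-2
m_p  = 1.6726e-27   # proton mass, kg
hbar = 1.0546e-34   # J s
c    = 2.9979e8     # m/s

alpha_g = G * m_p**2 / (hbar * c)
print(f"alpha_g = {alpha_g:.2e}, 1/alpha_g = {1/alpha_g:.2e}")  # inverse ~ 10^38
```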

Quite generally, the question is whether one wants to stick to the framework of QFT or string models or whether one is ready to accept new mathematical and physical ideas and look whether they might work. Both top-down and bottom-up approaches are needed and one cannot avoid dirty hands. p-Adic massivation is very concrete idea involving both these approaches.

6. Devil as a master organizer

I am afraid that institutional inertia is too strong for anything to happen in a time scale of a decade. I recall a story from Krishnamurti which helps to understand how spirituality transforms to religion, religion to church, church to fundamentalism, fundamentalism to terrorism and USA in Iraq. Or how the hadronic string model suffers a sequence of transformations leading to the philosophy that a physical theory need not predict anything if it happens to be M-theory. Or how particle physics developed from an anarchy of ideas to the standard model to the recent situation.

Oh yes, the story! It happened, as it sometimes happens, that someone discovered the final truth, nothing less. The nearest assistant of Devil got very worried and rushed to Devil's office to inform his employer about the situation. Surprisingly, Devil was not worried at all and his only comment was "No problem, let us organize it!"

Wednesday, July 25, 2007

Question by John Baez

John Baez asked an interesting question in n-Category-Cafe. The question reads as follows:

Is every representation of every finite group definable on the field Qab obtained by taking the field Q of rational numbers and by adding all possible roots of unity?

Since every finite group can appear as a Galois group, the question translates to the question whether one can represent all possible Galois groups using matrices with elements in Qab.
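To make the question concrete, here is a small illustration (my own example, not from the post): the two-dimensional irreducible representation of S_3, the Galois group of a generic cubic, can be written with matrix entries in Q(ζ_3) ⊂ Qab, and the defining relations of S_3 can be checked numerically:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)      # primitive cube root of unity, lies in Qab
a = [[w, 0], [0, w.conjugate()]]      # image of the 3-cycle
b = [[0, 1], [1, 0]]                  # image of a transposition

def mul(x, y):
    # 2x2 complex matrix product
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def same(x, y, tol=1e-12):
    return all(abs(x[i][j] - y[i][j]) < tol for i in range(2) for j in range(2))

I = [[1, 0], [0, 1]]
a2 = mul(a, a)
# Defining relations of S_3: a^3 = 1, b^2 = 1, b a b = a^-1 (= a^2):
print(same(mul(a2, a), I), same(mul(b, b), I), same(mul(mul(b, a), b), a2))
# -> True True True
```

Baez's question is whether such a realization over Qab exists for every representation of every finite group; the example only shows the pattern for one small case.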

This form of the question has an interesting relation to the Langlands program. By the Langlands conjecture the representations of the Galois group of the algebraic closure of rationals can be realized in the space of functions defined in GL(n,F)\GL(n,Gal(Qab/Q)), where Gal(Qab/Q) is the maximal Abelian quotient of the Galois group of the algebraic closure of rationals. Thus one has the group algebra associated with the matrix group for which matrix elements have values in Gal(Qab/Q). Something several orders of magnitude more complex than matrices having values in Qab.

This relates interestingly also to my own physics inspired ideas about the Langlands program (see this). Some time ago I proposed that the Galois group of algebraic numbers could be regarded as the permutation group S of an infinite number of objects, generated by permutations of finite numbers of objects.

  1. The corresponding group algebra is a hyper-finite factor of type II1 (briefly HFF) and this led to quite fascinating physics inspired ideas about the Langlands program and a connection with what I call number theoretic braids. In particular, Galois groups would have an interpretation as physical symmetries in the TGD Universe and would act on the infinite-D spinors of the world of classical worlds realized as an infinite-D fermionic Fock algebra.

  2. The representations of finite Galois groups G are the physically interesting ones. They could be interpreted as representations of an infinite diagonal subgroup consisting of elements g×g×..., where g belongs to G. This requires the completion of S to contain this kind of permutations. At the level of infinite-D spinors the action would be periodic with respect to the lattice defined by the tensor factors and would belong to the completion of HFF. The motivation for this representation is that an infinite braid cannot be realized at space-time level, and the periodicity would allow to reduce the situation to that for a finite braid since there is no need to say the same thing again and again (I hope I could learn this too;-)!).

  3. The interpretation in terms of gauge symmetries and spontaneous symmetry breaking suggests itself: number theoretic counterparts of local gauge transformations would correspond to S and those of global gauge transformations to elements of the form g×g×....

What does this have to do with John's question and the Langlands program? S contains any finite group G as a subgroup. If all the representations of finite Galois groups could be realized as representations in GL(n,Qab), the same would hold true also for the proposed symmetry breaking representations of the completion of S reducing to representations of finite Galois groups. There would be an obvious analogy with the Langlands program using functions defined in the space GL(n,Q)\GL(n,Gal(Qab/Q)). Be that as it may, mathematicians are able to work with incredibly abstract objects! A highly respectful sigh is in order!

Tuesday, July 24, 2007

TGD Inspired Cosmology

I have updated "TGD Inspired Cosmology". Here is the updated abstract.
A proposal for what might be called TGD inspired cosmology is made. The basic ingredient of this cosmology is the TGD counterpart of the cosmic string. It is found that the many-sheeted space-time concept; the new view about the relationship between inertial and gravitational four-momenta; the basic properties of the cosmic strings; zero energy ontology; the hierarchy of dark matter with levels labelled by arbitrarily large values of Planck constant; the existence of a limiting temperature (as in string models, too); the assumption about the existence of a vapor phase dominated by cosmic strings; and quantum criticality imply a rather detailed picture of the cosmic evolution, which differs from that provided by standard cosmology in several respects but also has strong resemblances with the inflationary scenario.

TGD inspired cosmology in its recent form relies on an ontology differing dramatically from that of GRT based cosmologies. Zero energy ontology states that all physical states have vanishing net quantum numbers so that all matter is creatable from vacuum. The hierarchy of dark matter identified as macroscopic quantum phases labelled by arbitrarily large values of Planck constant is a second aspect of the new ontology. The values of the gravitational Planck constant assignable to space-time sheets mediating gravitational interaction are gigantic. This implies that TGD inspired late cosmology might decompose into stationary phases corresponding to stationary quantum states in cosmological scales and critical cosmologies corresponding to quantum transitions changing the value of the gravitational Planck constant and inducing an accelerated cosmic expansion.

1. Zero energy ontology

The construction of quantum theory leads naturally to zero energy ontology stating that everything is creatable from vacuum. Zero energy states decompose into positive and negative energy parts having an identification as initial and final states of a particle reaction in time scales of perception longer than the geometro-temporal separation T of the positive and negative energy parts of the state. If the time scale of perception is smaller than T, the usual positive energy ontology applies.

In zero energy ontology inertial four-momentum is a quantity depending on the temporal time scale T used, and in time scales longer than T the contribution of zero energy states with parameter T_1 &lt; T to the four-momentum vanishes. This scale dependence alone implies that it does not make sense to speak about conservation of inertial four-momentum in cosmological scales. Hence it would in principle be possible to identify inertial and gravitational four-momenta and achieve a strong form of Equivalence Principle. It however seems that this is not the correct approach to follow.

2. Dark matter hierarchy and hierarchy of Planck constants

Dark matter revolution with levels of the hierarchy labelled by values of Planck constant forces a further generalization of the notion of imbedding space and thus of space-time. One can say that the imbedding space is a book like structure obtained by gluing together an infinite number of copies of the imbedding space like pages of a book: two copies characterized by singular discrete bundle structure are glued together along a 4-dimensional set of common points. These points have a physical interpretation in terms of quantum criticality. Particle states belonging to different sectors (pages of the book) can interact via field bodies representing space-time sheets which have parts belonging to two pages of this book.

3. Quantum criticality

TGD Universe is a quantum counterpart of a statistical system at critical temperature. As a consequence, topological condensate is expected to possess a hierarchical, fractal like structure containing topologically condensed 3-surfaces with all possible sizes. Both Kähler magnetized and Kähler electric 3-surfaces ought to be important, and string like objects indeed provide a good example of Kähler magnetic structures important in TGD inspired cosmology. In particular, space-time is expected to be many-sheeted even at cosmological scales and ordinary cosmology must be replaced with many-sheeted cosmology. The presence of a vapor phase consisting of free cosmic strings and possibly also elementary particles is a second crucial aspect of TGD inspired cosmology.

Quantum criticality of TGD Universe supports the view that many-sheeted cosmology is in some sense critical. Criticality in turn suggests fractality. Phase transitions, in particular the topological phase transitions giving rise to new space-time sheets, are (quantum) critical phenomena involving no scales. If the curvature of the 3-space does not vanish, it defines a scale: hence the flatness of the cosmic time = constant section of the cosmology implied by criticality is consistent with the scale invariance of critical phenomena. This motivates the assumption that the new space-time sheets created in topological phase transitions are, at least for some period of time, modellable in good approximation as critical Robertson-Walker cosmologies.

These phase transitions occur between stationary quantum states having stationary cosmologies as space-time correlates: also these cosmologies are determined uniquely apart from a single parameter.

4. Only sub-critical cosmologies are globally imbeddable

TGD allows a global imbedding of subcritical cosmologies. A partial imbedding of one-parameter families of critical and overcritical cosmologies is possible. The infinite size of the horizon for the imbeddable critical cosmologies is in accordance with the presence of arbitrarily long range fluctuations at criticality and guarantees the average isotropy of the cosmology. Imbedding is possible for some critical duration of time. The parameter labelling these cosmologies is a scale factor characterizing the duration of the critical period. These cosmologies have the same optical properties as inflationary cosmologies. Critical cosmology can be regarded as a 'Silent Whisper amplified to Bang' rather than a 'Big Bang', and it is transformed to a hyperbolic cosmology before its imbedding fails. Split strings decay to elementary particles in this transition and give rise to seeds of galaxies. At some later stage the hyperbolic cosmology can decompose to disjoint 3-surfaces. Thus each sub-cosmology is analogous to a biological growth process leading eventually to death.

5. Fractal many-sheeted cosmology

The critical cosmologies can be used as building blocks of a fractal cosmology containing cosmologies containing ... cosmologies. p-Adic length scale hypothesis allows a quantitative formulation of the fractality. Fractal cosmology predicts that the cosmos has essentially the same optical properties as the inflationary scenario but avoids the prediction of an unknown vacuum energy density. Fractal cosmology explains the paradoxical result that the observed density of matter is much lower than the critical density associated with the largest space-time sheet of the fractal cosmology. Also the observation that some astrophysical objects seem to be older than the Universe finds a nice explanation.

6. Equivalence Principle in TGD framework

The failure of Equivalence Principle in TGD Universe was something which was very difficult to take seriously, and this led to a long series of ad hoc constructs trying to save Equivalence Principle instead of trying to characterize the failure, to find out whether it has catastrophic consequences, and to relate it to the recent problems of cosmology, in particular the necessity to postulate a somewhat mysterious dark energy characterized by cosmological constant. The irony was that all this was possible since TGD allows one to define precisely both inertial and gravitational four-momenta and generalized gravitational charges assignable to isometries of M4× CP2.

It indeed turns out that Equivalence Principle can hold true for elementary particles having so called CP2 type extremals as space-time correlates and for hadrons having string like objects as space-time correlates. This is more or less enough to have consistency with experimental facts. Equivalence Principle fails for vacuum extremals representing Robertson-Walker cosmologies and for all vacuum extremals representing solutions of Einstein's equations. The failure is very dramatic for string like objects that I have used to call cosmic strings. These failures can be however understood in zero energy ontology.

7. Cosmic strings as basic building blocks of TGD inspired cosmology

Cosmic strings are the basic building blocks of TGD inspired cosmology, and all structures including large voids, galaxies, stars, and even planets can be seen as pearls in a fractal cosmic necklace consisting of cosmic strings containing smaller cosmic strings linked around them containing... During cosmological evolution the cosmic strings are transformed to magnetic flux tubes with smaller Kähler string tension, and these structures are also key players in TGD inspired quantum biology.

Cosmic strings are of the form X2× Y2 ⊂ M4× CP2, where X2 corresponds to the string orbit and Y2 is a complex sub-manifold of CP2. The gravitational mass of the cosmic string is Mgr = (1-g)/4G, where g is the genus of Y2. For g=1 the mass vanishes. When Y2 corresponds to a homologically trivial geodesic sphere of CP2, the presence of the Kähler magnetic field is however expected to generate an inertial mass, which also gives rise to a gravitational mass visible in the asymptotic behavior of the metric of the space-time sheet at which the cosmic string has suffered topological condensation. The corresponding string tension is in the same range as for GUT strings and explains the constant velocity spectrum of distant stars around galaxies.
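The genus dependence of the gravitational string tension can be sketched numerically; the following is purely illustrative (units with G = 1):

```python
# Gravitational string tension T_gr = (1 - g)/(4G) for a cosmic string
# X^2 x Y^2, as a function of the genus g of Y^2. Units with G = 1.

def gravitational_tension(g, G=1.0):
    """Tension is positive for g = 0, vanishes for g = 1, negative for g > 1."""
    return (1 - g) / (4 * G)

for g in range(4):
    T = gravitational_tension(g)
    sign = "positive" if T > 0 else "zero" if T == 0 else "negative"
    print(f"g = {g}: T_gr = {T:+.2f} ({sign})")
```

The sign pattern is the whole point of the formula: only g > 1 strings carry negative gravitational energy, which the later discussion of large voids relies on.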

For g>1 the gravitational mass is negative. This inspires a model for large voids as space-time regions containing a g>1 cosmic string with negative gravitational energy, repelling the galactic g=0 cosmic strings to the boundaries of the large void.

These voids would participate in the cosmic expansion only in an average sense. During stationary periods the quantum states would be modellable using stationary cosmologies, whereas during the phase transitions increasing the gravitational Planck constant, and thus the size of the large void, critical cosmologies would be the appropriate description. The acceleration of cosmic expansion predicted by critical cosmologies can be naturally assigned with these periods. Classically the quantum phase transition would be induced when galactic strings are driven to the boundary of the large void by the antigravity of big cosmic strings with negative gravitational energy. The large values of Planck constant are crucial for the understanding of living matter, so that gravitation would play a fundamental role also in the evolution of life and intelligence.

Many-sheeted fractal cosmology containing both hyperbolic and critical space-time sheets based on cosmic strings suggests an explanation for several puzzles of GRT based cosmology, such as the dark matter problem, the origin of matter-antimatter asymmetry, the problem of cosmological constant and the mechanism of accelerated expansion, the problem of several Hubble constants, and the existence of stars apparently older than the Universe. Under natural assumptions TGD predicts the same optical properties of the large scale Universe as the inflationary scenario does. The recent balloon experiments however favor TGD inspired cosmology.

For details see the updated chapter TGD Inspired Cosmology of "Classical Physics in Many-Sheeted Space-Time".

Monday, July 23, 2007

Matter-antimatter asymmetry and cosmic strings

Despite the huge amount of work done during the last decades (during the GUT era the problem was regarded as solved!), matter-antimatter asymmetry still remains an unresolved problem of cosmology. A possible resolution of the problem is matter-antimatter asymmetry in the sense that cosmic strings (in the TGD sense, understood as string like objects) contain antimatter and their exteriors matter. The challenge would be to understand the mechanism generating this asymmetry. The vanishing of the net gauge charges of the cosmic string allows this asymmetry since the electro-weak charges of quarks and leptons can cancel each other.

The challenge is to identify the mechanism inducing the CP breaking necessary for the matter-antimatter asymmetry. Quite a small CP breaking inside cosmic strings would be enough. The key observation is that only small deformations of vacuum extremals to non-vacua are physically acceptable. The simplest deformation of this kind would induce a radial Kähler electric field and thus a small Kähler electric charge inside the cosmic string. This in turn would induce CP breaking inside the cosmic string, inducing matter-antimatter asymmetry by the minimization of ground state energy. Conservation of Kähler charge would in turn induce an asymmetry outside the cosmic string, and the annihilation of matter and antimatter would then lead to a situation in which there is only matter.

For background see the updated chapter The Relationship Between TGD and GRT of "Classical Physics in Many-Sheeted Space-Time".

Never-ending updating continues

I am continuing the updating of the chapters of "Classical Physics in Many-Sheeted Space-time" related to the relationship of TGD and GRT. The updatings are due to the zero energy ontology, the hierarchy of dark matter labelled by Planck constants, and due to the progress in the understanding of Equivalence Principle. I just finished the elimination of the worst trash from "The Relationship between TGD and GRT" and attach the abstract below.

In this chapter the recent view about TGD as a Poincare invariant theory of gravitation is discussed. Radically new views about ontology were necessary before it was possible to see what had been there all the time. Zero energy ontology states that all physical states have vanishing net quantum numbers. The hierarchy of dark matter, identified as macroscopic quantum phases labelled by arbitrarily large values of Planck constant, is a second aspect of the new ontology.

1. The fate of Equivalence Principle

There seems to be a fundamental obstacle to the existence of a Poincare invariant theory of gravitation related to the notions of inertial and gravitational energy.

  1. The conservation laws of inertial energy and momentum assigned to the fundamental action would be exact in this kind of a theory. Gravitational four-momentum can be assigned to the curvature scalar as Noether currents and is thus completely well-defined, unlike in GRT. Equivalence Principle requires that inertial and gravitational four-momenta are identical. This is satisfied if the curvature scalar defines the fundamental action principle crucial for the definition of quantum TGD. Curvature scalar as a fundamental action is however non-physical and has to be replaced with the so-called Kähler action.

  2. One can question Equivalence Principle because the conservation of gravitational four-momentum seems to fail in cosmological scales.

  3. For the extremals of Kähler action the Noether currents associated with curvature scalar are well-defined but non-conserved. Also for vacuum extremals satisfying Einstein's equations gravitational energy momentum is not conserved and non-conservation becomes large for small values of cosmic time. This looks fine but the problem is whether the failure of Equivalence Principle is so serious that it leads to conflict with experimental facts.

It turns out that Equivalence Principle can hold true for elementary particles having so called CP2 type extremals as space-time correlates and for hadrons having string like objects as space-time correlates. This is more or less enough to have consistency with experimental facts. Equivalence Principle fails for vacuum extremals representing Robertson-Walker cosmologies and for all vacuum extremals representing solutions of Einstein's equations. The failure is very dramatic for string like objects that I have used to call cosmic strings. These failures can be however understood in zero energy ontology.

2. The problem of cosmological constant

A further implication of dark matter hierarchy is that astrophysical systems correspond to stationary states analogous to atoms and do not participate in cosmic expansion in a continuous manner but via discrete quantum phase transitions in which gravitational Planck constant increases. By quantum criticality of these phase transitions, critical cosmologies are excellent candidates for the modelling of these transitions. Imbeddable critical cosmologies are unique apart from a parameter determining their duration and represent accelerating cosmic expansion, so that there is no need to introduce cosmological constant.

It indeed turns out possible to understand these critical phases in terms of quantum phase transitions increasing the size of a large void, modelled in terms of "big" cosmic strings with negative gravitational mass whose repulsive gravitation drives "galactic" cosmic strings with positive gravitational mass to the boundaries of the void. In this framework the cosmological constant like parameter does not characterize the density of dark energy but that of dark matter, identifiable as quantum phases with large Planck constant.

A further problem is that the naive estimate for the cosmological constant is by a factor 10^120 larger than its value deduced from the accelerated expansion of the Universe. In TGD framework the resolution of the problem comes naturally from the fact that large voids are quantum systems which follow the cosmic expansion only during the quantum critical phases.

p-Adic fractality predicts that the cosmological constant is reduced by a power of 2 in phase transitions occurring at times T(k) ∝ 2^(k/2), which correspond to p-adic time scales. These phase transitions would naturally correspond to the quantum phase transitions increasing the size of the large voids, during which critical cosmology predicting accelerated expansion naturally applies. On the average Λ(k) behaves as 1/a^2, where a is the light-cone proper time. This correctly predicts the order of magnitude for the observed value of Λ.
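The average 1/a^2 behavior follows directly from the two scalings: if Λ(k) ∝ 2^(-k) and the transition times scale as T(k) ∝ 2^(k/2), then Λ(k)·T(k)^2 is independent of k. A small sketch, with arbitrary illustrative normalization and k values (not TGD predictions):

```python
# Lambda(k) ∝ 2^(-k) (halving per unit step in k) combined with
# T(k) ∝ 2^(k/2) makes Lambda(k) * T(k)^2 constant, i.e. on the
# average Lambda behaves like 1/a^2.

def T(k):
    return 2.0 ** (k / 2)          # p-adic time scale, arbitrary units

def Lambda(k, k0=151, Lambda0=1.0):
    return Lambda0 * 2.0 ** (-(k - k0))

ref = Lambda(151) * T(151) ** 2
for k in (151, 157, 163, 167):
    print(k, Lambda(k) * T(k) ** 2 / ref)   # ≈ 1.0 for all k
```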

3. Topics of the chapter

The topics discussed in the chapter are the following.

  1. The relationship between TGD and GRT is discussed applying the recent views about the relationship of inertial and gravitational masses, the zero energy ontology, and dark matter hierarchy. One of the basic outcomes is the TGD based understanding of cosmological constant as a characterizer of dark matter density.

  2. The notion of many-sheeted space-time, interpreted as a hierarchy of smoothed out space-times produced by Nature itself rather than by the renormalization group theorist only, is discussed. The dynamics of what might be called gravitational charges is discussed, the basic idea being that the structure of Einstein's tensor automatically implies that the metric carries information about the sources of the gravitational field without any assumption about a variational principle.

  3. The theory is applied to the vacuum extremal imbeddings of the Reissner-Nordström and Schwarzschild metrics.

  4. A model for the final state of a star, which indicates that Z0 force, presumably created by dark matter, might have an important role in the dynamics of compact objects. During year 2003, more than a decade after the formulation of the model, the discovery of the connection between supernovas and gamma ray bursts provided strong support for the axial magnetic and Z0 magnetic flux tube structures predicted by the model for the final state of a rotating star. Two years later the interpretation of the predicted long range weak forces as being caused by dark matter emerged.

    The recent progress in understanding of hadronic mass calculations has led to the identification of so called super-canonical bosons and their super-counterparts as basic building blocks of hadrons. This notion leads also to a microscopic description of neutron stars and black-holes in terms of highly entangled string like objects at Hagedorn temperature, in a very precise sense analogous to gigantic hadrons.

  5. There is experimental evidence for gravimagnetic fields in rotating superconductors which are by 20 orders of magnitude stronger than predicted by general relativity. A TGD based explanation of these observations is discussed.

For details see the updated chapter The Relationship Between TGD and GRT of "Classical Physics in Many-Sheeted Space-Time".

Sunday, July 22, 2007

Updated Cosmic Strings

The new developments in quantum TGD have led to updatings of also TGD inspired cosmology. I have just gone through the chapter about cosmic strings.

Cosmic strings belong to the basic extremals of the Kähler action. The upper bound for the string tension of the cosmic strings is T ≈ .5×10^(-6)/G, in the same range as the string tension of GUT strings, and this makes them very interesting cosmologically although TGD cosmic strings otherwise have practically nothing to do with their GUT counterparts.

1. Basic ideas

The understanding of cosmic strings has developed only slowly and has required dramatic modifications of existing views.

  1. Zero energy ontology implies that the inertial energy and all quantum numbers of the Universe vanish and physical states are zero energy states decomposing into pairs of positive and negative energy states. Positive energy ontology is a good approximation under certain assumptions.

  2. Dark matter hierarchy, whose levels are labelled by gigantic values of gravitational Planck constant associated with dark matter, is a second essential piece of the picture.

  3. The identification of gravitational four-momentum as the Noether charge associated with curvature scalar looks in retrospect completely obvious and resolves the long-standing ambiguities. This identification explains the non-conservation of gravitational four-momentum, which is in contrast with the conservation of inertial four-momentum and implies a breaking of Equivalence Principle. There are good reasons to believe that this breaking can be avoided for elementary particles and hadronic strings.

  4. The gravitational energy of string like objects X2× Y2 ⊂ M4× CP2 corresponds to gravitational string tension Tgr = (1-g)/4G, where g is the genus of Y2. The tension is negative for g>1. The gravitational string tension is by a factor of order 10^7 larger than the inertial string tension. This leads to the hypothesis that g>1 "big" strings in the centers of large voids generate a repulsive gravitational force driving g=1 galactic strings to the boundaries of the voids. If the total gravitational mass of strings inside voids vanishes, the breaking of Equivalence Principle occurs only below the size scale of the void.

  5. The basic question is whether one can model the exterior region of the topologically condensed cosmic string using General Relativity. The exterior metric of the cosmic string corresponds to a small deformation of a vacuum extremal. The angular defect and surplus associated with the exterior metrics extremizing curvature scalar can be much smaller than those obtained by assuming vacuum Einstein's equations. The conjecture is that the exterior metric of a g=1 galactic string conforms with the Newtonian intuitions and thus explains the constant velocity spectrum of distant stars, if one assumes that galaxies are organized to linear structures along long strings like pearls in a necklace.

2. Critical and over-critical cosmologies involve accelerated cosmic expansion

In TGD framework critical and over-critical cosmologies are unique apart from a single parameter determining their duration, and they predict the recently discovered accelerated cosmic expansion. Critical cosmologies are naturally associated with quantum critical phase transitions involving a change of gravitational Planck constant. A natural candidate for such a transition is the increase of the size of a large void as galactic strings have been driven to its boundary. During the phase transitions connecting two stationary cosmologies (extremals of curvature scalar), also determined apart from a single parameter, accelerated expansion is predicted to occur. These transitions are completely analogous to quantum transitions at atomic level.

The proposed microscopic model predicts that the TGD counterpart of the quantity ρ+3p for cosmic strings is negative during the phase transition, which implies accelerated expansion. Dark energy is replaced in TGD framework with dark matter, indeed predicted by TGD, and its fraction is .74 as in the standard scenario. Cosmological constant thus characterizes the density of dark matter rather than energy in TGD Universe.
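The link between ρ+3p < 0 and acceleration is just the standard Friedmann acceleration equation a''/a = -(4πG/3)(ρ+3p) (units c = 1); a minimal sign check with illustrative values, not a TGD computation:

```python
import math

# Friedmann acceleration equation: a''/a = -(4*pi*G/3)*(rho + 3*p).
# Accelerated expansion (a'' > 0) holds exactly when rho + 3*p < 0.

def acceleration_over_a(rho, p, G=1.0):
    return -(4.0 * math.pi * G / 3.0) * (rho + 3.0 * p)

print(acceleration_over_a(1.0, -0.5) > 0)        # rho + 3p = -0.5 < 0: accelerates
print(acceleration_over_a(1.0, 0.0) > 0)         # dust, rho + 3p > 0: decelerates
print(acceleration_over_a(1.0, -1.0 / 3.0))      # marginal string-like case, ~0
```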

The sizes of large voids stay constant during stationary periods, which means that also cosmological constant is piecewise constant. p-Adic length scale fractality predicts that Λ scales as 1/L^2(k) as a function of the p-adic scale characterizing the space-time sheet of the void. The order of magnitude for the recent value of the cosmological constant comes out correctly. The gravitational energy density described by the cosmological constant is identifiable as that associated with topologically condensed cosmic strings and the magnetic flux tubes to which they are gradually transformed during cosmological evolution.

3. Cosmic strings and generation of structures

  1. In zero energy ontology cosmic strings must be created from vacuum as zero energy states consisting of pairs of strings with opposite time orientation and inertial energy.

  2. The counterpart of Hawking radiation provides a mechanism by which cosmic strings can generate ordinary matter. The splitting of cosmic strings followed by a "burning" of the string ends provides a second manner to generate visible matter. Matter-antimatter symmetry would result if antimatter is inside cosmic strings and matter in the exterior region.

  3. Zero energy ontology has deep implications for the cosmic and ultimately also for the biological evolution (magnetic flux tubes play a fundamental role in TGD inspired biology, and cosmic strings are limiting cases of them). The arrows of geometric time are opposite for the strings and also for positive energy matter and negative energy antimatter. This implies a competition between two dissipative time developments proceeding in different directions of geometric time and looking like self-organization and even self-assembly from the point of view of each other. This resolves the paradoxes created by gravitational self-organization contra second law of thermodynamics. So called super-canonical matter at cosmic strings implies large p-adic entropy and resolves the well-known entropy paradox.

  4. p-Adic fractality and simple quantitative observations lead to the hypothesis that cosmic strings are responsible for the evolution of astrophysical structures in a very wide length scale range. Large voids with size of order 10^8 light years can be seen as structures with cosmic strings wound around the boundaries of the void. Galaxies correspond to similar but smaller structures linked around the supra-galactic strings. This conforms with the finding that galaxies tend to be grouped along linear structures. Simple quantitative estimates show that even stars and planets could be seen as structures formed around cosmic strings of appropriate size. Thus the Universe could be seen as a fractal cosmic necklace consisting of cosmic strings linked like pearls around longer cosmic strings linked like...

For details see the updated chapter Cosmic Strings.

Saturday, July 14, 2007

New anomaly in cosmic microwave background

In the comment section of Not-Even-Wrong 'island' gave a link to an article about the observation of a new anomaly in cosmic microwave background. The article Extragalactic Radio Sources and the WMAP Cold Spot by L. Rudnick, S. Brown, and L. R. Williams tells that a cold spot in the microwave background has been discovered. The amplitude of the temperature variation is -73 microK at maximum. The authors argue that the variation can be understood if there is a void at redshift z ≤ 1, which corresponds to d ≤ 1.4×10^10 ly. The void would have a radius of 140 Mpc making 5.2×10^8 ly.

In New Scientist, there is a story titled Cosmologists spot a 'knot' in space-time about Neil Turok’s recent talk at PASCOS entitled “Is the Cold Spot in the CMB a Texture?”. Turok has proposed that the cold spot results from a topological defect associated with a cosmic string of GUT type theories.

1. Comparison with sizes and distances of large voids

It is interesting to compare the size and distance of the argued CMB void to those of large voids. The largest known void has a size of 163 Mpc making 5.3×10^8 ly, which does not differ significantly from the size 5.2×10^8 ly of the CMB void. Its distance is 201 Mpc making about 6.5×10^8 ly, roughly by a factor 1/22 smaller than the distance of the CMB void.

Is it only an accident that the size of the CMB void is the same as that of the largest large void? If large voids followed the cosmic expansion in a continuous manner, the size of the CMB void should be smaller by a factor of roughly 1/22. Could it be that large voids follow cosmic expansion only via rather seldom occurring discrete jumps? TGD based quantum astrophysics indeed predicts that expansion occurs in discrete jumps.
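A quick arithmetic check of the quoted conversions (1 Mpc ≈ 3.26×10^6 ly; the CMB void distance 1.4×10^10 ly follows from z ≤ 1):

```python
# Sanity check of the Mpc -> light year conversions and the distance ratio.
MPC_TO_LY = 3.26e6               # 1 Mpc ≈ 3.26e6 light years

largest_void_size_ly     = 163 * MPC_TO_LY    # ≈ 5.3e8 ly
largest_void_distance_ly = 201 * MPC_TO_LY    # ≈ 6.6e8 ly
cmb_void_distance_ly     = 1.4e10             # quoted distance for z <= 1

ratio = cmb_void_distance_ly / largest_void_distance_ly
print(f"size ≈ {largest_void_size_ly:.2e} ly, distance ratio ≈ 1/{ratio:.0f}")
```

The computed distance ratio comes out near 1/21, consistent with the "roughly 1/22" quoted in the text.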

2. TGD based quantum model for astrophysical systems

A brief summary of TGD based quantum model of astrophysical systems is in order.

  1. TGD based quantum model for astrophysical systems relies on the evidence that planetary orbits (also those of known exoplanets) correspond to Bohr orbits with a gigantic value of gravitational Planck constant hgr = GMm/v0 characterizing the gravitational interaction between masses M and m. Nottale originally introduced this quantization rule and explained it in terms of hydrodynamics.

  2. TGD inspired hypothesis is that the quantization represents genuine quantum physics and is due to the fact that dark matter corresponds to a hierarchy whose levels are labelled by the values of Planck constant. Visible matter bound to dark matter would make this quantization visible. Putting it more precisely, each of the space-time sheets mediating interactions (electro-weak, color, gravitational) between two physical systems is characterized by its own Planck constant, which can have arbitrarily large values. For gravitational interactions the value of this Planck constant is gigantic.

  3. The implication is that astrophysical systems are analogous to atoms and molecules and thus correspond to quantum mechanical stationary states having constant size in the local M4 coordinates (t, rM, Ω), related to Robertson-Walker coordinates (a, r, Ω) by a^2 = t^2 - rM^2, r = rM/a. This means that their M4 radius RM remains constant whereas the coordinate radius R decreases as 1/a rather than staying constant as for comoving matter.

  4. Astrophysical quantum systems can however participate in the cosmic expansion by discrete quantum jumps in which Planck constant increases. This means that the parameter v0 appearing in the gravitational Planck constant hgr = GMm/v0 is reduced in a discrete manner, so that the quantum scale of the system increases.

  5. This applies also to gravitational self-interactions, for which one has hgr = GM^2/v0. During the final states of a star the phase transitions reduce the value of Planck constant, and the prediction is that the collapse to a neutron star or supernova should occur via phase transitions increasing v0. For the black hole state the value of v0 is maximal and equals 1/2.

  6. Planetary Bohr orbit model explains the finding by Masreliez that planetary radii seem to decrease when expressed in terms of the cosmic radial coordinate r = rM/a (see this and this). The prediction is that planetary systems should experience now and then a phase transition in which the size of the system increases by an integer factor n. The favored values are ruler-and-compass integers expressible as products of distinct Fermat primes (five of them are known) and a power of 2. The most favoured changes of v0 are by powers of 2. This would explain why inner and outer planets correspond to values of v0 differing by a factor of 1/5.
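The "ruler-and-compass integers" mentioned in the last item are the classical constructible-polygon numbers: a power of 2 times a product of distinct Fermat primes. A short enumeration sketch (the five known Fermat primes are hard-coded):

```python
from itertools import combinations

# Ruler-and-compass integers: n = 2^m * (product of distinct Fermat primes).
FERMAT_PRIMES = (3, 5, 17, 257, 65537)   # all Fermat primes known to date

def ruler_and_compass_integers(limit):
    # All products of distinct Fermat primes (including the empty product 1).
    odd_parts = {1}
    for r in range(1, len(FERMAT_PRIMES) + 1):
        for combo in combinations(FERMAT_PRIMES, r):
            prod = 1
            for f in combo:
                prod *= f
            odd_parts.add(prod)
    # Multiply each odd part by powers of 2 up to the limit.
    values = set()
    for odd in odd_parts:
        n = odd
        while n <= limit:
            values.add(n)
            n *= 2
    return sorted(values)

print(ruler_and_compass_integers(20))  # [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
```

Note that e.g. 7 and 9 are excluded: 7 is not a Fermat prime and 9 repeats the prime 3, matching the classical constructibility condition for regular polygons.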

3. The explanation of CMB void

Concerning the explanation of CMB void one can consider two options.

  1. If the large CMB void is similar to the standard large voids, it should have emerged much earlier than these, or the durations of a constant value of v0 could be rather long, so that also the nearby large voids should have existed for a very long time with the same size.

  2. One can also consider the possibility that the CMB void is a fractally scaled up variant of a large void. The p-adic length scale of the CMB void would be Lp = L(k), p ≈ 2^k, k = 263 (prime). If it has participated in the cosmic expansion in the average sense, its recent p-adic size scale would be about 16 times larger (still below the factor 22 suggested by continuous expansion) and the p-adic scale would be L(k), k = 271 (prime).
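A check of the arithmetic in the last item, assuming L(k) ∝ 2^(k/2):

```python
# Scale ratio between p-adic length scales L(k) ∝ 2^(k/2) for k = 263 and
# k = 271, plus a primality check of both k values.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

k1, k2 = 263, 271
scale_factor = 2 ** ((k2 - k1) // 2)
print(is_prime(k1), is_prime(k2), scale_factor)  # True True 16
```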