### Does 2-adic quantum arithmetics explain the p-adic length scale hypothesis?

For p=2 quantum arithmetics looks singular at first glance. This is actually not the case, since odd quantum integers are equal to their ordinary counterparts in this case. The same applies to powers of two interpreted as 2-adic integers: their real counterparts are mapped to their inverses in canonical identification.
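The canonical identification I: x = sum a_n p^n -> sum a_n p^(-n) can be sketched numerically. A minimal illustration (the function name and digit-list representation below are my own, not from the original text):

```python
def canonical_identification(digits, p=2):
    """Canonical identification I: sum a_n p^n -> sum a_n p^(-n).
    `digits` lists the p-adic digits a_0, a_1, ... (each 0 <= a_n < p)."""
    return sum(a * p ** (-n) for n, a in enumerate(digits))

# 2-adic digits of 2^3 = 8 are [0, 0, 0, 1]; I(2^3) = 2^(-3) = 0.125,
# i.e. powers of two are mapped to their inverses, as stated above.
print(canonical_identification([0, 0, 0, 1]))  # 0.125
```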

Clearly, odd 2-adic quantum rationals are very special mathematically since they correspond to ordinary rationals. It is fair to call them "classical" rationals. This special role might relate to the fact that primes near powers of 2 are physically preferred. CDs with n=2^{k} would be in a unique position number theoretically. This would conform with the original - and as such wrong - hypothesis that only these time scales are possible for CDs. The preferred role of powers of two also supports the p-adic length scale hypothesis.
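The numerical observation behind the preferred role of powers of two is easy to check directly: for many k the nearest prime to 2^k sits very close, and for k = 13, 17, ... it is the Mersenne prime 2^k - 1 itself. A brute-force sketch (helper names are mine):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def nearest_prime_to(target: int) -> int:
    """Return the prime closest to `target` (the smaller one on ties)."""
    offset = 0
    while True:
        if is_prime(target - offset):
            return target - offset
        if is_prime(target + offset):
            return target + offset
        offset += 1

for k in (7, 10, 13, 17):
    p = nearest_prime_to(2 ** k)
    print(f"k={k}: 2^k={2**k}, nearest prime {p} (p - 2^k = {p - 2**k})")
```

For k = 7, 13, 17 the nearest prime is 2^k - 1 (127, 8191, 131071).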

The discussion of the role of quantum arithmetics in the construction of generalized Feynman diagrams makes it possible to understand how, for a quantum arithmetics based on a particular prime p, a particle mass squared - equal to conformal weight in suitable mass units - divisible by p appears as an effective propagator pole for large values of p. In p-adic mass calculations the real mass squared is obtained from the p-adic one by canonical identification. The construction of generalized Feynman diagrams shows that this strange-sounding rule is a direct implication of number theoretical universality realized in terms of quantum arithmetics.

## 18 Comments:

Dear Matti,

Congratulations on the independent new website; its address is simple and beautiful ;-).

Every time I see from your weblog that a new understanding has taken place for you (especially recently!), I become really happy. Best wishes for the evolution of TGD.

Now I am struggling with zero energy ontology. First, I started with the causal diamond.

Suppose an object: I can imagine a causal diamond for it between times t1 and t2. So every time we speak about a causal diamond, should we identify a temporal interval?

I imagine that the boundary of the causal diamond of the object has 4 pieces, two of them being:

- A piece of the future light cone of the object at t1 and a piece of the past light cone of the object at t2

They are light-like 3-surfaces and intersect at the edges of the causal diamond. The other pieces are:

Two snapshots of the object at t1 and t2, which are space-like 3-surfaces.

I think my understanding of the causal diamond is much weaker than yours. Please guide me more.

Is the transactional interpretation of QM by Cramer at http://en.wikipedia.org/wiki/Transactional_interpretation like zero energy ontology in some respects?

“The basic element of the transactional interpretation is an emitter-absorber transaction through the exchange of advanced and retarded waves, as first described by Wheeler and Feynman (1945, 1949).

Advanced waves have characteristic eigenvalues of negative energy and frequency, and they propagate in the negative time direction. The advanced wave solutions of the electromagnetic wave equation are usually ignored as unphysical because they seem to have no counterpart in nature.”

Dear Hamed:

Thank you for the questions. I think your confusion is due to the fact that you identify the snapshot at t1 as a 3-D surface with a constant value of M^4 time. This interpretation is wrong.

a) Usually one would think that 3-D objects are at an M^4 time = constant snapshot. Now they are at the boundary of the light-cone (the lower boundary of the CD).

b) A 3-D object at t1 means an object at the light-like boundary of CD x CP_2 with tip at t1. If it were a surface in M^4 it would be light-like. It is however a surface in M^4 x CP_2, so that only its M^4 projection is light-like. The full metric is space-like due to the CP_2 contribution to it.

What might also cause confusion is that CD boundaries are 7-D light-like objects in the imbedding space, whereas the light-like 3-surfaces X^3_l are 3-D objects at the space-time surface X^4.

a) The causal diamond has the tips of the intersecting light cones as special (singular) points. Note that the CD is assumed to include CP_2 as a Cartesian factor, so that it is 7-D. It would be boring to write this again and again, so I chose a slightly loose language.

b) The temporal interval is the *proper time* distance T in the metric of M^4 between the tips of the CD. One can perform Lorentz boosts on the CD, but T is of course invariant under the boosts.

c) The causal diamond as a 7-D object of M^4 x CP_2 has just two pieces, given by 0 < r = ct < cT and 0 < r = c(T-t) < cT at the tips. Here r is the radial distance in M^4.

d) The light-like 3-surfaces X^3_l are - or by general coordinate invariance can be chosen to be - wormhole throats: sub-manifolds of the 4-D space-time surface at which the signature of the induced metric changes, so that the 4-metric has a vanishing determinant. They intersect the CD boundary at partonic 2-surfaces.

There is an asymmetry, since for the space-time ends only the M^4 projections are light-like.

You might wonder why the treatment of space-like and light-like 3-surfaces is so asymmetric. The justification for the asymmetric treatment is the following.

a) The metric 2-dimensionality of light-like 3-surfaces gives rise to a huge generalized conformal symmetry.

b) If light-like 3-surfaces are equivalent with space-like 3-surfaces at the ends of space-time, the latter must allow a similar symmetry. They do, and this symmetry comes from the metric 2-dimensionality of the M^4 light-cone boundary, which is just a light-like 3-surface in M^4.

c) This conformal equivalence is realized as the condition that one has so called coset representation of conformal symmetries. The differences of the super conformal generators associated with these two conformal symmetries annihilate the physical states.

d) The interpretation is as a generalization of the Equivalence Principle: inertial conformal quantum numbers associated with the CD boundary are equal to gravitational conformal quantum numbers associated with light-like 3-surfaces. In the special case "quantum numbers" would read "masses".

e) Which conformal algebra contains which? One can say that the Kac-Moody type conformal algebra associated with the light-like 3-surface X^3_l and its partonic ends defines a sub-algebra of the symmetries assignable to delta M^4 x CP_2.

f) I have asked whether these two symmetry algebras extend to a single 4-D symmetry algebra that could be realized in terms of a Yangian defined at the 4-D space-time surface.

To Hamed:

Thank you for an excellent question. It forced me to try to understand what Cramer's interpretation really means.

I think that there is something in common. What you say obviously applies also in the TGD framework. I am not familiar enough with Cramer's interpretation. I must be honest: I have never really understood it ;-). What I do not understand is how the flow of subjective time is possible in this framework.

Wikipedia tells that Cramer uses advanced and retarded solutions of wave equations. The basic idea is that the waves generated by the absorber and the sender interfere to zero in the future of the absorber and the past of the sender. My criticism:

a) I am not at all sure whether this allows one to reproduce the experimental facts about state function reduction. Advanced waves do not make sense for the Schrodinger equation, and in general Hamiltonian quantization one has i dPsi/dt = H Psi, so that the notion of an advanced wave makes no sense.

b) Only if the scattering process can be described in terms of relativistic fields, and if it makes sense to speak about quantized fields which vanish in the future of the absorber and the past of the sender, might this picture make sense. For classical relativistic fields, however, probability conservation is replaced by charge conservation and unitarity is lost: one obtains negative probabilities. To solve the problem one must second quantize. But is it possible to formulate absorber-emitter theory in this framework?

c) I do not understand whether Cramer obtains a realistic scattering matrix in this framework. Wikipedia talks about stochasticity and the Born rule, but what does the Born rule mean now? How would Cramer calculate the probability for a given pair of initial and final states to appear in an ensemble of identical emitter-absorber pairs? One should be able to reproduce the predictions of quantum field theory, and therefore second quantization would be the first step. The Born rule would require an inner product for 4-D evolutions of emitter-absorber quantum fields formulated in terms of the interaction term. Does one obtain Feynman diagrammatics?

d) It would seem that one must introduce a statistical ensemble of identical absorber-emitter pairs representing different kinds of initial-final state pairs, resulting in state preparation followed by state function reduction. By taking the square root of this thermodynamics one would end up with zero energy ontology, and the time-like entanglement coefficients of the zero energy state would give the scattering amplitudes.

e) This framework does not say anything about what time is. One just assumes that the time = constant snapshot representing the observer moves towards the geometric future.

To be continued...

To Hamed:

TGD view differs in several aspects from Cramer's view.

a) Zero energy states are pairs of positive and negative energy states. One can use a state basis for which either the positive or the negative energy part of the state is prepared (well defined particle numbers etc., just as in a particle physics experiment before the scattering). Depending on which end of the CD is prepared, one has the analog of a retarded/advanced wave.

*The arrow of geometric time is different for these two kinds of zero energy states and I think that this is the correct signature.

*It seems that states tend to have the same arrow of geometric time. This could be seen as a kind of phase transition, analogous to magnetization, in which spins tend to be parallel.

*Phase conjugate laser beams would be a physical analog of advanced waves. Also processes like self-assembly in biology could have a non-standard arrow of geometric time.

b) In the TGD framework there is no intention to get rid of state function reduction. Quantum jump is the basic notion of the TGD inspired theory of consciousness. Cramer's world would be a single solution of field equations as he understands them. The TGD Universe is a quantum superposition of all solutions, replaced by a new one in each quantum jump. There is continual re-creation of zero energy states, consistent with conservation laws thanks to the zero energy property.

c) A zero energy state defines the M-matrix as time-like entanglement coefficients, and M-matrices form orthogonal rows of the U-matrix, so that the notion of S-matrix is generalized. The M-matrix is a product of the S-matrix and a Hermitian square root of a density matrix, so that one has a square root of thermodynamics. In Cramer's approach one does not have anything like this.

Thank you very much, the answers were useful for me. In the case of the causal diamond, although I understand more than before from your answers, it takes time to understand its algebraic meaning precisely, and that needs some algebraic tools.

Something that makes TGD harder to learn is that the topics are very entangled with each other! Perhaps it's better for me to read all topics several times, each time more accurately than before.

Dear Hamed,

I tend to avoid formulas. I have some kind of formula allergy. Often a simple drawing would be worth a page of text, but I hate drawing programs intensely ;-). Gratis drawing programs drive me to the border of madness.

This probably makes it more difficult to understand what I am saying. In my opinion you have been learning very fast. I enjoy answering questions which force me to learn myself!

Matti:

"TGD Universe quantum superposition of all solutions replaced by a new one in each quantum jump."

So this dynamical recreation of the universe in each quantum jump AT ALL time/length scales would resonate with the notion of Plato's Cave. "Reality" is a fractal, relative projection.

Regards.

The complexity problem is well known, and it makes TGD difficult when everything entangles into a mess. I don't know if there is a way out, because everything IS entangled :) As of course it must be.

I myself like figures and pictures; many times they are clearer than a page of text. But I hate math formulas intensely too. YES. They can destroy a good paper effectively. And I suspect they make people think they know something, which may not be the case. Just gibberish? I hunt the principles behind the math.

Today I got an interesting statement, for instance: momentum is often conserved, so it is more fundamental than energy. Energy changes (tensors). Light delivers momentum to Earth. Momentum is the product of the mass and velocity of an object, a vector, says Wikipedia. So, a delivery of velocity, vectors? Gravity is a vector. In relativistic mechanics, in order to be conserved, the momentum of an object must be defined as the Lorentz factor * mass * velocity of the object. This gives (c^2 p)/E - a reversal of Einstein's formula? This relativistic energy-momentum relationship holds even for massless particles such as photons. Relativistic momentum is related to the de Broglie wavelength λ by

p = h/λ,

where h is the Planck constant. Ye, of course :)
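The de Broglie relation is easy to check numerically. A quick sketch (the wavelength value is just an illustrative choice, not from the comment):

```python
h = 6.62607015e-34   # Planck constant in J*s (exact SI value)
wavelength = 500e-9  # a green photon, 500 nm

p = h / wavelength   # de Broglie relation p = h/lambda
print(p)             # ~1.325e-27 kg*m/s
```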

Relativistic four-momentum as proposed by Albert Einstein arises from the invariance of four-vectors under Lorentz transformations.

Momentum is a result of the equivalence principle?

I never liked the wippiename, but tgdtheory sounds beautiful :) A change for the better? This time I got no warning, so everything is just as it should be. I wish I could help, but how? I simply don't know.

Ye, Hamed, you learn fast. This is just my gibberish.

Dear Ulla,

four-momentum conservation is a key implication of Special Relativity, used routinely in the analysis of particle physics experiments. For instance, the postulation of the neutrino to account for missing energy and momentum was forced by this law.

In General Relativity things become intricate.

*One can have Poincare invariance only as an approximate tangent space symmetry (imagine replacing the surroundings of a point of a sphere with a plane, as we usually do when we approximate Earth as flat). But what does one mean by this?

*One could argue that general coordinate invariance is an extension of Poincare invariance, but this would mean that Poincare transformations are like gauge transformations and four-momentum would identically vanish.

The densities of Noether currents for four-momentum indeed vanish by Einstein's equations, and four-momenta cannot be integrated to conserved charges since this procedure is not general coordinate invariant.

*One must invent all kinds of tricky definitions of mass (one should identify also momentum and angular momentum), and one loses the Poincare Lie algebra, which is what is needed.

Things work only if one assumes that space-time is a small deformation of Minkowski space, justified by the smallness of the gravitational interaction (but what about black holes?), and one must restrict the consideration to asymptotic regions of space-time. Nima Arkani-Hamed has emphasized this aspect, and it could be seen as one motivation for the notion of holography.

These mathematical and conceptual difficulties led to TGD, where Poincare symmetry remains an exact symmetry and the Equivalence Principle generalizes. One can assign the Equivalence Principle not only to gravitational and inertial mass, not only to gravitational and inertial Poincare charges, but to entire infinite-dimensional conformal super-algebras which could be called gravitational and inertial (the Super-symplectic algebra at the light-like boundaries of the CD and Super Kac-Moody at light-like 3-surfaces).


Matti:

If you would, examine any possible parallels with TGD.

The nature of light is to give to all other lights, so that giving may give again (perpetual motion). All lights in the Universe are connected to each other. Light is not traveling from distant stars and galaxies; it is already here and has always been here, because our star has been connected to all other stars in the Universe ever since its birth. What we call the travel of light is actually the time that it takes for "electrically simulated light" to reproduce itself wave-field to wave-field, once light is emitted from a source.

The light of our Sun gives to all other Suns in the Universe, and this giving is instantaneous, along the electrical torsion spirals (magnetic flux tubes) which connect all lights in our Universe. We are not looking at stars or galaxies as they appeared millions or billions of years ago, as taught by quacks in academic theory. These connected, spiraling electrical streams of torsion between all stars are pulsing their positions along the torsion waves which create the appearance of light at speeds which increase by the square of their distance. The very furthest stars are therefore pulsing their lights along these torsion waves at speeds in extreme excess of the limiting academic "velocity of light", and of all other motions as well, as theorized by Einstein and still taught in our schools as academic truth.

Regards.

To Anonymous:

There are several parallels. Zero energy ontology implies time-like entanglement between the positive and negative energy parts of a zero energy state, which could now correspond to the photon at the source and the photon at the receiver. Also quantum entanglement in astrophysical scales is possible if one accepts the hierarchy of Planck constants. The Universe would be a gigantic living organism.

The velocity of light however has an upper bound when one speaks about classical signals. Instantaneous changes of even space-time surfaces are possible, but they would not be due to super-luminal signaling; rather, they reflect the behavior of these space-time sheets as particle-like objects.

Could one test this picture in living systems? Could two parts of a biological body show correlations requiring super-luminal signal velocities if attributed to classical signals between them? For a 1 meter scale this would give a time scale of the order of a nanosecond.

Dear Matti,

Do I understand correctly? TGD wants to say there are two types of entropy. One is related to the causal diamond as a whole (in each quantum jump a new causal diamond replaces the previous one, and the dynamics is governed by NMP), while another entropy is related to an ensemble of particles at a snapshot (t = constant) of the causal diamond (the second definition is the common one, but for TGD the first is important!).

Then how are the two definitions related to each other?

Regarding “the required ensemble of entropy is an ensemble of strictly deterministic regions of space-time”: is this entropy the same as my understanding of the first definition of entropy above?

How do you define their macrostates and microstates? (And then entropy as the logarithm of the number of different microstates that correspond to a given macrostate.)

Dear Hamed,

thank you for an interesting question.

There are two kinds of entropies.

a) Entanglement entropy, usually assigned to two entangled systems.

b) A purely statistical entropy associated with an ensemble of identical systems, in general in different states.

In case a) the entanglement probabilities are identified as the eigenvalues of the density matrix characterizing the entanglement; they correspond to probabilities of state pairs in the eigenbasis of the density matrix of either system.

NMP states that in state function reduction this density matrix is measured, so that one of its eigenstates is the outcome. This state is of course pure.

In case b) the probabilities for the states in the ensemble are just these probabilities, if the ensemble results from quantum measurement of the density matrix for a large number of identical copies of the system, each entangled in the same manner with the external world.

One can say that the entanglement probabilities for a member of the ensemble become the ensemble probabilities in state function reduction.

Ensemble could be interpreted as ensemble of sub-CDs for a given bigger CD ("observer"). Each sub-CD would define a system entangled with the bigger CD and quantum measurement of the density matrix would reduce this entanglement.

What I have said applies to the ordinary entanglement entropy making sense for real numbers. For rational or algebraic entanglement a number theoretic entanglement entropy makes sense, and the situation is more delicate.
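The ordinary (real) case above can be made concrete: the eigenvalues of the density matrix are the probabilities that become ensemble probabilities after reduction. A minimal numerical sketch (not TGD-specific; NMP and the number theoretic variant are not modeled here):

```python
import numpy as np

def entanglement_entropy(rho):
    """Von Neumann entropy S = -sum_i p_i log p_i, where the p_i are
    the eigenvalues (entanglement probabilities) of the density matrix."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]            # discard numerically zero eigenvalues
    return max(0.0, float(-np.sum(p * np.log(p))))

# Maximally entangled qubit pair: reduced density matrix diag(1/2, 1/2)
print(entanglement_entropy(np.diag([0.5, 0.5])))  # ~0.6931 (= log 2)
# A pure state diag(1, 0) has zero entropy
print(entanglement_entropy(np.diag([1.0, 0.0])))  # 0.0
```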

Dear Hamed,

you asked also about micro- and macro states.

I have been speaking only about various entanglement probabilities. In classical thermodynamics one speaks about microstates and macrostates and entropy in this sense measures the number of microstates which correspond to a given macrostate.

I can only try to formulate this problem in TGD framework assuming hyper-finite factors of type II_1. Let us try to count microstates at quantum level first.

a) Macrostates are obviously equivalence classes of microstates by finite measurement resolution. In TGD framework measurement resolution is described in terms of inclusions N subset M of hyper-finite factors of type II_1.

b) The included algebra N would generate the microstates defining the same macrostate, and one should count the number of states generated by this algebra from a given state.

The problem is that the algebraic dimension of this algebra is infinite in case of hyper-finite factors! Could one argue in the following manner?

a) The trace of the infinite-D unit matrix associated with the total algebra M is by definition equal to one. Usually it would be the infinite dimension of the Hilbert space. Now we however have a hyper-finite factor of type II_1, for which the infinite-D unit matrix has trace, and thus also dimension, equal to one.

The trick is to redefine the dimension of a subspace as the trace of the projector to that sub-space and to replace the ordinary trace with what could be called a quantum trace, characterized by a quantum phase q. For the identity matrix this trace can be taken equal to one (a convention).

b) The total algebra M is a tensor product of the included algebra N and the factor algebra M/N. The dimension of M/N is the index of the inclusion, which for Jones inclusions is an algebraic number in the range 1 to 4; this is not the most general case.

c) This implies that the dimension of N is the inverse of the index of the inclusion, in the range 1/4 to 1 in this case.

d) The number of microstates associated with a given macrostate should be the inverse of the index of the inclusion, which is an algebraic number and defines the fractal quantum dimension of M/N.

e) In the general case one would obtain a product of the inverses of the indices for different inclusions (assuming that one replaces M by its tensor power and N by a tensor product of N_i's).

What about the counting of microstates at the classical level? I will try to say something about this in a separate comment.

Consider now the counting of microstates at classical level.

a) Quantum classical correspondence states that macrostates correspond to braids at the space-time level and microstates to light-like 3-surfaces. A light-like 3-surface is effectively replaced by a braid with strands carrying fermion number.

b) At the level of braids one should somehow count the number of partonic 2-surfaces corresponding to the same braid end configuration. A kind of volume measure in WCW would be needed.

c) Could this volume be a p-adic number, which can be infinite as a real number? Canonical identification, defined in terms of quantum arithmetics for which the quantum phase q is characterized by a p-adic prime p, would map it to a finite real number. Note that quantum phases characterize also Jones inclusions. Quantum arithmetics would however restrict them to integers, or quantum arithmetics must be generalized to allow expansions in powers of an integer n such that the factors are quantum integers containing no factor of n. Maybe there is internal consistency!

d) If there is consistency, the quantum counting by traces and the classical counting by integration over WCW should give identical results. The classical count could also be defined to be equal to the quantum count!

Is this a good place to start? http://ysfine.com/einstein/emc/fparton.html

I want to separate the different geodesic frames into space-like, time-like and light-like. Do they need a common frame? You say that it is momentum/energy. If we look at our universe in a frame of c, then mass cannot escape, nor c, but when we compare to a black hole we see that something does escape. So we see the BH as light-like.

I also need different frames for symmetry and the imbeddings of gravity. I asked you some small questions. Please answer.

Thank you very much Matti.

Now I am struggling with your answer! That's very interesting for me :)

Dear Hamed,

still about your question concerning micro- and macrostates. First of all, the argument does not apply to classical thermodynamics, where the measurement accuracy is really coarse. One considers just variables like pressure, temperature, etc.

Secondly, I noticed a stupid mistake in the guesswork argument for entropy. Entropy should be additive in a tensor product; my guess was multiplicative.

The first guess is that one just takes the logarithm of what I obtained. One could try to justify this in the following manner.

a) In thermal equilibrium for a given energy all microstates are equally probable. For a D-state quantum system the probability of a microstate is 1/D. The Shannon entropy is -\sum_n p_n log(p_n) = log(D).

b) For the factor space M/N, obtained by smoothing details away, D is a fractal dimension having values

d= 4cos^2(pi/n), n=3,4,....

ranging from 1 to 4. Also larger fractal dimensions are possible, and if I remember correctly there is no upper bound for the dimension. This formula holds true only in the simplest situation. By taking tensor products of copies of M and N one obtains products of these dimensions.

One would have S(M/N) =log(d)

for M/N by a formal generalization of the formula for integer dimension.

c) Now one is interested in the entropy of the N factor corresponding to the microscopic, unseen degrees of freedom. One can argue that the entropy for M in thermal equilibrium is just log(1) = 0, since the trace of the unit operator equals one. Therefore one would have, from S(M) = S(M/N) + S(N) = 0,

S(N)= -S(M/N)= -log(d).

d) This entropy is negative!! Could this mean that the hidden degrees of freedom in question carry information instead of entropy? Could this information be regarded as conscious information? Could one assign it to negentropic entanglement? Something could of course be wrong with the argument. Maybe the naive generalization to fractal dimension fails.
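The numbers in this argument are easy to tabulate. A small sketch of the quantum dimensions d = 4cos^2(pi/n) and the resulting entropies S(M/N) = log d and S(N) = -log d (the negentropy reading of -log d is the speculative step discussed in the comment, not an established result):

```python
import math

def jones_dimension(n: int) -> float:
    """Quantum dimension d = 4*cos^2(pi/n) of M/N for a Jones inclusion."""
    return 4.0 * math.cos(math.pi / n) ** 2

for n in range(3, 9):
    d = jones_dimension(n)
    print(f"n={n}: d={d:.4f}  S(M/N)={math.log(d):+.4f}  S(N)={-math.log(d):+.4f}")
```

The dimensions start at d = 1 for n = 3 (zero entropy), pass through d = 2 (n = 4) and d = 3 (n = 6), and approach the limit 4 as n grows, matching the range 1 to 4 quoted above.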
