https://matpitka.blogspot.com/2015/08/could-interpretation-of-planck-mass-be.html

Tuesday, August 11, 2015

Could the interpretation of Planck mass be totally wrong?

Present-day theoretical physics is suffering from a heavy load of irrational beliefs, which are remnants from the era of classical physics that have ceased to have any real justification. Unfortunately, those who should have told colleagues about this have forgotten to do so. It seems that it is me who must do it;-). Here comes the list.

  1. Naive length scale reductionism. Superstring models and M-theory were for some time believed to be the last jewel in the crown of length scale reductionism. Fractality is the obvious candidate for a vision replacing it, but for some reason - probably sheer vanity - colleagues refuse to give up reductionism.

  2. The belief that quantum effects are important only in microscales has prevented progress in biology. Only now has this taboo begun to lose its grip, as experimental facts do not leave any other option.

  3. The belief that dark matter is a trivial thing - just one or two exotic particles with very weak interactions. All experiments carried out hitherto suggest that this belief is wrong.

  4. Heat death occurs unavoidably and all life disappears from the Universe - terribly sorry. This belief is dictated by the second law in its recent form. Although the consequences are in blatant conflict with the fact that the universe has been evolving to become more and more complex, this taboo is absolute. Some theoreticians even manage to take seriously the notion of the Boltzmann brain and are able to believe that life emerged as a random fluctuation. As more and more evidence for the presence of organic molecules in the Universe emerges, it is becoming clear that this fluctuation should have had the size of the entire Universe.

  5. Planck length mysticism is a further strange belief. Although it has led to a dead end in the attempts to construct a quantum theory of gravitation, it is still taken seriously. Bee had a nice reply to a question posed to her about the Planck length. Bee correctly noticed that the Planck length actually does not define an actual length except in some attempts to quantize gravity - typically as space-time discretization. Loop gravity is one such attempt.

A more feasible interpretation for the inverse of the Planck length squared is as the value of the curvature scale above which gravitational fields are strong. In strong fields the perturbative expansion of standard quantum field theory ceases to make sense. In the TGD framework the Planck length is replaced with the CP2 scale, which is both a genuine length and a measure for the curvature of CP2. In TGD the Planck mass is only a parameter associated with macroscopic gravity, as will be explained. In superstring theories the gravitational constant defines only an algebraic quantity rather than a concrete unit of length.

To understand what is involved, one can start from Newtonian gravitation.

  1. The gravitational potential V = GMm/r is the key notion. One might expect that when the parameter GMm/hbar (in units with c=1) is larger than unity, something terrible happens and the perturbative expansion fails. The criterion would be Mm > hbar/G. This criterion involves only the product of the masses, usually macroscopic masses. It does not contain any length! Could respected colleagues have been on the wrong track?!

  2. What is interesting is that for M=m the condition GM^2/hbar > 1 holds true in condensed matter when M equals the mass of a large neuron with size L≈.1 millimeters! L is definitely not the Planck length! Could it be that the size scale of a big cell defines the scale in which non-perturbative quantum gravitational effects become important?! Could it be that the Planck mass, rather than the Planck length, is what is important, relating to biology rather than to ultra-short length scales? (A back-of-the-envelope check follows below.)

    This seems to be the case in the TGD framework, as the following arguments are meant to demonstrate! Unfortunately, the entire elaborate construction of quantum gravitation based on superstrings would be deadly wrong if this were the case. Therefore I do not expect that superstringy colleagues will continue reading - if they ever started.
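
A back-of-the-envelope check of point 2 above, as a minimal sketch (the water-density assumption for the cell is mine, and all values are order-of-magnitude only):

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34   # reduced Planck constant, J s
    c = 2.998e8        # speed of light, m/s

    # The criterion GMm/(hbar*c) > 1 with M = m gives M > Planck mass.
    m_planck = math.sqrt(hbar * c / G)
    print(f"Planck mass: {m_planck:.2e} kg")   # ~2.2e-8 kg, about 22 micrograms

    # Side of a water-density cube (1000 kg/m^3) with exactly the Planck mass:
    side = (m_planck / 1000.0) ** (1 / 3)
    print(f"cube side: {side * 1e3:.2f} mm")   # ~0.28 mm, of order .1 mm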

As an eternal optimist I do however continue.

  1. Nottale was the first to introduce the notion of the gravitational Planck constant hgr = GMm/v0, where v0 is a characteristic velocity assignable to the system. Nottale proposed that the motion of even the planets in the solar system is quantized so that one has Bohr orbits. The value of hgr however differs for inner and outer planets, being 5 times larger for outer planets, so that it must characterize a pair of systems rather than being a fundamental constant. Note that the dark Compton length for m would be GM/v0 and is the same for all inner/outer planets. (A numerical sketch follows after this list.)

  2. In the TGD framework quantum criticality predicts a hierarchy of effective (or real - depending on interpretation) Planck constants heff = n×h. The integer n has a geometric/topological interpretation as the number of sheets for a space-time surface having the structure of a singular covering, with sheets coinciding at 3-surfaces at the boundaries of causal diamonds - defining the ends of space-time (also restaurants and all other things reside there;-)). This is the space-time correlate for the non-determinism accompanying quantum criticality. The original identification was motivated by that proposed by Nottale and was generalized to other interactions such as the electromagnetic interaction: in the case of electromagnetism one would have hem = Z1Z2e^2/v0.

  3. One can assign heff = hgr to magnetic flux tubes connecting masses M and m and mediating the gravitational interaction between them. Note that string theorists have started to talk about wormholes as mediators of entanglement. In the TGD framework magnetic flux tubes carrying monopole fluxes serve as correlates of negentropic entanglement and mediate the gravitational interaction. Ordinary visible matter is condensed around dark matter structures, and their genuine quantum character implies the approximate Bohr orbit property of planets, among other things. Also in living matter the genuinely quantal behavior of dark matter implies the approximately quantal-looking behavior of ordinary matter, which is - sad to say;-) - living in slavery under the control of dark matter.

    The two identifications of the Planck constant should be equivalent. This gives heff = hgr in the case of gravitational flux tubes. This leads to very nice predictions in quantum biology concerning biophotons and their role.

  4. In the TGD framework flux tubes are also accompanied by fermionic strings, so that string theory becomes part of TGD but in a manner totally different from that in superstring models, where the string tension is fixed and given essentially by 1/G. In TGD this identification would not allow macroscopic gravitation at all, since strings at flux tubes serve as correlates for the formation of gravitational bound states and the strings would be hopelessly short: the distance between two gravitationally bound masses could not be much longer than the Planck length. [Quite unofficially and in brackets: the situation is the same in superstring models in their original form, but polite manners do not allow saying this aloud for the next few decades;-)]. In TGD the string tension is dynamically generated - essentially the magnetic energy per unit length - and decreases as the string length increases.

  5. When does the transition to the dark phase occur? The key idea is the following. Mother Nature loves the theoreticians working so hard to understand her, and since non-perturbative quantum theory is such a difficult challenge, Mother Nature has decided to be merciful. When needed, she makes a phase transition to a phase in which the Planck constant is so large that the quantum perturbation series converges. In the case of gravitation a phase transition to a dark matter phase characterized by hgr = GMm/v0 should take place when GMm is so large that perturbation theory does not converge. The naive estimate is GMm/hbar > 1 for gravitational interactions and Z1Z2e^2/hbar > 1 for electromagnetic interactions. The perturbative parameter becomes simply v0/c < 1 and everything works again. As noticed, for M=m in condensed matter this phase transition should occur for the gravitational interaction in the scale of a large cell - something like .1 millimeters. For em interactions it should occur in shorter scales, and the values of the Planck constant would be much smaller.
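
To get a feel for the magnitudes in points 1 and 5, here is a minimal numerical sketch for the Sun-Earth pair; the value v0 ≈ 145 km/s is the one usually quoted from Nottale's fit for the inner planets and is an assumption here:

    G = 6.674e-11       # m^3 kg^-1 s^-2
    hbar = 1.055e-34    # J s
    c = 2.998e8         # m/s
    M_sun = 1.989e30    # kg
    m_earth = 5.972e24  # kg
    v0 = 1.45e5         # m/s, Nottale's fit for inner planets (assumed)

    # Gravitational Planck constant hgr = GMm/v0 (same units as hbar):
    hgr = G * M_sun * m_earth / v0
    print(f"hgr/hbar = {hgr / hbar:.1e}")        # ~5e73: enormously dark

    # Dark Compton length hgr/(m*c) = GM/(v0*c): independent of m,
    # as stated in point 1 above.
    print(f"dark Compton length = {G * M_sun / (v0 * c):.1e} m")   # ~3e6 m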

Quantum criticality is however the basic criterion. Even at LHC quantum criticality can be encountered. The phase transition to QCD plasma can be such a phase transition. What would happen is that large heff phases are generated, and since the collision energies are so large, resonances corresponding to particles of M89 hadron physics - with mass scale 512 times that of ordinary hadron physics - can appear at criticality, where they have a large Planck constant and a Compton length of order nucleon size, say. Thus quantum criticality allows one to zoom up physics at much higher energies to longer length scales. This would be really marvellous! In biology dark EEG photons would correspond to photons with visible and UV energies and would be a zoom-up of physics below the micron scale!
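
A rough check of this zoom-up arithmetic, as a sketch (the pion mass and the choice n = 512 are my illustrative assumptions, not claims from the text):

    hbar_c = 197.327        # MeV*fm
    m_pi = 139.6            # ordinary pion mass, MeV
    m_pi_M89 = 512 * m_pi   # M89 pion mass ~71.5 GeV under the 512x scaling

    L_ordinary = hbar_c / m_pi_M89   # ordinary Compton length, ~2.8e-3 fm
    n = 512                          # heff = n*h; Compton length scales by n
    print(f"M89 pion Compton length: {L_ordinary:.1e} fm")
    print(f"with heff = 512*h: {n * L_ordinary:.2f} fm (nucleon size ~1 fm)")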

Could this zooming up be a microscope provided by Mother Nature, to be someday used as an experimental method for studying physical systems which for the ordinary value of Planck constant are too small? Could the physics of dark gravitational systems, scaling the Compton length hbar/m to GM/v0, provide this possibility in the case of elementary particles?

32 comments:

Anonymous said...

Bee:
"However, a common way to tame infinities is to cut off integrals or to discretize the space one integrates over. In both cases this can be done by introducing a minimal length or a maximal energy scale."

and

"In Loop Quantum Gravity, one has found a smallest unit of area and of volume. Causal Dynamical Triangulation is built on the very idea of discretizing space-time in finite units."

It is worth noting that the problem of nasty infinities emerges already at the level of number theory when trying to stick to the notion of 1D length - the attempt to reduce the Pythagorean theorem to 1D numerical values of "length" leads to infinite processes of square roots, which in many ways fit better the category of algorithm than that of number. As Bee says, quantum gravity becomes important at _quadrance_ of... what, really? Exactly which or what kind of mathematical relation?

When you say that "theoretical physics is suffering from a heavy load of irrational beliefs", it seems quite rational to identify the notion of "length" as irrational, by its number-theoretical definition.

Anonymous said...

Am I correct to assume that quantum gravity - or criticality - becomes important at some general phase of curvature? If so, what do we mean by curvature mathematically, exactly, and can we express the critical phase algebraically?

Can you work out the solution in terms of purely rational approach to curvature (see: https://www.youtube.com/watch?v=ZRVQIajVdfs)?

Matpitka@luukku.com said...


I see quantum criticality as a completely general notion: I am not able to assign curvature to it. I am even ready to consider that the attribute "quantum" is not necessary and that any critical system gives rise to long-range fluctuations realised as large h_eff phases.

It of course seems obvious that the infinite-dimensional character of the generalized conformal symmetries demands that an infinite number of degrees of freedom are involved: in TGD this is the case at the fundamental level.

2-D critical systems modelled using conformal field theories would be a simpler case, and already for them one has a fractal hierarchy of sub-algebras of conformal algebras: for some reason the importance of this hierarchy has not been noticed.

Physically criticality means an unstable critical point: the potential has a vanishing gradient, but the matrix defined by the second derivatives is not positive definite - one has a generalised saddle point. In Thom's catastrophe theory this notion is central, and catastrophes are described in terms of the degree of criticality.

In the infinite-D context the algebraic characterisation means that at a given level of criticality, characterised by n = h_eff/h, the sub-algebra of the supersymplectic algebra with conformal weights coming as multiples of n annihilates the physical states, and the classical Noether charges for it vanish. For the subalgebras with m < n this is not the case, so that they act as dynamical symmetries instead of conformal gauge symmetries.

The vanishing of the classical supersymplectic charges gives an idea of what criticality could mean. It would give rise to effective 2-dimensionality and holography. Partonic 2-surfaces and string world sheets would code for quantum physics. The interiors of space-time surfaces would code for the classical physics providing the dual description of quantum physics necessary in quantum measurement theory. This would be an analog of AdS/CFT duality.

Matpitka@luukku.com said...

"Can you work a solution in terms of purely rational approach to curvature".

The link did not work. In any case, curvature is not enough, since it codes only for the data associated with the intrinsic geometry of space-time surfaces. There is also the geometry due to the imbedding: the shape of the space-time surface.

Various gauge potentials of the standard model, identified in terms of the induced spinor connection of CP_2 and the projections of CP_2 Killing vector fields, are needed to characterise locally the fact that space-time is a surface. Also the second fundamental form is needed. Induced gauge fields are projections of the CP_2 curvature form except for the U(1) part defined by the Kähler form. In this sense the generalisation of the idea about coding of interactions by curvature makes sense.

Matpitka@luukku.com said...

Bee's comment is true but applies to quantum field theories and loop gravity defined by path integrals. I will return to it at the end of the comment.

*In superstring models the infinities are tamed by a generalization of the particle concept: a 1-D string instead of a point-like particle. This applies also to TGD: now the particle becomes a 3-D surface, and huge supersymplectic symmetries generalize the superconformal invariance of superstring models.


*The quantization in terms of a mathematically ill-defined path integral is given up in TGD, and one introduces the notion of the "world of classical worlds" (WCW) identified as the space of 3-surfaces (an analog of loop space). Einstein's geometrization program of physics is generalized so that it applies to entire quantum physics, making it formally classical physics apart from state function reduction. The effective 2-D property gives excellent hopes that the *functional* (not path) integral over WCW (mathematically well-defined!) reduces to something very similar to what is encountered in string models. Fermions reside at string world sheets connecting partonic 2-surfaces.


*WCW is endowed with Kähler geometry. Already for loop spaces the Kähler geometry is unique from its mere existence and has a Kac-Moody algebra as infinitesimal symmetries. In TGD the supersymplectic algebra defines the isometries of the Kähler geometry, and holography implies that the sub-algebra characterized by conformal weights coming as n-multiples of those of the entire algebra acts as gauge symmetries: this gives rise to the h_eff = n×h hierarchy. As a matter of fact, general coordinate invariance in its strong form implies the strong form of holography and effective 2-dimensionality, meaning a huge reduction of degrees of freedom by transforming them to gauge symmetries.

*The story about how one gets rid of infinities involves many intricate details. For instance, Kähler geometry is necessary to cancel the infinity due to the ill-defined determinant of the metric of WCW in the functional integral. The functional determinant from the second variation of the Kähler function, which defines the vacuum functional as the exponent exp(-K/2), cancels this ill-defined determinant.

Matpitka@luukku.com said...


I do not see the notion of length as a problem. Without the notion of length physics reduces to topological field theories, and the colleagues at LHC would lose their jobs: I do not want this, although I am not always able to think warm thoughts about them;-). The notion of length does not produce infinities in the TGD framework. The problem is how to define definite integrals in the p-adic context: these are needed when extending real-number-based TGD to the adelic TGD required by consciousness theory.

*Discretization is indeed needed, but for purely number-theoretical reasons, not to eliminate divergences. Discretization also corresponds to a finite cognitive and measurement resolution. Angles do not exist p-adically but correspond to phases exp(i*phi). These do not exist p-adically either, unless one considers only roots of unity phi = 2*pi/n and introduces an algebraic extension of p-adic numbers (a small sketch of this follows at the end of this comment). All algebraic extensions of rationals are possible, and each of them induces an extension of p-adics and gives rise to one particular adele in the hierarchy of adeles. The identification is as an evolutionary hierarchy. The parameters characterizing string world sheets and partonic 2-surfaces belong to the extension of rationals defining a given level.

*p-Adic variants of integrals are obtained by algebraic continuation from the points of the algebraic extension to the p-adic realm. Therefore the existence of the integrals - or better to say of the scattering amplitudes, since it probably turns out that the actual integration can be circumvented by using the huge supersymplectic symmetries - in the real sector is enough.

*The continuation of the scattering amplitudes takes place at the level of the moduli space characterizing the particles and the momentum space of the particles. An important point is that real and p-adic space-time surfaces are not obtained from each other by a local operation but by algebraic continuation, by holography, from string world sheets and partonic 2-surfaces in the intersection of reality and p-adicities, by demanding the preferred extremal property in the real/p-adic sense. The correspondence is not local, as I thought originally.
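
As a minimal sketch of the roots-of-unity point above (my illustration, not part of the comment): phases exp(i*2*pi*k/n) can be handled exactly as residues k mod n, so one never leaves the algebraic extension and no real or p-adic limit is needed.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RootOfUnity:
        """The phase exp(2*pi*i*k/n), stored exactly as the pair (k mod n, n)."""
        k: int
        n: int

        def __mul__(self, other):
            if self.n != other.n:
                raise ValueError("fix a common extension first")
            return RootOfUnity((self.k + other.k) % self.n, self.n)

        def conj(self):
            return RootOfUnity((-self.k) % self.n, self.n)

    # Multiplying phases is just addition of exponents mod n:
    a = RootOfUnity(3, 12)          # exp(i*pi/2)
    print(a * RootOfUnity(5, 12))   # RootOfUnity(k=8, n=12)
    print(a * a.conj())             # identity phase: RootOfUnity(k=0, n=12)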

Matpitka@luukku.com said...


Returning to Bee's comment: I am not too enthusiastic about the notions of minimal length and area - to me this looks too simplistic and creates more problems than it solves. Consider only the breaking of Lorentz symmetry predicted by loop gravity: it has not been observed. Its analog in TGD was the attempt to assign to real preferred extremals p-adic ones in terms of a local discretization.

Discretization occurs at the level of the moduli characterizing space-time surfaces, not at the level of their coordinates.

Space-time quantization - not discretization - emerges in TGD from the preferred extremal property; it is an analog of the Bohr orbit property and is present in all scales rather than only in the Planck scale.

Anonymous said...

The link does work for me, maybe you copypasted the closing parenthesis too?

The notion of length runs into square roots, which in idealized form are infinite processes, aka irrational numbers. Our von Neumann architectures (computers) don't do those, only finite floating points. Those do well for engineers, but the program of reducing a TOE to number theory with a notion of cosmic evolution sets very different standards.

Remembering that the developers of quantum theory discarded all other old assumptions and beliefs but chose to trust modern pure math, and that modern pure math - of which mathematical physics is a subset - has deeply problematic foundational issues, beginning from Gödel incompleteness etc., it is natural to hypothesize that e.g. the inconsistent infinity problems of quantum gravity are a product of the foundational problems of pure mathematics.

If Beauty and Truth require, e.g., that natural numbers are not atomistically separate but holographically contain the whole of math as their internal structure, then it is not enough just to suggest so; what you need is to develop a number theory that formally makes it so.

So, I remain very curious what kind of mathematical assumptions and relations the quantum gravity criticality of curvature boils down to, deep down and as theory-independently as possible. What does the question look like if we look at it without any notions of length etc., infinite processes included, in terms of finite, actually and definitely computable quadratic relations? In 2D all the way, if you like. This finite task sounds much more doable and resolvable than the whole of adeles and all that jazz. The analytical process of slicing problems into doable subproblems also has its merits.

Matpitka@luukku.com said...


I think we have discussed this;-). This is a philosophical issue: if one doesn't accept the notion of infinity, then one must work with finite rationals. Physics becomes in practice impossible unless one pretends that irrationals and transcendentals exist. Also p-adic numbers, adeles, etc. are lost. Most of number theory is lost.

*I do not see the divergence problems of quantum field theories as mathematics-related: they are due to the idealization of a non-point-like particle as a point-like particle, and they are solved already in string models.

*As I said, quantum criticality in the TGD framework is not specific to gravity or related specifically to curvature. It is a quite general phenomenon, and I already explained the supersymplectic symmetry behind it. This mathematics assumes the standard view about reals. Things could be done at the level of the Lie algebra using rationals as the coefficient field, but this would not bring much new. The hierarchy of algebraic extensions of rationals is the key notion. The parameters characterizing space-time surfaces as points of WCW are in an algebraic extension of rationals - not the space-time points. Space-time surfaces themselves are continuous and locally smooth. Making them discrete would produce a horrible mess. Consider only the surface x^n + y^n = z^n: the condition that x, y, z are rational would reduce the surface to a single point at the origin for n > 2, as Fermat's Last Theorem correctly states.

*If we give up the notion of length, we are left with topological quantum field theories: forget about physics. TGD is almost topological QFT, and "almost" is an extremely important word here: it brings in length measurement, to which all local measurements reduce in quantum theory. One cannot deduce the laws of physics by requiring that they be tailored to the computational skills humankind happens to have just now.

*Exact computation is something extremely rarely occurring: usually one must do numerics, and computers make this possible. Easy computability is fine in descriptive geometry and for second-order polynomials, but that's all. Algebraic numbers can be dealt with in numerics very nicely: treat them as symbols, perform the calculations analytically as far as you can, and substitute their values only after that. Even when calculating with rationals one must perform a binary/decimal cutoff.

Matpitka@luukku.com said...


Calculating the curvature of the parabola y = kx^2 (the link) without calculus demonstrates to me how marvellous a discovery calculus was! Without calculus we would still live like people before Newton and Leibniz. I am not arguing that modern life is better, but certainly it has a higher complexity level.

Matpitka@luukku.com said...


Strict axiomatics is not possible in physics, and by Gödel's discovery the number of independent basic truths is not finite even in arithmetic - even less so in physics, where one cannot even know what the final mathematical structure is. This is fantastic news in my opinion: new principles are left to be discovered by every generation. The discovery of genuinely new truths is actually the real reason why brilliant young people enter science (only after they give up do they pretend something else;-). Observations serve as the best inspiration, not the axioms of mathematics. There is no time for that kind of luxury. Most colleagues take even number theory as a luxury and do just numerics.

Logical consistency is easy for computers but not for humans. A computer programmer learns this in a painful manner. At the level of basic assumptions, model builders fail continually in this respect: one amusing example is the models invented to explain the 2 GeV bump. The first question should have been: "Does this resonance look like a composite particle or a genuine elementary particle?". Just rough data was enough to demonstrate that it behaves like a meson-like bound state of quarks - not however of the quarks we know. Respected colleagues had however decided that the bump is an elementary particle and simply discarded those aspects which they could not explain. Fantastic constructions: intersecting branes, large new dimensions, new factors in the gauge group, new anomaly cancellation mechanisms.... I could not but laugh!!

If one wants to do some physics within the limitations posed by a finite lifetime, one must accept fuzzy thinking: it is one of the most brilliant discoveries of Nature. Without it we would drown in irrelevant information and would not see the significant bits. Category-theoretical philosophy takes finite cognitive resolution as its starting point.

I have a different view about the relationship between algebraic extensions and the real line. One can see algebraic numbers as points of the real line. One can also see an algebraic extension as a finite-D or even infinite-D space based on the "rational line" as the basic building brick. This depends on how one topologizes the numbers. Pi and e are present in the real sector of the adeles; in the p-adic sectors these transcendentals are infinite as real numbers. Completion is the basic notion: completion makes possible differential calculus, without which even the calculation of the derivative of y = kx^2 would require one web lecture - to say nothing of calculus in the infinite-D context, where visual intuition does not come to help! Accepting the notion of the completion of rationals gives calculus. The physicist takes calculus seriously because there is no other practical option.

I do not know what you mean by "algebraic curvature of irrational". Certainly not curvature in the sense that a geometer or physicist talks about it. The role of curvature in Einstein's theory is similar to that of the gauge field in gauge theory: it carries the physics. The Ricci scalar has a value which does not depend on the choice of coordinates.

Anonymous said...

By algebraic/irrational curvature of repeating patterns I mean, at the simplest level, wave-like patterns plotted from repeating numerical strings, e.g. the "curve" plotted from the repeating string 123211232112321... in Cartesian coordinates, with x = the position of a digit in the string and y = the numerical value at position x. There are of course many other possible algebraic-curvature interpretations of wave-like repeating patterns, e.g. the technique used in the video lecture linked above, which I assume can be extended to polygons and higher dimensions - in a way that even our available von Neumann computers can compute reliably without floating-point fuzziness. The basic intuition is that the repeating wave-like patterns of algebraic numerical extensions of rationals are not insignificant noise but the most basic generative data in terms of space-time generation, a kind of purely algebraic number-theoretical WCW.

One can see algebraic extensions as points on the "real line" only by combinatorially creating the said monster, and my main point was that we run into very deep trouble if the apples of algebraic extensions and the oranges of combinatoric pseudo-transcendental strings ("completion" via least-upper-bound trickery, assuming completed infinite processes in an actual world of floating-point finiteness) get mixed up and confused under the ill-defined term "real number". Algebraic extensions are fine and very interesting in terms of dimensions and space-times, but the "completion" side still contains all the logical problems that already Berkeley pointed out in the work of Newton and Leibniz. The product of the least-upper-bound completion simply does not compute and does not fulfill the most basic requirements of an arithmetical field, and it is fraudulent to pretend it does. Floating-point computations are not completions. The notion of the real-line continuum is not computable, and what is computable is not a continuum in the sense that believers in the real line claim. A classic case where you can't both eat your cake and keep it.

Anonymous said...

PS: Norman teaches in one hour a rational quadratic approach to curvature that makes basic sense and can be turned into computer algorithms with definite results; on the other hand, however long I try, I'm unable to learn differential calculus, not least because of its foundational logical problems, and no computer in this world can do differential calculus with completed real numbers, only with floating-point approximations. The emperor does not have clothes.

Matpitka@luukku.com said...

I would assign "rational number" to repeating numerical patterns: "curvature" is misleading.

You mention repeating wave-like patterns of algebraic numbers: can you give any reference? I just checked what Wikipedia says about algebraic numbers: they are computable to arbitrary precision by a finite, terminating algorithm.

Computable and arbitrary precision are the important words here. There is a nice Wikipedia article about computable numbers: https://en.wikipedia.org/wiki/Computable_number .

Computable numbers form a field but do not form a closed subset of the unit interval. There is also an infinite set of numbers non-computable by a Turing machine, but transcendentals like e and pi are computable.

I would guess that computable functions are polynomials and rational functions with coefficients which are computable numbers - that is, functions having computable numbers as values. The derivative of a computable function in the set of computable numbers is computable.

Could continuous and smooth functions with computable Taylor coefficients be computable? The limit of the Taylor series at a computable number might be a non-computable number. Also the integral might lead to problems, since it involves a limit defined by Riemann sums. Can one define the integral in the set of computable numbers? Could the difficulties be similar to those in the case of p-adic calculus?

A very interesting observation concerns the set of all possible sequences of binary digits, called 2^omega (I identify it as the 2-adic integers): real numbers correspond to this set non-uniquely, since ...b_n011111... corresponds to the same real number as ...b_n10. This is very familiar to me and is encountered more generally as one maps real numbers to p-adic numbers by canonical identification: the correspondence is 1-to-2 when the real number has a finite number of pinary digits (a small sketch follows at the end of this comment).

According to the Wikipedia article this non-uniqueness is the deep reason why not all real numbers are computable. This might also be a reason why p-adic numbers are the right choice if one wants computability (and cognition).

I have ended up with a similar conclusion from a totally different viewpoint. The need to calculate and cognize forces adelic physics. We can however sensorily perceive the entire line [0,1], since sensory perception is not computation.
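
A minimal sketch of the canonical identification mentioned above (my illustration, stated here in the real-to-2-adic direction): a real number in [0,1) with a finite binary expansion sum of b_k*2^-k is mapped to the 2-adic number sum of b_k*2^k.

    from fractions import Fraction

    def canonical_identification(x: Fraction, max_digits: int = 64) -> int:
        """Map sum(b_k * 2^-k) to the 2-adic integer sum(b_k * 2^k),
        represented here as an ordinary integer (finite expansions only)."""
        assert 0 <= x < 1
        result, power = 0, 1
        for _ in range(max_digits):
            if x == 0:
                return result
            x *= 2
            bit = int(x)      # next binary digit of x
            x -= bit
            power *= 2
            result += bit * power
        raise ValueError("expansion not finite within max_digits")

    # 5/8 = 0.101 in binary, i.e. 2^-1 + 2^-3, maps to 2^1 + 2^3 = 10:
    print(canonical_identification(Fraction(5, 8)))   # -> 10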

Matpitka@luukku.com said...


Reals do not induce this kind of horror in me. I see them as a part of a much bigger magnificent structure. Maybe attitude matters: usually learning - and learning to derive - the rules of calculus takes one lecture. After this one can generalise them to functions of several variables. If the lecturer wants to make the lectures less boring he/she can also talk about the basics of tensor analysis and Riemann geometry. I had luck: I had this kind of lecturer during my first year at University.

This is a pity, since if you must do the same horrible work for every function before you can write the program, you are lost already in the case of single-variable calculus, where you can at least use visual intuition. Topology is of course lost, because it assumes the real notion of continuity.

To me this acceptance of only rationals looks like a return to pre-Pythagorean times. It could be an interesting intellectual exercise: like survival in wild Nature for a weekend, eating what one happens to find.

If you know just one example of the monstrosities caused by real numbers in calculus and mathematics in general, I would be happy to hear about it.

Anonymous said...

I'm a nincompoop: of course repeating patterns belong to rationals, not irrationals. Repeating patterns of rationals associate in my mind with wave forms, and I think it is fascinating that wave forms emerge as relations of whole numbers.

The basic problem with the reals is that it is an ill-defined set, containing both _algorithmic_ (computable) elements and non-computable _choices_; it is not ordered, nor does it fulfill the requirements of additivity etc. basic arithmetic - contrary to what is normally authoritatively claimed. The commutative, associative and distributive properties etc. can be shown and proven only for the algorithmic areas, not for the "completion" noise produced by the "least upper bound" combinatorial trick. Textbook examples of the commutative, associative and distributive properties of real numbers use whole numbers as examples, never to my knowledge the non-computable pseudo-transcendental strings of choice that "completion" claims to create. So, if you want to keep on believing in the "completion" for the continuum, you lose arithmetic properties; or you can limit your notion of reals to algorithmic numbers/strings and give up the "real" notion of continuum by choice/simple combinatorics.

Anonymous said...

PS: Of course it's not purely an either-or matter; rather there is a very rich transfinite area between "finite" and "infinite". Transfinite relations are present already at the basic level of ordinal part-whole relations, e.g. in the form of < and >. A "number" like 0.999... is created by choice and belief, not by algorithm, and I find it (by choice and disbelief) very hard to believe that it has any close resemblance to 1. Or that it is smaller than or as big as 1... :)

But the narrative of mathematics, and especially Platonism, is that of Truth by Necessity, appearing the Same for all, at least when common ground is presumed or accepted. Otherwise we couldn't say that Others are irrational for not accepting what is true by necessity... ;)

Anonymous said...

So, to link up: when the study of quantum gravity runs into problems of infinities and non-computability, while simultaneously adhering to the metatheoretical ideals of logical consistency and computability (which, among other things, allow one to call Others irrational), it seems very natural to hypothesize that the problem originates from the non-computable areas of number theory and the standard tool for assumed mathematical continuity. Or what and how do you think, fellow irrational and non-computable sentient being? :)

Matpitka@luukku.com said...


To my understanding the presence of non-computables has absolutely no implications for practical computations, which always reduce to computations using rationals (even when one allows non-rationals as symbols in calculations and replaces them with rational approximations at the end). Every non-computable has a computable arbitrarily near to it, so that finite resolution solves the possible problems one might imagine.

The existence of non-computables might show that our view about computation is quite too idealistic. In physics we can never say that a particle is localized at a precisely defined point: finite measurement resolution. This fact might be worth trying in the formulation of the reals, for instance (if it is not already done).

The use of intervals around numbers, instead of the numbers themselves, as in the fuzzy set approach, should solve the problem. One can formulate basic arithmetic also for subsets (see the sketch below).

I dare claim that as a number field the reals certainly satisfy the requirements of basic arithmetic. If you can show that this is not the case, please do so or give a reference.
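
A minimal sketch of the interval idea above (my illustration): with exact rational endpoints - Python's Fraction - interval arithmetic never leaves the rationals, yet it can pin down an irrational like sqrt(2) to any desired resolution.

    from dataclasses import dataclass
    from fractions import Fraction

    @dataclass(frozen=True)
    class Interval:
        """Closed interval [lo, hi] with exact rational endpoints."""
        lo: Fraction
        hi: Fraction

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

    # A rational enclosure of sqrt(2); squaring it gives an interval around 2:
    s = Interval(Fraction(14142, 10000), Fraction(14143, 10000))
    print(s * s)   # [1.99996..., 2.00024...] as exact fractions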

Matpitka@luukku.com said...

I am forced to repeat myself. I see the infinities as an outcome of too strong an idealisation; I do not associate them with the fine details of the definition of the reals, which have no relevance for practical calculations. This becomes very obvious in the formulation of quantum field theory: one obtains singular expressions when multiplying operators at the same point.

Particles cannot be point-like. This is it. A closely related idea is the geometrization of quantum physics leading to a unique theory forced by mere mathematical existence, demanding the absence of infinities at the level of infinite-D geometry: the loop space for strings still has an infinite Ricci scalar. This forces 3-D basic objects.

Anonymous said...

Maybe I'm weird, but I can't see how non-computable numbers can have basic arithmetic properties. In my simple view arithmetic concerns algorithmic numbers, and when I see claims that the completion of reals, containing mostly(!) non-computable elements, fulfills the requirements of arithmetic, I get the feeling that I'm being tricked. Maybe it's just my Savonian upbringing. ;)

But this discussion has been helpful; we found out that there are people who take this stuff seriously, not as an axiomatic matter of (dis)belief but as work in progress: https://en.wikipedia.org/wiki/Computable_analysis

Anonymous said...

What is the bare minimum of math needed to express QM? It is often said (e.g. by Susskind here https://www.youtube.com/watch?v=iJfw6lDlTuA and here https://www.youtube.com/watch?v=a6ANMKRBjA8) that the complex field is essential. But would some version of complex rationals be enough, or complex p-adics instead of reals?

Or, do we need to create a new quantum math number theory at foundational level?

Matpitka@luukku.com said...


The union of the computables *and* the non-computables is what I am speaking of. The product/sum of a computable and a non-computable is non-computable. The non-computables are not a field. The computables are. I am happy to receive any argument showing that the field property is not satisfied.

One can define a Hilbert space over any field, also over the rationals. The QM formalism exists for any commutative and associative field. Complex numbers are not absolutely necessary: i can also be replaced by an element defining an extension. The unitary evolution operator can be defined even when there is no Schrödinger equation. This is not a problem. (A small sketch follows at the end of this comment.)

When you go to the space-time level you need Riemann geometry, spinor structure, etc., and things get very impractical. One can of course define discrete variants of simple spaces like homogeneous spaces - CP_2 is a good example - so that the symmetries are restricted to discrete subgroups. They appear in the formulation of TGD based on finite measurement resolution and are associated with algebraic extensions of rationals.

Things get very impractical when you must deal with space-time surfaces. You lose the variational principles, which typically rely on integrals. Partial differential equations are lost.

I am doing my best to formulate the new quantum math by starting from the physical picture;-): adelic physics - at the space-time level at least. At the level of Hilbert space there are potential problems: the Hilbert space norm can vanish for an arbitrary p-adic Hilbert space vector. If finite measurement resolution is introduced, everything reduces effectively to an algebraic extension of rationals and this never happens. This serves as one motivation for the hierarchy of extensions of rationals, which is a key element of TGD. In the real sector everything is continuous.
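
A minimal sketch of QM over the complex rationals Q(i) (my illustration of the point above): with exact rational real and imaginary parts, inner products and norms of finite-dimensional state vectors never leave the field.

    from fractions import Fraction

    # An amplitude in Q(i) is a pair (real part, imaginary part), both exact.
    def conj_mul(a, b):
        """conjugate(a) * b in Q(i); the result stays inside Q(i)."""
        ar, ai = a
        br, bi = b
        return (ar * br + ai * bi, ar * bi - ai * br)

    def inner(u, v):
        """Hermitian inner product <u|v> of two vectors over Q(i)."""
        total = (Fraction(0), Fraction(0))
        for a, b in zip(u, v):
            p = conj_mul(a, b)
            total = (total[0] + p[0], total[1] + p[1])
        return total

    # |psi> = (3/5, (4/5)i) is exactly normalized - no real-number limits used:
    psi = [(Fraction(3, 5), Fraction(0)), (Fraction(0), Fraction(4, 5))]
    print(inner(psi, psi))   # -> (Fraction(1, 1), Fraction(0, 1))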

Anonymous said...

The burden of proof is rather on the shoulders of the one making such claims about the reals - and the real question is: are we talking about provability or about an axiomatic position? Andrew Mckinnon says it better in another discussion on this topic on YouTube:

"You say "we know that the rationals form an ordered infinite field." If someone were to challenge you on HOW you know this, I'm guessing you wouldn't respond by saying that what we mean by the phrase "rational numbers" is "an infinite ordered field." And that therefore your original is tautologically true. INSTEAD you would say that we can define the rational numbers as a pairs of integers and can define operations of addition and multiplication and then PROVE that these entities satisfy the field and ordering conditions.

Compare this with the case of real numbers, where you agree there are problems with the various attempts that have been made to define them, but you are happy to merely deal with the properties themselves, ie as I described above, to treat "real numbers" and "complete ordered field" as synonyms.
In which case the statement "the reals form a complete ordered field" is just a tautology.

If this is your position, then I can't really fault it. But does it not seem unfortunate to use superficially similar language to describe such wildly different situations?"
https://www.youtube.com/watch?v=hgOVbFdwMAM&google_comment_id=z125c3ehiuaqilz2e220xzeimtedfj3up

Matpitka@luukku.com said...


The general idea of the lectures seems to be that there are difficulties in the definition of the real numbers. These would be due to the fact that it is not possible to realize the algorithms producing algebraics or transcendentals as finite sequences in the "real world". Certainly true. But one can introduce finite resolution. We can formulate arithmetic for sets. The lecture does not accept the notion of finite resolution.

I see no reason why we should restrict the world of mathematical objects - Platonia - to have boundaries defined by our very limited capabilities to represent its objects. Computers are nice, but we should remember that they are able to represent these objects only within finite resolution.

Physicists are well aware of the necessity of finite measurement resolution, but most of them see no reason to claim that physical existence would be restricted by this. There is Planck length scale mysticism, which however leads to severe problems with Lorentz symmetry.


The example about the addition of decimal numbers is interesting, because the digits at infinity can sum up to something bigger than ten and force one to update, in the worst case, an infinite number of digits before the last one representing the infinitesimal. Finite resolution of course circumvents this problem.

The problem is avoided totally in the addition of p-adic numbers, since in this case the overflow propagates towards the future of the summation, digit by digit.

Anonymous said...

To do justice to Wildberger's view: in the "Socratic lecture" he brought up Wittgenstein's deeply philosophical requirement that if we aim to speak clearly, we must be clear (and consistent) about when we are speaking in algorithms and when in choices.

I believe that is the core issue and difficulty; an algorithm (or Turing machine) is a finite procedure that can potentially keep on going (where the halting problem arises), but I believe it's more accurate to call such algorithmic products transfinite than infinite. Addition is a simple and basic algorithm that produces the natural numbers. Floating-point resolution is also IMHO more correct to define as transfinite than finite resolution, to make clear the difference between genuinely finite computations with integers and rationals and transfinite floating-point calculations of algorithms with transfinite products.

Difficulties in defining real numbers in terms of pure mathematics seem to be closely related to Gödel's proof of the incompleteness theorem, at least when the definition attempt contains the notion of completion. The uncountable set of real numbers is not exactly definable with a finite set of axioms, and all attempts at axiomatic definitions lead to paradoxes on closer look. Gödel first showed that PM implicates the completeness axiom as its foundational presupposition - that all propositions are supposed to be provable - and only after that proved the incompleteness theorem.

The axiomatic completion of the reals, in addition to the algorithmic irrationals and transcendentals, via the least upper bound produces an infinite set of non-algorithmic and non-computable choices of "real numbers" that, if I understood correctly, cannot be assigned Turing machines, analogous to Gödel numbers.

But as you note, and what is remarkable, the p-adic side is free from these problems, which is easy to show on the technical arithmetic level, and I believe the deeper reason is that the p-adic side is based on the part-whole principle that ultimately relates parts to an indivisible all-inclusive whole (cf. Spinoza's logical treatise on the Absolute), whereas the real side is atomistic and reductionistic, trying in vain to deduce the whole from its parts.

Anonymous said...

On the other hand, I strongly disagree with NJW's metaphysical view that mathematics and computability reduce, or rather should reduce, only to representations and results of the "finite" measurable universe of classical physics. To my understanding the quantum Zeno limitations on measuring quantum computations state the contrary, and more generally, on the quantum theory level the "physics" and "math" sides are inseparable and not reducible either way.

Einstein's metacausal conviction that nature conforms to beautiful math is an important part of the story, but not the whole story. Nature had its day of poetic justice with the spooky action at a distance that technical Einstein causality was supposed to hinder.

Matti Pitkänen said...

Wildberger talked about algorithms and choices. This dichotomy brings to my mind classical deterministic time evolution and the free will of the quantum jump. Both are there. Classical time evolution is the correlate for the quantum event - not a 1-to-1 representation - and essential for the interpretation of quantum measurement.

This analogy suggests that algorithms are only a part of the story. Direct conscious experience about the existence of algebraic numbers and transcendentals is made possible by the non-deterministic quantum aspect of existence ("choice"). Purely algorithmic consciousness - if it were possible - would allow only rationals.

A further similar dichotomy is cognition versus sensory experience/motor action. Cognition, in the sense that it is often understood, applies algorithms (I do not actually share this view!).

I know nothing about transfinite algorithms. We are however conscious of transcendentals and algebraics. This is not possible if we are only collections of finite algorithms. An ordinary computer cannot discover square roots and pi because it only produces rationals from rationals: it does not sensorily perceive squares and circles as we do.

Einstein causality (in its weaker form) is not in conflict with entanglement if one gives up the idea that particles are point-like. This has by now been realized also by general relativists (or rather superstringers). They propose that black holes are connected by wormholes serving as correlates/prerequisites for entanglement. In TGD magnetic flux tubes do the same for partonic 2-surfaces. The "spooky interaction at a distance" ceases to be spooky in this framework: entangled particles actually form a single particle geometrically, thanks to the flux tube connecting them.

Matpitka@luukku.com said...


Wildberger noticed that the periodicity of the pinary expansions of rationals is a computationally extremely nice feature. When you notice that the outcome starts to show a repeating pattern, you can stop the calculation! The rest can be predicted, so that optimal accuracy is obtained without an infinite computation time. This is nice!

Rationals are not all, and this requires the notion of finite computational accuracy. One can replace reals with intervals having rational end points. Since products, sums, etc. of rational intervals are rational intervals, it is enough to calculate the results for the end points of the intervals and stop the calculation when things start to repeat themselves. Something like this is probably just what numerics does in practice, but without explicitly stating it (see the sketch below).
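
A minimal sketch of the stopping criterion (my illustration): in base b, the expansion of a rational repeats as soon as a remainder recurs in the long-division loop, so the loop can halt there with the full expansion known.

    def expansion_with_period(p, q, base=10):
        """Digits of p/q (0 <= p < q) in the given base. Returns the
        pre-periodic digits and the repeating block; stops as soon as
        a remainder recurs."""
        seen = {}      # remainder -> index of the digit it produced
        digits = []
        r = p % q
        while r and r not in seen:
            seen[r] = len(digits)
            r *= base
            digits.append(r // q)
            r %= q
        if r == 0:
            return digits, []          # terminating expansion
        return digits[:seen[r]], digits[seen[r]:]

    print(expansion_with_period(1, 7))      # ([], [1, 4, 2, 8, 5, 7])
    print(expansion_with_period(1, 6))      # ([1], [6])
    print(expansion_with_period(5, 8, 2))   # ([1, 0, 1], []) : 0.101 in binary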

Anonymous said...

Anonymous, you say just enough to act like you know what you are saying, but you don't. Computers do differential calculus amazingly well. Matti is correct; the adelic thing is brilliant and correct. Saying "I haven't learned differential calculus because its logic is wrong" just makes you sound like a lunatic.

--crow

Anonymous said...

Matti, I looked at that paper about cobordism briefly, and the thing you said about exp(-t^2/2) seems like a limiting case of a more general space; it is basic to the spectral theorem and Solomon Bochner's work. It seems related to section 3.8, "Spectral Functions", in his 1930s book Harmonic Analysis and the Theory of Probability - this equation: $\lim_{n \rightarrow \infty} \sum_{j = 1}^{n} \sum_{k = 1}^{n} \frac{f_{\gamma}\left( \frac{j}{n} \right) f_{\gamma}^{\ast}\left( \frac{k}{n} \right) Q\left( \frac{j - k}{n} \right)}{n^2} = \lim_{n \rightarrow \infty} E \left| \sum_{j = 1}^{n} \frac{f_{\gamma}\left( \frac{j}{n} \right) g\left( \frac{j}{n} \right)}{n} \right|^2$. Image of the formula here: http://i.imgur.com/XS37miy.png

--Stephen

Matpitka@luukku.com said...


Thank you, Anonymous:

It seems that this comment was meant for a different posting, in which I talked about dynamical topology by starting from the idea of cobordism! Another confusion: I do not remember what I might possibly have said about the Gaussian - probably my age again!;-)