The mice were first placed in a tank of water and trained to find their way to a platform just below the surface. Next, the team ensured that the task was stored in the mice's long-term memory by waiting for a few weeks. Then they induced the mice to produce p25, leading to loss of neurons, learning ability and memory. When the mice were then placed in an environment rich in various stimuli, the memories were restored.
The first interpretation is that the memories are stored in the ordinary sense, not in synaptic contacts but somewhere else, say in RNA inside cell nuclei, now known to be coded in large quantities by the intronic portion of DNA (see the recent article in New Scientist). Stable storage of memories in the conventional sense of the word seems, however, to require that these RNA molecules remain in the nucleus, which need not make sense.
The TGD based model of long-term memories, for which zero energy ontology provides a justification, offers an alternative explanation. The basic ideas are the following.
- Long-term sensory or episodic memories making memory feats possible would correspond to sharing of mental images of the geometric past by time-like entanglement.
- This storage mechanism is not very efficient, and a more efficient mechanism would be based on communication with the geometric past. Memory recall would be represented as a signal sent from the magnetic body, at the appropriate level of the onion-like hierarchy of magnetic bodies, to the brain of the geometric past, realized as phase conjugate negative energy dark photons. The memory would be communicated to the geometric future using an analogous positive energy signal. This time mirror mechanism is also the key mechanism of remote metabolism and of intentional action, and it explains Libet's findings about strange time delays of consciousness.
- The problems due to the extremely low photon energies are circumvented if the photons are dark and thus correspond to a value of Planck constant so large that their energies are above the thermal energy at room temperature. EEG represents only a small portion of the frequencies involved and corresponds to a time scale of a fraction of a second. Much longer time scales are involved in what we are used to calling long-term memories.
In fact, almost any quantum dynamical event of the geometric past in the living body could serve as a memory storage. Nerve pulse patterns would serve only as symbolic "digital" representations of memories. This representation would be much more economical than the representation as sensory memories localizable at the level of primary sensory organs.
35 comments:
Interesting. Off topic: did you notice that someone claims
http://arxiv.org/abs/0706.1968
to have proved RH using your ideas? Of course, one glimpse of the paper shows that it looks far too careless to be correct. And 'proofs/disproofs' seem to be showing up on the arxiv frequently now. What on earth are they doing about standards?
I had not noticed the paper earlier although the name of author was familiar. I could not see any obvious connection with my own ideas.
Hello Matti,
Kea pointed me to your blog on PF. I see you've published a massive amount of work on your website and I've not even read a fraction of it, but scanned some of the pdf's. Not having grasped it all, I am attracted to your mode of thinking, and in particular your inspiration from learning and biology. That's where I get a lot of inspiration as well.
Keep up your good thinking!
I pretty much just resumed thinking about this myself (I'm not doing any physics as a job, I made that decision a long time ago; it's a hobby for me), but I sense a lot of common sources of inspiration, and this makes me curious.
A question for you: What are your founding principles and philosophical first principles, used as a source of inspiration? And what's your idea of the scientific method? Is it somehow mathematical structures, or what? I scanned some of your papers and my first question was to understand your source of inspiration and motivation.
/Fredrik (from Sweden)
Nice to hear of having a soulmate! I decided to write a short answer to your questions, but I find that it grew into an essay. I hope you do not get totally bored.
I started as a student of theoretical physics 36 years ago, and somehow the idea of constructing a unified theory became my great dream. About 28 years ago, after a 5-year odyssey with several failed ideas, I ended up with the idea about space-times as 4-surfaces in certain 8-D space with 3-D surfaces identified as basic dynamical objects (counterparts for Wheeler's 3-geometries). It soon became obvious that this approach leads to a highly unique theoretical framework if one poses the condition that the theory reproduces standard model symmetries. One could end up with this framework either from string models or from the energy problem of general relativity.
Only a couple of years ago I realized that general coordinate invariance in the 4-D sense allows one to restrict consideration to light-like 3-surfaces, and this allowed me to understand how the conformal symmetries of super-string models generalize and how D=4 is the unique space-time dimension. It also turned out that at the fundamental level TGD has a formulation as an almost topological quantum field theory, with light-likeness bringing in the notion of metric and length. This brings in category theoretical machinery naturally (for instance, S-matrix as a functor). Here the interaction with Kea's blog has been very inspiring.
Already in the beginning it was obvious that the formulation of Quantum TGD is not possible using the existing recipes (canonical quantization, path integral) and this led to the idea that physics could be formulated as infinite-dimensional spinor geometry in the "world of classical worlds". Classical spinor fields of this infinite-dimensional space would represent quantum states of universe so that the only genuinely quantal thing would be quantum jump ("Do not quantize"). Also fermionic statistics found a natural geometrization in terms of infinite-D Clifford algebra and reduced to second quantization of free fermions at space-time level. The philosophical vision was that infinite-D geometry and thus physics could be more or less unique from the mere requirement of mathematical existence. This is suggested already by the uniqueness of Kaehler geometry of loop spaces: if you require the existence of Riemann connection you must have infinite-D Kac-Moody type symmetries.
The realization that the spinors of the world of classical worlds can be regarded as a von Neumann algebra known as a hyper-finite factor of type II_1 (HFF) led a few years ago to a burst of new ideas. Quantum groups and various non-commutative structures emerged in this framework from a generalization of quantum measurement theory taking into account that measurement resolution is necessarily finite for HFFs. A generalization of the notion of imbedding space emerged and led to the vision about dark matter as a hierarchy of macroscopic quantum phases labelled by increasing values of Planck constant, living in different sectors of the generalized imbedding space.
About 15 years ago I started a serious construction of consciousness theory. I must say that most of the motivation and vision came from an attempt to understand a long-lasting altered state of consciousness that I experienced around '85 or so. A second motivation was the realization that an extension of quantum measurement theory and a solution of its problems would be necessary before I could have any hope of constructing the S-matrix of TGD. The replacement of the reductionistic view with a fractal view of the universe, with macroscopic quantum coherence in all length scales, was already implied by the generalization of the space-time concept to many-sheeted space-time, with space-time sheets having an interpretation as arbitrarily large quantum coherence regions. "Everything is consciousness and consciousness can be only lost by quantum entangling" is one manner to characterize the basic idea.
p-Adic physics emerged roughly at the same time via p-adic mass calculations. The interpretation of p-adic physics as physics of cognition and intentionality emerged. Cognition and intentionality would be present already at elementary particle level and p-adic fractality would be the experimental signature of it making itself visible in elementary particle mass spectrum among other things.
This led gradually to the vision about physics as generalized number theory. It involves three separate aspects.
a) The p-adic approach led eventually to the program of fusing real physics and various p-adic physics into a single coherent whole by generalizing the number concept: gluing reals and various p-adics into a larger structure along common rationals and algebraics. This inspired the notion of algebraic universality, stating that for instance the S-matrix should result by algebraic continuation from a rational or at most algebraic valued S-matrix. The notion of a number theoretic braid, belonging to the algebraic intersection of real and p-adic partonic 2-surfaces obeying the same algebraic equations, emerged also and gives a further connection with topological QFTs. The perturbation theoretic definition of the S-matrix is definitely excluded in this approach, and TGD indeed leads to an understanding of coupling constant evolution at the level of the "free" theory as a discrete p-adic coupling constant evolution, so that radiative corrections are not needed for this purpose.
b) Also the classical number fields relate closely to TGD, and the vision is that the imbedding space M^4xCP_2 emerges from physics based on hyper-octonionic 8-space with associativity as the fundamental dynamical principle at both the classical and quantum level. Hyper-octonion space M^8, with space-time surfaces identified as hyper-quaternionic sub-manifolds or their duals, and M^4xCP_2 would provide in this framework dual manners to describe physics, and this duality would provide the TGD counterpart of compactification.
c) The construction of infinite primes is analogous to repeated second quantization of super-symmetric arithmetic quantum field theory. This notion implies a further generalization of real and p-adic numbers allowing space-time points to have infinitely complex number theoretic structure not visible at the level of real physics. The idea is that space-time points define the Platonia able to represent in its structure arbitrarily complex mathematical structures and that space-time points could be seen as evolving structures becoming quantum jump by quantum jump increasingly complex number theoretically. Even the world of classical worlds (light-like 3-surfaces) and quantum states of Universe might be represented in terms of the number theoretic anatomy of space-time points (number theoretic Brahman=Atman and algebraic holography).
The notion of zero energy ontology is one of the latest developments but was forced already by TGD inspired cosmology. It means that physical states are zero energy states consisting of positive and negative energy parts. The S-matrix measured in the laboratory corresponds to time-like entanglement coefficients for zero energy states and need not be unitary. The S-matrix can be regarded as a complex square root of a density matrix and is a matrix-like generalization of the Schrödinger amplitude, giving the density matrix as its modulus squared and having a unitary S-matrix as the counterpart of the phase factor. The thermal S-matrix results as a special case, so that thermodynamics becomes part of quantum theory in zero energy ontology.
Some comments about inspirations, motivations and methods. The basic inspiration and motivation has come from the realization that if the basic idea of TGD is correct, it means a profound revolution in world view, and from the fact that there has been a rather concrete connection with experimental physics from the beginning, which also raises highly non-trivial challenges. If you have given up the reductionistic and materialistic dogmas, biology and neuroscience provide an endless source of problems.
My scientific "method" has been more or less free drifting in the space of mathematical and physical ideas with a close interaction with various kinds of empirical inputs. There are some principles such as quantum classical correspondence which can be applied almost mechanically. Most of interesting results are outcomes from attempts to solve problems which could be characterized as conceptual, philosophical or interpretational. After these years it is clear that starting to apply existing formalisms to TGD as I understood it decades ago would have led to a dead end, since the recent general view about the S-matrix, which I believe is not far from correct, differs so dramatically from the standard approach of string models and QFTs (the perturbation theoretic approach based on path integrals fails; the central role of number theoretic ideas; a generalization of Feynman diagrams different from that of string models; zero energy ontology and a new interpretation of the S-matrix; the dark matter hierarchy and a dynamical and quantized hbar,...).
The attempts to build concrete connections to existing physics and its anomalies have been very inspiring. In its recent form TGD makes quite many novel predictions which seem to be consistent with what we (really) know. Many models are rather simple and typically involve symmetry arguments and simple scaling arguments based on p-adic length scale hypothesis and hierarchy of Planck constants.
Thanks for your response! I will "respond to your response" later, as I need to find more processing time :)
/Fredrik
It was a long day today, and I haven't been able to analyze your response in detail.
But my impression is that your basic motivation and method seem to be more mathematical than mine. There is also the chance that I still don't understand.
You write that
"My scientific "method" has been more or less free drifting in the space of mathematical and physical ideas with a close interaction with various kinds of empirical inputs. There are some principles such as quantum classical correspondence which can be applied almost mechanically. Most of interesting results are outcomes from attempts to solve
problems which could be characterized as conceptual, philosophical or interpretational."
To spit it out, what I was really looking for is something like this:
I am not nearly as experienced as you. I am 34, but nevertheless I've done some thinking and I certainly recognize the word "drifting". The result of my own "iterations" is usually that my focus shifts towards the iteration itself. I have kept observing myself repeating this pattern in seemingly unrelated situations, often enough to make me think that there is indeed a common abstraction here. In short, my thinking about physics leads me to a sequence of refined questions, and ultimately the abstractions seem to contain nothing but the scientific principle itself. By that I mean the generic and universal scientific principle.
Therefore the approach I look for is really what I consider to be the scientific method itself. Whether it's physics or biology is only a matter of application or choice of special case.
I am looking for the ultimate unification as a kind of learning model.
I smelled a lot of this in some of your papers, but when asking for your motivation I am not so sure anymore whether it's a coincidence or not.
If my question is somewhat clear - what do you think? :) In the world of abstractions, are we on the same page or not?
You are clearly deeper into mathematical formalisms than I currently am, but I try not to be blinded by such details.
Brief comments?
(I may get back with more, need to get some sleep now:)
/Fredrik
Hello Matti,
A couple of more key questions if you don't mind.
> space-times as 4-surfaces in certain 8-D space with 3-D surfaces identified as basic dynamical objects
Question 1: Is the above the starting point of your approach, or the result of "applying your approach"?
Question 2: If I understand you right, your approach is arrived at by requiring consistency with the standard model. But does your model predict, or allow possibilities beyond the standard model? Or are they explicitly forbidden?
/Fredrik
Question 1: about method and whether we are in same boat:
I do not believe in any strict method although I see universe as a learning system increasing its understanding about itself quantum jump by quantum jump. I am also "iterating" and regard trial-and-error as the best method.
Comment about learning model: In accordance with my own views about consciousness (and dark matter) I believe in a hierarchy of conscious entities and locate human kind to a rather low level in this hierarchy. For me scientific discovery is more like receiving and becoming conscious of. The best method to achieve this is what Eastern Meditation practices teach: try to get rid of your own ego.
Question 2: Space-time as 4-surface. What led me to the starting point idea was a concrete problem: I took seriously the conservation of energy and noticed that even the notion of energy is, in a strict mathematical sense, ill-defined in general relativity. How to modify GRT was the question, and the answer was immediately obvious.
Question 3: Standard model symmetries. These symmetries fix the choice of the 8-D imbedding space uniquely. The theory however predicts a lot of new physics (p-adic physics, dark matter hierarchy). Quantum measurement theory based on Jones inclusions and the generalization of the 8-D imbedding space suggest that any gauge group can appear as a dynamical gauge symmetry. The TGD Universe would be like a Turing machine, in principle able to mimic dynamically any gauge theory.
The flavor symmetries of old-fashioned hadron physics, not present in the standard model, might be understood in this manner. SU(3) transforming U (D) quarks of the predicted three generations to each other is also a new symmetry not present in the standard model.
With Best Wishes,
Matti
> I am also "iterating" and regard trial-and-error as the best method.
This is seemingly the natural starting point, but each time I tried it (from different starting angles) the same thing happened. I make a bunch of trials, and unless I get lucky my focus immediately changes to the choice of trials: finding a better, more selective method for generating new, more targeted trials. I think humans do this automatically when we revise our knowledge upon feedback, whether good or bad. There is clearly an induction step here, because this can be repeated. After further failures we start to consider our method for picking "trial generating methods". But this can not go on forever, because we run out of memory and processing capacity. So at some point the most efficient choice seems to be to just pick a new option at random. The "balance" is the efficiency of progress vs the overhead in memory and processing requirements for a higher level of sophistication. What I'm looking for is the formalism that implements this in the best possible way.
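The two-level loop described above — score the trial-generating methods by past success, but sometimes pick one at random so the bookkeeping overhead stays bounded — can be sketched as a toy epsilon-greedy search. This is purely illustrative; the names (`meta_search`, the two strategies) are hypothetical, not anything from the discussion:

```python
import random

def meta_search(target, strategies, budget=1000, epsilon=0.2):
    """Trial-and-error over trial-generating strategies.

    With probability epsilon a strategy is picked at random (cheap,
    no bookkeeping); otherwise the best-scoring strategy so far is
    exploited. Strategies that produce improvements get reinforced.
    """
    scores = {name: 1.0 for name in strategies}  # optimistic prior
    best, best_err = None, float("inf")
    for _ in range(budget):
        if random.random() < epsilon:
            name = random.choice(list(strategies))   # random pick
        else:
            name = max(scores, key=scores.get)       # exploit best method
        trial = strategies[name]()                   # generate one trial
        err = abs(trial - target)
        if err < best_err:
            best, best_err = trial, err
            scores[name] += 1.0                      # reinforce this method
    return best, best_err

# Toy usage: search for a number near 42 with two hypothetical strategies.
strategies = {
    "coarse": lambda: random.uniform(0, 100),
    "fine":   lambda: random.uniform(35, 50),
}
random.seed(0)
best, err = meta_search(42.0, strategies)
```

The epsilon parameter is exactly the "balance" mentioned above: at epsilon = 1 the search is pure random trial, at epsilon = 0 it is pure exploitation of the method-scoring machinery.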
I suspect this can be applied to the laws of physics themselves: that they are somehow the most probable "laws". But there is no need to explicitly ban anything in my thinking.
How does this sound to you? I originally suspected that we shared some common inspiration here, only that you used mathematical analogies? But maybe not?
/Fredrik
I believe that there is this impossible-to-rationalize mystic component of just becoming conscious of truth. This is really re-creation of new universe knowing more about itself and methods can only make these quantum leaps more probable. I know this from the periods of desperate wandering in the garden of branching paths, which have ended in the realization that something in the basic assumptions is badly wrong.
In the recent academic environment where people concentrate more on career than science (easy to see from the tone of discussion and the extremely conservative and conformist attitudes in many blogs), making an error is a deadly sin. The outcome is a lot of research which looks like research but actually has little to do with it. People concentrate on keeping face, and this leads to aggressive modes of behavior. Just now crackpot hunting seems to be especially active in some blogs;-). People in this kind of mood do not discover.
"This is really re-creation of new universe knowing more about itself and methods can only make these quantum leaps more probable. "
If you mean that the universe "grows" into something more capable than before, I agree completely. I don't see this as being in contradiction to my position. In fact this is why I do not look for a definite static formulation of the ultimate truth. I am only looking for the optimal route to the next level, or the optimal routing to the truth: the optimal scientific method. By definition this is inductive reasoning, but I hope that it should be possible to approximate the induction step at some reasonably persistent state, though probably not a forever-lasting one. I think the bounding fact that prevents this from being an endless loop is that all of this is contained in the state of a particular subsystem: a particle, or an observer, or a human for that matter.
Subsystems, particles and human consciousness are, I think, governed by the same inductive ("fractal" in a certain sense) rules, but they are seen at different levels of sophistication and self-organisation.
It seems at least somewhat similar to your thinking? But I think your "first principles" and methods seem different than mine.
/Fredrik
> In the recent academic environment where people concentrate more on career than science
Sadly enough, it seems this is a built-in problem. There is indeed a conflict between the two positions. This didn't hit me until the last year of my university studies. This bugged me, and I made the choice to keep my favourite things clean from commercial influences to the extent possible. Maybe you'd think that science is free and above commercial constraints and other annoying stuff, but unfortunately it seems it's not so. This is why I don't want to contribute to it.
/Fredrik
We seem to have a lot in common but also differences, and this is what makes the discussion interesting;-).
I do not pretend to have very high thoughts about academic environments. Freedom of thought is the absolutely essential element for a flourishing academic community, but at least in theoretical physics it has been lost completely during the last three decades.
More than a decade ago I left the university: one might regard this as a childish reaction, but certain events induced too deep a disgust in me. I have spent some short periods in the university after that but have been a free researcher (unemployed) for the last five years.
Matti, I am curious to hear your intuitive comment about the information geometric and ME-inspired approaches.
There's a guy who set out to derive GR in terms of these first principles, but has not yet succeeded.
From what I know, not many people work on this, but his philosophy is probably the closest to mine I've found in published papers. Although a lot of things in the formalism are currently unclear, and I am in favour of deriving the ME principles without using a clear entropy concept, mainly to get rid of the ambiguous notion of entropy. At best the entropy represents a class of functions that could equally well be used instead of the entropy to derive the very same ME dynamics. I think this is better shaved off from the beginning.
Look at the ideas behind it at
http://arxiv.org/PS_cache/gr-qc/pdf/0301/0301061v1.pdf.
I'm just curious about your thinking of the philosophy.
/Fredrik
I like to think the main message in that paper is the line of reasoning, not the math, which is fairly basic at this point.
What I picture this will lead to is a kind of evolution of probability spaces, and thus ideally there is no need to assume any kind of dimensionality, because the model is really a method, and applying the method to real data will yield the model; the correct dimensionality and so on should be inferred from a kind of evolutionary data processing. This is all extremely intuitive to me, thinking of human consciousness. This was the link I smelled in your work. Now I see you have a different approach, but I still don't rule out interesting common denominators.
/Fredrik
I cannot comment on ME method.
I looked at the paper in the link you gave. The article introduces a dust cloud and a probability measure characterizing the probability that particles can be distinguished. Distances between particles are distances between probability distributions, and this leads to a metric expressible in terms of these probability distributions.
To me the main weak point is the assumption of the dust cloud: this kind of cloud is not present for, say, the Schwarzschild solution.
Problems are also encountered when one tries to get a metric with Minkowski signature: if one looks at the fundamental Gaussian containing the metric in the exponent, one finds that Minkowski signature forces the Gaussian distribution to be non-normalizable and to diverge for large values of the time coordinate.
The author restricts consideration to 3-D metrics with Euclidian signature to avoid the problem. The author also introduces a metric in the space of metric tensors, probably in the hope of obtaining something analogous to the geometrodynamics of Wheeler. It is however difficult to see how a 4-metric could emerge in this kind of picture.
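For orientation (an editorial aside, not part of the original comment): the distance between nearby probability distributions used in information-geometric derivations of this kind is standardly the Fisher-Rao metric,

```latex
ds^2 = g_{ij}\, d\theta^i\, d\theta^j,
\qquad
g_{ij}(\theta) = \int dx\; p(x\mid\theta)\,
\frac{\partial \log p(x\mid\theta)}{\partial \theta^i}\,
\frac{\partial \log p(x\mid\theta)}{\partial \theta^j},
```

which is positive definite by construction — consistent with the Euclidean-signature restriction noted above, and a hint of why a Minkowski signature is hard to obtain in this approach.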
Thanks for your comment.
I am not overly fond of the "dust cloud" idea either, but I don't think that's the most important focus.
How to get the Minkowski signature: I have some clear visions on that, but the formalism is to come. The key, I see, is how to treat time, and that's connected to the ME principle.
IMO the idea is that time is simply a choice of parametrisation in a stochastic evolution. The actual parametrisation is carried out by a reference system (clock). But the clock is not given, the clock has to be identified first, meaning that there is a chaotic domain where the notion of time is cloudy, and in the extreme even nonexistent.
The local information defines a distribution of probable futures, determined by the supposed ME principle, which is supposedly an extension of Bayes' rule. I am not quite satisfied with the current formulation of the ME principle, but I have ideas on how to improve it. This very construction will guarantee a statistical nature with a kind of local upper bound on information propagation, like in SR. But this is not definite; it's only valid to a certain confidence in a statistical way. I don't think it's fundamental as such. Thus the trick would be to not at first consider time as a dimension.
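For context (my gloss; the commenter does not spell this out): the ME updating rule referred to chooses, among all distributions satisfying the new constraints, the posterior p that maximizes the relative entropy with respect to the prior q; when the new information is simply that data x' has been observed, the maximization reproduces Bayes' rule,

```latex
S[p,q] = -\int d\theta\, p(\theta)\,\log\frac{p(\theta)}{q(\theta)},
\qquad
p_{\text{new}}(\theta) \;\propto\; q(\theta)\, q(x' \mid \theta),
```

which is the sense in which ME is said to extend Bayesian updating to more general constraints.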
This first seems to be hard to merge with GR, but the unification is that ultimately space isn't a "hard physical" dimension either... it's all fuzzy structures, and structures emerge as we learn, or as a particle "equilibrates" with the environment.
I was wondering whether your notion of quantum jumps could link to this. I might consider information quanta as the starting point, and then try to work all the other stuff out in reverse order, statistically, using Bayesian-like reasoning. I say Bayesian-like, because it's not a straight Bayesian thing; I think of it as a kind of generalisation of it.
/Fredrik
The success of Poincare invariance of special relativity and of general coordinate invariance of General Relativity make it very difficult to consider any other alternatives than the mathematician's geometric time for the time of physics.
The experienced time is of course different (irreversibility, no subjective future). To me the challenge is to understand why these two times are approximately related: why the contents of sensory experience comes from a short interval of geometric time and why this interval shifts to geometric future quantum jump by quantum jump. In this picture memories represent breaking of the naive "reality as time=constant snapshot of space-time" in the scale of lifetime.
In TGD framework 3-D light-like partonic space-time sheet is the fundamental object instead of point-like particle. One can say that 3-space and particle are unified: by "gluing" a small 3-space to larger one you get particle in space.
Fuzzy structures are there, but I believe that quantum theory is source of fuzziness rather than fuzziness being the source of dynamical geometry. In quantum theory neither the position nor the shape of the partonic 2-surface is completely fixed, since quantum fluctuations are present. One can assign to the partonic 3-surface only the average values of temporal and spatial position. This brings in fuzziness via finite measurement resolution, and quantum groups, quantum spinors, and quantum logic are a manner of representing this fuzziness.
Matti, thanks for taking your time to respond, I appreciate it. I think it is interesting to hear your reasoning, because different methods may reveal interesting angles of the subject.
"The success of Poincare invariance of special relativity and of general coordinate invariance of General Relativity make it very difficult to consider any other alternatives than the mathematician's geometric time for the time of physics."
I have no doubt it's difficult, but I have an intuitive vision of how to do it, though I am currently looking for the formalism. Like you, I have been intrigued by the way the human brain works, because after all physical theories are a product of our brains. We often talk about how we lack experimental data in some domains, but there are also domains of high complexity in nature (the human brain, for example) where we certainly do not lack data. Why look where we can't see, when there are a lot of things we do see but don't understand? The potential is enough to keep me exploring it despite how difficult it is.
"I believe that quantum theory is source of fuzziness"
My starting point and guiding principle is that
everything should be inferred and defined in terms of real data. This includes the model structure itself. Every new question asked must have come from a relation to existing information:
For example, I have never observed a "3 dimensional space" in nature, or a probability space for that matter. I am only exposed to a stream of data or information, and my view is formed by organizing and interpreting this data stream in a way that is beneficial for my existence. Thus I see that this "3 dimensional space" is inferred from the data stream as a way to compactify data in order of significance (the least significant data is truncated and possibly "radiated" off in some way), and is thus vulnerable to change if unexpected data comes. But since it seems impossible, due to memory constraints, to have a full-resolution mirror image of the universe inside my head that is equivalent to the history of data I've been exposed to, some data reduction must be done. Thus a "fuzzy 3d space" is classically reduced to an idealized mathematical 3d space, at a minimum loss but at a huge gain in storage efficiency.
I think the stability of systems may ultimately be explained in similar terms, and it's ultimately due to information quanta. But the information quanta are system dependent.
This clearly relates to the measurement process, which I take to be one of the more fundamental things, since our perception of the environment is driven by interactions and measurements. And I think the finiteness of measurement and representation (memory limits) is the fundamental cause of the fuzz.
/Fredrik
"why the contents of sensory experience comes from a short interval of geometric time and why this interval shifts to geometric future quantum jump by quantum jump. In this picture memories represent breaking of the naive "reality as time=constant snapshot of space-time" in the scale of lifetime."
Does the below reflections in any sense satisfy some part of your question?
I agree that memory being a time-snapshot of reality is seemingly too simplistic an idea. I think of memory more as an evolving compactification of history, where the quantum jumps, as you say (which I assume would correspond to new "interaction events" or simply "data"??), in my view would provide the basis for "change", which by means of relations in the data stream can be used to define time as a way to parametrize the changes we experience, relative to subchanges in a clock device.
So I picture a local time differential as a parametrization of the expected direction of change. And the expectation is some kind of subjective Bayesian-like expectation. During chaotic conditions the expectations are rarely met, and thus the notion of time becomes almost useless. And this is continuously updated; excess data must also be discarded unless the memory is to grow. This is the part of the information-geometric and ME dynamics I like.
So I ultimately think the physical laws can be reduced to "predictive updates". The idea is that the logic of a "particle responding to a field" is to be unified with the general case of an information device responding to new information. In this respect we see an immediate analogy between a particle and the human brain, except of course the human brain is a little bit ahead in evolutionary level :) But I think there exists a common evolutionary mechanism that unifies them.
I am also curious whether you make sense of this at all, or whether it comes across as philosophical mumbling? Something which I would completely understand.
/Fredrik
"Does the below reflections in any sense satisfy some part of your question?"
Our languages are so different that I cannot give any precise answers, since I am not at all sure whether I have even understood your questions! Therefore I will sharpen my own views so that you can deduce the answer.
My question is basically about a detailed understanding of the psychological arrow of time. Also about why the contents of sensory input are localized around a very short time interval whereas memories span a long one. Why does the past seem rather inert with respect to intentional action, or is this just a belief: could I reconstruct my past so that it would be happier?
I am not quite sure what you mean with predictive updates. In any case, my basic point is that in new view about time there is no absolute need to update memories again and again: memories are in the geometric past where the event creating them happened and communication with geometric past makes possible memory recall. The concrete implication is that you might be able to remember practically without any brain: only the communication to geometric past would be needed.
I would also emphasize that quantum jump is really something very non-trivial: entire four-dimensional reality is re-created: also the geometric past.
I would not try to reduce everything to data or information. One can define information measures, just as one can tell how intense the color red is, but it might be impossible to define information itself, just as it is impossible to characterize the sensation of color using language: one must simply experience it. The reason is of course that information characterizes the contents of conscious experience rather than the physical state.
In TGD based ontology there are three levels: quantum states, quantum jumps to which conscious existence can be assigned, and space-time level: classical physics as space-time geometry. I think that all these levels of ontology are necessary for a real understanding, the attempts to reduce everything to some kind of monism lead to paradoxes.
Still a short comment. The basic aspect of data and information is its aboutness. Also consciousness is about something. Matter as described by physics is not about: it just is! This suggests that physics is not reducible to information theory.
"Our language are so different that cannot give any precise answers since I am not at all sure whether I have even understood your questions! Therefore I sharpen my own views so that you can deduce the answer."
Yes, I have not formulated my questions in mathematics, because at least from this specific point of view they are currently outside the realm of mathematics. But my ambition is that once the mystery is solved, there will be a mathematical formalism that approximates this to sufficient accuracy. I've got some journey left until I'm there, in particular since I don't have much time to spend on this. I take it very seriously, but it's still a "hobby" as far as how much time I can devote to it.
But your sharpenings did help to answer the questions. I think it's sufficiently clear now. My interest was as much in your methods as in your results, so to speak. Our mutual interest in the sometimes "controversial" topic of consciousness is what interests me.
/Fredrik
"Still a short comment. The basic aspect of data and information is its aboutness. Also consciousness is about something. Matter as described by physics is not about: it just is! This suggests that physics is not reducible to information theory."
I have no choice but to disagree here, but like you said, that's what makes it interesting.
/Fredrik
"Why past seems to be rather inert with respect to intentional action or is this just belief: could I reconstruct my past so that it would be happier?"
I think your questions are more psychological than mine. I assume this isn't meant to be a mathematical question. So if you mean what I think you mean, in my thinking the answer may be that we have a certain confidence in the past, which can be interpreted as a kind of inertia against new input. New input does change our knowledge of the past, but only to the limits allowed by the relative confidence of past versus new input. If our memory is large, the past has a massive "confidence inertia" relative to new information. The remodelling of memory also has a kind of generalized inertia, because remodelling is resisted by the inertia of the past.
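The "confidence inertia" picture above can be hedged as a tiny numerical toy, not anything from either discussant's actual formalism: treat memory as a running estimate whose accumulated weight grows with the data already absorbed, so a single new observation moves a heavy memory only slightly. All names and numbers here are my own illustrative choices.

```python
def update(memory, weight, observation):
    """Blend one observation into memory; a larger weight means more inertia."""
    return (weight * memory + observation) / (weight + 1), weight + 1

# light memory: one new observation moves the estimate a lot
m_light, w_light = update(0.0, 1, 100.0)      # estimate jumps to 50.0

# heavy memory: the same observation barely moves the estimate
m_heavy, w_heavy = update(0.0, 1000, 100.0)   # estimate shifts to ~0.1
```

The design choice is deliberately the simplest one: a weighted running mean, where the weight plays the role of the "relative confidence of past versus new input" described above.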
I'm not sure if our questions are related, but they seem to be. Yet our methods differ. If so, that's the fascinating part.
In my view, this can ultimately be described in a mathematical formalism involving evolutionary subjective logic. Exactly how this works chemically in a human brain is outside my question set. I'm trying to understand what the information-abstracted view could look like.
/Fredrik
"Why past seems to be rather inert with respect to intentional action or is this just belief: could I reconstruct my past so that it would be happier?"
This question is probably not asked often by neuroscientists, who have not much time to ponder the mystery of time. The view about time forced by the mathematical framework of TGD makes this question very natural.
This question does not make sense if one accepts the identification of experienced time and geometric time of physicist.
However, if one requires that the notion of geometric time of general relativity makes sense, that it is equivalent to experienced time, and accepts quantum jump, one ends up with a paradox unless one modifies the ontology by assuming that quantum states are entire histories from the moment of the big bang to the infinitely distant future. Einstein fought with this paradox for his whole life after QM.
This new view about time makes us 4-D beings in the geometric sense, able to affect our geometric past (but not our subjective past!). It brings in a new view about memory, since in principle there is no need to store memories.
The confidence in memories of the geometric past could be guaranteed by building several copies of them, and this would not require filling the entire life cycle with these copies.
What seems obvious is that intentional action on the geometric past is rather limited: otherwise a decision in the geometric past could instantaneously change my life in a revolutionary manner. On the other hand, miracle healings involving visible changes are continually reported. For instance, Candace Pert, almost a Nobel laureate in biochemistry, reported this kind of healing in her book.
"However, if one requires that the notion of geometric time of general relativity makes sense, that it is equivalent with experienced time, and accepts quantum jump, one ends up with paradox unless one modifies the ontology by assuming that quantum states are entire histories from moment of big bang to infinitely distant future"
I think I am slowly converging on an understanding of some of your questions, but I am not clear yet.
But it sounds like the closest thing in your framework to an instant update of the entire geometry is what I would think of as an update of the "prior" in response to new data.
If I took the view that any information by necessity takes on certain geometric structures, it doesn't seem unreasonable that similar things may happen there too. If the geometry is interpreted as existing independently of observation, that seems to be a problem.
Is that close?
Then in my view there seems to be no paradox, though, because I never attach any ontological existence to the history.
/Fredrik
Yes, instant update is quite near. "Instant" with respect to subjective time, not geometric.
To make things a little bit more complicated ;-) one has however to notice that there is a quantum superposition of 4-surfaces (space-times), so that the idea of a single space-time is an idealization.
The quantum jump replaces a deterministic quantum evolution with a new one, and the non-determinism is outside space-time. This is the basic point. In the standard framework one should assume that the Schrödinger equation ceases to hold true for a moment, and this kind of assumption is definitely nonsense.
Bohr took the easy route and argued that one simply must give up the idea of objective existence (existence as a quantum state) and take quantum mechanics as mere calculational rules. Einstein refused to accept non-determinism and was also wrong. The new view resolves the Einstein-Bohr debate and shows that both were right and wrong in their own manners.
"But it sounds like the closest thing that in your framwork is a instant update of the entire geometry, is what I would think of as an update of the "prior" in response to new data."
I need to clarify that I use the word prior here in a more general setting than a simple actual prior probability distribution. There are two (I think equivalent) approaches, where one can either evolve a prior under constraints, or define a far more complex "prior model" in which the constraints are built in. The latter will be more complex, but the dynamical equations will be simpler since there are no constraints to be added separately.
So I picture that the generalized _prior model_ is updated upon new observations. In the continuum limit this is a smooth and effectively differentiable evolution. In the case of either very massive information quanta, or low confidence in the past relative to current new data, it will not be continuous; it will be more like "jumps".
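The smooth-versus-jumpy distinction above can be illustrated with the most standard of Bayesian toys, a Beta-Bernoulli update; this is my own minimal sketch of the qualitative point, not a rendering of either discussant's formalism:

```python
def beta_update(a, b, outcome):
    """Bayesian update of a Beta(a, b) prior after one Bernoulli outcome."""
    return (a + 1, b) if outcome else (a, b + 1)

def mean(a, b):
    """Expected success probability under Beta(a, b)."""
    return a / (a + b)

# weak prior (low confidence in the past): one observation causes a "jump"
jump = mean(*beta_update(1, 1, 1)) - mean(1, 1)            # 2/3 - 1/2 = 1/6

# strong prior (massive accumulated memory): the same observation
# shifts the expectation only slightly, approximating a smooth evolution
shift = mean(*beta_update(500, 500, 1)) - mean(500, 500)   # ~0.0005
```

With large prior counts the per-observation change shrinks toward zero, which is one concrete way to read "smooth in the continuum limit"; with small counts each datum produces a discrete jump.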
/Fredrik
"there is quantum superposition of 4-surfaces (space-times) so that the idea about single space-time is idealization."
This is completely in line with my view too, even though the "line of reasoning" to arrive at it is different, because we have different starting points and methods. But I do not see this as a contradiction. What may be an axiom to you is a theorem for me, and vice versa. I think formalism can sometimes be confusing, so I try not to focus too hard on the formalism itself, at least not at this early stage where my preferred formalism hasn't yet matured.
I think the results of our thinking have a little more in common, as I originally sensed, even though we speak different languages.
/Fredrik
"Yes, instant update is quite near. "Instant" with respect to subjective time, not geometric."
Translating that into my view, that makes sense!
Only that in my view, the "with respect to time" is an identity. The perceived changes are the qualifying support for time itself. One without the other is hard to imagine in my world, so in my thinking they are two different descriptions of the same thing.
A differential of time corresponds to a differential of change (as a probabilistic measure).
If we are closing in on each other, then I think that geometric time is by construction (in my view) not fundamentally observer-invariant, because it's basically a relative expectation.
But if I understand you right, I suspect that your embedding in the higher dimensions has effectively solved this, assuming there is nothing more to reality (to be seen in the future) than the standard model? But those technical things I could not tell, because I certainly don't know your theory well enough. This is just a hunch based on my brief look into some papers and this discussion.
/Fredrik
""Instant" with respect to subjective time, not geometric."
Another detail. In my thinking, subjective time doesn't just apply to psychology; there is a subjective time even for a particle in the way I picture things. This is also the basis for a unification of particles, complex molecules, replicating molecules, simple cellular machinery, and ultimately animals with brains. What I picture is an abstract "scaling" across the levels of complexity.
/Fredrik
Subjective time is by no means restricted to humans or living systems. Even cognition and intentionality appear at the elementary particle level if one accepts their p-adic description, and they would manifest directly in the properties of the mass spectrum of elementary particles. Without exaggerating one can say that this is a quite dramatic (and highly successful!) prediction.
Space-time geometry corresponds to existence in the sense of classical physics. Since physical states already correspond to superpositions of nearly identical space-time surfaces, also geometric time can be measured only with a finite accuracy.
p-Adic length scales define a hierarchy of resolutions, and space-time sheets with size scale characterized by prime p provide physics with given meter sticks and chronons. The basic length scale unit is the CP_2 size: in fact, one can end up with TGD just by requiring that there is a unique purely geometric length scale unit in the theory. String models and GRT lack this kind of unit.
Finite measurement resolution and the associated effective discretization of space-time lead to a new quantum measurement theory, for which von Neumann developed the basic mathematical tools long ago, though from quite different physical motivations.
The essential point is that I believe in classical geometry: there is no discretization at the Planck scale or anything like that in the TGD Universe. Structures analogous to the non-commutative geometries usually assigned to the Planck length emerge as a result of the finite resolution of quantum measurement and cognition.
The classical geometry can however be generalized to the infinite-dimensional geometry of the world of classical worlds, which I believe to be essentially unique from the mere existence requirement (there are very strong reasons to believe so), and to p-adic variants of space-time geometry. The hierarchy of Planck constants leads to a further generalization: copies of the 8-D imbedding space are glued together to form a book-like structure.
Now it's getting clearer.
Trying to understand you, your first two paragraphs now seem to make sense to me.
Of paragraphs 3 and 4 I think I also more or less have a picture of your view, which I can understand from my point of view as well.
Your p-adic ideas intrigue me, even though your way of arriving at them seems completely different from what would be compliant with my method. Perhaps I will come back to you on this another time. My interest is that even though I do not start out from any mathematical guiding principles, natural sequences appear in the evolutionary model. In the continuum approximation these problems completely go away, but my observation is that interesting things do happen when the continuum approximation is invalid. Here I have feared that the formalism may end up being something nastier. I am currently working on many things, but in time I will do a close-up on the properties implied by the evolutionary series. I honestly have no idea whether it is related to your ideas or not, but the possibility was obvious. If so, it would be amazing that you have found it guided by a different method. We'll see. I don't know enough about your stuff, and haven't had enough progress in my own thinking, to discuss this in detail just yet.
"Hierarchy of Planck constants leads to a further generalization: copies of 8-D imbedding spaces are glued together to form a book like structure."
I'm not sure I understand this, but is it some kind of expansion in terms of orders of complexity of classical models? If so, is there anything driving the rate of this expansion? Or is this expansion more like a possible expansion to be done by us (i.e. the inventor, you in this case :) or is it done dynamically in nature?
/Fredrik
The physical content of the hierarchy of Planck constants, realized as a book-like structure of the generalized imbedding space, is a hierarchy of quantum critical systems.
You start with water/ice/vapor and go to the tri-critical point where these phases coincide. You are at criticality. Generalize the critical point to a critical manifold. Inside this critical manifold you find a further critical manifold, inside which you find.... The higher the criticality, the higher the evolutionary level, which is reflected among other things in the time scale of memories and planned action.
If you believe in this hierarchy of criticalities you must conclude that any civilization by definition lives at the edge. I am afraid that our western civilization is rapidly rolling down to lower levels of criticality.