
Friday, December 28, 2018

What could bio-teleportation mean?

Below is a comment from a discussion about teleportation. How could one make teleportation a more realistic notion? What could bio-teleportation mean?

Teleportation uses all the information needed to code the quantum state of the system to be teleported and then transfers this information to a distant target, where it is used to rebuild the system from the basic building bricks existing there. The amount of information is huge.

Quantum entanglement increases this information exponentially from what it would be classically: recently this has been proposed as an argument against the practical realization of quantum computation.

How could one transform teleportation ideas to something more realistic?

  1. One can argue that in living matter quantum entanglement is not at all free. TGD leads to the notion of negentropic entanglement: the entanglement coefficients are in an extension of rationals (algebraic numbers). This makes it possible to speak about p-adic entanglement entropy. The p-adic entropies can be non-positive, telling that the entanglement carries information - about the relationship of the entangled systems. One identification for the conscious experience associated with this kind of entangled relationship is as an experience of love.

    Could the condition that entanglement is negentropic in this sense reduce the number of possible entangled configurations to a more reasonable number?

  2. One can of course challenge the idea that one should transfer all the information needed to construct the state. One could provide just the prerequisites needed for the system to do it itself: that is, to self-organize and evolve to the state. In biology genes seem to be this prerequisite.

Transferring all information is not realistic. What can one transfer? Could one transfer just the property of being alive in the sense that we understand it? Or perhaps transfer some constraints on the outcome of the spontaneously occurring evolution generated by the signal.
  1. Metabolic energy feed is certainly one prerequisite. It is needed to create a state with a larger value of heff, able to build rather stable negentropic entanglement carrying conscious information. This problem disappears if the target receives energy in some form, say from a nearby star. The signal sending the needed information could have a large value of heff (consist of dark photons) and could provide the needed energy in a precisely desired manner, so that the induced evolution might be much faster.

  2. The transfer of biological life seems a hopelessly complicated task. Biomolecules are extremely complex. TGD however leads to the proposal that the so-called dark genetic code, with proton triplets providing representations of DNA and RNA codons, amino-acids, and tRNA, is universal: dark proton sequences are dark nuclei with reduced binding energy. What is highly non-trivial is that the model predicts the vertebrate genetic code correctly. The model also has a strong empirical basis in the findings of Pollack.

    Dark nuclei realize the genetic code and define the dramatically simpler dynamics - a dark variant of nuclear physics - behind the extremely complex shadow dynamics of biochemistry. Dark nuclear physics would serve as the template for bio-chemistry, and all basic processes like replication, transcription, and translation would have very simple templates at the level of dark nuclear physics. One should reconstruct the dark variants of the basic biomolecules using genetic information, and the system at the receiving end would do the rest.

    Something like this could indeed happen in the experiments of Montagnier et al and Gariaev discussed from the TGD point of view here.

  3. How could one communicate the needed information over long distances? Radiation would be needed, and it should be highly negentropic - dark photons - so as to provide metabolic energy at the same time. I have proposed what I call bio-harmony - it turns out to provide a realization of the genetic code (something highly non-trivial, as is also the dark realization of the genetic code) - making it possible to assign to dark codons 3-chords consisting of light (or even sound).

    This would allow coding of DNA to 3-chords of dark photons, allowing the transfer of genetic information over long distances to a receiver, which could be water: could this induce the generation of dark proton sequences by the Pollack effect, creating dark copies of the original dark genes? These in turn could eventually lead to the generation of life as we know it, as biochemistry would develop around them. One might however be forced to wait for some billion years! If the dark proton sequences can be constructed as precise copies of the originals, this process could become much faster.

  4. How to translate the pattern of dark photon 3-chords back to a sequence of dark proton triplets?

    An antenna, which can send, can also receive. The reception would be the time reversal of the process of sending and would generate the desired dark proton sequence - but only in ZEO, which allows time reversals of topological quantum computations inducing processes at the level of ordinary matter as shadow processes (bio-chemistry would be the shadow dynamics induced by the much simpler dynamics of dark proton sequences realizing dark variants of the basic biomolecules and realizing the genetic code).

    Could this work? I have considered here the ZEO based idea that motor actions in a very general sense - including also DNA transcription etc. - are realized using the time reversal of the reverse of the motor action as a template, realized at the magnetic body and inducing the motor action at the level of ordinary matter. These time reversals would be realized as braiding patterns of magnetic flux tubes - topological quantum computer programs. State function reduction changing the arrow of clock time would be essential and would be possible in ZEO but not in standard quantum theory. The essential point is that one does not send a 3-D pattern but an entire 4-D pattern, a process. ZEO makes this possible.

    This picture is possible only in the TGD framework (the notion of magnetic body implied by many-sheeted space-time, dark matter as a hierarchy of heff=n×h0 phases, and ZEO).
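To make the bio-harmony idea above concrete, here is a minimal toy sketch of mapping codons to 3-chords. The frequency assignment below is purely hypothetical - the actual bio-harmony derives the chords from Hamiltonian cycles on the icosahedron and tetrahedron - so only the structure (64 codons, each assigned a chord of three frequencies) reflects the proposal.

```python
# Toy sketch: map DNA codons to 3-chords, as in the bio-harmony proposal.
# The BASE_RATIO table is a hypothetical placeholder, NOT the actual
# bio-harmony assignment.
from itertools import product

BASES = "ATGC"
# Hypothetical: each base gets a frequency ratio relative to a root tone.
BASE_RATIO = {"A": 1.0, "T": 9 / 8, "G": 5 / 4, "C": 3 / 2}

def codon_to_chord(codon, root_hz=256.0):
    """Return a 3-chord (three frequencies in Hz) for a codon."""
    return tuple(round(root_hz * BASE_RATIO[b], 2) for b in codon)

# All 64 codons get a chord; a gene becomes a sequence of chords.
chords = {"".join(c): codon_to_chord("".join(c)) for c in product(BASES, repeat=3)}
print(len(chords))      # 64 codons
print(chords["ATG"])    # chord for the start codon
```

A gene would then be transmitted as the corresponding sequence of 3-chords of dark photons.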

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Friday, December 21, 2018

Motor actions as TQC programs written by Nature itself

I wrote a long email explaining the basic terms used in the model of motor actions as topological quantum computer programs. It might be useful to add it also as a blog post.

ZEO based view about consciousness

What is state function reduction? What is self? What is death? What is re-incarnation?

  1. What is ZEO? Zero energy states are pairs of states with opposite total quantum numbers. The members of a state pair are associated with the opposite boundaries of a causal diamond (CD). The CD is a geometric correlate for self. A CD has a size, and it has an active and a passive boundary: see below.

  2. What is state function reduction in ZEO? There are small and big state function reductions.

  3. Small state function reduction.

    1. In a small reduction - a weak measurement - following a unitary evolution, the passive boundary of the CD is unaffected, as are the members of the state pairs (defining zero energy states) at it. The passive boundary corresponds to the unchanging part of consciousness.

    2. The members of the state pairs at the active boundary of the CD change in a small reduction, which is much like a classical measurement: no dramatic changes. This corresponds to the sensory input and the thoughts etc. induced by it - the changing part of consciousness, Maya someone might say. Also the location of the active boundary changes - its distance from the fixed boundary of the CD increases in a statistical sense - unavoidably. This corresponds to the increase of the clock time assignable to the sequence of small reductions.

      The arrow of time corresponds to the direction in which the active boundary of the CD shifts.

  4. Big state function reduction. The roles of the passive and active boundaries change. The arrow of time changes. The CD begins to increase in the opposite direction of geometric time. The previous self dies and a new one is born and lives in the opposite direction of time.

Motor actions and sensory percepts correspond to mental images - sub-selves - sub-CDs of the CD. A motor action is a sensory percept in the opposite arrow of time. It seems that it cannot be conscious to a self with the opposite arrow of time. The definition of these notions is extremely general. For instance, DNA transcription corresponds to a motor action.

Magnetic body, biological body, tensor network, braiding, TQC program

  1. Magnetic body (MB) and biological body (BB) are the key notions. The MB controls the BB and receives sensory data from it. The braiding generated by the motion of parts of the BB corresponds to sensory input to the MB (not the only sensory input).

    Behaviours are essentially motor actions of the BB controlled by the MB.

  2. A tensor network is formed by the magnetic flux tubes of the MB and the space-time sheets representing the BB. When the magnetic flux tubes of the MB are space-time sheets parallel to the space-time sheets representing matter in M4×CP2 (overlapping M4 projections), they have 3-D contacts and can interact.

    The flux tubes of the MB connect the space-time sheets representing the particles of ordinary bio-matter. This is the tensor network.

  3. Topological quantum computer program as braiding: dance metaphor.

    When the particles of the BB move, the flux tubes of the MB get braided, and when the flux tubes de-braid, matter is forced to move: this is motor action.

    The dance metaphor helps: dancers with feet connected to a wall by threads. The dance as a dynamical pattern forces the threads to get braided: this defines a memory of the dance as a topological quantum computer program realized as space-time topology - the topology of the flux tube structure.

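The dance metaphor can be made concrete with a toy sketch: a braiding is recorded as a braid word (a sequence of signed Artin generators), and its time reversal is the inverse braid word, which un-braids it. This is purely illustrative bookkeeping, not a model of the actual flux-tube dynamics.

```python
# Toy sketch: a braiding as a braid word, i.e. a list of (strand index,
# sign) pairs; sign +1 for a crossing s_i, -1 for its inverse.

def inverse(braid):
    """Time reversal of a braid word: reverse the order, invert each crossing."""
    return [(i, -s) for (i, s) in reversed(braid)]

def reduce_word(braid):
    """Freely cancel adjacent s_i s_i^-1 pairs."""
    out = []
    for g in braid:
        if out and out[-1][0] == g[0] and out[-1][1] == -g[1]:
            out.pop()
        else:
            out.append(g)
    return out

dance = [(1, +1), (2, +1), (1, -1)]       # braiding recorded by the dance
undo = inverse(dance)                      # the time-reversed program
print(undo)
print(reduce_word(dance + undo))           # running both: everything un-braids
```

Running the recorded word followed by its inverse reduces to the trivial braid, which is the bookkeeping analog of the un-braiding that forces the motor action.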
Motor action induces TQC program and TQC program in reversed time direction induces motor action
  1. Assume that the BB lives in the standard time direction but that the MB can change its time direction by a big state function reduction.

  2. Consider a motor action for a BB living in the standard direction of time, but assume that what actually occurs is the time reversal of the motor action: from the end to the beginning. The motion of the parts of the BB gives rise to a braiding of flux tubes defining a TQC program.

    This TQC program is recorded automatically - this is the big thing. There is no need for a nerd to write the code. Nature does it automatically. This is like learning from a model. Here the connections to Sheldrake's ideas are rather obvious.
    Nature learns all the time: habits/ behaviors/ functions/ motor actions - as you wish.

  3. Perform now a big quantum jump for the MB. The arrow of time changes and the time reversal of the braiding takes place. Since the braiding of the MB represents the time reversal of a motor action of the BB, its time reversal forces the motor action in the desired time direction for the BB. Note that the BB has the standard time direction.

  4. When the BB and MB have the same time direction, the MB gets sensory data via braiding. When the MB has the time direction opposite to that of the BB, the MB induces a motor action of the BB.

See the chapter Sensory Perception and Motor Action as Time Reversals of Each Other: a Royal Road to the Understanding of Other Minds? or the article with the same title.


Monday, December 17, 2018

How could the TQC programs representing basic bio-reactions emerge?

The basic bio-chemical processes such as replication, transcription, and translation have remained mysteries in standard biology. My conviction is that a lot of new physics is needed. Bio-chemistry is not enough; even QFT is not enough. Even the standard views about space-time and classical fields, QM, and basic ontology are not enough.

TGD approach indeed brings in several new physics elements.

  1. The notion of magnetic body (MB). The MB, carrying dark matter identified as dark variants of charged particles having the non-standard value heff= n×h0 of Planck constant, is central in TGD inspired quantum biology. The MB is the intentional agent receiving sensory input from the biological body and controlling it. The interactions at the level of ordinary bio-matter would be governed by the MBs of the molecules, and bio-chemistry would be a shadow of this much simpler dynamics.

    The MB of water entrains to the cyclotron frequencies of the MBs of the basic biomolecules by varying its flux tube thickness. This makes possible water memory (see this) and implies homeopathy-like mechanisms serving as basic quantal building bricks in the functioning of the immune system. Dark variants of DNA etc., realized as dark proton sequences, would be one aspect of this representation.

  2. The braiding of the magnetic flux tubes makes possible the realization of topological quantum computer (TQC) programs. Biological functions should correspond to TQC programs, and the challenge is to understand how they emerge naturally. A possible answer to this question will be proposed in the sequel.

  3. There are also other central notions, such as zero energy ontology (ZEO) predicting that the arrow of time is not fixed. The following argument suggests that ZEO is absolutely essential for the understanding of the miracles of bio-chemistry. TQC programs running backwards in time would generate as output various biological functions such as DNA transcription and other basic processes.

What are the big problems?

It is best to start from the problems that one should solve. At bio-molecular level the basic problem is to understand how complex temporal sequences of bio-chemical reactions involving bio-catalysts are possible as highly deterministic sequences.

  1. How are the reacting molecules - including catalysts - able to find each other in the molecular soup?

    TGD answer: The contraction of flux tubes connecting molecules very selectively as heff is reduced brings the molecules together. The connections between molecules are generated by the reconnection of U-shaped flux tubes scanning the environment and producing a pair of flux tubes connecting the two systems, provided they have the same cyclotron frequency. Resonant em coupling by dark photons is in question.

  2. How are the molecules able to attach to just the correct spot and to orient in just the correct manner?

    TGD answer: the contraction mechanism for flux tubes automatically guarantees also this.

  3. How can the rate of a reaction exceed the expected rate by such a huge factor?

    TGD answer: The reactants are connected by flux tubes, so that the probability that they find each other is much higher and depends on the occurrence of a heff reducing transition, which occurs spontaneously. The energy liberated in the contraction of the flux tube makes it possible to overcome the potential wall of the reaction, and an exponential increase in the rate is achieved.

  4. How can bio-catalysis proceed in a time-ordered manner, like a deterministic computer program, so that very many initial states can lead to the same outcome?

    Here the initial states would correspond to the positions, orientations, etc. of the input molecules. A huge number of initial states leads to the same outcome.

    I think that this is the really difficult question. I am highly skeptical about the possibility of understanding this in the QFT framework. In the following I propose a TGD inspired solution of this problem requiring ZEO, which means a revolutionary modification of the basic ontology and of views about time.
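The resonance condition behind the first TGD answer - U-shaped flux tubes reconnect only when cyclotron frequencies match - can be sketched numerically. A minimal sketch: the field value below is the "endogenous" field B_end = 0.2 Gauss often used in TGD estimates, and the matching tolerance is an illustrative assumption.

```python
# Toy sketch of the resonance condition for flux-tube reconnection:
# two tubes can reconnect only if their cyclotron frequencies match,
# f_c = q*B / (2*pi*m).  Tolerance is illustrative.
import math

Q_E = 1.602176634e-19   # proton charge, C
M_P = 1.67262192e-27    # proton mass, kg

def cyclotron_hz(B_tesla, q=Q_E, m=M_P):
    """Classical cyclotron frequency of a charge q, mass m in field B."""
    return q * B_tesla / (2 * math.pi * m)

def can_reconnect(B1, B2, rel_tol=1e-3):
    """Resonant coupling: cyclotron frequencies agree within tolerance."""
    return math.isclose(cyclotron_hz(B1), cyclotron_hz(B2), rel_tol=rel_tol)

# Endogenous field B_end = 0.2 Gauss = 2e-5 T gives ~300 Hz for the proton.
print(round(cyclotron_hz(2e-5), 1))
print(can_reconnect(2e-5, 2e-5), can_reconnect(2e-5, 4e-5))
```

Varying the flux tube thickness varies B and hence f_c, which is the proposed tuning mechanism for establishing resonance.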

How can bio-catalysis proceed in a time-ordered manner like a deterministic computer program so that very many initial states can lead to the same outcome?

Apparently a breaking of the second law is involved. Very many initial states lead to the same outcome rather than vice versa. As if the process were controlled by the time reversal of the original process and entropy would increase, but in the opposite time direction than usual and at the control level! The notion of syntropy introduced by Fantappie comes to mind!

TGD answer would involve at least the following pieces.

  1. Dark DNA and the dark variant associated with the enzyme should be part of the story. Large heff brings in conscious information realized as algebraic complexity and large scale quantum coherence.

  2. ZEO allowing time reversed processes should be essential. ZEO predicts both directions of time and motor actions are postulated to correspond to sensory perception in opposite arrow of time (see this). What this precisely means is not however clear.

  3. The magnetic body (MB) should be the boss controlling the dynamics. This dynamics should be very simple. Biochemistry should be the shadow dynamics, apparently extremely complex.

  4. The topological quantum computation (TQC) aspect is also central, but I have not been able to articulate how the TQC programs emerge: the following ZEO argument suggests an astonishingly simple mechanism for this.

    The complex reaction sequences like transcription should correspond to the running of a topological quantum computer (TQC) program coded by the braiding. I just made a big step of progress in the understanding of sensory memories. Memory recall would be like a quantum computer program running backwards in time and producing a sensory experience as output (see this).

    There is a strong temptation to believe that this is a completely general aspect of all motor actions. By fractality also DNA transcription, translation, etc. are analogs of motor actions. Somehow they should be coded into TQC programs realized as braidings of the flux tubes of the MB.

    The output of the TQC program running backwards with respect to the standard direction of time would be the motor action as we observe it. All basic bio-processes involving several steps would be coded into braidings. One can imagine a hierarchical structure for the TQC programs: programs, subprograms, etc. - braidings of braidings of... This conforms with the hierarchical many-sheeted structure of space-time.

How to realize motor actions as outputs of TQC programs running in non-standard direction of time?
  1. Assume that when some process - such as DNA transcription or its time reversal - occurs, it induces a braiding of flux tubes - a topological quantum computer (TQC) program at the level of the MB.

    As this TQC program runs backwards in time, the time reversal of the original process is generated as output at the level of ordinary bio-matter.

  2. For instance, in the case of transcription, one should assume that the time reversal of transcription, meaning the decay of mRNA back to its building bricks, generates the TQC program as a braiding. The running of this TQC program in the reverse time direction should generate transcription.

  3. What looks strange is that the time reversal of the assembly process - essentially a decay process occurring in very many manners - would code for the highly deterministic TQC program for the assembly process. But this is actually just what one wants!!

    The decay process is highly unpredictable but its time reversal is highly predictable! There are very many TQC programs which give rise to the desired output! The ways from Rome lead to all possible directions, but all ways lead to Rome! In ZEO the butterfly effect transforms to extreme predictivity in the opposite time direction!

  4. How could the MB and the space-time sheets assignable to ordinary matter, having opposite arrows of time - or more generally, two levels of the heff hierarchy with different values of heff and different arrows of time - interact? If the arrows of time are opposite, the intersection of the space-time sheets should have dimension smaller than D=4. Since the classical dynamics determined by the twistor lift breaks T symmetry (the analog of Kähler action in M4 degrees of freedom is the reason), a 3-D intersection does not imply that the space-time surface and its time reversal coincide.

    The interaction should be via common boundary conditions: the space-time sheets with different arrows of time should intersect along 3-D or even lower-dimensional surfaces at the boundaries of the CD and perhaps also at the 3-D light-like orbits of partonic 2-surfaces at which the signature of the induced metric changes. Magnetic flux tubes induce braiding, which suggests that the magnetic flux tubes of the MB as 4-D surfaces should have at most 3-D intersections with the space-time surfaces representing ordinary bio-matter and defining the nodes of a tensor network (see this). These 3-D - possibly light-like - intersections would mediate the interaction. For the usual arrow of time for the MB, the interaction would be sensory input to the MB and would induce braiding. For the opposite arrow of time for the MB, it would be a motor action in which the MB is the controller forcing bio-matter to follow in the un-braiding process.

    In the generic case the intersection of two 4-surfaces in M4× CP2 is discrete. Could the intersection of space-time surfaces with different arrow of time consist of a discrete set of points? Could this be enough for MB to control bio-matter? Note that cognitive representations identified as intersections of real space-time surfaces and their p-adic variants consist of a discrete set of points (see this).

  5. The connection with Sheldrake's vision about morphogenetic fields, in particular the generation of "habits" even at the level of so-called dead matter, is rather obvious. TQC programs would indeed code for habits and would be generated by Nature without the need of a programmer writing the code. I have discussed Sheldrake's vision from a slightly different viewpoint here.
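The "all ways lead to Rome" argument above can be illustrated with a purely combinatorial toy: many different random decay (disassembly) orderings all end in the same fully-decayed state, so each reversed ordering is a deterministic assembly program reaching the same assembled state. No physics is involved; the point is only the many-to-one structure of decay versus the one-outcome structure of its reversal.

```python
# Toy illustration: many random decay paths, one assembled outcome when
# each path is run in reverse.  Purely combinatorial sketch.
import random

ASSEMBLED = ("A", "B", "C", "D")   # the target complex

def random_decay(state):
    """Remove parts in a random order; record the steps taken."""
    parts, steps = list(state), []
    while parts:
        steps.append(parts.pop(random.randrange(len(parts))))
    return steps                    # endpoint is always the empty state

def replay_reversed(steps):
    """Run the decay backwards: re-attach the parts in reverse order."""
    state = []
    for p in reversed(steps):
        state.append(p)
    return tuple(sorted(state))

random.seed(0)
programs = {tuple(random_decay(ASSEMBLED)) for _ in range(50)}
print(len(programs) > 1)                                        # many paths
print(all(replay_reversed(p) == ASSEMBLED for p in programs))   # one outcome
```

The unpredictability of the forward (decay) direction coexists with complete predictability of the reversed direction, which is the structural point of the argument.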

There are interesting connections to ancient Indian philosophy and Christianity. ZEO has an analog in ancient Indian philosophy, as I learned from a discussion with Savyasanchi Ghose while writing this. The notions of doer and un-doer are analogous to self and time-reversed self. The MB would be in the role of a supreme observer, although it would not be an outsider to the Universe. The undoing of the time reversal of a deed by the MB would serve as a template for the dynamics of the deed at the level of ordinary matter.

Building braids and opening them are the basic operations in TQC according to ZEO. A web search for "undoer" reveals that it appears also in Christianity: Mary, the undoer of knots! Knots are here a metaphor for sins, and undoing them means mercy. In Christianity God would be the counterpart of the MB, and we would be 4-D dynamical images of God.

To sum up, this sounds like mysticism and brings strongly to my mind a French movie about time that I saw decades ago. It was very poetic and somehow caught at the emotional level something very deep about the mysteries of time, life, and consciousness in a manner not expressible using the vocabulary of a scientist. It seems that TGD is providing the language that I did not have at that time and that ZEO is starting to demonstrate its magnificent explanatory power.

See the chapter Sensory Perception and Motor Action as Time Reversals of Each Other: a Royal Road to the Understanding of Other Minds? or the article with the same title.


Long term sensory memories in TGD framework

There was a highly interesting popular article (see this) inspired by recent findings about long term memory (see this) that are in conflict with the standard view about memories. Of course, also the memory feats of so-called idiot savants, known for decades, are in sharp conflict with the standard view about memory.

The discussion of these findings in the TGD framework led to a decisive improvement in the understanding of the proposed mechanism of sensory memory recall. Also a connection with the model of topological quantum computation realized at the axon-microtubule level emerged. A sensory memory would be realized as a topological quantum computer program running in the reversed time direction during memory recall and generating the virtual sensory input from the brain to the sensory organs creating the original sensory experience.

The findings

The following gives a brief summary of the results of the experiment discussed here.

  1. A huge amount of storage capacity is required, and it increases as more and more experiences accumulate.

    One can imagine abstraction as a cure: store only the essentials of the input. This is an extremely powerful manner to store the relevant information. A picture of grandmother's house with all its detail is replaced with the words "grandmother's house". What is lost is detail. This storage mechanism is certainly used at the higher levels of the evolutionary hierarchy. Verbal memories are a good example.

    The experiment mentioned above however demonstrates that the memory storage is at least 1000 times more detailed than the standard mechanism could achieve, which suggests that a different, very detailed storage mechanism, usually unconscious to us, is involved.

    Indeed, the memory feats of idiot savants show that sensory percepts can be stored in amazing detail. A possible TGD based explanation is that all of us have sensory memories - essentially re-experiences - but at a lower level of the personal self hierarchy: not as mental images represented as sub-selves but as sub-sub-...-selves not directly conscious to us. Temporal democracy would make it impossible to distinguish between the recent and the past and would make it difficult to survive. This would be the reason why these persons are often called idiot savants.

    Sensory memories must be unconscious at our level of the self hierarchy to allow the experience of living in a definite moment of time, and only cognitive (symbolic, verbal) memories involving a lot of abstraction satisfy this condition. If the percept is cognitive, it is about the geometric past. If sensory, it is about the "Now". The perceptive field effectively reduces from 4-D to 3-D (actually the duration of the sensory chronon is about .1 seconds).

    The situation changes when the temporal lobes are stimulated electrically, as neuroscientists have known for decades but "forgotten". Perhaps animals do not conceptualize and therefore have sensory memories.

  2. Memory storage in terms of proteins modifying synaptic contacts is slower by a factor of 1000 than required to understand the above experiment. Memorizing would require repeated stimulation, but now the pictures were seen only once or twice.

  3. The lifetime of the proteins in synaptic contacts is only a few weeks, so that also long term memories would be unstable. Humans can remember for about 50 years, 1000 times longer than expected.

  4. The technical realization of the 3-D storage is also a problem. One should remember also the place where the memory is stored, not only the memory itself! Here the association mechanism seems the only possibility but would allow only conditionings. In the computer language LISP this idea is very concretely realized. Conditionings are however only pseudo-memories.

Wrong views about time and the notion of memory as the basic problems

To sum up, the standard view about memories suffers from two fatal problems.

  1. The first fatal problem of the standard model of memory is the wrong view about the relationship between experienced and geometric time. The identification of these times forces the notion of memory storage analogous to that in a computer. The information about what happened must be stored again and again. This view has the many problems already discussed.

  2. The second fatal problem is the conceptual flaw forced by behaviorism: memories are identified as conditionings, habits, or behaviors - as you like. Genuine sensory memories are however re-experiences and would correspond to a re-experience with which a synchronously firing neuron group is associated: which neurons fire is determined not by the synaptic contacts but by the sensory input mapped topographically to the sensory area. This is a very delicate and crucial difference.

TGD view about sensory memories

Could one realize memory as re-experience in TGD framework?

  1. In the zero energy ontology (ZEO) of TGD no 3-D memory storage in the "brain now" is required. Memories are ideally where (in the 4-D sense) the event occurred, but memory recall creates further - usually less detailed and more abstracted - copies of the memory. To remember (in the genuine sense of the word) is to re-experience. Memory in this sense would be in the geometric past. Memory recall would be seeing in time: sending a signal to the geometric past, where it is time-reflected back. Each memory recall could generate at least a conceptual copy of the memory, and in this manner the signal sent to the geometric past would have a higher probability of generating the re-experience or at least a secondary version of it. Learning, which is not mere conditioning, could rely on the generation of copies of the memory in the 4-D perceptive field.

  2. Memories as re-experiences would involve synchronously firing neuron groups associated with quantum coherent units defined by the magnetic bodies (MBs) of neurons and representing mental images. To understand this concretely, one needs besides the notion of MB also the hierarchy heff= n×h0, h=6×h0, of Planck constants. The synchronously firing neuron group (involving a quantum coherent part of the MB) in the geometric past is woken up by the time-reversed signal sent to the geometric past and reflected from it, the signal providing (now negative) energy. ZEO makes this possible.

  3. How could the memory recall realize this synchronous firing in the geometric past? This mechanism should be analogous to the reflection of a negative energy signal in the time direction from the brain of the geometric past. ZEO allows the sending of a negative energy signal travelling to the geometric past. It should somehow induce a transition generating the synchronous firing. The signal generating this transition should be very simple, and it must induce the transition at the correct location in the geometric past. Here the period of the carrier wave of the signal could be essential, and a large value of heff could make the signal energetic enough despite a period which could be measured in years, so that the energy for the ordinary value of Planck constant would be extremely small. The signal could also provide metabolic energy for the neurons, which should fire synchronously. Replicas of the memory help to achieve activation at the correct location.

  4. There must be a coding of the sensory input to the physical state of the neuronal pathways, coded by the nerve pulse patterns representing the original sensory input from the sensory organs. If a genuine sensory re-experience is required, a signal generating the original sensory experience - and thus the nerve pulse pattern from the sensory organs creating it - should be re-generated.

    As if one had in the geometric past a magnetic tape somehow representing the original experience. When played, it would generate a signal to the sensory organs, in turn generating the signal to the brain (including nerve pulses) giving rise to the original sensory experience. Note that ZEO indeed allows the sensory experience to be in the geometric past. It is however possible to communicate cognitive information about it to the recent self too.
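The energetics argument above - a large heff making even a very-low-frequency signal energetic - can be put in numbers. A minimal sketch, assuming E = heff×f with heff = n×h and taking bio-photon (visible, roughly 2 eV) energies as the target; the numbers are order-of-magnitude illustrations only.

```python
# Toy estimate: how large must n = heff/h be for a low-frequency carrier
# wave to reach bio-photon (~2 eV) energies, given E = n*h*f?
H_EV_S = 4.135667696e-15     # Planck constant, eV*s

def n_for_energy(f_hz, target_ev=2.0):
    """heff/h ratio n needed so that E = n*h*f reaches target_ev."""
    return target_ev / (H_EV_S * f_hz)

print(f"{n_for_energy(10.0):.2e}")        # 10 Hz (EEG range) carrier
print(f"{n_for_energy(1 / 3.15e7):.2e}")  # period of one year (~3.15e7 s)
```

For a 10 Hz carrier this gives n of the order of 10^13, and a period measured in years pushes n to roughly 10^22; the point is only that the required heff grows inversely with the carrier frequency.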

TGD leads to a proposal for what could happen, based on the idea that topological quantum computation is realized in terms of the braiding of magnetic flux tubes connecting two subsystems (see this and this). This leads to a model of memory representations as a kind of topological quantum computer program giving the original experience as output while running.

Let us assume that the first system is the axonal membrane, along which the nerve pulse patterns (and whatever else is needed) representing the sensory input flow. The second system would naturally be the microtubules inside it.

  1. The flux tubes would connect the lipids of the axonal membrane to the tubulins (or units formed by them). The axonal membrane can be in a liquid-crystal state, meaning that the lipids are like liquid particles able to move. Nerve pulses would induce a 2-D liquid flow inducing the braiding of the flux tubes, whose second ends are fixed to (say) the tubulins of the microtubule.

    There would be both time-like and space-like braiding. A dance metaphor is very helpful here. Consider dancers on a parquet with legs connected by threads (flux tubes) to a wall (microtubule). Time-like braiding would correspond to the dynamical dance pattern of the lipids in the time direction, having a representation as a 2-D projection defined by the paths of the dancers on the parquet. Time-like braiding would be analogous to a running topological quantum computer program.

    Space-like braiding would be the outcome of the dance: a tangle of the flux tubes fixed to the wall, defining a topological quantum computer program serving as a representation of the time-like braiding and therefore also of the nerve pulse pattern (and whatever else the signal involves) and the sensory input. Space-like braiding is analogous to the code representing the topological quantum computer program.

    If this space-like braiding can generate a signal serving as a virtual sensory input to the sensory organs, the sensory memory could be regenerated. The running of the topological quantum computer program would mean the opening/un-knotting of this braiding and would represent the time reversal of the sensory input, not yet the sensory input itself, which could correspond to the nerve pulse pattern from the sensory organs generating the sensory percept. It seems that the opening must generate a signal to the sensory organs as virtual sensory input.

  2. Virtual sensory input from the brain is indeed the basic element of the TGD inspired model of sensory perception as construction of artwork. The basic difference from the standard view is that the sensory qualia are at the level of the sensory organs rather than in the brain. The brain only gives names to the percepts and builds standard sensory mental images by using virtual sensory input from the brain. The process is like pattern recognition by driving the sensory input to a standard input near the real input.

    In the TGD framework, however, nerve pulse patterns would not carry the sensory information to the brain but would generate sensory input to the MB as Josephson radiation from the cell membrane. The transmitters emitted at the synaptic contacts would generate bridges connecting axonal magnetic flux tubes to longer connected flux tubes and in this manner create communication channels, a kind of wave guide. Along these, dark photons (which can transform to bio-photons) could travel with light velocity.

    This communication mechanism is dramatically faster than communication by nerve pulses and allows back-and-forth signalling involving virtual sensory input from the brain to generate the standard percepts assignable to the synchronously firing neuron groups, accompanied by magnetic bodies obtained by connecting neuronal magnetic bodies by flux tubes.

    The standard mental images would contain only the features relevant for survival or otherwise interesting. A still open question is whether the virtual sensory input corresponds to the time reversal of the ordinary sensory input (see this). The following consideration suggests that time reversal is indeed in question.

  3. If the virtual sensory input from the brain occurs in the reversed time direction, one can think of a very simple model for memory as re-experience. A big state function reduction could occur, meaning that the mental images associated with the braiding generated by the nerve pulse pattern and dark photon beam die and re-incarnate in the opposite time direction. A time-reversed mental image is generated. This mental image is not conscious at our level of the hierarchy since it lives in the opposite time direction.

    This mental image is not quite the exact time reversal of the original: the non-determinism of state function reduction is involved. Statistical determinism is however possible if a large enough number of neurons is involved, so the differences need not be too big. Also standardization comes to the rescue: it would take care that the sensory mental image is very nearly the counterpart of the original.

    The time-reversed signal from the brain to the sensory organ should generate a nerve pulse pattern just as in the case of ordinary perception, together with the dark photon signal generating the sensory mental image defining, in good approximation, the original sensory memory.

  4. For the simplest alternative, dark photons alone induce the flow of the lipids. Hitherto it has been assumed that the flow is induced by nerve pulse patterns. The most general option is that both are involved in the generation of the flow. One cannot exclude the possibility that the communication of data about the nerve pulse pattern to the MB generates a control signal which induces the liquid flow. There are many options to consider, but the basic idea is clear and involves ZEO and the MB in a crucial manner.

  5. An important open question is whether the virtual sensory input using dark photons propagates

    1. to the "sensory organs then", so that only cognitive memories would result as copies. In this case a person who has lost eyesight during her lifetime could have visual memories from the time when she could see.

    2. or via the MB to the "sensory organs now", stimulating sensory experience in the "brain now". A person who lost her eyes during her lifetime could not have visual sensory memories in this case.

    For the latter option one can ask whether the sensory experience is
    1. realized by the mere virtual sensory input to the sensory organs. No copies of the sensory representation at the microtubule-axon level would be generated. If the sensory organs are not intact, sensory memories would not be possible.

    2. or whether also a signal from the sensory organs to the brain involving a nerve pulse pattern is needed to generate the experience. Each memory recall would create an almost exact copy of the topological quantum computer program, giving rise to a genuine sensory memory while running.

    Various options might be tested by electric stimulation of the temporal lobes known to generate sensory memories.
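The idea of time-like braiding as a runnable program (point 1 of the braiding discussion above) can be made concrete with a toy sketch. Everything here, the strand positions, the sample trajectories, and the `braid_word` helper, is my own purely illustrative invention, not part of the TGD formalism:

```python
# Toy model: extract a braid word from 1-D "dancer" trajectories.
# Strand positions are sampled at discrete times; whenever two adjacent
# strands exchange order, a braid generator s_i is emitted. The time-like
# braiding is thus recorded as a "program": the braid word.

def braid_word(trajectories):
    """trajectories: list of time samples, each a list of strand positions.
    Returns the braid word as a list of generator indices (1-based);
    the sign of the crossing is ignored in this toy model."""
    word = []
    # order of strand labels at the first time sample
    order = sorted(range(len(trajectories[0])), key=lambda i: trajectories[0][i])
    for sample in trajectories[1:]:
        new_order = sorted(range(len(sample)), key=lambda i: sample[i])
        # detect adjacent transpositions between consecutive samples
        for i in range(len(order) - 1):
            if order[i] == new_order[i + 1] and order[i + 1] == new_order[i]:
                word.append(i + 1)   # generator s_{i+1}
        order = new_order
    return word

# three "dancers": strands 0 and 1 cross first, then strands 0 and 2
steps = [[0.0, 1.0, 2.0],
         [1.1, 0.9, 2.0],   # first two strands swap -> s_1
         [2.1, 0.9, 1.9]]   # last two of the new order swap -> s_2
print(braid_word(steps))    # [1, 2]
```

In this picture "running the program" would amount to reading the braid word back; the toy model only records one elementary crossing per time step.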
See the chapter Sensory Perception and Motor Action as Time Reversals of Each Other: a Royal Road to the Understanding of Other Minds? or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, December 13, 2018

BCS super conductivity at almost room temperature

Towards the end of the year 2018 I learned about the discovery of BCS type (ordinary) superconductivity at a temperature warmer than that at the North Pole (see this). The compound in question was lanthanum hydride LaH10. Mihail Eremets and his colleagues found that it became superconducting at a temperature of -23 C under a high pressure of 170 GPa, about 1.6 million times the atmospheric pressure (see this).

The popular article proposed an intuitive explanation of BCS superconductivity, which was new to me and deserves to be summarized here. Cooper pairs would surf on sound waves. Their position would correspond to a constant phase of the wave, and their velocity of motion would be the phase velocity of the sound wave. The intensity of the sound wave would be either maximum or minimum, corresponding to a vanishing force on the Cooper pair. One would have an equilibrium position changing adiabatically, which would conform with the absence of dissipation.

This picture would conform with the general TGD based vision inspired by Sheldrake's findings and claims related to morphic resonance (see this), and by the conjectured general properties of preferred extremals of the variational principle implied by the twistor lift of TGD (see this). The experimental discovery is of course in flagrant conflict with the predictions of BCS theory. As the popular article tells, before the work of Eremets et al the maximum critical temperature was thought to be something like 40 K, corresponding to -233 C.

The TGD based view is that Cooper pairs have members (electrons) at parallel flux tubes with opposite directions of magnetic flux and spin and have non-standard value of Planck constant heff= n× h0= n× h/6 (see this and this), which is higher than the ordinary value, so that Cooper pairs can be stable at higher temperatures. The flux tubes would have contacts with the atoms of the lattice so that they would experience the sound oscillations and electrons could surf at the flux tubes.

The mechanism binding electrons into Cooper pairs should be a variant of that in the BCS model. The exchange of phonons generates an attractive interaction between electrons leading to the formation of the Cooper pair. The intuitive picture is that the electrons of the Cooper pair can be thought of as lying on a mattress: one electron creates a dip towards which the other electron tends to move. The interaction of the flux tubes with the lattice oscillations, inducing magnetic oscillations, should generate this kind of interaction between electrons at flux tubes and induce the formation of a Cooper pair.

The isotope effect is the crucial test: the gap energy and therefore the critical temperature are proportional to the oscillation frequency ωD of the lattice (Debye frequency), which is proportional to 1/M1/2 of the mass M of the molecule in question and decreases with the mass of the molecule. One has lanthanum hydride, and can use an isotope of hydrogen to reduce the Debye frequency. The gap energy was found to change in the expected manner.

Can the TGD inspired model explain the isotope effect and the anomalously high value of the Debye energy? The naive order of magnitude estimate for the gap energy is of the form Egap= x× ℏeffωD, with x a numerical factor. The larger the value of heff= n× h0= n× h/6, the larger the gap energy. Unless the high pressure increases ωD dramatically, the critical temperature 250 K would require n/6 ∼ Tcr/Tmax(BCS) ∼ 250/40 ∼ 6. For this value the cyclotron energy Ec= hefffc is much below thermal energy for magnetic fields even in the Tesla range, so that the binding energy must be due to the interaction with phonons.
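A back-of-envelope check of these two numbers, the isotope shift and the required value of n, can be written out explicitly. The inputs below are the rough figures quoted above, not fitted data:

```python
# Order-of-magnitude checks of the estimates quoted in the text.

# Isotope effect: omega_D scales as 1/sqrt(M), so replacing hydrogen
# (A = 1) by deuterium (A = 2) in the hydride reduces the Debye
# frequency, and hence the gap, by a factor sqrt(1/2).
omega_ratio = (1.0 / 2.0) ** 0.5
print(f"omega_D(D)/omega_D(H) ~ {omega_ratio:.2f}")   # ~0.71

# Required Planck constant: E_gap ~ x * hbar_eff * omega_D gives
# n/6 ~ T_cr / T_max(BCS) if omega_D itself stays fixed.
T_cr, T_max_BCS = 250.0, 40.0
print(f"n/6 ~ {T_cr / T_max_BCS:.2g}")                # ~6
```

The sqrt(1/2) factor is for a pure hydrogen-to-deuterium substitution; in the real LaH10 lattice the effective oscillating mass is more complicated, so this is only the leading behavior.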

The high pressure is needed to keep the lattice rigid enough at high temperatures so that it indeed oscillates rather than "flows". I do not see how this could prevent the flux tube mechanism from working. Neither do I know whether high pressure could somehow increase the value of the Debye frequency to give the large value of the critical temperature. Unfortunately, the high pressure (170 GPa) makes this kind of high Tc superconductor unpractical.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, December 12, 2018

Intelligent blackholes


Thanks to Nikolina Benedikovic for kindly providing an interesting link and for arousing my curiosity. From the link one learns that Leonard Susskind has admitted that superstrings do not provide a theory of everything. This is actually not a mindblowing surprise, since by now very few would claim that news about the death of superstring theory is premature. Congratulations in any case to Susskind: for a celebrated superstring guru it requires courage to change one's mind publicly. I will not discuss in the following the tragic fate of superstrings. Life must continue despite the death of superstring theory, and there are much more interesting ideas to consider.

Susskind is promoting an idea about growing blackholes increasing their volume as they swallow matter around them (see this). The idea is that the volume of the blackhole measures the complexity of the blackhole, and from this it is not a long way to the idea that information, maybe conscious information (I must admit that I cannot imagine any other kind of information), is in question.

Some quantum information theorists find this idea attractive. Quantum information theoretic ideas find a natural place also in TGD. Magnetic flux tubes would naturally serve as space-time correlates for entanglement (the p-adic variants of entanglement entropy can be negative and would serve as measures of conscious information), and this leads to the idea about tensor networks formed by the flux tubes (see this). The so-called strong form of holography states that 2-D objects, string world sheets and partonic 2-surfaces as sub-manifolds of space-time surfaces, carry the information about the space-time surface and quantum states. M8-M4×CP2 correspondence would realize quantum information theoretic ideas at an even deeper level and would mean that a discrete finite set of data would code for the given space-time surface as a preferred extremal.

In TGD Universe long cosmic strings thickened to flux tubes would be key players in the formation of galaxies and would contain galaxies as tangles along them. These tangles would contain sub-tangles having interpretation as stars and even planets could be such tangles.

I just wrote an article describing a model of quasars (see this) based on this idea. In this model quasars need not be blackholes in the GRT sense but have structure: a magnetic moment (a blackhole has no hair), an empty disk around the center created by the magnetic propeller effect caused by the radial Lorentz force, a luminous ring and accretion disk, and a so-called Elvis structure involving an outward flow of matter. One could call them quasi-blackholes; I will later explain why.

  1. Matter would not fall into the blackhole; rather, magnetic and volume energy in the interior would transform to ordinary matter, meaning a thickening of the flux tubes forming a configuration analogous to the flow lines of a dipole magnetic field by looping. Think of the formation of a dipole field by going around a flux line replaced by a flux tube, returning, and continuing along another flux line/tube.

  2. The dipole part of the structure would be a cylindrical volume in which the flux tubes form a structure analogous to a coil in which one makes n2 ≈ 107 (GN = R2/n2h0) windings in the CP2 direction and then continues at a different position in M4 and repeats the same. This is like having a collection of coils in M4, but each winding in the CP2 direction. This collection of coils would fill the dipole cylinder having, in the case of the quasar studied, a radius smaller than the Schwarzschild radius rS ≈ 5×109 km but of the same order of magnitude. The wire from a given coil would continue as a field line of the magnetic dipole field, return back at the opposite end of the dipole cylinder, and return along it to the opposite pole. The total number of loops in the collection of n1 dipole coils with n2 windings in the CP2 direction is n1×n2.

  3. Both the Kähler magnetic energy and the volume energy (actually magnetic energy associated with the twistor sphere) are positive, and the expansion of the flux tubes stops when the minimum string tension is achieved. This corresponds roughly to a biological length scale of about 1 mm for the value of the cosmological constant in the length scale of the observed universe (see this).

    Remark: Note that the twistor lift of TGD allows one to consider an entire hierarchy of cosmological constants behaving like 1/L(k)2, where L(k) is the p-adic length scale corresponding to p ≈ 2k.

    How to obtain the observed small value of the cosmological constant? This is not possible for the simplest imaginable induced twistor structure, for which the cosmological constant would be huge. A simple solution of the problem would be the p-adic length scale evolution of Λ as Λ ∝ 1/p, p ≈ 2k. At a certain radius of the flux tube the total energy becomes minimum. A phase transition reducing the value of Λ allows further expansion and transformation of the energy of the flux tube to particles. There is also a simple proposal for the imbedding of the twistor sphere of the space-time surface to the product of the twistor spheres of M4 and CP2 allowing the desired dependence of Λ on the p-adic length scale.

    This in turn leads to a precise definition of what coupling constant evolution means: this has been one of the longest-standing problems of quantum TGD. The evolution would follow from the invariance of the action under small enough changes of Λ induced by simple modifications of the imbedding of the twistor sphere of the space-time surface into the product of the twistor spheres of M4 and CP2. There is a family of imbeddings labelled by rotations of these twistor spheres with respect to each other, and one can consider a one-dimensional sub-family of these imbeddings.

    This would solve the basic problem of cosmology, which is understanding how the cosmological constant manages to be so small at early times. Time evolution would be replaced with length scale evolution: the cosmological constant would indeed be huge in very short scales, but its recent value would be extremely small.

  4. Cosmological expansion would naturally relate to the thickening of the flux tubes, and one can also consider the possibility that the long cosmic string gets more and more looped (the dipole field gets more and more loops), so that the quasi-blackhole would increase in size by swallowing more and more of the long cosmic string spaghetti into the dipole region and transforming it into loops of the dipole magnetic field.

  5. The quasar (and also galactic blackhole candidates and active galactic nuclei) would be extremely intelligent fellows with a number theoretical intelligence quotient (the number of sheets of the space-time surface as a covering) of about

    heff/h = n/6 = n1×n2/6 > GMm(CP2)/(v0ℏ) = (rS/R(CP2))× (1/2β0),

    where one has β0= v0/c, with v0, roughly of order 10-3c, a parameter with dimensions of velocity; rS is the Schwarzschild radius of the quasi-blackhole, of order 109 km, and R(CP2) is the CP2 radius, of order 10-32 meters. If this blackhole like structure is indeed a cosmic string eater, its complexity and conscious intelligence increase, and it would represent the brain of the galaxy as a living organism. This picture clearly resembles the vision of Susskind about blackholes.

  6. This cosmic spaghetti eater also has a time reversed version, for which the magnetic propeller effect acts in the opposite spatial direction: mass consisting of ordinary particles flows to the interior. Could this object be the TGD counterpart of the blackhole? Or could one see both of these objects as blackholes dual to each other (maybe as analogs of white holes and blackholes)? The quasar like blackhole would eat cosmic string, and its time reversal would swallow from its environment the particle like matter that its time reversed predecessor generated. Could one speak of breathing? The inward breath and outward breath would be time reversals of each other. This brings to mind the TGD inspired living cosmology based on zero energy ontology (ZEO) (see this) as an analog of Penrose's cyclic cosmology, which dies and re-incarnates with the opposite arrow of time again and again.
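For concreteness, the lower bound of point 5 above can be evaluated numerically with the order-of-magnitude values quoted there (these inputs are the text's rough figures, nothing more):

```python
# Evaluate n/6 > (r_S / R(CP2)) * 1/(2*beta_0) with the rough values
# quoted in the text (assumed orders of magnitude, not fitted data).
r_S   = 5e12     # Schwarzschild radius ~ 5 x 10^9 km, in meters
R_CP2 = 1e-32    # CP2 radius, in meters
beta0 = 1e-3     # beta_0 = v_0/c

bound = (r_S / R_CP2) / (2.0 * beta0)
print(f"n/6 > {bound:.1e}")   # ~2.5e+47
```

The bound is astronomically large, which is exactly the point of calling these objects "number theoretically intelligent": n counts the sheets of the space-time surface as a covering.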

A natural question is whether also the ordinary blackholes are quasi-blackholes of either kind. In the fractal Universe of TGD this would look extremely natural.
  1. How to understand the fusion of blackholes (or neutron stars; I will however talk only about blackholes in the sequel) into a bigger blackhole observed by LIGO if quasi-blackholes are in question? Suppose that the blackholes indeed represent dipole-like tangles in a cosmic string. If they are associated with the same cosmic string, their collisions would be much more probable than one might expect. One can imagine two extreme cases for the motion of the blackholes.

    1. Tangles plus matter move along the string as along a highway. The collision would be essentially a head-on collision.

    2. Tangles plus matter around them move like almost free particles and the string follows: this would however still help the blackholes to find each other. The observed collisions can be modelled as the formation of a gravitational bound state in which the blackholes first rotate around each other.

    The latter option seems to be more natural.
  2. Do the observed black-hole like entities correspond to quasar like objects or to their time reversals (more like ordinary blackholes)? The unexpectedly large masses would suggest that they have not yet lost their mass by thickening as stars usually do, so that they are analogs of quasars. These objects would be cosmic string eaters, and this would also favour the collisions of blackhole like entities associated with the same cosmic string.

  3. This picture would provide a possible explanation for the claimed gravitational echoes and the evidence for magnetic fields in the case of blackholes formed in the fusions observed by LIGO (see this). The echoes would result from the repeated reflection of radiation between the inner blackhole like region and the ring bounding the accretion disk.

    Note that I have earlier proposed a model of ordinary blackholes in which there would be a Schwarzschild radius but at some radius below it the space-time surface would become Euclidian. In the recent case the Euclidian regions would however be associated only with wormhole contacts with Euclidian signature of the metric, bounded by light-like orbits of partonic 2-surfaces, and might have sizes of the order of the Compton length scaled up by the value of heff/h for dark variants of particles, and therefore rather small as compared to the blackhole radius.

See the article TGD view about quasars or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, December 10, 2018

Mice in magnetic fields

I received from Donald Adams a highly interesting link relating to the effects of magnetic fields on mice. The claims of the article seem sensational. I do not know whether to trust the claimed findings, since from the viewpoint of TGD inspired quantum biology they seem too good to be true. I attach a piece of the article here, reprinted from: Exotic Research Report (V3N1, Jan/Feb/Mar 1999), Magnetism ... A Natural Form of Energy by Walter C. Rawls Jr.

1. How do animals dramatically change in relation to magnetic field exposure?

Twelve mice were placed in a cage to be used as controls (untreated). Another twelve mice were placed in a separate cage with exposure to the South pole field of a 2,000 gauss magnet, and the last twelve were put in a cage exposed to the North pole energies of a like magnet. An equal number of males and females were put in each cage. Exposure time was two months.

The untreated control mice behaved and functioned as normal mice. Without exception, the South pole mice slowly became very messy in their housekeeping, their appetites increased, they engaged in sex more, and their offspring were larger than those of the controls. Also, as time passed, they became mentally slow, losing sensitivity to sound and light changes in the laboratory. It was difficult to teach their young the customary tests; they were lazy, listless, careless and very dirty in appearance.

The North pole mice became very neat and tidy, cleaning themselves frequently. They also became extremely sensitive to any noise or light variations in the laboratory. Their offspring were smaller than those of the controls. They were mentally superior to the controls and outperformed the South pole young by several hundred percent in all phases of natural behavior.

The South pole mice were larger, grew faster, matured sooner, and mated continually. They also died earlier than their control counterparts. The North pole mice matured slower and lived 45 to 50 percent longer than the controls. They were also mentally superior to the controls and several hundred percent smarter than the South pole mice. They were much less frequent with sexual behavior than the South pole treated mice and less than the controls.

Rats were the next test subjects, and the results were the same as the findings with the mice. Rabbits and later cats were tested; again the results were the same as with the mice. These are the results of actual controlled experiments and are not theories or ideas. Anyone wishing to do so can reproduce these experiments.

Can we now program man to be more physical or mental, depending on the need of society? Based on our findings from these early experiments, we believe man can be conditioned in a like manner and his life expectancy extended far beyond what is now considered to be his three score and ten years.

Remembering that these tests were conducted on the entire body of the animal, could we by placing the North pole of a magnet directly at the center of the brain of larger animals and voluntary human subjects raise the intelligence and sensitivity?

2. Comments about claimed findings from TGD point of view

If true, these findings provide direct evidence for the notion of the magnetic body (MB), central in TGD inspired theory of consciousness and quantum biology. The MB would use the biological body as a motor instrument and sensory receptor and would serve as an intentional agent. One could understand the findings as being due to the loss of behavioral control by the magnetic body as the south directed magnetic field is applied. The north directed magnetic field seems to have the opposite effect.

The fields used are rather strong: the strength is 0.2 Tesla (2,000 gauss), by a factor of 10,000 stronger than the endogenous magnetic field Bend = 0.2 Gauss playing a key role in TGD based quantum biology. This field has been assumed to define a lower bound for the endogenous magnetic field strengths, but it seems that also weaker field strengths are possible, down to the values where the cyclotron energies of dark photons, computed with heff ∝ m (and thus independent of the mass of the charged particle, hence universal), become smaller than the thermal energy at physiological temperature.
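As a numerical aside (my own illustration, with Ca2+ chosen only as a familiar example ion), the cyclotron frequency in Bend and the universality claim can be checked directly:

```python
from math import pi

# Cyclotron frequency f_c = qB/(2*pi*m) for Ca2+ in B_end = 0.2 gauss.
q = 2 * 1.602e-19       # Ca2+ charge, C
m = 40 * 1.66e-27       # Ca2+ mass, kg (A = 40)
B = 0.2e-4              # 0.2 gauss in tesla

f_c = q * B / (2 * pi * m)
print(f"f_c(Ca2+) = {f_c:.1f} Hz")   # ~15.4 Hz, an EEG frequency

# Universality: with heff = k*m (k a constant), the cyclotron energy
# E = heff * f_c = k*m * qB/(2*pi*m) = k*qB/(2*pi) is independent of m,
# so the m in f_c cancels and E depends only on q and B.
```

The mass dependence cancelling from E = heff·fc is the content of the universality statement above; the numerical value of f_c itself depends only on the assumed charge, mass, and field strength.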

2.1 Paramagnetic effect as strengthening of the coupling between MB and biological body?

The explanation for the effects could be that a paramagnetic effect occurs and, depending on the direction of the applied field, increases or reduces the coupling of the brain to Schumann resonances. The MBs of water and thus of living organisms and their parts are indeed proposed to entrain with the Schumann resonances of the Earth's magnetic field by resonance coupling. These frequencies would be crucial for the control of the biological body by the MB.

Why does the direction of the external magnetic field affect the situation? The brain contains magnetic molecules organised in the direction of the Earth's magnetic field. The external field would tend to increase or reduce the strength of these dipoles, and the effect would be enhanced/reduced coupling to Schumann resonances for a north/south directed external field. This would strengthen/weaken the communications to and control by the MB, the higher level intentional agent, and lead to the observed effects.

The frequencies of Schumann resonances indeed correspond to EEG resonance frequencies (see this). Callahan found that in regions where Schumann resonances are weak there is a lot of social disorder, so that Schumann resonances seem to be essential also for collective consciousness and well-being. Callahan also found that plants grow faster if the soil is paramagnetic.

2.2 The function of magnetite and other magnetic molecules in brain?

The function of magnetic molecules in the brain has remained somewhat of a mystery. Certainly they help in navigation, but their function might be much deeper. Could magnetic molecules build a stronger connection to the magnetosphere and the magnetic body? Could one say that they serve in the role of an antenna? This would be directly visible in EEG, for instance. Resonances would be stronger, and communications to and control by the MB would be more effective.

Wikipedia article about magnetite tells also about the role of magnetite in brain. The text below contains also my comments.

  1. Living organisms can produce magnetite. In humans, magnetite can be found in various parts of the brain including the frontal, parietal, occipital, and temporal lobes, brainstem, cerebellum and basal ganglia. Iron can be found in three forms in the brain – magnetite, hemoglobin (blood) and ferritin (protein), and areas of the brain related to motor function generally contain more iron. Magnetite can be found in the hippocampus associated with information processing, specifically learning and memory.

    Comment: If magnetite would serve only for navigation, it would be probably appear only in some special part(s) of brain.

  2. Hemoglobin and ferritin contain iron. Hemoglobin is however only weakly paramagnetic, whereas ferritin (a protein!) nanoparticles can be superparamagnetic.

    Comment: What sets the bells ringing is that Callahan found the addition of paramagnetic substances to soil to be beneficial for plant growth.

  3. Magnetite can have toxic effects due to its charge or magnetic nature and its involvement in oxidative stress or the production of free radicals. Research suggests that the beta-amyloid plaques and tau proteins associated with neurodegenerative disease (Alzheimer's) frequently occur after oxidative stress and the build-up of iron.

    Comment: But could the higher level of paramagnetic iron be the reason for Alzheimer's, or is it due to an attempt of the brain to improve coupling to Schumann resonances and overcome the effects of Alzheimer's? The same question has been asked also concerning the plaques in axons emerging in Alzheimer's (for a TGD based model of Alzheimer's see this).

2.3 Magnetic healing at the level of organisms and social structures?

Could one consider the artificial strengthening of the coupling of the brain to Schumann resonances as magnetic healing of not only biological but also social disorders? Could one just add magnetic molecules to the brain? One cannot exclude this kind of possibility, and it might be possible to test it with animals.

Many of us are well aware of the worsening situation in our society governed by market economy. Many researchers speak even of a possible collapse of our civilization. Also the strength of the magnetic field of the Earth is weakening at a rate of 5 per cent per century (see this). Is this mere accident? It would be interesting to see whether something similar happened to the local magnetic field during the collapses of earlier civilizations. If there is a connection, could one imagine improving the situation by magnetic healing?

3. Side remark about spectrum of the strengths of Bend

The endogenous magnetic field Bend is in a key role in TGD inspired quantum biology, but also other field values than Bend are possible. The range of audible frequencies, spanning 20 Hz to about 20,000 Hz for humans, corresponds to 3 orders of magnitude (10 octaves). Bats hear frequencies up to 200,000 Hz. This would give a range of 4 orders of magnitude if they were able to hear frequencies down to 20 Hz; they are however able to hear only frequencies above 1 kHz. If also the frequencies between 10-20 Hz present in EEG in the wake-up state are counted, one obtains 4 orders of magnitude.

The spectrum of the magnetic field strengths has been assumed to correlate directly with the frequencies of heard sounds and to make it possible to map the audible frequencies to frequencies of dark photon cyclotron radiation with the same frequency, communicating the sound signal to the MB. Note that dark particles correspond to ordinary particles but with a non-standard value of Planck constant heff= n× h0, h= 6× h0. In the case of EEG the values of n are of the order of 1013.
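The quoted order of magnitude for n can be turned into a dark photon energy; the sketch below uses an EEG frequency of 10 Hz and n = 10^13 as assumed inputs, chosen by me for illustration:

```python
# Dark photon energy E = heff * f = n * h0 * f at an EEG frequency.
h  = 6.626e-34          # Planck constant, J s
h0 = h / 6.0            # h = 6 * h0 as stated in the text
n  = 1e13               # quoted order of magnitude for EEG
f  = 10.0               # EEG frequency, Hz

E_eV = n * h0 * f / 1.602e-19   # convert J to eV
print(f"E = {E_eV:.3f} eV")     # ~0.069 eV for these inputs
```

With these particular inputs the energy lands in the infrared; a few more powers of ten in n would bring it up to the visible bio-photon range, so the result is sensitive to the exact value of n assumed.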

For the most recent view about notion of magnetic body and the role of water entraining to Schumann frequency 60 Hz in the healing of cancer see this.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, December 09, 2018

Quantum theory cannot consistently describe the use of itself: really?

The article "Quantum theory cannot consistently describe the use of itself" by Frauchiger and Renner has created a lot of debate. The title sounds very Gödelian and gives a taste of the abstractness of the problems considered. There is also a popular article in
Quanta Magazine.

The authors claim that the thought experiment shows that the following 3 apparently innocent and obvious assumptions about quantum measurement theory are mutually inconsistent.

  1. Quantum theory is universal, which means that an agent - I translate this as a conscious observer - can analyze a second system, even a complex one including other agents, using quantum mechanics.

  2. The predictions made by different agents using quantum theory are not contradictory. This looks trivial but perhaps the point is in the meaning of "prediction".

  3. The outcome of a quantum measurement is unique. This looks totally trivial but is not so in the Many Worlds interpretation.

The article has created a lot of criticism and objections. It has been seen as an excellent manner to compare various interpretations of quantum theory, and the authors indeed do so. An article by Mateus Araujo and a blog article by Lubos Motl claim that the article contains a computational error.

It is difficult to believe that the authors could have made a computational error, since the system is basically very simple: one essentially compares the outcomes of subsequent measurements for a pair of qubits with quantization axes rotated by 45 degrees with respect to those in the first measurements. I would seek the error at the level of interpretation rather than computation. The authors assume that conscious entities are describable as extremely simple quantum systems (qubits) but simultaneously believe that they are classical entities with memories surviving the further quantum measurements performed on them.

Scott Aaronson has a lot of fun with the assumption that conscious entities like humans are modelled as qubits.

The thought experiment

Alice and Bob measure their laboratories containing their friends FA and FB; the possible outcomes of the measurements are specified. The reader can of course argue that measuring entire laboratories is not possible. Certainly it is not with current technology, but quantum theory does not deny the possibility. There are 4 measurements.

  1. FA measures a qubit - this is popularized as a coin toss - and codes the result into two spin states communicated to FB. These are non-orthogonal - this is essential. One can assume that tail corresponds to spin UP in the z-direction, that is the state |UP>, and head corresponds to - say - spin UP in the direction making an angle of 45 degrees with the z-direction. The spin up state in this direction is a superposition proportional to |UP> - |DOWN>.

  2. FB measures the spin in the z-direction for the state communicated to him by FA. The outcome is |UP> for tail but either |UP> or |DOWN> for head.

    If FB observes |DOWN>, he can conclude that FA got head. This is the crucial bit of information and is assumed to be stored in the memory of agent FB (whatever memory means!). Even more, FB is assumed to keep this memory in the sequel, under the measurements applied to the laboratory by Bob. It is also assumed that all observers have memories surviving further measurements. This is an implicit assumption and is about consciousness rather than quantum mechanics.

    Agents are assumed to know their QM and to be able to apply it to deduce information about the measurement outcomes of the others.

  3. Alice in turn measures the state of her lab containing FA and the coin. Now the state basis for the coin (essentially a qubit) is spanned by |OKA> = |tail> - |head> and |FAILA> = |tail> + |head>.

  4. Bob does the same for his lab containing FB and the spin. These state bases are rotated by 45 degrees with respect to those used by FA and FB. The state basis is spanned by |OKB> = |UP> - |DOWN> and |FAILB> = |UP> + |DOWN>.

  5. The 4 possible final states are of the form |OKA> ⊗ |OKB>, |OKA> ⊗ |FAILB>, |FAILA> ⊗ |OKB>, and |FAILA> ⊗ |FAILB>.

The authors look at what it means if Alice and Bob obtain the state |OKA> ⊗ |OKB>. This state is obtained in 1/8 of all cases. It is trivial to see that this state contains the state |tail> ⊗ |DOWN>. This state is however not a possible outcome in the measurements performed by FA and FB, since |tail> is by construction always accompanied by |UP>.
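As a sanity check, the 1/8 probability can be verified with a few lines of linear algebra. The relative signs of the 45-degree states are chosen here so that the quoted 1/8 comes out (head is encoded as the state proportional to |UP> + |DOWN>); other sign conventions give a different overlap:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
tail, head = up, down  # coin states, reusing the same 2-dim space

# FA's equal-weight coin toss; tail is sent as |UP>,
# head as the 45-degree state (|UP> + |DOWN>)/sqrt(2).
plus = (up + down) / np.sqrt(2)
psi = (np.kron(tail, up) + np.kron(head, plus)) / np.sqrt(2)

# Alice's and Bob's measurement bases, rotated by 45 degrees:
ok_a = (tail - head) / np.sqrt(2)
ok_b = (up - down) / np.sqrt(2)

amp = np.kron(ok_a, ok_b) @ psi
print(round(abs(amp) ** 2, 6))  # -> 0.125, i.e. 1/8
```

Note also that np.kron(ok_a, ok_b) has a non-vanishing component along |tail> ⊗ |DOWN>, which is the combination the argument in the text turns on.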

The authors claim that this is a paradox. If FA and FB were just qubits, the authors would not speak of a paradox. Measurements of this kind have been done for ordinary spins and the predictions of QM have been verified.

There is no paradox if one just regards the systems as spins having no memory or, if memories are possible, as having them affected by further measurements. Therefore the paradox must relate to the assumption that the outcomes of the earlier measurements by agents FA and FB are stored in memory and that these memories are preserved under the measurements by Alice and Bob. Since the agents in question have minds consisting of a single qubit, this assumption leads to a contradiction. There is no conflict between the 3 listed basic assumptions about QM. The paradox results from wrong assumptions about consciousness.

Suppose qubit minds are possible

What if one just for fun assumes that single-bit minds are possible? The essential point is that coin⊗FA, spin⊗FB, FA⊗Alice and FB⊗Bob represent different conscious entities than FA, FB, Alice and Bob before the state function reduction taking place in the measurement in question. When X and Y are entangled, it is X⊗Y which is conscious, whereas X and Y are unconscious! This means loss of the memory. The moment of state function reduction producing an unentangled product state is a moment of consciousness for both X and Y (even for a spin!). Hence the information about the earlier measurement outcome is destroyed.

For genuine conscious entities the situation is probably different. They can store information about previous measurements so that it is preserved in further quantum measurements involving entanglement, and no paradoxes appear. For instance, in the many-sheeted space-time of TGD, involving a fractal hierarchy of p-adic length scales and scales proportional to the effective Planck constant, memory storage is possible, and biology provides the actual realization. There is also a hierarchy of ...selves-subselves-sub-sub-selves... where the sub-selves of a self correspond to its mental images, and selves at lower and also higher levels of the hierarchy can store information preserved in the state function reductions.

The view provided by zero energy ontology (ZEO)

In TGD framework ZEO provides some general insights about the notion of memory.

  1. Zero energy states provide a generalization of quantum states as pairs of positive and negative energy states with vanishing total quantum numbers assignable to the opposite boundaries of a causal diamond (CD). A zero energy state can be regarded as a superposition of deterministic classical time evolutions connecting initial and final states at the boundaries of the CD. The motivation for ZEO is that it resolves the basic paradox of quantum measurement theory, since a state function reduction replacing the superposition of deterministic time evolutions with a new superposition does not break the determinism of any single time evolution.

  2. During the lifecycle of a self, identified as a sequence of what I call small state function reductions (analogs of weak measurements, see Wikipedia), the members of the state pairs at the passive boundary of the CD remain unaffected. One can talk about a generalized Zeno effect. The state at the passive boundary represents conserved memories. Note that one has a hierarchy of CDs inside CDs, so that the situation is rather complex.

  3. The states at the opposite - active - boundary of the CD change. In each unitary evolution the active boundary of the CD is de-localized in the moduli space of CDs, and the small reduction involves a localization in the moduli space of CDs, in particular time localization. The size of the CD, measured as the temporal distance between its tips, increases at least in the statistical sense, and this corresponds to clock time correlating strongly with the subjective time defined by the sequence of reductions.

  4. In a big state function reduction the roles of the passive and active states are exchanged and the CD begins to increase in the opposite direction: the conscious entity dies and reincarnates as a time-reversed one. These reductions correspond to the state function reductions occurring in, say, particle physics experiments and involve a drastic change of the quantum state. The memories represented by the state at the passive boundary are destroyed, and the outcome of the big state function reduction at the active boundary represents the new memories.

  5. For minds with the size of a qubit, the memories would indeed be destroyed and new ones formed. For bigger minds it is quite possible that some sub-...-sub-selves in the hierarchy can preserve the memory and that it can be recalled in the subsequent re-incarnations in the original time direction. Sleep could correspond at our level of consciousness to a temporary death and re-incarnation in the opposite time direction. We indeed remember something about yesterday, even about the previous year! Our mental images also die and re-incarnate, and the interpretation would be as metempsychosis at the level of mental images.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.






Wednesday, December 05, 2018

Still about twistor lift of TGD

The twistor lift of TGD led to dramatic progress in the understanding of TGD but also created problems with the previous interpretation. The new element was that Kähler action as the analog of Maxwell action was replaced with the dimensionally reduced 6-D Kähler action decomposing into 4-D Kähler action and a volume term having an interpretation in terms of a cosmological constant.

Is the cosmological constant really understood?

The interpretation of the coefficient of the volume term as cosmological constant has been a longstanding interpretational issue and has caused many moments of despair over the years. The intuitive picture has been that the cosmological constant obeys a p-adic length scale evolution meaning that Λ would behave like 1/Lp^2 ∝ 1/p ≈ 1/2^k.

This would solve the problems due to the huge value of Λ predicted by the GRT approach: the smoothed-out behavior of Λ would be Λ ∝ 1/a^2, with a the light-cone proper time defining cosmic time, and the recent value of Λ - or rather, its value in the length scale corresponding to the size scale of the observed Universe - would be extremely small. In the very early Universe - in very short length scales - Λ would be large.

It has however turned out that I have not really understood how this evolution could emerge! The twistor lift seems to allow only a very slow (logarithmic) p-adic length scale evolution of Λ. Is there any cure for this problem?

  1. The magnetic energy decreases with the area of the string like 1/p ≈ 1/2^k, where p defines the transversal length scale of the flux tube. The volume energy (magnetic energy associated with the twistor sphere) is positive and increases like the area S. The sum of these has a minimum for a certain radius of the flux tube determined by the value of Λ. Flux tubes with quantized flux would have a thickness determined by the length scale defined by the density of dark energy: L ∼ ρvac^(-1/4), ρdark = Λ/8πG. ρvac ∼ 10^-47 GeV^4 (see this) would give L ∼ 1 mm, which could be interpreted as a biological length scale (maybe even a neuronal length scale).

  2. But can Λ be very small? In the simplest picture based on the dimensionally reduced 6-D Kähler action this term is not small in comparison with the Kähler action! If the twistor spheres of M4 and CP2 give the same contribution to the induced Kähler form at the twistor sphere of X4, this term has the maximal possible value!

    The original discussions treated the volume term and the Kähler term in the dimensionally reduced action as independent terms, and Λ was chosen freely. This is however not the case, since the coefficients of both terms are proportional to 1/(αK^2 S), where S is the area of the twistor sphere, which is the same for the twistor spaces of M4 and CP2 if the CP2 size defines the only fundamental length scale. I did not even recognize this mistake.
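The flux-tube thickness estimate in point 1 can be checked by converting ρvac^(-1/4) from natural units to metres. This is a back-of-the-envelope sketch: the straight conversion gives ∼10^-4 m, so reaching the quoted 1 mm requires order-one numerical factors in the definition of L:

```python
# hbar*c ~ 0.1973 GeV*fm converts inverse GeV to metres.
hbar_c_m_gev = 1.973e-16      # metres per GeV^-1
rho_vac = 1e-47               # dark-energy density in GeV^4, as quoted in the text
L = rho_vac ** (-0.25) * hbar_c_m_gev
print(f"{L:.1e} m")           # -> 1.1e-04 m, a sub-millimetre (biological) scale
```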

The proposed fast p-adic evolution of the cosmological constant would have extremely beautiful consequences. Could the original intuitive picture be wrong, or could the desired p-adic length scale evolution for Λ be possible after all? Could the dynamics somehow give rise to it? To see what can happen, one must look in more detail at the induction of the twistor structure.
  1. The induction of the twistor structure by dimensional reduction involves the identification of the twistor spheres S2 of the geometric twistor spaces T(M4) = M4×S2(M4) and T(CP2) having S2(CP2) as fiber. This means that one can take the coordinates of, say, S2(M4) as coordinates, and the imbedding map maps S2(M4) to S2(CP2). The twistor spheres S2(M4) and S2(CP2) have in the minimal scenario the same radius R(CP2) (the radius of the geodesic sphere of CP2). The identification map is unique apart from an SO(3) rotation R of either twistor sphere. Could one consider the possibility that R is not trivial and that the induced Kähler forms could almost cancel each other?

  2. The induced Kähler form is the sum of the Kähler forms induced from S2(M4) and S2(CP2), and since the Kähler forms are the same apart from a rotation in the common S2 coordinates, one has Jind = J + R(J), where R denotes the rotation. The sum is Jind = 2J if the relative rotation is trivial and Jind = 0 if R corresponds to the rotation Θ → Θ+π changing the sign of J = sin(Θ)dΘ∧dΦ.

  3. Could the p-adic length scale evolution for Λ correspond to a sequence of rotations - in the simplest case Θ → Θ + Δk(Θ) - taking J gradually from 2J at very short length scales to J = 0, corresponding to ΔΘ = π, at very long length scales? A suitable spectrum for Δk(Θ) could reproduce the proposal Λ ∝ 2^-k.

  4. One can of course ask whether the resulting induced twistor structure is acceptable. Certainly it is not equivalent to the standard twistor structure. In particular, the condition J^2 = -g is lost. For the induced Kähler form at X4 this condition is also lost. For the spinor structure the induction guarantees the existence and uniqueness of the spinor structure, and the same applies to the induced twistor structure, which together with the unique properties of the twistor spaces of M4 and CP2 is the key motivation for the notion.

  5. Could field equations associated with the dimensional reduction allow p-adic length scale evolution in this sense?

    1. The sum J + R(J) defining the induced Kähler form in S2(X4) is covariantly constant, since both terms are covariantly constant by the rotational covariance of J.

    2. The imbeddings of S2(X4) as the twistor sphere of the space-time surface to both spheres are holomorphic, since rotations are represented as holomorphic transformations. This in turn implies that the second fundamental form in complex coordinates is a tensor having only components of type (1,1) and (-1,-1), whereas the metric and energy momentum tensor have only components of type (1,-1) and (-1,1). Therefore all contractions appearing in the field equations vanish identically: S2(X4) is a minimal surface, and the Kähler current in S2(X4) vanishes since it involves components of the trace of the second fundamental form. The field equations are indeed satisfied.

    3. The solution of the field equations becomes a family of space-time surfaces parametrized by the values of the cosmological constant Λ as a function of the S2 coordinates satisfying Λ/8πG = ρvac = J∧(*J)(S2). In long length scales the variation range of Λ would become arbitrarily small.

  6. If the minimal surface equations solve the field equations for the volume term and the Kähler action separately everywhere apart from a discrete set of singular points, the cosmological constant affects the space-time dynamics only at these points. The physical interpretation of these points is as seats of fundamental fermions at partonic 2-surfaces at the ends of light-like 3-surfaces defining their orbits (the induced metric changes signature at these 3-surfaces). Fermion orbits would be boundaries of fermionic string world sheets.

    One would have a family of solutions of the field equations, but a particular value of Λ would make itself visible only at the level of elementary fermions by affecting the values of coupling constants. p-Adic coupling constant evolution would be induced by the p-adic evolution of the relative rotation R of the two twistor spheres. Therefore the twistor lift would not be a mere manner of reproducing the cosmological term but would determine the dynamics at the level of coupling constant evolution.

  7. What is nice is that also the Λ=0 option is possible. This would correspond to the variant of TGD involving only Kähler action, regarded as TGD before the emergence of the twistor lift. Therefore the nice results about cosmology obtained at this limit would not be lost.
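The evolution sketched in point 3 above can be illustrated numerically. Assuming, purely for illustration, that the induced flux scales like (1 + cos ΔΘ)/2 (equal to 1 for the trivial rotation and 0 for ΔΘ = π), the angles reproducing Λ ∝ 2^-k follow by inverting this relation:

```python
import math

def delta_theta(k):
    """Rotation angle for which the assumed flux factor (1+cos)/2 equals 2^-k."""
    return math.acos(2.0 ** (1 - k) - 1.0)

for k in (0, 1, 5, 10):
    dth = delta_theta(k)
    flux = (1 + math.cos(dth)) / 2      # reproduces 2**-k by construction
    print(k, round(dth, 3), flux)       # dth runs from 0 towards pi as k grows
```

The specific interpolation (1 + cos ΔΘ)/2 is an assumption made here only to have something concrete; the text requires merely that some spectrum Δk(Θ) interpolates between 2J and 0.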

See the article TGD view about quasars or the chapter The Recent View about Twistorialization in TGD Framework of "Towards M-matrix".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

What does one really mean with gravitational Planck constant?

There are important questions related to the notion of gravitational Planck constant, to the identification of the gravitational constant, and to the general structure of the magnetic body. What is the gravitational Planck constant, really? What does the formula for the gravitational constant in terms of the CP2 length defining the Planck length in TGD really mean, and is it realistic? What does space-time surface as a covering space really mean?

What does one mean with space-time as covering space?

The central idea is that space-time corresponds to an n-fold covering for heff = n×h0. It is not however quite clear what this statement means.

  1. How does the many-sheeted space-time correspond to the space-time of QFT and GRT? The QFT-GRT limit of TGD is defined by identifying the gauge potentials as sums of the induced gauge potentials over the space-time sheets. The magnetic field is a sum over its values for the different space-time sheets. For a single sheet the field would be extremely small in the present case, as will be found.

  2. A central notion is the hierarchy of effective Planck constants heff/h0 = n giving as a special case ℏgr = GMm/v0 assigned to the flux tubes mediating gravitational interactions. The most general view is that space-time itself can be regarded as an n-sheeted covering space. A more restricted view is that the space-time surface can be regarded as an n-sheeted covering of M4. But why not an n-sheeted covering of CP2? And why not n = n1×n2 such that one has an n1-sheeted covering of CP2 and an n2-sheeted covering of M4, as I indeed proposed more than a decade ago but gave up later in favor of coverings of M4 only? There is indeed nothing preventing the more general coverings.

  3. The n = n1×n2 covering can be illustrated for an electrical engineer by considering a coil in a very thin 3-dimensional slab having thickness L. The small vertical direction would serve as the analog of CP2. The remaining 2 large dimensions would serve as the analog of M4. One could try to construct a coil with n loops in the vertical direction, but for very large n one would encounter problems since the loops would overlap because the thickness of the wire would be larger than the available room L/n. There would be some maximum value of n, call it nmax.

    One could overcome this limit by using a decomposition n = n1×n2, which exists if n is not prime. In this case one could decompose the coil into n1 parallel coils in the plane, each having n2 ≤ nmax loops in the vertical direction, provided n2 is small enough to avoid problems due to the finite thickness of the wire. For n prime this does not work, but one can also select n2 to be maximal and allow the last coil to have fewer than n2 loops.

    An interesting possibility is that the preferred extremal property implies the decomposition ngr = n1×n2 with a nearly maximal value of n2, which can vary within some limits. Of course, one of the n2-coverings of M4 could be incomplete in the case that ngr is prime or not divisible by the nearly maximal value of n2. We do not live in an ideal Universe, and one can even imagine that the copies of the M4 covering are not exact copies but that n2 can vary.

  4. In the case of M4×CP2 a space-time sheet would replace a single loop of the coil, and the procedure would be very similar. A highly interesting question is whether the preferred extremal property favours the option in which one has, as the analog of n1 coils, n1 full copies of n2-fold coverings of M4 at different positions in M4, thus defining an n1-fold covering of CP2 in the M4 direction. These positions of the copies need not be close to each other, but one could still have quantum coherence, and this would be essential in TGD inspired quantum biology.

    Number theoretic vision suggests that the sheets could be related by discrete isometries of CP2, possibly representing the action of the Galois group of the extension of rationals defining the adele, and since this group is a finite subgroup of the isometry group of CP2, the number of sheets would be finite.

    The finite subgroups of SU(3) are analogous to the finite subgroups of SU(2): if their action is genuinely 3-D they correspond to the symmetries of the Platonic solids (tetrahedron, cube, octahedron, icosahedron, dodecahedron). Otherwise one obtains symmetries of polygons, and the order of the group can be arbitrarily large. A similar phenomenon is expected now. In fact, the values of n2 could be quantized in terms of the dimensions of discrete coset spaces associated with discrete subgroups of SU(3). This would give rise to a large variation of n2 and could perhaps explain the large variation of G identified as G = R^2(CP2)/n2ℏ0 suggested by the fountain effect of superfluidity.

  5. There are indeed two kinds of values of n: the small values n = hem/h0 = nem assigned to flux tubes mediating the em interaction and appearing already in condensed matter physics, and the large values n = hgr/h0 = ngr associated with gravitational flux tubes. The small values of n would be naturally associated with coverings of CP2. The large values ngr = n1×n2 would correspond to n1-fold coverings of CP2 consisting of complete n2-fold coverings of M4. Note that in this picture one can formally define the constants ℏ(M4) = n1ℏ0 and ℏ(CP2) = n2ℏ0, as proposed more than a decade ago.
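The coil analogy of point 3 can be made concrete with a small helper that splits n loops into n1 parallel coils of at most nmax vertical loops, falling back to a partial last coil when n has no suitable factorization (the function and its return convention are illustrative, not from the text):

```python
def coil_layout(n, n_max):
    """Split n loops into parallel coils of at most n_max vertical loops.

    Returns (number of coils, loops per full coil, loops in a partial last coil).
    Prefers an exact factorization n = n1 * n2 with n2 <= n_max; if none exists
    (e.g. n prime), uses full coils of n_max loops plus one incomplete coil.
    """
    for n2 in range(n_max, 1, -1):
        if n % n2 == 0:
            return n // n2, n2, 0          # exact decomposition, no remainder
    n1, rem = divmod(n, n_max)
    return n1 + 1, n_max, rem              # last coil holds only `rem` loops

print(coil_layout(60, 8))   # -> (10, 6, 0): ten full coils of six loops
print(coil_layout(13, 5))   # -> (3, 5, 3): two full coils of five, one with three
```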

Planck length as CP2 radius and identification of gravitational constant G

There is also a puzzle related to the identification of the gravitational Planck constant. In the TGD framework the only theoretically reasonable identification of the Planck length is as the CP2 length R(CP2), which is roughly 10^3.5 times longer than the Planck length. Otherwise one must introduce the usual Planck length as a separate fundamental length. The proposal was that the gravitational constant would be defined as G = R^2(CP2)/ℏgr, ℏgr ≈ 10^7 ℏ. G indeed varies within unexpectedly wide limits, and the fountain effect of superfluidity suggests that the variation can be surprisingly large.
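The two numbers quoted here are mutually consistent: with c = 1 one has G = lP^2/ℏ, so if R(CP2) ≈ 10^3.5 lP, recovering the same G from G = R^2(CP2)/ℏgr requires ℏgr/ℏ ≈ (10^3.5)^2:

```python
ratio = 10 ** 3.5        # R(CP2) / l_Planck, as quoted in the text
n_gr = ratio ** 2        # hbar_gr / hbar needed so that R^2/hbar_gr = l_P^2/hbar
print(f"{n_gr:.0e}")     # -> 1e+07
```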

There are however problems.

  1. Arbitrarily small values of G = R^2(CP2)/ℏgr are possible for the values of ℏgr appearing in the applications: values of order ngr ∼ 10^13 are encountered in the biological applications. The value range of G is however experimentally rather limited. Something clearly goes wrong with the proposed formula.

  2. The Schwarzschild radius rS = 2GM = 2R^2(CP2)M/ℏgr would decrease with ℏgr. One would expect just the opposite, since fundamental quantal length scales should scale like ℏgr.

  3. What about the Nottale formula ℏgr = GMm/v0? Should one require self-consistency and substitute G = R^2(CP2)/ℏgr into it to obtain ℏgr = (R^2(CP2)Mm/v0)^1/2? This formula leads to physically unacceptable predictions, and I have used in all applications G = GN corresponding to ngr ∼ 10^7 as the ratio of the squares of the CP2 length and the ordinary Planck length.

Could one interpret the near constancy of G by assuming that it corresponds to ℏ(CP2) = n2ℏ0, with n2 ≈ 10^7 nearly maximal except possibly in some special situations? For ngr = n1×n2 the covering corresponding to ℏgr would be an n1-fold covering of CP2 formed from n1 complete n2-fold coverings of M4. The covering would decompose into n1 disjoint M4 coverings, and this would also guarantee that the definition of rS remains the standard one, since only the number of M4 coverings increases.

If n2 corresponds to the order of a finite subgroup G of SU(3), or to the number of elements in a coset space G/H (with H a normal subgroup of G), one would have a very limited number of values of n2, and it might be possible to understand the fountain effect of superfluidity in terms of the symmetries of CP2, which would take a role similar to the symmetries associated with the Platonic solids. In fact, the smaller value of G in the fountain effect would suggest that n2 is in this case larger than for GN, so that n2 for GN would not be maximal.

See the article TGD View about Quasars or the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G = R^2/ℏeff.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.