Friday, June 29, 2012

Metabolic energy, ATP, and negentropic entanglement

The ideas about the detailed relationship between metabolism and negentropic entanglement are still in a state of turmoil. Let us sum up those concepts and ideas which look reasonably reliable.

  1. Negentropic entanglement is the first basic notion. There is a strong tendency to regard a magnetic flux tube connecting two objects and carrying a negentropically entangled quantum state as the fundamental structure giving rise to directed attention. Negentropic entanglement would be a basic element of conscious cognition, and one can assign to it various attributes such as the experience of understanding. The mildest assumption is that the negentropic entanglement is associated with the flux tube. A stronger assumption is that it is between states assignable to the ends of the flux tube, identifiable as the observer and the target of attention.

    An analogy with Orch OR is suggestive. The period of negentropic entanglement - a period of directed attention - would correspond to the Orch OR period, and its end to state function reduction.

  2. The identification of the increments of zero point kinetic energies as universal metabolic energy quanta is one of the oldest hypotheses of TGD inspired theory of consciousness. Zero point kinetic energy is associated with the zero point motion of a particle at a space-time sheet. The finite size of the space-time sheet gives rise to this energy, for which a non-relativistic parametrization is E_0 = c × 3hbar^2π^2/(2mL(k)^2). Here L(k) = 2^((k-151)/2) L(151), with L(151) ≈ 10 nm, is the p-adic length scale of the space-time sheet, and c is a numerical factor not far from unity. A particle in a 3-D box gives c = 1.

    As a particle is transferred to a larger space-time sheet, its zero point kinetic energy is reduced, and the difference is liberated as usable metabolic energy. For a proton the size scale of the smaller space-time sheet could be the atomic length scale k=137. For an electron Cooper pair the scale could be k=149 (a prime), corresponding to a lipid layer of the cell membrane. An entire hierarchy of metabolic energy quanta is predicted, and the energy scale depends only on the particle mass, the p-adic length scale, and geometric factors characterizing the shape of the space-time sheet.

    One can ask whether the high energy phosphate bond of the ATP molecule contains this kind of smaller space-time sheet, and whether in the transition ATP → ADP an electron or proton drops from this kind of space-time sheet. The following considerations show that this hypothesis is not necessary, and that one can also modify the identification of the fundamental metabolic energy quantum as zero point kinetic energy without losing anything. Therefore the details of the scenario are far from being fully nailed down.

  3. Magnetic flux tubes are carriers of charged particles, and the hypothesis is that cyclotron Bose-Einstein condensates of fermionic Cooper pairs and bosonic ions are relevant for consciousness. In particular, cyclotron transitions in which the bosons of these condensates are excited would be important for the generation of conscious experiences. The hierarchy of Planck constants and the fact that cyclotron energy is proportional to hbar make it possible to have arbitrarily high cyclotron energies in a given magnetic field. This is essential in the model for the effects of ELF em fields on living matter (see this).

  4. Becker's findings about the relevance of DC currents for the healing of wounds led to an idea about how electromagnetic radiation interacts with the charged particles at magnetic flux tubes (see this). What would happen is that the charged particles experience the electric and magnetic fields of the radiation field described in terms of massless extremals. The electric field would generate acceleration in the direction of the flux tube and could excite Becker currents, which would give rise to biological effects - healing of wounds in the simplest case. The proposal has been that this process gives rise to what could be seen as a loading of metabolic batteries.

    The combination of this view with the notion of cyclotron BE condensate leads to a slightly more complex picture. The radiation field can excite single boson states in both transversal and longitudinal degrees of freedom. Transversal excitations correspond to cyclotron states with energies E_c,n = (n+1/2)E_c, E_c = hbar qB/m, so that the excitation energies are of the form nE_c. Longitudinal degrees of freedom correspond to a particle in a 1-D box - possibly in the presence of a longitudinal electric field: a simple model for these states was derived in the model for Becker's DC currents.

    In the absence of a longitudinal electric field the energy spectrum is E_n = n^2 E_0, E_0 = hbar^2π^2/(2mL^2), with L the length of the flux tube. Longitudinal excitations correspond to energies (n_f^2 - n_i^2)E_0 and would classically correspond to the acceleration in the electric field component parallel to the flux tube giving rise to Becker currents. For both kinds of excitation, negentropically entangled states result very naturally as superpositions of single particle excitations and possibly also multi-particle excitations. Both incoming photons and the liberation of a metabolic energy quantum as a photon can induce the excitation.

    One could reinterpret the idea about universal metabolic energy quanta by identifying the quantum as the increment of longitudinal energy at the flux tube. For the excitation n=1 → 2 the energy would be 3hbar^2π^2/(2mL^2), which is the same as the zero point kinetic energy of a particle in a 3-D box of side L. The quantitative prediction is therefore the same as that of the original model. One can of course also consider the original option that the transfer of particles from the flux tube to a larger space-time sheet indeed liberates metabolic energy.
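
As a sanity check on the orders of magnitude, the formulas above can be evaluated numerically. The sketch below uses standard physical constants and takes L(151) = 10 nm and the p-adic scaling formula quoted above as given; it computes the 3-D zero point kinetic energy for a proton at k = 137 and an electron at k = 151, and verifies that the longitudinal increment n=1 → 2 equals the 3-D zero point energy for the same length scale.

```python
import math

HBAR = 1.054571817e-34   # J*s
EV = 1.602176634e-19     # J per eV
M_E = 9.1093837e-31      # electron mass, kg
M_P = 1.67262192e-27     # proton mass, kg

def L_padic(k, L151=10e-9):
    # p-adic length scale L(k) = 2^((k-151)/2) * L(151), with L(151) = 10 nm
    return 2 ** ((k - 151) / 2) * L151

def E0_longitudinal(m, L):
    # ground state energy of a particle in a 1-D box of length L
    return HBAR**2 * math.pi**2 / (2 * m * L**2)

def zero_point_3d(m, L):
    # zero point kinetic energy of a particle in a 3-D box of side L
    return 3 * E0_longitudinal(m, L)

# proton at the atomic length scale k = 137
E_p = zero_point_3d(M_P, L_padic(137)) / EV
# electron at the cell membrane scale k = 151
E_e = zero_point_3d(M_E, L_padic(151)) / EV

# longitudinal increment n = 1 -> 2 equals (2^2 - 1^2) E_0 = 3 E_0,
# i.e. the same number as the 3-D zero point energy for the same L
inc = (4 - 1) * E0_longitudinal(M_E, L_padic(151)) / EV

print(f"proton, L(137):   {E_p:.3f} eV")
print(f"electron, L(151): {E_e:.4f} eV")
print(f"1 -> 2 increment: {inc:.4f} eV")
```

The absolute numbers depend on the exact value chosen for L(151) and on the geometric factor c; the point of the sketch is only the scaling with mass and length scale, not precise agreement with the nominal 0.5 eV quantum.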

Let us now try to weave these ideas into an internally consistent picture. It is perhaps best to proceed by posing questions.
  1. Could one assign negentropic entanglement with the high energy phosphate bond? If so, the period of negentropic entanglement (having Orch OR as a counterpart) would correspond to the presence of ATP and the end of this period to ATP → ADP. I have considered this possibility earlier. The problem is that it is difficult to understand how negentropic entanglement could be assigned simultaneously both to ATP and to the magnetic flux tube, whose length and thickness are proportional to hbar and therefore vary. One should treat ATP and flux tube as a single basic structure, and this does not sound convincing since the scales of the flux tubes are expected to be much longer than the size scale of the ATP molecule. Therefore the safest assumption is that ATP is just what it is believed to be: a provider of metabolic energy only. One can also leave open the question whether the high energy phosphate bond can be interpreted in terms of zero point kinetic energy or not.

  2. Could the non-local excitations of cyclotron Bose-Einstein condensates by large hbar photons give rise to the negentropically entangled states? Excitation of a cyclotron BE condensate requires energy, and ATP could provide this metabolic energy. The cyclotron energy quantum is E_c = hbar qB/m, where q and m are the charge and mass of the boson. As already found, the energy of a boson is a sum of two contributions: the energy E_n ∝ n^2 associated with free longitudinal motion and the magnetic energy E_c,n ∝ n+1/2. Longitudinal excitations could be assigned to the generation of Becker currents. This proposal would integrate metabolism, negentropy generation, and the quantum-like behavior of ELF em fields in living matter into a single picture.

  3. Could it be that ATP - instead of being a carrier of negentropic entanglement as suggested earlier - only provides the metabolic energy quantum, transformed to a cyclotron energy quantum or longitudinal energy quantum when negentropic entanglement is generated by exciting the cyclotron BE condensate? Or could ATP carry both the metabolic energy and the negentropic entanglement, both of them being transferred to the magnetic flux tube in the ATP → ADP process?

    1. Cyclotron energies are far too small for this to make sense for the ordinary value of Planck constant. The nominal value of the metabolic energy quantum is E_0 = 0.5 eV, which by E_0 = h_0 f_0 corresponds to a frequency f_0 = 5 × 10^13 Hz in the near infrared. The value of the electron's cyclotron frequency in the endogenous magnetic field B_end = 0.2 × 10^-4 Tesla, postulated to explain the effects of ELF em fields on the vertebrate brain, is f_c,e ≈ 6 × 10^5 Hz. If the metabolic energy quantum is to excite a cyclotron transition (n → n+1), one must have E_c = E_0.

      Even for the electron, E_c is much below E_0 for B = B_end and hbar = hbar_0. One can however scale both B from B_end and hbar from hbar_0. Requiring E_c(hbar, B) = E_0 and using E_c = hf gives r_1 r = f_0/f_c,e, where r_1 = B/B_end and r = hbar/hbar_0, hbar_0 denoting the standard value of Planck constant. This gives r_1 r ≈ (5/6) × 10^8.

    2. There are many manners to achieve the desired upwards scaling of cyclotron energies. Magnetic flux quantization gives further constraints. One could require that the magnetic flux is quantized, and that for hbar = hbar_0 the flux quantum has a radius of order L(151) (10 nm, the cell membrane thickness) corresponding to the thickness of a flux tube assignable to a single DNA nucleotide.

      The radius of the flux quantum corresponds to the magnetic length r_B = (hbar/qB)^(1/2). In the scaling B_end → 1 Tesla (r_1 = 2.5 × 10^4), the magnetic length scales as r_B ≈ 2.5 μm → 11 nm. From the condition r_1 r = (5/6) × 10^8 one obtains for the scaling of Planck constant r ∼ 3.3 × 10^3. The scaling of a flux tube of length L(151) would give a flux tube length of order 33 μm, which corresponds to cell size, so that a flux tube connecting DNA and cell membrane could be in question. Note that the scaling of hbar does not affect the zero point kinetic energy in the longitudinal direction since L scales as hbar.

    3. For a flux tube of length L(151) and for hbar = hbar_0, the energy of the lowest longitudinal excitation is of the same order of magnitude as the metabolic energy quantum, so that the excitation of longitudinal states could play a key role in the generation of Becker's currents. There is evidence for non-local excitations of electrons in photosynthesis (see also the blog posting), which suggests that the longitudinal energy excitations of Cooper pairs could indeed play the role of the fundamental metabolic energy quantum transferred to the energy of the high energy phosphate bond of ATP. This interpretation leaves open the structure of the high energy phosphate bond, and there is no absolute need to assign zero point kinetic energy with it.

      If longitudinal energies are to be negligible, one must require the flux tube length to be considerably longer than L(151) for the ordinary value of hbar. For a given flux tube length, longitudinal energies are significant only for the electron. Indeed, Becker currents are known to be carried by electrons.

    4. If one allows ionic Bose-Einstein condensates, the value of Planck constant must be scaled up by the mass ratio m_I/m_e, where m_I and m_e are the masses of the ion and electron. For the proton this would give the scaling ratio r = 2^11, and one would end up with the hierarchy of Planck constants coming as powers of 2^11 suggested years ago. What is remarkable is that in cyclotron degrees of freedom also protons and ions can play a significant role: the quantal effects of ELF em fields on the vertebrate brain suggest that this is the case.

  4. What happens if one has just electrons rather than Cooper pairs? In both transversal and longitudinal degrees of freedom one would have the analog of a Fermi sphere, with electron states filled up to some maximum values of the integers characterizing cyclotron energy and longitudinal momentum. Transitions would induce negentropic entanglement also now. For cyclotron states the energy increment would be E_c, so that the basic metabolic energy quantum can induce the transitions. In longitudinal degrees of freedom the minimal energy increment would be (2N+1)E_0, where N characterizes the populated state with maximal longitudinal momentum. This energy should equal the metabolic energy quantum. This can be arranged but is not so natural. Experimental work is sooner or later bound to reveal whether electrons or their Cooper pairs are in question.

The option developed above is perhaps the most elegant found hitherto: it would put the BE condensates of electronic and ionic Cooper pairs in a special position, it would lead to an explicit proposal for what negentropic entanglement is, and it would require no modification of the ideas related to ATP - even the standard view about ATP can be kept. It suggests that electronic cyclotron BE condensates are essential also for the understanding of photosynthesis. The absorption of a dark photon would generate a non-local excitation of the BE condensate of electron Cooper pairs - also a negentropically entangled state. The energy gain in this process could be interpreted as a fundamental metabolic energy quantum, and the subsequent steps in photosynthesis would only take care of the storage of the energy to ATP. Also the metabolic energy liberated in ATP → ADP could be realized universally as an IR dark photon absorbed by a cyclotron BE condensate at a magnetic flux tube, so that dark photon beams would become the key actors of metabolism and negentropy generation. Note that a maximal negentropy gain is obtained if the number of Cooper pairs in the condensate is a power of prime. Relatively small primes in the scale defined by the p-adic length scales assignable to elementary particles would be in question.
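
The scaling chain of point 3 above can be checked with a few lines of arithmetic. The sketch below takes the values quoted in the text (f_0 = 5 × 10^13 Hz, B_end = 0.2 × 10^-4 T, r_1 = 2.5 × 10^4) as given assumptions and computes the electron cyclotron frequency, the required product r_1 r, and the implied scaling r of Planck constant.

```python
import math

Q_E = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837e-31      # electron mass, kg

B_END = 0.2e-4           # endogenous magnetic field, T (value used in the text)
F_0 = 5e13               # frequency assigned to the metabolic quantum, Hz (as in the text)
R_1 = 2.5e4              # scaling of B assumed in the text

# electron cyclotron frequency f_c = qB/(2*pi*m)
f_c = Q_E * B_END / (2 * math.pi * M_E)

# the condition E_c(hbar, B) = E_0 gives r_1 * r = f_0 / f_c
r1_r = F_0 / f_c
r = r1_r / R_1

# scaled flux tube length: L(151) = 10 nm scales by r
L_scaled = r * 10e-9

print(f"f_c,e    ≈ {f_c:.3g} Hz")          # ≈ 5.6e5 Hz
print(f"r_1 * r  ≈ {r1_r:.3g}")            # close to the (5/6) × 10^8 quoted above
print(f"r        ≈ {r:.3g}")
print(f"L scaled ≈ {L_scaled*1e6:.0f} μm")  # tens of μm, i.e. cell size
```

With the rounded value f_c,e ≈ 6 × 10^5 Hz used in the text one gets r_1 r ≈ (5/6) × 10^8 and r ≈ 3.3 × 10^3; the exact constants shift these numbers by roughly ten percent without changing the conclusion.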

Tuesday, June 26, 2012

Magnetic body, DNA replication, mitosis, meiosis, and fertilization

If the magnetic body uses the biological body as a motor instrument and sensory receptor, the natural question is whether basic processes such as mitosis and meiosis could be induced by more fundamental processes at the level of the magnetic body. One can argue that if magnetic flux tubes are responsible for making a living organism - and even a population - a kind of Indra's net, cell division should be induced by the magnetic body and should automatically reproduce this Indra's net.

As a matter of fact, cell division brings strongly to mind the division of a magnetic dipole, but also the reconnection of magnetic flux tubes can be considered as a basic mechanism. At least the following mechanisms can be considered.

  1. Consider a pair of magnetic flux tubes with opposite fluxes connecting objects A and B. The division of A+B into A and B would be induced by a reconnection process for the members of the pair, producing two loops associated with A and B but no connection between A and B anymore. The problem with this option is that the flux tube connection defined in this manner might not be stable enough.

  2. A magnetic dipole would correspond to a flux tube at the core of the dipole field, itself decomposing into flux tubes with weaker magnetic flux at its ends. The division into two dipoles would correspond to the formation of a segment in which the flux tube decomposes into several flux tubes, which need not be parallel anymore. Two new dipole ends are formed, and the old dipole ends remain connected, so that the repetition of this process would yield a kind of Indra's net, predicting that all cells of a living organism are connected by flux tubes into a single coherent whole.

    The division of a flux tube into several flux tubes could also correspond to the increase of Planck constant by an integer factor n along a segment of the flux tube. The resulting n flux sheets would correspond to the sheets of the covering. The length of the segment would be scaled up by n.

  3. If one has a pair of dipoles A-B and C-D with the same total flux, a reconnection leading to A-D and C-B is possible.

Could the biochemical processes associated with cell division be induced by some of the listed processes? The two latter options would predict that the cells produced in cell division remain connected by magnetic flux tubes. The division of a dipole creates two new dipole ends connected by a short flux tube. The already existing ends remain connected by "long" flux tubes carrying weak magnetic fields as compared to that carried by the dipole itself. Also the processes of meiosis and fertilization could respect the presence of long flux tubes connecting the cells participating in the process, so that flux tube connections could also exist between parents and offspring. The members of a population could form a kind of super-organism. Remote interactions between DNA and other biomolecules of closely related members of a species, and even shared use of DNA (and its TGD variant "dark DNA"), can be imagined.
  1. Consider first DNA replication and the reshuffling taking place in meiosis, essential for sexual reproduction in eukaryotes. The dividing nucleus (of form MMFF) is an ordinary nucleus and contains two pairs of chromosomes coming from both the mother (MM) and the father (FF). Division produces four haploid cells containing only two chromosomes (AB), with A and B obtained by reshuffling the DNAs of mother and father to obtain 4 unique chromosome pairs. In sexual reproduction these cells fuse to form diploid cells (MMFF).

    1. The reshuffling of a pair MF of DNA strands from father and mother could be induced by a repeated reconnection process for flux tubes parallel to the DNA strands. The simplest reconnection for strands A-B and C-D produces strands A-X-D and C-Y-B, where A-X and C-Y are pieces of A-B and C-D with the same number of codons.

    2. The replication of DNA takes place for all four chromosomes before reshuffling. One obtains a nucleus containing 4 pairs of doubled chromosomes. This double nucleus divides into two daughter nuclei containing 2 doubled chromosomes each. These divide further into two nuclei each containing only two chromosomes (AB).

      The DNA reshuffling could correspond to a multiple reconnection process if the two DNA strands are accompanied by long magnetic dipoles (flux tubes). Note that in the absence of additional restrictions many combinations (2^8) are possible.

    3. After replication and reshuffling, the division of the nucleus into two intermediate nuclei could be induced by the splitting of a flux tube connecting pairs of doubled chromosomes into flux tubes that are no longer parallel to each other; the flux could diverge into a larger volume in this segment. A second possibility is that the increase of Planck constant increases the length of the segment and at the same time divides the flux into sub-fluxes. A dipole field flux tube would give long flux tubes, and a split dipole shorter flux tubes connecting the resulting cells together.

    4. Also the chromosome pairs of the resulting intermediate nuclei could be connected to each other by flux tubes to form a connected structure A-B-C-D, and a reconnection process could divide it into A-B plus C-D (say) and lead to a division of the nucleus producing 4 ordinary daughter nuclei.

  2. In mitosis the initial nucleus corresponds to MMFF, and DNA replication leads to pairs of doubled chromosomes but without reshuffling. There is one doubled pair from the mother and one from the father, and the members of the doubled chromosomes are connected by a kind of bridge. In mitosis proper the doubled chromosome pairs are split, and two chromosome pairs containing one chromosome from the father and one from the mother are formed. After this, division leads to two diploid cells similar to the dividing cell.

  3. In fertilization gametes from the father and mother fuse together to form a single cell containing chromosome pairs from both the father and the mother. The question is how the two gametes are able to find each other. The reconnection of closed magnetic flux tubes associated with the gametes could lead to the formation of bridges connecting the two gametes, and a phase transition reducing the value of Planck constant could bring the two gametes near each other and make the fusion possible.
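
The reconnection picture for reshuffling is formally identical to a single-point crossover between two sequences. A minimal toy sketch - the codon sequences and the cut point are invented for illustration, and nothing here models actual flux tube dynamics:

```python
def reconnect(strand_ab, strand_cd, cut):
    """Single reconnection of strands A-B and C-D at a given codon index:
    produces A-X-D and C-Y-B, exchanging the tails beyond the cut."""
    axd = strand_ab[:cut] + strand_cd[cut:]
    cyb = strand_cd[:cut] + strand_ab[cut:]
    return axd, cyb

# toy "chromosomes" from mother (M) and father (F), one symbol per codon
M = list("MMMMMMMM")
F = list("FFFFFFFF")

axd, cyb = reconnect(M, F, cut=3)
print("".join(axd))  # MMMFFFFF
print("".join(cyb))  # FFFMMMMM
```

Repeating the reconnection at different cut points generates the combinatorics of reshuffled strands mentioned above, while each product keeps the original number of codons.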

DNA replication is clearly the fundamental process, and the question is whether also this step could be reduced to a reconnection for a pair of flux tubes: the first would connect the separated DNA strands, and the second a free nucleotide and its conjugate.
  1. Suppose that there are flux tubes connecting the nucleotides of DNA to the corresponding nucleotides of the conjugate strand: in the normal situation they could be rather short flux tubes, of length shorter than 1 nm, but they could grow longer when the DNA strands separate. This might involve a phase transition temporarily increasing the value of the Planck constant assignable to these flux tubes, increasing the length of the segment and of the connecting flux tube and therefore the distance between the DNA strands.

  2. There are also free DNA nucleotides and their conjugates in the environment, which can be used in the replication process as building bricks. If free nucleotides and their conjugates are also connected pairwise by similar flux tubes, and if the value of the magnetic flux characterizes a given pair, then reconnection could take place between these two kinds of flux tubes and lead to a correct pairing of the DNA strand with conjugate nucleotides. The same would happen for the conjugate strand. The reduction of Planck constant would lead to a pair of ordinary DNA double strands.

  3. The details of the dynamics would be determined by other factors, but the outcome would be fixed by the nucleotide-conjugate pairing and the dependence of the flux on the nucleotide pair. In particular, conservation of magnetic flux would guarantee that nucleotides can pair only with their conjugates.
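
The flux-matching rule can be illustrated with a toy model in which each Watson-Crick pair carries a characteristic flux value and reconnection is allowed only when the fluxes match. All flux values here are invented for illustration:

```python
# toy flux value assigned to each Watson-Crick pair (arbitrary units, invented)
PAIR_FLUX = {("A", "T"): 1, ("T", "A"): 1, ("G", "C"): 2, ("C", "G"): 2}

def can_reconnect(strand_base, free_nucleotide):
    # reconnection is allowed only if the pair carries a well-defined flux,
    # i.e. the free nucleotide is the conjugate of the strand base
    return (strand_base, free_nucleotide) in PAIR_FLUX

template = "ATGC"
new_strand = ""
for base in template:
    # pick the unique free nucleotide whose flux matches the strand base
    for candidate in "ATGC":
        if can_reconnect(base, candidate):
            new_strand += candidate
            break

print(new_strand)  # TACG, the conjugate of ATGC
```

The point is only that a conserved quantity attached to each pair suffices to force correct pairing, whatever the detailed dynamics.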

These arguments suggest that the reconnection of magnetic flux tubes, a temporary change of the Planck constant, and the coding of nucleotide-conjugate pairs by magnetic flux could be key elements of mitosis, meiosis, and the reshuffling of chromosomes in meiosis. Also higher level processes - such as cell division and fertilization - could involve the reconnection process as a fundamental step. These mechanisms would appear in several length scales corresponding to DNA, nucleus, and cell. In an approach based on mere chemistry, all this must be assumed as a result of reaction kinetics.

Tuesday, June 19, 2012

DNA, speech, music, and ordinary sound

Peter Gariaev's group has made rather dramatic claims about DNA over the years. The reported findings have served as inspiration in the development of the TGD based view about living matter (see this, this, this, this and this).

  1. The group has proposed that the statistical distributions of nucleotides and codons in the intronic portion of DNA resemble the distributions of letters and words in natural languages. For instance, it is proposed that Zipf's law, applying to natural languages, also applies to the distributions of codons in the intronic portion of DNA. One can study the popularity of words in a natural language and order them by their popularity. Zipf's law states that the frequency of a word in a long enough text is inversely proportional to its rank in this ordering, so that the product of rank and frequency is roughly constant.

  2. It has also been claimed that DNA can be reprogrammed using modulated laser light or even radio waves. I understand that reprogramming means a modified gene expression. Gariaev's group indeed proposes that the meaning of the third nucleotide (having a rather low significance in the DNA-amino-acid correspondence) of the genetic codon depends on the context, giving rise to a context dependent translation to amino-acids. This is certainly a well-known fact for certain variants of the genetic code. This context dependence might make the re-programming possible. The notion of dark DNA allows one to consider a much more radical possibility based on the transcription of dark DNA to mRNA followed by translation to amino-acids. This could effectively replace genes with new ones.

  3. Also the modulation of the laser light by speech is claimed to have the re-programming effect. The broad band em wave spectrum resulting from the scattering of red laser light on DNA is reported to have rather dramatic biological effects. The long wavelength part of this spectrum can be recorded and transformed to sound waves, and these sound waves are claimed to have the same biological effects as the light. The proposal is that acoustic solitons propagating along DNA represent this effect on DNA.

I do not have the competence to make statements about the plausibility of these claims. The TGD view about quantum biology also makes rather strong claims. The natural question is whether a justification for the claims of Gariaev and collaborators could be found in the TGD framework. In particular, what can one say about the possible effects of sound on DNA? One intriguing fact about sound perception is that music and speech have meaning whereas generic sounds do not. Could one say something interesting about how this meaning is generated at the level of DNA?
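
Zipf's law is easy to state operationally: rank the distinct words of a text by frequency; then rank × frequency stays roughly constant. A small self-contained check on a toy word list - the text is invented, and a real test would use a corpus or the codon sequence of an intronic region:

```python
from collections import Counter

def rank_frequency(words):
    """Return (rank, word, count) triples ordered by decreasing frequency."""
    counts = Counter(words)
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    return [(rank, w, c) for rank, (w, c) in enumerate(ranked, start=1)]

# invented toy text whose word counts are exactly Zipfian: 12, 6, 4, 3
toy_text = ("the " * 12 + "of " * 6 + "and " * 4 + "to " * 3).split()

table = rank_frequency(toy_text)
for rank, word, count in table:
    print(rank, word, count, rank * count)   # rank*count is constant (= 12)
```

For DNA one would replace the word list with the sequence of codons read from the intronic portion and check how closely the rank-frequency product stays constant.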

Basic picture

Before continuing it is good to restate the basic TGD inspired ideas about the generation of meaning.

  1. The generation of negentropic entanglement is the correlate for the experience of meaning. In the model inspired by Becker's findings, discussed in an earlier posting, the generation of negentropic entanglement involves the generation of supra currents along flux tubes, moving in the electric field parallel to them. This is a critical phenomenon taking place when the voltage along the flux tube is near a critical value. The generation of a nerve pulse near the critical value of the resting potential is one example of this criticality. Becker's direct currents involved in the healing of wounds are another example.

    The flow of the supra current gives rise to the acceleration of charges along the flux tubes and to the generation of Cooper pairs - or even many-electron systems - at smaller space-time sheets, in a negentropically entangled state and carrying the metabolic energy quantum as zero point kinetic energy. The period of negentropic entanglement gives rise to a conscious experience to which one can assign various attributes such as understanding and attention. Negentropic entanglement would measure the information contained by a rule having as its instances the state pairs in the quantum superposition defining the entangled state. When the period of negentropic entanglement ceases, the metabolic energy is liberated.

  2. Remote activation of DNA by analogs of laser beams is another essential piece of TGD inspired quantum biology (see this). In the proposed addressing mechanism a collection of frequencies serves as a password activating intronic portions of DNA. This would take place via a resonance for the proposed interaction between photons and dark supra currents flowing along magnetic flux tubes - and perhaps also along DNA strands or flux tubes parallel to them. The interaction would be based on the superposition of the electric fields of the photons (massless extremals) with the electric fields parallel to the flux tubes, so that the massless extremals serving as correlates for laser beams would traverse the flux tube in the orthogonal direction.

  3. The flux tubes - and more generally the flux sheets labelled by the value of Planck constant - along which the radiation arrives would be transversal to DNA and contain DNA strands. These flux tubes and sheets also define the connections to the magnetic body and form parts of it. A given flux sheet would naturally select the portion of DNA which is activated by the radiation: it could be a portion of the intronic part of DNA activating in turn a gene. These flux tubes and sheets could be connected to the lipids of nuclear and cell membranes - also the cell membranes of other cells - as assumed in the model of DNA as a topological quantum computer. The sheets could also give rise to a hierarchy of genomes: besides the genome one would have a super-genome, in which the genomes of organelles are integrated by flux sheets into a large coherently expressed structure containing individual genomes like a page of a book contains lines of text. These pages would in turn be organized into a book - a hyper-genome as I have called it. One could also have libraries, etc. There would be a fractal structure of flux quanta inside flux quanta.

Phonons and photons in the TGD Universe

Consider next phonons and their coupling to photons in TGD Universe.

  1. Sound waves could quite well transform to electromagnetic radiation, since living matter behaves as a piezoelectric crystal transforming sound to radiation and vice versa. Microwave hearing represents an example of this kind of transformation. This would require that photons of given energy and varying value of Planck constant couple to phonons with the same energy, Planck constant, and frequency.

  2. Whether one can assign to phonons a non-standard value of Planck constant is not quite clear, but there seems to be no reason preventing this. If so, even the phonons of audible sounds would have energies above the thermal threshold and could have direct quantal effects on living matter if they have the same Planck constant as photons with the same frequency.

  3. Acoustic phonons represent longitudinal waves, and their coupling to photons would require longitudinal photons. In Maxwell's electrodynamics these are not possible, but in the TGD framework the photon is predicted to have a small mass, so that also longitudinal photons are possible.

  4. For general condensed matter systems one can also have optical phonons, for which the polarization is orthogonal to the wave vector, and these could couple to ordinary photons. The motion of charged particles in the electromagnetic field of a massless extremal (topological light ray) would be a situation in which phonons and photons accompany each other. This would make the piezoelectric mechanism possible.

Under these assumptions collections of audible frequencies could also represent passwords activating the intronic portion of the genome and leading to gene expression or some other activities. If one believes in the hypothesis that DNA acts as a topological quantum computer based on braid strand connections between the nucleotides of the intronic portion of DNA and the lipids of the nuclear and/or cell membranes, also topological quantum computation type processes could be activated by collections of sound frequencies (see this).

What distinguishes speech and music from sounds without meaning?

Speech and music are very special forms of sound in that they have a direct meaning. The more one thinks about these facts, the more non-trivial they look. For music - say singing - the frequency of the carrier wave is piecewise constant, whereas for speech it remains roughly constant and the amplitude modulation is important. In fact, by slowing down recorded speech, one gets the impression that the carrier frequency is actually modulated as in a chirp (the frequency goes down and covers a range of frequencies). What is the mechanism giving speech and music their meaning, thereby distinguishing them from other sounds?

Besides the frequency content, also the phase is important for both speech and music experience. Speech and reversed speech sound quite different although the intensity in frequency space is the same. Therefore the relative phases associated with the Fourier coefficients of the various frequencies must be important. For music, simple rational multiples of the fundamental define the scale. Could it be that also the frequencies relevant to the comprehension of speech correspond to these rational multiples?
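
The claim that reversed speech has the same intensity spectrum is a standard Fourier fact: time reversal conjugates the spectrum up to a phase, leaving all magnitudes unchanged. A minimal pure-Python check on an arbitrary invented signal:

```python
import cmath

def dft(x):
    # discrete Fourier transform of a real sequence
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

signal = [0.0, 1.0, 0.5, -0.3, 0.8, -1.0, 0.2, 0.4]   # invented "speech" samples
reversed_signal = signal[::-1]

mags = [abs(c) for c in dft(signal)]
mags_rev = [abs(c) for c in dft(reversed_signal)]

# magnitudes agree bin by bin: the intensity spectrum cannot tell the two apart
print(all(abs(a - b) < 1e-9 for a, b in zip(mags, mags_rev)))  # True

# the phases, however, differ - this is what distinguishes a signal from its reverse
phases = [cmath.phase(c) for c in dft(signal)]
phases_rev = [cmath.phase(c) for c in dft(reversed_signal)]
print(phases == phases_rev)  # False
```

So any mechanism sensitive only to intensity in frequency space would hear speech and reversed speech identically; distinguishing them requires sensitivity to the relative phases.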

Suppose that one indeed believes in the proposed vision based on the fundamental role of negentropic entanglement in the generation of meaning and takes seriously the proposed mechanisms for generating it. Can one understand why music and speech differ from general sounds, and what distinguishes them from each other?

  1. With these assumptions, suitable collections of frequencies in a sound wave would indeed activate the intronic portion of DNA by generating negentropic entanglement. Also other dark flux tubes than those assignable to DNA are involved. For instance, the hair cells responsible for the hearing of sounds around particular frequencies could involve flux tubes and utilize a similar mechanism. Allowing only hair cells would define the conservative option. On the other hand, one could well claim that what happens in the ear has nothing to do with the understanding of speech and music, which could take place only at the level of neuronal nuclei.

  2. Could the direct interaction of sound waves with magnetic flux tubes generate the experiences of speech and music - in other words, assign meaning to sounds? The criterion for a sound to have an interpretation as speech or music would be that it contains the resonance frequencies needed to activate the DNA or, more generally, to generate dark super currents generating Cooper pairs and in this manner loading metabolic energy storages. This would apply to both speech and musical sounds.

  3. The pitch of speech and of musical sound can vary. We are aware of the key of a music piece and of modulations of the key, we remember the starting key, and it is highly satisfying to return "home" to the original key. This would imply that the overall scale of the collection of frequencies can be varied and that the pitch of the speech defines a natural expectation value of this scale. For persons possessing so-called absolute pitch this scaling symmetry would be broken in a well-defined sense.

  4. Musical scales involve frequencies coming as rational multiples of the basic frequency. Octaves - power-of-two multiples of the frequency - can be said to be equivalent as far as musical experience is concerned. One might understand the special role of rational multiples of the basic frequency if the Fourier components have the same phase periodically, so that the experience is invariant under discrete time translations. This requires commensurable frequencies expressible as rational multiples of the same fundamental frequency. The preferred role of p-adic primes coming as powers of two could relate to the octave phenomenon.

  5. Are the relative phases of different Fourier components important for music experience? If one requires a periodic occurrence of the maximal possible intensity (maximal constructive interference), then the relative phases must vanish at the values of time of maximal intensity. What seems essential is that the presence of commensurate frequencies gives rise to a time-translation invariant sensation, whereas speech consists of pulses.
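The commensurability argument of items 4 and 5 is easy to verify numerically: a superposition of frequencies that are rational multiples of a fundamental repeats exactly after the common period, while an irrational frequency ratio destroys the periodicity. The specific ratios below (a just fifth 3/2, a just third 5/4, and the irrational sqrt(2)) are my illustrative choices, not taken from the text.

```python
import numpy as np

# Frequencies 1, 3/2 and 5/4 (in units of the fundamental f0 = 1 Hz) have
# gcd 1/4 Hz, so their sum repeats exactly with the common period 4 s;
# replacing 3/2, 5/4 by the irrational sqrt(2) destroys this discrete
# time-translation invariance.

def signal(t, ratios, f0=1.0):
    """Superpose unit-amplitude sinusoids at frequencies f0 * r."""
    return sum(np.sin(2 * np.pi * f0 * r * t) for r in ratios)

t = np.linspace(0.0, 1.0, 2000, endpoint=False)
common_period = 4.0  # = 1 / gcd(1, 3/2, 5/4) seconds

consonant = signal(t, [1, 3/2, 5/4])
dissonant = signal(t, [1, np.sqrt(2)])

periodic = np.allclose(consonant, signal(t + common_period, [1, 3/2, 5/4]))
aperiodic = not np.allclose(dissonant, signal(t + common_period, [1, np.sqrt(2)]))
```

The commensurate superposition is invariant under the discrete time translation by its common period; the incommensurate one never repeats.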

Are speech and music quantum duals like position and momentum?

Frequencies are crucial for music experience. In the case of speech the relative phases are very important, as the example of reversed speech demonstrates. How a given phoneme is heard is determined to a high degree by the frequency spectrum at the beginning of the phoneme (this is what distinguishes between consonants). Vowels are nearer to notes in vocalization. Speech consists of pulses, and destructive interference between different frequencies is required to generate pulses and different pulse shapes, so that phase information is important. At least the harmonics of the basic rational multiples of the fundamental are necessary for speech.
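The claim that forward and reversed speech share the same intensity distribution in frequency space while differing in their phases can be checked directly for any real signal; the pulse train below is a toy stand-in for speech, not speech data.

```python
import numpy as np

# Time-reversing a real signal leaves the magnitude spectrum untouched but
# changes the phases: for y[n] = x[N-1-n] one has |Y[k]| = |X[k]| while
# arg Y[k] differs by a k-dependent term.

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
x = np.exp(-((t % 0.25) / 0.03)) + 0.1 * rng.standard_normal(n)  # asymmetric pulses

X_fwd = np.fft.rfft(x)
X_rev = np.fft.rfft(x[::-1])

same_intensity = np.allclose(np.abs(X_fwd), np.abs(X_rev))
different_phases = not np.allclose(np.angle(X_fwd), np.angle(X_rev))
```

So any percept that distinguishes forward from reversed speech must indeed be using phase information, exactly as argued above.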

One can criticize the previous discussion for being completely classical. Phase and frequency are in wave mechanics canonically conjugate variables analogous to position and momentum. Is it really possible to understand the difference between music and speech purely classically by assuming that one can assign to sound waves both frequencies and phases simultaneously - just like one assigns to a particle sharp values of both momentum and position? Or should one use one of two representations: either in terms of the numbers of phonons in different modes labelled by frequencies, or in terms of coherent states of phonons with ill-defined phonon numbers but well-defined amplitudes? Could the coherent states serve as the analogs of classical sound waves? Speech would be as near as possible to classical sound and music would be quantal. Of course, there is a large variety of alternative choices of basis states between these two extremes, as a specialist in quantum optics could tell.
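The Fock-state/coherent-state contrast invoked here can be made quantitative: a coherent state |α⟩ has Poissonian phonon-number statistics with mean and variance both equal to |α|², whereas a Fock state has a sharp number and no defined phase. The amplitude α = 3 is an arbitrary illustrative value.

```python
import math

# A coherent state |alpha> has Poissonian phonon-number statistics:
# P(n) = exp(-|alpha|^2) |alpha|^(2n) / n!, with mean and variance both
# equal to |alpha|^2. A Fock state |n> would have the same mean but zero
# variance (and a completely undefined phase).

alpha = 3.0
mean_n = abs(alpha) ** 2  # 9.0

def coherent_pn(n, alpha):
    """Probability of detecting n phonons in the coherent state |alpha>."""
    return math.exp(-abs(alpha) ** 2) * abs(alpha) ** (2 * n) / math.factorial(n)

ns = range(60)  # the Poisson tail beyond n = 60 is negligible for alpha = 3
probs = [coherent_pn(n, alpha) for n in ns]
est_mean = sum(n * p for n, p in zip(ns, probs))
est_var = sum((n - est_mean) ** 2 * p for n, p in zip(ns, probs))
```

The number-phase trade-off between these two bases is the quantum analog of the classical phase-versus-intensity distinction between speech and music proposed above.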

Suppose that this picture is more or less correct. What could be the minimal scenario allowing one to understand the differences between speech and music?

  1. Only a subset of frequencies could activate DNA (or, if one wants to be conservative, the hair cells) also in the case of speech. One could still pick up the important frequencies for which the ratios are simple rational numbers, as in the case of the musical scale, plus their harmonics. If this assumption is correct, then speech from which all frequencies except the harmonics of the simple rational multiples of the fundamental are removed should still be comprehensible as speech. The pitch of the speech would determine a good candidate for the fundamental frequency.

  2. The harmonics of the frequencies activating DNA would be crucial for speech. Harmonics are present also in music, and their distribution allows one to distinguish between different instruments and persons. The deviation of musical notes from ideal Fock states would correspond to this.

  3. The naive guess is that the simple rational multiples of the fundamental and the possibility of having their harmonics could be reflected in the structure of the intronic portions of DNA as repetitive structures of various sizes. This cannot be the case, since the wavelengths of ordinary photons would be so small that the energies would be in the keV range. Neither is this expected to be the case: it is the magnetic flux tubes and sheets traversing the DNA which carry the radiation, and the natural lengths assignable to these flux quanta should correspond to the wavelengths. The larger the flux quantum, the lower the frequency and the larger the value of Planck constant. Harmonics of the fundamental would appear naturally for a given flux tube length.

    The DNA strands and flux tubes and sheets form a kind of electromagnetic music instrument, with flux quanta taking the role of guitar strings and DNA strands and other structures - such as lipids and possibly other molecules to which flux tubes get attached - taking the role of the frets of a guitar. This analogy suggests that for wavelengths measured in micrometers the basic frequencies correspond to the distances between "frets" defined by cell and nuclear membranes in the tissue in the scale of the organism. This would relate the spectrum of resonance frequencies to the spectrum of distances between DNAs in the tissue.

    For wavelengths corresponding to very large values of Planck constant, giving rise to frequencies in the VLF and ELF ranges and corresponding also to audible frequencies, the preferred wavelengths would correspond to lengths of flux quanta of Earth size scale. One should understand whether the quantization of these lengths in simple rational ratios could take place for the preferred extremals.

  4. Could the pulse shape associated with massless extremals (MEs, topological light rays) allow one to distinguish classically between speech and music at the level of space-time correlates? Linear superposition of Fourier components in the direction of the ME is possible, and this allows one to speak about pulse shape. It also allows the notions of coherent state and Fock state for a given direction of wave vector. Essential would be the restriction of the superposition of fields to a single direction of propagation, to be distinguished from the superposition of the effects of fields associated with different space-time sheets on a multiply topologically condensed particle. Maybe this would allow one to make testable predictions.
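The Earth-size scale invoked for the ELF wavelengths in item 3 above is a one-line check: for electromagnetic radiation λ = c/f, so frequencies of a few Hz to a few tens of Hz give wavelengths of order the Earth's circumference. The 7.8 Hz reference point (the fundamental Schumann resonance, a standard ELF benchmark) is my illustrative choice, not taken from the text.

```python
# lambda = c / f: an ELF frequency of a few Hz corresponds to a wavelength
# comparable to the Earth's circumference (~4.0e7 m).

c = 2.998e8                  # speed of light, m/s
earth_circumference = 4.0e7  # meters, approximate

f_elf = 7.8                  # Hz, fundamental Schumann resonance
wavelength = c / f_elf       # ~3.8e7 m

comparable = 0.5 < wavelength / earth_circumference < 2.0
```

So a flux quantum whose length matches such a wavelength would indeed have to be of Earth size scale, as the text asserts.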
This text can be found at my homepage in the article Quantum Model for the Direct Currents of Becker. See also the chapter Quantum Mind, Magnetic Body, and Biological Body of "TGD based view about living matter and remote mental interactions".

Sunday, June 17, 2012

Could magnetic flux tubes make possible effective holograms?

The notions of massless extremals (topological light rays) and magnetic flux tubes carrying dark matter identified as phases with a large value of effective Planck constant have become central to TGD inspired quantum biology, and new applications emerge continually. Just some time ago I told about an addressing mechanism based on a collection of frequencies acting as a password. The latest application is an interpretation of what holograms - or rather effective holograms - could be in the TGD framework.

The idea about living matter as a hologram of some kind is not new. Peter Gariaev's approach to DNA uses the notion of hologram. In neuroscience Karl Pribram is one of the advocates of the hologram concept. There is a lot of empirical support for the notion. The notion of hologram - or rather, conscious hologram - is a key concept also in the TGD inspired view about quantum biology.

But what are (conscious) holograms really? Are they genuine holograms, or are they holograms only in the sense that the scattering of light beams from them is very much like scattering from ordinary holograms - that is, like scattering from the original object? Could one imagine a mechanism making possible scattering from the original object effectively represented by the hologram-like structure?

Holograms as relay stations?

To proceed, notice that there is a rather general belief that a mere object possessed by the patient is enough for the healer - in some sense such objects are holograms of the patient. Usually this belief is of course regarded as primitive pars pro toto magic. This belief might however have some justification in terms of negentropic entanglement, expected to be a fundamental aspect of remote mental interactions. In principle negentropic quantum entanglement can take place via an arbitrary number of relay stations, and magnetic flux tubes connecting the entangled objects would be the quantum correlate for it. Negentropic entanglement would serve as a correlate for attention, experience of understanding, etc., and it would correlate closely with metabolism: the generation of ATP and the associated high energy phosphate bond would generate a negentropically entangled electron Cooper pair or add an electron to an existing negentropically entangled many-electron system, and the decay of ATP to ADP would liberate the metabolic energy quantum and destroy the negentropic entanglement.

Negentropic entanglement could actually mean that objects of the external world - say living beings - can act like parts of our biological body. There is a wide variety of psychological experiments showing how illusory our view about what our body is can be. Quantum entanglement of an object with its target, with magnetic flux tubes as geometric correlates, would make the object a relay station. The object - call it O - would only serve as a relay station connected to, say, a person P who possesses the object. Light scattering from O could actually transform to dark photons and travel along flux tubes to P, where it is scattered back - say from DNA - and returns along the flux tubes and leaves O. Effectively this is like scattering from a hologram of P represented by the object O. The flux tube connections would make various objects in our vicinity effective holograms. This is something that one actually expects, since attention - both visual and auditory - has flux tubes connecting the perceiver to the target of attention as correlates.

One can consider two options, since the radiation arriving at the object could transform to positive or negative energy photons. In the first case the scattering could be seen as ordinary scattering from P. Negative energy photons would instead represent signals traveling to the geometric past (analogs of phase conjugate laser beams) and scatter back from P as positive energy photons traveling to O. The TGD based models of memory as communications with the geometric past and of intentional action as a process in which a negative energy signal to the geometric past initiates neural activities (Libet's findings about the active aspects of consciousness) involve a similar mechanism. Also remote metabolism, based on sending negative energy signals to an energy storage (analogous to a population inverted laser), relies on the same mechanism.

Peter Gariaev's experiments irradiating DNA with a red laser beam generate a broad spectrum of radio waves, which in the TGD Universe could correspond to photons with the same energy but with a large Planck constant. These photons have biological effects on organisms of the same species and even on closely related species. The TGD based proposal is that the scattered laser beam defines a collection of frequencies serving as addresses for the parts of DNA activating gene expression.

If this represents a basic mechanism of gene expression, one can quite well imagine that an organism - call it A - whose DNA is somehow damaged could utilize the healthy DNA of another organism B by sending to it the counterpart of a laser beam, which scatters and generates the superposition of dark photon beams serving as an address activating the DNA of A. A would effectively use the DNA of B, and B would effectively become part of A's biological body. Even more, B could apply remote versions of various functions of its DNA, say remote replication and (in particular) remote transcription. One can say that A and B combine their genomes and collaborate. This idea is not too surprising if one accepts the notion of super-genome suggested strongly by the hypothesis that DNA strands can organize to magnetic flux sheets traversing them to form larger structures allowing coherent gene expression even at the level of species.

This mechanism could explain why the mere presence of healthy organisms of the species can induce the healing of an organism which is not healthy. It could be the basic mechanism of healing: the patient could remotely use the healthy DNA of the healer to generate signals activating her own genes.

Further comments and questions

Some further comments and questions are in order.

  1. The relay station mechanism could be universal in biology. The transformation of ordinary photons to dark photons at the flux tubes defining the magnetic body of DNA is assumed in the model explaining the photos taken by Peter Gariaev and his group of a DNA sample, showing the presence of what looks like macroscopic flux tube structures.

  2. The mechanism could also explain phantom DNA as real DNA connected by flux tubes to the chamber that contained the original DNA. A laser beam arriving at the empty chamber would travel along the flux tubes to the place where the removed DNA is, scatter, and return back. This would create the scattering pattern assigned to the phantom DNA.

  3. One can even ask whether the basic mechanism of homeopathy relies on the relay station mechanism. Homeopathically treated water would carry a collection of flux tube connections to the molecules which were present in the first stage of the preparation process of the homeopathic remedy. Since dark photons travel with light velocity, the travel times of the photons would be so small that the scattering of incoming light via the relay station mechanism would be almost instantaneous, so that the original molecules would be effectively present.

  4. The de-differentiation of cells, which looks to me a rather mysterious phenomenon, means rejuvenation. Could one imagine that the genetic programs are replaced with those of the geometric past and that a similar mechanism is at work? Could the rejuvenation mechanism involve scattering of the counterpart of phase conjugate laser light from the non-differentiated healthy cells of the geometric past? If so, one should try to achieve the same effect directly at the level of cells. One could try to induce de-differentiation of the cells of the owner of the object serving as a relay station in the same manner - say, healing cancer cells by de-differentiating them to an omnipotent state. In the experiments involving Becker's DC currents just this happened. In this microscopic situation one might be able to demonstrate the effect really convincingly.

Neutron anomaly as evidence for many-sheeted space-time

There was a very interesting article about a magnetic anomaly in UCN trapping, UCN being shorthand for ultra-cold neutrons. The article had a somewhat hypish title: Magnetic anomaly in UCN trapping: signal for neutron oscillations to parallel world?. Perhaps this explains why I did not bother to look at it the first time I saw it.

As I saw again the popular story hyping the article, I realized that the anomaly - if real - could provide direct evidence for the transitions of neutrons between parallel space-time sheets of many-sheeted space-time. TGD of course predicts that this phenomenon is completely general, applying to all kinds of particles.

The interpretation of the authors is that ultra-cold neutrons oscillate between parallel worlds - albeit in a different sense than in TGD. The authors describe this oscillation using the same mathematical model as used to describe neutrino oscillations. What would be observed is that in a statistical sense neutrons in the beam disappear and reappear periodically. The model predicts that the frequency for this is just the Larmor frequency ω = μ•B/2 for the precession of the spin of the neutron in a magnetic field. The authors claim that just this is observed, and their interpretation looks somewhat outlandish: the standard model gauge group is doubled, so that all particles have exact mirror copies with the same quantum numbers. This is of course an extremely inelegant interpretation. Something much more elegant is needed.

TGD based description of the situation

TGD allows one to understand the finding in terms of many-sheeted space-time, and one ends up with a phenomenological model similar to that of the authors. Now however the phenomenon is predicted to be completely general, applying to all kinds of particles, and does not require the weird doubling of standard model symmetries.

Imagine the presence of two space-time sheets (or even more of them) carrying magnetic fields which decompose to flux tubes.

  1. Suppose that the neutron is topologically condensed on one of these flux tubes. What happens when the flux tubes are "above each other" in the sense that their Minkowski space projections intersect and the flux tubes are extremely near to each other, with a distance of order the CP2 size, about 10^4 Planck lengths? It took a long time to take seriously the obvious: the neutron topologically condenses on both space-time sheets and experiences the sum of the magnetic fields in these regions. This actually allows one to overcome the basic objection against TGD: all classical gauge fields are expressible in terms of CP2 coordinates and their gradients, so that enormously powerful constraints between classical gauge fields are satisfied and the linear superposition of fields is lost. In many-sheeted space-time this superposition is replaced with the superposition of the effects of the fields in multiple topological condensation.

  2. In the regions where the intersection of the M4 projections of the flux tubes is empty, topological condensation takes place on either space-time sheet.

  3. What happens when neutrons propagating along flux tube 1, characterized by magnetic field B1, arrive in a region where flux tube 2 with magnetic field B2 resides? In the intersection region the neutrons experience the field B1+B2 in good approximation. The interaction energy is E = μ B•σ, where B is the magnetic field and σ is the spin of the neutron. In flux tube 1 one has B=B1, in flux tube 2 one has B=B2, and in the intersection region B=B1+B2. It can happen that a neutron arriving along flux tube 1 continues its travel along flux tube 2.

  4. The magnetic fields in question actually consist of a large number of nearly parallel flux tubes, and the travel of the neutron is a series of segments: B1 → B1+B2 → B2 → ..., as if the neutron would make jumps between parallel worlds. Now these worlds are geometrically parallel rather than identifiable as copies in a tensor product of standard model gauge groups.

A phenomenological description predicting the probabilities for the transitions between the parallel worlds assignable to the two magnetic fields could be based on the simple Hamiltonian used to describe also neutrino mixing. The Hamiltonian is a sum of the spin Hamiltonians Hi = μ Bi•σ and of a non-diagonal mixing term ε: H = H1 ⊕ H2 + ε. The diagonal terms Hi are non-vanishing in the non-intersecting regions, and the non-diagonal term describes what happens in the intersecting regions. Just this description was used by the authors of the article to parametrize the observed anomaly.
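The two-state phenomenology can be made concrete with a minimal numerical model. The energies E1, E2 and the mixing term eps below are illustrative values (ℏ = 1), and the comparison is with the textbook two-level (Rabi) oscillation formula rather than with anything specific to the article.

```python
import numpy as np

# Diagonal magnetic energies E1, E2 for the neutron on the two flux tubes
# and a small off-diagonal mixing term eps. The exact evolution from
# diagonalizing H is compared with the standard two-level formula
# P(t) = eps^2/(eps^2 + D^2) * sin^2(sqrt(eps^2 + D^2) * t), D = (E2-E1)/2.

E1, E2, eps = 1.0, 1.3, 0.05
H = np.array([[E1, eps],
              [eps, E2]])

def transition_prob(t):
    """P(tube 1 -> tube 2) after time t, from exact diagonalization of H."""
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T
    return abs(U[1, 0]) ** 2

D = (E2 - E1) / 2
Omega = np.hypot(eps, D)
ts = np.linspace(0.0, 50.0, 200)
exact = np.array([transition_prob(t) for t in ts])
rabi = (eps**2 / Omega**2) * np.sin(Omega * ts) ** 2
```

The oscillatory disappearance and reappearance of neutrons in the beam is just this periodic transition probability, with its amplitude suppressed by the energy mismatch between the two "worlds".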

One can test this interpretation by introducing a third magnetic field. The interpretation of the authors might force them to introduce even a third copy of the standard model gauge group;-).

Amusing co-incidence

What is so amusing is that the magnetic field used in the experiments was .2 Gauss. This is exactly the nominal value of the endogenous magnetic field needed to explain the strange quantal effects of radiation at the cyclotron frequencies of biologically important ions on the vertebrate brain. The frequencies are extremely low - in the EEG range - and the corresponding photon energies are about ten orders of magnitude below the thermal energy, so that standard quantum mechanics predicts no effects. The explanation assumes Bend = .2 Gauss containing dark variants of these ions with so large Planck constants that the cyclotron energies are above the thermal energy at physiological temperatures. Why did the experimentalists happen to use just this .2 Gauss magnetic field, which is 2/5 of the nominal value of the Earth's magnetic field BE = .5 Gauss? If I were paranoid, I would swear that the experimentalists were well aware of TGD;-). Of course they were not! One cannot be aware of TGD in the company of respectable scientists and even less in respectable science journals;-)!
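The numbers behind this coincidence can be checked in a few lines. The constants are standard; Ca2+ is my choice of a representative biologically important ion (the text does not single out a specific one), and the exact number of orders of magnitude depends on the ion.

```python
import math

# Cyclotron frequency f = qB/(2*pi*m) of a Ca2+ ion in B = 0.2 Gauss, and
# the ratio of the photon energy h*f to the thermal energy k_B*T at body
# temperature.

q = 2 * 1.602e-19    # charge of Ca2+, C
m = 40 * 1.661e-27   # mass of Ca-40, kg
B = 0.2e-4           # 0.2 Gauss in tesla
h = 6.626e-34        # Planck constant, J s
kB = 1.381e-23       # Boltzmann constant, J/K
T = 310.0            # physiological temperature, K

f_cyc = q * B / (2 * math.pi * m)   # ~15 Hz, inside the EEG band
ratio = h * f_cyc / (kB * T)        # photon energy / thermal energy
orders_below = -math.log10(ratio)   # more than ten orders of magnitude
```

The frequency indeed lands in the EEG band, and the ordinary-Planck-constant photon energy is far below thermal energy, which is the tension that the large-ℏ dark-ion hypothesis is invoked to resolve.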

Saturday, June 09, 2012

About deformations of known extremals of Kähler action

I have done a considerable amount of speculative guesswork to identify what I have used to call preferred extremals of Kähler action. The problem is that the mathematical problem at hand is extremely non-linear and there is no existing mathematical literature. One must proceed by trying to guess general constraints on the preferred extremals which look physically and mathematically plausible. The hope is that this net of constraints could eventually crystallize into a Eureka! Certainly the recent speculative picture also involves wrong guesses. The need to find an explicit ansatz for the deformations of known extremals based on some common principles has become pressing. The following considerations represent an attempt to combine the existing information to achieve this.

The dream is to discover the deformations of all known extremals by guessing what is common to all of them. One might hope that the following list summarizes at least some common features.

Effective three-dimensionality at the level of action

  1. Holography realized as effective 3-dimensionality also at the level of action requires that the action reduces to 3-dimensional effective boundary terms. This is achieved if the contraction jαAα vanishes. This is true if jα vanishes or is light-like, or if it is proportional to the instanton current, in which case current conservation requires that the CP2 projection of the space-time surface is 3-dimensional. The first two options for j have a realization for known extremals. The status of the third option - proportionality to the instanton current - has remained unclear.

  2. As I started to work again on the problem, I realized that the instanton current could be replaced with a more general current j = *B∧J, or concretely jα = εαβγδ Bβ Jγδ, where B is a vector field and the CP2 projection is 3-dimensional, which it must be in any case. The contractions of j appearing in the field equations vanish automatically with this ansatz.

  3. Almost topological QFT property in turn requires the reduction of the effective boundary terms to Chern-Simons terms: this is achieved by boundary conditions expressing the weak form of electric-magnetic duality. If one generalizes the weak form of electric-magnetic duality to J = Φ *J, one has B = dΦ, and j has a vanishing divergence for 3-D CP2 projection. This is clearly a more general solution ansatz than the one based on the proportionality of j to the instanton current, and it would reduce the field equations in concise notation to Tr(THk) = 0.

  4. Any of the alternative properties of the Kähler current implies that the field equations reduce to Tr(THk) = 0, where T and Hk are shorthands for the Maxwellian energy momentum tensor and the second fundamental form, and the product of tensors is the obvious generalization of the matrix product involving index contraction.
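As a side illustration, the Hodge dual * appearing in conditions like J = Φ *J can be realized numerically for 2-forms. This toy sketch uses the flat 4D Euclidean metric (not the induced metric of the text), for which * squares to +1 on 2-forms, so that every 2-form splits into self-dual and anti-self-dual parts:

```python
import numpy as np
from itertools import permutations

# Hodge star on 2-forms in flat 4D Euclidean space:
# (*J)_{ab} = 0.5 * eps_{abcd} J_{cd}, with ** = +1 on 2-forms.

def levi_civita_4():
    """Rank-4 permutation symbol eps_{abcd}."""
    eps = np.zeros((4, 4, 4, 4))
    for perm in permutations(range(4)):
        inversions = sum(p > q for i, p in enumerate(perm) for q in perm[i + 1:])
        eps[perm] = (-1) ** inversions
    return eps

EPS = levi_civita_4()

def hodge(J):
    """Hodge dual of an antisymmetric 4x4 matrix (2-form components)."""
    return 0.5 * np.einsum('abcd,cd->ab', EPS, J)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
J = A - A.T                   # generic antisymmetric 2-form
J_sd = 0.5 * (J + hodge(J))   # self-dual projection: *J_sd = J_sd
```

For a non-trivial induced metric the star picks up metric determinant and inverse-metric factors, which is where the freedom for conditions like J = k*J with k ≠ 1 (discussed in the next post below) enters.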

Could Einstein's equations emerge dynamically?

For jα satisfying one of the three conditions, the field equations have the same form as the equations for minimal surfaces except that the metric g is replaced with Maxwell energy momentum tensor T.

  1. This raises the question about the dynamical generation of a small cosmological constant Λ: T = Λg would reduce the equations to those for minimal surfaces. For T = Λg the modified gamma matrices would reduce to the induced gamma matrices, and the modified Dirac operator would be proportional to the ordinary Dirac operator defined by the induced gamma matrices. One can also consider a weak form of T = Λg obtained by restricting the consideration to a sub-space of the tangent space, so that the space-time surface is only "partially" a minimal surface; this option is not so elegant, although necessary for other than CP2 type vacuum extremals.

  2. What is remarkable is that T = Λg implies that the divergence of T, which in the general case equals jβJβα, vanishes. This is guaranteed by one of the conditions for the Kähler current. Since also the Einstein tensor has a vanishing divergence, one can ask whether T = κG + Λg could be the general condition. This would give Einstein's equations with a cosmological term besides the generalization of the minimal surface equations. GRT would emerge dynamically from the non-linear Maxwell's theory, although in a slightly different sense than conjectured (see this)! Note that the expression for G involves also second derivatives of the imbedding space coordinates, so that actually a partial differential equation is in question. If the field equations reduce to purely algebraic ones, as the basic conjecture states, it is possible to have Tr(GHk) = 0 and Tr(gHk) = 0 separately, so that also the minimal surface equations would hold true.

    What is amusing is that the first guess for the action of TGD was curvature scalar. It gave analogs of Einstein's equations as a definition of conserved four-momentum currents. The recent proposal would give the analog of the ordinary Einstein equations as a dynamical constraint relating the Maxwellian energy momentum tensor to the Einstein tensor and the metric.

  3. The minimal surface property is physically extremely nice since the field equations can be interpreted as a non-linear generalization of the massless wave equation: something very natural for a non-linear variant of Maxwell action. The theory would also be very "stringy", although the fundamental action would not be the space-time volume. This can however hold true only for Euclidian signature. Note that for CP2 type vacuum extremals the Einstein tensor is proportional to the metric, so that for them the two options are equivalent. For their small deformations the situation changes and it might happen that the presence of G is necessary. The GRT limit of TGD indeed suggests that CP2 type solutions satisfy Einstein's equations with a large cosmological constant and that the small observed value of the cosmological constant is due to averaging and the small volume fraction of regions of Euclidian signature (lines of generalized Feynman diagrams).

  4. For massless extremals and their deformations T = Λg cannot hold true. The reason is that for massless extremals the energy momentum tensor has a component Tvv which is actually quite essential for the field equations, since one has Hkvv = 0. Hence for massless extremals and their deformations T = Λg cannot hold true if the induced metric has Hamilton-Jacobi structure, meaning that guu and gvv vanish. A more general relationship of the form T = κG + Λg can however be consistent with non-vanishing Tvv but requires that the deformation has at most 3-D CP2 projection (CP2 coordinates do not depend on v).

  5. The non-determinism of vacuum extremals suggests for their non-vacuum deformations a conflict with the conservation laws. Indeed, also massless extremals are characterized by non-determinism with respect to the light-like coordinate, but light-likeness saves the situation. This suggests that the transformation of a properly chosen time coordinate of the vacuum extremal to a light-like coordinate in the induced metric, combined with Einstein's equations in the induced metric of the deformation, could allow one to handle the non-determinism.
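The divergence argument of item 2 above can be summarized in one chain of conditions; this is just a compact restatement of the reasoning in the text, not an independent derivation (D denotes the covariant divergence in the induced metric).

```latex
% Each of the three conditions on the Kähler current makes the divergence
% of the Maxwellian energy momentum tensor vanish:
D^{\beta} T_{\beta\alpha} \;=\; j^{\beta} J_{\beta\alpha} \;=\; 0 .
% The Einstein tensor and the metric are covariantly conserved identically:
D^{\beta} G_{\beta\alpha} \;=\; 0 , \qquad D^{\beta} g_{\beta\alpha} \;=\; 0 .
% Hence the ansatz compatible with all of these conservation conditions is
T_{\alpha\beta} \;=\; \kappa\, G_{\alpha\beta} + \Lambda\, g_{\alpha\beta} ,
% which, if the field equations Tr(T H^k) = 0 are purely algebraic, can
% split into the separate conditions Tr(G H^k) = 0 and Tr(g H^k) = 0.
```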

Are complex structure of CP2 and Hamilton-Jacobi structure of M4 respected by the deformations?

The complex structure of CP2 and Hamilton-Jacobi structure of M4 could be central for the understanding of the preferred extremal property algebraically.

  1. There are reasons to believe that the Hermitian structure of the induced metric ((1,1) structure in complex coordinates) could be a crucial property of the preferred extremals among the deformations of CP2 type vacuum extremals. The presence of a light-like direction is also an essential element, and the 3-dimensionality of the M4 projection could be essential. Hence a good guess is that the allowed deformations of CP2 type vacuum extremals are such that the (2,0) and (0,2) components of the induced metric and/or of the energy momentum tensor vanish. This gives rise to conditions implying the Virasoro conditions of string models in quantization:

    gξiξj=0 , gξ*iξ*j=0 , i,j=1,2 .

    Holomorphisms of CP2 preserve the complex structure and Virasoro conditions are expected to generalize to 4-dimensional conditions involving two complex coordinates. This means that the generators have two integer valued indices but otherwise obey an algebra very similar to the Virasoro algebra. Also the super-conformal variant of this algebra is expected to make sense.

    These Virasoro conditions apply in the coordinate space for CP2 type vacuum extremals. One expects that similar conditions hold true also in field space, that is for M4 coordinates.

  2. The integrable decomposition M4(m)=M2(m)+E2(m) of M4 tangent space to longitudinal and transversal parts (non-physical and physical polarizations) - Hamilton-Jacobi structure- could be a very general property of preferred extremals and very natural since non-linear Maxwellian electrodynamics is in question. This decomposition led rather early to the introduction of the analog of complex structure in terms of what I called Hamilton-Jacobi coordinates (u,v,w,w*) for M4. (u,v) defines a pair of light-like coordinates for the local longitudinal space M2(m) and (w,w*) complex coordinates for E2(m). The metric would not contain any cross terms between M2(m) and E2(m): guw=gvw= guw* =gvw*=0.

    A good guess is that the deformations of massless extremals respect this structure. This condition gives rise to the analog of the constraints leading to Virasoro conditions, stating the vanishing of the non-allowed components of the induced metric: guu = gvv = gww = gw*w* = guw = gvw = guw* = gvw* = 0. Again the generators of the algebra would involve two integers, the structure is that of a Virasoro algebra, and a generalization to a super algebra is expected to make sense. The moduli space of Hamilton-Jacobi structures would be part of the moduli space of the preferred extremals and analogous to the space of all possible choices of complex coordinates. The analogs of infinitesimal holomorphic transformations would preserve the modular parameters and give rise to a 4-dimensional Minkowskian analog of the Virasoro algebra. The conformal algebra acting on CP2 coordinates acts in field degrees of freedom for Minkowskian signature.

Field equations as purely algebraic conditions

If the proposed picture is correct, the field equations would reduce basically to purely algebraic conditions stating that the Maxwellian energy momentum tensor has no common index pairs with the second fundamental form. For the deformations of CP2 type vacuum extremals T is a complex tensor of type (1,1) and the second fundamental form Hk a tensor of type (2,0) and (0,2), so that Tr(THk) = 0 holds true. This requires that the second light-like coordinate of M4 is constant, so that the M4 projection is 3-dimensional. For Minkowskian signature of the induced metric, Hamilton-Jacobi structure replaces conformal structure. Here the dependence of CP2 coordinates on only the second light-like coordinate of M2(m) plays a fundamental role. Note that now Tvv is non-vanishing (and light-like). This picture generalizes to the deformations of cosmic strings and even to the case of vacuum extremals.

For background see the chapter Basic Extremals of Kähler action of "Physics in Many-Sheeted Space-time". For details see the article About deformations of known extremals of Kähler action.

Wednesday, June 06, 2012

To what kind of preferred extremals could Maxwell phase correspond?

I became again interested in finding preferred extremals of Kähler action which would have 4-D CP2 and perhaps also M4 projections. This would correspond to the Maxwell phase that I conjectured a long time ago. Deformations of CP2 type vacuum extremals would also belong to this class of extremals. The signature of the induced metric might also be Minkowskian. It however turns out that the solution ansatz requires Euclidian signature and that the M4 projection is 3-D, so that the original hope is not realized.

The following arguments lead to the ansatz.

  1. Effective 3-dimensionality for action (holography) requires that action decomposes to vanishing jαAα term + total divergence giving 3-D "boundary" terms. The first term certainly vanishes (giving effective 3-dimensionality and therefore holography) for

    DβJαβ=jα=0 .

    These are empty space Maxwell equations - something extremely natural. These equations hold true also for the proposed GRT limit.

  2. How to obtain empty space Maxwell equations jα=0? Answer is simple: assume self duality or its slight modification:


    J=*J ,

    holding for CP2 and CP2 type vacuum extremals, or a more general condition

    J=k*J ,

    k is some constant not far from unity. * is the Hodge dual involving the 4-D permutation symbol. k=constant requires that the determinant of the induced metric is, apart from a constant, equal to that of the CP2 metric. It does not require that the induced metric is proportional to the CP2 metric, which is not possible anyway since the M4 contribution to the metric has Minkowskian signature and cannot therefore be proportional to the CP2 metric.

  3. Field equations reduce with these assumptions to equations differing from minimal surfaces equations only in that metric g is replaced by Maxwellian energy momentum tensor T. Schematically:

    Tr(THk)=0 ,

    where T is the Maxwellian energy momentum tensor and Hk is the second fundamental form - a symmetric 2-tensor defined by covariant derivatives of the gradients of imbedding space coordinates.

  4. It would be nice to have minimal surface equations since they are the non-linear generalization of massless wave equations. This is achieved if one has

    T= Λ g .

    Maxwell energy momentum tensor would be proportional to the metric! One would have dynamically generated cosmological constant! This begins to look really interesting since it appeared also at the proposed GRT limit of TGD.

  5. Very schematically, forgetting indices and being sloppy with signs, the expression for T reads as

    T= JJ -g/4 Tr(JJ) .

    Note that the product of tensors is obtained by generalizing matrix product. This should be proportional to metric.

    Self duality implies that Tr(JJ) is just the instanton density and does not depend on metric and is constant.

    For CP2 type vacuum extremals one obtains

    T= -g+g=0 .

    Cosmological constant would vanish in this case.

  6. Could it happen that for deformations a small value of cosmological constant is generated? The condition would reduce to

    JJ= (Λ-1)g .

    Λ must relate to the value of the parameter k appearing in the generalized self-duality condition. This would generalize the defining condition for the Kähler form

    JJ=-g (i2=-1 geometrically)

    stating that the square of Kähler form is the negative of metric. The only modification would be that index raising is carried out by using the induced metric containing also M4 contribution rather than CP2 metric.

  7. Explicitly:

    Jαμ Jμβ = (Λ-1)gαβ .

    Cosmological constant would measure the breaking of Kähler structure.
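The chain of conditions above - self-duality, JJ=-g, and the vanishing of T - can be checked numerically in a toy setting: a self-dual 2-form in flat Euclidean R^4. The specific choice J = e1∧e2 + e3∧e4 and the flat metric are illustrative assumptions of mine, standing in for the CP2 Kähler form.

```python
import numpy as np
from itertools import permutations

# Toy check of T = JJ - (g/4) Tr(JJ) = -g + g = 0 for a self-dual J
# in flat Euclidean R^4 satisfying JJ = -g ("i^2 = -1").  The choice
# J = e1^e2 + e3^e4 and the flat metric are illustrative assumptions.

g = np.eye(4)
J = np.zeros((4, 4))
J[0, 1], J[1, 0] = 1.0, -1.0   # e1 ^ e2 component
J[2, 3], J[3, 2] = 1.0, -1.0   # e3 ^ e4 component (self-dual combination)

def perm_sign(p):
    """Sign of a permutation of (0,1,2,3) by counting inversions."""
    p, s = list(p), 1
    for a in range(4):
        for b in range(a + 1, 4):
            if p[a] > p[b]:
                s = -s
    return s

# 4-D permutation symbol and Hodge dual (*J)_ab = (1/2) eps_abcd J^cd
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = perm_sign(p)
star_J = 0.5 * np.einsum('abcd,cd->ab', eps, J)   # equals J: self-dual

JJ = J @ J                                  # matrix square: JJ = -g
# T_mn = J_ma J_n^a - (1/4) g_mn J_ab J^ab, Euclidean index raising
T = J @ J.T - 0.25 * g * np.sum(J * J)      # vanishes identically
```

In Minkowskian signature the analogous statement fails, which is in line with the text's observation that the ansatz forces Euclidian signature.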

One could try to develop the ansatz to a more detailed form. The most obvious guess is that the induced metric is, apart from a constant conformal factor, the metric of CP2. This would guarantee self-duality apart from a constant factor and jα=0. In complex CP2 coordinates the metric would be a tensor of type (1,1) whereas the CP2 Riemann connection would have only purely holomorphic or anti-holomorphic indices. Therefore the CP2 contributions to Tr(THk) would vanish identically. The M4 degrees of freedom however bring in a difficulty. The M4 contribution to the induced metric should be proportional to the CP2 metric and this is impossible due to the different signatures. The M4 contribution to the induced metric breaks its Kähler property.

A more realistic guess based on the attempt to construct deformations of CP2 type vacuum extremals is the following.

  1. Physical intuition suggests that M4 coordinates can be chosen so that one has an integrable decomposition to longitudinal degrees of freedom parametrized by two light-like coordinates u and v and to transversal polarization degrees of freedom parametrized by a complex coordinate w and its conjugate. The M4 metric would reduce in these coordinates to a direct sum of longitudinal and transverse parts. I have called these coordinates Hamilton-Jacobi coordinates.

  2. w would be a holomorphic function of CP2 coordinates and therefore satisfy the massless wave equation. This would give hopes for a rather general solution ansatz. u and v cannot be holomorphic functions of CP2 coordinates. Unless either u or v is constant, the induced metric would have contributions of type (2,0) and (0,2) coming from u and v which would break the Kähler structure and complex structure. These contributions would give a non-vanishing contribution to all minimal surface equations. Therefore either u or v is constant: the coordinate line for the non-constant coordinate - say u - would be analogous to the M4 projection of a CP2 type vacuum extremal.

  3. With these assumptions the induced metric would remain a (1,1) tensor and one might hope that the Tr(THk) contractions vanish for all variables except u because there are no common index pairs (this holds if the non-vanishing Christoffel symbols of H involve only holomorphic or anti-holomorphic indices in CP2 coordinates). For u one would obtain a massless wave equation expressing the minimal surface property.

  4. The induced metric would contain only the contribution from the transversal degrees of freedom besides the CP2 contribution. The Minkowski contribution however has rank 2 as a CP2 tensor and cannot be proportional to the CP2 metric. It is however enough that its determinant is proportional to the determinant of the CP2 metric with a constant proportionality coefficient. This condition gives an additional non-linear condition on the solution. One would have a wave equation for u (also w and its conjugate satisfy the massless wave equation) and the determinant condition as an additional condition.

    By the linearity of the determinant with respect to its rows, the determinant condition reduces to a sum of conditions in which 0, 1, or 2 rows are replaced by the transversal M4 contribution to the metric, given that the M4 metric decomposes to a direct sum of longitudinal and transversal parts. Derivatives with respect to a particular CP2 complex coordinate appear linearly in this expression; they can depend on u via the dependence of the transversal metric components on u. The challenge is to show that this equation has non-trivial solutions.
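The row-by-row expansion invoked above can be illustrated with toy matrices; random numbers stand in for the actual induced metric components, purely for illustration.

```python
import numpy as np

# Toy illustration of the row-multilinearity of the determinant used
# above: for a perturbation h supported on a single row, det(g0 + h)
# equals det(g0) plus the determinant with that row of g0 replaced by
# the corresponding row of h.  With h on several rows one gets a sum
# over all subsets of replaced rows.  Random matrices are stand-ins
# for the actual induced metric, purely for illustration.

rng = np.random.default_rng(0)
g0 = rng.normal(size=(4, 4))
h = np.zeros((4, 4))
h[0] = rng.normal(size=4)            # perturbation on row 0 only

g_swapped = g0.copy()
g_swapped[0] = h[0]                  # row 0 of g0 replaced by the perturbation

lhs = np.linalg.det(g0 + h)
rhs = np.linalg.det(g0) + np.linalg.det(g_swapped)
# lhs and rhs agree to numerical precision
```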

What makes the ansatz attractive is that special solutions of Euclidian Maxwell empty space equations are in question, the equations reduce to non-linear generalizations of Euclidian massless wave equations for the Minkowskian coordinate variables, and a cosmological constant pops up dynamically. These properties hold true also for the GRT limit of TGD that I discussed here.

Tuesday, June 05, 2012


Sean Carroll in Cosmic Variance had a posting related to the low entropy of the universe at the big bang. The posting is motivated by a criticism of Carroll's own views. Carroll believes that the low entropy of the very early universe is a problem and that the notion of multiverse somehow resolves it. Carroll has a naive view about the evolution of the universe: just Hamiltonian and unitary time evolution and that's all. No questions about problems with general coordinate invariance and symmetries.

Carroll has developed a rhetoric Occam's razor argument about the simplicity of a theory. Even a lawyer would admire it. Since the multiverse interpretation does not involve wave function "collapse" (the quotation marks are due to Carroll) it is simpler as a theory, and we have a good reason to accept it. I have learned that a standard manner to build a simple theory is to throw out those things which are difficult to understand. Biology, neuroscience, and consciousness belong to this unlucky stuff in recent day theoretical physics.

Lubos comments on the posting of Carroll. Lubos does not see anything problematic in the low entropy of the very early Universe: the second law forces it by definition. Lubos in his characteristic manner takes recent thermodynamics as the final word of science and concludes that Carroll and everyone disagreeing with him does not understand thermodynamics and is a pseudo-scientist.

The second issue discussed is the notion of multiverse and the many worlds interpretation of quantum mechanics. Carroll assumes a cosmological multiverse in his argument for the low entropy as well as the closely related anthropic principle needed to make at least some sense of the multiverse. Lubos believes in the multiverse because he believes in inflation (and M-theory!). Lubos does not believe in the many worlds interpretation since he does not believe that Schrödinger amplitudes - or more generally quantum states - are "real". I would justify my disbelief in the Everett interpretation by the simple fact that the multiverse poetry about splitting quantum states does not, to my best understanding, have any translation to any existing mathematics. This does not however require giving up the notion of quantum state if one is ready to return to the roots and ask questions about the nature of time.

The discussion revolves around many notions and in my opinion a lot of the assumptions leading to the recent crisis remain implicit in the discussion. It is amusing to see that most problems relate to an uncritical belief in the theoretical paradigms created during the last decades and to notions which should be challenged. The fatal turning point was probably the introduction of GUTs four decades ago, bringing in untestable assumptions (such as the extension of the gauge group). The inflationary scenario is a highly hypothetical scenario leading to the multiverse. A further hypothetical element is the standard view about SUSY. And finally the landscape summarizing the outcome of superstring models.

In the following I summarize the TGD point of view on the issues discussed. I hope that the reader has familiarity with TGD.

Challenging the status of QFT

Both Carroll and Lubos take the QFT approach for granted when one talks about low energy physics. TGD forces one to modify this view dramatically.

  1. The first element is the notion of many-sheeted space-time and the reduction of the dynamics of classical fields to the dynamics of preferred extremals of Kähler action (just four field like variables - I could say something about simplicity here!). The notion of many-sheeted space-time allows one to circumvent the basic objection stating that one cannot have a linear superposition of various fields independently, not even for a single field. The point is that only the effects - classically forces - caused by the fields superpose. Particles can have topological sum contacts to a large number of space-time sheets and thus they experience superpositions of the corresponding forces. This modifies profoundly the existing view. Even the description of Maxwellian fields created by a current distribution can be translated to a many-sheeted description by replacing the sum of nearby and radiation fields with a union of distinct space-time sheets.

  2. This means giving up the standard view that some GUT with a huge number of field variables describes the low energy limit of THE theory. This picture is just wrong if TGD is correct, and I believe that Occam favors TGD. GUTs are also responsible for many of the difficulties in theoretical particle physics and cosmology. Standard SUSY is replaced with something dramatically simpler and having a very different physical interpretation. In this framework it is difficult to imagine that the whole of CERN would be trying to identify the point of a very high-dimensional parameter space at which the Universe is believed to be lurking. It is strange that SUSY enthusiasts do not realize that just ending up in this situation tells that something must be wrong. The same applies of course to landscapeologists.

Views about what quantum theory is

Also the view about what quantum theory is must be modified in the TGD framework. The original motivation was simple: the path integral approach simply failed in the TGD context.

  1. TGD generalizes Einstein's geometrization program beyond classical fields: the entire quantum theory must be geometrized in terms of the Kähler geometry of the "world of classical worlds" (WCW). Quantum states are interpreted as classical spinor fields in WCW. These geometric generalizations of Schrödinger amplitudes are definitely something very real, so that I am forced to disagree with the Copenhagen interpretation and Lubos. This also leads to a beautiful geometrization of fermionic statistics in terms of the Clifford algebra of WCW and to "quantization without quantization". At the space-time level one has free induced spinor fields and quantization for these. Bosonic emergence means that all states are obtained by using fermions as building bricks. This is also an enormous simplification, and if this picture is correct, the standard QFT limit need not have much sense except as an approximate description.

  2. Zero energy ontology (ZEO), in which quantum theory can be seen as a square root of thermodynamics. Quantum states are characterized by entanglement between the positive and negative energy parts of zero energy states and form what could be seen as a complex square root of a density matrix. In this framework the "unitary time evolution generated by Hamiltonian" assumed by Carroll is a hopelessly simplistic and simply wrong view about the real physical situation. In the p-adic context the notion of unitary time evolution generated by a Hamiltonian is simply nonsense. Zero energy ontology has powerful implications for the notion of Feynman diagrams, which are topologized and geometrized. In particular, the finiteness of the amplitudes is manifest since also virtual particles consist of massless on mass shell wormhole throats. There are also intriguing connections to the twistor approach.

  3. The TGD based view about quantum jump and state function reduction and the extension of physics to a theory of consciousness. The new view about the relationship of experienced time and geometric time allows one to have the notion of quantum jump without logical contradictions, and one can resolve the basic paradox of quantum measurement theory without giving up the notion of objective reality, but replacing it with a quantum universe recreated in each quantum jump. Free will and consciousness have a place in the theory.

    Understanding the anatomy of the quantum jump in ZEO leads to a new view about the U-matrix, M-matrix and S-matrix, and the resulting picture is extremely simple. The U-matrix relates two state bases prepared with respect to either light-like boundary of the CD. Time evolution by quantum jumps means a sequence of reductions with respect to these two state bases: time flip-flop might be the proper term. Quantum classical correspondence can be used to argue that at the space-time level a given system is not able to detect the change of the arrow of imbedding space geometric time in this alternating sequence of state function reductions.

    Negentropy Maximization Principle, the fundamental principle of consciousness theory generalizing quantum measurement theory, implies the second law in a generalized form required by ZEO, the new view about time, and the new view about entanglement due to the possibility of p-adic physics identified as the physics of cognition.
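The "complex square root of a density matrix" mentioned above can be made concrete in a finite-dimensional toy model. The dimension, the probabilities and the random unitary below are my illustrative assumptions, not derived from TGD: the point is only that M = ρ^(1/2) S with unitary S satisfies M M† = ρ, so M carries both thermodynamical (ρ) and unitary (S) information.

```python
import numpy as np

# Toy model of an M-matrix as a "complex square root of a density
# matrix": M = sqrt(rho) S with S unitary, so that M M^dagger = rho.
# The dimension, probabilities and random unitary are illustrative
# assumptions, not derived from TGD.

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.2])            # eigenvalues of a density matrix
rho = np.diag(p).astype(complex)         # trace 1, positive

# random unitary S via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
S, _ = np.linalg.qr(A)

M = np.diag(np.sqrt(p)) @ S              # complex square root of rho
# M @ M.conj().T reproduces rho; S alone is unitary
```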

Generalization of thermodynamics

The generalization of thermodynamics implied by TGD means strong deviations from the naive belief of Lubos that there is nothing to add to what Boltzmann gave to us.

  1. This includes the definition of entropy in ZEO as a scale dependent notion. This is very relevant when one wants to speak about entropy in the cosmological context: entropy characterizes the zero energy state and therefore the corresponding CD rather than a time=constant snapshot of cosmology.

  2. The distinction between subjective time and geometric time demonstrates that many problems of cosmology related to entropy are pseudo problems. Big Bang is a temporal boundary, not a moment of creation: quantum jumps are moments of re-creation and can be localized to any CD anywhere in the imbedding space. One should not assign "before" and "after" to geometric time. Colleagues still refuse to consider seriously the possibility that subjective time and geometric time might be different, although the debate between Bohr and Einstein revolved around this difference (of which neither of them was aware). Presumably the materialistic dogma is the basic explanation for this short-sightedness.

  3. The second law holds true but the reversal of the arrow of geometric time is possible at the imbedding space level: phase conjugate laser beams and many self-organization processes in biology would represent natural candidates in this respect.

  4. Genuinely negentropic entanglement, stable with respect to Negentropy Maximization Principle (NMP), is possible. Negentropic entanglement is the main characterizer of what it is to be living; its space-time correlates and those of thermodynamics were discussed in an earlier posting.

TGD analogs for multiverse and inflation

The notions of many worlds, multiverse and landscape have analogies in the TGD Universe, but there is nothing problematic in these concepts.

  1. The many worlds view corresponds to the identification of WCW spinor fields as quantum superpositions of 3-surfaces, allowing by holography an interpretation as superpositions of 4-surfaces. There is however no need to assume the mysterious repeated splitting of the quantum states.

  2. The huge vacuum degeneracy of Kähler action implying 4-D spin glass degeneracy is something analogous to the multiverse mathematically but free of interpretational difficulties. The non-quantum fluctuating variables - zero modes - include the induced Kähler field with a completely well-defined physical interpretation, and serve as the macroscopic classical variables essential in quantum measurement theory.

  3. In M-theory spontaneous compactification is needed to get something which could be called a physically sensible theory. This leads to the landscape catastrophe spoiling completely the original idea that the dynamics of strings gives gravitation by bringing in gravitational fields as dynamical quantities at the target space level. Imbedding space is not dynamical in the TGD Universe. This saves from the landscape. Imbedding space is fixed by standard model quantum numbers, and the conjecture is that imbedding space is unique just from the mathematical existence of WCW and has a number theoretic interpretation. Already in loop spaces Kähler geometry is unique and requires maximal Kac-Moody symmetry algebra.

  4. Both Lubos and Carroll believe in inflation, in turn leading to the multiverse. The inflationary scenario is however highly speculative and plagued by numerous difficulties. The recent findings challenging the notion of a galactic halo of dark matter do not help in this respect, and again one must ask whether the whole approach is wrong in some manner.

    In the TGD framework there is no inflation. The critical Robertson-Walker cosmologies are unique apart from their duration and correspond to accelerating expansion due to a negative pressure term. Cosmic strings explain accelerating expansion at a more microscopic level. Cosmic strings dominate during primordial cosmology. Later they expand and give rise to cosmic magnetic fields. They also decay to ordinary matter and dark matter so that the inflaton field is eliminated, as are so many other fields. Their magnetic tension is responsible for the negative pressure term in the long length scale description in terms of Robertson-Walker cosmology. Their magnetic energy corresponds to dark energy.

Sunday, June 03, 2012

Dark magnetism

I have been waiting with fear in my heart for the moment when someone boldly and independently represents the idea about dark matter as phases with large Planck constant at magnetic flux tubes as a basic structure of the Universe - or something picking up suitable pieces of this vision. Tom Banks, the teacher of Lubos, has independently represented two key ideas of TGD as his own (hyperfinite factors and causal diamonds): it is amazing how miserable web skills a prominent theoretician can have;-). Susskind as a veteran has understandably rather poor web skills and has proposed p-adic physics as his own discovery applied to the problems of the multiverse.

In the TGD framework dark energy corresponds to Kähler magnetic energy, and primordial cosmology corresponds to magnetic flux tubes, which are exactly string like objects having 1-D M4 projection. Later the flux tube projections thicken and the magnetic fields get weaker although the fluxes are conserved. The model leads to a fractal model of the Universe explaining the formation of galaxies and even the formation of stars and planetary systems and of course - the generation of cosmic magnetic fields, which is a mystery in the standard approach. It also replaces the inflationary scenario with a new one in which no exponential inflation takes place: negative pressure corresponds to magnetic tension and thus has a completely natural physical interpretation. The inflaton field is replaced with the magnetic field, and the transformation of magnetic energy identified as dark energy to particles generates ordinary matter and dark matter. The path of inflationary theories has been full of tortures but now inflation is in especially grave difficulties because the experimental evidence for the absence of a spherical distribution of galactic dark matter is accumulating. This I told in a previous posting.
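The statement that thickening flux tubes conserve flux while the field weakens (and magnetic energy is liberated) is simple to quantify. All numbers below are arbitrary illustrations of mine, not values from the text.

```python
import numpy as np

# Toy quantification of flux tube thickening: conserved flux
# Phi = B * pi R^2 implies B ~ 1/R^2, and the magnetic energy per unit
# length, (B^2 / 2 mu0) * pi R^2 = Phi^2 / (2 mu0 pi R^2), decreases
# as the tube thickens - the liberated energy can go to particles.
# All numbers are arbitrary illustrations.

mu0 = 4e-7 * np.pi              # vacuum permeability (SI)
Phi = 1e-6                      # conserved magnetic flux (Wb), illustrative

def B_and_energy_per_length(R):
    B = Phi / (np.pi * R**2)                     # field from conserved flux
    E_per_length = Phi**2 / (2 * mu0 * np.pi * R**2)
    return B, E_per_length

B1, E1 = B_and_energy_per_length(1e-3)
B2, E2 = B_and_energy_per_length(1e-2)           # tube thickened by 10x
# B and the energy per unit length both drop by a factor of 100
```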

In a recent New Scientist the notion of dark magnetism appeared and the obvious question was whether the notion was finally "independently discovered". Unfortunately, I do not have access to the New Scientist article. Maybe someone has and could kindly send it to me. I found however an article by Jimenez and Maroto about the topic from the web and a paper by Martinez et al summarizing the development of the concept. Bacry proposed the notion around 1993. At that time I had understood the role of cosmic strings but had not discovered the hierarchy of Planck constants and the vision about dark matter. In Bacry's model the apparent number of particles in the magnetic field would be larger than the actual one so that in this sense one can speak about apparently existing dark matter. This is hopelessly tricky and has nothing to do with the TGD based notion of dark magnetism.

Mark Williams kindly told me how to get the dark magnetism article from the New Scientist web page. I read the article discussing the work of Beltran and Maroto.

  1. The authors propose a modification of Maxwellian electrodynamics obtained by giving up gauge invariance realized in terms of the Lorentz gauge condition, interpreting the scalar component of the gauge potential as a dynamical degree of freedom giving rise to an effective cosmological constant in the energy momentum tensor. Inflation is assumed to be responsible for the generation of the scalar modes. Giving up gauge invariance at the level of Maxwell equations - even in long length scales - looks very ad hoc and very bad to me.

  2. The modified scalar modes are claimed to generate also magnetic fields, and it is a well-known fact that the cosmos is full of magnetic fields of unknown origin. This looks good.

  3. I could have made a bet that the accelerating expansion would have been explained in terms of magnetic tension giving rise to "negative pressure" and vacuum energy identified as magnetic energy, but if I understood correctly this was not done. Of course, this is only one description among many provided by TGD. In shorter length scales one has critical cosmologies predicting negative pressure automatically from imbeddability. The counterparts of black holes at the GRT limit correspond to regions of space-time with Euclidian signature and to a non-vanishing cosmological constant, which can give via averaging rise to the observed small value of the cosmological constant.
It is interesting to compare this with the TGD based view about the situation. In particular, the issue of gauge invariance is interesting.

  1. In TGD gauge invariance is trivially true and also gauge fixing is trivial for classical gauge fields. One can assign to the Kähler gauge potential U(1) gauge transformations but this degree of freedom is trivially eliminated because of the geometrization of the gauge fields.

  2. The symplectic transformations of CP2 induce effective U(1) gauge transformations of the Kähler gauge potential. These transformations are not gauge symmetries nor even symmetries of the Kähler action (they are however isometries of the "world of classical worlds") since also the induced metric describing classical gravitation changes and the extremal property is not preserved.

  3. There is a huge number of vacuum extremals: any space-time surface whose CP2 projection is a Lagrangian manifold is a vacuum extremal, and symplectic transformations give new vacuum extremals. Small deformations of these represent non-vacuum states classically. A symplectic transformation means a concrete deformation of the space-time surface so that a gauge symmetry is not in question. The transformation is an approximate symmetry of the action associated with the 4-D spin glass degeneracy, broken only by classical gravity.

  4. The simplest scenario in zero energy ontology is that all particles, including the photon, have at least a small mass, and the size scale of the causal diamond containing the photon defines the mass scale of the photon. In the TGD framework this need not mean loss of gauge invariance since the basic quanta, which are fermions at the throats of wormhole contacts, are exactly massless. Mass is associated with many-fermion states purely kinematically because the massless building bricks do not have parallel four-momenta.

  5. The authors talk about scalar potentials (voltage) and currents in cosmological scales. This is actually natural in the fractal Universe of TGD too. Magnetic flux tubes can carry longitudinal electric fields and one has a kind of electric circuitry in all scales. The applications of this picture to biology are of special interest and I have discussed them in recent postings.

  6. The physicality of the third polarization usually means massivation. The simplest TGD based scenario predicts that also photons have a very small mass.
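The kinematical point made above - massless building bricks with non-parallel momenta sum to a massive state - is easy to check explicitly. The energies and the angle below are arbitrary illustrative values.

```python
import numpy as np

# Kinematic check: two massless four-momenta that are not parallel sum
# to a four-momentum with positive invariant mass squared,
# m^2 = (p1 + p2)^2 = 2 E1 E2 (1 - cos theta),
# vanishing only for parallel momenta.  The energies and the angle are
# arbitrary illustrative values.

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric (+,-,-,-)

def minkowski_sq(p):
    return p @ eta @ p

E1, E2, theta = 1.0, 2.0, 0.01           # small angle -> small mass
p1 = np.array([E1, 0.0, 0.0, E1])                                 # massless
p2 = np.array([E2, E2 * np.sin(theta), 0.0, E2 * np.cos(theta)])  # massless

m2 = minkowski_sq(p1 + p2)               # positive, small for small theta
```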
Despite the apparent resemblances, there are deep differences. TGD actually predicts a much more radical modification of the gauge field concept than the proposed theory, which breaks gauge invariance explicitly - something that looks rather ugly to me. Here I agree with Sean Carroll. A second problem is that the authors try to keep in the wagon all the heavy load (such as inflation) which has accumulated during the last four decades following the acceptance of the GUT paradigm and all that followed from it. The modification of Maxwell's theory provided by TGD does not break gauge invariance.
  1. At the classical level both electroweak and color gauge fields are geometrized in terms of CP2 spinor connection and Killing vector fields. The basic objection is that the implied huge reduction in degrees of freedom is unphysical. This objection is circumvented in many-sheeted space-time: superposition of classical fields is replaced with superposition of their effects. Particles can condense on several space-time sheets simultaneously and the fields of separate space-time sheets effectively sum up.

  2. Field quanta, or rather Feynman diagrams, are replaced with generalized Feynman diagrams consisting of regions of space-time with Euclidian signature of the induced metric. In ZEO also virtual lines consist of on mass shell massless wormhole throats, giving extremely powerful kinematical constraints on loops and guaranteeing finiteness.

To sum up, there is still a long way to the discovery of the geometrization of classical fields in terms of sub-manifold gravity and from this to the discovery that quantum theory can be reduced to the (spinor) geometry of the world of classical worlds! The basic question is how to communicate this idea to colleagues who read only respected journals? I try to stay patient!;-).