Monday, May 02, 2016

Very strong support for TGD based model of cold fusion from the recent article of Holmlid and Kotzias

I received from Jouni a very helpful comment to an earlier blog posting about the work of Prof. Leif Holmlid related to cold fusion, comparing Holmlid's model with the TGD inspired model (see also the article). This helped me to find a new article by Holmlid and Kotzias with the title "Phase transition temperatures of 405-725 K in superfluid ultra-dense hydrogen clusters on metal surfaces", published towards the end of April and providing very valuable information about the superdense phase of hydrogen/deuterium that Holmlid postulates to be crucial for cold fusion (see this).

The postulated superdense phase would have properties surprisingly similar to the phase postulated in TGD to be formed by dark magnetic flux tubes carrying dark proton sequences, which generate dark beta stable nuclei by dark weak interactions. My original intuition was that this phase is not superdense but has a density nearer to the ordinary condensed matter density. The density however depends on the value of Planck constant, and with a Planck constant of order mp/me ≈ .94 × 2^11 ≈ 1880 times the ordinary one, one obtains the density reported by Holmlid, so that the models become surprisingly similar. The earlier representations were mostly based on the assumption that the distance between dark protons is in the Angstrom range rather than the picometer range and thus longer by a factor of order 32. The modification of the model is straightforward: one prediction is that radiation with energy scale of 1-10 keV should accompany the formation of dark nuclei.

In fact, there are also similarities that I did not know about!

  1. The article tells that the structures formed from hydrogen/deuterium atoms are linear string like structures: this was completely new to me. The support comes from the detection of what are interpreted as decay products of these structures, resulting from fragmentation in the central regions of the structures. What is detected is the time-of-flight distribution of the fragments. In the TGD inspired model the magnetic flux tubes carrying dark proton/D sequences giving rise to dark nuclei are also linear structures.

  2. The reported superfluid (superconductor) property and the detection of the Meissner effect for the structures were also big news to me and conform with the TGD picture allowing dark supraphases at flux tubes. Superfluid/superconductor property requires that protons form Cooper pairs. The proposal of Holmlid and Kotzias that Cooper pairs are pairs of protons orthogonal to the string like structure corresponds to the model of high Tc superconductivity used in the TGD inspired model of quantum biology, which assumes a pair of flux tubes with the tubes containing the members of the Cooper pairs. High Tc would be due to the non-standard value of heff=n× h. This finding would be a rather direct experimental proof for the basic assumption of TGD inspired quantum biology (see this).

  3. In the TGD model it is assumed that the density of protons at the dark magnetic flux tube is determined by the value of heff. Also ordinary nuclei are identified as nuclear strings, and the density of dark protons would be the linear density of protons for ordinary nuclear strings scaled down by the inverse of heff - that is by the factor h/heff=1/n.

    If one assumes that a single proton in an ordinary nuclear string occupies a length given by the proton Compton length, equal to (me/mp) times the electron Compton length, and if the length occupied by a dark proton is 2.3 pm - very nearly the electron Compton length 2.4 pm - in the ultra-dense phase of Holmlid, the value of n must be rather near to n ≈ mp/me ≈ 2^11 ≈ 2000, the ratio of the Compton lengths of electron and proton. The physical interpretation would be that the p-adic length scale of the proton is scaled up to essentially that of the electron, which from p-adic mass calculations corresponds to the p-adic prime M127=2^127-1 (see this). The ultra dense phase of Holmlid would correspond to dark nuclei with heff/h ≈ 2^11.

    My earlier intuition was that the density is of the order of the ordinary condensed matter density. If the nuclear binding energy scales as 1/heff (scaling like the Coulomb interaction energy), as assumed in the TGD model, the nuclear binding energy per nucleon would scale down from about 7 MeV to about 3.5 keV for k=127. This energy scale is the same as that of the Coulomb interaction energy at the distance of 2.3 pm in Holmlid's model (about 5 keV). It must be emphasized that larger values of heff are possible in the TGD framework and indeed suggested by TGD inspired quantum biology. The original, too restricted hypothesis was that the values of n come as powers of 2^11. (A numerical check of these scalings is sketched right after this list.)

  4. In the TGD based model the scaled down dark nuclear binding energy would (more than) compensate for the Coulomb repulsion. The laser pulse would induce a phase transition increasing the density of protons and also increasing the Planck constant, making the protons dark. This would lead from a state of free protons to one consisting of dark, purely protonic nuclei, which in turn transform by dark weak interactions to beta stable nuclei and finally to ordinary nuclei, liberating essentially the ordinary nuclear binding energy. In the TGD based model the phase transition would give rise to charge separation and would be highly analogous to the transition occurring in Pollack's experiments.
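
To make the scalings above concrete, here is a minimal numerical check (my own sketch, using standard particle data; the variable names are of course just illustrative):

    # Minimal numerical check of the scalings discussed above (a sketch using
    # standard particle data; energies in MeV, lengths via hbar*c = 197.3 MeV*fm).
    from math import pi

    m_e, m_p, hbar_c = 0.511, 938.3, 197.3        # electron mass, proton mass, hbar*c

    # Compton lengths lambda = h/(m*c) = 2*pi*hbar*c/(m*c^2), converted to pm.
    lambda_e = 2 * pi * hbar_c / m_e * 1e-3       # ~2.43 pm, close to Holmlid's 2.3 pm spacing
    lambda_p = 2 * pi * hbar_c / m_p * 1e-3       # ~1.3e-3 pm

    # Scaling heff/h = n needed to stretch the proton length scale to that of the electron.
    print(lambda_e / lambda_p, m_p / m_e, 2**11)  # ~1836, ~1836, 2048

    # Nuclear binding energy per nucleon scaled down by 1/n for n = 2^11.
    print(7.0 / 2**11 * 1e3, "keV")               # ~3.4 keV, i.e. in the 1-10 keV X-ray range

The printed numbers reproduce the orders of magnitude quoted above: the Compton length ratio is about 1836 ≈ 2^11, and the scaled down binding energy per nucleon falls in the 1-10 keV X-ray range.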

It seems that the model of Holmlid and the TGD based model are very similar, and Holmlid's experimental findings support the vision about a hierarchy of dark matter as phases of ordinary matter labelled by the value of heff/h=n. There are also other anomalies that might find an explanation in terms of dark nuclei with n ≈ 2^11. X rays from the Sun have been found to induce a yearly variation of nuclear decay rates correlating with the distance of the Earth from the Sun (see for instance this, this and this).
  1. One possible TGD based explanation relies on the nuclear string model. Nucleons are assumed to be connected by color flux tubes, which are usually neutral but can also be charged. For instance, a proton plus a negatively charged flux tube connecting it to the neighboring nucleon behaves effectively as a neutron. This predicts exotic nuclei with the same chemical properties as ordinary nuclei but possibly with different statistics. X rays from the Sun could induce transitions between ordinary and exotic nuclei, affecting the measured nuclear reaction rates, which are averages over all states of the nuclei. A scaled down variant of the gamma ray spectroscopy of ordinary nuclei would provide an experimental proof of the TGD based model.

  2. The fact that the energy scale is around 3 keV suggests that X rays could generate transitions of dark nuclei. If so, the transformations of dark nuclei to ordinary ones would affect the measured nuclear transition rates. There are also other anomalies (for instance those reported by Rolfs et al, for references see the article), which might find an explanation in terms of the presence of dark variants of ordinary nuclei.

For background and references see the chapter Cold fusion again of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy" or article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Confirmation of Santilli's detection of antimatter galaxies via a telescope with concave lenses: really?

I encountered in Facebook a really bizarre sounding title reading The incredible pictures scientists say prove invisible alien entities ARE here on Earth (see this) and just for curiosity decided to add one click to the web page in question (meaning higher income from ads), knowing that this is just what they want me to do! The story involves aliens spying on us, so the street credibility index of the story reduced to zero. The tool to detect the spies would be Santilli's telescope using concave lenses. Santilli, whose work is familiar to me, also talks about two types of invisible terrestrials detected by his telescope. It would be easy to ridicule but let us be patient.

An earlier article with the title Apparent detection of antimatter galaxies via a telescope with concave lenses (see this) reports a detection of antimatter galaxies. There is also an article with the title "Confirmation of Santilli's detection of antimatter galaxies via a telescope with concave lenses" published in American Journal of Modern Physics claiming an independent observation of antimatter galaxies, antimatter asteroids, and antimatter cosmic rays with Santilli's telescope (see this). These articles say nothing about aliens spying on us.

Since I suffer from a pathological trait of taking half-seriously even the weirdest stories, I decided to learn what Santilli's telescope using concave lenses might mean. An ordinary telescope uses convex lenses (see this): the light rays coming from the other side converge to form a picture of the source. For a concave lens the light rays coming from the other side diverge, so that a concave lens does not sound like a good idea for detecting light coming from distant objects.

It is however claimed that Santilli's telescope detects light sources in darkness. This is only possible if the index of refraction n=c/v, characterizing the medium via the ratio of the light velocity in vacuum to the velocity of light in the medium, changes sign. From Snell's law n1 sin(θ1) = n2 sin(θ2) follow the basic facts about lenses (see this). It is possible to construct lenses which have a negative index of refraction so that a concave lens behaves like a convex one. Presumably this is not the case now, since according to the existing theory ordinary light would then have a negative index of refraction in the lens (unless it is somehow transformed when arriving at the lens).
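
As a purely optical illustration (nothing Santilli or TGD specific, just textbook Snell's law with the sign convention used for negative-index metamaterials), one can check how the refracted ray flips to the other side of the normal when n2 changes sign:

    import numpy as np

    def refracted_angle(theta1_deg, n1, n2):
        # Snell's law n1*sin(theta1) = n2*sin(theta2); a negative n2 places the
        # refracted ray on the opposite side of the normal (negative refraction).
        s = n1 * np.sin(np.radians(theta1_deg)) / n2
        if abs(s) > 1:
            return None   # total internal reflection, no refracted ray
        return np.degrees(np.arcsin(s))

    print(refracted_angle(30.0, 1.0, 1.5))    # ~19.5 deg: ordinary glass, ray bends towards the normal
    print(refracted_angle(30.0, 1.0, -1.5))   # ~-19.5 deg: negative index, ray bends to the other side

This sign flip is what would be needed for a concave lens to focus like a convex one.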

Concerning the theoretical arguments Santilli makes several claims, which do not make sense to me.

  1. The photons are identified as antimatter photons assumed to have negative energies. These antimatter photons are assumed to have a repulsive gravitational interaction with ordinary matter. The claim is that this implies a negative index of refraction. This does not make sense since the gravitational interaction is far too weak to cause refraction: the electromagnetic interaction must be in question. Antimatter photons are claimed to propagate with superluminal speeds and to arrive instantaneously from remote galaxies. The assumption is in dramatic conflict with what we know about antimatter.

  2. The refractive index is claimed to be a property of light. This does not make sense: the refractive index characterizes the medium. Its sign however changes when the energy of the photon changes sign. By Snell's law the sign of the refractive index must change as the light enters the concave lens. This would require that Santilli's antimatter photons transform to ordinary photons.

These arguments are more than enough for dooming the claims of Santilli as pseudoscience, but what if there is something in it? The experimental finding is so simple that if it is not an artefact of poor experimentation, some interesting - possibly new - physics could be involved. So let us look at the situation from a different point of view, forgetting the theory behind it and taking seriously the claimed observations. Could one explain the findings in the TGD framework?

Zero energy ontology (ZEO) is one of the cornerstones of TGD and could indeed explain the claims of Santilli and colleagues. In ZEO zero energy states are pairs of positive and negative energy states at the opposite light-like boundaries of causal diamonds (CDs) forming a scale hierarchy. Zero energy states are the counterparts of physical events in standard ontology.

  1. ZEO predicts that the arrow of time can have both directions. In ZEO based quantum measurement theory state function reductions occur at either boundary of the CD. Conscious entities correspond to sequences of reductions leaving everything unaffected at one boundary (Zeno effect) but changing the situation at the opposite boundary, in particular increasing its distance from the fixed boundary, which gives rise to the experienced flow of time. The first reduction to the opposite boundary replaces the zero energy state with its time reversed one. This can happen also for photons.

  2. The particles with a non-standard arrow of time are not antimatter (I have considered also this possibility since it might explain the experimental absence of antimatter) but propagate in the reverse time direction and have negative energies. There is considerable evidence for this notion. Phase conjugate laser beams known to obey the second law in the reverse time direction would be one example. There are also old observations of Akimov and Kozyrev claiming that their instrument gives three images of a distant astrophysical object: one from the past, one from the present, and one from the future. I do not know about the construction of Kozyrev's instrument but one can ask whether it involved concave lenses. Also the notion of syntropy introduced by the Italian physicist Fantappie conforms with this picture. Syntropy is in a central role in biology, where time reversed radiation would play a key role.

  3. Since the sign of the energy is negative for phase conjugate photons, their refractive index is negative. n2 for the concave lens and n1 for the medium behind the lens must have opposite signs to explain the claims of Santilli and colleagues. This happens if the incoming negative energy photons from the geometric future are transformed to positive energy photons at the surface of the lens. This process would represent a time reflection of the incoming negative energy photons to ordinary positive energy photons propagating inside the lens.

The claimed results could be an outcome of bad experimentation. What however remains is a test of ZEO - or more precisely, of the notion of time reversed photons - using telescopes with concave lenses. The implication would be the possibility to see to the geometric future using telescopes with concave lenses! The entire geometric future of the Universe would be open to us! This possibility is a good enough reason for taking the trouble of proving experimentally that Santilli is (and I am) wrong! Negative index of refraction as a function of frequency is a real phenomenon in condensed matter physics (see this), and one can of course ask whether also it involves the transformation of positive energy photons to negative energy photons.

For background see the chapter About Concrete Realization of Remote Metabolism of "Bio-Systems as Conscious Holograms".

For a summary of earlier postings see Latest progress in TGD.

Inverse Research on Decisions Shows Instinct Makes Us Behave Like Cyborgs, not Robots: Really?


I learned about an interesting work, which relates to the relationship between experienced time and geometric time but orthodoxically assumes that these two are one and the same thing. The title of the popular article was Inverse Research on Decisions Shows Instinct Makes Us Behave Like Cyborgs, not Robots (see this). It tells about the work of Adam Bear and Paul Bloom. The article claims that our mind for some mysterious-to-me reason tricks us into believing that we are responsible for totally automatic or reflexive behaviours. In fact, these behaviours by definition are such that we do not feel being responsible for them. Bear however allows us some subconscious free will so that we are not programmed robots but cyborgs, whatever that might mean.

This work is an excellent example of how a dominating paradigm, which is wrong, leads to a wrong interpretation of experimental findings, which as such are correct. The standard belief in neuroscience and standard physics is that causal effects always propagate in the same direction of geometric time. This interpretation follows from the identification of geometric time (the time of the physicist) with subjective time - despite the fact that these times have very different properties: consider only reversibility vs. irreversibility, or the existence of both future and past vs. only the past existing.

The classical experiments of Libet challenge this dogma. A person decides to raise a finger but the neural activity begins a fraction of a second earlier. The mainstream neuroscientist interprets this by saying that there is no free will. A second proposed interpretation is that the decision is made earlier at a subconscious level and at our level the experience of free will is an illusion. One can of course wonder why this illusion exists at all.

The third manner to interpret the situation respects our immediate experience that we indeed have free will, but in order to avoid mathematical contradictions it must be accompanied by a new, more general view about quantum physics accepting as a fact that there are two causalities: that of free will and that of the deterministic laws of field equations. In the TGD framework Zero Energy Ontology realizes this view. The outcome is the prediction of signals which can propagate in both directions of geometric time. If a conscious decision generates a signal to the geometric past, it initiates neural activity in the geometric past. An excellent tool for survival in the jungle or in a modern market economy full of merciless predators.

In the experiment considered the subject persons saw five dots and selected one of them. One of the dots became red with a varying time lag, but the subject did not know when. The subject had to tell whether her choice had been correct, wrong, or whether she had failed to make any choice at all before the change took place.

The surprising observation was that the shorter the time interval from the guess to the change of color to red, the better the reported ability to guess correctly - in conflict with a statistical model based on a fixed arrow of time. If information can travel backwards in geometric time, the natural interpretation would be the same as in Libet's experiments and in the experiments of, say, Radin and Bierman claimed to demonstrate precognition. This is possible in zero energy ontology (ZEO). ZEO also allows a slightly different interpretation. In ZEO mental images correspond to causal diamonds (CDs). For sensory mental images the time scale would be of the order of .1 seconds, so that below this scale one cannot anymore put events in a precise time order and one indeed has precognition. What this means is that one does not know whether the sensory input corresponds to the "upper" or "lower" boundary of the CD, so that these interpretations are equivalent.
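
For reference, the baseline that the observation conflicts with is simple. Here is a toy simulation of the fixed-arrow-of-time null model (my own sketch with assumed parameters): the subject's choice among 5 dots is independent of which dot later turns red, so the expected hit rate is 20 percent regardless of the lag.

    import random

    def simulated_hit_rate(trials=100_000):
        # The choice and the later color change are independent under the null model,
        # so the hit rate is ~1/5 and the time lag never enters.
        hits = sum(random.randrange(5) == random.randrange(5) for _ in range(trials))
        return hits / trials

    for lag_ms in (50, 200, 1000):
        print(lag_ms, simulated_hit_rate())   # ~0.20 for every lag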

A neuroscientist cannot of course publicly utter the word "precognition", which associates immediately with the really dirty word "paranormal". The orthodox conclusion is that the subject persons are "cheating" themselves without knowing it. A very bizarre interpretation - if taken completely seriously it forces one to question all our knowledge! One can also ask why the subjects would tend to cheat themselves only when the change occurred immediately after their choice: why not always? The interpretation is a heroic attempt to save the standard world view, but can we accept irrational heroism in science?

A simple modification of the experiment would be the addition of a keystroke telling the choice when it was made, before the change in color. This would immediately tell whether something like precognition was involved.

For background see the chapter About the nature of time of "TGD Inspired Theory of Consciousness".

For a summary of earlier postings see Latest progress in TGD.

Gravitational Waves from Black Hole Megamergers Are Weaker Than Predicted

There was an interesting article in Scientific American with title "Gravitational Waves from Black Hole Megamergers Are Weaker Than Predicted" (see this). The article told about the failure to find support for the effects of gravitational waves from the fusion of supermassive blackholes. The fusions of supermassive blackholes generate gravitational radiation. These collisions would be scaled up versions of the LIGO event.

Supermassive blackholes in galactic centers are, by statistical arguments, expected to fuse in collisions of galaxies so often that the generated gravitational radiation produces a detectable background hum. This hum should be seen as a jitter in the arrival times of photons of radiation from pulsars. The jitter is the same for all pulsars and is therefore expected to be detectable as a kind of "hum" defined by gravitational radiation at low frequencies - the frequencies happen to be audible frequencies. For the past decade, scientists of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration have tried to detect this constant "hum" of low-frequency gravitational waves (see this). The outcome is negative and one should explain why this is the case.

I do not know how much evidence there exists for nearby collisions of galaxies in which the fusion of galactic supermassive blackholes really takes place. What would TGD suggest?

  1. In the TGD Universe galaxies could be like pearls in a necklace defined by a long cosmic string carrying dark magnetic energy identifiable as dark matter. This explains galactic rotation curves correctly: the 1/ρ force in the plane orthogonal to the long cosmic string (in the TGD sense) defining the necklace gives a constant velocity spectrum plus free motion along the string (for an idealized straight string of linear mass density μ the acceleration is a = 2Gμ/ρ, so the circular orbit condition v^2/ρ = a gives v = (2Gμ)^(1/2) independently of ρ). This prediction distinguishes TGD from the competing models. The halo is not spherical since stars are in free motion along the cosmic string. The galactic dark matter is identified as dark energy, in turn identifiable as the magnetic energy of the long cosmic string. There is considerable evidence for these necklaces, and this model is one of the oldest parts of TGD inspired astrophysics and cosmology.

  2. Galaxies as vehicles moving along cosmic highways defined by long cosmic strings is a more dynamical metaphor than pearls in a necklace and better in the recent context. The dominating interaction would be the gravitational interaction keeping the galaxy on the highway, and this might make the fusion of galactic blackholes a rare process.
This model allows one to consider the possibility that fusions of galactic super-massive blackholes are much rarer than expected in the standard model.
  1. The gravitational interaction between galaxies at separate highways passing near each other would be a secondary interaction, and the galaxies would pass each other without anything dramatic occurring.

  2. If the highways intersect each other, the galaxies could collide with each other if the timing is correct, but this would be a rare event - like two vehicles arriving at a crossing simultaneously. In fact, I wrote a couple of years ago about the possibility that the Milky Way could have resulted from the intersection of two cosmic highways (or as a result of a cosmic traffic accident).

  3. If the galaxies are moving in opposite directions along the same highway, the situation changes and a fusion of the galactic nuclei in a head on collision is unavoidable. It is difficult to say how often this kind of event occurs: it could be that galaxies have after sufficiently many collisions "learned" to move in the same direction and define an analog of a hydrodynamical flow. A cosmic flow has indeed been observed in "too" long scales and could correspond to a coherent flow along a cosmic string.
For background see the chapter Quantum Astrophysics.

For a summary of earlier postings see Latest progress in TGD.

Wednesday, April 27, 2016

Teslaphoresis and TGD

I found an interesting popular article about a recently discovered phenomenon christened Teslaphoresis (see this). This phenomenon might involve new physics. Tesla studied systems critical against dielectric breakdown and observed strange electrical discharges occurring over very long length scales. Colleagues decided that these phenomena have mere entertainment value and are "understood" in Maxwellian electrodynamics. Amateurs have however continued the experiments of Tesla, and Teslaphoresis could be the final proof that something genuinely new is involved.

In the TGD framework these long ranged strange phenomena could correspond to TGD quantum criticality and to large values of Planck constant implying quantum coherence over long length scales. The phases of ordinary matter with a non-standard value heff=n× h of Planck constant would correspond to dark matter in the TGD framework. I have earlier considered Tesla's findings from the TGD point of view, and my personal opinion has been that Tesla might have been the first experimenter to detect dark matter in the TGD sense. Teslaphoresis gives further support for this proposal.

The title of the popular article is "Reconfigured Tesla coil aligns, electrifies materials from a distance" and tells about the effects involved. The research group is led by Paul Cherukuri and there is also an abstract about the work in the journal ACS Nano. The article contains also an excellent illustration allowing one to understand both the Tesla coil and the magnetic and electric fields involved. The abstract of the paper provides a summary of the results.

This paper introduces Teslaphoresis, the directed motion and self-assembly of matter by a Tesla coil, and studies this electrokinetic phenomenon using single-walled carbon nanotubes (CNTs). Conventional directed self-assembly of matter using electric fields has been restricted to small scale structures, but with Teslaphoresis, we exceed this limitation by using the Tesla coil’s antenna to create a gradient high-voltage force field that projects into free space. CNTs placed within the Teslaphoretic (TEP) field polarize and self-assemble into wires that span from the nanoscale to the macroscale, the longest thus far being 15 cm. We show that the TEP field not only directs the self-assembly of long nanotube wires at remote distances (≥ 30 cm) but can also wirelessly power nanotube-based LED circuits. Furthermore, individualized CNTs self-organize to form long parallel arrays with high fidelity alignment to the TEP field. Thus, Teslaphoresis is effective for directed self-assembly from the bottom-up to the macroscale.

Concisely: what is found is that single-walled carbon nanotubes (CNTs) polarise and self-assemble along the electric field created by the capacitor over much longer length scales than expected. Biological applications (involving linear molecules like microtubules) come to mind. CNTs also tend to move towards the capacitance of the secondary coil of the Tesla coil (TC).

It is interesting to understand the TGD counterparts of the Maxwellian em fields involved with Tesla coils, and it turns out that the many-sheetedness of space-time is necessary to understand the standing waves involved. The fact that massless extremals (MEs) can carry light-like currents is essential for modelling the currents classically using many-sheeted space-time. The presence of magnetic monopole flux tubes distinguishing TGD from Maxwellian theory is suggestive and could explain why Teslaphoresis occurs over so long length scales and why it induces self-organization phenomena for CNTs. The situation can be seen as a special case of the more general situation encountered in the TGD based model of living matter.

For background see the chapter About Concrete Realization of Remote Metabolism or the article Teslaphoresis and TGD.

For a summary of earlier postings see Latest progress in TGD.

Tuesday, April 26, 2016

Indications for high Tc superconductivity at 373 K with heff/h=2

Some time ago I learned about a claim of Ivan Kostadinov about superconductivity at a temperature of 373 K (100 C). There are also claims by E. Joe Eck about superconductivity: the latest at 400 K. I am not enough of an experimentalist to be able to decide whether to take the claims seriously or not.

The article of Kostadinov provides detailed support for the claim. Evidence for diamagnetism (the induced magnetization tends to reduce the external magnetic field inside a superconductor) is presented: at 242 K a transition reducing the magnitude of the negative susceptibility but keeping it negative takes place. Evidence for a gap energy of 15 meV was found at 300 K: this energy is the same as the thermal energy kT/2 ≈ 1.5 × 10^-2 eV at room temperature. Tape tests passing 125 A through a superconducting tape supported very low resistance (a copper tape started burning after about 5 seconds).

I-V curves at 300 K are shown to exhibit Shapiro steps with the radiation frequency in the range [5 GHz, 21 THz]. Already Josephson discovered what - perhaps not so surprisingly - is known as the Josephson effect. As one drives a superconductor with an alternating current, the voltage remains constant at certain values. The difference of the voltage values between subsequent jumps is given by the Shapiro step Δ V = hf/Ze. The interpretation is that the voltage suffers a kind of phase locking at these frequencies and the alternating current becomes a Josephson current with Josephson frequency f = ZeV/h, which is an integer multiple of the frequency of the driving current.

This actually gives a very nice test for the heff=n× h hypothesis: the Shapiro step Δ V should be scaled up by heff/h=n. The obvious question is whether this occurs in the recent case or whether n=1 explains the findings.
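
A minimal numeric sketch of this test (my own illustration with assumed values, not data from the article): the predicted step is Δ V = n hf/(Ze), so for a 5 GHz drive - the low end of the reported frequency range - the step doubles when n goes from 1 to 2.

    # Predicted Shapiro step dV = n*h*f/(Z*e) for Planck constant heff = n*h.
    h = 6.626e-34     # Planck constant, J*s
    e = 1.602e-19     # elementary charge, C

    def shapiro_step(f_hz, Z=2, n=1):
        # Step in volts for drive frequency f_hz and carrier charge Z*e.
        return n * h * f_hz / (Z * e)

    for n in (1, 2):
        print(n, shapiro_step(5e9, Z=2, n=n) * 1e6, "microvolts")   # ~10.3 uV and ~20.7 uV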

The data represented by Figs. 12, 13, and 14 of the article suggest n=2 for Z=2. The alternative explanation would be that the step is for some reason Δ V= 2hf/Ze corresponding to the second harmonic, or that the charge of the charge carrier is Z=1 (bosonic ion). I worried about a possible error in my calculation for several hours last night but failed to find any mistake.

  1. Fig. 12 shows the I-V curve at room temperature T=300 K. The Shapiro step is now 45 mV. This would correspond to the frequency f= ZeΔ V/h=11.6 THz. The figure text tells that the frequency is fR=21.762 THz giving fR/f ≈ 1.87. This would suggest heff/h=n ≈ fR/f≈ 2.

  2. Fig. 13 shows another I-V curve at 300 K. Now the Shapiro step is 4.0 mV and corresponds to a frequency of 1.24 THz. This would give fR/f≈ 1.95 giving heff/h=2.

  3. Fig. 14 shows an I-V curve with a single Shapiro step equal to about .12 mV. The frequency should be 2.97 GHz whereas the reported frequency is 5.803 GHz. This gives fR/f≈ 1.95 giving n=2.
Irrespective of the fate of the claims of Kostadinov and Eck, the Josephson effect could provide an elegant manner to demonstrate whether the hierarchy of Planck constants is realized in Nature.

For background see the chapter Quantum Model for Bio-Superconductivity: II.

For a summary of earlier postings see Latest progress in TGD.

Monday, April 25, 2016

Correlated Polygons in Standard Cosmology and in TGD

Peter Woit had an interesting This Week's Hype. The inspiration came from a popular article in Quanta Magazine telling about the proposal of Juan Maldacena and Nima Arkani-Hamed that the temperature fluctuations of the cosmic microwave background (CMB) could exhibit deviations from Gaussianity in the sense that there would be measurable maxima of n-point correlations in the CMB spectrum as a function of spherical angles. These effects would relate to the large scale structure of the CMB. Lubos Motl wrote about the article in a different and rather aggressive tone.

The article in Quanta Magazine does not go into technical details, but the original article of Maldacena and Arkani-Hamed contains detailed calculations for various n-point functions of the inflaton field and other fields, which in turn determine the correlation functions of the CMB temperature. The article is technically very elegant but the assumptions behind the calculations are questionable. In the TGD Universe they would be simply wrong, and some inhabitants of the TGD Universe could see the approach as a demonstration of how misleading refined mathematics can be if the assumptions behind it are wrong.

It must be emphasized that already now it is known - and stressed also in the article - that the deviations of the CMB from Gaussianity are below the recent measurement resolution, and testing the proposed non-Gaussianities requires new experimental technology such as 21 cm tomography, which maps the redshift distribution of the 21 cm hydrogen line to deduce information about the fine details of the CMB and its n-point correlations.

Inflaton vacuum energy is in the TGD framework replaced by Kähler magnetic energy, and the model of Maldacena and Arkani-Hamed does not apply. The elegant work of Maldacena and Arkani-Hamed however inspired a TGD based consideration of the situation but with very different motivations. In TGD inflaton fields do not play any role since the inflaton vacuum energy is replaced with the energy of magnetic flux tubes. The polygons also appear in a totally different manner and are associated with symplectic invariants identified as Kähler fluxes, and might relate closely to quantum physical correlates of arithmetic cognition. These considerations lead to a proposal that the integers (3,4,5) define what one might call additive primes for integers n≥ 3 allowing a geometric representation as non-degenerate polygons - prime polygons. One should dig the enormous mathematical literature to find whether mathematicians have proposed this notion - probably so. Partitions would correspond to splicings of polygons to smaller polygons.

These splicings could be dynamical quantum processes behind arithmetic conscious processes involving addition. I have already earlier considered a possible counterpart of conscious prime factorization in the adelic framework. This will not be discussed in this section since this topic is definitely too far from primordial cosmology. The purpose of this article is only to give an example of how good work in theoretical physics - even when it need not be relevant for physics - can stimulate new ideas in a completely different context.

For details see the chapter More About TGD Inspired Cosmology or the article Correlated Triangles and Polygons in Standard Cosmology and in TGD .

For a summary of earlier postings see Latest progress in TGD.

Number theoretical feats and TGD inspired theory of consciousness

Number theoretical feats of some mathematicians like Ramanujan remain a mystery for those believing that the brain is a classical computer. Also the ability of idiot savants - lacking even the idea of what a prime is - to factorize integers to primes challenges the idea that an algorithm is involved. In this article I discuss ideas about how various arithmetical feats such as partitioning an integer to a sum of integers and to a product of prime factors might take place. The ideas are inspired by the number theoretic vision about TGD suggesting that basic arithmetics might be realized as naturally occurring processes at the quantum level and the outcomes might be "sensorily perceived". One can also ask whether zero energy ontology (ZEO) could allow one to perform quantum computations in polynomial instead of exponential time.

The Indian mathematician Srinivasa Ramanujan is perhaps the best known example of a mathematician with miraculous gifts. He immediately told the answers to difficult mathematical questions - ordinary mortals had to do hard computational work to check that the answer was right. Many of the extremely intricate mathematical formulas of Ramanujan have been proved much later by using advanced number theory. Ramanujan told that he got the answers from his personal Goddess. A possible TGD based explanation of this feat relies on the idea that in zero energy ontology (ZEO) quantum computation like activity could consist of steps consisting of a quantum computation and its time reversal, with the long-lasting part of each step performed in the reverse time direction at the opposite boundary of the causal diamond, so that the net time used at the second boundary would be short.

The adelic picture about state function reduction in ZEO suggests that it might be possible to have a direct sensory experience about the prime factorization of integers (see this). What about partitions of integers to sums of primes? Years ago I proposed that symplectic QFT is an essential part of TGD. The basic observation was that one can assign to polygons of a partonic 2-surface - say geodesic triangles - Kähler magnetic fluxes defining symplectic invariants identifiable as zero modes. This assignment makes sense also for string world sheets and gives rise to what is usually called an Abelian Wilson line. I could not specify at that time how to select these polygons. A very natural manner to fix the vertices of the polygon (or polygons) is to assume that they correspond to the ends of fermion lines which appear as boundaries of string world sheets. The polygons would be fixed rather uniquely by requiring that fermions reside at their vertices.

The number 1 is the only prime for addition so that the analog of prime factorization for sums is not of much use. Polygons with n=3,4,5 vertices are special in that one cannot decompose them to non-degenerate polygons. Non-degenerate polygons also represent integers n>2. This inspires the idea about the numbers 3,4,5 as "additive primes" for integers n>2 representable as non-degenerate polygons. These polygons could be associated with many-fermion states with negentropic entanglement (NE) - this notion relates to cognition and conscious information and is something totally new from the standard physics point of view. This inspires also a conjecture about a deep connection with arithmetic consciousness: polygons would define conscious representations for integers n>2. The splicings of polygons to smaller ones could be dynamical quantum processes behind arithmetic conscious processes involving addition.
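
As a toy illustration of the "additive prime" idea (my own sketch; here "splicing" is interpreted simply as writing the number of vertices as a sum of parts 3, 4 and 5, ignoring the geometry of how the polygon is actually cut):

    def polygon_partitions(n, parts=(3, 4, 5), smallest=3):
        # All ways to write an n-gon's vertex count as a sum of "prime polygons"
        # with 3, 4 or 5 vertices (parts listed in non-decreasing order).
        if n == 0:
            return [[]]
        result = []
        for p in parts:
            if smallest <= p <= n:
                for rest in polygon_partitions(n - p, parts, p):
                    result.append([p] + rest)
        return result

    print(polygon_partitions(7))   # [[3, 4]]
    print(polygon_partitions(9))   # [[3, 3, 3], [4, 5]]

Every integer n > 2 has at least one such decomposition, which is the content of calling 3, 4 and 5 additive primes.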

For details see the chapter Conscious Information and Intelligence
or the article Number Theoretical Feats and TGD Inspired Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

Monday, April 18, 2016

"Final" solution to the qualia problem

The TGD inspired theory of qualia has evolved gradually to its recent form.

  1. The original vision was that qualia and other aspects of conscious experience are determined by the change of the quantum state in the reduction: the increments of quantum numbers would determine the qualia. I had not yet realized that the repeated state function reduction (Zeno effect) realized in ZEO is central for consciousness. The objection was that qualia would change randomly from reduction to reduction.

  2. Later I ended up with the vision that the rates for the changes of quantum numbers would determine the qualia: this idea was realized in terms of a sensory capacitor model in which qualia would correspond to a kind of generalized di-electric breakdown feeding to the subsystem responsible for the quale the quantum numbers characterizing it. The Occamistic objection is that the model brings in an additional element not present in quantum measurement theory.

  3. The view that emerged while writing the criticism of IIT was that qualia correspond to the quantum numbers measured in the state function reduction. That in ZEO the qualia remain the same for the entire sequence of repeated state function reductions is not a problem since qualia are associated with a sub-self (sub-CD), which can have a lifetime of, say, about .1 seconds! Only a generalization of standard quantum measurement theory is needed to reduce the qualia to fundamental physics. This for instance supports the conjecture that visual colors correspond to QCD color quantum numbers. This makes sense in the TGD framework predicting scaled variants of QCD type physics even in cellular length scales.

    This view implies that the model of the sensory receptor based on the generalization of di-electric breakdown is wrong as such, since the rate for the transfer of the quantum numbers would not define the quale. A possible modification is that the analog of di-electric breakdown generates a Bose-Einstein condensate and that the quantum numbers of the BE condensate give rise to the qualia assignable to the sub-self.

For details see the article TGD Inspired Comments about Integrated Information Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

NMP and adelic physics

In a given p-adic sector the entanglement entropy (EE) is defined by replacing the logarithms of probabilities in the Shannon formula by the logarithms of their p-adic norms. The resulting entropy satisfies the same axioms as ordinary entropy but makes sense only for probabilities, which must be rational valued or in an algebraic extension of rationals. The algebraic extension corresponds to the evolutionary level of the system, and the algebraic complexity of the extension serves as a measure for the evolutionary level. p-Adically also extensions determined by roots of e can be considered. What is so remarkable is that the number theoretic entropy can be negative.

A simple example allows one to get an idea about what is involved. If the entanglement probabilities are rational numbers Pi=Mi/N, ∑i Mi=N, then the primes appearing as factors of N correspond to a negative contribution to the number theoretic entanglement entropy and thus to information. The factors of Mi correspond to positive contributions. For maximal entanglement with Pi=1/N the EE is negative. The interpretation is that the entangled state represents quantally a concept or a rule as a superposition of its instances defined by the state pairs in the superposition. Identity matrix means that one can choose the state basis in an arbitrary manner, and the interpretation could be in terms of an "enlightened" state of consciousness characterized by "absence of distinctions". In the general case the basis is unique.
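
A small computational sketch of this definition (my own illustration; the helper names are ad hoc): the p-adic norm of a rational is p^(-k) with k the power of p it contains, and the number theoretic entropy is the Shannon formula with log|P_i|_p in place of log P_i.

    from fractions import Fraction
    from math import log

    def padic_norm(x, p):
        # |x|_p = p^(-k), where k is the power of p in the nonzero rational x.
        k, num, den = 0, x.numerator, x.denominator
        while num % p == 0:
            num //= p
            k += 1
        while den % p == 0:
            den //= p
            k -= 1
        return Fraction(1, p**k) if k >= 0 else Fraction(p**(-k))

    def padic_entropy(probs, p):
        # Shannon formula with the p-adic norm of each probability inside the log.
        return -sum(P * log(padic_norm(P, p)) for P in probs)

    # Maximal entanglement P_i = 1/N with N = 4: negative entropy (information) for p = 2.
    print(padic_entropy([Fraction(1, 4)] * 4, 2))   # ~ -1.39
    print(padic_entropy([Fraction(1, 4)] * 4, 3))   # ~ 0: p does not divide N

    # P_i = M_i/N with M_i = 1, 2, 3 and N = 6: negative for p = 2 (and for p = 3).
    print(padic_entropy([Fraction(1, 6), Fraction(2, 6), Fraction(3, 6)], 2))   # ~ -0.46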

Metabolism is a central concept in biology and neuroscience. Usually metabolism is understood as transfer of ordered energy and various chemical metabolites to the system. In TGD metabolism could be basically just a transfer of NE from nutrients to the organism. Living systems would be fighting for NE to stay alive (NMP is merciless!) and stealing of NE would be the fundamental crime.

TGD has been plagued by a longstanding interpretational problem: can one apply the notion of number theoretic entropy in the real context or not? If this is possible at all, under what conditions is this the case? How does one know that the entanglement probabilities are not transcendental, as they would be in the generic case? There is also a second problem: p-adic Hilbert space is not a well-defined notion since the sum of p-adic probabilities defined as moduli squared for the coefficients of the superposition of orthonormal states can vanish and one obtains zero norm states.

These problems disappear if the reduction occurs in the intersection of reality and p-adicities, since there the Hilbert spaces have some algebraic number field as coefficient field. By SH the 2-D states provide all information needed to construct quantum physics, in particular quantum measurement theory.

  1. The Hilbert spaces defining the state spaces have as their coefficient field always some algebraic extension of rationals so that number theoretic entropies make sense for all primes. p-Adic numbers as coefficients cannot be used and reals are not allowed. Since the same Hilbert space is shared by real and p-adic sectors, a given state function reduction in the intersection has real and p-adic space-time shadows.

  2. State function reductions at these 2-surfaces at the ends of the causal diamond (CD) take place in the intersection of realities and p-adicities if the parameters characterizing these surfaces are in the algebraic extension considered. It is however not absolutely necessary to assume that the coordinates of WCW belong to the algebraic extension although this looks very natural.

  3. NMP applies to the total EE. It can quite well happen that NMP for the sum of the real and p-adic entanglement entropies does not allow the ordinary state function reduction to take place, since the p-adic negative entropies for some primes would become zero and net negentropy would be lost. There is a competition between real and p-adic sectors, and the p-adic sectors can win! Mind has causal power: it can stabilize quantum states against state function reduction and thus tame the randomness of quantum physics that would prevail in the absence of cognition! Can one interpret this causal power of cognition in terms of intentionality? If so, p-adic physics would also be the physics of intentionality as originally assumed.

A fascinating question is whether the p-adic view about cognition could allow one to understand the mysterious looking ability of idiot savants (not only of them but also of some of the greatest mathematicians) to decompose large integers to prime factors. One possible mechanism is that the integer N represented concretely is mapped to a maximally entangled state with entanglement probabilities Pi=1/N, which means NE for the prime factors of N. The factorization would be experienced directly.

One can also ask whether the other mathematical feats performed by idiot savants could be understood in terms of their ability to directly experience - "see" - the prime composition (adelic decomposition) of an integer or even a rational. This could for instance allow one to "see" if an integer is - say the 3rd - power of some smaller integer: all prime exponents in it would be multiples of 3. If the person is able to generate NE for which the probabilities Pi=Mi/N are, apart from normalization, equal to given integers Mi, ∑ Mi=N, then they could be able to "see" the prime compositions of Mi and N. For instance, they could "see" whether both Mi and N are 3rd powers of some integer and just by going through trials find the integers satisfying this condition.
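
A minimal sketch of this mechanism in code (my own illustration, not anything from the article): for the maximally entangled state P_i = 1/N the p-adic entanglement entropy is S_p = -v_p(N) log p, where v_p(N) is the power of p in N, so negentropy appears exactly for the primes dividing N - "seeing" the negentropy carriers amounts to seeing the factorization.

    from math import log

    def padic_valuation(n, p):
        # Largest k with p^k dividing n.
        k = 0
        while n % p == 0:
            n //= p
            k += 1
        return k

    def negentropy_per_prime(N, primes):
        # S_p = -v_p(N)*log(p) for the state with P_i = 1/N: negative exactly for p | N.
        return {p: -padic_valuation(N, p) * log(p) for p in primes}

    N = 360   # = 2^3 * 3^2 * 5
    print(negentropy_per_prime(N, [2, 3, 5, 7, 11]))
    # approximately 2: -2.08, 3: -2.20, 5: -1.61, and 0 for the non-factors 7 and 11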

For details see the article TGD Inspired Comments about Integrated Information Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

Thursday, April 14, 2016

TGD Inspired Comments about Integrated Information Theory of Consciousness

I received from Lian Sidoroff a link to a very interesting article by John Horgan in Scientific American with the title "Can Integrated Information Theory Explain Consciousness?". Originally IIT is a theoretical construct of the neuroscientist Giulio Tononi (just Tononi in the sequel). Christof Koch is one of the coworkers of Tononi. IIT can be regarded as a heavily neuroscience based non-quantum approach to consciousness, and the goal is to identify the axioms about consciousness, which should hold true also in physics based theories. The article of Horgan was excellent and touched the essentials, and it was relatively easy to grasp what is common with my own approach to consciousness and also to comment on what I see as weaknesses of the IIT approach.

To my opinion, the basic weakness is the lack of a formulation in terms of fundamental physics. As such a quantum physics based formulation is certainly not enough, since recent quantum physics is plagued by paradoxes, which are due to the lack of a theory of consciousness needed to understand what the notion of observer means. The question is not only about what fundamental physics can give to consciousness but also about what consciousness can give to fundamental physics.

The article Consciousness: here, there and everywhere by Tononi and Koch gives a more detailed summary about IIT. The article From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory gives a more technical description of IIT. Also the article of Scott Aaronson was very helpful in providing a computer scientific view about IIT and representing also mathematical objections.

Tononi and Koch emphasize that IIT is a work in progress. This applies also to TGD and TGD inspired theory of consciousness. Personally I take the writing of a TGD inspired commentary about IIT as a highly interesting interaction, which might help to learn new ideas and spot the weaknesses and imperfections in the basic definitions of TGD inspired theory of consciousness. If TGD survives this interaction as such, the writing of these commentaries has been a waste of time.

The key questions relate to the notion of information more or less identified as consciousness.

  1. In IIT the information is identified essentially as a reduction of entropy as a hypothetical conscious entity learns what the state of the system is. This definition of information used in the definition of a conscious entity is circular. It also involves a probabilistic element, thus bringing in either the notion of ensemble or the frequency interpretation.

  2. In TGD the notion of information relies on number theoretical entanglement entropy (EE) measuring the amount of information associated with entanglement. It makes sense for algebraic entanglement probabilities. In fact all probabilities must be assumed to belong to an algebraic extension of rationals if one adopts the p-adic view about cognition and extends physics to adelic physics involving the real and various p-adic number fields. Circularity is avoided, but the basic problem has been whether one can apply the number theoretic definition of entanglement entropy only in the p-adic sectors of the adelic Universe or whether it applies under some conditions also in the real sector. Writing this commentary led to a solution of this problem: the state function reduction in the intersection of realities and p-adicities, which corresponds to an algebraic extension of rationals, induces the reductions in the real and p-adic sectors. Negentropy Maximization Principle (NMP) maximizes the sum of the real and various p-adic negentropy gains. The outcome is the highly non-trivial prediction that cognition can stabilize also the real entanglement and has therefore causal power. One can say that cognition tames the randomness of the ordinary state function reduction so that Einstein was to some degree right when he said that God does not play dice.

  3. IIT identifies qualia in a manner, which I find difficult to take seriously. The criticism however led also to a criticism of the TGD identification of qualia, and a much simpler identification involving only the basic assumptions of ZEO based quantum measurement theory emerged. Occam's razor does not leave many options in this kind of situation.

IIT predicts panpsychism in a restricted sense, as does also TGD. The identification of conscious experience with the maximally integrated partition to two parts of an elementary system endowed with a mechanism (which could correspond to a computer program) is rather near to epiphenomenalism since it means that consciousness is a property of the physical system. In the TGD framework consciousness has an independent causal and ontological status. Conscious existence corresponds to quantum jumps between physical states re-creating physical realities, being therefore outside the existences defined by classical and quantum physics (in TGD classical physics is an exact part of quantum physics).

The comparison of IIT with TGD was very useful. I glue below the abstract of the article comparing IIT with TGD inspired theory of consciousness.

Abstract

Integrated Information Theory (IIT) is a theory of consciousness originally proposed by Giulio Tononi. The basic goal of IIT is to abstract from neuroscience axioms about consciousness hoped to provide constraints on physical models. IIT relies strongly on information theory. The basic problem is that the very definition of information is not possible without introducing a conscious observer so that circularity cannot be avoided. IIT identifies a collection of a few basic concepts and axioms such as the notions of mechanism (a computer program is one analog for mechanism), information, integration and maximally integrated information (maximal interdependence of parts of the system), and exclusion. Also the composition of mechanisms as a kind of engineering principle of consciousness is assumed and leads to the notion of conceptual structure, which should allow one to understand not only cognition but the entire conscious experience.

A measure for integrated information (called Φ), assignable to any partition of the system to two parts, is introduced in terms of relative entropies. Consciousness is identified with a maximally integrated decomposition of the system to two parts (Φ is maximum). The existence of this preferred decomposition of the system to two parts - besides the computer and the program running in it - distinguishes IIT from the computational approach to consciousness. Personally I am however afraid that bringing in physics could bring in physicalism and reduce consciousness to an epiphenomenon. Qualia are assigned to the links of a network. IIT can be criticized for this assignment as also for the fact that it does not say much about free will or about the notion of time. Also the principle fixing the dynamics of consciousness is missing unless one interprets mechanisms as such a principle.

In this article IIT is compared to the TGD vision relying on physics and on general vision about consciousness strongly guided by the new physics predicted by TGD. At classical level this new physics involves a new view about space-time and fields (in particular the notion of magnetic body central in TGD inspired quantum biology and quantum neuroscience). At quantum level it involves Zero Energy Ontology (ZEO) and the notion of causal diamond (CD) defining 4-D perceptive field of self; p-adic physics as physics of cognition and imagination and the fusion of real and various p-adic physics to adelic physics; strong form of holography (SH) implying that 2-D string world sheets and partonic surfaces serve as "space-time genes"; and the hierarchy of Planck constants making possible macroscopic quantum coherence.

Number theoretic entanglement entropy (EE) makes sense as a number theoretic variant of Shannon entropy in the p-adic sectors of the adelic Universe. Number theoretic EE can be negative and corresponds in this case to genuine information: one has negentropic entanglement (NE). TGD inspired theory of consciousness reduces to quantum measurement theory in ZEO. Negentropy Maximization Principle (NMP) serves as the variational principle of consciousness and implies that NE can only increase - this implies evolution. By SH real and p-adic 4-D systems are algebraic continuations of 2-D systems ("space-time genes") characterized by algebraic extensions of rationals labelling evolutionary levels with increasing algebraic complexity. Real and p-adic sectors have a common Hilbert space with coefficients in an algebraic extension of rationals so that the state function reduction at this level can be said to induce the real and p-adic 4-D reductions as its shadows.

NE in the p-adic sectors stabilizes the entanglement also in the real sector (the sum of the real (ordinary) and various p-adic negentropies tends to increase) - the randomness of the ordinary state function reduction is tamed by cognition and mind can be said to rule over matter. A quale corresponds in IIT to a link of a network like structure. In TGD a quale corresponds to the eigenvalues of observables measured repeatedly as long as the corresponding sub-self (mental image, quale) remains conscious.

In ZEO self can be seen as a generalized Zeno effect. What happens in the death of a conscious entity (self) can be understood, and the death is accompanied by the re-incarnation of a time reversed self, in turn making possible re-incarnation also in the more conventional sense of the word. The death of a mental image (sub-self) can also be interpreted as a motor action involving a signal to the geometric past: this is in accordance with Libet's findings.

There is much in common between IIT and TGD at the general structural level but also profound differences. Also TGD predicts restricted pan-psychism. NE is the TGD counterpart of integrated information. The combinatorial structure of NE gives rise to quantal complexity. Mechanisms correspond to 4-D self-organization patterns with self-organization interpreted in the 4-D sense in ZEO. The decomposition of the system to two parts such that this decomposition can give rise to a maximal negentropy gain in state function reduction is also involved but yields two independent selves. Engineering of conscious systems from simpler basic building blocks is predicted. Indeed, TGD predicts an infinite self hierarchy with sub-selves identifiable as mental images. The exclusion postulate is not needed in the TGD framework. Also network like structures emerge naturally as p-adic systems for which all decompositions are negentropically entangled, inducing in turn corresponding real systems.

For details see the article TGD Inspired Comments about Integrated Information Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

Sunday, April 10, 2016

How Ramanujan did it?

Lubos Motl wrote recently a blog posting about the P≠ NP conjecture proposed in the theory of computation based on Turing's work. This unproven conjecture relies on a classical model of computation developed by formulating mathematically what the women doing the hard computational work in offices at the time of Turing did. Turing's model is an extremely beautiful mathematical abstraction of something very every-daily, but it does not involve fundamental physics in any manner so that it must be taken with caution. The basic notions include those of algorithm and recursive function, and the mathematics used in the model is the mathematics of integers. Nothing is assumed about what conscious computation is: it is somewhat ironic that this model has been taken by strong AI people as a model of consciousness!

  1. A canonical model for classical computation is in terms of a Turing machine, which takes bit sequences as inputs and transforms them to outputs, changing its internal state at each step. A more concrete model is in terms of a network of gates representing basic operations on the incoming bits: from these basic functions one constructs all recursive functions. The computer actualizes the algorithm represented as a program and eventually halts - at least one can hope that it does so. Assuming that the elementary operations require some minimum time, one can estimate the number of steps required and obtain an estimate for the computation time as a function of the size of the computation.

  2. If the time required by a computation, whose size is characterized by the number N of relevant bits, grows at most as some power of N and is thus polynomial, one says that the computation is in class P. The class NP consists of problems whose proposed solutions can be verified in polynomial time; if P≠NP, solving the hardest of them would require a computation time increasing with N faster than any power of N, say exponentially (the difference between solving and verifying is illustrated by the sketch below). Donald Knuth, whose name is familiar to everyone using LaTeX to produce mathematical text, believes in P=NP in the framework of classical computation. Lubos in turn thinks that the Turing model is probably too primitive, that a quantum physics based model is needed, and that this might allow P=NP.
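
To make the distinction concrete, here is a minimal Python sketch (my own illustration, not from Lubos's posting) using the subset sum problem, a standard NP problem: verifying a proposed solution takes time linear in N, whereas the naive search inspects all 2^N subsets.

    from itertools import combinations

    def verify(numbers, target, certificate):
        """Polynomial-time check: does the proposed subset of indices sum to the target?"""
        return sum(numbers[i] for i in certificate) == target

    def brute_force_solve(numbers, target):
        """Exhaustive search over all 2^N subsets - exponential in N."""
        n = len(numbers)
        for size in range(n + 1):
            for subset in combinations(range(n), size):
                if verify(numbers, target, subset):
                    return subset
        return None

    numbers = [3, 34, 4, 12, 5, 2]
    target = 9
    certificate = brute_force_solve(numbers, target)      # finds indices (2, 4), since 4 + 5 = 9
    print(certificate, verify(numbers, target, certificate))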

What about quantum computation as we understand it in recent quantum physics: can it achieve P=NP?
  1. Quantum computation is often compared to a superposition of classical computations, and this might encourage one to think that it could be much more effective, but this does not seem to be the case. Note however that the amount of information represented by N qubits is exponentially larger than that represented by N classical bits, since entanglement is possible. The prevailing wisdom seems to be that in some situations quantum computation can be faster than the classical one, but that if P=NP holds true for classical computation, it holds true also for quantum computation. Presumably this is because the model of quantum computation begins from the classical model and only (quantum computer scientists must experience this statement as an insult - apologies!) replaces bits with qubits.

  2. In a quantum computer one replaces bits with qubits, which can be entangled, and gates with quantum gates; the computation corresponds to a unitary time evolution with respect to a discretized time parameter, constructed in terms of simple fundamental building bricks (a minimal two-qubit illustration is sketched below). So called tensor networks realize the idea of local unitarity in a nice manner and have been proposed to define error correcting quantum codes. State function reduction halts the computation. The outcome is non-deterministic, but one can perform a large number of computations and deduce the result from the distribution of outcomes.
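
As referenced above, the gate picture can be illustrated with plain linear algebra: N qubits live in a 2^N-dimensional state space, and a Hadamard gate followed by a CNOT turns the product state |00> into an entangled Bell state, which a final measurement reduces to a classical outcome. The sketch below uses only numpy and is meant as an illustration, not as code from any particular quantum computing library.

    import numpy as np

    ket0 = np.array([1.0, 0.0])                      # single-qubit basis state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
    I2 = np.eye(2)                                   # identity on one qubit
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                  # controlled-NOT on two qubits

    state = np.kron(ket0, ket0)                      # two-qubit register |00>: 2^2 = 4 amplitudes
    state = CNOT @ (np.kron(H, I2) @ state)          # unitary evolution built from simple gates
    print(state)                                     # [0.707 0 0 0.707] = (|00> + |11>)/sqrt(2)

    # "Halting" by state function reduction: sample outcomes with Born-rule probabilities.
    probs = np.abs(state) ** 2
    print(np.random.choice(["00", "01", "10", "11"], size=10, p=probs))  # only "00" and "11" occur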

What about conscious computations? Or more generally, conscious information processing. Could it proceed faster than computation in the sense of Turing? To answer this question one must first try to understand what conscious information processing might be. TGD inspired theory of consciousness provides a possible answer to this question, involving not only quantum physics but also new quantum physics.
  1. In the TGD framework Zero Energy Ontology (ZEO) replaces the ordinary positive energy ontology and forces one to generalize the theory of quantum measurement. This brings in several new elements. In particular, state function reductions can occur at both boundaries of the causal diamond (CD), which is the intersection of future and past directed light-cones and defines a geometric correlate for self. Selves form a fractal hierarchy - CDs within CDs, and perhaps also overlapping CDs. Negentropy Maximization Principle (NMP) is the basic variational principle of consciousness and states that state function reductions generate the maximum amount of conscious information. The notion of negentropic entanglement (NE), involving p-adic physics as the physics of cognition, and the hierarchy of Planck constants assigned with dark matter are also central elements.

  2. NMP allows a sequence of state function reductions to occur at a given boundary of the diamond-like CD - call it the passive boundary. The state function reduction sequence leaving everything unchanged at the passive boundary of the CD defines the self as a generalized Zeno effect. Each step shifts the opposite - active - boundary of the CD "upwards" and increases its distance from the passive boundary. Also the states at the active boundary change, and one has the counterpart of unitary time evolution. The shifting of the active boundary gives rise to the experienced flow of time and to the sensory input generating cognitive mental images - the "Maya" aspect of conscious experience. The passive boundary corresponds to the permanent, unchanging "Self".

  3. Eventually NMP forces the first reduction to the opposite boundary to occur. The self dies and reincarnates as a time reversed self. The opposite boundary of the CD would now be shifting "downwards", increasing the size of the CD further. At the next reduction to the opposite boundary, the self would re-incarnate in the geometric future of the original self. This would be re-incarnation in the sense of Eastern philosophies. It makes sense to wonder whose incarnation in the geometric past I might represent!

Could this allow one to perform fast quantal computations by decomposing the computation into a sequence in which one proceeds in both directions of time? Could the incredible feats of some "human computers" rely on this quantum mechanism? The Indian mathematician Srinivasa Ramanujan is the best-known example of a mathematician with miraculous gifts. He gave immediate answers to difficult mathematical questions - ordinary mortals had to do hard computational work to check that the answer was right. Many of Ramanujan's extremely intricate mathematical formulas have been proved only much later using advanced number theory. Ramanujan said that he got the answers from his personal Goddess.

Might it be possible in ZEO to perform quantally, much faster - even in polynomial time - computations requiring classically non-polynomial time? If this were the case, one might at least try to understand how Ramanujan did it, although higher level selves might also be involved (did his Goddess do the job?).

  1. Quantal computation would correspond to a state function reduction sequence at a fixed boundary of the CD defining a mathematical mental image as a sub-self. In the first reduction to the opposite boundary of the CD the sub-self representing the mathematical mental image would die and the quantum computation would halt. A new computation at the opposite boundary, proceeding in the opposite direction of geometric time, would begin and define a time-reversed mathematical mental image. This sequence of reincarnations of the sub-self as its time reversal could give rise to a sequence of quantum computation like processes taking less time than usual, since half of the computations would take place at the opposite boundary in the opposite time direction (the size of the CD increases as the boundary shifts).

  2. If the average computation time is the same at both boundaries, the computation time would only be halved. Not very impressive. However, if the mental images at the second boundary - call it A - are short-lived and the selves at the opposite boundary B are very long-lived and represent very long computations, the process could be very fast from the point of view of A (a toy numerical illustration is sketched after this list)! Could one overcome the P≠NP constraint by performing computations during time-reversed re-incarnations?! Short-lived mental images at this boundary and very long-lived mental images at the opposite boundary - could this be the secret of Ramanujan?

  3. Was the Goddess of Ramanujan - a self at a higher level of the self-hierarchy - nothing but a time reversal of some mathematical mental image of Ramanujan (Brahman=Atman!), representing very long quantal computations? We have a night-day cycle of personal consciousness, and it could correspond to a sequence of re-incarnations at some level of our personal self-hierarchy. Ramanujan said that he met his Goddess in dreams. Was his Goddess the time reversal of that part of Ramanujan which was unconscious when Ramanujan slept? Intriguingly, Ramanujan was rather short-lived himself - he died at the age of 32! In fact, many geniuses have been rather short-lived.

  4. Why was the alter ego of Ramanujan a Goddess? Jung intuited that our psyche has two aspects: anima and animus. Do they quite universally correspond to the self and its time reversal? Do our mental images have gender?! Could our self-hierarchy be a hierarchical collection of animas and animi, so that gender would be something much deeper than biological sex? And what about the Yin-Yang duality of Chinese philosophy, and the ka as the shadow of the persona in the mythology of ancient Egypt?
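
To make the bookkeeping behind item 2 above concrete, here is a purely illustrative toy sketch of my own (hypothetical numbers, not a physical model): if most of the computational steps happen during long time-reversed phases at boundary B, while the duration experienced at boundary A is only the sum of the short A-phases, the apparent speed-up seen from A is the ratio of the total work to the A-time; with equal phase durations one recovers the mere halving mentioned above.

    # Toy bookkeeping for the "fast from the point of view of A" argument.
    a_phase = 1.0     # duration of one short phase at boundary A (arbitrary units, hypothetical)
    b_phase = 100.0   # duration of one long time-reversed phase at boundary B (hypothetical)
    cycles = 10       # number of re-incarnation cycles of the sub-self

    total_steps = cycles * (a_phase + b_phase)   # work done in both time directions
    time_seen_from_A = cycles * a_phase          # duration experienced at boundary A only

    print(total_steps / time_seen_from_A)        # apparent speed-up: 101.0 in this toy example
    # With a_phase == b_phase the ratio is 2.0, i.e. the computation time is "only halved".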

For a summary of earlier postings see Latest progress in TGD.