Wednesday, April 27, 2016

Teslaphoresis and TGD

I found an interesting popular article about a recently discovered phenomenon christened Teslaphoresis (see this). This phenomenon might involve new physics. Tesla studied systems critical against dielectric breakdown and observed strange electrical discharges occurring over very long length scales. His colleagues decided that these phenomena have mere entertainment value and are "understood" in Maxwellian electrodynamics. Amateurs have however continued Tesla's experiments, and Teslaphoresis could be the final proof that something genuinely new is involved.

In the TGD framework these strange long ranged phenomena could correspond to quantum criticality and to large values of Planck constant implying quantum coherence over long length scales. The phases of ordinary matter with a non-standard value heff=n× h of Planck constant would correspond to dark matter in the TGD framework. I have earlier considered Tesla's findings from the TGD point of view, and my personal opinion has been that Tesla might have been the first experimenter to detect dark matter in the TGD sense. Teslaphoresis gives further support for this proposal.

The title of the popular article is "Reconfigured Tesla coil aligns, electrifies materials from a distance" and tells about the effects involved. The research group is led by Paul Cherukuri, and there is also an abstract about the work in the journal ACS Nano. The article also contains an excellent illustration that helps one to understand both the Tesla coil and the magnetic and electric fields involved. The abstract of the paper provides a summary of the results.

This paper introduces Teslaphoresis, the directed motion and self-assembly of matter by a Tesla coil, and studies this electrokinetic phenomenon using single-walled carbon nanotubes (CNTs). Conventional directed self-assembly of matter using electric fields has been restricted to small scale structures, but with Teslaphoresis, we exceed this limitation by using the Tesla coil’s antenna to create a gradient high-voltage force field that projects into free space. CNTs placed within the Teslaphoretic (TEP) field polarize and self-assemble into wires that span from the nanoscale to the macroscale, the longest thus far being 15 cm. We show that the TEP field not only directs the self-assembly of long nanotube wires at remote distances (≥ 30 cm) but can also wirelessly power nanotube-based LED circuits. Furthermore, individualized CNTs self-organize to form long parallel arrays with high fidelity alignment to the TEP field. Thus, Teslaphoresis is effective for directed self-assembly from the bottom-up to the macroscale.

Concisely: what is found is that single-walled carbon nanotubes (CNTs) polarise and self-assemble along the electric fields created by the capacitor over much longer length scales than expected. Biological applications (involving linear molecules like microtubules) come to mind. CNTs also tend to move towards the capacitor of the secondary coil of the Tesla coil (TC).

It is interesting to understand the TGD counterparts of the Maxwellian em fields involved with Tesla coils, and it turns out that many-sheetedness of space-time is necessary to understand the standing waves that are also involved. The fact that massless extremals (MEs) can carry light-like currents is essential for modelling currents classically using many-sheeted space-time. The presence of magnetic monopole flux tubes distinguishing TGD from Maxwellian theory is suggestive and could explain why Teslaphoresis occurs over such long length scales and why it induces self-organization phenomena for CNTs. The situation can be seen as a special case of the more general situation encountered in the TGD based model of living matter.

For background see the chapter About Concrete Realization of Remote Metabolism or the article Teslaphoresis and TGD.

For a summary of earlier postings see Latest progress in TGD.

Tuesday, April 26, 2016

Indications for high Tc superconductivity at 373 K with heff/h=2

Some time ago I learned about a claim by Ivan Kostadinov of superconductivity at a temperature of 373 K (100 C). There are also claims by E. Joe Eck about superconductivity: the latest at 400 K. I am not enough of an experimentalist to be able to decide whether to take the claims seriously or not.

The article of Kostadinov provides detailed support for the claim. Evidence for diamagnetism (the induced magnetization tends to reduce the external magnetic field inside a superconductor) is presented: at 242 K a transition takes place that reduces the magnitude of the negative susceptibility but keeps it negative. Evidence for a gap energy of 15 meV was found at 300 K: this is roughly the thermal energy kT/2 ≈ 13 meV at room temperature. Tape tests passing 125 A through a superconducting tape supported very low resistance (a copper tape started burning after about 5 seconds).

I-V curves at 300 K are shown to exhibit Shapiro steps with radiation frequencies in the range [5 GHz, 21 THz]. Josephson discovered long ago what - perhaps not so surprisingly - is known as the Josephson effect. As one drives a superconductor with an alternating current, the voltage remains constant at certain values. The difference between subsequent voltage values is given by the Shapiro step Δ V= h f/Ze. The interpretation is that the voltage suffers a kind of phase locking at these frequencies and the alternating current becomes a Josephson current with Josephson frequency f= ZeV/h, which is an integer multiple of the frequency of the driving current.

This actually gives a very nice test for the heff=n× h hypothesis: the Shapiro step Δ V should be scaled up by heff/h=n. The obvious question is whether this occurs in the present case or whether n=1 explains the findings.

The data represented by Figs. 12, 13, and 14 of the article suggest n=2 for Z=2 (a small numerical helper for this kind of check is given below). The alternative explanation would be that the step is for some reason Δ V= 2hf/Ze, corresponding to the second harmonic, or that the charge of the charge carrier corresponds to Z=1 (bosonic ion). I worried about a possible error in my calculation for several hours last night but failed to find any mistake.

  1. Fig. 12 shows the I-V curve at room temperature T=300 K. The Shapiro step is now 45 mV. This would correspond to the frequency f= ZeΔ V/h=11.6 THz. The figure caption tells that the frequency is fR=21.762 THz, giving fR/f ≈ 1.87. This would suggest heff/h=n ≈ fR/f≈ 2.

  2. Fig. 13 shows another I-V curve at 300 K. Now the Shapiro step is 4.0 mV and corresponds to a frequency of 1.24 THz. This would give fR/f≈ 1.95, giving heff/h=2.

  3. Fig. 14 shows an I-V curve with a single Shapiro step equal to about .12 mV. The frequency should be 2.97 GHz, whereas the reported frequency is 5.803 GHz. This gives fR/f≈ 1.95, giving n=2.
Irrespective of the fate of the claims of Kostadinov and Eck, the Josephson effect could provide an elegant manner to demonstrate whether the hierarchy of Planck constants is realized in Nature.
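
A minimal numerical sketch of such a check, assuming only the relation Δ V = n h f/(Ze) quoted above (the function names and the example frequency are my own illustrations, not numbers taken from Kostadinov's figures):

```python
# Minimal sketch of the scaled Shapiro-step relation Delta_V = n*h*f/(Z*e).
# Function names and example values are illustrative, not from the paper.

h = 6.62607015e-34   # Planck constant [J*s]
e = 1.602176634e-19  # elementary charge [C]

def shapiro_step(f_hz, Z=2, n=1):
    """Voltage step Delta_V = n*h*f/(Z*e) for radiation frequency f_hz,
    carrier charge Z*e and Planck constant h_eff = n*h."""
    return n * h * f_hz / (Z * e)

def estimate_n(delta_v, f_hz, Z=2):
    """Infer the candidate n = heff/h from a measured step and frequency."""
    return delta_v * Z * e / (h * f_hz)

# Example: a 5 GHz drive with Cooper pairs (Z=2)
f = 5e9
print(shapiro_step(f, Z=2, n=1))   # ~10.3 microvolts for ordinary h
print(shapiro_step(f, Z=2, n=2))   # twice that for heff = 2h
```

Given a measured step and the drive frequency, estimate_n returns the candidate value of heff/h; for an ordinary superconductor it should come out close to 1.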

For background see the chapter Quantum Model for Bio-Superconductivity: II.

For a summary of earlier postings see Latest progress in TGD.

Monday, April 25, 2016

Correlated Polygons in Standard Cosmology and in TGD

Peter Woit had an interesting This Week's Hype posting. The inspiration came from a popular article in Quanta Magazine telling about the proposal of Juan Maldacena and Nima Arkani-Hamed that the temperature fluctuations of the cosmic microwave background (CMB) could exhibit deviations from Gaussianity in the sense that there would be measurable maxima of n-point correlations in the CMB spectrum as a function of spherical angles. These effects would relate to the large scale structure of the CMB. Lubos Motl wrote about the article in a different and rather aggressive tone.

The article in Quanta Magazine does not go into technical details, but the original article of Maldacena and Arkani-Hamed contains detailed calculations for various n-point functions of the inflaton field and other fields, which in turn determine the correlation functions for the CMB temperature. The article is technically very elegant, but the assumptions behind the calculations are questionable. In the TGD Universe they would be simply wrong, and some inhabitants of the TGD Universe could see the approach as a demonstration of how misleading refined mathematics can be if the assumptions behind it are wrong.

It must be emphasized that already now it is known, and stressed also in the article, that the deviations of the CMB from Gaussianity are below the current measurement resolution, and that testing the proposed non-Gaussianities requires new experimental technology such as 21 cm tomography, which maps the redshift distribution of the 21 cm hydrogen line to deduce information about the fine details of the CMB, in particular its n-point correlations.

Inflaton vacuum energy is in the TGD framework replaced by Kähler magnetic energy, and the model of Maldacena and Arkani-Hamed does not apply. The elegant work of Maldacena and Arkani-Hamed however inspired a TGD based consideration of the situation, but with very different motivations. In TGD inflaton fields do not play any role since inflaton vacuum energy is replaced with the energy of magnetic flux tubes. The polygons also appear in a totally different manner and are associated with symplectic invariants identified as Kähler fluxes, and they might relate closely to quantum physical correlates of arithmetic cognition. These considerations lead to a proposal that the integers (3,4,5) define what one might call additive primes for integers n≥ 3 allowing a geometric representation as non-degenerate polygons - prime polygons. One should dig through the enormous mathematical literature to find out whether mathematicians have proposed this notion - probably so. Partitions would correspond to splicings of polygons into smaller polygons.
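
As a toy illustration of the additive-prime idea (my own sketch, not taken from the article): every integer n ≥ 3 has at least one partition into parts 3, 4 and 5, corresponding to a splicing of an n-gon into triangles, quadrilaterals and pentagons. A short enumeration in Python:

```python
# Toy sketch: partitions of n >= 3 into parts 3, 4 and 5 ("additive primes"),
# i.e. candidate splicings of an n-gon into non-degenerate polygons.
from functools import lru_cache

@lru_cache(maxsize=None)
def splicings(n, smallest=3):
    """Return all non-decreasing tuples of parts from {3,4,5} summing to n."""
    if n == 0:
        return [()]
    result = []
    for part in (3, 4, 5):
        if smallest <= part <= n:
            for rest in splicings(n - part, part):
                result.append((part,) + rest)
    return result

for n in range(3, 13):
    print(n, splicings(n))
# e.g. 7 -> [(3, 4)], 12 -> [(3, 3, 3, 3), (3, 4, 5), (4, 4, 4)]
```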

These splicings could be dynamical quantum processes behind arithmetic conscious processes involving addition. I have already earlier considered a possible counterpart of conscious prime factorization in the adelic framework. This will not be discussed here since the topic is definitely too far from primordial cosmology. The purpose of this article is only to give an example of how good work in theoretical physics - even when it need not be relevant for physics - can stimulate new ideas in a completely different context.

For details see the chapter More About TGD Inspired Cosmology or the article Correlated Triangles and Polygons in Standard Cosmology and in TGD.

For a summary of earlier postings see Latest progress in TGD.

Number theoretical feats and TGD inspired theory of consciousness

Number theoretical feats of some mathematicians like Ramanujan remain a mystery for those believing that the brain is a classical computer. Also the ability of idiot savants - lacking even the idea of what a prime is - to factorize integers into primes challenges the idea that an algorithm is involved. In this article I discuss ideas about how various arithmetical feats such as partitioning an integer into a sum of integers and factoring it into a product of primes might take place. The ideas are inspired by the number theoretic vision about TGD suggesting that basic arithmetics might be realized as naturally occurring processes at the quantum level and the outcomes might be "sensorily perceived". One can also ask whether zero energy ontology (ZEO) could allow quantum computations to be performed in polynomial instead of exponential time.

The Indian mathematician Srinivasa Ramanujan is perhaps the best-known example of a mathematician with miraculous gifts. He gave immediate answers to difficult mathematical questions - ordinary mortals had to do hard computational work to check that the answer was right. Many of Ramanujan's extremely intricate mathematical formulas have been proved much later by using advanced number theory. Ramanujan told that he got the answers from his personal Goddess. A possible TGD based explanation of this feat relies on the idea that in zero energy ontology (ZEO) quantum computation like activity could consist of steps, each consisting of a quantum computation and its time reversal, with the long-lasting part of each step performed in the reverse time direction at the opposite boundary of the causal diamond, so that the net time used at the second boundary would be short.

The adelic picture about state function reduction in ZEO suggests that it might be possible to have a direct sensory experience about the prime factorization of integers (see this). What about partitions of integers into sums of primes? Years ago I proposed that symplectic QFT is an essential part of TGD. The basic observation was that one can assign to polygons of the partonic 2-surface - say geodesic triangles - Kähler magnetic fluxes defining symplectic invariants identifiable as zero modes. This assignment makes sense also for string world sheets and gives rise to what is usually called an Abelian Wilson line. I could not specify at that time how to select these polygons. A very natural manner to fix the vertices of the polygon (or polygons) is to assume that they correspond to the ends of fermion lines which appear as boundaries of string world sheets. The polygons would be fixed rather uniquely by requiring that fermions reside at their vertices.

The number 1 is the only prime for addition so that the analog of prime factorization for sums is not of much use. Polygons with n=3,4,5 vertices are special in that one cannot decompose them into non-degenerate polygons. Non-degenerate polygons also represent integers n>2. This inspires the idea of the numbers 3,4,5 as "additive primes" for integers n>2 representable as non-degenerate polygons. These polygons could be associated with many-fermion states with negentropic entanglement (NE) - this notion relates to cognition and conscious information and is something totally new from the standard physics point of view. This inspires also a conjecture about a deep connection with arithmetic consciousness: polygons would define conscious representations for integers n>2. The splicings of polygons into smaller ones could be dynamical quantum processes behind arithmetic conscious processes involving addition.

For details see the chapter Conscious Information and Intelligence
or the article Number Theoretical Feats and TGD Inspired Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

Monday, April 18, 2016

"Final" solution to the qualia problem

The TGD inspired theory of qualia has evolved gradually to its present form.

  1. The original vision was that qualia and other aspects of conscious experience are determined by the change of the quantum state in the reduction: the increments of quantum numbers would determine the qualia. I had not yet realized that the repeated state function reduction (Zeno effect) realized in ZEO is central for consciousness. The objection was that qualia would change randomly from reduction to reduction.

  2. Later I ended up with the vision that the rates for the changes of quantum numbers would determine the qualia: this idea was realized in terms of a sensory capacitor model in which qualia would correspond to a kind of generalized dielectric breakdown feeding the quantum numbers characterizing the quale to the subsystem responsible for the quale. The Occamistic objection is that the model brings in an additional element not present in quantum measurement theory.

  3. The view that emerged while writing the critique of IIT was that qualia correspond to the quantum numbers measured in the state function reduction. That in ZEO the qualia remain the same for the entire sequence of repeated state function reductions is not a problem, since qualia are associated with a sub-self (sub-CD), which can have a lifetime of, say, about .1 seconds! Only a generalization of standard quantum measurement theory is needed to reduce the qualia to fundamental physics. This for instance supports the conjecture that visual colors correspond to QCD color quantum numbers. This makes sense in the TGD framework, which predicts scaled variants of QCD type physics even in cellular length scales.

    This view implies that the model of the sensory receptor based on the generalization of dielectric breakdown is wrong as such, since the rate for the transfer of the quantum numbers would not define the quale. A possible modification is that the analog of dielectric breakdown generates a Bose-Einstein condensate and that the quantum numbers of the BE condensate give rise to the qualia assignable to the sub-self.

For details see the article TGD Inspired Comments about Integrated Information Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

NMP and adelic physics

In a given p-adic sector the entanglement entropy (EE) is defined by replacing the logarithms of the probabilities in the Shannon formula with the logarithms of their p-adic norms. The resulting entropy satisfies the same axioms as ordinary entropy but makes sense only for probabilities which are rational valued or belong to an algebraic extension of rationals. The algebraic extension corresponds to the evolutionary level of the system, and the algebraic complexity of the extension serves as a measure of this evolutionary level. p-Adically also extensions determined by roots of e can be considered. What is so remarkable is that the number theoretic entropy can be negative.

A simple example gives an idea of what is involved. If the entanglement probabilities are rational numbers Pi=Mi/N, ∑i Mi=N, then the primes appearing as factors of N give a negative contribution to the number theoretic entanglement entropy and thus correspond to information. The factors of Mi give positive contributions. For maximal entanglement with Pi=1/N the EE is therefore negative. The interpretation is that the entangled state represents quantally a concept or a rule as a superposition of its instances defined by the state pairs in the superposition. An identity density matrix means that one can choose the state basis in an arbitrary manner, and the interpretation could be in terms of an "enlightened" state of consciousness characterized by "absence of distinctions". In the general case the basis is unique.
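
A minimal numerical sketch of this definition (my own illustration; the helper functions below are assumptions, only the Shannon-type formula with p-adic norms comes from the text above):

```python
# Sketch of number theoretic entanglement entropy for rational probabilities
# P_i = M_i/N: S_p = -sum_i P_i * log(|P_i|_p), with |.|_p the p-adic norm.
from fractions import Fraction
from math import log

def padic_valuation(n, p):
    """Largest k with p^k dividing the positive integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def padic_norm(q, p):
    """p-adic norm |q|_p = p^(-v_p(q)) of a nonzero rational q."""
    v = padic_valuation(q.numerator, p) - padic_valuation(q.denominator, p)
    return float(p) ** (-v)

def number_theoretic_EE(probs, p):
    """S_p = -sum_i P_i log|P_i|_p; a negative value means negentropy."""
    return -sum(float(P) * log(padic_norm(P, p)) for P in probs)

# Maximal entanglement with N = 6 = 2*3: P_i = 1/6 for i = 1..6
probs = [Fraction(1, 6)] * 6
for p in (2, 3, 5):
    print(p, number_theoretic_EE(probs, p))
# S_2 = -log 2 and S_3 = -log 3 are negative (information); S_5 = 0.
```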

Metabolism is a central concept in biology and neuroscience. Usually metabolism is understood as the transfer of ordered energy and of various chemical metabolites to the system. In TGD metabolism could be basically just a transfer of NE from nutrients to the organism. Living systems would be fighting for NE to stay alive (NMP is merciless!), and stealing of NE would be the fundamental crime.

TGD has been plagued by a longstanding interpretational problem: can one apply the notion of number theoretic entropy in the real context or not? If this is possible at all, under what conditions is this the case? How does one know that the entanglement probabilities are not transcendental, as they would be in the generic case? There is also a second problem: p-adic Hilbert space is not a well-defined notion, since the sum of p-adic probabilities, defined as moduli squared of the coefficients in a superposition of orthonormal states, can vanish, and one obtains zero norm states.

These problems disappear if the reduction occurs in the intersection of reality and p-adicities, since there the Hilbert spaces have some algebraic number field as their coefficient field. By strong form of holography (SH) the 2-D states provide all the information needed to construct quantum physics, in particular quantum measurement theory.

  1. The Hilbert spaces defining the state spaces always have as their coefficient field some algebraic extension of rationals, so that number theoretic entropies make sense for all primes. p-Adic numbers cannot be used as coefficients, and reals are not allowed either. Since the same Hilbert space is shared by the real and p-adic sectors, a given state function reduction in the intersection has real and p-adic space-time shadows.

  2. State function reductions at the 2-surfaces at the ends of the causal diamond (CD) take place in the intersection of realities and p-adicities if the parameters characterizing these surfaces are in the algebraic extension considered. It is however not absolutely necessary to assume that the coordinates of WCW belong to the algebraic extension, although this looks very natural.

  3. NMP applies to the total EE. It can quite well happen that NMP for the sum of the real and p-adic entanglement entropies does not allow the ordinary state function reduction to take place, since the p-adic negative entropies for some primes would become zero and net negentropy would be lost. There is a competition between the real and p-adic sectors, and the p-adic sectors can win! Mind has causal power: it can stabilize quantum states against state function reduction and tame the randomness that quantum physics would exhibit in the absence of cognition! Can one interpret this causal power of cognition in terms of intentionality? If so, p-adic physics would also be the physics of intentionality, as originally assumed.

A fascinating question is whether the p-adic view about cognition could allow one to understand the mysterious looking ability of idiot savants (and not only them but also some of the greatest mathematicians) to decompose large integers into prime factors. One possible mechanism is that the concretely represented integer N is mapped to a maximally entangled state with entanglement probabilities Pi=1/N, which means NE associated with the prime factors of N. The factorization would be experienced directly.

One can also ask whether the other mathematical feats performed by idiot savants could be understood in terms of their ability to directly experience - "see" - the prime composition (adelic decomposition) of an integer or even of a rational. This could for instance allow one to "see" whether an integer is - say - a 3rd power of some smaller integer: all prime exponents in it would be multiples of 3. If the person is able to generate an NE for which the probabilities Pi=Mi/N are, apart from normalization, equal to given integers Mi, ∑ Mi=N, then they could be able to "see" the prime compositions of Mi and N. For instance, they could "see" whether both Mi and N are 3rd powers of some integer and, just by going through trials, find the integers satisfying this condition.
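
For concreteness, the purely classical content of this kind of "seeing" is just a check on the exponents of the prime factorization - a trivial sketch of mine, not a model of the conscious process itself:

```python
# Sketch: an integer is a perfect k-th power iff every exponent in its
# prime factorization is a multiple of k.
def prime_factorization(n):
    """Return {prime: exponent} for n >= 2 by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_kth_power(n, k):
    return all(e % k == 0 for e in prime_factorization(n).values())

print(prime_factorization(1728))   # {2: 6, 3: 3}
print(is_kth_power(1728, 3))       # True: 1728 = 12^3
print(is_kth_power(1728, 2))       # False
```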

For details see the article TGD Inspired Comments about Integrated Information Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

Thursday, April 14, 2016

TGD Inspired Comments about Integrated Information Theory of Consciousness

I received from Lian Sidoroff a link to a very interesting article by John Horgan in Scientific American with the title "Can Integrated Information Theory Explain Consciousness?". Originally IIT is a theoretical construct of the neuroscientist Giulio Tononi (just Tononi in the sequel). Christof Koch is one of Tononi's coworkers. IIT can be regarded as a heavily neuroscience based non-quantum approach to consciousness, and its goal is to identify axioms about consciousness which should hold true also in physics based theories. Horgan's article was excellent and touched the essentials, and it was relatively easy to grasp what is common with my own approach to consciousness and also to comment on what I see as weaknesses of the IIT approach.

In my opinion, the basic weakness is the lack of a formulation in terms of fundamental physics. As such a quantum physics based formulation is certainly not enough, since present-day quantum physics is plagued by paradoxes, which are due to the lack of the theory of consciousness needed to understand what the notion of observer means. The question is not only about what fundamental physics can give to consciousness but also about what consciousness can give to fundamental physics.

The article Consciousness: here, there and everywhere by Tononi and Koch gives a more detailed summary of IIT. The article From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory gives a more technical description of IIT. Also the article by Scott Aaronson was very helpful in providing a computer scientific view of IIT and in presenting mathematical objections.

Tononi and Koch emphasize that IIT is a work in progress. This applies also to TGD and to the TGD inspired theory of consciousness. Personally I take the writing of a TGD inspired commentary about IIT as a highly interesting interaction, which might help me to learn new ideas and to spot weaknesses and imperfections in the basic definitions of the TGD inspired theory of consciousness. If TGD survives this interaction unchanged, the writing of these commentaries will have been a waste of time.

The key questions relate to the notion of information, more or less identified with consciousness.

  1. In IIT information is identified essentially as the reduction of entropy as a hypothetical conscious entity learns what the state of the system is. This definition of information, used in the definition of a conscious entity, is circular. It also involves a probabilistic element, thus bringing in either the notion of ensemble or the frequency interpretation.

  2. In TGD the notion of information relies on number theoretical entanglement entropy (EE) measuring the amount of information associated with entanglement. It makes sense for algebraic entanglement probabilities. In fact all probabilities must be assumed to belong to an algebraic extension of rationals if one adopts the p-adic view about cognition and extends physics to adelic physics involving the real and various p-adic number fields. Circularity is avoided, but the basic problem has been whether one can apply the number theoretic definition of entanglement entropy only in the p-adic sectors of the adelic Universe or whether it applies under some conditions also in the real sector. Writing this commentary led to a solution of this problem: the state function reduction in the intersection of realities and p-adicities, which corresponds to an algebraic extension of rationals, induces the reductions in the real and p-adic sectors. Negentropy Maximization Principle (NMP) maximizes the sum of the real and various p-adic negentropy gains. The outcome is the highly non-trivial prediction that cognition can stabilize also real entanglement and therefore has causal power. One can say that cognition tames the randomness of the ordinary state function reduction, so that Einstein was to some degree right when he said that God does not play dice.

  3. IIT identifies qualia with the links of a network, which I find difficult to take seriously. This criticism however led also to a criticism of the TGD identification of qualia, and a much simpler identification involving only the basic assumptions of ZEO based quantum measurement theory emerged. Occam's razor does not leave many options in this kind of situation.

IIT predicts panpsychism in a restricted sense, as does also TGD. The identification of conscious experience with a maximally integrated partition into two parts of an elementary system endowed with a mechanism (which could correspond to a computer program) is rather close to epiphenomenalism, since it means that consciousness is a property of a physical system. In the TGD framework consciousness has an independent causal and ontological status. Conscious existence corresponds to quantum jumps between physical states re-creating physical realities, and is therefore outside the existences defined by classical and quantum physics (in TGD classical physics is an exact part of quantum physics).

The comparison of IIT with TGD was very useful. I paste below the abstract of the article comparing IIT with the TGD inspired theory of consciousness.

Abstract

Integrated Information Theory (IIT) is a theory of consciousness originally proposed by Giulio Tononi. The basic goal of IIT is to abstract from neuroscience axioms about consciousness, hoped to provide constraints on physical models. IIT relies strongly on information theory. The basic problem is that the very definition of information is not possible without introducing a conscious observer, so that circularity cannot be avoided. IIT identifies a collection of a few basic concepts and axioms, such as the notions of mechanism (a computer program is one analog of a mechanism), information, integration and maximally integrated information (maximal interdependence of the parts of the system), and exclusion. Also the composition of mechanisms as a kind of engineering principle of consciousness is assumed and leads to the notion of conceptual structure, which should allow one to understand not only cognition but the entire conscious experience.

A measure for integrated information (called Φ), assignable to any partition of the system into two parts, is introduced in terms of relative entropies. Consciousness is identified with a maximally integrated decomposition of the system into two parts (Φ is maximal). The existence of this preferred decomposition of the system into two parts, besides the computer and the program running in it, distinguishes IIT from the computational approach to consciousness. Personally I am however afraid that bringing in physics could bring in physicalism and reduce consciousness to an epiphenomenon. Qualia are assigned to the links of a network. IIT can be criticized for this assignment as well as for the fact that it does not say much about free will or about the notion of time. Also a principle fixing the dynamics of consciousness is missing, unless one interprets mechanisms as such a principle.

In this article IIT is compared to the TGD vision relying on physics and on a general vision about consciousness strongly guided by the new physics predicted by TGD. At the classical level this new physics involves a new view about space-time and fields (in particular the notion of magnetic body, central in TGD inspired quantum biology and quantum neuroscience). At the quantum level it involves Zero Energy Ontology (ZEO) and the notion of causal diamond (CD) defining the 4-D perceptive field of the self; p-adic physics as the physics of cognition and imagination and the fusion of real and various p-adic physics into adelic physics; strong form of holography (SH) implying that 2-D string world sheets and partonic 2-surfaces serve as "space-time genes"; and the hierarchy of Planck constants making possible macroscopic quantum coherence.

Number theoretic entanglement entropy (EE) makes sense as a number theoretic variant of the Shannon entropy in the p-adic sectors of the adelic Universe. Number theoretic EE can be negative and corresponds in this case to genuine information: one has negentropic entanglement (NE). The TGD inspired theory of consciousness reduces to quantum measurement theory in ZEO. Negentropy Maximization Principle (NMP) serves as the variational principle of consciousness and implies that NE can only increase - this implies evolution. By SH real and p-adic 4-D systems are algebraic continuations of 2-D systems ("space-time genes") characterized by algebraic extensions of rationals labelling evolutionary levels of increasing algebraic complexity. The real and p-adic sectors have a common Hilbert space with coefficients in an algebraic extension of rationals, so that the state function reduction at this level can be said to induce the real and p-adic 4-D reductions as its shadows.

NE in the p-adic sectors stabilizes the entanglement also in the real sector (the sum of the real (ordinary) and various p-adic negentropies tends to increase) - the randomness of the ordinary state function reduction is tamed by cognition, and mind can be said to rule over matter. In IIT a quale corresponds to a link of a network like structure. In TGD a quale corresponds to the eigenvalues of the observables measured repeatedly as long as the corresponding sub-self (mental image, quale) remains conscious.

In ZEO a self can be seen as a generalized Zeno effect. What happens in the death of a conscious entity (self) can be understood: it is accompanied by the re-incarnation of a time reversed self, which in turn makes possible re-incarnation also in the more conventional sense of the word. The death of a mental image (sub-self) can also be interpreted as a motor action involving a signal to the geometric past: this is in accordance with Libet's findings.

There is much in common between IIT and TGD at the general structural level, but there are also profound differences. Also TGD predicts a restricted pan-psychism. NE is the TGD counterpart of integrated information. The combinatorial structure of NE gives rise to quantal complexity. Mechanisms correspond to 4-D self-organization patterns, with self-organization interpreted in the 4-D sense in ZEO. The decomposition of a system into two parts such that this decomposition can give rise to a maximal negentropy gain in the state function reduction is also involved but yields two independent selves. The engineering of conscious systems from simpler basic building blocks is predicted. Indeed, TGD predicts an infinite self hierarchy with sub-selves identifiable as mental images. The exclusion postulate is not needed in the TGD framework. Also network like structures emerge naturally as p-adic systems for which all decompositions are negentropically entangled, inducing in turn the corresponding real systems.

For details see the article TGD Inspired Comments about Integrated Information Theory of Consciousness.

For a summary of earlier postings see Latest progress in TGD.

Sunday, April 10, 2016

How Ramanujan did it?

Lubos Motl recently wrote a blog posting about the P≠ NP conjecture proposed in the theory of computation based on Turing's work. This unproven conjecture relies on a classical model of computation, developed by formulating mathematically what the women doing the hard computational work in offices at the time of Turing did. Turing's model is an extremely beautiful mathematical abstraction of something very everyday, but it does not involve fundamental physics in any manner, so it must be taken with caution. The basic notions include those of algorithm and recursive function, and the mathematics used in the model is the mathematics of integers. Nothing is assumed about what conscious computation is: it is somewhat ironic that this model has been taken by strong AI people as a model of consciousness!

  1. A canonical model for classical computation is the Turing machine, which takes bit sequences as inputs and transforms them to outputs, changing its internal state at each step. A more concrete model is in terms of a network of gates representing basic operations on the incoming bits: from these basic functions one constructs all recursive functions. The computer and the program actualize the algorithm and the computation eventually halts - at least one can hope that it does so. Assuming that the elementary operations require some minimum time, one can estimate the number of steps required and get an estimate for the computation time as a function of the size of the computation.

  2. If a computation, whose size is characterized by the number N of relevant bits, can be carried out in a time proportional to some power of N and thus polynomial in N, one says that the computation is in class P. Computations in class NP but presumably outside P would correspond to a computation time increasing with N faster than any power of N, say exponentially. Donald Knuth, whose name is familiar to everyone using LaTeX to produce mathematical text, believes in P=NP in the framework of classical computation. Lubos in turn thinks that the Turing model is probably too primitive and that a quantum physics based model is needed, and this might allow P=NP.
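
Purely to illustrate the scale difference at stake (a toy comparison of mine, nothing specific to the conjecture itself):

```python
# Toy comparison of polynomial vs exponential step counts as a function of
# input size N; illustrates why exponential-time algorithms become hopeless.
for N in (10, 20, 40, 80):
    poly = N ** 3        # a polynomial-time algorithm, ~N^3 steps
    expo = 2 ** N        # an exponential-time algorithm, ~2^N steps
    print(f"N={N:3d}  N^3={poly:>10d}  2^N={expo:>25d}")
```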

What about quantum computation as we understand it in present-day quantum physics: can it achieve P=NP?
  1. Quantum computation is often compared to a superposition of classical computations, and this might encourage one to think that it could be much more effective, but this does not seem to be the case. Note however that the amount of information represented by N qubits is exponentially larger than that represented by N classical bits, since entanglement is possible (see the sketch after this list). The prevailing wisdom seems to be that in some situations quantum computation can be faster than the classical one, but that quantum computation is not expected to change the status of the P versus NP question itself. Presumably this is because the model of quantum computation begins from the classical model and only (quantum computer scientists must experience this statement as an insult - apologies!) replaces bits with qubits.

  2. In a quantum computer one replaces bits with entangled qubits and gates with quantum gates, and the computation corresponds to a unitary time evolution with respect to a discretized time parameter, constructed in terms of simple fundamental building bricks. So called tensor networks realize the idea of local unitarity in a nice manner and have been proposed to define error correcting quantum codes. State function reduction halts the computation. The outcome is non-deterministic, but one can perform a large number of runs and deduce the result of the computation from the distribution of outcomes.
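
A minimal sketch of the bookkeeping behind the exponential-information remark in point 1 (my own illustration; numpy is assumed):

```python
# Sketch: the state of N qubits is a vector of 2^N complex amplitudes,
# whereas N classical bits carry only N binary values.
import numpy as np

def n_qubit_state(n):
    """Return a normalized random state vector of n qubits (2^n amplitudes)."""
    amps = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
    return amps / np.linalg.norm(amps)

psi = n_qubit_state(2)
print(psi.shape)          # (4,): 2^2 amplitudes for two qubits
for n in (1, 2, 10, 20):
    print(n, "qubits ->", 2 ** n, "complex amplitudes")
# The classical simulation cost grows as 2^N: this is the sense in which
# entangled qubits represent exponentially more information than bits.
```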

What about conscious computations? Or more generally, conscious information processing: could it proceed faster than computation in the sense of Turing? To answer this question one must first try to understand what conscious information processing might be. TGD inspired theory of consciousness provides one possible answer to this question, involving not only quantum physics but also new quantum physics.
  1. In the TGD framework Zero energy ontology (ZEO) replaces the ordinary positive energy ontology and forces one to generalize the theory of quantum measurement. This brings in several new elements. In particular, state function reductions can occur at both boundaries of the causal diamond (CD), which is the intersection of future and past directed light-cones and defines a geometric correlate for the self. Selves form a fractal hierarchy - CDs within CDs, and maybe also overlapping ones. Negentropy Maximization Principle (NMP) is the basic variational principle of consciousness and tells that state function reductions generate a maximum amount of conscious information. The notion of negentropic entanglement (NE), involving p-adic physics as the physics of cognition, and the hierarchy of Planck constants assigned to dark matter are also central elements.

  2. NMP allows a sequence of state function reductions to occur at a given boundary of the diamond-like CD - call it the passive boundary. The state function reduction sequence leaving everything unchanged at the passive boundary of the CD defines the self as a generalized Zeno effect. Each step shifts the opposite - active - boundary of the CD "upwards" and increases its distance from the passive boundary. Also the states at it change, and one has the counterpart of a unitary time evolution. The shifting of the active boundary gives rise to the experienced flow of time and to the sensory input generating cognitive mental images - the "Maya" aspect of conscious experience. The passive boundary corresponds to the permanent unchanging "Self".

  3. Eventually NMP forces the first reduction to the opposite boundary to occur. The self dies and reincarnates as a time reversed self. The opposite boundary of the CD would now be shifting "downwards", increasing the size of the CD further. At the next reduction to the opposite boundary the re-incarnation of the self in the geometric future of the original self would occur. This would be re-incarnation in the sense of Eastern philosophies. It would make sense to wonder whose incarnation in the geometric past I might represent!

Could this allow fast quantal computations to be performed by decomposing the computation into a sequence in which one proceeds in both directions of time? Could the incredible feats of some "human computers" rely on this quantum mechanism? The Indian mathematician Srinivasa Ramanujan is the best-known example of a mathematician with miraculous gifts. He gave immediate answers to difficult mathematical questions - ordinary mortals had to do hard computational work to check that the answer was right. Many of Ramanujan's extremely intricate mathematical formulas have been proved much later by using advanced number theory. Ramanujan told that he got the answers from his personal Goddess.

Might it be possible in ZEO to perform quantally, much faster - even in polynomial time - computations requiring classically non-polynomial time? If this were the case, one might at least try to understand how Ramanujan did it, although higher level selves might also be involved (did his Goddess do the job?).

  1. A quantal computation would correspond to a state function reduction sequence at a fixed boundary of the CD defining a mathematical mental image as a sub-self. In the first reduction to the opposite boundary of the CD the sub-self representing the mathematical mental image would die and the quantum computation would halt. A new computation at the opposite boundary, proceeding in the opposite direction of geometric time, would begin and define a time-reversed mathematical mental image. This sequence of reincarnations of the sub-self as its time reversal could give rise to a sequence of quantum computation like processes taking less time than usual, since one half of the computations would take place at the opposite boundary in the opposite time direction (the size of the CD increases as the boundary shifts).

  2. If the average computation time is the same at both boundaries, the computation time would only be halved. Not very impressive. However, if the mental images at the second boundary - call it A - are short-lived and the selves at the opposite boundary B are very long-lived and represent very long computations, the process could be very fast from the point of view of A! Could one overcome the P≠NP constraint by performing computations during time-reversed re-incarnations?! Short-lived mental images at this boundary and very long-lived mental images at the opposite boundary - could this be the secret of Ramanujan?

  3. Was the Goddess of Ramanujan - a self at a higher level of the self-hierarchy - nothing but a time reversal of some mathematical mental image of Ramanujan (Brahman=Atman!), representing very long quantal computations? We have a night-day cycle of personal consciousness, and it could correspond to a sequence of re-incarnations at some level of our personal self-hierarchy. Ramanujan told that he met his Goddess in dreams. Was his Goddess the time reversal of that part of Ramanujan which was unconscious when Ramanujan slept? Intriguingly, Ramanujan was rather short-lived himself - he died at the age of 32! In fact, many geniuses have been rather short-lived.

  4. Why was the alter ego of Ramanujan a Goddess? Jung intuited that our psyche has two aspects: anima and animus. Do they quite universally correspond to the self and its time reversal? Do our mental images have gender?! Could our self-hierarchy be a hierarchical collection of animas and animi, so that gender would be something much deeper than biological sex? And what about the Yin-Yang duality of Chinese philosophy and the ka as the shadow of the persona in the mythology of ancient Egypt?

For a summary of earlier postings see Latest progress in TGD.

Friday, April 08, 2016

Quantum critical dark matter and tunneling in quantum chemistry

The quantum revolution, which started from biology, has started to infect also chemistry. Phys.org contains an interesting article titled Exotic quantum effects can govern the chemistry around us. The article tells about the evidence that quantum tunnelling takes place in chemical reactions even at temperatures above the boiling point of water. This is not easy to explain in the standard quantum theory framework. No one except me has the courage to utter aloud the words "non-standard value of Planck constant". This is perfectly understandable, since at this moment these words would still mean instantaneous academic execution.

Quantum tunneling means that a quantum particle is able to move through a classically forbidden region, where its momentum would be imaginary. The tunnelling probability can be estimated by solving the Schrödinger equation assuming that a free particle described as a wave arrives from one side of the barrier and is partially reflected and partially transmitted. The tunneling probability is proportional to exp(-2∫ K dx), where k=iK is the imaginary wave vector in the forbidden region - imaginary because the kinetic energy T=p^2/2m of the particle equals T= E-V and is negative. In the forbidden region the momentum p is imaginary, as is the wave vector k=iK = p/hbar. The transmission/tunnelling probability decreases exponentially with the height and width of the barrier. Hence tunnelling should be extremely improbable in macroscopic and even nano-scales. The belief has been that this is true also in chemistry, especially at high temperatures, where quantum coherence lengths are expected to be short. Experiments have forced one to challenge this belief.

In the TGD framework there is a hierarchy of phases of ordinary matter with Planck constant given by heff=n× h. The exponent in the tunneling probability is proportional to 1/hbar. If hbar is large, the tunnelling probability increases, since the damping exponential is near unity. Tunneling becomes possible in scales which are by a factor heff/h=n longer than usual. At the microscopic level - in the sense of TGD space-time - the tunnelling would occur along magnetic flux tubes. This could explain the claimed tunneling effects in chemistry. In biochemistry these effects would be of special importance.
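
A minimal numerical sketch of this scaling for a rectangular barrier (my own toy numbers; the only physics input is the WKB factor exp(-2KL) with K = sqrt(2m(V-E))/hbar_eff quoted above):

```python
# Sketch: WKB tunnelling factor exp(-2*K*L) for a rectangular barrier,
# with K = sqrt(2m(V-E))/hbar_eff and hbar_eff = n*hbar.
from math import sqrt, exp

hbar = 1.054571817e-34      # J*s
m_e  = 9.1093837015e-31     # electron mass [kg]
eV   = 1.602176634e-19      # J

def tunnelling_factor(V_minus_E_eV, L_m, n=1, m=m_e):
    """WKB exponential for barrier height V-E (eV), width L (m), heff = n*h."""
    K = sqrt(2.0 * m * V_minus_E_eV * eV) / (n * hbar)
    return exp(-2.0 * K * L_m)

# Toy example: a 0.5 eV barrier, 1 nm wide, for n = 1 and a few larger n
for n in (1, 2, 5, 10):
    print(n, tunnelling_factor(0.5, 1e-9, n=n))
```

For these toy numbers the factor rises from roughly 1e-3 at n=1 towards order unity at n=10, which is the qualitative point made above.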

In the TGD framework non-standard values of Planck constant are associated with quantum criticality, and there is experimental evidence for quantum criticality in the bio-chemistry of proteins (see also this). In the TGD framework quantum criticality is the basic postulate about quantum dynamics in all length scales and makes TGD unique, since the fundamental coupling strength is analogous to a critical temperature and therefore has a discrete spectrum.

A physics student reading this has probably already noticed that diffraction is another fundamental quantum effect. By a naive dimensional estimate, the sizes of diffraction spots should scale up with heff. This might provide a second manner to detect the presence of large heff photons and also of other particles such as electrons. Dark variants of particles would not be directly observable but might induce effects in ordinary matter making the scaled up diffraction spots visible. For instance, could our visual experience provide some support for large heff diffraction? The transformation of dark photons to biophotons might make this possible.
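
A back-of-the-envelope version of that dimensional estimate (my own illustration: single-slit diffraction with the de Broglie wavelength λ = heff/p = n h/p):

```python
# Sketch: first single-slit diffraction minimum at sin(theta) = lambda/d,
# with the de Broglie wavelength lambda = h_eff/p = n*h/p for dark particles.
from math import asin, degrees

h = 6.62607015e-34   # J*s

def first_minimum_deg(p_momentum, slit_width, n=1):
    """Angle of the first diffraction minimum for h_eff = n*h (if it exists)."""
    lam = n * h / p_momentum
    x = lam / slit_width
    return degrees(asin(x)) if x <= 1 else None   # None: spot fills the aperture

# Toy example: an electron with p = 1e-24 kg*m/s through a 10 nm slit
p, d = 1e-24, 1e-8
for n in (1, 2, 5):
    print(n, first_minimum_deg(p, d, n=n))
```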

P. S. Large heff quantum tunnelling could provide one further mechanism for cold fusion. The tunnelling probability for overcoming the Coulomb wall separating an incoming charged nucleus from the target nucleus is extremely small. If the value of Planck constant is scaled up, the probability increases by the above mechanism. Therefore TGD allows one to consider at least 3 different mechanisms for cold fusion: all of them would rely on the hierarchy of Planck constants.

For a summary of earlier postings see Latest progress in TGD.

Wednesday, April 06, 2016

Is cold fusion becoming a new technology?

The progress in cold fusion research has been really fast during the last few years, and the most recent news might well mean the final breakthrough concerning practical applications, which would include not only wasteless energy production but maybe also the production of elements such as metals. The popular article titled Cold Fusion Real, Revolutionary, and Ready Says Leading Scandinavian Newspaper tells about the work of Prof. Leif Holmlid and his student Sindre Zeiner-Gundersen. For more details about the work of Holmlid et al see this, this, this, and this.

The latter revealed the details of an operating cold fusion reactor in Norway reported to generate 20 times more energy than is required to activate it. The estimate of Holmlid is that Norway would need 100 kg of deuterium per year to satisfy its energy needs (this would suggest that the amount of fusion products is rather too small to be practical except in situations where the amounts needed are really small). The amusing coincidence is that towards the end of last year I constructed a detailed TGD based model of cold fusion (see this), and the findings of Leif Holmlid served as an important guideline, although the proposed mechanism is different.

Histories are cruel, and the cruel history of cold fusion begins in 1989, when Pons and Fleischmann reported anomalous heat production involving a palladium target and electrolysis in heavy water (deuterium replacing hydrogen). The reaction is impossible in a world governed by textbook physics, since the Coulomb barrier makes it impossible for positively charged nuclei to get close enough. If ordinary fusion were in question, the reaction products should involve gamma rays and neutrons, and these have not been observed.

The community preferred textbooks over observations and labelled Pons and Fleischmann and their followers as crackpots, and it became impossible to publish anything in so called respected journals. The pioneers have however continued to work on cold fusion, and a few years ago the American Chemical Society had to admit that there might be something in it, and cold fusion researchers got the status of respectable researchers. There have been several proposals for working reactors, such as Rossi's E-Cat, and NASA is performing research on cold fusion. In countries like Finland cold fusion is still a cursed subject and will probably remain so until cold fusion becomes the main energy source also in the heating of the physics department.

The model of Holmlid for cold fusion

Leif Holmlid is a professor emeritus in chemistry at the University of Gothenburg. He has quite recently published work on Rydberg matter in prestigious APS journals and has now been invited to talk about his work on cold fusion at a meeting of the American Physical Society.

  1. Holmlid regards Rydberg matter as a probable precursor of cold fusion. Rydberg atoms have some electrons on very high orbitals with a large radius. Therefore the nucleus plus core electrons looks to them like a point nucleus, whose charge equals the nuclear charge plus that of the core electrons. Rydberg matter forms layer-like structures with a hexagonal lattice structure.

  2. Cold fusion would involve the formation of what Holmlid calls ultra-dense deuterium, having Rydberg matter as a precursor. If I have understood correctly, the laser pulse hitting the Rydberg matter would induce the formation of the ultra-dense phase of deuterium by contracting it strongly in the direction of the pulse. The ultra-dense phase would then suffer a Coulomb explosion. The compression seems to be assumed to happen in all directions. To me the natural assumption would be that it occurs only in the direction of the laser pulse, which defines the direction of the force acting on the system.

  3. The ultra-dense deuterium would have a density of about .13× 10^6 kg/m^3, which is 1.3× 10^3 times that of ordinary water. The nuclei would be so close to each other that only a small perturbation would make it possible to overcome the Coulomb wall, and cold fusion could proceed. A critical system would be in question. It would be hard to predict the outcome of an individual experiment. This would explain why cold fusion experiments have been so hard to replicate. The existence of ultra-dense deuterium has not been proven, but cold fusion seems to take place.

    Rydberg matter, which should not be confused with the ultra-dense phase, would be the precursor of the process. I am not sure whether Rydberg matter exists before the process or whether it would be created by the laser pulse. Cold fusion would occur in the observed microscopic fracture zones of solid metal substances.

Issues not so well-understood

The process has some poorly understood aspects.

  1. Muons as well as mesons like pions and kaons are detected in the outgoing beam generated by the laser pulse. Muons with a mass of about 106 MeV could be decay products of pions with a mass of 140 MeV and of kaons, but how could these particles, with masses much larger than the scale of the nuclear binding energy per nucleon of about 7-8 MeV for lighter nuclei, be produced even if low energy nuclear reactions are involved? Pions appear as mediators of the strong interaction in the old-fashioned model of nuclear interactions, but the production of on mass shell pions seems very implausible in low energy nuclear collisions.

  2. What is even stranger is that muons are produced even when the laser pulse is not used to initiate the reaction. Holmlid suggests that there are two reaction pathways for cold fusion: with and without the laser pulse. This forces one to ask whether the creation of Rydberg matter or something analogous to it is alone enough to induce cold fusion and whether the laser beam actually just provides the energy needed for this, so that the ultra-dense phase of deuterium would not be needed at all. The Coulomb wall problem would be solved in some other manner.

  3. The amount of gamma radiation and neutrons is small, so that ordinary fusion does not seem to be in question, as would be implied by the proposed mechanism of overcoming the Coulomb wall. Muon production would suggest muon catalyzed fusion as a mechanism of cold fusion, but also this mechanism should produce gammas and neutrons.

TGD inspired model of cold fusion

It seems that Holmlid's experiments realize cold fusion and that cold fusion might soon be a well-established technology. A real theoretical understanding is however missing. New physics is definitely required, and TGD could provide it.

  1. The TGD based model of cold fusion relies on the TGD based view about dark matter. Dark matter would correspond to phases of ordinary matter with a non-standard value of Planck constant heff=n× h, implying that the Compton sizes of elementary particles and atomic nuclei are scaled up by n and can be rather large - of atomic size or even larger.

    Also weak interactions can become dark: this means that the weak boson Compton lengths are scaled up, so that the weak bosons are effectively massless below the Compton length and weak interactions become as strong as electromagnetic interactions. If this happens, then weak interactions can lead to a rapid beta decay of dark protons transforming them to neutrons (or effectively neutrons, as it turns out). For instance, one can imagine that a proton or deuteron approaching a nucleus transforms rapidly to a neutral state by the exchange of dark W bosons and can overcome the Coulomb wall in this manner: this was my original proposal for the mechanism of cold fusion.

  2. The model assumes that electrolysis leads to the formation of the so called fourth phase of water discovered by Pollack. For instance, irradiation by infrared light can induce the formation of the negatively charged exclusion zones (EZs) of Pollack. Maybe also the laser beam used in the experiments of Holmlid could do this, so that compression to the ultra-dense phase would not be needed. The fourth phase of water forms layered structures consisting of 2-D hexagonal lattices with stoichiometry H1.5O, therefore carrying a strong electric charge. Also Rydberg matter forms this kind of lattice, which suggests a connection with the experiments of Holmlid.

    Protons must go somewhere from the EZ, and the interpretation is that one proton per hydrogen bonded pair of water molecules goes to a flux tube of the magnetic body of the system as a dark proton with a non-standard value of Planck constant heff=n× h; the dark protons form sequences, that is dark nuclei. If the binding energy of a dark nucleus scales like 1/heff (1/size), the binding energy of a dark nucleus is much smaller than that of an ordinary nucleus. The dark nuclear binding energy liberated in the formation would generate further EZs, and one would have a kind of chain reaction.

    In fact, this picture leads to the proposal that even old and boring ordinary electrolysis involves new physics. Hard to confess, but I have had grave difficulties in understanding why ionization should occur at all in electrolysis! The external electric field between the electrodes is extremely weak in atomic scales, and it is difficult to understand how it could induce the ionization needed to load an electric battery!

  3. The dark proton sequences need not be stable - this is the TGD counterpart of the Coulomb barrier problem. More than half of the nucleons of ordinary nuclei are neutrons, and a similar situation is the first expectation now. Dark weak boson (W) emission could lead to a dark beta decay transforming a proton to a neutron or to what looks like a neutron (explaining this cryptic statement would require a discussion of the nuclear string model). This would stabilize the dark nuclei.

    An important prediction is that dark nuclei are beta stable, since dark weak interactions are so fast. A second important prediction is that gamma rays and neutrons are not produced at this stage. The analogs of gamma rays would have energies of the order of the dark nuclear binding energy, which is the ordinary nuclear energy scale scaled down by 1/n. Radiation at lower energies would be produced. I have a vague memory that X rays in the keV range have been detected in cold fusion experiments. This would correspond to an atomic size scale for the dark nuclei.

  4. How are the ordinary nuclei then produced? The dark nuclei could return to a negatively charged EZ (Coulomb attraction) or leave the system along magnetic flux tubes, collide with some target, and transform to ordinary nuclei in a phase transition reducing the value of heff. It would seem that metallic targets such as Pd are favorites in this respect. A possible reason is that a metallic target can have a negative surface charge density (electron charge density waves are believed by some workers in the field to be important for cold fusion) and attracts the positively charged dark nuclei at the magnetic flux tubes.

    Essentially all of the nuclear binding energy would be liberated - not only the difference of the binding energies of the reacting nuclei as in hot fusion. At this stage also ultra-dense regions of deuterium might be created, since a huge binding energy is liberated, and this could induce ordinary fusion reactions as well. This process would create fractures in the metal target.

    This would also explain the claimed strange effects on metals of the so-called Brown's gas generated in electrolysis: it is claimed that Brown's gas (a piece of physics that serious academic physicists enjoying a monthly salary refuse to consider seriously) can melt metals although its temperature is not much more than 100 degrees Celsius.

  5. This model would predict the formation of beta-stable nuclei as the dark proton sequences transform to ordinary nuclei. This process would be analogous to the one believed to occur in supernova explosions and used to explain the synthesis of nuclei heavier than iron. It could even replace the hypothesis of supernova nucleosynthesis: indeed, SN1987A did not provide support for this hypothesis.

    The reactor of Rossi is reported to produce heavier isotopes of Ni and of copper. This strongly suggests that protons also fuse with Ni nuclei. Also heavier nuclei could enter the magnetic flux tubes and form dark nuclei with dark protons partially transformed to neutral nucleons. The transformation of dark nuclei to ordinary nuclei could also generate densities so high that ordinary nuclear reactions become possible.

  6. What about the mysterious production of pions and other mesons, which in turn produce muons?

    1. Could the transformation of dark nuclei to ordinary nuclei generate so high a local temperature that hadron physics would provide the appropriate description of the situation? The pion mass corresponds to an energy of 140 MeV and thus to a huge temperature of about .14 GeV, roughly 1.6×10^12 K. This is vastly higher than the temperature of the solar core and looks totally implausible.

    2. The total binding energy of a nucleus with about 20 nucleons, if liberated as a single pion, would be of this order of magnitude (a rough estimate is given after this list). Dark nuclei are quantum coherent structures: could this make such a "holistic" process possible in the transformation to an ordinary nucleus? This might be part of the story.

    3. Could the transformation to an ordinary nucleus involve the emission of a dark W boson with a mass of about 80 GeV, decaying to dark quark pairs that bind to dark mesons, which eventually transform to ordinary mesons? Could the dark W boson emission occur quantum coherently, so that the amplitude would be a sum over the emission amplitudes, and one would have an amplification of the decay rate so that it is proportional to the square of the dark nuclear charge? The effective masslessness below the atomic scale would make the rate for this process high. The emission would lead directly to the final-state nucleus by emission of on-mass-shell mesons.
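
A back-of-the-envelope numerical sketch of the scalings invoked in items 1-3 and 6 above. The snippet and the choice of reference scales are mine; the only physics inputs are standard constants, the heff=n× h scaling of the Compton length, and the assumed 1/n scaling of the dark nuclear binding energy, so only orders of magnitude matter.

# Rough checks of the dark-nucleus picture (illustration only; the 1/n
# scaling of the binding energy is the assumption made in the text).
import math

hbar_c = 197.3        # MeV*fm
k_B    = 8.617e-5     # eV/K

# Item 1: how large must n be for the proton Compton length
# lambda_C = 2*pi*hbar/(m_p*c) ~ 1.3 fm to reach atomic size ~ 1 Angstrom?
m_p      = 938.3                        # MeV
lambda_C = 2 * math.pi * hbar_c / m_p   # ~1.32 fm
n        = 1e5 / lambda_C               # 1 Angstrom = 1e5 fm
print("n ~ %.0e" % n)                   # ~8e4

# Items 2-3: ordinary ~8 MeV per nucleon scaled down by 1/n lands in the
# soft-X-ray range, within an order of magnitude of the keV scale above.
print("dark binding energy ~ %.0f eV per nucleon" % (8.0e6 / n))

# Item 6.1: temperature corresponding to the pion mass vs the solar core.
print("T(140 MeV) ~ %.1e K, solar core ~ 1.5e7 K" % (140e6 / k_B))

# Item 6.2: total binding energy of an A = 20 nucleus vs the pion mass.
print("E_bind(A=20) ~ %.0f MeV, m_pi ~ 140 MeV" % (20 * 8.0))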

For background see the chapter Cold fusion again of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy" or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Tuesday, April 05, 2016

Could N=2 super-conformal algebra be relevant for TGD?

The concrete realization of super-conformal symmetry (SCS) in the TGD framework has remained poorly understood. In particular, how SCS relates to the super-conformal field theories (SCFTs) has remained an open question. The most general super-conformal algebra assignable to string world sheets by the strong form of holography has N equal to the number 4+4=8 of spin states of the leptonic and quark-type fundamental spinors, but the space-time SUSY is badly broken for it. Covariant constancy of the generating spinor modes is replaced with holomorphy - a kind of "half covariant constancy". I have earlier considered a proposal that N=4 SCA could be realized in the TGD framework but have given up this idea. The right-handed neutrino and antineutrino are excellent candidates for generating N=2 SCS with a minimal breaking of the corresponding space-time SUSY, the covariantly constant neutrino being the natural generator. The possibility of this SCS in the TGD framework is considered in the sequel.

1. Questions about SCS in TGD framework

This work was inspired by questions not directly related to N=2 SCS, and it is good to consider these questions first.

1.1 Could the super-conformal generators have conformal weights given by poles of fermionic zeta?

The conjecture (see this) is that the conformal weights for the generators of the super-symplectic representation correspond to the negatives h= -sk of the poles sk of the fermionic partition function ζF(ks)=ζ(ks)/ζ(2ks). Here k is a constant whose value must be fixed from the condition that the spectrum is physical. ζ(ks) defines the bosonic partition function for particles whose energies are given by log(p), p prime. These partition functions require a complex temperature, but this is completely sensible in Zero Energy Ontology (ZEO), where thermodynamics is replaced with its complex square root.
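
The statement about ζ as a partition function can be made explicit with the Euler product. This is standard number theory, written here in LaTeX notation, with s standing for the argument ks and playing the role of a complex inverse temperature:

\zeta(s) = \prod_p \frac{1}{1-p^{-s}} = \prod_p \sum_{n_p \ge 0} e^{-s\, n_p \log p} \quad \text{(bosonic: each prime } p \text{ is a mode with energy } \log p\text{)} ,

\zeta_F(s) = \frac{\zeta(s)}{\zeta(2s)} = \prod_p \frac{1-p^{-2s}}{1-p^{-s}} = \prod_p \bigl(1+e^{-s\log p}\bigr) \quad \text{(fermionic: occupation numbers 0 or 1)} .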

The non-trivial zeros 2ks=1/2+iy of ζ(2ks) correspond to poles s=(1/2+iy)/(2k) of ζF(ks), and the corresponding conformal weights would be h=-(1/2+iy)/(2k). The trivial zeros 2ks=-2n, n=1,2,..., correspond to s=-n/k and thus to conformal weights h=n/k>0. Conformal confinement is assumed, meaning that the sum of the imaginary parts of the conformal weights of the generators creating the state vanishes.

What can one say about the value of k? The pole of ζ(ks) at ks=1, that is at s=1/k, would correspond to the conformal weight h=-1/k. For k=1 the conformal weights coming from the trivial zeros would be the positive integers h=1,2,...: this certainly makes sense. For the non-trivial zeros the real part of the conformal weight would be -1/4. By conformal confinement both a pole and its conjugate belong to the state, so that their contribution to the conformal weight is a negative half-integer: this is consistent with the facts about super-conformal representations. For the ground state of a super-conformal representation the conformal weight of a conformally confined state would be h=-K/2, and in p-adic mass calculations one would have K=6 (see this).
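
As a sanity check of the k=1 numerology, here is a small numerical sketch; the use of Python and mpmath is my choice and not part of the argument, it merely evaluates the formulas above at the first non-trivial zero of ζ.

# Numerical check of the k = 1 case (illustration only).
from mpmath import mp, zeta, zetazero

mp.dps = 25
k = 1

zero   = zetazero(1)        # first non-trivial zero: 0.5 + 14.1347...i
s_pole = zero / (2 * k)     # pole of zetaF(k*s) at s = (1/2 + iy)/(2k)
h      = -s_pole            # proposed conformal weight h = -s
print("pole s =", s_pole)       # ~ 0.25 + 7.067...i
print("Re(h)  =", h.real)       # ~ -0.25, i.e. -1/4 for k = 1
print("pair   =", 2 * h.real)   # a conjugate pair contributes ~ -1/2

# zetaF(k*s) = zeta(k*s)/zeta(2*k*s) indeed blows up near the pole:
for eps in (1e-2, 1e-4, 1e-6):
    s = s_pole + eps
    print(eps, abs(zeta(k * s) / zeta(2 * k * s)))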

Negative ground state conformal weights look strange, but p-adic mass calculations indeed require them: h=-3 is needed.

1.2 What could be the origin of negative ground state conformal weights?

Super-symplectic conformal symmetries are realized at the light-cone boundary, and the various Hamiltonians defining the analogs of Kac-Moody generators are proportional to functions f(rM)HJ,m HA, where HJ,m are spherical harmonics at the 2-sphere rM=constant, HA is a color partial wave in CP2, and f(rM) is a partial wave in the radial light-like coordinate. f(rM) is an eigenstate of the scaling operator L0=rM d/drM and has the form (rM/r0)^h, where h is the conformal weight, which must be of the form h=-1/2+iy.

To get plane wave normalization for the amplitudes

(rM/r0)^h = (rM/r0)^(-1/2) exp(iyx) ,

x = log(rM/r0) ,

one must assume h=-1/2+iy. Together with the integration measure drM, the factor (rM/r0)^(-1/2) appearing in each amplitude effectively produces the scale-invariant measure dx = drM/rM, and the inner product of two conformal plane waves exp(i yi x), x = log(rM/r0), becomes the desired expression ∫ exp[i(y1-y2)x] dx = 2π δ(y1-y2), which is just the usual inner product of plane waves labelled by the momenta yi (the explicit computation is written out below).
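
For definiteness, the computation written out in LaTeX notation; nothing TGD-specific is assumed, only the change of variable x = log(rM/r0):

f_y(r_M) = \left(\frac{r_M}{r_0}\right)^{h}, \qquad h = -\tfrac{1}{2}+iy, \qquad L_0 f_y = r_M \frac{df_y}{dr_M} = h\, f_y ,

\int_0^{\infty} \overline{f_{y_1}}\, f_{y_2}\, \frac{dr_M}{r_0} = \int_0^{\infty} e^{i(y_2-y_1)\log(r_M/r_0)}\, \frac{dr_M}{r_M} = \int_{-\infty}^{\infty} e^{i(y_2-y_1)x}\, dx = 2\pi\,\delta(y_1-y_2) .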

If rM/r0 can be identified as a coordinate along the fermionic string (this need not always be the case), one can interpret it as the real or imaginary part of a hypercomplex coordinate at the string world sheet and continue these wave functions to the entire string world sheet. This would be a very elegant realization of conformal invariance.

1.3 How to relate degenerate representations with h>0 to the massless states constructed from tachyonic ground states with negative conformal weight?

This realization would however suggest that there must also be an interpretation in which the ground states with negative conformal weight hvac=-K/2 are replaced with ground states having vanishing conformal weight hvac=0, as in minimal SCAs, and in which what are regarded as massless states have the positive conformal weight h=-hvac>0 of the lowest physical state of the minimal SCA.

One could indeed start directly from the scale-invariant measure drM/rM rather than allowing it to emerge from drM. In the case of p-adic mass calculations this would require representations satisfying the Virasoro conditions for weight h=-hvac>0. The p-adic mass squared would now be shifted downwards and be proportional to L0+hvac. There seems to be no fundamental reason preventing this interpretation. One can also modify the scaling generator L0 by an additive constant term, and this does not affect the value of c (a short check is given below). This operation corresponds to replacing the basis {z^n} with the basis {z^(n+1/2)}.
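
A quick check of the statement about c, in LaTeX notation; this is standard Virasoro algebra, with the shift parameter a introduced only for the illustration:

[L_m,L_n] = (m-n)L_{m+n} + \frac{c}{12}\,m(m^2-1)\,\delta_{m+n,0}, \qquad L'_n := L_n + a\,\delta_{n,0} .

For m+n \neq 0 nothing changes, while

[L'_m,L'_{-m}] = 2m L_0 + \frac{c}{12}\,m(m^2-1) = 2m L'_0 + \frac{c}{12}\,m^3 - \Bigl(\frac{c}{12}+2a\Bigr) m .

The coefficient of the m^3 term, which fixes c, is untouched; only the part linear in m, that is the vacuum weight, is shifted.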

What makes this interpretation worth discussing is that the entire machinery of conformal field theories with non-vanishing central charge and non-vanishing but positive ground state conformal weight becomes accessible, allowing one to determine not only the spectrum of these theories but also their partition functions, and even to construct n-point functions, which in turn serve as the basic building bricks of S-matrix elements (see this).

The ADE classification of these CFTs in turn suggests a connection with the inclusions of hyperfinite factors and with the hierarchy of Planck constants. The fractal hierarchy of broken conformal symmetries, in which a sub-algebra isomorphic to the entire algebra defines the gauge algebra, would give rise to dynamical symmetries, and the inclusions of HFFs suggest that ADE groups define Kac-Moody type symmetry algebras for the non-gauge part of the symmetry algebra.

2. Questions about N=2 SCS

N=2 SCFTs have some inherent problems. For instance, it has been claimed that they reduce to topological QFTs. Whether N=2 SCS can be applied in the TGD framework is also questionable: these theories have the critical space-time dimension D=4, but the required metric signature of space-time is wrong.

2.1 Inherent problems of N=2 SCS

N=2 SCS has some severe inherent problems.

  1. N=2 SCS has critical space-time dimension D=4, which is extremely nice. On the other hand, N=2 requires that space-time has a complex structure and thus metric signature (4,0), (0,4) or (2,2) rather than Minkowski signature. A similar problem is encountered in twistorialization, and the TGD proposal is the Hamilton-Jacobi structure (see the appendix of this), which is a hybrid of a hypercomplex structure and a Kähler structure. There is also an old proposal by Pope et al (see this) that one can obtain N=2 SCS from a 6-D theory with signature (3,3) by a procedure analogous to dimensional reduction. The lifting of the Kähler action to the twistor space level allows the twistor space of M4 to have this signature, and the degrees of freedom of the sphere S2 are indeed frozen.

  2. There is also an argument by Eguchi that N=2 SCFTs reduce under some conditions to mere topological QFTs (see this). This looks bad, but there is a more refined argument that an N=2 SCFT transforms to a topological CFT only by a suitable twist (see this). This is a highly attractive feature, since TGD can indeed be regarded as an almost topological QFT. For instance, the Kähler action in Minkowskian regions could reduce to a Chern-Simons term for a very general solution ansatz. Only the volume term, having an interpretation in terms of the cosmological constant (see this), extremely small in recent cosmology, would not allow this kind of reduction. The topological description of particle reactions, based on generalized Feynman diagrams identifiable in terms of space-time regions with Euclidian signature of the induced metric, would allow building the n-point functions in the fermionic sector as those of a free field theory. Topological QFT in the bosonic degrees of freedom would correspond naturally to the braiding of fermion lines.

2.2 Can one really apply N=2 SCFTs to TGD?

The TGD version of SCA is gigantic compared to the ordinary SCA. It involves the super-symplectic algebra associated with the metrically 2-dimensional light-cone boundary (the light-like boundaries of causal diamonds) and the corresponding extended conformal algebra (the light-like boundary is metrically a sphere S2). Both algebras have a conformal structure with respect to the light-like radial coordinate rM and a conformal algebra also with respect to the complex coordinate of S2. The symplectic algebra replaces the finite-dimensional Lie algebra as the analog of a Kac-Moody algebra. Also the light-like orbits of partonic 2-surfaces possess this kind of SCA, but now the Kac-Moody algebra is defined by the isometries of the imbedding space. String world sheets possess an ordinary SCA assignable to the isometries of the imbedding space. An attractive interpretation is that rM at the light-cone boundary corresponds to a coordinate along the fermionic string, extendable to a hypercomplex coordinate at the string world sheet.

N=8 SCS seems to be the most natural candidate for the SCS behind TGD: all fermion spin states would correspond to generators of this symmetry. Since the modes generating the symmetry are, however, only half-covariantly constant (holomorphic), this SUSY is badly broken at the space-time level, and the minimal breaking occurs for the N=2 SCS generated by the right-handed neutrino and antineutrino.

The key motivation for applying minimal N=2 SCFTs to TGD is that their SCAs have a non-vanishing central charge c and vacuum weight h≥ 0, and the degenerate character of the ground state allows one to deduce differential equations for the n-point functions, so that these theories are exactly solvable. It would be extremely nice if the scattering amplitudes were basically determined by the n-point functions of minimal SCFTs.

A further motivation comes from the following insight. The ADE classification of N=2 SCFTs is an extremely powerful result, and there is a connection with the hierarchy of inclusions of hyperfinite factors of type II1, which is central for quantum TGD. The hierarchy of Planck constants assignable to the hierarchy of isomorphic sub-algebras of the super-symplectic and related algebras suggests an interpretation in terms of the ADE hierarchy, a rather detailed view about a hierarchy of conformal field theories, and even the identification of primary fields in terms of critical deformations.

The application of N=2 SCFTs in the TGD framework can however be challenged. The problem caused by the negative value of the vacuum conformal weight has already been discussed, but there are also other problems.

  1. One can argue that the covariantly constant right-handed neutrino - call it νR - defines a pure gauge super-symmetry, and it has taken a long time to decide whether this is the case or not. Taking the lacking evidence for space-time SUSY from the LHC at face value would be an easy but too light-hearted manner of getting rid of the problem.

    Could it be that at the space-time level the covariantly constant right-handed neutrino (νR) and its antiparticle (ν*R) generate a pure gauge symmetry, so that the resulting sfermions correspond to zero norm states? The oscillator operators for νR at the imbedding space level have anti-commutators proportional to pkγk, vanishing in the limit of vanishing massless four-momentum. This would imply that they generate sfermions as zero norm states. This argument is however formulated at the level of the imbedding space: the induced spinor modes reside at string world sheets and covariant constancy is replaced by holomorphy!

    At the level of the induced spinor modes located at string world sheets the situation is indeed different. The anti-commutators are not proportional to pkγk but in Zero Energy Ontology (ZEO) can be taken to be proportional to nkγk, where nk is a light-like vector dual to the light-like radial vector at the point of the light-like boundary of the causal diamond CD (part of the light-cone boundary) considered. Therefore also constant νR and ν*R are allowed as non-zero norm states and the 3 sfermions are physical particles. Both ZEO and the strong form of holography (SH) would play a crucial role in making the SCS a dynamical symmetry.

  2. A second objection is that the LHC has failed to detect sparticles. In the TGD framework this objection cannot be taken seriously. The breaking of N=2 SUSY would most naturally be realized as different p-adic length scales for the particle and the sparticle. The mass formula would be the same apart from a different p-adic mass scale. Sparticles could emerge at shorter p-adic length scales than those studied at the LHC (labelled by the Mersenne prime M89 and the Gaussian Mersenne MG,79=(1+i)^79-1); a rough illustration of the scaling is given after this list.

    On the other hand, one could argue that since the covariantly constant right-handed neutrino has no electroweak, color, or gravitational interactions, its addition to a state should not change the mass of the state. Again the point is, however, that one considers only neutrinos at the string world sheet, so that covariant constancy is replaced with holomorphy and all modes of the right-handed neutrino are involved. The Kähler Dirac equation brings in a mixing of left- and right-handed neutrinos serving as a signature of massivation, which in turn leads to SUSY breaking. One can of course ask whether the p-adic mass scales could be identical after all. Could the sparticles be dark, having a non-standard value of Planck constant heff=n× h, and be created only at quantum criticality (see this)?
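
The p-adic scaling argument can be illustrated with a short sketch. It assumes the p-adic length scale hypothesis p≈ 2^k with mass scale proportional to 2^(-k/2) and the standard TGD assignment of ordinary hadron physics to M107; the choice of k values is mine and only meant to show how a smaller p-adic length scale pushes the sparticles to a higher mass scale.

# Back-of-the-envelope p-adic mass scaling (illustration only; assumes
# p ~ 2^k and mass scale ~ 2^(-k/2) as in the p-adic length scale hypothesis).
def mass_ratio(k_heavy, k_light):
    """Ratio m(k_heavy)/m(k_light) for mass scale ~ 2^(-k/2)."""
    return 2 ** ((k_light - k_heavy) / 2)

print(mass_ratio(89, 107))  # 512: M_89 mass scale vs ordinary (M_107) hadronic scale
print(mass_ratio(79, 89))   # 32: a k = 79 sparticle vs a k = 89 particle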

This is a brief overall view of the most obvious problems and their proposed solutions in the TGD framework; in the following I will discuss the details. I am of course not an SCFT professional. I however dare to trust my physical intuition, since experience has taught me that it is better to concentrate on the physics rather than get drowned in poorly understood mathematical technicalities.

For details see the new chapter Could N=2 Super-Conformal Algebra Be Relevant For TGD? or the article with the same title.

For a summary of earlier postings see Links to the latest progress in TGD.