https://matpitka.blogspot.com/2016/12/

Thursday, December 29, 2016

Non-commutative space and strong form of holography

The precise formulation of the strong form of holography (SH) is one of the technical problems of TGD. A comment in the FB page of Gareth Lee Meredith led to the observation that besides the purely number theoretical formulation based on commutativity, also a symplectic formulation in the spirit of non-commutativity of imbedding space coordinates can be considered. One can however use only the notion of Lagrangian manifold and avoid making coordinates operators, which would lead to a loss of General Coordinate Invariance (GCI).

Quantum group theorists have studied the idea that space-time coordinates are non-commutative and have tried to construct quantum field theories with non-commutative space-time coordinates (see this). My impression is that this approach has not been very successful. In Minkowski space one introduces an antisymmetric tensor J^kl, and the uncertainty relation in linear M4 coordinates m^k would read something like [m^k, m^l] = l_P^2 J^kl, where l_P is the Planck length. This would be a direct generalization of the non-commutativity of momenta and coordinates expressed in terms of the symplectic form J^kl.

The 1+1-D case serves as a simple example. The non-commutativity of p and q forces one to use either p or q. The non-commutativity condition reads [p,q] = hbar J_pq and is the quantum counterpart of the classical Poisson bracket. Non-commutativity forces the restriction of the wave function to be a function of p or of q but not of both. More geometrically: one selects a Lagrangian sub-manifold to which the projection of J_pq vanishes; coordinates become commutative in this sub-manifold. This condition can be formulated purely classically: wave functions are defined on Lagrangian sub-manifolds to which the projection of J vanishes. Lagrangian manifolds are however not unique, and this leads to problems in this kind of quantization. In the TGD framework the notion of "World of Classical Worlds" (WCW) allows one to circumvent problems of this kind, and one can say that quantum theory is a purely classical field theory for WCW spinor fields: "quantization without quantization", as Wheeler would have stated it.
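The canonical 1+1-D commutator referred to above can be checked symbolically. A minimal sketch (my own illustration, using the standard position representation p = -i hbar d/dq rather than anything TGD-specific):

```python
import sympy as sp

# Check the canonical commutator [p, q] acting on a test wave function
# psi(q), with the momentum operator p = -i*hbar*d/dq.
q, hbar = sp.symbols('q hbar', real=True, positive=True)
psi = sp.Function('psi')(q)

def p_op(g):
    """Momentum operator in the position representation."""
    return -sp.I * hbar * sp.diff(g, q)

# [p, q] psi = p(q*psi) - q*p(psi)
commutator = sp.simplify(p_op(q * psi) - q * p_op(psi))
print(commutator)  # -> -I*hbar*psi(q), i.e. [q, p] = i*hbar
```

The derivative terms cancel and only the c-number term survives, which is why a wave function can depend on q or on p but not on both.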

GCI however poses a problem if one wants to generalize the quantum group approach from M4 to a general space-time: linear M4 coordinates assignable to the Lie algebra of translations acting as isometries do not generalize. In TGD space-time is a surface in the imbedding space H = M4 × CP2: this changes the situation, since one can use 4 imbedding space coordinates (preferred by the isometries of H) also as space-time coordinates. The analog of a symplectic structure J for M4 makes sense, and the number theoretic vision involving octonions and quaternions leads to its introduction. Note that CP2 naturally has a symplectic form.

Could it be that the coordinates for the space-time surface are in some sense analogous to symplectic coordinates (p1, p2, q1, q2), so that one must use either (p1, p2) or (q1, q2), providing coordinates for a Lagrangian sub-manifold? This would mean selecting a Lagrangian sub-manifold of the space-time surface. Could one require that the sum J_μν(M4) + J_μν(CP2) of the projections of the symplectic forms vanishes, forcing in the generic case localization to string world sheets and partonic 2-surfaces? In special cases also higher-D surfaces - even 4-D surfaces as products of Lagrangian 2-manifolds for M4 and CP2 - are possible: they would correspond to homologically trivial cosmic strings X2 × Y2 ⊂ M4 × CP2, which are no longer vacuum extremals but minimal surfaces if the action contains besides the Kähler action also a volume term.

But why this kind of restriction? In TGD one has the strong form of holography (SH): 2-D string world sheets and partonic 2-surfaces code for the data determining classical and quantum evolution. Could this projection of the M4 × CP2 symplectic structure to the space-time surface allow an elegant mathematical realization of SH and bring in the Planck length l_P defining the radius of the twistor sphere associated with the twistor space of M4 in the twistor lift of TGD? Note that this can be done without introducing imbedding space coordinates as operators, so that one avoids the problems with general coordinate invariance. Note also that the non-uniqueness would not be a problem, as it is in quantization, since it would correspond to the dynamics of 2-D surfaces.

The analog of a brane hierarchy for the localization of spinors - space-time surfaces; string world sheets and partonic 2-surfaces; boundaries of string world sheets - is suggestive. Could this hierarchy correspond to a hierarchy of Lagrangian sub-manifolds of space-time in the sense that J(M4) + J(CP2) = 0 holds true at them? Boundaries of string world sheets would trivially be Lagrangian manifolds. String world sheets allowing spinor modes should have J(M4) + J(CP2) = 0 at them. The vanishing of induced W boson fields is needed to guarantee a well-defined em charge at string world sheets, and also this condition allows 4-D solutions besides the generic 2-D solutions. The condition is physically obvious but mathematically not well understood: could the condition J(M4) + J(CP2) = 0 force the vanishing of the induced W boson fields? Lagrangian cosmic string type minimal surfaces X2 × Y2 would allow 4-D spinor modes. If the light-like 3-surface defining the boundary between Minkowskian and Euclidian space-time regions is a Lagrangian surface, the Chern-Simons term for the total induced Kähler form would vanish. The 4-D canonical momentum currents would however have a non-vanishing normal component at these surfaces. I have considered the possibility that the TGD counterparts of space-time super-symmetries could be interpreted as the addition of higher-D right-handed neutrino modes to the 1-fermion states assigned with the boundaries of string world sheets.

An alternative - but of course not necessarily equivalent - attempt to formulate this picture relies on the number theoretic vision. Space-time surfaces would be associative or co-associative depending on whether the tangent space or the normal space in the imbedding space is associative - that is, quaternionic. These two conditions would reduce space-time dynamics to associativity and commutativity conditions. String world sheets and partonic 2-surfaces would correspond to maximal commutative or co-commutative sub-manifolds of the imbedding space. Commutativity (co-commutativity) would mean that the tangent space (normal space as a sub-manifold of the space-time surface) is complex at each point and that these tangent spaces integrate to a 2-surface. SH would mean that the data at these 2-surfaces would be enough to construct quantum states. String world sheet boundaries would in turn correspond to real curves of the complex 2-surfaces, intersecting partonic 2-surfaces at points, so that the hierarchy of classical number fields would have a nice realization at the level of the classical dynamics of quantum TGD.

For background see the chapter How the hierarchy of Planck constants might relate to the almost vacuum degeneracy for twistor lift of TGD?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

p-Adic logic and hierarchy of partition algebras

As found in the article Boolean algebra, Stone spaces, and TGD, one can generalize Boolean logic to a logic in the finite field G(p) with p elements. p-Logics have very nice features. For a given set the p-Boolean algebra can be represented as maps having values in the finite field G(p). The subsets with a given value 0 ≤ k < p define the subsets of a partition, and one indeed obtains p subsets, some of which are empty unless the map is a surjection.
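The correspondence between G(p)-valued maps and p-partitions can be made concrete. A minimal sketch (the function name and example map are mine, not from the text):

```python
# A G(p)-valued "truth function" f: X -> {0, ..., p-1} partitions the
# set X into at most p subsets, one per truth value k.
def partition_from_map(X, f, p):
    """Return the partition {k: {x in X : f(x) = k}} induced by f."""
    parts = {k: set() for k in range(p)}
    for x in X:
        parts[f(x) % p].add(x)
    return parts

X = {0, 1, 2, 3, 4, 5}
p = 3
parts = partition_from_map(X, lambda x: x % p, p)
print(parts)  # three subsets; some would be empty if f were not a surjection
```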

The basic challenges are the following: generalize the logical negation and generalize the Boolean operations AND and OR. I have considered several options, but the one based on category theoretical thinking seems to be the most promising. One can imbed p1-Boolean algebras into a p-Boolean algebra by considering functions which have values in G(p1) ⊂ G(p). One can also project G(p)-valued functions to G(p1) by the mod p1 operation. The operations should respect the logical negation and the p-Boolean operations if possible.

  1. The basic question is how to define the logical negation. Since 2-Boolean algebra is imbeddable into any p-Boolean algebra, it is natural to require that also in the p-Boolean case the operation permutes 0 and 1. These elements are also algebraically preferred, since they are the neutral elements for sum and product. This condition could be satisfied by simply defining negation as an operation leaving the other elements of G(p) unaffected. An alternative definition would be the shift k → k−1. This is an attractive option since it corresponds to a cyclic symmetry. For G(p) also higher powers of this operation would define analogs of negation in accordance with p-valuedness.

    I have also considered the possibility that for p > 2 the analog of logical negation could be defined as the additive inverse k → p−k in G(p): k = p−1 would be mapped to k = 1 as one might expect, while k = 0 would be mapped to k = p = 0 and would thus be its own negation. This would suggest that k = 0 corresponds to an ill-defined truth value for p > 2. For p = 2, however, k = 0 must correspond to false. This option is not consistent with category theory inspired thinking.

  2. For G(p)-valued functions f one can define the p-analogs of both XOR (exclusive or: [(A OR B) but not (A AND B)]) and AND using the local sum and product for the everywhere non-vanishing G(p)-valued functions. One can also define the analog of OR in terms of f1 + f2 − f1f2 for arbitrary G(p)-valued functions. Note that the minus sign is essential, as one can see by considering the p = 3 case (1 + 1 − 1×1 = 1 whereas 1 + 1 + 1×1 = 0). For p = 2 this gives the ordinary OR, which is obviously non-vanishing unless both functions are identically zero. For p > 2, however, A OR B defined in this manner as f1 + f2 − f1f2 can have zeros even for functions having no zeros. The mod p1 projection G(p) → G(p1) indeed commutes with these operations.

    Could 3-logic with 0 interpreted as ill-defined logical value serve as a representation of Boolean logic? This is not the case: 1× 2=2 would correspond to 1× 0=0 but 2× 2=1 does not correspond to 0× 0=0.

  3. It would be nice to have a well-defined inverse of a Boolean function, giving an additional algebra structure for the partitions. For non-vanishing values of f(x) one would have (1/f)(x) = 1/f(x). How should one define (1/f)(x) for f(x) = 0? One can consider three options.

    1. Option I: If 0 is interpreted as an ill-defined value of a p-Boolean function, there is a temptation to argue that the value of 1/f is also ill-defined: (1/f)(x) = 0 for f(x) = 0. That function values would be replaced with their inverses only at points where they are non-vanishing would conform with how ill-defined Boolean values are treated in computation. This leads to a well-defined algebra structure, but the inverse defined in this manner is only a local inverse: one has (f·f^(-1))(x) = 1 only for f(x) ≠ 0. One has an algebra but not a field.

    2. Option II: One could consider the extension of G(p) by the inverse of 0, call it ∞, satisfying 0× ∞=1 ("false" AND ∞ = "true"!). Arithmetic intuition would suggest k× ∞ = ∞ for k>0 and k+∞ = ∞ for all k.

      On the other hand, the interpretation of + as XOR would suggest that k + ∞ corresponds to [(k OR ∞) but not (k AND ∞ = ∞)], suggesting k + ∞ = k, so that 0 and ∞ would be in a completely symmetrical position with respect to product and sum (k + ∞ = k and k + 0 = k; k×∞ = ∞ and k×0 = 0). It would be nice to have a logical interpretation for the inverse and for the element ∞, especially so in the 2-Boolean case. A plausible looking interpretation of ∞ would be as "ill-defined", implying that k OR ∞ and k AND ∞ are also "ill-defined". ["false" AND "ill-defined"] = "true" sounds however strange.

      For a set with N elements this would give a genuine field with (p+1)^N elements. For the more convincing arithmetic option the outcome is completely analogous to the addition of the point ∞ to the real or complex numbers.

    3. Option III: One could also allow only functions which are non-vanishing at all points of the set. This function space is however not closed under summation.

  4. For these three options one would have K(N) = p^N, K(N) = (p+1)^N, and K(N) = (p−1)^N different maps of this kind having additive and multiplicative inverses. This hierarchy of statements about statements continues ad infinitum with K(n) = K(K(n−1)). For Option II this gives K(n) = (p+1)^K(n−1), so that one does not obtain a finite field G(p,N) with p^N elements but a function field.

  5. One can also consider maps whose values are in the range 0 < k < p. This set of maps would however be closed only with respect to OR, and one would not obtain a hierarchy of finite fields. In this case the interpretation of 0 would be as undetermined, and for p = 2 this option would be trivial. For p = 3 one would have effectively two well-defined logic values, but the algebra would not be equivalent with the ordinary Boolean algebra.
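The pointwise operations discussed above are easy to experiment with. A minimal sketch (function names are mine): XOR as +, AND as ×, OR as f1 + f2 − f1f2, and negation as the cyclic shift k → k−1, all in G(p):

```python
# p-valued logic operations, elementwise in GF(p).
def p_xor(a, b, p): return (a + b) % p
def p_and(a, b, p): return (a * b) % p
def p_or(a, b, p):  return (a + b - a * b) % p
def p_neg(a, p):    return (a - 1) % p  # cyclic-shift negation; swaps 0 and 1 for p = 2

p = 3
print(p_or(1, 1, p))  # 1 + 1 - 1*1 = 1 in GF(3), as in the text
print(p_or(2, 2, p))  # 2 + 2 - 4 = 0 mod 3: OR of non-vanishing values can vanish
# For p = 2 the definition reduces to the ordinary Boolean OR:
assert all(p_or(a, b, 2) == (a | b) for a in (0, 1) for b in (0, 1))
```

The p = 3 example 2 OR 2 = 0 illustrates concretely why, for p > 2, functions with no zeros are not closed under this OR.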

The outcome for Option II would be a very nice algebraic structure having also a geometric interpretation, possibly interesting from the point of view of logic. p-Boolean algebra provides p-partitions with generalizations of XOR, OR, AND, and negation, and a finite field structure at each level of the hierarchy: a kind of calculus for p-partitions.

The lowest level of the algebraic structure generalizes as such also to p-adic-valued functions on a discrete or even continuous set. The negation fails to have an obvious generalization, and the second level of the hierarchy would require defining functions in the infinite-D space of p-adic-valued functions.

See the article Boolean algebra, Stone spaces, and TGD.


Monday, December 26, 2016

How AC voltage at critical frequencies could induce transition to microtubular superconductivity?

Blog and Facebook discussions have turned out to be extremely useful and quite often new details to the existing picture emerge from them. We have had interesting exchanges with Christoffer Heck in the comment section to the posting Are microtubules macroscopic quantum systems? and this pleasant surprise occurred also now thanks to a question by Christoffer.

Recall that Bandyopadhyay's team claims to have detected the analog of superconductivity, when microtubules are subjected to AC voltage (see this). The transition to superconductivity would occur at certain critical frequencies. For references and the TGD inspired model see the article.

The TGD proposal for bio-superconductivity - in particular that appearing in microtubules - is the same as that for high Tc superconductivity. Quantum criticality, large heff/h = n phases of Cooper pairs of electrons, and parallel magnetic flux tube pairs carrying the members of the Cooper pairs form the essential parts of the mechanism. S=0 (S=1) Cooper pairs appear when the magnetic fields at the parallel flux tubes have opposite (same) directions.

Cooper pairs would be present already below the gap temperature, but the possible super-currents could flow only in short loops formed by magnetic flux tubes in a ferromagnetic system. AC voltage at a critical frequency would somehow induce a transition to superconductivity in long length scales by inducing a phase transition of microtubules without helical symmetry to those with helical symmetry, and by fusing the conduction pathways with a length of 13 tubulins to much longer ones by reconnection of the magnetic flux tubes parallel to the conduction pathways.

The phonon mechanism for the formation of Cooper pairs in ordinary superconductivity cannot however be involved with high Tc superconductivity or bio-superconductivity. There is an upper bound of about 30 K for the critical temperature of BCS superconductors. A few days ago I learned about high Tc superconductivity around 500 K for n-alkanes (see the blog posting), so that the mechanism for high Tc is certainly different.

The question of Christoffer was the following. Could microwave radiation, for which photon energies are around 10^-5 eV for the ordinary value of Planck constant and correspond to the gap energy of BCS superconductivity, induce a phase transition to BCS superconductivity and maybe to microtubular superconductivity (if it exists at all)?

This inspires the question of how precisely the AC voltage at critical frequencies could induce the transition to high Tc and bio-superconductivity. Consider first what could happen in the transition to high Tc superconductivity.

  1. In high Tc superconductors such as copper oxides, anti-ferromagnetism is known to be essential, as are 2-D sub-lattice structures. Anti-ferromagnetism suggests that closed flux tubes form squares with opposite directions of the magnetic field at the opposite sides of the square. The opposite sides of the square would carry the members of a Cooper pair.

  2. At quantum criticality these squares would reconnect to very long flattened squares. The members of the Cooper pairs would reside at the parallel flux tubes forming the sides of the flattened square. The gap energy would consist of the interaction energies with the magnetic fields and the mutual interaction energy of the magnetic moments.

    This mechanism does not work in standard QM, since the energies involved are far too low as compared to the thermal energy. Large heff/h = n would however scale up the magnetic energies by n. Note that the notion of gap energy should perhaps be replaced with the collective binding energy per Cooper pair, obtained by dividing the difference of the total energies for the gap phase formed at a higher temperature and for the superconducting phase formed at Tc by the number of Cooper pairs.

    Another important distinction to BCS is that Cooper pairs would be present already below the gap temperature. At quantum criticality the conduction pathways would become much longer by reconnection. This would represent an example of "topological" condensed matter physics. Now however space-time topology would be in question.

  3. The analogs of phonons could be present as transversal oscillations of the magnetic flux tubes: at quantum criticality long wavelength "magneto-phonons" would be present. The transverse oscillations of the flux tube squares would give rise to reconnection and to the formation of long flattened squares serving as conduction pathways.

If irradiation or its generalization to high Tc works, the energy of the photon should be around the gap energy, or more precisely around the energy difference per Cooper pair between the phases with long flux tube pairs and with short square-like flux tubes.
  1. To induce superconductivity one should induce the formation of Cooper pairs in BCS superconductivity. In high Tc superconductivity it should induce a phase transition in which the small square-shaped flux tubes reconnect to long flux tubes forming the conducting pathways. The system should radiate away the energy difference between these phases: the counterpart of the binding energy could be defined as the radiated energy per Cooper pair.

  2. One could think of the analog of stimulated emission. Assume that Cooper pairs have two states: the genuine Cooper pair and the non-superconducting Cooper pair. This is the case in high Tc superconductivity but not in BCS superconductivity, where the emergence of superconductivity creates the Cooper pairs. One can of course ask whether one could speak about the analog of stimulated emission also in this case.

  3. Above Tc but below the gap temperature one has the analog of an inverted population: all pairs are in the higher energy state. Irradiation with a photon beam with energy corresponding to the energy difference gives rise to stimulated emission, and the system goes to the superconducting state with a lower energy.

This mechanism could explain the finding of Bandyopadhyay's team that AC perturbation at certain critical frequencies gave rise to a ballistic state (no dependence of the resistance on the length of the wire, so that the resistance must be located at its ends). The team used photons with frequency scales of MHz, GHz, and THz. The corresponding photon energy scales are about 10^-8 eV, 10^-5 eV, and 10^-2 eV for the ordinary value of Planck constant and are below thermal energies.
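These order-of-magnitude energies follow directly from E = h·f. A quick numeric check (standard physics, nothing TGD-specific assumed):

```python
# Photon energies E = h*f for the frequency scales quoted above.
h_eV_s = 4.135667e-15  # Planck constant in eV*s
for name, f in [("MHz", 1e6), ("GHz", 1e9), ("THz", 1e12)]:
    print(f"{name}: E = {h_eV_s * f:.1e} eV")
# MHz -> ~4e-9 eV, GHz -> ~4e-6 eV, THz -> ~4e-3 eV, i.e. of the order
# 10^-8, 10^-5, and 10^-2 eV, all below the thermal energy kT ~ 2.7e-2 eV
# at 310 K.
```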

In TGD classical radiation should also have large heff/h = n photonic counterparts with much larger energies E = heff × f to explain the quantal effects of ELF radiation in the EEG frequency range on the brain (see this). The general proposal is that heff equals what I have called the gravitational Planck constant hbar_gr = GMm/v0 (see this or this). This implies that dark cyclotron photons have a universal energy range with no dependence on the mass of the charged particle. Bio-photons have energies in the visible and UV range, much above the thermal energy, and would result from the transition transforming dark photons with large heff = hgr to ordinary photons.
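The size of n = heff/h implied by E = heff × f can be estimated with a back-of-the-envelope calculation (the specific frequency and target energy below are my illustrative assumptions, not values from the text):

```python
# Estimate the n = h_eff/h needed to lift an ELF photon at an EEG
# frequency (say f = 10 Hz) to bio-photon energies (~2 eV, visible range).
h_eV_s = 4.135667e-15  # Planck constant in eV*s
f_eeg = 10.0           # Hz, EEG alpha-band frequency (assumed)
E_target = 2.0         # eV, visible-range bio-photon energy (assumed)
n = E_target / (h_eV_s * f_eeg)
print(f"n = h_eff/h ~ {n:.1e}")  # of the order 10^13
```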

One could argue that an AC field does not correspond to radiation. In the TGD framework this kind of electric field can be interpreted as the analog of a standing wave generated when a charged particle has contacts to parallel "massless extremals" representing classical radiation with the same frequency propagating in opposite directions. The net force experienced by the particle corresponds to a standing wave.

Irradiation using classical fields would be a general mechanism for inducing bio-superconductivity. Superconductivity would be generated when it is needed. The findings of Blackman and other pioneers of bio-electromagnetism about the quantal effects of ELF em fields on the vertebrate brain stimulated the idea about dark matter as phases with a non-standard value of Planck constant. Also these findings could be interpreted as the generation of a superconducting phase by this phase transition.


Taos hum as remote metabolism?

I have been considering an explanation of the Taos hum. Hints come from the observations that it begins after sunset, as microwave "static" presumably generated by living organisms, and from the phenomenon of microwave hearing: microwaves modulated with sound frequencies can be heard. The Taos hum is claimed to correlate also with the acoustics of the building, which suggests that it is a real phenomenon. The Taos hum can be an (extremely) unpleasant experience: it sounds like an idling diesel engine. I know this from personal experience, since I suffered from the Taos hum when I was younger (as I realized while developing a model for it!).

Could the microwaves, transformed in the body to sound or directly to nerve pulse patterns generating a sensation of hearing, be the reason for the Taos hum?

Why the Taos hum? Could animals use microwaves for "seeing" in the absence of sunlight? But for what purpose would plants use microwaves? Could organisms send negative energy heff = n×h microwaves to the environment and suck metabolic energy quanta with energies around 0.5 eV in this manner? Remote metabolism! Or maybe time reversed photosynthesis in the dark! Bio-photons indeed have an energy spectrum in the visible and UV range, as sunlight also does. This would require a non-standard value of Planck constant.

This hypothesis would explain why the microwaves causing the Taos hum are not observed directly. And if something is sucking metabolic energy from you, it would be rather natural to experience very unpleasant feelings and to try to find a place to hide, as many sufferers of the Taos hum try to do!

For background see the chapter Bio-Systems as Conscious Holograms .


Sunday, December 25, 2016

High Tc superconductivity in n-alkanes above 231 C

Superconductivity with a critical temperature of 231 C has been reported for n-alkanes containing n = 16 or more carbon atoms in the presence of graphite (see this).

Alkanes (see this) can be linear (C_nH_2n+2), with the carbon backbone forming a snake-like structure; branched (C_nH_2n+2, n > 2), in which the carbon backbone splits in one or more directions; or cyclic (C_nH_2n), with the carbon backbone forming a loop. Methane CH4 is the simplest alkane.

What makes the finding so remarkable is that alkanes serve as basic building bricks of organic molecules. For instance, cyclic alkanes modified by replacing some carbon and hydrogen atoms with other atoms or groups form the aromatic 5-cycles and 6-cycles serving as basic building bricks of DNA. I have proposed that aromatic cycles are superconducting and define fundamental units of molecular consciousness, which in the case of DNA combine to a larger linear structure.

Organic high Tc superconductivity is one of the basic predictions of quantum TGD. The mechanism of superconductivity would be based on Cooper pairs of dark electrons with a non-standard value of Planck constant heff = n×h implying quantum coherence in length scales scaled up by n (also bosonic ions and Cooper pairs of fermionic ions can be considered).

The members of a dark Cooper pair would reside at parallel magnetic flux tubes carrying magnetic fields with the same or opposite directions: for opposite directions one would have S=0 and for the same direction S=1. The cyclotron energy of the electrons, proportional to heff, would be scaled up, and this would scale up the binding energy of the Cooper pair and make superconductivity possible at temperatures even higher than room temperature (see this).
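The size of the scaling needed can be illustrated numerically. A rough sketch (the flux-tube field strength below is my assumption for illustration, not a value from the text): the electron cyclotron energy E = n·hbar·eB/m_e must exceed the thermal energy for the pairing to survive.

```python
# How large must n = h_eff/h be for the electron cyclotron energy in a
# weak magnetic field to exceed thermal energy at room temperature?
e = 1.602e-19        # C, elementary charge
m_e = 9.109e-31      # kg, electron mass
hbar = 1.0546e-34    # J*s
kT = 1.381e-23 * 300.0  # J, thermal energy scale at 300 K
B = 2e-5             # T (0.2 gauss, an assumed flux-tube field strength)

E_ordinary = hbar * e * B / m_e  # cyclotron energy for n = 1, in J
n_needed = kT / E_ordinary
print(f"E(n=1) = {E_ordinary / 1.602e-19:.1e} eV, need n > {n_needed:.1e}")
```

For this assumed field the ordinary cyclotron energy is of the order 10^-9 eV, so n of the order 10^7 would be required, which conveys why the scaling by n is essential to the argument.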

This mechanism would explain the basic qualitative features of high Tc superconductivity in terms of quantum criticality. Between the gap temperature and Tc one would have superconductivity in short length scales, and below Tc superconductivity in long length scales. These temperatures would correspond to quantum criticalities at which large heff phases would emerge.

What could be the role of the graphite? The 2-D hexagonal structure of graphite is expected to be important, as it is also in ordinary superconductivity: perhaps graphite provides the long flux tubes and the n-alkanes provide the Cooper pairs at them. Either graphite, n-alkane as an organic compound, or both together could induce quantum criticality. In living matter quantum criticality would be induced by a different mechanism: for instance, in microtubules it would be induced by AC current at critical frequencies.

For background and for links to TGD inspired work related to super-conductivity see the article New findings about high-temperature super-conductors.


Wednesday, December 21, 2016

Slime molds: conscious intelligence without central nervous system?

Andrei Patrascu gave in FB a link to a fascinating article about the monocellulars known as slime molds. These monocellulars have no central nervous system but behave like conscious, intelligent creatures.

The author of the article assumes that the monocellular slime mold is not conscious. Why? Because of the belief that consciousness is not possible without neurons. This belief is however only a belief.

By reading the article one learns that the slime mold behaves very much like a conscious intelligent entity and is able to perform almost miracles. For instance, slime molds can communicate their memories (learned skills) to each other by fusing together and separating after that! This brings strongly to mind the theory of morphic fields by Rupert Sheldrake, predicting that learning at the level of the individual implies learning at the level of the population.

No neurons, no brain! Where do the memories (learned behaviors) reside? The magnetic body is my bet. The MB would be the intentional agent controlling the behavior of the slime mold and even forcing it to split into pieces, which fuse together later. If this behavior is not intentional, what then!

Maybe the fusion of slime molds induces the replication of MBs and thus of behaviors. This mechanism would also explain why both pieces of a split flatworm (also the one without a brain), growing to full flatworms, inherit the behaviors of the planarian. A possible test for the MB is the existence of an analog of EEG in some frequency range, making possible sensory communications to and control by the MB.


Bio-catalysis, morphogenesis by generalized Chladni mechanism, and bio-harmonies

In the article Catalysis, morphogenesis by generalized Chladni mechanism, and bio-harmonies, to appear at my homepage, I try to relate 3 different ideas inspired by TGD.

  1. The first idea is that bio-catalysis relies on the notion of the magnetic body (MB) carrying dark matter: reconnections of U-shaped flux tubes giving rise to superconducting flux tube pairs connecting two systems, and the reduction of their lengths as the value of heff/h = n is reduced, play a key role. The reduction of heff/h = n for a dark atom also liberates the energy associated with hydrogen atom like states at flux tubes, with energy scaling as 1/heff^2. This energy could allow the reactants to overcome the potential wall making the otherwise very slow reaction fast (see this).

    This idea emerged from a model for the hydrino atoms proposed by Randell Mills, having a scaled-up binding energy spectrum manifesting itself as a radiation band in the EUV range with no chemical origin. The simplest TGD explanation is that the value of heff/h = n is n = 6 for visible matter, and for hydrino-like states it is m = 1, 2, or 3. This would predict the scaling of the energy spectrum by (n/m)^2, and its occurrence would liberate the excess binding energy to be used by the reacting molecules.

  2. The second idea is that the generalized Chladni mechanism (see this) is behind morphogenesis and therefore also involved with catalysis. Charged particles and even charged flux tubes would end up at the nodal surfaces of the electric field to form biological structures. One could speak about a dynamics of avoidance, and particles ending up at potential minima provide one example of this dynamics.

    In fact, there are strong mathematical and physical reasons to argue that the dynamics of the space-time surface is a dynamics of avoidance (see this). The preferred extremals for the sum of the Kähler action and the volume term are extremals of both, so that one can say that the force density defined by the Kähler action vanishes and the motion corresponds to a generalization of a geodesic line to a 4-D minimal surface.

  3. The third idea is that the genetic code is realized as the 3-chords of what I call bio-harmony, represented as dark photon triplets and "massless extremals" (MEs) or "topological light rays" (see this). This also gives rise to a realization as sounds, since living matter consists of electrets transforming light to sound and vice versa. The question is whether the sequence of 3-chords representing a gene could provide a basic realization of the Chladni mechanism, so that morphogenesis could be regarded as "music of blood" (Greg Bear has written a fascinating scifi book with this title).
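The classical Chladni mechanism invoked in point 2 can be sketched numerically with the standard textbook square-plate model (mode numbers below are my arbitrary choice): particles accumulate where the standing-wave amplitude vanishes.

```python
import numpy as np

# Standing-wave amplitude on a unit square plate for mode numbers (m, n):
#   u(x, y) = sin(m*pi*x)*sin(n*pi*y) + sin(n*pi*x)*sin(m*pi*y)
# "Avoiding" particles end up on the nodal lines where u = 0.
m, n = 3, 5
xs = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
U = np.sin(m * np.pi * X) * np.sin(n * np.pi * Y) \
  + np.sin(n * np.pi * X) * np.sin(m * np.pi * Y)

# Grid points lying (nearly) on a nodal line:
nodal = np.abs(U) < 1e-2
print(f"fraction of grid points near nodal lines: {nodal.mean():.3f}")
```

The nodal set forms the familiar Chladni figure for these mode numbers; the TGD proposal generalizes the vibrating-plate field to the electric fields controlling morphogenesis.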

For details see the article Catalysis, morphogenesis by generalized Chladni mechanism, and bio-harmonies or the chapter Quantum model of hearing of "TGD and EEG".


Monday, December 19, 2016

Antimatter as dark matter?

It has been found at CERN (see this) that matter and antimatter atoms have no differences in the energies of their excited states. This is predicted by CPT symmetry. Notice however that CP and T can be broken separately, and that this is indeed the case: the kaon is a classical example of this in particle physics. The neutral kaon and anti-kaon behave slightly differently.

This finding forces to repeat an old question. Where does the antimatter reside? Or does it exist at all?

GUTs predicted that baryon and lepton number are not conserved separately and suggested a solution to the empirical absence of antimatter. GUTs have however been dead for years, and there is actually no proposal for the solution of the matter-antimatter asymmetry in the framework of mainstream theories (actually there are no mainstream theories after the death of the superstring theories, which also assumed GUTs as low energy limits!).

In the TGD framework many-sheeted space-time suggests a possible solution to the problem: matter and antimatter reside at different space-time sheets. One possibility is that antimatter corresponds to dark matter in the TGD sense, that is a phase with heff=n× h, n=1,2,3,..., such that the value of n for antimatter differs from that for visible matter. Matter and antimatter would have no direct interactions and would interact only via classical fields, or by emission of say photons by matter (antimatter) suffering a phase transition changing the value of heff before absorption by antimatter (matter). This could be a rather rare process. Bio-photons could be produced from dark photons by this process, and this is assumed in the TGD based model of living matter.

What could the value of n for ordinary visible matter be? The naive guess is n=1, the smallest possible value. Randell Mills has however claimed the existence of scaled-down hydrogen atoms - Mills calls them hydrinos - with ground state binding energy considerably higher than for the hydrogen atom. The experimental support for the claim has been published in respected journals, and Mills' company is developing a new energy technology based on the energy liberated in the transition to the hydrino state.

These findings can be understood in TGD framework if one has actually n=6 for visible atoms and n=1, 2, or 3 for hydrinos. Hydrino states would be stabilized in the presence of some catalysts. See this.

The model suggests a universal mechanism of catalyst action. Among other things, catalyst action requires that a reacting molecule gets the energy needed to overcome the potential barrier making the reaction very slow. If an atom - say (dark) hydrogen - in the catalyst suffers a phase transition to hydrino (hydrogen with a smaller value of heff/h), it liberates binding energy, and if one of the reactant molecules receives it, it can overcome the barrier. After the reaction the energy can be sent back and the catalyst hydrino returns to the ordinary hydrogen state. The condition that the dark binding energy is above the thermal energy gives the condition n ≤ 32 on the value of heff/h=n. The size scale of the largest allowed dark atom would be about 100 nm, 10 times the thickness of the cell membrane.
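These scalings are easy to check numerically. The sketch below assumes (my reading of the text, not formulas given explicitly here) that the dark binding energy scales as E1/n^2 and the atomic size as the Bohr radius times n^2:

```python
# Scaling sketch for dark hydrogen atoms: binding energy ~ E1/n^2 and
# size ~ Bohr radius * n^2 (assumed scalings, following the text).
E1 = 13.6      # hydrogen ground state binding energy, eV
A0 = 5.29e-11  # Bohr radius, m

def dark_binding_energy(n):
    """Binding energy of dark hydrogen with heff/h = n, in eV."""
    return E1 / n**2

def dark_size(n):
    """Size scale of dark hydrogen with heff/h = n, in meters."""
    return A0 * n**2

for n in (2, 8, 16, 32):
    print(f"n = {n:2d}: E = {dark_binding_energy(n):.4f} eV, size = {dark_size(n) * 1e9:.1f} nm")
# For n = 32 the size is ~54 nm, of the order of the ~100 nm quoted above.
```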

The notion of a high energy phosphate bond is a somewhat mysterious concept and manifests itself as the ability to provide energy in the ATP to ADP transition. There are claims that no such bond exists. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to the hydrino state (a state with smaller value of heff/h) and liberate the binding energy? Could the decay of ATP to ADP produce the original, possibly dark, hydrogen? Metabolic energy would be needed to kick it back to an ordinary bond in ATP.

So: could it be that one has n=6 for stable matter and n is different from this for stable antimatter? Could the small CP breaking cause this?


Saturday, December 17, 2016

Dogmatism, ego, and elite of mental images

I have had many very useful discussions in Facebook with people who represent the scientific establishment. This establishment defines itself by a collection of shared basic beliefs. In natural sciences the most important pillars of dogmatism shared by the various schools are the following.

  1. Physicalism and naive length-scale reductionism are the key dogmas. A particle physicist would say that the only interesting things to learn about physics lie below the LHC length scale. Biology and neuroscience - and still less condensed matter physics, chemistry, or nuclear physics - involve nothing that could not be understood in terms of existing physics. Saying "just complex" or "dirty" is the convenient manner to get rid of the unpleasant feeling that maybe there might still be something to be discovered.

  2. Consciousness is some totally uninteresting tiny effect possibly interesting for a cognitive scientist but not for a person with wider interests. Consciousness has no causal powers and there is no free will. Some dogmatists even deny the existence of consciousness. I must say that this leaves me speechless.

  3. Quantum mechanics is complete. In particular, there are absolutely no problems involved with quantum measurement theory. Furthermore, the world is in some miraculous manner completely classical above some scale and quantum theory has nothing to say about living matter and brain.

  4. Locality is one of the basic beliefs. Despite the fact that these scientists are themselves concrete examples of coherent biochemistry in macroscopic scales, the belief is that non-locality is impossible.

  5. Modern science is just application of algorithms ("shut up and calculate"): all important discoveries have already been made.

It is somewhat surprising and disappointing that so many young people who still should have the potential to learn something new are already engulfed by this belief system.

My intuitive view is that this cognitive rigidity of people identifying themselves as members of the establishment relates to the notion of ego, and I have become keenly interested in why ego preservation is so central for consciousness and can create such extreme rigidity that no new idea can get through. Tragically, often just those persons who would have the tools to develop new ideas further are the most rigid ones. These intellectually rigid persons rarely have - or bother to explain - any ideas of their own. They just defend their basic dogmas by attacking anyone with something new, using arguments which are mixtures of personal insults and shallow textbook statements.

Fortunately, this is not a completely general phenomenon: certain people often regarded as "weird" by so-called "normal" individuals - creative persons, artists, and sometimes even academic scientists - are able to learn new things and get enthusiastic about new ideas. Unfortunately, most academic people freeze after they have learned the basic algorithms defining their research profile and allowing them to produce a personal curriculum vitae.

The frozen ones isolate themselves from "revolutionaries" like me, so that it is difficult to find "research material" to test this view, so to say. The Web, and in particular FB, has however changed the situation. I have had many discussions with patients suffering from this cognitive paralysis. For instance, I have learned to know theoretical physicists who quite literally continue to live in the era of some fad dead a long time ago: GUTs, superstrings, or M-theory. Interestingly, there even exists a brain disorder in which time literally stops: the patient can have exactly the same contents of consciousness for decades. The worst - and really tragic - example of intellectual paralysis was a person just repeating "You are a crackpot" as a reaction to any comment of mine.

I naturally want to understand this mental freezing phenomenon using the notions of TGD inspired theory of consciousness. This freezing has of course been observed also by others during millennia, and ego is the popular notion used to explain it. Ego wants to stay as it is and defends itself vigorously against anything new. But what is this ego and why this defensive attitude?

The TGD inspired proposal is that ego is a collection of highly stabilized mental images defining the personal belief system - a kind of elite of mental images, the upper class. Mental images are sub-selves, living creatures, and need metabolic energy. The problem is that the new mental images want it too! The elite fights desperately to preserve the status quo and simply kills the newcomers. The person attacking me with personal insults is actually defending his mental images against incomers, which quite literally threaten the life of the internal cognitive elite. In the case of religious fanaticism, even a person representing different beliefs might be killed.

At quantum criticality the situation changes. The system is in the middle of revolution and stable but often old and tired mental images (also mental images get old!) can suddenly lose their metabolic resources for the vital newcomers. Only people, who can tolerate continual quantum criticality meaning continual cognitive revolution, can avoid the mental paralysis.

Selves form a hierarchy, and this applies at all levels. Proteins provide an excellent example: they are frozen into a folded state most of the time, and only in the presence of an external energy feed do they unfold and re-self-organize (I have called this brief revolutionary period cellular summer). The same repeats itself at the level of an entire society.


Wednesday, December 14, 2016

Hydrinos again

I have a habit of returning to the TGD explanation of various anomalies to see whether progress in TGD could allow new insights. This time the question whether hydrinos might be real served as an inspiration. This led me to consider a possible connection with cold fusion and a new TGD inspired model for hydrinos. I have discussed this topic earlier.

Randell Mills has written a book and numerous articles about the hydrino concept, and many of them are published in respected journals. Mills' company has a homepage containing, besides the commercial side, also a list of abstracts with links to the corresponding articles on the experimental aspects of the hydrino concept, giving a brief summary of what is known about hydrinos (see this).

The proposal is that the hydrogen atom allows besides the states labelled by integer n also states labelled by the inverse integer 1/n. Ordinary states have size proportional to n^2 and binding energy proportional to 1/n^2. Hydrinos would have sizes proportional to 1/n^2 and binding energies proportional to n^2. There would be a strange duality between binding energy and orbit size, and it is difficult to imagine a modification of the hydrogen atom making this possible. Not surprisingly, mainstream physicists do not accept the notion, since it challenges the existing atomic model.

The most straightforward proof of the concept would be the observation of radiation emitted as an ordinary hydrogen atom goes from the ground state to a hydrino state, emitting radiation with energy En ≈ n^2 E1, where E1 ≈ 13.6 eV is the ground state binding energy. The natural limit corresponds to n=137: in this case the binding energy becomes larger than the electron mass. Also more general transitions 1/n1→ 1/n2 are predicted (see this for a table of transition energies).

These transitions are however not observed. The explanation is that they are non-radiative transitions occurring as a catalyst molecule having an energy level with the same energy absorbs the emitted UV photon. The proposal is that the energy from the transition 1/n→ 1/(n+1), given by (2n+1) E1, goes to a many-particle state formed by n hydrogen atoms and is eventually liberated as continuum EUV radiation (see this).
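As a numerical illustration of the formulas quoted above (hydrino binding energy n^2 E1 and transition energy (2n+1) E1 for 1/n → 1/(n+1)), here is a short sketch:

```python
# Hydrino energetics as described in the text: the 1/n state has binding
# energy n^2 * E1, and the 1/n -> 1/(n+1) transition liberates (2n+1) * E1.
E1 = 13.6  # eV, hydrogen ground state binding energy

def hydrino_binding_energy(n):
    """Binding energy of the 1/n hydrino state, in eV."""
    return n**2 * E1

def transition_energy(n):
    """Energy liberated in the 1/n -> 1/(n+1) transition, in eV."""
    return (2 * n + 1) * E1

for n in range(1, 5):
    print(f"1/{n} -> 1/{n + 1}: {transition_energy(n):.1f} eV")
# 1/3 -> 1/4 gives 7 * 13.6 = 95.2 eV, close to the ~94 eV EUV figure
# discussed below.
```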

A skeptic can argue that if these transitions are possible, they should occur even spontaneously, and that if a catalyst is indeed necessary, there must be some good explanation for why this is the case. Hence the experimental support for the hypothesis is indirect, and one can also consider alternative explanations.

In any case, the articles are published in refereed journals such as the European Physical Journal, and the claim is that energy is produced and that the technology already exists. The energy production is explained in terms of the hydrino atom.

The article Mechanism of Soft X-ray Continuum Radiation from Low-Energy Pinch Discharges of Hydrogen and Ultra-low Field Ignition of Solid Fuels (see this) gives an idea about the experimental side.

The article reports EUV radiation in the wavelength range 10-20 nm (62 eV-124 eV), assigned to the transition 1/n=1/3→ 1/4 of the hydrino atom, for which the energy of the emitted quantum would be 94.2 eV. Emission in this wavelength range was observed for electrodes containing metal oxides favorable for reduction to the HOH (water) catalyst, so that the HOH catalyst would play a significant role. A low voltage, high current was passed through a solid fuel comprising a source of H and the HOH catalyst to produce an explosive plasma, and similar EUV radiation was detected. This kind of EUV radiation cannot be explained in terms of any chemical reaction.

Is there a connection with TGD based model for cold fusion?

The experiment brings to mind the experiments of the group led by Prof. Holmlid (see the popular article and the slides of the talk by Sveinn Olafsson) related to cold fusion, or low energy nuclear reactions (LENR). This work is taken rather seriously by the community, and the status of cold fusion has changed. Also in this case one considers an electrolyte, and water is in a key role. Also a Coulomb explosion producing plasma is involved, claimed to produce what is interpreted as a very dense phase of condensed matter consisting of string-like structures with the distance between hydrogen atoms given essentially by the Compton wavelength of the electron.

  1. In the TGD framework the atomic strings of Holmlid are replaced by nuclear strings (see this), interpreted as dark nuclei with a large value of heff, meaning that the Compton length of the proton is scaled up to that of the electron by a factor of about heff/h=2^11.

    Could the findings of Mills et al relate to the same phenomenon as the findings of Holmlid? The effective radius of the dark nucleus is 2.4× 10^-12 m. The radius of the n=4 hydrino would be 3.3× 10^-12 m, so that the two phenomena might have a common origin.

  2. Dark nuclear binding energy is liberated as dark photons as dark protons fuse to a dark nuclear string. Naive scaling of the nuclear binding energy per nucleon would mean that it scales like the inverse of the Compton length and is thus proportional to h/heff=2^-11. If the nuclear binding energy is taken to be of order 1 MeV, one has a binding energy scale of 500 eV, which is about 5-10 times higher than the energies in the EUV range. This would suggest that the hydrino does not reduce to the same physical effect as cold fusion. One must however be cautious and ready to challenge both the idea of low energy nuclear reactions and the hydrino atom as such.

  3. One could however also consider other values of heff/h. Assume that they come as powers of 2. For h/heff=2^-14 the Compton length is 2.84× 10^-11 m, to be compared with the Bohr radius 5.3× 10^-11 m. For h/heff=2^-13 the binding energy would be about 63 eV, which corresponds to the lower boundary of the energy interval. In this case the size of the dark nucleus would be 4 times the electron Compton length. Could the phase transition take place in two steps, or could one have quantum criticality in the TGD sense, meaning that phases with several values of heff are present? Or could the experiments of Mills and Holmlid differ in that Mills detects the heff/h=2^13 case and Holmlid the heff/h=2^11 case?

  4. The formation of the dark proton string would give rise to the emission of dark photons with energies given by the nucleon binding energies of the nuclear string and of its excited states formed in this manner. These dark photons are observed only if they transform to ordinary photons in the measurement volume. Their wavelength would be anomalously long - by a factor of order 2^13 longer than the wavelength of an ordinary EUV photon in the wavelength range 10-20 nm - and therefore in the length scale range 80-160 μm assignable to living cells. The transformation to ordinary photons could occur via the transition heff/h→ 1 and absorption by a complex of n hydrogen atoms transforming it to continuum radiation.

  5. The dark nuclei would eventually decay to ordinary nuclei and liberate the ordinary nuclear binding energy. There is experimental evidence for the occurrence of this process. It is however quite possible that most of the dark nuclei leak out of the system and that the energy is liberated in metal targets.

This is of course only one possible model for the effect observed by Mills, and TGD allows one to consider also a model of the hydrino based on the TGD view of dark matter.
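The size comparison made above is easy to reproduce (assuming, as in the text, that the dark nucleus size equals the electron Compton length and that the hydrino radius is the Bohr radius divided by n^2):

```python
# Size comparison between dark nuclei (Holmlid) and hydrinos (Mills),
# using the identifications made in the text.
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
M_E = 9.109e-31  # electron mass, kg
A0 = 5.29e-11    # Bohr radius, m

electron_compton = H / (M_E * C)  # ~2.4e-12 m: assumed dark nucleus size
hydrino_radius_n4 = A0 / 4**2     # ~3.3e-12 m: n = 4 hydrino, Mills' 1/n^2 scaling

print(f"dark nucleus size   : {electron_compton:.2e} m")
print(f"n = 4 hydrino radius: {hydrino_radius_n4:.2e} m")
# The two scales agree within ~40%, the basis for suggesting a common origin.
```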

Hydrino as dark atom?

I have considered several models for the hydrino in the TGD context. One of them corresponds to a quantum group analog of the Laguerre equation giving a fractional spectrum for the principal quantum number n. The spectrum would be more general than that proposed by Mills, since one would have n→ n/m rather than n→ 1/n.

The following considerations are inspired by the heretic proposal that the hydrogen atom might not actually correspond to the smallest possible value of heff/h=n. This idea has popped into my mind repeatedly, but I have dismissed it since I have felt that the heff/h=n hypothesis is already enough to irritate colleagues beyond the border. The phase transition n→ n1<n scales up the binding energy spectrum by a factor (n/n1)^2 and is the simplest proposal found hitherto.

The model should explain why hydrino states are not generated spontaneously but require the presence of a catalyst, and why the photons associated with the hydrino transitions are not detected directly but only as continuum radiation.

  1. The first guess would be that hydrino corresponds to a hydrogen atom with a non-standard value of Planck constant heff/h=nh. The problem is that the formal substitution h→ heff=nh× h in the hydrogen atom scales the energies as En→ En/nh^2, so that they decrease instead of increasing.

    One can however ask a heretic question. Does the ordinary hydrogen atom really correspond to the smallest possible value of heff/h=neff, with neff=1, and thus to αeff= e^2/4π hbar_eff? Should one take this as a purely experimental question, remembering also that in the perturbative approach the Planck constant does not appear in scattering rates except in loop corrections? Therefore in the lowest order the value of heff could vary. In TGD loop corrections vanish by quantum criticality, the coupling constant evolution is discretized, and it could be difficult to detect the variation of heff.

  2. Could the ordinary hydrogen atom actually correspond to heff/h=nH>1 and therefore to αeff= αR/nH= eR^2/4π hbar nH ("R" for "real"), so that one would have αR= nH α? The convergence of the perturbation theory would dictate the value of nH, and only in special situations would smaller values of nH be possible. This would explain why the hydrogen atom does not make a spontaneous transition to the hydrino state.

    The maximal value of nH would be nH,max=137 (the binding energy becomes larger than the electron mass), implying αR≈ 1 for heff,R= h/137. For the hydrino atom made possible by the presence of a catalyst, the value of heff would be reduced, so that the energy would be scaled up by a factor x^2, x= heff,H/heff,h= nH/nh: here "h" stands for "hydrino". The energy spectrum would transform as En/E1→ (nH/nh)^2× (En/E1), rather than En/E1=1/n^2→ n^2 as in the model of Mills. The scaling would be fractional.

  3. Could this model explain why the transition to the hydrino state is non-radiative? A dark photon with heff/h=nh<nH would have a wavelength shorter by a factor 1/nh, in the range λ/nH, λ ∈ [10,20] nm, and would be observed only when transformed to an ordinary photon. If the photon emitted in the transition is dark, it could leak out of the system, or it could be absorbed by the catalyst if the catalyst also has dark hydrogen atoms with the same value of heff/h=nh. The catalyst would serve as a seed of nH→ nh phase transitions.

  4. How can one understand the observed spectrum in the EUV range [10,20] nm? The transition energies for transitions from the ground state of the hydrogen atom to a hydrino state would be of the form

    Δ E/E1= (nH/nh)^2 - 1 .

    For the transitions between hydrino states with principal quantum numbers n1 and n2 one would have

    Δ E/E1= nH^2 [(nh2 n2)^-2 - (nh1 n1)^-2] .

    If one allows fractional values of nH/nh, it is rather easy to explain the effective continuum spectrum. One can also consider the option that the transitions are such that nh is a divisor of nH, and more generally that nh2 divides nh1 in the transitions of hydrinos. If only the range of EUV energies spanning one octave is assumed, additional conditions follow.

    Here one must notice that a single-photon transition between ground states n=1 with different values of heff is not possible without an electron spin flip, so that the minimum change of n for ground state transitions without spin flip is n=1→ 2. Spin flip allows also transitions n=1→ 1. The photon emitted in the nH→ nh transition would define the EUV analog of the hydrogen 21 cm line.

  5. The simplest option corresponds to nH=6.

    1. This option also satisfies the natural constraint that the Bohr radius for the nh=2 hydrino is larger than the electron Compton length. There are also more complex options to consider (such as nH=12 and nH=2^4=16), but this option seems rather unique.

    2. The spin-non-flip transition n=1→ 2 has the energy Δ E/E1= 5/4 with Δ E/eV= 17.0. Primary spin-flip transitions n=1→ 1 have energies Δ E/E1 ∈ {8, 3} with Δ E/eV ∈ {108.8, 40.8}. The secondary spin-flip transition has energy Δ E/E1= 5 giving Δ E/eV= 68.0. Only the 17 eV transition is outside the EUV energy range considered by Mills.

    3. This would however force a modification of the conjecture that the imaginary parts of the zeros of Riemann zeta correspond to the values of 1/αK assigned with the electroweak U(1) hypercharge at p-adic length scales corresponding to p-adic primes near prime powers of two (see this). The prediction for αR would be 1/αR=22.8. The minimal critical values of 1/αK would become 6-fold multiples of the imaginary parts. Hydrino would correspond to a phase with an anomalously large value of 1/αK, with perturbation theory possible only in special situations.
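The transition energies listed above can be checked directly under the assumption (implicit in the formulas of the text) that a state with parameters (nh, n) has binding energy E1 (nH/nh)^2/n^2:

```python
# Transition energies for the nH = 6 option. A (possibly dark) hydrogen
# state with heff parameter nh and principal quantum number n is assumed
# to have binding energy E1 * (nH/nh)^2 / n^2, as the formulas in the
# text suggest.
E1 = 13.6  # eV
NH = 6     # assumed value for ordinary hydrogen

def binding_energy(nh, n):
    return E1 * (NH / nh)**2 / n**2

def delta_e(nh1, n1, nh2, n2):
    """Energy released in the transition (nh1, n1) -> (nh2, n2), in eV."""
    return binding_energy(nh2, n2) - binding_energy(nh1, n1)

print(delta_e(NH, 1, 2, 1))  # spin-flip n=1 -> 1, nh=2: (9-1)*13.6 = 108.8 eV
print(delta_e(NH, 1, 3, 1))  # spin-flip n=1 -> 1, nh=3: (4-1)*13.6 = 40.8 eV
print(delta_e(NH, 1, 2, 2))  # spin-non-flip n=1 -> 2: (9/4-1)*13.6 = 17.0 eV
print(delta_e(3, 1, 2, 1))   # secondary transition nh=3 -> 2: 5*13.6 = 68.0 eV
```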

The model suggests a universal mechanism of catalyst action. Among other things, catalyst action requires that a reacting molecule gets the energy needed to overcome the potential barrier making the reaction very slow. If an atom - say (dark) hydrogen - in the catalyst suffers a phase transition to hydrino (hydrogen with a smaller value of heff/h), it liberates binding energy, and if one of the reactant molecules receives it, it can overcome the barrier. After the reaction the energy can be sent back and the catalyst hydrino returns to the ordinary hydrogen state. The condition that the dark binding energy is above the thermal energy gives the condition n ≤ 32 on the value of heff/h=n. The size scale of the largest allowed dark atom would be about 100 nm, 10 times the thickness of the cell membrane.

The notion of a high energy phosphate bond is a somewhat mysterious concept and manifests itself as the ability to provide energy in the ATP to ADP transition. There are claims that no such bond exists. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to the hydrino state (a state with smaller value of heff/h) and liberate the binding energy? Could the decay of ATP to ADP produce the original, possibly dark, hydrogen? Metabolic energy would be needed to kick it back to an ordinary bond in ATP.

One could turn the situation upside down and ask whether the cold fusion effects could correspond to the formation of hydrino atoms in the proposed sense.

  1. heff would be reduced rather than increased in the presence of a catalyst inducing a phase transition reducing heff,H. In particular, could the formation of strings of dark nuclei with the size of the electron be replaced with the formation of strings of dark hydrinos with the same size but with a smaller Planck constant than for the ordinary hydrogen atom? This picture would be more in the spirit of that proposed by Holmlid, but it forces one to challenge the hypothesis that cold fusion followed by the decay of dark nuclei to ordinary nuclei is responsible for the anomalous energy production.

  2. Holmlid however reports evidence for superconductivity. The reduction of the value of the Planck constant, and thus of the Compton scale of the electron, does not support superconductivity.

  3. Of course, both phenomena could be involved. Hydrogen with nH=6 and hydrinos with heff/h=nh ∈ {2,3} for electrons would have dark nuclei with heff/h=2^11. The scaled-down Bohr radius for nh=2 would be 5.9× 10^-12 m, and the dark proton size would be the electron Compton length 2.4× 10^-12 m. For other options the Bohr radius could be smaller than the size of the dark proton, so that the nH=6 option would be unique.
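A quick check of the size hierarchy invoked here (assuming the hydrino Bohr radius scales as (nh/nH)^2 and the dark proton size equals the electron Compton length):

```python
# Scaled Bohr radius for hydrinos with nH = 6 versus the dark proton size,
# taken to be the electron Compton length (identifications from the text).
A0 = 5.29e-11        # Bohr radius, m
LAMBDA_E = 2.43e-12  # electron Compton wavelength, m

def hydrino_bohr_radius(nh, nH=6):
    """Bohr radius scaled by (nh/nH)^2, per the model sketched above."""
    return A0 * (nh / nH)**2

print(f"nh = 2 hydrino Bohr radius: {hydrino_bohr_radius(2):.2e} m")  # ~5.9e-12 m
print(f"dark proton size          : {LAMBDA_E:.2e} m")                # ~2.4e-12 m
# nh = 2 stays above the dark proton size, while nh = 1 (radius ~1.5e-12 m)
# would drop below it.
```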

For more references see the article Hydrinos again and for background the chapter Summary of TGD Inspired Ideas about Free Energy of "TGD and Fringe Physics".


Monday, December 12, 2016

Minimal surface cosmology

Before the discovery of the twistor lift, TGD inspired cosmology was based on the assumption that vacuum extremals provide a good estimate for the solutions of Einstein's equations at the GRT limit of TGD. One can find imbeddings of Robertson-Walker type metrics as vacuum extremals, and the general finding is that cosmologies with super-critical and critical mass density have a finite duration, after which the mass density would become infinite: the cosmology of course ends before this. The interpretation would be in terms of the emergence of a new space-time sheet at which matter, represented by smaller space-time sheets, suffers topological condensation. The only parameter characterizing critical cosmologies is their duration. Critical (over-critical) cosmologies have SO(3)× E3 (SO(4)) as isometry group and CP2 projection at a homologically trivial geodesic sphere S2: the condition that the contribution from S2 to the grr component transforms the hyperbolic 3-metric to that of E3 or S3 fixes these cosmologies almost completely. Sub-critical cosmologies have a one-dimensional CP2 projection.

Do Robertson-Walker cosmologies have minimal surface representatives? Recall that minimal surface equations read as

D_α(g^{αβ} ∂_β h^k g^{1/2}) = ∂_α[g^{αβ} ∂_β h^k g^{1/2}] + {^k}_{αm} g^{αβ} ∂_β h^m g^{1/2} = 0 ,

{^k}_{αm} = {^k}_{lm} ∂_α h^l .

Sub-critical minimal surface cosmologies would correspond to X4⊂ M4× S1. The natural coordinates are Robertson-Walker coordinates, which coincide with the light-cone coordinates (a= [(m0)^2- rM^2]^{1/2}, r= rM/a, θ, φ) for the light-cone M4+. They are related to the spherical Minkowski coordinates (m0, rM, θ, φ) by m0= a(1+r^2)^{1/2}, rM= ar. The quantity β= rM/m0= r/(1+r^2)^{1/2} corresponds to the velocity along the line from the origin (0,0) to (m0, rM), and r corresponds to the Lorentz factor combination r= γβ= β/(1-β^2)^{1/2}.
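The coordinate relations above can be verified numerically; this is a direct check of the stated formulas:

```python
import math

# Light-cone (Robertson-Walker) coordinates versus spherical Minkowski
# coordinates, as defined in the text:
#   m0 = a*sqrt(1 + r^2), rM = a*r, hence a = sqrt(m0^2 - rM^2),
#   beta = rM/m0 = r/sqrt(1 + r^2), and r = gamma*beta.
def to_minkowski(a, r):
    return a * math.sqrt(1 + r**2), a * r

def to_lightcone(m0, rM):
    a = math.sqrt(m0**2 - rM**2)
    return a, rM / a

a, r = 2.0, 0.75
m0, rM = to_minkowski(a, r)
a2, r2 = to_lightcone(m0, rM)
assert abs(a2 - a) < 1e-12 and abs(r2 - r) < 1e-12

beta = rM / m0
gamma = 1 / math.sqrt(1 - beta**2)
assert abs(gamma * beta - r) < 1e-12  # r is indeed the combination gamma*beta
print("coordinate relations check out")
```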

The metric of M4+ is given by the diagonal form [g_aa=1, g_rr= a^2/(1+r^2), g_θθ= a^2r^2, g_φφ= a^2r^2 sin^2(θ)]. One can use the coordinates of M4+ also for X4.

The ansatz for the minimal surface reads Φ= f(a). For f(a)= constant one obtains just flat M4+. In the non-trivial case one has g_aa= 1- R^2(df/da)^2, and the contravariant component becomes g^aa= 1/(1- R^2(df/da)^2). The metric determinant is scaled by g_aa^{1/2}: 1 → (1- R^2(df/da)^2)^{1/2}. Otherwise the field equations are the same as for M4+. A little calculation shows that they are not satisfied unless one has g_aa=1.

Also the minimal surface imbeddings of critical and over-critical cosmologies are impossible. The reason is that criticality alone fixes these cosmologies almost uniquely, and this is too much for allowing the minimal surface property.

Thus one can have only the trivial cosmology M4+ carrying dark energy density as a minimal surface solution! This obviously raises several questions.


  1. Could the Λ=0 case, for which the action reduces to Kähler action, provide vacuum extremals giving a single-sheeted model for Robertson-Walker cosmologies at the GRT limit of TGD, for which the many-sheeted space-time surface is replaced with a slightly curved region of M4? Could Λ=0 correspond to a genuine phase present in TGD, as the formal generalization of the mathematicians' view of the reals as the p=∞ p-adic number suggests? The p-adic length scale would be strictly infinite, implying that Λ∝ 1/p vanishes.

  2. The second possibility is that TGD is quantum critical in a strong sense. Not only 3-space but the entire space-time surface is flat and thus M4+. Only the local gravitational fields created by topologically condensed space-time surfaces would make it curved, but they would not cause smooth expansion. The expansion would take place as quantum phase transitions reducing the value of Λ∝ 1/p as the p-adic prime p increases. The p-adic length scale hypothesis suggests that the preferred primes are near but below powers of 2, p≈ 2^k for some integers k. This led years ago to a model for Expanding Earth.

  3. This picture would explain why individual astrophysical objects have not been observed to expand smoothly (except possibly in these phase transitions), but participate in the cosmic expansion only in the sense that their distances to other objects increase. The smaller space-time sheets glued to a given space-time sheet, preserving their size, would emanate from the tip of the M4+ of the given sheet.

  4. RW cosmology should emerge in the idealization that the jerk-wise expansion by quantum phase transitions reducing the value of Λ (in scalings by a factor of 2, by the p-adic length scale hypothesis) can be approximated by a smooth cosmological expansion.

One should understand why Robertson-Walker cosmology is such a good approximation to this picture. Consider first cosmic redshift.
  1. The cosmic recession velocity is defined from the redshift by Doppler formula.

    z= [(1+β)/(1-β)]^{1/2} - 1 ≈ β = v/c .

    In TGD framework this should correspond to the velocity defined in terms of the coordinate r of the object.

    Hubble law tells that the recession velocity is proportional to the proper distance D from the source. One has

    v= HD , H= (da/dt)/a= 1/(g_aa^{1/2} a) .

    This brings in the dependence on the Robertson-Walker metric.

    For M4+ one has a=t, g_aa=1, and H=1/a. The experimental fact is however that the value of H is larger for non-empty RW cosmologies, which have g_aa<1. How can one overcome this problem?

  2. To understand this, one must first understand the interpretation of the gravitational redshift. In the TGD framework the gravitational redshift is a property of the observer rather than of the source. The point is that the tangent space of the 3-surface assignable to the observer is related by a Lorentz boost to that associated with the source. This implies that the four-momentum of the radiation from the source is boosted by this same boost. Redshift would mean that the Lorentz boost reduces the momentum from the real one. Therefore redshift would be consistent with the momentum conservation implied by Poincare symmetry.

    The g_aa, for which a corresponds to the value of cosmic time for the observer, should characterize the boost of the observer relative to the source. The natural guess is that the boost is characterized by the value of g_tt in a sufficiently large rest system assignable to the observer, with t taken to be the M4 coordinate m0. The value of g_tt fluctuates due to the presence of local gravitational fields. At the GRT limit g_aa would correspond to the average value of g_tt.

  3. There is evidence that H is not the same in short and long scales. This could be understood if the radiation arrives along different space-time sheets in the two situations.

  4. If this picture is correct, the GRT description of cosmology is an effective description taking into account the effect of local gravitation on the redshift, which without it would be just the M4+ redshift.
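The relations used in this list can be summarized in a few lines (a sketch using the standard relativistic Doppler formula and the H = 1/(g_aa^{1/2} a) identification from the text):

```python
import math

# Redshift-velocity relation and Hubble parameter as used above:
#   1 + z = sqrt((1 + beta)/(1 - beta)) ~ 1 + beta for small beta,
#   H = (da/dt)/a = 1/(sqrt(g_aa) * a), so g_aa < 1 raises H above
#   the empty-cosmology value 1/a.
def redshift(beta):
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def hubble(a, g_aa=1.0):
    return 1 / (math.sqrt(g_aa) * a)

print(redshift(0.01))                      # ~0.01: z ~ beta for small velocities
print(hubble(1.0), hubble(1.0, g_aa=0.5))  # g_aa < 1 gives the larger H
```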

Einstein's equations for RW cosmology should approximately code for the cosmic time dependence of the mass density at a given slightly deformed piece of M4+ representing a particular sub-cosmology expanding in a jerk-wise manner.
  1. Many-sheeted space-time implies a hierarchy of cosmologies in different p-adic length scales with cosmological constant Λ∝ 1/p, so that the vacuum energy density is smaller in long length scale cosmologies and behaves on the average as 1/a^2, where a characterizes the scale of the cosmology. In zero energy ontology a given scale corresponds to a causal diamond (CD) with size characterized by a, defining the size scale for the distance between the tips of the CD.

  2. For a comoving volume with a constant value of the coordinate radius r, the radius of the volume increases as a. The vacuum energy would increase as a3 for the comoving volume. This is in sharp conflict with the fact that the mass decreases as 1/a for radiation dominated cosmology, is constant for matter dominated cosmology, and is proportional to a for string dominated cosmology.

    The physical resolution of the problem is rather obvious. Space-time sheets representing topologically condensed matter have finite size. They do not expand except possibly in jerkwise manner, but in this process Λ is reduced - on the average like 1/a2.

    If the sheets are smaller than the cosmological space-time sheet in the scale considered and do not lose energy by radiation they represent matter dominated cosmology emanating from the vertex of M4+. The mass of the co-moving volume remains constant.

    If they are radiation dominated and in thermal equilibrium they lose energy by radiation and the energy of volume behaves like 1/a.

    Cosmic strings and magnetic flux tubes have size larger than that of the space-time sheet representing the cosmology. A string as a linear structure has energy proportional to a for a fixed value of Λ, as in string dominated cosmology. The reduction of Λ, decreasing on the average like 1/a2, implies that the contribution of a given string is reduced like 1/a on the average, as in radiation dominated cosmology.

  3. GRT limit would code for these behaviours of mass density and pressure identified as scalars in GRT cosmology in terms of Einstein's equations. The time dependence of gaa would code for the density of the topologically condensed matter and its pressure and for dark energy at given level of hierarchy. The vanishing of covariant divergence for energy momentum tensor would be a remnant of Poincare invariance and give Einstein's equations with cosmological term.

  4. Why would the GRT limit involve only the RW cosmologies allowing imbedding as vacuum extremals of Kähler action? Can one demand continuity in the sense that TGD cosmology at the p→ ∞ limit corresponds to GRT cosmology with cosmological solutions identifiable as vacuum extremals? If this is assumed, the earlier results are obtained. In particular, one obtains the critical cosmology with 2-D CP2 projection, assumed to provide a GRT model for quantum phase transitions changing the value of Λ.
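The scalings listed above can be collected into a small sketch (a toy tabulation of my own, not a calculation from the text): for a comoving volume, radiation dominated sheets lose energy like 1/a, matter dominated sheets keep it constant, a string contributes like a for fixed Λ, and like 1/a once Λ is reduced on the average like 1/a2.

```python
def comoving_energy(a, regime):
    """Scale-factor dependence of the energy in a comoving volume
    (normalized to 1 at a = 1) for the regimes discussed above."""
    if regime == "radiation":
        return 1.0 / a                 # thermal sheets radiate energy away
    if regime == "matter":
        return 1.0                     # condensed sheets of fixed size
    if regime == "string_fixed_Lambda":
        return a                       # linear structure, fixed tension
    if regime == "string_Lambda_1_over_a2":
        return a * (1.0 / a**2)        # tension ~ Lambda ~ 1/a^2 gives ~ 1/a
    raise ValueError("unknown regime: " + regime)

for regime in ("radiation", "matter",
               "string_fixed_Lambda", "string_Lambda_1_over_a2"):
    print(regime, [comoving_energy(a, regime) for a in (1.0, 2.0, 4.0)])
```

Note how the 1/a2 reduction of Λ turns the string contribution into the radiation dominated behaviour, as stated above.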

If this picture is correct, TGD inspired cosmology at the level of many-sheeted space-time would be extremely simple. The new element would be many-sheetedness which would lead to more complex description provided by GRT limit. This limit would however lose the information about many-sheetedness and lead to anomalies such as two Hubble constants.

See the new chapter Can one apply Occam's razor as a general purpose debunking argument to TGD? of "Towards M-matrix" or article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, December 11, 2016

LIGO blackhole anomaly and minimal surface model for star

The TGD inspired model of a star as a minimal surface with stationary spherically symmetric metric strongly suggests that the analog of blackhole metric has two horizons. The outer horizon is analogous to the Schwarzschild horizon in the sense that the roles of the time coordinate and the radial coordinate change. The radial metric component vanishes at the Schwarzschild horizon rather than diverging. Below the inner horizon the metric has Euclidian signature.

Is there any empirical evidence for the existence of two horizons? There is evidence that the formation of the recently found LIGO blackhole (discussed from TGD view point elsewhere) is not fully consistent with the GRT based model (see this). There are some indications that the LIGO blackhole has a boundary layer such that gravitational radiation is reflected forth and back between the inner and outer boundaries of the layer. In the proposed model the outer boundary would not be totally reflecting, so that gravitational radiation leaks out and gives rise to echoes at times .1 sec, .2 sec, and .3 sec. It is perhaps worth noticing that the time scale .1 sec corresponds to the secondary p-adic time scale of electron (characterized by Mersenne prime M127= 2127-1). If the minimal surface solution indeed has two horizons and a layer like structure between them, it is at least worth the trouble to test the idea that this structure could give rise to repeated reflections of gravitational radiation.

The proposed model (see this) assumes that the inner horizon is a Schwarzschild horizon. TGD would however suggest that the outer horizon is the TGD counterpart of the Schwarzschild horizon. It could have a different radius, since it would not be a singularity of grr (gtt/grr would be finite at rS, which need not be rS=2GM now). At rS the tangent space of the space-time surface would become effectively 2-dimensional: could this be interpreted in terms of strong form of holography (SH)?

One should understand why it takes a rather long time T=.1 seconds for the radiation to travel forth and back the distance L= rS-rE between the horizons. The maximal signal velocity is reduced for the light-like geodesics of the space-time surface, but the reduction should be rather large for L∼ 20 km (say). The effective light-velocity is measured by the coordinate time Δ t= Δ m0+ h(rS)-h(rE) needed to travel the distance from rE to rS. The Minkowski time Δ m0-+ would be obtained from the null geodesic property and from m0= t+ h(r):

Δ m0-+ =Δ t -h(rS)+h(rE) ,

Δ t = ∫rErS (grr/gtt)1/2 dr = ∫rErS dr/c# .

The time needed to travel forth and back does not depend on h and would be given by

Δ m0 =2Δ t =2∫rErSdr/c# .

This time cannot be shorter than the minimal time (rS-rE)/c along a light-like geodesic of M4, since light-like geodesics at the space-time surface are in general time-like curves in M4. Since .1 sec corresponds to about 3× 104 km, the average value of c# in the interval [rE,rS] should be of order c#∼ 2-11c for L= 20 km (just a rough guess). As noticed, T=.1 sec is also the secondary p-adic time scale assignable to electron labelled by the Mersenne prime M127. Since grr vanishes at rE, one has c#→ ∞ there. c# is finite at rS.
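The order of magnitude can be checked with a few lines of arithmetic (my own check of the numbers quoted above; L = 20 km is the rough guess of the text):

```python
import math

c = 2.99792458e5          # light velocity, km/s
T = 0.1                   # s, round-trip echo time quoted in the text
L = 20.0                  # km, guessed distance r_S - r_E

# Average effective light velocity needed for a round trip of length 2L
# in coordinate time T.
c_eff = 2.0 * L / T       # km/s
ratio = c_eff / c         # c#/c
print(c_eff, ratio, math.log2(ratio))
```

This gives c# ≈ 400 km/s, that is c#/c ≈ 1.3× 10-3 ∼ 2-10 - the same ballpark as the 2-11 quoted above, given the crudeness of the guess for L.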

There is an intriguing connection with the notion of gravitational Planck constant. The formula for the gravitational Planck constant hgr= GMm/v0 characterizes the magnetic body of a mass m topologically condensed at a gravitational magnetic flux tube emanating from a large mass M. The interpretation of the velocity parameter v0 has remained open. Could v0 correspond to the average value of c#? For inner planets one has v0≈ 2-11 c so that the order of magnitude is the same as in the estimate for c#.

See the new chapter Can one apply Occam's razor as a general purpose debunking argument to TGD? of "Towards M-matrix" or article with the same title.


Minimal surface analog of Schwarzschild solution: two horizons and a possible connection with the LIGO anomaly

The twistor lift of TGD has caused a palace revolution inside TGD and led to an amazing simplification of the vision. The earlier view that vacuum extremals of Kähler action should provide models for the simplest solutions of Einstein's equations has been modified. Only those vacuum extremals which are minimal surfaces are accepted. This leads to an amazingly simple view: Einsteinian gravitation could be replaced at the level of a single space-time sheet with a theory of minimal surfaces in M4× S2, where S2 is the homologically trivial geodesic sphere of CP2, so that the induced Kähler form vanishes. Both static solutions satisfying a Laplace equation with gravitational self-coupling and solutions describing topologically quantized gauge and gravitational radiation classically are obtained. The solutions can and actually do carry a non-vanishing gauge charge, which can however be very small.

The obvious question is whether the spherically symmetric minimal surfaces of this kind with stationary induced metric have the physical properties assigned to Schwarzschild and Reissner-Nordström metrics. The modification of simple calculations done already in the 90's for vacuum extremal imbeddings of these metrics leads to an ansatz which gives rise to Newtonian gravitational potential in the far away regions. It has also the analog of the Schwarzschild horizon, at which the roles of the time and radial coordinates change, and also another horizon at which the signature transforms to Euclidian, so that this horizon is a light-like 3-surface at which the metric signature changes. Interestingly, there is a recent report about indications that the LIGO blackhole has a layer like structure at horizon.

See the new chapter Can one apply Occam's razor as a general purpose debunking argument to TGD? of "Towards M-matrix" or article with the same title.


Thursday, December 08, 2016

Space-time engineering from space-time legos


TGD predicts a shocking simplicity of both quantal and classical dynamics at the space-time level. Could one imagine constructing more complex geometric objects from basic building bricks - space-time legos?

Let us list the basic ideas.

  1. Physical objects correspond to space-time surfaces of finite size - we see directly the non-trivial topology of space-time in everyday length scales.

  2. There is also a fractal scale hierarchy: 3-surfaces are topologically summed to larger surfaces by connecting them with wormhole contacts, which can also carry monopole magnetic flux; one obtains elementary particles as pairs of such contacts. These contacts are stable and are ideal for nailing pieces of the structure stably together.

  3. In long length scales, in which space-time surfaces tend to have 4-D M4 projection, this gives rise to what I have called many-sheeted space-time. Sheets are deformations of canonically imbedded M4 extremely near to each other (the maximal distance is determined by the CP2 size scale, about 104 Planck lengths). The sheets touch each other at topological sum contacts, which can also be identified as building bricks of elementary particles if they carry monopole flux and are thus stable. In D=2 it is easy to visualize this hierarchy.
Simplest legos

What could be the simplest surfaces of this kind - legos?

  1. Assume the twistor lift so that the action contains a volume term besides Kähler action: preferred extremals can be seen as non-linear massless fields coupling to self-gravitation. They are also simultaneously extremals of Kähler action. A hydrodynamical interpretation makes sense as well, in the sense that the field equations are conservation laws. What is remarkable is that the solutions have no dependence on coupling parameters: this is crucial for realizing number theoretical universality. Boundary conditions however bring in the dependence on the values of coupling parameters, which have a discrete spectrum by quantum criticality.

  2. The simplest solutions correspond to Lagrangian sub-manifolds of CP2: the induced Kähler form vanishes identically and one has just minimal surfaces. The energy density defined by the scale dependent cosmological constant is small in cosmological scales - so that only a template of the physical system is in question. In shorter scales the situation changes if the cosmological constant is proportional to the inverse of the p-adic prime.

    The simplest minimal surfaces are constructed from pieces of geodesic manifolds for which not only the trace of second fundamental form but the form itself vanishes. Geodesic sub-manifolds correspond to points, pieces of lines, planes, and 3-D volumes in E3. In CP2 one has points, circles, geodesic spheres, and CP2 itself.

  3. CP2 type extremals define a model for wormhole contacts, which can be used to glue basic building bricks at different scales together stably: stability follows from the magnetic monopole flux going through the throat, so that the contact cannot be split like a homologically trivial contact. Elementary particles are identified as pairs of wormhole contacts and would make it possible to nail the legos together to form stable structures.

Amazingly, what emerges is elementary geometry. My apologies to those who hated school geometry.

Geodesic minimal surfaces with vanishing induced gauge fields

Consider first static objects with 1-D CP2 projection, having thus vanishing induced gauge fields. These objects are of the form M1× X3, X3⊂ E3× CP2. M1 corresponds to a time-like or possibly light-like geodesic (the latter for CP2 type extremals). I will consider mostly Minkowskian space-time regions in the following.

  1. Quite generally, the simplest legos consist of 3-D geodesic sub-manifolds of E3× CP2. For E3 their dimensions are D=1,2,3 and for CP2, D=0,1,2. CP2 allows both homologically non-trivial resp. trivial geodesic spheres S2I resp. S2II. The geodesic sub-manifolds can be products G3 =GD1× GD2, D2=3-D1, of geodesic manifolds GD1, D1=1,2,3 for E3 and GD2, D2=0,1,2 for CP2.

  2. It is also possible to have twisted geodesic sub-manifolds G3 having a geodesic circle S1 as CP2 projection: they correspond to the geodesic lines of E3× S1, S1⊂ CP2, whose projections to E3 and CP2 are a geodesic line and a geodesic circle respectively. The geodesic is characterized by an S1 wave vector. One can have this kind of geodesic lines even in M1× E3× S1 so that the solution is characterized also by a frequency and is not static in CP2 degrees of freedom anymore.

    These parameters define a four-D wave vector characterizing the warping of the space-time surface: the space-time surface remains flat but is warped. This effect distinguishes TGD from GRT. For instance, warping in time direction reduces the effective light-velocity in the sense that the time used to travel from A to B increases. One cannot exclude the possibility that the observed freezing of light in condensed matter could have this warping as space-time correlate in TGD framework.

    For instance, one can start from 3-D minimal surfaces X2× D as local structures (a thin layer in E3). One can perform twisting by replacing D with twisted closed geodesics in D× S1: this gives an S1-valued map from D to S1⊂ CP2 representing a geodesic line of D× S1. This geodesic sub-manifold is trivially a minimal surface and defines a two-sheeted cover of X2× D. Wormhole contact pairs (elementary particles) between the sheets can be used to stabilize this structure.

  3. Structures of the form D2× S1, where D2 is a polygon, are perhaps the simplest building bricks for more complex structures. There are continuity conditions at the vertices and edges at which the polygons D2i meet, and one could think of assigning magnetic flux tubes with the edges in the spirit of homology: edges as magnetic flux tubes, faces as 2-D geodesic sub-manifolds, and interiors as 3-D geodesic sub-manifolds.

    Platonic solids as 2-D surfaces that can be built in this manner are one example and are abundant in biology and molecular physics. An attractive idea is that molecular physics utilizes this kind of simple basic structures. Various lattices appearing in condensed matter physics represent more complex structures but could also have geodesic minimal 3-surfaces as building bricks. In cosmology the honeycomb structures having large voids as basic building bricks could serve as cosmic legos.

  4. This lego construction very probably generalizes to cosmology, where Euclidian 3-space is replaced with the 3-D hyperbolic space SO(3,1)/SO(3). Also now one has pieces of lines, planes and 3-D volumes associated with an arbitrarily chosen point of the hyperbolic space. Hyperbolic space allows an infinite number of tessellations serving as analogs of 3-D lattices, and the characteristic feature is the quantization of redshift along the line of sight, for which empirical evidence is found.

  5. These basic building bricks can be glued together by wormhole contact pairs defining elementary particles, so that matter emerges as a stabilizer of the geometry: they are the nails fixing the planks together, one might say.
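The warping effect mentioned in item 2 of the list above can be made concrete with a sketch (my own parametrization, assuming the simplest warped sheet with S1 coordinate Φ= ω m0, for which the induced metric has gtt= 1- R2ω2 while the sheet stays flat):

```python
import math

def effective_light_velocity(R_omega):
    """Effective light velocity (in units of c) on a flat but warped
    space-time sheet with S1 coordinate Phi = omega * m0.  The induced
    metric component is g_tt = 1 - (R*omega)**2, and the coordinate time
    needed to traverse a fixed M4 distance grows like 1/sqrt(g_tt)."""
    g_tt = 1.0 - R_omega ** 2
    if g_tt <= 0.0:
        raise ValueError("R*omega must be < 1 for Minkowskian signature")
    return math.sqrt(g_tt)

print(effective_light_velocity(0.0))   # no warping: ordinary c
print(effective_light_velocity(0.5))   # warped sheet: reduced velocity
```

The unwarped sheet gives the ordinary light velocity, and the reduction grows with the warping parameter R·ω, illustrating how the time used to travel from A to B increases.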

Geodesic minimal surfaces with non-vanishing gauge fields

What about minimal surfaces and geodesic sub-manifolds carrying non-vanishing gauge fields - in particular the em field (the Kähler form identifiable as the U(1) gauge field for weak hypercharge vanishes and thus also its contribution to the em field)? Now one must use 2-D geodesic spheres of CP2 combined with 1-D geodesic lines of E3. Actually both homologically non-trivial resp. trivial geodesic spheres S2I resp. S2II can be used, so that also non-vanishing Kähler forms are obtained.

The basic legos are now D× S2i, i=I,II, and they can be combined with the basic legos constructed above. These legos correspond to two kinds of magnetic flux tubes in the ideal infinitely thin limit. There are good reasons to expect that these infinitely thin flux tubes can be thickened by deforming them in E3 directions orthogonal to D. These structures could be used as basic building bricks assignable to the edges of the tensor networks in TGD.

Static minimal surfaces, which are not geodesic sub-manifolds

One can consider also more complex static basic building bricks by allowing bricks which are not anymore geodesic sub-manifolds. The simplest static minimal surfaces are of the form M1× X2× S1, with S1 ⊂ CP2 a geodesic circle and X2 a minimal surface in E3.

Could these structures represent higher level of self-organization emerging in living systems? Could the flexible network formed by living cells correspond to a structure involving more general minimal surfaces - also non-static ones - as basic building bricks? The Wikipedia article about minimal surfaces in E3 suggests the role of minimal surface for instance in bio-chemistry (see this).

The surfaces with constant positive curvature do not allow imbedding as minimal surfaces in E3. Corals provide an example of a surface consisting of pieces of 2-D hyperbolic space H2 immersed in E3 (see this). Minimal surfaces have negative curvature, as does H2, but minimal surface immersions of H2 in E3 do not exist. Note that pieces of H2 have a natural imbedding realized as a surface of constant light-cone proper time, but this is not a solution to the problem.
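The claim that minimal surfaces in E3 have negative curvature can be checked against a standard textbook example (not TGD-specific): the helicoid, written as the graph z = atan(y/x), satisfies the minimal surface equation for graphs, as a direct substitution of its analytic partial derivatives shows.

```python
def minimal_surface_lhs(f_x, f_y, f_xx, f_xy, f_yy):
    """Left-hand side of the minimal surface equation for a graph
    z = f(x, y) in E3; it vanishes exactly when the mean curvature does:
    (1 + f_y^2) f_xx - 2 f_x f_y f_xy + (1 + f_x^2) f_yy = 0."""
    return (1 + f_y**2) * f_xx - 2 * f_x * f_y * f_xy + (1 + f_x**2) * f_yy

def helicoid_residual(x, y):
    """Plug the analytic partial derivatives of f(x, y) = atan(y/x)
    (the helicoid as a graph, x != 0) into the minimal surface equation."""
    r2 = x**2 + y**2
    f_x, f_y = -y / r2, x / r2
    f_xx = 2 * x * y / r2**2
    f_yy = -2 * x * y / r2**2
    f_xy = (y**2 - x**2) / r2**2
    return minimal_surface_lhs(f_x, f_y, f_xx, f_xy, f_yy)

print(helicoid_residual(1.3, -0.7))    # ~ 0: the helicoid is minimal
```

The residual vanishes (up to floating point rounding) at every point where the graph is defined, in line with the fact that non-planar minimal surfaces in E3 have negative curvature.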

Does this mean that the proposal fails?

  1. One can build approximately spherical surfaces from pieces of planes. Platonic solids represent the basic example. This picture conforms with the notion of a monadic manifold having as a spine a discrete set of points with coordinates in an algebraic extension of rationals (preferred coordinates allowed by symmetries are in question). This seems to be the realistic option.

  2. The boundaries of wormhole throats at which the signature of the induced metric changes can have arbitrarily large M4 projection, and they take the role of a blackhole horizon. All physical systems have such a horizon, and the approximate boundaries assignable to physical objects could be horizons of this kind. In TGD one has minimal surfaces in E3× S1 rather than in E3. If 3-surfaces have no space-like boundaries, they must be multi-sheeted, with the sheets coinciding at some 2-D surface analogous to a boundary. Could this 3-surface give rise to an approximately spherical boundary?

  3. Could one lift the immersions of H2 and S2 to E3 to minimal surfaces in E3× S1? The constancy of the scalar curvature, which is for the immersions in question quadratic in the second fundamental form, would pose one additional condition on the non-linear Laplace equations expressing the minimal surface property. The analyticity of the minimal surface should make it possible to check whether the hypothesis can make sense. Simple calculations lead to conditions, which very probably do not allow a solution.

Dynamical minimal surfaces: how space-time manages to engineer itself?

At an even higher level of self-organization emerge dynamical minimal surfaces. Here string world sheets as minimal surfaces represent the basic example of a building block of type X2× S2i. As a matter of fact, S2 can be replaced with a complex sub-manifold of CP2.

One can also ask how to perform this building process. Massless extremals (MEs), representing the TGD view of topologically quantized classical radiation fields, are also minimal surfaces, but now the induced Kähler form is non-vanishing. MEs can also be Lagrangian surfaces and seem to play a fundamental role in morphogenesis and morphostasis as a generalization of the Chladni mechanism. One might say that they represent the tools to assemble material and magnetic flux tube structures at the nodal surfaces of MEs. MEs are the tools of space-time engineering. Here many-sheetedness is essential for having the TGD counterparts of standing waves.

See the new chapter Can one apply Occam's razor as a general purpose debunking argument to TGD? of "Towards M-matrix" or article with the same title.
