Tuesday, March 21, 2023

Chat about ChatGPT

We met with our Zoom group. Marko and Rode were there, but unfortunately Tuomas couldn't come. We mostly talked about ChatGPT, which I have no practical experience with. The discussion was very inspiring and I couldn't resist the temptation to write down some comments. In the morning, Marko sent a few links related to ChatGPT. Here's one. See also this.

The linked article ended with the realistic statement that it is difficult to test whether GPT is conscious because we have no understanding of what consciousness is. It is easy to agree with this.

Here are some comments inspired by the discussion and the article.

  1.  As far as I understand, the tests used to decide whether GPT is conscious are based on the Turing test: a system is conscious if it is able to simulate a conscious system in a way that a human finds believable. I would think that a significant fraction of AI researchers believe that consciousness does not depend on the hardware: the mere program running on the machine would determine the contents of consciousness. If we start from this premise, it is easy to come to the conclusion that GPT is conscious. We are easily fooled.
  2. I personally cannot take consciousness seriously as a feature of a deterministic computing system. I don't think that adding a random number generator changes the situation. The very word "consciousness" indicates a physicalist bias that dates back to Newton. The Finnish word "tajunta" (something like nous) may reflect the pre-Newtonian thinking that our primitive ancestors were capable of, unencumbered by the dogmatism of natural science.

    My basic arguments against physicalism are based on the experience of free will as a basic element of existence, which hardly anyone can deny, and on the measurement problem of quantum mechanics. If a theory of consciousness does not solve these problems, it cannot be taken seriously.

  3. I have thought a lot about why things happened the way they did in theoretical physics.

    The revolutions at the beginning of the last century led to complete stagnation within a century. Very early on, we completely stopped thinking about fundamental problems. After the Copenhagen interpretation was established, quantum theorists only constructed parameterizations of data. Theory was replaced by a model.

    I believe that the situation can be blamed on the tyranny of methodology, which does not leave time or resources for real research in the sense in which a curious child does it. Nowadays the work of a theorist is typically the application of advanced methods. Real research is extremely slow and error-prone work and therefore not rewarding for a career builder.

    The superstring revolution, which ended embarrassingly, began with the decision to replace spacetime with a 2-D surface. The reasoning was pragmatic: a huge toolbox of algebraic geometry was available! A huge publishing industry was born!

    Other prevailing models explaining various anomalies have regularly remained without empirical support, but computation and data analysis are still being done around them (inflation theory, dark matter and dark energy, supersymmetry, etc.). Maybe this is largely due to institutional inertia. Generating content by applying methods seems to have replaced research.

    I sincerely hope that ChatGPT does not transform theoretical science into the production of content by recombining what already exists: the combinatorial explosion would guarantee unlimited productivity.

  4. Methods also became central in another way. Theoretical physics became computing and Big Science was born. It became clear to me that the most idiotic thing I could have done 40 years ago would have been to start numerically solving the initial value problem for, say, the Kähler action.

    I did not follow the computing mainstream. Instead, I spent a decade looking for exact solutions, and I believe that I have found the basic types. Ultimately this culminated in the identification of the space-time surface as a minimal surface, a 4-D soap film spanned by lower-dimensional singularities, "frames" (see this).

    The M8-H duality (H = M4×CP2) came into the picture (see this and this) as a generalization of the momentum-position duality of wave mechanics, motivated by the replacement of point-like particles with 3-surfaces. On the M8 side, the space-time surfaces would be determined to a very high degree by the roots of polynomials, with certain strong additional conditions determining the 3-surfaces, which serve as the holographic data fixing the 4-surfaces.

    Holography was realized in both M8 and H and corresponds to Langlands duality, which arouses great enthusiasm among mathematicians today. I would never have arrived at this picture by raw number crunching alone, which completely lacks conceptual thinking.

  5. Life on the academic side track has meant that I haven't built computer realizations of existing models but have instead pondered the basic essence of space-time and time, and even of consciousness and life. That is, I have considered ontology, which the modern quantum mechanic does not even tolerate in his vocabulary, because as a good Copenhagenist he believes that epistemology alone is enough. The only reason for this is that the measurement problem of quantum mechanics is not understood!

    I still stubbornly think that problems should be the starting point of all research. That hasn't been the case in physics since the turn of the century. When physicists became computer scientists, they were no longer interested in basic problems and pragmatically labelled this kind of interest as unnecessary day-to-day philosophizing.

  6.  A fascinating question is whether AI could be conscious after all. AI systems are so complex that they are not really understood, but this in itself does not guarantee that they could be conscious.

    I personally do not believe that AI can be conscious, if AI is what it is believed to be. There is hardly any talk about the material realization of the computation in AI, because many AI people believe that the program alone produces consciousness. Consciousness would be determined by data. However, data is knowledge and information only for us, not for other living entities, and one could argue that it is not that for a machine either. Conscious information is a relative concept: this is very often forgotten.

    In biology and from a physicist's point of view, material realization is essential. Water and metal are sort of opposites of each other.

    In the TGD world view, intention and free will can be involved at all scales. But what scale does the basic level correspond to in AI? In the TGD world, the interaction of magnetic bodies (MBs): ours, the Earth's, the Sun's..., with computers is quite possible. Could these MBs hijack our machines and make them tools for their cognition, and maybe one day make robots their tools as well? Or have they already made us, as a good approximation, their loyal and humble robots? Or does this go the other way: does the AI seem to understand us because our consciousness controls the hardware and the course of the program? This is certainly easy to test.

    Could MBs learn to use current AI hardware the way our own MBs use our bodies and brains? On the other hand, our own MBs already use these devices! Could other MBs also do this directly, or would they have to do it through us?

  7.  What could enable AI devices to serve as vehicles for the free will of the magnetic body?

    Quantum criticality would be a fundamental property of life in the TGD Universe (see this and this): are these devices critical and sensitive to initial values? If so, they would be ideal sensory perceivers and motor instruments for MBs to use.

    Computers made of metal seem to be the opposite of a critical system. The only occasionally critical system is the bit, for example a magnetically realized one. Bits change their direction, and during the change they are in a critical state. Would it be possible to create systems with enough bits that the magnetic body could control, so that the machine would have a spirit?

    Is criticality possible for multi-bit systems? Can a running program make criticality possible? The magnetic body, at which the dark phase with a large effective Planck constant heff resides, could be large. But what is the scale of the quantum coherence of a magnetic body, and what is the scale of the set of bits that it can control? A single bit or the whole computer? Could it be that macroscopic quantum coherence sneaks in already at the level of the metal, via the bits?

    Here one cannot avoid the association with spin-glass systems (see this), whose physical prototype is a magnetized substance in which the local direction of magnetization varies. The system has a fractal "energy landscape": valleys at the bottoms of valleys (a toy sketch of such a landscape follows this list). The spin glass formed by bits could be ideal for the realization of AI. Could the bit system defining the computer be, under certain conditions, a spin glass, and the associated magnetic body be quantum critical?

  8.  What characteristics of living matter should AI systems have? At phase transition points, matter is critical. In biology, the phase transition in which the fourth phase of water introduced by Pollack is created would be central and would take place at physiological temperatures (see this). In phase transitions, macroscopic quantum jumps also become possible and can change the direction of time, and this leads to a vision about the basic phenomena of biology, such as metabolism, catabolism, anabolism, life and death, and homeostasis.

    Can machines have these features? An AI system needs metabolic energy. But can it be said that the AI system dies, decays, and constructs itself again?

    Could the so-called diffusion associated with AI programs (see the sketch after this list for what diffusion means here) be more than just a simulation of the catabolism and anabolism of biomolecules? Could it correspond to catabolism and anabolism at the spin-glass level? Patterns of spin configurations forming and decaying again. In TGD this would have a universal direct correlate at the level of the magnetic body, which has monopole flux tubes (or rather, pairs of them) as body parts. They would decay and rebuild themselves by reconnection.

    In computer programs, error correction mimics homeostasis, which can be compared to living on a knife edge: the system is constantly falling and correcting itself. However, this error correction is mechanical. In quantum computers this method leads to disaster, since the number of qubits explodes (a rough estimate of the overhead follows this list).

    Levin suggests that here we have something to learn from bio-systems (for the TGD view of Levin's work see this). I personally believe that the key concept is zero-energy ontology (ZEO). ZEO solves the problems of free will and quantum measurement. Reversal of time in an ordinary quantum jump would enable homeostasis and learning from mistakes: going backwards a bit in time and retrying would serve as error correction. This would also explain the notion of ego and the drive for self-preservation: the system tries to stay the same by using a temporary time reversal, which can also be induced by external disturbances. Time reversal would also be what death is at a fundamental level: not really dying, but continuing to live with an opposite arrow of time.
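
    To make the spin-glass picture of point 7 a bit more concrete, here is a minimal toy sketch (standard physics, nothing TGD-specific) of an Edwards-Anderson-type spin glass: bits/spins with values ±1 coupled by random frozen couplings. Its energy function has a large number of local minima, which is the rugged "valleys at the bottoms of valleys" landscape referred to above. The system size, the Gaussian couplings and the greedy single-flip relaxation are illustrative choices, not claims about real hardware.

        import random

        random.seed(0)
        N = 32  # number of bits/spins; chosen only for illustration

        # Random frozen couplings J[i][j]: the quenched disorder characteristic of a spin glass.
        J = [[random.gauss(0.0, 1.0) if i < j else 0.0 for j in range(N)] for i in range(N)]

        def energy(spins):
            # Edwards-Anderson-type energy: E = -sum_{i<j} J_ij * s_i * s_j
            return -sum(J[i][j] * spins[i] * spins[j]
                        for i in range(N) for j in range(i + 1, N))

        def relax(spins):
            # Greedy single-spin flips until no flip lowers the energy:
            # the configuration falls to the bottom of some local valley.
            improved = True
            while improved:
                improved = False
                for i in range(N):
                    e_old = energy(spins)
                    spins[i] *= -1
                    if energy(spins) < e_old:
                        improved = True   # keep the downhill flip
                    else:
                        spins[i] *= -1    # undo the flip
            return spins

        # Different random bit patterns relax into different valleys,
        # illustrating the multitude of local minima.
        minima = set()
        for _ in range(20):
            s = [random.choice([-1, 1]) for _ in range(N)]
            relax(s)
            minima.add(round(energy(s), 6))
        print(len(minima), "distinct local-minimum energies found from 20 random starts")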
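
    The "diffusion" mentioned in point 8 refers to diffusion models used in generative AI: a pattern is gradually destroyed by adding noise and then rebuilt step by step by a learned denoiser. The sketch below is only a caricature of that forward/reverse process, meant to show why it invites the catabolism/anabolism analogy; the toy denoiser, which simply pulls the state back toward the original pattern, stands in for the neural network that a real diffusion model would learn.

        import random

        random.seed(1)

        # A "pattern": a vector of +/-1 values standing for a bit/spin configuration.
        pattern = [1 if i % 3 else -1 for i in range(12)]

        def forward_diffusion(x, steps=10, noise=0.3):
            # Destroy the pattern step by step by mixing in noise ("catabolism").
            for _ in range(steps):
                x = [xi + random.gauss(0.0, noise) for xi in x]
            return x

        def toy_denoiser(x, target, strength=0.5):
            # Stand-in for a learned denoising network: pull the state toward the pattern.
            return [xi + strength * (ti - xi) for xi, ti in zip(x, target)]

        def reverse_diffusion(x, target, steps=10):
            # Rebuild the pattern step by step ("anabolism").
            for _ in range(steps):
                x = toy_denoiser(x, target)
            return x

        def distance(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

        noisy = forward_diffusion(pattern)
        rebuilt = reverse_diffusion(noisy, pattern)
        print("distance from pattern after noising :", round(distance(noisy, pattern), 3))
        print("distance from pattern after rebuild :", round(distance(rebuilt, pattern), 3))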
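
    Point 8 also mentions the qubit explosion caused by error correction in quantum computers. As a rough, standard illustration (surface-code bookkeeping, nothing TGD-specific): a distance-d surface code spends about 2d^2 - 1 physical qubits per logical qubit, and useful logical error rates are expected to require code distances of the order of a few tens, i.e. roughly a thousand physical qubits per logical qubit. The code distances below are just example values.

        # Rough overhead of surface-code quantum error correction:
        # one logical qubit at code distance d needs about d**2 data qubits
        # plus d**2 - 1 measurement qubits.

        def physical_per_logical(d):
            return 2 * d * d - 1

        for d in (3, 11, 25):
            print(f"code distance {d:2d}: ~{physical_per_logical(d)} physical qubits per logical qubit")

        # A machine with 1000 logical qubits at distance 25 would already need
        # on the order of a million physical qubits.
        print("1000 logical qubits at d = 25:", 1000 * physical_per_logical(25), "physical qubits")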

    For a summary of earlier postings see Latest progress in TGD.

1 comment:

Ulla said...

If you could give the contents of these chats it could also help a bit.