ChatGPT and Artificial General Intelligence
Recently on Twitter, an upcoming Twitter Space was announced to discuss Artificial General Intelligence and ChatGPT. The question posed is indeed an interesting one.
It is important to note that this question has come up repeatedly throughout our history. One of the very first chatbots, ELIZA, stunned users into thinking it was possible to create an artificially intelligent machine. At the time, the notion of "artificial general intelligence" wasn't even an idea in anyone's mind. The term only needed to be added to our vernacular once it was agreed that the algorithms being created did in fact perform at a level beyond what human intelligence could do within a specific context (e.g., a computer beats a human at chess, at Go, at Jeopardy; software can interpret human speech, read handwriting, understand the context of a sentence, even prove theorems, and produce new artistic works never created or known before by humans). All of these were dreamed of only in science fiction a mere 50-60 years ago.
Therefore we needed to distinguish between highly complex algorithms that performed at the same level as (or better than) a human and the wide range of intelligent actions a human can perform. And so we started calling these highly complex actions "intelligent actions", "artificial specific intelligence", or "soft AI". Dictionaries even altered the definition of 'intelligence' to be more aligned with this way of thinking.
At best one can argue that the actions performed certainly simulate intelligence to a degree so high that the machine appears to learn, perform, and adapt in much the way we assume human intelligence works in those specific domains.
The famous mathematician and computer scientist Alan Turing proposed a simple test to determine whether or not a machine was exhibiting 'intelligence'. Today we know this as the 'Turing Test', but he called it "the imitation game".
Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person and which is the machine. The interrogator knows the other person and the machine by the labels 'X' and 'Y', but does not know which is the computer and which is the person. The interrogator is allowed to put questions to the person and the machine of the following kind: "Will X please tell me whether X plays chess?" Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine. About this game, Turing (1950) says:
I believe that in about fifty years' time it will be possible to program computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
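To make the structure of the game concrete, here is a minimal Python sketch. Everything in it is a hypothetical stand-in for illustration: `human` and `machine` are callables that map a question to an answer, and the `interrogator` object supplies the questions and a final guess.

```python
import random

def imitation_game(human, machine, interrogator, rounds=5):
    """One session of Turing's imitation game.

    `human` and `machine` are callables mapping a question string to an
    answer string; `interrogator` supplies questions and a final guess.
    All three are hypothetical stand-ins used purely for illustration.
    """
    # Hide the two participants behind the labels X and Y at random,
    # so the interrogator cannot know in advance which is which.
    labels = {"X": human, "Y": machine}
    if random.random() < 0.5:
        labels = {"X": machine, "Y": human}

    transcript = []
    for _ in range(rounds):
        # e.g. ("X", "Will X please tell me whether X plays chess?")
        label, question = interrogator.next_question(transcript)
        transcript.append((label, question, labels[label](question)))

    # The interrogator names the label they believe is the machine.
    guess = interrogator.identify_machine(transcript)
    return labels[guess] is machine  # True => correct identification
```

Running many such sessions and measuring how often the final identification is correct is exactly the statistic Turing's 70-percent prediction refers to.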
I would argue that Turing was right, and in fact that is exactly what has been happening over the last 50-60 years. We continue to make progress with ever more complex algorithms. However, we have done so in a very transparent manner. That is to say, when Garry Kasparov lost his first match against a computer, he knew he was playing against a computer. At no point was Garry 'tricked' into thinking his opponent might be human, so at no point was he in a position to conclude he was in fact playing against a fellow human intelligence.
Just like at playing chess, chatbots have become increasingly better at fooling humans, for increasingly longer periods of time, into thinking "We found it - this machine has general intelligence!" That is, they "pass" the imitation game. This has benefited the AI world insofar as it has helped draw more and more funding into more and more complex algorithms that do things once believed possible only for a human being. Eventually, through analysis and study, we keep reaching the same conclusion: "It is not really AGI", it is just a very complex set of algorithms operating within a specific target domain for a specific purpose. Growing the domain by including, for example, a) the ability to write software programs, b) the ability to solve complex problems, and c) the ability to understand the difference between a potato and a tomato does not mean we have created 'general intelligence', only that we have created a larger domain. Even if you combined all the algorithms created since the 50s into one giant piece of software, and included in it all the known knowledge of the universe, it could still be argued that the domain, while incredibly large, is still limited.
Drop a person into the middle of a forest without any survival training, food, or tools, and he or she will likely have a low chance of survival, but it is not impossible: people dropped into unfamiliar environments can and do adapt. Drop the ChatGPT program into the middle of the forest, give it locomotion, a fake body, and a three-day supply of power, and it is unlikely to be able to adapt to its environment (unless, of course, a human writes an algorithm in advance to teach it what to do).
John Searle's famous Chinese Room thought experiment reveals the error in Turing's imitation game: the idea that imitation demonstrates 'artificial general intelligence' is folly. It is not enough that a machine can imitate or simulate intelligence. To be artificially intelligent, it would need to be shown that the machine is not just executing instructions, that it in fact possesses a mind that is "thinking". This, however, is based on the generally accepted premise that we humans have 'minds' and that our minds 'think'. Much of our brain is still a mystery. It is at least to some degree possible that our brains are simply bio-chemical computers doing nothing more than running a complex Chinese Room. If that is the case, however, we cannot even define ourselves as generally intelligent beings, which poses a rather large problem.
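Searle's point is easy to make concrete. The toy Python sketch below, with an invented two-entry rule book (a real one would be astronomically large), answers Chinese questions by pure symbol matching, and nothing in it understands a word:

```python
# A toy "Chinese Room": the program maps input symbols to output symbols
# by following rules it does not understand. The rule book is an invented
# stand-in for illustration only.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你会下棋吗?": "会, 我会下棋。",  # "Can you play chess?" -> "Yes, I can."
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: look up the input shape, emit the output
    # shape. Nothing in this function "knows" Chinese.
    return RULE_BOOK.get(symbols, "对不起, 我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # Fluent-looking output, zero comprehension
```

The room "passes" for a Chinese speaker on exactly the questions its rule book anticipates, which is Searle's point: fluent output does not demonstrate understanding.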
Let us not forget the important work of John Conway and his Game of Life, as some very important lessons can be learned from such a simple zero-player game:
a) Seemingly complex arrangements can be produced by a very small set of rules (and it is likely nature works the same way); a minimal sketch follows this list.
b) "order" and "meaning" seeing 'gliders' and 'spaceships' may say more about our human brains need to 'see' order then to imply that such order "design" actually 'exist' in the universe
This is the specific reason why we keep coming back, over and over, to the debate of 'have we achieved AGI?': by design, our brains can and do get tricked into believing things, "filling in the gaps" when data is missing and drawing conclusions anyway. It is also why many of us like to watch a magic show and savour those seconds where we think "maybe the magic is real", unless the magician reveals the secret and destroys that innocence.