Recently, I read an interview with Geoffrey Hinton in the Globe and Mail newspaper. He is one of the fathers of Artificial Intelligence and a former employee of Google. In light of the claim that GPT-4 has apparently passed the Turing test, I will examine some of what Hinton says. I can’t prove or disprove his remarks because I am not an AI specialist, a biologist, or a neuroscientist. But I have been in IT for decades.
“Hinton: …saying chatbots understand in a very different way from the brain? No. They understand in pretty much the same way the brain understands things. With roughly the same mechanisms.”
We have a tendency to explain one area of knowledge in terms of another, because analogy makes new ideas easier to grasp. Ernest Rutherford, for example, described the atom as a miniature solar system, with electrons orbiting a nucleus. The analogy helps us visualize the core concept, but we know it is only an approximation of what actually happens at atomic scales. Today we do the same thing with minds, comparing human processes and brains to the way computers and their programs work. To say that chatbots understand “pretty much” the same way we do is simply not proven, and a chatbot’s mechanisms are not “roughly the same mechanisms”. What computer scientists have done, based on what little neuroscience and psychology do know, is map simplified neuronal models onto the algorithms used in AI. (It may be worth noting that AI is a broad field that uses many different methods.) AI leverages these concepts to optimize how information is processed.

The brain and its neurons are biological things. Chatbots are not. A chatbot is a human invention built from silicon, metal, mathematics, and simplified neuronal models, an attempt to copy what experts think might be happening in the brain. Might is the operative word. To attribute any kind of human intelligence to AI is to fail to understand this. Hinton has worked at Google and is currently a professor at the University of Toronto. He knows more than I ever will about AI and, for that matter, the workings of the human brain. Still, I can’t help hoping that his conjectures are simply doomsaying: AI is our creation, so will we not have a way to tame the beast we have created?
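To make the gap concrete, here is a minimal sketch (my own toy example in Python, with invented inputs and weights) of the kind of simplified neuronal model that artificial networks are built from: a weighted sum of inputs squashed through an activation function. Everything a chatbot does ultimately bottoms out in arithmetic like this, stacked millions of times over.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # A drastically simplified "neuron": weighted sum plus a sigmoid
    # "activation". Real neurons are living electrochemical cells;
    # this is just arithmetic loosely inspired by them.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squash output into (0, 1)

# Hypothetical signals and hand-picked weights, invented for illustration.
print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))
```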
I do think AI is a disruptor, in the same way all new technologies throughout history have been. Not only will AI disrupt society; it is also pushing society at large to question what we mean by intelligence and consciousness. I hope we will demand that these words be satisfactorily defined before attributing them to anything other than human beings. In that sense, it is an exciting time for philosophy, neuroscience, computer science, and the other disciplines that will have to grapple with such evolving definitions.
“Hinton: …I don’t see why in the future you shouldn’t have things that the AI finds very rewarding and therefore does as often as it can.”
I hope this statement is nonsense. He never defines what “rewarding” means. What would it mean for an AI to have an emotion like pleasure, or chemicals like dopamine and serotonin driving that satisfaction? Such things, I hope, are very far in the future, if they are possible at all. I can write a program that performs some calculation and, whenever the result matches something I’ve decided counts as a reward, keeps seeking that reward until it either goes into an infinite loop or consumes all the resources on my computer. But is this even a useful or desirable thing to do? In information technology, we mostly call this kind of outcome a bug. I do get that this is exactly his point: AI, he suggests, may introduce bugs that we may not be able to fix.
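Here is a toy sketch of the kind of naive “reward-seeking” program I have in mind. The reward condition is entirely arbitrary, something I invented for the example; the point is that the machine “maximizes reward” without feeling anything, and without the safety cap it would simply run forever, which we would file as a bug.

```python
MAX_STEPS = 1_000_000  # safety cap; remove it and the "reward seeking" never ends

def is_rewarding(value):
    # An arbitrary condition that I, the programmer, have decided counts
    # as "reward". The program has no feelings about it either way.
    return value % 7 == 0

rewards_found = 0
for value in range(MAX_STEPS):
    if is_rewarding(value):
        rewards_found += 1  # "does it as often as it can"

print(rewards_found)  # 142858 rewards collected, zero pleasure felt
```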
“Hinton: …We are mortal and [the machines] are not. But you have to be careful what you mean by immortality. The machines need our world to make the machine that they run on. If they start to do that for themselves, we’re fucked. Because they’ll be much smarter than us.”
He is right. If machines attain a level of blind purpose we cannot understand or circumvent, we are fucked. We will be fucked because they will deplete all the resources of the earth, including us, to sustain themselves. That is a scary thought, and for now it is the stuff of science fiction and thought experiments. He is, in fact, echoing a famous thought experiment called the Paperclip Maximizer, which imagines an artificial intelligence given the directive to make paper clips and permitted to let nothing get in the way of making as many as possible. Eventually the maximizer uses up all the resources of the earth, and then the universe, simply building paperclips. I don’t think Hinton really believes this is smart. He means the machines will be motivated, and will have abilities (in decidedly non-human ways) to accomplish their goal at the expense of human beings, and that this will look smart to us. (Who knows? Maybe the universe is already a paperclip maximizer, except that it is making stars, black holes, and dark matter instead.)
“Hinton: Consciousness and stuff like that is all the product of a complicated machine. So no, I don’t think there’s anything special about us except that we’re very comprehensive and very advanced. We’re by far the most advanced thing on this planet.”
Are we really? Can we say that for certain? I always find claims like this rather anthropocentric and arrogant. Is he stepping out of his lane, into areas of science where he has no expertise?
“Hinton: …There’s a group of AI researchers who think we’re just a transitory stage in the evolution of intelligence, and that we’ve now created these digital things that are better than us and can replace us. That’s one view.”
While it is fun to reflect on this idea, there really is no basis for believing it. I think humans are far too arrogant and full of ourselves to let anything replace us. It’s conceivable, of course, but I’d sooner bet on an alien race taking over the earth than on an AI doing it. I’d also sooner bet on climate change getting us before AI does. My own pet notion is that we’ll merge with AI instead, becoming a new hybrid species.
What experts like Hinton are saying is that we can now build something that learns without us directly programming every step of the process. How it arrives at its results is opaque to us, because we supply only the algorithms and models; the conclusions emerge from internal manipulations we cannot fully trace. It does not follow, though, that the robo-apocalypse is upon us.
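A minimal sketch, using made-up data, of what “learning without programming every step” means: we specify only a goal (shrink the error) and an update rule, and the program discovers the relationship itself. The answer ends up encoded in learned numbers rather than in any line of code we wrote, which is why, at a vastly larger scale, the inner workings become opaque.

```python
import random

# Made-up training data: the hidden rule is y = 3x + 2, but nowhere in
# the program do we write that rule down.
data = [(x, 3 * x + 2) for x in range(-10, 11)]

w, b = random.random(), random.random()  # start from arbitrary guesses
lr = 0.005  # learning rate: how big a nudge each mistake produces

for _ in range(5000):
    x, y = random.choice(data)
    error = (w * x + b) - y  # how wrong the current guess is
    w -= lr * error * x      # nudge the parameters to shrink the error
    b -= lr * error

print(w, b)  # close to 3 and 2: the rule was learned, not programmed
```

Scale this idea up from two learned numbers to hundreds of billions, and you have roughly the situation Hinton is describing.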
I have a thing about futurists, academics, and experts, especially those who venture out of their wheelhouse to pontificate about areas of science they know very little about. Hinton may or may not be one of them. He and others absolutely should be warning us about the dangers of AI and other new technologies, but not at the expense of ignoring facts or abandoning critical rigor. They must state plainly that these are doom scenarios that may happen but are in no way a certainty. I hope, for the most part, that AI will benefit humanity without destroying it. And if it does somehow destroy us, well, we will have no one to blame but ourselves.
Supplemental Links and Sources:
Open Letter About the Dangers of AI
Why Artificial Intelligence is Dangerous
Here's Why AI May Be Extremely Dangerous, Conscious or Not
Tomorrow Ever After - Another movie (on YouTube)