In this, my very first post, I’d like to share a short essay I’ve written. I’m fully aware that, as far as first posts go, this one is probably too big. But I’m willing to take that chance because, as far as I can see, this forum is the place that will give this essay the best chance of being read. Very quick background: I teach philosophy in Québec, and I completed part of a Ph.D. in cognitive science.
The main point of the essay is to point a finger at the most basic problem we face when talking about AI: the confusion over some very basic terms. What does intelligence mean? What about consciousness and wisdom? Regardless of which definitions you think are best, we can all agree on one thing: trying to talk about these issues without definitions we all agree on is a waste of time. And with some of the issues in AI, we can’t afford to waste time.
In this essay, I offer three definitions that I hope are simple enough to rally a lot of people. I also list a few core facts and challenges that we have to acknowledge if we ever want to make progress in AI.
Hopefully, this brief presentation will have intrigued some of you enough to give the essay a read. If you think the content is interesting enough, my wish is that it eventually makes its way to Sam’s hands.
Below are the introductory paragraphs as a longer presentation of my essay.
You’ll find the essay here: http://photo.solstices.ca/autres/machines-intelligence-consciousness-wisdom.pdf
Thanks for your time.
Whether we like it or not, the 21st century, from the point of view of technology, is going to be the century of artificial intelligence (AI). Any and all other important technologies will be built with and upon AI, so they depend on the progress of that fundamental field. And it’s no surprise: intelligence has always been the driving force of human evolution and transformation. So when the power (or, in this case, the nature) of human intelligence changes significantly, human civilization changes as well.
Is human intelligence changing right now? One common answer to this question is that it can’t be, because our brains are not changing. And indeed, the nature of biological intelligence in the strictest sense is not changing. At least not yet. But human intelligence is still, without a doubt, profoundly changing.
The reason is simple: artificial intelligence is now part of what it means for humans to be intelligent. The impossible-to-overstate impact of AI during the last decades of the 20th century makes it clear that AI has actually changed what it means for us to be intelligent. If we were to remove all types of artificial intelligence from our lives, those lives would be transfigured. So would our societies. In fact, for most young people currently living in rich countries, the world around them would look unrecognizable. But of course, such a removal will not happen. The more pertinent question is “Where is this going?”
Some people fear the coming changes, and others are puzzled by that fear. Who is right? Should we fear the rapid evolution of AI? Are Hollywood’s apocalyptic movies a glimpse into possible, or even probable, futures? Or are the sceptics right? Why should we fear superintelligent AIs when, since we have to build them ourselves, they supposedly can’t be more intelligent than us? Is it even possible for a machine to become more intelligent than us? These are some of the important questions in the AI debate.
Influential people from very different backgrounds and fields have stepped into that debate. From computer scientists to psychologists, from tech billionaires to some of science’s greatest minds, many people passionately argue either that AI is humanity’s most powerful tool for human salvation and flourishing, or that it will lead to our utter annihilation.
Yet in some less public recesses, a few philosophers are raising their hands and asking some basic questions: “What, actually, is intelligence?”, “Is it the same thing as consciousness?” and “What does it mean for a machine to be wise?”
This relatively short essay is a philosopher’s attempt, at the very least, to make sure this crucially important debate about AI starts on firm ground. Sadly, our very vocabulary, heir to centuries of bad metaphysics and other problematic philosophy, as well as to dubious religious belief systems, is in large part responsible for the difficulties.
This essay isn’t an attempt to defend or justify any position about the powers and dangers of AI. Its only goal is to clarify three essential terms that are problematic. The first, of course, is intelligence. To the surprise of many people, its definition is relatively simple. The second term, which sometimes appears in more enlightened debates about AI, is consciousness. Artificial consciousness, far more than artificial intelligence, scares people. As this is the most problematic term, more space will be devoted to it.
There is also a third and very important term that has almost never been brought up in this debate: wisdom. It will soon become apparent that for most people who are afraid of AI, the real issue is not artificial intelligence, nor even artificial consciousness, but artificial wisdom: “What will those AIs do?”
If, after reading the last two paragraphs, the expression “AI” suddenly seems ambiguous to you, that’s because it is. The word “intelligence” has been used without a proper definition since the very inception of the artificial intelligence field, confounded with consciousness and often used as a placeholder for wisdom. Intelligent people don’t do stupid things, right?
Nevertheless, these three words are not synonymous. Natural languages sometimes contain pairs of words that seem like perfect synonyms, such as “happiness” and “joy”. Yet many readers, and most linguists, will tell you that even these two words don’t have exactly the same definition.
In the case of “intelligence”, “consciousness” and “wisdom”, though, one would have to willfully ignore the differences between them to use them as synonyms. Clearly, the questions “Is that an intelligent action?”, “Is that a conscious action?” and “Is that a wise action?” are not equivalent. Not at all. Sadly, generations of thinkers have treated them as such, and this has left us, at the dawn of the AI era, in really bad shape to handle the difficult questions. I will end this essay with a list of core facts and challenges that we have to face and acknowledge in this era of autonomous AIs.