Richard Kyte: The real ethical problem with artificial intelligence
For decades, experts have warned that we should get ready for the ethical challenges that could arise if artificial intelligence becomes a reality. Well, AI is here now and we still aren’t ready for it.
If you use a search engine, stream music or movies, follow a navigation app while driving, have a social media account or a smartphone, shop online, or even visit a hospital, you are probably relying on AI. It has become ubiquitous; in most cases, we are hardly aware of its presence in our lives or of the many ways we are becoming increasingly dependent on it.
The ethical concerns over AI for the most part fall into three categories: misuse, consciousness and autonomy. All three sets of ethical concerns are significant and deserve a great deal more attention than they are getting.
The first set of concerns is common to many types of new technology. There are questions about who will control AI. Will it be used to deceive or manipulate people? Will it be safe? Will it be used in a way that respects our privacy? Will it incorporate historical biases? And how will it affect our economy? Will AI take over jobs now done by truck drivers, financial advisors, accountants, doctors and receptionists?
The second set of concerns has to do with Artificial General Intelligence (AGI). What if robots attain consciousness? Should they have legal and ethical rights, just as people do?
You may have heard about the Google engineer who was placed on leave after claiming that LaMDA, which stands for Language Model for Dialogue Applications, is sentient. He apparently thinks its rights are being violated and that engineers should seek its consent before running further experiments.
Nobody else seems to think LaMDA is self-aware; it is just a machine running incredibly complex calculations that successfully mimic the way human beings use language. But the case raises an interesting point. If AI continues to develop in the way that it has so far, and if companies like Google successfully create even more sophisticated artificial neural networks, how will we know when a machine actually is self-aware?
If the test of machine consciousness is the ability to pass as a human being in conversation (the classic Turing test), then that test would seem very close to being passed. After all, countless people regularly say “please” and “thank you” to the Amazon Alexa sitting on their countertop at home. And it can already be difficult to tell whether a customer service phone call is being answered by a real person.
Finally, there are concerns about machines becoming autonomous, acting on their own and perhaps in ways contrary to human interests. That is something likely to happen, whether or not they ever develop consciousness. The more AI is used to run critical infrastructure and security systems, the more potential there is for danger.
As I said, these are all significant ethical concerns, and they deserve a great deal of serious discussion. But they are not the most important concern.
To my mind, the most important ethical challenge presented by AI is its potential for success.
What if AI fulfills all our greatest hopes? What if we manage to avoid the ethical problems by creating a robust regulatory framework and AI proves wildly successful? What if it makes our lives easier and more prosperous? What if it helps us develop new cures for diseases, increases energy efficiency and enhances security? What if it improves the economy by boosting productivity, workplace efficiency and innovation? What if it makes life go much more smoothly by enabling us to realize our goals with greater ease and proficiency?
I happen to think that is a likely outcome of AI. I also think it could be disastrous for humanity.
Over the past century we have witnessed revolutionary changes with the constant development of new technologies. But AI will not be just one more step in a series of innovations. It will vastly increase both the speed and impact of development, bringing more profound and far-reaching changes that we cannot yet envision.
It is important to remember that technology does not just change our ability to act upon the world; it also changes the ways we interact with one another. Every new development reduces our mutual dependency.
New technology is always designed to do one of two things: give us more power or greater freedom. But every success in these directions comes at a severe price — the price of being needed by one another.
All the significant social challenges of our time — loneliness, anxiety, stress, suicide, drug abuse, violence, mistrust, fear, polarization — are worsened by the loss of human connection. Yet we seem determined to build a world that allows for less and less meaningful interaction.
The greatest danger of AI is not that it will be misused but that it will give us exactly what we want — a frictionless life.
If you think social media has eroded social relationships, wait until you see what is coming. AI is poised to disrupt human interdependence on a massive scale.
A life in which one’s goals are realized without struggle or compromise is a life in which nobody is needed. A frictionless life is no life at all.
