For AI to thrive, it must put humans first
Amid the current mad dash of artificial intelligence innovation, a key consideration has often been overlooked.
Recent groundbreaking chatbot releases have been outdoing each other and wowing the public with their dazzling performances. But in the race for technological advancement, sometimes human considerations have been secondary.
This approach has triggered a swell of negative media coverage, fueled by fears that powerful AI systems could replace humans in certain roles.
These fears are not restricted to the uninformed. Even some of the most prominent voices in the industry have called for an immediate six-month pause in the training of AI systems and warned that unchecked AI progress raises the risk of human extinction.
At Nokia, we believe that AI can be a positive force for society if it is developed and implemented responsibly and safely for humanity. That means designing these technologies to augment humans, not substitute them.
We advocate for AI systems that directly benefit human beings and avoid harm. We want these AI systems to be our ally so they can make day-to-day activities more efficient, automate tasks and ultimately contribute to human flourishing. AI success should be measured not in the technological sophistication it reaches, such as how accurately it can detect a facial expression, but rather in the eventual human happiness it creates.
That journey starts with our six pillars of responsible AI.
Where we are today
Overall, the industry is driven by an approach that seeks technological progress above all else. This drive to overcome the next technological obstacle, and little more, has created some of the public fears that have reverberated in the press.
The deployment of next-generation chatbots is an example of this outcome. AI researchers sometimes rushed to push technological boundaries before stopping to consider what humans would actually want from these systems. Despite their extraordinary abilities, these chatbots have also been known to generate responses that sound plausible but are factually wrong, a phenomenon known as artificial hallucination.
The last thing we want is an AI chatbot that generates false information, which will erode the trust the public requires for AI to do its job. Issues like safety, trust, fairness and privacy are at the forefront of people’s concerns about AI, and these issues need to be at the forefront when designing these technologies. Therefore, researchers should be prioritizing these human considerations early in the design stage.
A simple example of how the human element was overlooked in design and development is in online ads. Since the early days of the internet, we have been inundated with ads. Over the past decade, however, powerful machines and algorithms have developed an insatiable appetite for our data. They have become far more sophisticated at targeting us, as well as more annoying and intrusive. Recently, regulators have compelled advertisers to offer users the choice of accepting or rejecting being tracked. But this is not a true choice, since we are bombarded with consent pop-ups on every webpage, with insufficient transparency about what these choices mean.
The alternative
At Nokia Bell Labs, we propose an alternative: a human-centric approach that is integrated into the design of all AI systems. Our upcoming systematic review of the AI literature found profound gaps. The current AI research agenda must undergo a significant shift, with a renewed focus on human-centered AI that prioritizes human control and flourishing.
In his article “On the promotion of human flourishing,” Tyler VanderWeele states that on an individual level, we should seek to support “happiness and life satisfaction, meaning and purpose, character and virtue, and close social relationships.” Now imagine the opposite: an AI system so addictive that people cannot detach from it and lose touch with reality. The Oscar-winning movie “Her,” in which the protagonist falls in love with his AI assistant, provides just such an example.
One way to prevent these dangerous side effects is to push for human flourishing in AI design.
On a group level, AI must foster the well-being of everyone, including those most vulnerable. Children, for example, are more exposed than ever to technology that collects their data throughout their lives. This can enable companies to create detailed profiles about children that can be used to sell them products or manipulate their behavior in the long term.
Another problem is the exploitation of laborers in AI development, such as data annotators in developing nations whose work is essential for AI to operate. These laborers often work for low wages, with minimal benefits or opportunities for advancement. We need to ensure AI benefits not only the people who use it, but also the people who create it.
Finally, current AI design often reinforces negative stereotypes and perceptions within our society. Our recent study on AI research datasets showed a disproportionate emphasis on Western populations. Not only does this produce results that may not accurately represent global human behavior, but it also excludes and further marginalizes specific populations. We argue that we need to collect data from under-represented populations to obtain an inclusive worldview.
Putting humanity first in AI cannot be achieved by an individual or company effort alone. It is a collective action that requires many pieces to work together. At Bell Labs, our efforts are targeted both at our own internal research and development processes as well as pushing the AI industry forward in a responsible direction.
We have published research and built tools to prompt AI developers into thinking about these aspects as part of our commitment to humanity by design. And we, along with everyone at Nokia, are strong advocates for diversity and fairness in all aspects of AI research and development.
The road ahead
Research shows there is a gap that needs to be bridged between the current, default AI agenda and the needs of individuals, societies and our planet.
AI research should no longer center solely around its technical capabilities, but rather around its impact on humanity and the environment. We must prioritize designing AI that meets the needs of people and the planet, rather than simply pursuing technological advancement for its own sake. We must consider the inevitable scenarios when technology and morality clash.
It is time to reframe our AI mindset and shift the focus of research toward creating solutions that promote well-being and sustainability for all. Only by doing so can we create a future in which AI serves humanity, rather than the other way around.