Deputy Head, Event Editorial, World Economic Forum
This article is part of: Annual Meeting of the Global Future Councils
- Experts from across issue areas gathered this month at the World Economic Forum’s Annual Meeting of the Global Future Councils.
- Professor Stuart Russell, a computer scientist, told attendees that the world needs to move beyond AI “solutionism.”
- Some of the biggest AI challenges need to be resolved through new forms of collaboration and by asking the right questions, Russell said.
From simulating molecules a million times faster than a decade ago, to the possibility of applying artificial intelligence (AI) to improve the quality of care, AI applications offer many potential opportunities to advance business, improve lives and tackle global challenges.
But AI is not a universal problem solver, and having a clearer idea of what AI is, and isn’t, will be critical to moving beyond “solutionism,” says Stuart Russell, a Professor of Computer Science at the University of California, Berkeley.
Speaking during the opening plenary of the World Economic Forum’s Annual Meeting of the Global Future Councils, Russell explained that “solutionism” is the attitude that, given enough data, machine learning algorithms can offer a solution for all our problems. He also spoke about how to dispel common AI misconceptions and how best to advance the field of study.
What is artificial intelligence?
Russell began by noting how surprising it is that many of us struggle to define something we see on newspaper front pages every day.
One misconception is thinking of AI as a technology. “Artificial intelligence is a field of study, not a technology” and is defined by the problem of how to develop machines able to perform tasks commonly associated with intelligent beings, Russell explained.
A second misconception is thinking of AI as something new, whereas it has, in fact, been 80 years in the making. What is new, however, is the speed at which progress happens today, he added.
Progress appears to be accelerating towards the goal stated very early on in the field’s research: creating machines that match or exceed human performance across every conceivable task.
“If we are creating systems more powerful than ourselves, then there is an obvious question: How do we retain power over entities more powerful than ourselves forever?” he said.
“Alan Turing, who founded computer science, gave a speech in 1951 saying that ‘once the machine thinking had started it would soon outstrip our feeble powers. At some point we would have to expect the machines to take control.’ So that was his answer. That we can’t maintain such power,” Russell said. “What is clear is that we had better produce an answer to these questions before we produce those machines that are more powerful than ourselves.”
When asked about the progress on this question by the World Economic Forum’s Global Future Council on the Future of Artificial Intelligence, Russell replied that “we have some partial answers to that question, but those partial answers are not market ready.
“They are not scalable, and they don’t compete with the technologies that companies are putting out there, and technologies that companies are investing tens of billions per month to develop and scale up and make even more powerful.”
He added that “those technologies have no possibility of being safe. We will not be able to control them because we don’t even understand how they work.”
“So the question that we have to ask is how do governments devise regulations so that companies build AI systems that are safe by design, rather than just building AI systems and then trying to make them safe and failing.”
What is needed to maintain control over AI?
Some of the biggest challenges in the field of AI need to be resolved through collaboration and by asking the right questions, said Russell.
Professor Stuart Russell speaks during the Opening Plenary of the World Economic Forum’s Annual Meeting of the Global Future Councils 2023. Image: World Economic Forum
Unlike other industries, the digital industry doesn’t have the same secondary studies or impact reports to guide it, he added. “We write a few million lines of software and stick it on the world, whether the world likes it or not.”
This approach does not consider the context, or more precisely the socio-technical embedding, of a particular type of software.
“What happens when you take a big chunk of software and stick it in the world in a particular place? Let’s say you have some software that does triage for emergency cases in a hospital, and you put it in an emergency room, what happens?”
“Does it make things better or worse? In computer science we don’t ask that question. We ask, is it good at triage? But that’s not the right question,” Russell highlighted.
“The right question is whether it makes things better or worse,” he explained, pointing to the need for sociologists, economists, business specialists and others to help ask the right questions when developing software.
In the long term, asking the right questions is critical. For example: if we have general-purpose AI, can humans coexist with it? And what roles will humans have when machines can do many tasks better than we can?
And finally, “what is the future that we want where we have this technology and yet we are a vibrant and forward-thinking, and hopefully much better, civilization than we are now? And if you can’t think of that future, if you can’t describe it, then you have to think: why are we going there?”
“This is a question for our whole civilization,” he explained.
Does society trust AI?
Understanding whether societies trust AI is critical to contextualizing these observations.
What is clear is that trust in AI varies significantly between countries, ranging from 75% of respondents to a global survey somewhat willing to trust AI in India, to less than 30% in Estonia, Japan or Finland.
Acceptance and willingness to trust artificial intelligence (AI) systems in selected countries worldwide in 2022 Image: Statista
Acceptance varies significantly across industries too, with 44% of people willing to trust the application of AI in healthcare for diagnosis and treatment, while a further 31% were ambivalent.
This higher level of willingness to trust AI in healthcare most likely reflects the significant immediate benefits that improved medical diagnosis and treatment precision provide for patients, along with a higher trust in doctors in many countries.
Against this backdrop of varying trust, and with technological breakthroughs already transforming the systems in which we operate, the question of how governments can devise regulations so that AI systems are safe by design, and the imperative of maintaining control over AI, may be more important than ever.
In this light, the question of what a future looks like in which we have AI alongside a vibrant and forward-looking civilization is a question for all of us.
The views expressed in this article are those of the author alone and not the World Economic Forum.