Nicholas Davis
Head of Society and Innovation, Member of the Executive Committee, World Economic Forum
We’ve all had something like this happen: one minute you’re searching for a present suitable for a two-year-old; the next, ads for nappies and prams are on every site you visit.
It’s unsettling. Few of us feel comfortable being followed surreptitiously by bots as we roam the web, or with companies using what they learn from our online behaviour to promote products and services in creepy ways.
But could concerns around privacy and informed consent – though undeniably important – be distracting us from what we should be really worried about?
The exploitation of personal information for marketing purposes is a real problem. But the more serious risk is that our personal information can be used against us – not just to advertise a product we don’t want, but to discriminate against us on the basis of our age, race, gender or some other characteristic we can’t control.
Precision prejudice
For example, if you have darker skin, facial-recognition technology is dramatically less accurate than if you have a light complexion. As this technology is progressively rolled out in law enforcement, border security and even financial services, the risk of being unfairly disadvantaged because of your ethnicity grows.
Similarly, there are examples of artificial intelligence (AI) preventing women or older people from seeing certain online employment opportunities.
Not only does this violate the human rights of anyone negatively affected, but it also undermines community trust in AI more broadly. A collapse in community trust in AI would be disastrous, because AI has the potential to be an enormous boon – not just for our economy, but also in making our community more inclusive.
For every instance of AI causing harm, there’s also an uplifting counter-example. This could be anything from AI-powered smartphone applications allowing blind people to “see” the world around them, to huge strides in precision medicine.
Our challenge, therefore, is to build enduring trust in the development and use of a tremendously exciting set of technologies, so we can take advantage of the opportunities while addressing the threats to our basic rights. Unfortunately, this challenge is made harder by a damaging but pervasive myth.
Righting the wrongs
Too often we’re told that if Australia is to compete globally in developing AI products, Australian researchers and companies must not be fettered by human rights concerns, because other countries certainly aren’t. China, for example, is investing heavily in AI technology such as facial recognition to support its “social credit score” system, which involves conducting precise and determinative surveillance of its citizens. In the context of a global AI arms race, it is argued, Australia can’t compete with one arm tied behind its back.
This argument is dangerously wrong. Australia’s liberal democratic values are its strength. The Australian Human Rights Commission’s consultation on human rights and technology has shown that, as Australians learn more about AI, there’s a growing demand that AI only be used in ways that respect their human rights.
This suggests that embedding human-rights protection in AI as it’s developed isn’t just morally right – it’s also smart. If Australia can become known for developing AI that gets the balance right, we can gain a competitive advantage.
After all, consumers in liberal democracies want the benefits of AI, through self-driving cars, better healthcare and super-powerful computers. However, they won’t accept a trade-off that involves mass surveillance, the exclusion of entire groups and a rise in discrimination.
So, what’s the solution?
We know that technology, and especially AI, is developing at breakneck speed. We also know that our laws can be slow to adapt.
This puts greater pressure on Australian institutions to smooth AI’s rough edges in ways that let us harness the opportunities without vulnerable members of our community being crushed.
AI leadership
Several influential voices have already called for an Australian organisation to lead on AI. The World Economic Forum and the Australian Human Rights Commission have formed a partnership to build on this emerging consensus. Today, we have invited leading decision-makers in government, industry and academia to join us at the University of Technology Sydney (UTS) to consider how we will tackle this AI leadership challenge.
In a joint statement, we suggest that Australia needs to build its AI strategy on three central pillars. First, we must clearly articulate the values that should underpin AI in Australia – quintessentially Australian values such as equality or the fair go.
Second, an Australian organisation – either a new or existing one – should take a central role in formulating laws, guidelines, accountability and capacity-building strategies in AI.
Third, this organisation should work closely with industry, government and the community to support the development of AI technologies that respect human rights.
The World Economic Forum and the Australian Human Rights Commission are consulting on these issues right now, and have produced a white paper, Artificial Intelligence: Governance and Leadership, on which we’re inviting comments. We welcome your thoughts and broad community input into this process.
Written by