Kay Firth-Butterfield is Head of AI & ML at the World Economic Forum, and a humanitarian with a strong sense of social justice. Kay talks to us about why AI ethics matter in her presentation at the RE•WORK Applied AI Virtual Summit.

Topics explored include:

  • Gender Equality and Bias in AI
  • What Can We Do to Make Sure That AI Reaches Its $15 Trillion Forecast by 2030 as a Benefit to the World's Economy?
  • Ethical Issues in Facial Recognition and Talent Acquisition
  • The Potential of AI in Educating Young Children
  • When and Why AI Ethics Started
  • Can We Use AI for Anything at Any Time?

Read the full transcript below and watch the video here.

[0:03]

It’s really great to be with you, and thanks to RE•WORK for making it happen. My title is, Does AI Ethics Matter? Well, I'm going to give you two reasons why it does. But the very biggest reason is: do you want AI to work in your company, and do you want it not to damage your brand value? If yes, then I think there are two things that you really need to think about. One is this Gartner study: by 2020, bias will be so endemic in algorithms, either through the data or through the development of the algorithm itself, that 85% of all AI projects will deliver erroneous outcomes. Obviously, that's really bad for gender equality, but it also undermines the usefulness of the algorithms that you're creating. As you can see, the vast majority of professionals in AI are still male, and we have to do something about that.

[1:14]

But the second piece is really that if you are not creating trusted AI, then you're going to suffer a loss of brand value if something goes wrong, so the risks really have to be weighed against the benefits of AI. That's where ethics comes in. But I get a whole lot of people saying to me, well, ethics impedes innovation, just as regulation impedes innovation. I'd like to point out some of these things and suggest that maybe, actually, we need trusted AI to move the needle forward. I also don't think that you can ignore trusted AI or AI ethics anymore, because we've got over 160 different sets of ethical principles out there, from Beijing to Montreal to everywhere in the world. I think we also need to be cognizant of the fact that in 2019, PwC predicted that about 20% of businesses would deploy AI.

[2:39]

Their prediction for 2020 put that figure at only 4%, and that was before we had COVID. The response to tracking apps has shown us that there's a great deal of mistrust amongst the general public in this area. So, just taking two use cases: the NHS tracking application was developed in the UK and trialled on the Isle of Wight. Take-up has only been 30%, even though it's voluntary and one would imagine that people want to contribute to it. So why is that? By the way, you need 60% take-up to actually have any real data that we can work with. So even in this COVID crisis, where health is really important to us, we're not signing up for these tracking apps. Is it that we're just lazy? Is it that we think we don't want to be tracked? Is it that we don't understand how much we're tracked already? Or is it part of what we were seeing in 2019, and what we talked about in January 2020: the tech-clash? Likewise, the other application, which just tracks symptoms and postcodes, has only had 3 million people volunteer to sign up to it in the UK, and even fewer as a percentage in the US and Sweden.

[4:21]

So again, we're not prepared to trust algorithms, even though our health is at stake, and that was all happening before the IMF said we're likely to lose 3% of our global GDP. So what can we do to make sure that AI reaches its $15 trillion forecast by 2030 as a benefit to the world's economy, whilst also ensuring these ethical parameters? Well, I think it's important that I begin by telling you a little bit about what we do at the Centre for the Fourth Industrial Revolution, which is part of the World Economic Forum. We have offices, as you can see, in San Francisco, China, Japan, and India. We also have affiliate centres, where countries have asked us to open a centre: two in South America, three in the Middle East, two in Africa, and two in Europe. What are we doing out of all of these centres? Well, there's a real need and real energy around how we create public-private partnerships and multi-stakeholder environments to make sure that we get all the benefits of AI and mitigate the risks. We do that by co-creating light-touch governance regimes around AI.

[6:08]

To give you a couple of examples: first of all, we worked with the UK Government to put in place ethical procurement rules for trusted artificial intelligence. So when the UK looks to purchase artificial intelligence in the future, it'll be looking at those ethical issues as well. That will obviously create an ecosystem around ethical AI in the UK, because the spending power of procurement is so great. That work is now being trialled in Bahrain and piloted in the UAE, so you'll see that networking and global scaling of everything we do. We also worked with the Singaporean Government on how companies can ethically use AI, so I suggest you have a look at both of those if you're interested. There's also our board toolkit: we found that board members didn't understand very much about AI, so we wanted to make sure they had a toolkit to help them navigate it and give proper advice to the C-suite.

[7:26]

A few other places where we have high-risk use cases of AI, with lots of ethical issues involved: one is obviously facial recognition technology, and we're working on a project with France around that. The other is using AI in talent acquisition and human resources, and we have a project working on that as well. Producing guidelines for the ethical use of both of these technologies is really important, because these are technologies that will actually touch citizens, and if we're to get the benefits of AI, we have to be sure that we minimise the risks and increase the trust of the general public. Another area where we could do really well with AI is in educating young children. At the moment, we don't know what they're learning. Your child might have a smart toy. Do you know what the curriculum is? Do you know how its data is stored? Do you know how it's using facial recognition? There are many, many questions, and basically we're using our children as guinea pigs at the moment, because governance is very limited in this area, even setting aside the data privacy protections in American law or in the GDPR.

[9:00]

So we want to think about that, but I also want to look back and ask, where did all this conversation around ethics come from? Why are we talking about ethics? Well, you'll be interested to know that it started back in around 2014, when Stephen Hawking said AI could be the best thing we've ever done as humans, or it could be the worst thing. I actually became the world's first Chief AI Ethics Officer in 2014. A few of us started to think more seriously about AI ethics as we moved through 2015, and in 2016 we saw the publication of the IEEE's seminal work on ethics and AI. Then in the bottom picture, you'll see me moderating a panel on AI ethics at Asilomar, where a number of others, including Elon Musk, Larry Page, the founders of DeepMind, and many more, gathered to create the first of what are now 160-plus sets of principles around AI. It's still amazing to me to think that it was only 2015 when I created the hashtag #AIEthics on Twitter.

[10:38]

So we've had a lot of principles, and the IEEE is now working on standards, which is what I wanted to talk about next: the operationalisation of ethics. These principles are the ten most ubiquitous that PwC pulled out from looking at over 90 of those principles documents from around the world. We've got all these organisations creating principles; we have some organisations, like the OECD, that are creating observatories; and we have some organisations that are industry leaders, with some great work coming from the big tech companies. For example, Microsoft's ethics committee that reviews AI work, and Paula Goldman's role at Salesforce as Chief Ethical and Humane Use Officer.

[11:51]

So what are these principles and why do they matter? Well: fairness; explainability; beneficial artificial intelligence, because there's a strong feeling that if we're going to use AI, we should make sure that we as humans benefit; data privacy, which takes us back to some of the COVID issues; reliability, robustness and security, which are obviously fundamental to anything that we do in this space; transparency, which takes us back to how we use the technology in, perhaps, the human resources and AI piece; and safety and human agency. So we're really thinking about augmenting the human experience, as opposed to taking humans out and putting artificially enabled machines in their place. If I go back to children and AI, we don't want to stop our teachers teaching. The best way would be to use AI to teach where there are no teachers, for example in rural areas, and also to augment the teachers we have at the moment, so that we can have better-quality teaching and our teachers are not exhausted the whole time.

[13:19]

Moving on, there's a question that I'm dealing with all the time, which I call AI and the Harry Potter problem, and that is: can we use AI for anything at any time? Well, ethical AI needs a lot of testing; we need it to be trusted. We need to avoid the backlash that we talked about at the beginning of this discussion. Harry Potter took seven books to learn how to use his wand effectively, and we should take notice of that. Just to finish by quoting Professor Schwab, who founded the World Economic Forum: the way that companies and countries can keep the digital ecosystem in which we operate reliable and trustworthy is of vital importance, and we need to keep our customers fully aware of the functionality of products and services, including adverse implications or negative externalities. If we don't do that, then we don't have a bright future with AI; otherwise, I think we do. I have talked about some of the 12 projects that we are running at the moment. If you want to join, please contact me at [email protected]. That’s [email protected], really easy. Thank you all very much.
