The way people interact with technology is always evolving. Think about children today: give them a tablet or a smartphone and they have no trouble figuring out how to work it. Whilst this is a natural evolution of our relationship with new tech, as it becomes more and more ingrained in our lives it’s important to think about the ethical implications. This isn’t the first time I’ve spoken about ethics and AI - I’ve had guests on the Women in AI Podcast such as Cansu Canca from the AI Ethics Lab and Yasmin J. Erden from St Mary's University, amongst others, join me to discuss this area, and I even wrote a white paper on the topic which is on RE•WORK’s digital content hub - so it’s something that’s really driving conversation at the moment. Fiona McEvoy, the founder of YouTheData.com, joined me on the podcast back in June to discuss the importance of collaboration in AI to ensure it's ethically sound. Fiona will be joining us at the Deep Learning Summit in San Francisco this week, so in advance of this, I caught up with her to see what she's been working on:

What’s your background, and how did you begin your work in ethics and AI? What came first?

I studied philosophy in grad school and quickly gravitated towards ethics. I took a class in the ethics of science and technology, and - studying in San Francisco - fairly naturally started to consider how ethical theory could be applied to new and emerging tech. I ended up writing my thesis on the ethics of “Big Data” and data-driven AI. That was the beginning. From there I started to blog, making observations about anything and everything happening around me, particularly in terms of the potential impact of tech on individuals and broader society. My involvement with the subject really just snowballed from there.

Tell us a bit more about your current work

I still blog regularly, as well as continuing to write in a more academic fashion. I also comment in the media and present my ideas at conferences. Rightly or wrongly, my focus is very broad. One week I’ll be writing about the ethical implications of nudge in immersive environments. The next I might be writing about decision-making algorithms, or chatbots, or NLP. We live in exciting times and there is so much to examine. Similarly, ethics itself is complex and multifarious. I want to introduce ideas that are new to people, and initiate debate.

Before returning to study and launching YouTheData.com, I worked in advocacy for many years - creating and launching public campaigns. I think this has given me a slightly different approach to the whole area of AI/tech ethics. My tone - for the most part - is less formal. That’s because I’m interested in the conversation beyond its place in academia or inside Silicon Valley firms. I care about the role of the public as key stakeholders in determining how the future unfolds. I’m a big proponent of the idea that this stuff should be accessible.

What do you think are the main concerns in ethics and AI? Can the problems be avoided?

I think - outside of critical, high profile areas like system bias and data privacy - we’re still trying to anticipate problems that could manifest over time. It’s important to remember that there has been a huge amount of development over an incredibly short period. Fortunately, for years now great thinkers have been considering the sorts of ethical dilemmas we’re now encountering, and there are long-established fields like medical ethics from which AI ethics can learn a lot.

It would be bold to say all of the problems being identified are soluble, because that evidently won’t be the case (for example, AI bias is proving a difficult nut to crack), but it’s also important to acknowledge that those working on tech ethics aren’t obstructive Luddites. For my part, I’m completely in awe of much of the technology we’ve seen emerge over recent years. I acknowledge and respect its incredible capacity to do good in the world. But I’m also highly cognizant that, left unaddressed, issues like bias, psychological harm, worklessness, data misuse, coercion, and other types of catastrophic error could be extremely damaging to humans and technology alike.

Why do you think it’s important for the non-technical members of a team to understand ethics?

When anyone is developing anything for mass adoption, the maker should consider how it will affect the eventual users. We all remember the old Jurassic Park line that (paraphrased) says something like: “You were so focused on whether you could, you didn’t stop to think if you should.” The “should” part is so important. Very often, ethicists and philosophers can’t offer definitive answers on what is right and wrong; rather, ethics throws up areas that warrant hesitation. These may not always be immediately intuitive or accessible. Obviously, if you’re working directly with a technology and know its capabilities, then you are well-placed to spot these problem areas early on. That’s why it’s encouraging to hear that - slowly but surely - more is being done to school technology students in ethical theory.

Do we need a ‘common standard’ for AI ethics or does that depend on individual points of view?

There is a risk that a ‘common standard’ - however that might look - could make ethics a “tick box” exercise for technology companies that really need to engage beyond a set of static rules. Nevertheless, there is clearly a real need for support and guidance too. Groups like the IEEE are driving some great work when it comes to laying the foundations across a spectrum of new technologies, and obviously new regulations like the EU GDPR are game-changing in that regard.

How can we ensure that AI doesn’t inherit some of the intrinsic faults of humans?

By firstly accepting that technology is not neutral, and by being vigilant about where the faults creep in. Critically, this means building ethics in at the design phase - whether that’s taking the care to understand the socio-historical context of the data used to train systems (and how it might create bias), or ensuring there’s a robust feedback loop that allows a system to receive information about bad decisions and adjust accordingly. These are just two measures; there are many others being offered up by tech ethicists deeply entrenched in this area.

None of it is straightforward. How can we know if a hiring algorithm filtered out the perfect candidate based on some irrelevant factor? Or if an autonomous vehicle could’ve made a better decision to avoid a crash but for some other competing goal? Or if a vulnerable person has been coerced into buying a product or service by relentless online targeting? Very broadly, in order to understand and intercept future bad decisions and influences, we need some level of AI explainability, and for humans to play a key role in continually policing, stress testing, and correcting our systems and their goals.

What’s next for you in your work?

I’m currently working on two research papers that focus on very different areas of tech ethics. One is looking at AI systems in political governance, the other considers AI-driven decision-guidance systems in the context of virtual and augmented reality. I’m particularly keen to extend my work into the latter as I think there are a number of issues that lend themselves to ethical analysis.

Outside of that, I’ll continue to blog, write in the media, and (hopefully) speak at great events like RE•WORK. I’m really encouraged by how much the AI ethics conversation has progressed in such a short time, and I’m delighted to be playing a small part in that.

Fiona will be compèring the Ethics & Social Responsibility stage at the Deep Learning Summit this week. Find out more and register your pass here.