There’s been an explosion of interest in ethics, responsible AI and bias in recent years. When built, cultivated and deployed with the right human oversight, AI has the potential to do significantly more good for the world than harm. However, the key here is the right human oversight, and as AI becomes more and more accessible, it’s important for every aspect of each product to be designed with the potential ethical repercussions in mind.

Before we get stuck into discussing what is ethically ‘right’ or ‘wrong’, it’s worth recognising that part of the confusion comes from a misunderstanding of what ethics is. At the AI Assistant Summit in San Francisco we were joined by leading minds in the field on the panel discussion ‘As Our AI Systems Become More Capable, Should Ethics be an Integral Component to your Business Strategy?’.

Joining us on the panel were:

  • Jane Nemcova, VP & GM, Global Services for Machine Intelligence at Lionbridge
    Lionbridge provides international organisations with the language, cultural, and technological expertise they need to transform how they communicate globally.
  • Abhishek Gupta, Prestige Scholar at McGill University, and AI Ethics Researcher at District 3
    He is also the organiser of the Montreal AI Ethics meetup, where members of the community do a deep dive into the technical and non-technical aspects of the ethical development of AI.
  • Cathy Pearl, VP of UX at Sense.ly
    Sensely’s virtual nurse avatar, Molly, helps people engage with their health. Cathy is the author of the O’Reilly book “Designing Voice User Interfaces”.
  • Jake Metcalf, PhD, Consultant at Ethical Resolve (Panel Moderator)
    Ethical Resolve provides clients with a complete range of ethics services, including establishing ethics committees, market-driven research, and engineering and design ethics training programs.
Read a transcript of the panel discussion below to learn more:

What are some of the ethical issues in AI that are most pertinent for AI assistants? What would you encourage your colleagues in AI assistants to pay attention to?

Cathy Pearl: For us, as a healthcare company, we think a lot about patient data privacy. In a broader sense, we want to make sure that we’re ethical in the way we get our patients to be compliant. For example, as a company it benefits us if our users do their daily check-in on their health, but it also benefits the patient to see whether they may need more help or adjustments to their medication. However, we don’t want to completely gamify this process. Whilst it’s good for patients to want to check in, it loses its reliability if they’re doing it for the wrong reasons.

Abhishek Gupta: We need to think about what norms we are imposing on people when we put these AIs into action. For example, in the Western world we have an expectation of how a conversation takes place, but is that right for the rest of the world? Are they comfortable with the way we say things? If you’re building something for the developing world, you need people from that community to work on it too, so you don’t impose something that gives people a bad UX because there hasn’t been a fair process behind the creation of the product.

Jane Nemcova: We see challenges with different ways of analysing data, but privacy and the various legal issues around data are things we’re very careful with and take a rigorous approach to. However, there’s a larger question about what the big companies creating these AIs are doing - there’s a need for more folks who understand the bigger picture to figure out where you draw the line.

It’s interesting to pay attention to the breadth of users. Why do we need to pay attention to this?

Cathy Pearl: There are definitely compliance issues with people taking medication, and there have been so many tech ‘solutions’ that assume the compliance problem is simply that people forget to take their medication. Yes, some people forget, but often that’s not what it is. Maybe people can’t afford the prescription, or they don’t believe their doctor. If you only look at your own issues you’ll miss some, so you need to look at a whole and varied data set.

Abhishek Gupta: Diversity in the collection of datasets is really important. Earlier, we were discussing the example of voice recognition and the issue of accents. If you don’t have a North American accent, the recognition accuracy of the system is poor, because the data it was trained on is largely North American accents. If you train the system on a wide range of accents, it’s more likely to perform accurately for a wider audience.

Jane Nemcova: Our area is specifically in getting scalable data, so we cover India, Africa, Asia and Europe. Yes, the US market is the largest, but the diversity even within the US is enormous. You have discrimination, but it’s changing, and the companies creating these systems realise that to support all the applications and get the best UX, the diversity of data is critical.

We hear a lot of discussion about what AI will be like in 10 years. But what do you think AI ethics will look like in 10 years? How should we be structuring technology research and business now in order to deal with future challenges?

Abhishek Gupta: Let’s think about cyber security. Go back 25 or 30 years: the role of cyber security was to have dedicated teams act as a final checkpoint before release, and if flaws were found, the product went back to the beginning of the production cycle. That eventually gave way to ‘secure by design’, where everyone thinks about security from the beginning. In the same way, in the future we’ll have ‘ethical by design’. It won’t just be a few people in the company thinking about the ethical implications; everyone will be thinking about the consequences every day.

Jane Nemcova: Even the fact that we’re having this discussion means that ethics is an important issue for us all to consider, no matter what our role in AI is. But it struck me a couple of years ago that people who were interested in the ethics weren’t necessarily educated in it. We run into problems when we try to apply one person's judgment. Ten years from now we’ll be grappling with new applications that continue to evolve, and both companies and governments need to consider this.

Cathy Pearl: At Sensely, our clinical team thinks about patient safety all the time, but I hope in the future, AI teams will be thinking about customer safety overall. Currently in AI a 5% failure rate is okay, but if, for example, you were building a suicide prevention app, that failure rate would not be good. So we need ethics teams to ensure products are safe.

Do we need a chief ethics officer? Do startups and big companies need something different? Do companies that work on the backend need something different from those building user-facing technologies?

Jane Nemcova: What we need is the education of everyone. It’s definitely in society’s interest to think about how this is affecting everyone. We all need to know how we fit into the world, and our understanding of everything is critical. We need to develop the right habits around that so we can behave well with these systems. In an AI company, every role needs to keep in mind the ethical implications of what they’re doing, even if they’re not experts in the field.

Abhishek Gupta: Definitely. Education is important. The simple solution is making a course mandatory at university in programmes such as Computer Science, and rolling it out to the other disciplines that will be involved. Internal training courses at companies are also important and could have a huge impact. For example, at Microsoft, before you write a single line of code you have to go through training, so having something like that, where you’re compelled to look into and study it, is practical and important.

Cathy Pearl: We do user testing before we put the product out to the public. We need to build in user testing with diverse audiences in the environment the product would actually be used in. It would be great to have a disaster prevention team to think of the worst case scenario. Something like a chief scepticism officer (laughs).

How is bias going to play out in AI assistants? How can we get ahead of that?

Abhishek Gupta: Something that comes to mind is a document around responsible data practices which sums up a lot of the concerns around bias in datasets, amongst other data practices. During the design and conception of AI assistants you need to think about who your target audience is, think through a red team/blue team exercise covering all the possible cases that can happen, and then go and collect the data for that. It’s hard to do but fundamental.

Jane Nemcova: Having unbiased data is a hot topic. What worries me is that when we look at the greater good of what a product is doing and who it’s serving, there’s a fine line between taking an empirical approach and saying ‘hey, I don’t like how users are behaving’ and changing things along the way. Unless the greater good is universally agreed on, it could go from good to bad quite quickly - it’s all about how people approach it.

Cathy Pearl: We were working with a clinic to improve the lives of people with congestive heart failure, and we invited patients in to talk to professionals. The professionals learned so many previously unknown things about the stressors in their patients’ lives that were impacting their health outside of their diagnosis. The more stories you understand behind their behaviour, the more information you have.

Keen to hear more from the panel? Sign up to receive access to watch the full discussion, as well as presentations from the AI Assistant Summit last week in San Francisco.

We’re also working on a new White Paper centred on ethics & AI, so if you’re interested in contributing, do email Yaz at [email protected]