In Search of Trustworthy & Ethical AI
With AI scaling at a tremendous rate in the last few years and no sign of slowing down, there is an increasing need for an open discussion around the ethics and trustworthiness of AI, including regulatory and legal risks.
Why is Ethical AI Needed?
From deepfakes that have facilitated million-pound fraudulent bank transfers to harmful gender stereotyping and racial bias, it is clear that we need regulations in place to create a safer, fairer world for everyone. In a recent Women in AI Podcast, Rachel Alexander, CEO and Founder of Omina Technologies, suggested that the key to ethical and explainable AI is:
"Making it clear what this means for companies, explaining it in a way people can understand and also how it can be implemented for them"
To progress, we need to heighten our awareness of the changes that AI demands in our thinking, especially as, according to Gartner, by 2022 85% of AI projects could deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. If we don’t, AI may trigger embarrassing situations, erode reputations, and damage businesses. However, regulation still has tremendous grey areas, with little consensus on how it should be done and, most importantly, who should make the rules.
The Benefits of Ethical AI
The benefits of ethical AI are vast. In a recent study, 62% of consumers said they would place higher trust in a company whose AI interactions they perceived as ethical. Consumer perception is also being backed up, perhaps with some lag, by company executives, 51% of whom consider it important to ensure that AI systems are ethical and transparent. It is also suggested that many people place greater faith in automated decisions, which remove the potential for poor human input or performance.
Without ethical input, AI will simply not have awareness of itself or hold empathy. The benefits of implementing ethical frameworks, both in development and application, therefore include human and environmental wellbeing, fairness, privacy and security protection, and explainability. Another aspect that isn't often discussed is that, due to the mammoth size of this task, ethical AI increases the opportunity for collaboration across academia, industry, and government, in turn making AI development more fruitful and beneficial to society as a whole. These benefits have also been recognised by many regulators in Europe, who plan to boost the research and industrial capacity of ethical AI. You can keep an eye on the regulatory updates here.
The Challenges of Implementation
Ethical standards only have value when put into practice, and it has been argued that responsible AI also requires strong, mandated government controls, including tools for managing processes and creating associated audit trails. As Rachel Alexander mentions in her Women in AI podcast:
"People will trust an artificial intelligence solution if they believe it makes decisions that are fair, respects their values, respects social norms, and they understand how the decision is made. They understand how to contest the decisions and they know their data is handled with respect and is secure."
That said, many issues and challenges come with the implementation of ethical AI. Understanding culture, law, and general ethical practices, and relating them all to AI, is a vast and complex task, especially when creating AI systems the public could potentially use. A further challenge is that AI practitioners must take responsibility for constantly evaluating ethical questions, which can consume considerable time and resources, while not allowing their economic interests or innovative enthusiasm to take over and cloud their judgement. After all, many of those building AI models have a great passion for the technology, especially as, up until now, regulation has been minimal. This is something Rachel believes is imperative to understand before even starting your AI journey:
"The first challenge is how to move from the old practice of building AI solutions, whilst also providing the right explanation to the right user, the context and the purpose, to increase explanation effectiveness and satisfaction. Providing a satisfactory explanation of the AI-enabled decision might increase the trust and adoption of that decision. So you kind of have to decide as a business what is your goal and what is your priority, and what are the regulatory and legal issues that you have to make sure that you fit."
Societal and industry pressures are pushing ethical AI forward, and it is currently often down to businesses to choose their level of comfort in the choices they make, balancing these against the associated costs and wider impacts. Importantly, there has to be consensus around the guidelines chosen, requiring compromise from all parties, who may have differing levels of cost/benefit and commitment. Another aspect is the financial benefit companies can currently achieve: many valuable datasets are held by organizations whose sole priority is monetizing their value.
Interested in reading or listening to more on the need for ethical AI and explainable AI? See our free-to-access content below:
- Identifying and Addressing Bias in Machine Learning Models Used in Banking (Video)
- In Search of Trustworthy and Ethical AI (Podcast)
- Explainable AI for Addressing Bias and Improving User Trust (Video)
- Responsible AI at the BBC with Myrna MacGregor, BBC Lead, Responsible AI+ML, BBC (Video Podcast)
Interested in more RE•WORK Content?
Catch up with our 5 most recent blogs: