Many of us use AI in everyday life, often without even realising it: for instance, when asking Siri a question on your phone, or using Waze for directions. But how aware are you of the methods and tools behind these intelligent systems? Do you trust them?

Building trust is essential to the adoption of Artificial Intelligence. At what stage do we start questioning whether or not we can trust the system?

What Is Trusted AI?

Trusted AI is concerned with how we ensure and inject dimensions of trust into our intelligent systems, including fairness, robustness, accountability and responsibility, ethics, reliability, and transparency. An AI model becomes more trustworthy when it is fed reliable knowledge and information from a sound set of data.

The Importance of Trusted AI

In the field of AI, ‘no trust, no use’ is a very familiar saying. If we cannot trust our systems, and we are not 100% sure of the risks and the outcome, then we should be very cautious and sceptical about putting them to use.

“Trust is the social glue that enables humankind to progress through interaction with each other and the environment, including technology” - Rachel Botsman, Trust Researcher & Trust Fellow at Oxford University

What are the three main pillars that must be addressed before we decide whether we trust AI?

  1. Performance: Does it perform well? Is it safe? Is it built correctly?
  2. Process: Does it perform the way we intended? Can we predict the outcome?
  3. Purpose: Do I have a good feeling about the intent of the program and provider? Does it adhere to ethical standards? Is it trustworthy? [1]

Research scientists and engineers at IBM have released a set of trust and transparency capabilities for AI, designed around three main factors: explainability, fairness, and traceability. [2]
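
To make one of these factors concrete, here is a minimal sketch, in plain Python with invented numbers, of a common fairness check: the disparate-impact ratio, i.e. the favourable-outcome rate for an unprivileged group divided by the rate for the privileged group. This is a generic illustration, not IBM's tooling; IBM's own capabilities are described in [2].

```python
# Minimal sketch of a disparate-impact check (hypothetical data).
# disparate impact = P(favourable | unprivileged) / P(favourable | privileged)

def favourable_rate(outcomes):
    """Fraction of favourable (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

# Invented model predictions, grouped by a protected attribute.
privileged_outcomes   = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g. group A
unprivileged_outcomes = [1, 0, 0, 1, 0, 0, 1, 0]   # e.g. group B

di = favourable_rate(unprivileged_outcomes) / favourable_rate(privileged_outcomes)
print(f"Disparate impact ratio: {di:.2f}")

# A common rule of thumb (the "80% rule") flags ratios below 0.8
# as a sign the model deserves a closer fairness review.
if di < 0.8:
    print("Warning: possible bias against the unprivileged group.")
```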

How Trusted Does Your AI Need to Be?

How much people trust your AI depends on the consequences of the outcome. If the outcome is low-risk, people may be more willing to trust it. If the outcome is high-risk, people will be more sceptical about trusting the AI.

The Main Risk Factors

Many factors can undermine trust, both in the development and the deployment process. The main risks are biased data, lack of transparency, and badly curated data.

Below is an iceberg presenting the risks of AI in a hierarchy. The main three are technology risks, data privacy risks, and data security risks. These factors largely concern keeping customers' privacy and data safe, which in turn determines whether customers will trust the model and buy the product. As you can see, there are far more risks than you would think.

Who Is to Blame When Things Go Wrong?

There is no simple or easy way to answer this question. Many variable factors come into play, but there are three main parties we may hold accountable: the developer, the trainer, or the operator. In practice, it may be a combination of all three.

1. The Developer

The developer of the algorithm can make an error in the process of creating it, which can result in very unpredictable behaviour. The developer may also lack knowledge about the model, leading them to leave out important elements or even introduce the wrong ones.

2. The Trainer

Once the algorithm has been created, it is time to test the system, to ensure we can predict the outcome and that it is safe to use. The testing has to give a reasonable, predictable, and accurate picture of the model's behaviour. The trainer may be held accountable if they do not test the system thoroughly and it therefore produces unreliable outcomes, or if they train it on a biased set of data, which will lead to errors; a quick data audit, as sketched below, can catch the latter early.
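
Here is a minimal sketch, with invented data, of the kind of pre-training audit a trainer might run to spot label or group imbalance before it turns into bias:

```python
# Sketch: a quick pre-training data audit (hypothetical data).
from collections import Counter

# Invented training labels and a protected attribute per example.
labels = ["approve", "deny", "approve", "approve", "deny", "approve"]
groups = ["A", "A", "A", "A", "B", "B"]

print("Label balance:", Counter(labels))
print("Group balance:", Counter(groups))

# Outcome rate per group: large gaps can signal biased source data.
for g in sorted(set(groups)):
    rates = [l == "approve" for l, grp in zip(labels, groups) if grp == g]
    print(f"Approval rate for group {g}: {sum(rates) / len(rates):.2f}")
```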

3. The Operator

As the operator, you must be able to follow the path that led to the outcome and conclusion; otherwise, errors will follow. The operator cannot ignore the warning signs and make a decision based on an unrealistic outcome.
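
Following that path is only possible if the system records it. Below is a minimal sketch of a decision audit log; the function name, record fields, and model version are illustrative assumptions, not a standard API:

```python
# Sketch: a minimal decision audit log, so an operator can trace
# what led to each outcome (names and fields are illustrative).
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, prediction, confidence,
                 path="audit_log.jsonl"):
    """Append one decision record to a JSON-lines audit file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "confidence": confidence,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a single (invented) loan decision.
log_decision("credit-model-v1.3", {"income": 42000, "age": 31}, "deny", 0.62)
```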


Now that we know what trusted AI is and what may cause trust issues, how do we gain this trust in AI?

How Do We Build Trust in AI?

How can we create trusted AI? We must make sure the AI is 1) Assured, 2) Explainable, 3) Legal & Ethical, and 4) Performant.
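
To make "explainable" slightly more concrete: one simple, widely used starting point is permutation feature importance. The sketch below, on synthetic data and using scikit-learn as an assumed tool of choice, shuffles one feature at a time and measures how much the model's accuracy drops:

```python
# Sketch: permutation feature importance on synthetic data
# (illustrative only; not tied to any particular framework).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three synthetic features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 drives the label

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)

# Shuffle one feature at a time: the accuracy drop is that
# feature's importance to the model's predictions.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"feature {j}: importance = {baseline - model.score(X_perm, y):.3f}")
```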

Another way we can build trusted AI is by following established guidelines and frameworks. Here is one such framework below:

Trusted AI Framework

Capgemini recognised a big problem: businesses were not trusting AI, and if they do not trust the outcome, they will not invest in it or buy it. Therefore, Capgemini created a framework, an ethical AI life cycle with checkpoints, with the end result of creating trustworthy AI. [4]
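
As a toy illustration of what a life cycle with checkpoints might look like in practice, here is a sketch; the checkpoint names are invented for illustration and are not Capgemini's actual framework:

```python
# Sketch: an ethical-AI life-cycle checklist as code (checkpoint
# names are illustrative, not Capgemini's actual framework).
CHECKPOINTS = {
    "data":       ["consent obtained", "bias audit run", "provenance recorded"],
    "model":      ["explainability report", "robustness tests passed"],
    "deployment": ["human override in place", "monitoring enabled"],
}

def release_ready(completed):
    """A release is 'trusted' only when every checkpoint is ticked."""
    required = {c for stage in CHECKPOINTS.values() for c in stage}
    missing = required - set(completed)
    if missing:
        print("Blocked, missing checkpoints:", sorted(missing))
        return False
    return True

release_ready({"consent obtained", "bias audit run"})
```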

The Trust Advantage

Once your company ticks all of the checkpoints above, you gain an advantage over your competitors. If you are trusted, the likelihood is that your company will grow faster, employees will be more motivated, and productivity will increase, resulting in more satisfied customers.

Summary

AI adds huge value to our lives and companies when it is done well. As AI becomes ever more involved in our everyday lives, we need to ensure we design it to be trusted and safe, for the benefit of everyone.

As time progresses, trust and safety in AI are becoming ever more important and more widely emphasised. Those responsible are being much more cautious when creating models, following guidelines and frameworks to ensure they are building trustworthy models.

There is still a lot more to be done; however, we are hopeful that the AI sector is heading in a more trustworthy direction.



If you want to learn more about trusted AI, here are a few resources to get you started:

References

[1] https://towardsdatascience.com/how-to-build-trust-in-artificial-intelligence-solutions-83ca20c39f0

[2] https://www.research.ibm.com/artificial-intelligence/trusted-ai/

[3] https://www.ey.com/en_om/trusted-intelligence/how-to-harness-the-power-of-trust-and-data

[4] https://www.capgemini.com/ch-en/2020/09/the-why-in-building-trust-in-ai/