AI holds great promise but also significant threats. As AI capabilities advance at a rapid pace, so do the risks to both companies and consumers. In the plenary session at the Deep Learning in Finance Summit, Deep Learning in Retail & Advertising Summit, and the AI Assistant Summit in London last week, we were joined by experts in AI security as well as those facing major risks when advancing AI in their businesses.

Aditya Kaul from Tractica kicked off the session by explaining how AI is constantly being re-invented and reminding us that 'in the next 5 years we might not even call it AI, it'll just be the way things run.'

The panel was made up of Shahar Avin, Postdoctoral Researcher at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge; Catherine Flick, Senior Lecturer in Computing and Social Responsibility at De Montfort University; Bianca Furtuna, Freelance Data Scientist; and Jochen L. Leidner, Professor of Data Analytics at the University of Sheffield.

Have security and privacy risks become the ultimate obstacle to AI and its rapid growth? This was the initial question posed to the panel, and the moderator went on to expand: as we know, things can go wrong in software, and machines learn from real-world data. How do we protect against unintended consequences?

Catherine kicked off: I'm currently working on updating the ACM's Code of Ethics, and I work on responsible research and innovation. We need to think not only about the problem and the solution, but about the unintended consequences, such as misuse cases. Thinking about ethics and social impacts is really important, and that means bringing in diverse use cases to test on to make sure there aren't unintended consequences. People who aren't technical will probably have a different perspective on what's important when we're thinking about privacy.

In terms of malicious use, people think policy makers and government need to work together to prepare for the risks, but how can we start to take measures?

Shahar: Definitely by encouraging more dialogue between policy makers and the government. Malicious consequences are no longer unforeseen, so we have no excuse for being unprepared - we need to look forward and make a plan for ensuring these technologies are used responsibly.

Jochen: I volunteer to teach because I think it's important to upskill the next generation in technology and ethics. We need transparency in machine learning, and people need to know why a decision is being made. However, the best-performing ML methods aren't transparent, yet customers often prefer lower-performing models that are. There are research opportunities here, but there are often also tough decisions to make. People are naive about how they share their data, and sometimes volunteer it in ways that might lead to unintended consequences.
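Jochen's trade-off can be made concrete with a minimal sketch: a linear model's decision can be explained term by term, whereas a black-box model only returns a score. The loan-scoring weights and feature names below are invented purely for illustration, not drawn from any real system.

```python
def explain_linear_decision(weights, features):
    """Return a linear score plus each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

# Hypothetical loan-scoring weights (illustrative only).
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.5, "years_employed": 4.0}

score, why = explain_linear_decision(weights, applicant)
# Every part of the decision is inspectable and can be shown to a customer:
for name, contribution in sorted(why.items()):
    print(f"{name}: {contribution:+.2f}")
print(f"total score: {score:.2f}")
```

A deep model might score applicants more accurately, but it offers no per-feature breakdown like this, which is exactly the tension the panel describes.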

Catherine: Of course we'd love full transparency and security, but you can't have it all, so we need to decide what the most important priorities are and find the balance. There's no flow chart to determine what's most important; it depends on context, so it's difficult to set in stone. You need to know the values behind it to determine what to focus on. Once you've given your data away, you can't get it back.

To hear more from the Privacy & Security panel, register for video access here. The second panel discussion of the afternoon turned to the ethical implications of AI. Joining the discussion were Lucy Yu, Director of Public Policy at FiveAI; Ansgar Koene, Senior Research Fellow at the Horizon Digital Economy Research Institute, University of Nottingham; and Yasemin J. Erden, Senior Lecturer in Philosophy at St Mary's University.

The moderator, Phil Westcott from Filament AI, began the discussion by touching on accountability.

Accountability is a challenge in AI and ethics. Currently, many solutions are supervised, but looking towards reinforcement learning in particular, what would you say is the framework of accountability we need to consider?

Yasemin: The lack of transparency is a concern. How does the model make its decisions? With supervised learning, we have a clear dataset - this is the input and this is the output - so we get an idea of how the datasets work and can look at the kinds of people who will be affected. With unsupervised learning, there's an exploration phase with randomised data, so it's hard to track what has trained the system.

In autonomous vehicles, there's the philosophical problem we always touch on - the trolley problem. If a train is heading along a path and the path splits, going one way you'd kill five people and the other way you'd kill one. What should you do? The idea is supposed to show why saving more people isn't necessarily the 'right' thing to do. In AI we need to think about this first, because we're hard-coding it in. How do you approach this in autonomous vehicles?

Lucy: All the variants of the trolley problem are something we encounter every day; there are no rules or existing guidelines, but we're all starting to think about it. The German government put out an outline stating that all human life should be treated as equal, so, for example, age and the like don't matter. In autonomous vehicles we need to think about what's comfortable for everyone, whether you're inside or outside the vehicle.

Ansgar: In a sense, this is an extreme example of something that applies to AI and ethics in a much broader sense. You always have to make a choice, and you won't be able to make everyone happy. There will always be a limited resource one way or another, so we have to decide: what do we optimise for? Should it be maximum satisfaction, or minimising the difference between satisfied and unsatisfied customers?

Yasemin: We're always trying to look forward and see what the possible use cases are. We have to forward-plan everything.

Yasemin, you've previously spoken about whether technology can be 'neutral' - can you explain what you mean?

Yasemin: The idea is that every judgment we make is value-laden. It's tied to our experiences, beliefs and judgements - there's no way to remove this from ourselves, so nothing we do can ever be neutral, and that affects every technology we make. We're incapable of eliminating this.

How do we eliminate or reduce bias?

Ansgar: We're talking about unjustified bias, because any decision involves bias of some sort; what we want is for the bias to be based on justified and appropriate criteria. You have to understand, in a transparent way, what criteria the decision is being made on, so we can make sure the justification is an acceptable one.
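One simple, concrete form of the audit Ansgar describes is to break decisions down by group and compare outcomes: a large gap doesn't prove unjustified bias, but it flags criteria that need to be justified. The sketch below uses invented loan decisions and group labels purely for illustration.

```python
from collections import defaultdict

def approval_rate_by_group(decisions):
    """Approval rate per group: a first-pass check for unjustified bias."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

# Hypothetical decision log: (group label, was the application approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"gap: {gap:.2f}")  # a large gap prompts scrutiny of the criteria
```

This only surfaces a disparity; deciding whether the underlying criteria justify it is exactly the human, contextual judgement the panel argues for.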

Lucy, at FiveAI you have a lot of European culture in your business. How does this affect the system when it's rolled out globally?

Lucy: That's an interesting question, and we currently use a lot of London-based real-world data. Of course, different countries and cultures have different road laws and etiquette, and even within London driver behaviour varies. For instance, a driver in suburban North London might behave differently from someone navigating a densely populated area in South London, and the time of day also has an impact. Then there are cyclists, who are challenging for autonomous vehicles, and again they behave differently - a guy in a suit on a Boris bike, a Deliveroo rider, or a drop-handlebar lycra cyclist - they're not going to behave the same.

To hear more from the Ethics panel, register for video access here.