Last night at the Women in AI dinner in London, we brought together leading female minds working in AI to discuss algorithmic fairness, under-used local computation on devices, image recognition and other topics.
The evening began with our guests meeting over champagne at the South Place Hotel, engaging in conversations around AI and machine learning and making plenty of valuable connections:
“We’re looking forward to the fairness talk - hoping to see some new examples rather than the old familiar ones”
Aleksandra Iljina and Daniela Antonova, Data Scientists attending from Apple
“Looking forward to checking out the ecosystem and seeing what’s out there for women in AI, especially in London”
Sahra Bashirmohamed, Data Scientist attending from Tech City
The evening welcomed local and international attendees, with Ruxandra Burtica, a Computer Scientist from Adobe, flying in from Romania, and our sponsors Borealis AI joining us from Canada. As well as welcoming new guests, we were delighted to see many familiar faces returning.
“I’ve been to the past two dinners and I found the speeches very inspirational. It’s always very fun meeting everyone.”
Bianca Furtuna, Data Scientist
After taking our seats for dinner, we were welcomed by Fujitsu’s Marian Nicholson. Marian commented on how nice it was to see everyone coming out to support women in AI, especially the gentlemen in the room who had joined us, and thanked our sponsors RBC for their support and collaboration.
Silvia Chiappa, Senior Research Scientist at DeepMind, was our first speaker of the evening, giving an overview of the ways the machine learning community is addressing the issue of fairness and an introduction to how DeepMind is innovating in algorithmic fairness.
Machine learning is increasingly being used to make decisions that can severely affect people’s lives, for example in policing, education, hiring, lending, and criminal risk assessment. Judges and parole officers increasingly use algorithms to predict the probability that an individual will commit a crime. In situations like these, it is important to make sure the algorithm is not biased or unfair towards some individuals or groups of people.
This process is particularly susceptible to unfairness because the training data often contains biases inherent in our society, such as racial discrimination. These biases can be absorbed or amplified by the system, leading to decisions that are unfair.
One of the earliest approaches to overcoming this is simply to disregard the sensitive attribute (e.g. race or gender) in the system, or to downweight its importance. However, this simple approach doesn’t really work. It can be detrimental to the performance of the system, and, more importantly, the resulting procedure may still be unfair because the protected attribute can be correlated with other attributes the system does use. For example, in a system that takes neighbourhood as an input, even though race is never used explicitly, neighbourhood is highly correlated with race, so the system can implicitly extract information about race through it.
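To make the proxy problem concrete, here is a minimal sketch on synthetic data (the group, neighbourhood and income variables and all the numbers are purely illustrative, and scikit-learn is assumed): a classifier trained without the sensitive attribute can still recover it from a correlated feature.

```python
# Illustrative sketch: dropping a sensitive attribute does not stop a model
# from recovering it when a correlated proxy (here, "neighbourhood") remains.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical sensitive attribute (e.g. a protected group indicator).
group = rng.integers(0, 2, size=n)

# "Neighbourhood" acts as a proxy: it agrees with the group 90% of the time.
neighbourhood = np.where(rng.random(n) < 0.9, group, 1 - group)

# An innocuous-looking feature, independent of the group.
income = rng.normal(50.0, 5.0, size=n)

# The sensitive attribute itself is NOT among the input features.
X = np.column_stack([neighbourhood, income])
X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

# A classifier trained only on the "non-sensitive" features still recovers
# the sensitive attribute with roughly 90% accuracy: the information leaks in.
clf = LogisticRegression().fit(X_train, g_train)
print("accuracy recovering the sensitive attribute:", clf.score(X_test, g_test))
```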
Silvia used several examples to outline DeepMind’s proposed definition of fairness, contextualising the problems and suggesting possible solutions. She will also be releasing a paper online in the next few days that goes into more detail, so be sure to take a look if you’d like to learn more.
With our first speaker and the main course completed, it was time for another change of places to meet more new people. One of our attendees, Maura, a student at Imperial College, commented:
“The rotation system for the networking worked really well. There was a relaxed atmosphere and it felt very equal - you could approach everybody. It turned out a lot of people I spoke to were CEOs - there was no hierarchy!”
Next up to speak was Cecilia Mascolo, Professor of Mobile Systems at the University of Cambridge and The Alan Turing Institute, on the potential of the computational units in mobile devices and wearables, with possible applications in healthcare data processing and in making data accessible in developing countries with limited cloud access.
The devices and wearables we carry gather a lot of important data and concerns are often raised about privacy. Do we trust where this data is going? Do we trust that the data is accurate? Does a sleep monitor really understand when you’re sleeping?
People want better inference from their devices and Cecilia suggests that the way to go about it is to bring intelligence down to device level. There is so much temporal and spatial data that can be collected from devices. In particular, Cecilia mentioned the use of voice recognition through microphones for mood monitoring, keyword spotting, and the potential of this technology to be used for early diagnosis of various diseases, including recently for Alzheimer’s disease.
Questions about how accurate this data can be also open up opportunities to rethink how it is processed. If done well, we could make much better use of the under-used computational units on our devices - for example, the powerful GPU on a phone is an under-exploited resource. Cecilia mentioned the potential of applying this in developing regions, where sending data to the cloud is expensive and slow.
Local computation could also improve privacy for healthcare data, particularly where very fine-grained location data is collected. Cecilia compared the downsides of handing over this data to the unwanted side effects of a drug: we can diagnose your Alzheimer’s early, but the side effect is that we have to monitor you constantly and completely. By doing more of the computation locally, we can limit these side effects and address aspects of privacy people are greatly concerned about, such as the handling of very sensitive microphone data.
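As a rough illustration of this idea (the function names, scores and data below are entirely hypothetical, not Cecilia’s system), on-device processing can keep raw sensor frames local and only ever share an aggregated summary:

```python
# Hypothetical sketch of privacy-preserving local computation: raw microphone
# frames are processed on the device and only a daily aggregate is shared.
from dataclasses import dataclass
from typing import List

@dataclass
class DailySummary:
    """The only object that ever leaves the device."""
    day: str
    mood_score: float   # aggregate inferred on-device
    n_samples: int

def infer_mood(audio_frame: List[float]) -> float:
    """Stand-in for an on-device model (which could run on the phone's GPU or DSP).
    Here it simply maps signal energy to a score in [0, 1]."""
    energy = sum(x * x for x in audio_frame) / max(len(audio_frame), 1)
    return min(energy, 1.0)

def summarise_day(day: str, audio_frames: List[List[float]]) -> DailySummary:
    """Raw microphone frames never leave this function; only the aggregate does."""
    scores = [infer_mood(f) for f in audio_frames]
    avg = sum(scores) / max(len(scores), 1)
    return DailySummary(day=day, mood_score=avg, n_samples=len(scores))

# The raw frames stay local; an app or clinician only ever receives the summary.
frames = [[0.1, 0.2, 0.05], [0.3, 0.1, 0.2]]
print(summarise_day("2018-04-26", frames))
```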
The evening concluded with our final speaker, Marian Nicholson, Lead Deal Architect at Fujitsu, speaking about Fujitsu’s work applying deep learning to advanced image recognition and their human-centric approach to AI, emphasising the need for transparency.
Marian praised the work of her team at Fujitsu using image recognition to detect defective wind turbine blades. It takes around ten years of training for a person to be able to tell whether a turbine blade is faulty, and the inspection involves potential physical danger, so they identified this as an area where machine learning could benefit society. The Japanese company always asks: Why are we doing this? Is this going to benefit society?
In thinking about how to train their machines, they looked at how people learn and found that recognising things from visual input was most useful. Babies recognise images before they learn to talk: at around nine months they can recognise a complex set of images and relate a static image to something in real life.
Marian ended by emphasising that all technology can be used for bad, so it’s incumbent on organisations to say “let’s use it for good”. AI is the way forward, but we’ve got to be very transparent about what we do. We need to provide information to people in a form they can understand, and take out bias. With the huge power of AI comes the responsibility of making sure things are safe and fair.
Many attendees were keen to find out when our next event will be taking place. The Women in AI dinner will return to London on July 11th. If you can’t wait that long, we’ll be in Boston in May, and in Singapore and San Francisco in June. We’ll also be hosting the dinner in New York and Toronto later in the year. We hope to see you at one of the dinners for more networking and conversation on women in AI!