One of Facebook's primary goals is to provide a personalised experience to every user. It does this by looking at your profile, your interactions with adverts, and person-to-person interactions; however, it hasn't invested as much in understanding the content itself. At the Women in Machine Intelligence Dinner in San Francisco on January 23, Annie Liu, Research Scientist, and Cristina Scheau, Engineering Manager, Computer Vision at Facebook, will be presenting their current work in machine learning and deep learning. Cristina will be sharing her work on deep learning for search relevance from the perspective of users, whilst Annie will share how Facebook approaches the content understanding problem by combining signals from text, images, videos, and audio.
I spoke to Annie in the run-up to the event to hear more about her work and what we should expect from the dinner next week.
Give me an overview of your work at Facebook.
I work on a sub-team in news feed that focuses on understanding content (posts) on Facebook. A post may contain text, images, videos, emojis, hashtags, comments, reactions, etc., and our goal is to combine all these signals to come up with meaningful labels for each post. Traditionally there are two approaches to content understanding: 1) learn an embedding for the content, or 2) explicitly tag the content with labels. The advantage of 1) is that it can be trained in the absence of a taxonomy. The advantage of 2) is that, since the labels are interpretable, they can be exposed in different products. For example, if we understand that a post is about “politics”, then we can give users control to either increase or decrease such content in their news feed.
At Facebook, we're working on both approaches, but my team specifically focuses on approach 2), where we maintain a taxonomy of ~3000 categories and build hierarchical models to classify posts into a subset of these categories. Currently we're classifying all English posts, and the predictions are used in various recommendation and ranking products to make them more personalized.
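To make the taxonomy approach concrete, below is a minimal sketch of one way a hierarchical classifier over a label taxonomy can be structured: a top-level head plus per-branch sub-heads, chained together with the product rule. The tiny two-level taxonomy, the fused post embedding, and all names here are hypothetical placeholders; Facebook's actual models and taxonomy are not public.

```python
import torch
import torch.nn as nn

# Hypothetical two-level taxonomy standing in for the real ~3000-category one.
TAXONOMY = {
    "politics": ["elections", "policy"],
    "sports": ["football", "tennis"],
}

class HierarchicalClassifier(nn.Module):
    def __init__(self, embed_dim, taxonomy):
        super().__init__()
        self.top_labels = list(taxonomy)
        # One head for the top level, plus one head per top-level node
        # for its children.
        self.top_head = nn.Linear(embed_dim, len(self.top_labels))
        self.sub_heads = nn.ModuleDict(
            {top: nn.Linear(embed_dim, len(subs)) for top, subs in taxonomy.items()}
        )

    def forward(self, post_embedding):
        # post_embedding: (batch, embed_dim), a fused representation of the
        # post's text, image, video, emoji, etc. signals.
        top_probs = self.top_head(post_embedding).softmax(dim=-1)
        # Chain rule: P(sub-category) = P(top-category) * P(sub | top).
        sub_probs = {}
        for i, top in enumerate(self.top_labels):
            cond = self.sub_heads[top](post_embedding).softmax(dim=-1)
            sub_probs[top] = top_probs[:, i:i + 1] * cond
        return top_probs, sub_probs

model = HierarchicalClassifier(embed_dim=128, taxonomy=TAXONOMY)
fused = torch.randn(1, 128)  # stand-in for a fused multi-signal post embedding
top, subs = model(fused)
```

One appeal of this factored design is that each head stays small and the labels stay interpretable, so a post's predicted path through the taxonomy (e.g. politics → elections) can be surfaced directly in product controls.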
How did you begin your work in machine learning and more specifically in ranking and personalisation?
My PhD research was on statistical machine learning in large-scale sensor networks. While it grounded me in the fundamentals of machine learning, I didn't have much experience in ranking before joining Facebook. My first two years at Facebook were on the entities team, working on entity resolution and classification, and I only picked up ranking and started thinking about personalization after joining the news feed ranking team. Easy team transitions are built into Facebook's culture, and I'm very grateful for the opportunities that allow me to continue learning even after four years at the company.
What are the main challenges you're currently facing in curating personalised experiences for Facebook users?
Being able to provide personalized results is super important to Facebook's ecosystem, and has profound use cases from news feed to ads to search. However, overly personalized content has drawn the company some criticism, especially around the discussion of echo chambers. Sometimes a deliberate choice has to be made so that we're not pouring fuel on an already divided environment. These difficult decisions are usually the result of many months of research and preparation that go beyond the typical discussion of machine learning and optimization. The decision to kill clickbait and sensational content is one example; Mark Zuckerberg's recent announcement to shift news feed back to focusing on family and friends is another. These decisions don't always align well with the existing objective functions we're optimizing, which presents a constant challenge to everyone working on the products.
How are you using AI for a positive impact?
There are so many examples I don't know where to start! Facebook has transitioned into an AI-driven company. Pretty much all the products you have used on Facebook have some AI component, whether it's showing you the most relevant comments, summarizing trending news, fighting spammers, or preventing suicides. A particular example I worked on recently is using a machine-learned model to detect clickbait and sensational posts – these posts (with titles such as “A woman walks into a bar, and you'll never believe what happens next!”) generate tons of engagement but were consistently among the top three complaints about news feed. We trained a ConvNet that takes in text as well as other metadata features to predict how likely a post is to be clickbait, and we use the predictions to demote such posts on news feed. This shift, although it caused certain engagement metrics to drop, has dramatically improved the user experience on news feed.
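As a rough illustration of the kind of model described above, here is a minimal sketch of a text ConvNet that combines n-gram convolutions over title tokens with a handful of metadata features. The architecture, feature set, and sizes are my own assumptions in the style of a standard text CNN, not the production model.

```python
import torch
import torch.nn as nn

class ClickbaitConvNet(nn.Module):
    """Text ConvNet + metadata features -> probability a post is clickbait.
    A generic sketch; the real model, features, and sizes are not public."""
    def __init__(self, vocab_size=10000, embed_dim=64, n_meta=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolutions over the token sequence pick up n-gram patterns
        # such as "you'll never believe".
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 32, kernel_size=k) for k in (2, 3, 4)]
        )
        self.classifier = nn.Sequential(
            nn.Linear(32 * 3 + n_meta, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, token_ids, meta):
        # token_ids: (batch, seq_len); meta: (batch, n_meta), e.g. engagement stats.
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled + [meta], dim=1)
        return torch.sigmoid(self.classifier(features))  # P(clickbait)

model = ClickbaitConvNet()
score = model(torch.randint(0, 10000, (1, 20)), torch.randn(1, 8))
# A ranking system could then demote posts whose score exceeds a threshold.
```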
What developments of AI are you most excited for, and which industries do you think will be most impacted?
I'm very excited about many recent developments in AI, such as machine learning in pharmaceuticals to accelerate drug development, and machine learning applied to the arts and fundamental sciences. But if I had to choose one, it'd be self-driving cars. Like many people living in the Bay Area, I commute to work on 101 daily, and like many people who make that choice, I hate it, especially when the already bad traffic is made even worse by accidents. It seems to me that there's already a solution to this problem in self-driving cars that communicate with each other, and it's only a matter of time before this becomes a reality. And it's unavoidable that the industries that currently provide driver services will be impacted directly.
AI and deep learning raise many ethical concerns, such as bias, security, and privacy, amongst others. What are your opinions on this, and how does it affect your work?
Machine-learned models are trained on real-world data, so it's unsurprising that they capture real-world biases. Not all biases are bad – a synonym for “bias” in machine learning is “prior”, and everyone who has studied Bayesian statistics knows how powerful a prior is. However, there are biases that we deliberately choose to avoid; such biases range from gender (try googling “images of CEOs”) and race (e.g. Google's “images of gorillas” problem) to religion (e.g. Facebook's mistranslation that led to the arrest of a Palestinian man). There really is no silver-bullet solution to the bad-bias problem, and all we can do as machine learning engineers is to be extra cautious when building products powered by models, asking questions such as “is our label collection guideline as clear and neutral as possible?”, “are all the features we engineered free of human biases?”, and “is the potential risk low enough to use this model in this product?”. In practice, we have certainly called off model launches because we couldn't give positive answers to these questions, and that is the right thing to do as a company with social responsibility.