A decade ago, Netflix launched a challenge to predict how each user would rate each movie in its catalogue. This accelerated the science of machine learning and matrix factorization. Since then, Netflix's learning algorithms and models have evolved to include multiple layers, multiple stages and nonlinearities. Today, Netflix uses machine learning, including deep learning variants, to rank its large catalogue by determining the relevance of each title to each user, i.e. personalized content selection. It also uses machine learning to decide how best to present the top-ranked items to the user. This includes selecting the best images to display for each title just for you, i.e. personalized image selection.
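The Prize-era approach can be sketched as simple matrix factorization: learn a low-dimensional vector per user and per movie so that their dot product approximates the observed ratings. A minimal sketch follows; the rating matrix, dimensions and hyperparameters are invented for illustration and are not Netflix's actual setup.

```python
import numpy as np

# Toy user x movie rating matrix (0 marks an unobserved rating).
# All numbers are invented for illustration.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def factorize(R, k=2, steps=3000, lr=0.01, reg=0.02, seed=0):
    """Learn U (users x k) and V (movies x k) so that U @ V.T fits the
    observed entries of R, via SGD with L2 regularisation."""
    rng = np.random.default_rng(seed)
    n_users, n_movies = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_movies, k))
    observed = [(i, j) for i in range(n_users)
                for j in range(n_movies) if R[i, j] > 0]
    for _ in range(steps):
        for i, j in observed:
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

U, V = factorize(R)
pred = U @ V.T   # predicted ratings, including the unobserved cells
```

The unobserved cells of `pred` are the model's rating predictions, which is exactly what the challenge scored.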
At the Deep Learning Summit, Tony presented on 'Personalized Content and Image Selection'. In case you missed the session, I am sharing my notes from the presentation.
Tony began his session by referring to Galileo and the idea that you first form a hypothesis and then go out and collect data. However, “rather than sitting under a tree to think of one, we now explore billions and trillions of hypotheses”. Out of those billions of hypotheses, it's a case of figuring out which ones stick.
For Machine Learning there are two key areas to focus on:
- Collect massive data sets
- Try billions of hypotheses to find the one(s) with support
As an example, consider observations of the weather over many days encoded as a binary variable (0, 1), where 0 means it will rain and 1 means it won't. Similarly, we can view cloudy or sunny skies as another binary variable.
While we can learn the probability distribution from observations when they contain just a few variables, when there are thousands of variables, it is much harder to describe the probability of them jointly. We keep things tractable by limiting interactions between variables through a network. This gives better computational and statistical efficiency.
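To see why limiting interactions helps, compare parameter counts: a full joint table over n binary variables needs 2^n − 1 free parameters, while a chain-structured network, in which each variable depends only on its predecessor, needs only a linear number. The chain below is just one illustrative choice of restricted network, not the structure Netflix uses.

```python
def full_joint_params(n):
    """Free parameters in a full joint table over n binary variables."""
    return 2 ** n - 1

def chain_params(n):
    """Parameters for a chain-structured network: 1 for P(x1) plus
    2 for each conditional P(x_i | x_{i-1})."""
    return 1 + 2 * (n - 1)

print(full_joint_params(30))  # 1073741823 — hopeless to estimate directly
print(chain_params(30))       # 59 — easy to estimate from data
```

The gap widens exponentially with n, which is the computational and statistical efficiency the talk refers to.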
Storytelling goes back thousands of years, to the beginning of humanity. When this is applied digitally, the story teller doesn't know if the listener is engaged (e.g. laughing, frowning, etc.). Netflix acts as the story teller to its customers (the listeners).
At Netflix, they watch millions of people’s interactions with their site and find out what they like, what causes them to fast forward, turn off, etc. All this data is added to a model in machine learning that is used to make sense of the data and get a clear picture of each user.
Machine Learning at Netflix
There are 6 key areas Netflix focuses on:
Ranking & Layout
The entire catalogue of movies and shows at Netflix is ranked and ordered for each user in a personalized manner. Netflix can work out what a customer's favourite shows are based on what they have watched. If Customer Z has watched a few comedies, it can be presumed that they have an interest in comedy films/shows. Therefore, comedy would rank higher than films/shows they haven't shown interest in.
Therefore, on the website, Netflix ranks the movies by listing the ones most suited to the customer's interests at the top. They also do a ranking of rankings, where the first row might be Top Picks, the second Kids' TV, the third Rom-Coms, and so on. However, this makes things more complicated.
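At its simplest, personalized ranking is a sort by per-user predicted relevance. A toy sketch, with made-up titles and scores standing in for the output of a real relevance model:

```python
# Hypothetical per-user relevance scores — in practice these come from
# a learned model, not a hand-written dictionary.
scores = {
    "Comedy Gold": 0.92,
    "Space Drama": 0.35,
    "Rom-Com Nights": 0.78,
    "Kids' Cartoon": 0.15,
}

def rank_titles(scores):
    """Order titles by descending predicted relevance for one user."""
    return sorted(scores, key=scores.get, reverse=True)

ranked = rank_titles(scores)
# → ['Comedy Gold', 'Rom-Com Nights', 'Space Drama', "Kids' Cartoon"]
```

The "ranking of rankings" adds a second sort over whole rows, scored by how relevant each row's theme is to the same user.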
Similarity & Promotion
Another key area is to capture similarities between movies in order to make useful suggestions to users, for instance by finding other titles related to one that the user recently watched. Another key question Netflix ponders is 'how do we promote new movies which users don't know about?'. This is achieved by understanding a user's likes and history on the site and then suggesting similar movies.
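One common way to capture such similarities is to represent each title as a feature vector and compare vectors with cosine similarity. The sketch below uses invented titles and hand-written genre features; a real system would learn these representations from viewing data.

```python
import math

# Hand-written genre feature vectors for invented titles.
features = {
    "Title A": [1.0, 0.2, 0.0],
    "Title B": [0.9, 0.3, 0.1],
    "Title C": [0.0, 0.1, 1.0],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(title, features):
    """All other titles, sorted by similarity to `title`."""
    target = features[title]
    others = [t for t in features if t != title]
    return sorted(others, key=lambda t: cosine(target, features[t]),
                  reverse=True)

# most_similar("Title A", features) → ['Title B', 'Title C']
```

The same machinery helps with promotion: a brand-new title with no watch history can still be placed next to established titles it resembles.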
Evidence & Search
Through testing, correlations can be drawn between people's interests, watch history, etc. The results of these tests give evidence as to what is working and what isn't. Better search, and the acquisition of new movies to encourage people to sign up, are machine learning problems too.
The first stage is a period of data collection over several months. Then, A/B testing is carried out to determine whether a new model is better than the current one: A is the old model and B is the new contender they have just invented. Half the users get the new model, half get the old model, and the results are analysed to decide which model gets rolled out.
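A minimal version of that A/B comparison is a two-proportion z-test on a conversion metric. The counts below are invented for illustration; they are not Netflix numbers.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented counts: model A (control) vs model B (the new contender)
z = two_proportion_z(success_a=480, n_a=5000, success_b=560, n_b=5000)
# |z| > 1.96 means the difference is significant at the 5% level
```

Real experimentation platforms add refinements (multiple metrics, sequential monitoring), but the decision logic is the same: only roll out B if the evidence clears a pre-set bar.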
There are many problems with batch learning: it may take a long time to figure out what the best experience is for users, and users get a worse experience until the models are fully learned and tested.
Explore / Exploit Learning
For explore/exploit learning, Netflix samples a large number of hypotheses and suppresses the ones that aren't doing as well as the others:
1. Start with uniform weights over the population of hypotheses
2. Choose a random hypothesis h (in proportion to its weight)
3. Act according to h and observe the outcome
4. Re-weight the hypotheses
5. Go to 2
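The loop above can be sketched as a multiplicative-weights bandit, where each hypothesis is an arm with an unknown reward rate. The rates and update factors below are invented; the talk does not specify Netflix's actual re-weighting scheme.

```python
import random

# Toy version of the explore/exploit loop; all numbers are invented.
random.seed(0)
true_rates = [0.2, 0.5, 0.8]           # hidden from the learner
weights = [1.0] * len(true_rates)      # step 1: uniform weights

for _ in range(2000):
    # step 2: choose a hypothesis h in proportion to its weight
    h = random.choices(range(len(weights)), weights=weights)[0]
    # step 3: act according to h and observe the outcome
    outcome = random.random() < true_rates[h]
    # step 4: re-weight — boost on success, dampen on failure
    weights[h] *= 1.1 if outcome else 0.95
    # step 5: repeat

# Over many rounds the weight should concentrate on the best
# hypothesis (index 2), so poor hypotheses are sampled less and less.
```

Because sampling is proportional to weight, under-performing hypotheses are automatically suppressed without ever being explicitly removed.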
Netflix uses explore/exploit learning to find which images best describe each movie; Netflix then modifies the imagery that represents the movie to suit each customer. To do this, Netflix runs tests to see which images work better for each movie and how other factors, such as a customer's genre preferences, affect their choices.
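One simple way to run such an image test is an epsilon-greedy bandit per title: mostly show the image with the best observed click rate, but keep exploring the alternatives. The image names and click rates below are invented for illustration.

```python
import random

# Toy epsilon-greedy image test; all numbers are invented, not
# Netflix data.
random.seed(1)
true_ctr = {"poster_closeup": 0.10, "poster_scene": 0.04}  # hidden
clicks = {img: 0 for img in true_ctr}
shows = {img: 0 for img in true_ctr}

def pick_image(eps=0.1):
    """With probability eps explore a random image; otherwise exploit
    the image with the best observed click-through rate so far."""
    if random.random() < eps:
        return random.choice(list(true_ctr))
    return max(true_ctr, key=lambda img: clicks[img] / max(shows[img], 1))

for _ in range(5000):
    img = pick_image()
    shows[img] += 1
    if random.random() < true_ctr[img]:  # simulate the user's click
        clicks[img] += 1

# The higher-CTR image should end up being shown far more often.
```

In a personalized version, the choice would also condition on user features such as genre preference, so different users converge to different winning images for the same title.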
1. Is machine learning used in shows?
We do machine learning on the entire catalogue. Machine learning evaluates shows and predicts which ones will do best.
2. Do you carry out analysis on the frames of the videos?
We have analysed the imagery and frames of the videos, but we are not doing it at the level where individual images in a movie really affect the recommendations we make.
3. Tailoring screenshots to what people watch (e.g. comedy): how much needs to be done by humans, for instance to say that Robin Williams is funny?
No one annotates these images. We use explore and exploit and randomly send the images to users and see their responses as to whether they click, as well as considering the history of the user. It's not an editorial process, it’s all done by feedback from our users.
Login/signup here to access the presentations from both the Deep Learning Summit and Virtual Assistant Summit. You can buy a membership that will allow you to access all our great content for 12 months. Contact firstname.lastname@example.org for more details.
Other Upcoming Summits Include:
- Deep Learning in Healthcare Summit, 28 February - 01 March in London
- Machine Intelligence Summit, 23-24 March in San Francisco
- Machine Intelligence in Autonomous Vehicles Summit, 23-24 March in San Francisco
- Deep Learning Summit, 27-28 April in Singapore
- Deep Learning in Finance Summit, 27-28 April in Singapore
- Deep Learning Summit, 25-26 May in Boston
- Deep Learning in Healthcare Summit, 25-26 May in Boston
View the full events calendar here.