It's Christmas Day!! How many carol singers have you encountered so far this year? Or how many times have you heard Michael Bublé on repeat? Probably enough. Whilst there's a new Christmas Number One every year, they're never particularly festive thanks to the X-Factor, and no one seems to be dedicating their time to creating a new generation of Christmas classics. What to do when humans are failing at a task? Let AI take over!

Last Christmas, researchers from the University of Toronto developed an AI system that composes and then sings a Christmas song by analysing an image that you upload. The neural network is trained on over 100 hours of online music, and once trained, 'the program can take a musical scale and melodic profile and produce a simple 120-beats-per-minute melody — it then adds chords and drums.'
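To make the 'scale and melodic profile' idea concrete, here is a minimal, purely illustrative sketch: a random walk over the notes of a scale, where each step moves at most one scale degree. This is not the Toronto system (which uses a trained neural network); the function name and the random-walk profile are assumptions for demonstration only.

```python
import random

def toy_melody(scale, beats=16, seed=0):
    """Illustrative random walk over a scale: each note moves at most
    one scale degree up or down from the previous one."""
    rng = random.Random(seed)
    idx = 0
    melody = []
    for _ in range(beats):
        # step -1, 0, or +1 scale degrees, clamped to the scale's range
        idx = max(0, min(len(scale) - 1, idx + rng.choice([-1, 0, 1])))
        melody.append(scale[idx])
    return melody

# C major scale; at 120 BPM each quarter-note beat lasts half a second
c_major = ["C", "D", "E", "F", "G", "A", "B"]
print(toy_melody(c_major, beats=8))
```

A real system would learn the melodic profile from data rather than hard-code a random walk, but the input/output shape — scale in, beat-aligned melody out — is the same.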

Raquel Urtasun, who was involved in the project, said 'we are used to thinking about AI for robotics and things like that. The question now is what can AI do for us?...You can imagine having an AI channel on Pandora or Spotify that generates music, or takes people’s pictures and sings about them, it’s about what can deep learning do these days to make life more fun?' Raquel is now working for Uber ATG in Toronto and spoke about her most recent work at the Deep Learning Summit in Montreal earlier this year.

The question, however, is: can a computer compose 'good' music? Ed Newton-Rex, CEO and Founder of Jukedeck, believes so. Jukedeck is an AI composer that uses neural networks to compose original music. The model is trained to create novel chord sequences and melodies, and Jukedeck is focused on improving AI’s ability to understand music at the composition level.

Jukedeck provides the user with a selection of variables so you can tailor the track to your requirements.

Once you've programmed in what you're after, the AI gets to work composing you an original tune to fit the bill. Ed explained that AI is already affecting the music industry in big ways. It’s being used to amazing effect, for example, by Spotify and others in music recommendation: streaming services can now figure out, incredibly effectively, music you haven’t heard but will probably like. Ed presented at the Deep Learning Summit in London this September and demonstrated how their model is trained on Bach chorales, which are essentially hymn tunes: 371 pieces of music, each in 4 distinct parts and all in the same style, which made them a sensible training set. The neural network can then pick up things that are true to Bach and recreate music in the same style, breaking the music down by structure and learning elements such as the circle of fifths.
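The circle of fifths mentioned above has a simple arithmetic form that a model can latch onto: each step up a perfect fifth is 7 semitones, and because 7 and 12 are coprime, repeated fifths cycle through all 12 pitch classes. A small sketch (assumed, illustrative code, not Jukedeck's implementation):

```python
# The 12 pitch classes in chromatic order
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circle_of_fifths(start="C"):
    """Walk upward in perfect fifths (7 semitones) from `start`,
    visiting every pitch class exactly once before returning home."""
    i = CHROMATIC.index(start)
    return [CHROMATIC[(i + 7 * k) % 12] for k in range(12)]

print(circle_of_fifths("C"))
# ['C', 'G', 'D', 'A', 'E', 'B', 'F#', 'C#', 'G#', 'D#', 'A#', 'F']
```

Patterns like this, regular but not obvious from the raw notes, are exactly the kind of structure a network trained on chorales can absorb without being told the rule explicitly.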

Whilst there has been a lot of apprehension around machines and creativity, with concern raised over art losing its emotion, deep learning in music isn’t a new idea. In the 1950s the first attempts were made to build a rule-based system to compose music, followed by Markov chains and evolutionary algorithms. Systems were trained to take in short sequences of chords and taught to predict which types of chords or sequences should follow. Once Jukedeck is able to create a system that has an element of creativity, rather than one that simply considers a variety of factors, the outcome for their clients will be a ‘personal composer in their pocket to fit their mood, taste, and calendar.’ As soon as you have an AI that understands music and art, you can engage people via these tools to learn and create music.
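The chord-prediction idea those early Markov-chain systems used fits in a few lines: count which chord tends to follow which, then predict the most frequent successor. The progressions below are hypothetical training data, and the function names are my own; this is a sketch of the general technique, not any specific historical system.

```python
from collections import Counter, defaultdict

def train_markov(progressions):
    """Count chord-to-chord transitions across example progressions."""
    transitions = defaultdict(Counter)
    for prog in progressions:
        for current, following in zip(prog, prog[1:]):
            transitions[current][following] += 1
    return transitions

def predict_next(transitions, chord):
    """Return the chord most often observed after `chord` (None if unseen)."""
    if chord not in transitions:
        return None
    return transitions[chord].most_common(1)[0][0]

# Hypothetical progressions in C major for illustration
data = [["C", "F", "G", "C"],
        ["C", "Am", "F", "G"],
        ["F", "G", "C", "F"]]
model = train_markov(data)
print(predict_next(model, "G"))  # "C" — G resolves to C twice in this data
```

A first-order chain like this only looks one chord back; the neural approaches described earlier in the article can, by contrast, capture longer-range structure across a whole piece.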

You can watch presentations from both Ed and Raquel on our video hub, or subscribe to one of our video playlists to explore the impact of deep learning on business and society in more depth. Learn about NLP and AI for Business, hear from the Pioneers of Deep Learning and Women in AI, and watch full presentations here.