Can small companies successfully implement deep learning?

The world of deep learning is dominated by academics and technology giants pouring vast sums into their research and applications every day. But what about smaller companies? There are plenty of real-world problems that DL can solve and that the huge corporations aren’t touching. Countless startups are trying to tackle these issues and improve efficiency across industries, and many of them fail - not necessarily because of a poor idea or execution, but because they are often underfunded and understaffed. The startups with truly extraordinary ideas, however, often secure funding from venture capitalists, through crowdfunding campaigns, or via awards and grants. The CEOs of these companies are not necessarily AI experts, but they are experts in their own industries, from artists to healthcare professionals, scientists, retail managers and many more.

At the Deep Learning Summit in London this September, we heard from four startups who are building DL models into their businesses. Whilst each company had vastly different business goals, they all had one thing in common: an understanding of the positive impact DL can have in solving real-world problems.

Tony Beltramelli, Founder & CEO, UIzard.io

Front-end development is time consuming and tedious, and UIzard.io has created a platform that allows ‘more people to create software-based solutions by shortening the time from an initial idea to a functional system’ - it transforms the user’s ideas into working code for every platform required. Tony has designed pix2code, a neural network trained end-to-end to generate this code. The challenge is identifying the graphical components on a user interface: if a computer has to do this automatically, currently it can only work from raw pixel values. To overcome this, Tony and his team caption images to train the algorithm to describe what it sees, and from those descriptions they generate code that attempts to describe the picture.

By using a pretrained convolutional neural network (CNN) you can train an algorithm to recognise images using word vectors. The problem? There are no pretrained word embeddings for computer languages or GUI datasets. To overcome this, ‘we designed a domain-specific language to describe the user interface’ - Tony went on to explain that they limited the language’s complexity by narrowing it down to a small vocabulary to reduce the search space and allow efficient data synthesis - ‘the very simple architecture means it’s easy for DL to be implemented, understood and trained.’ UIzard combine the visual and language features in a decoder to generate a probability distribution over the next token; Tony said ‘we input a sequence of tokens in the context and repeat until the network generates the end token and therefore has created a whole code file.’ Unfortunately, as this model was created in English it’s not overly transferable and can’t be used on many platforms, so this is something UIzard are working to improve.
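To make the idea more concrete, the sketch below shows a pix2code-style architecture in outline: a CNN encodes the GUI screenshot, an LSTM encodes the DSL tokens generated so far, and the two are combined to give a probability distribution over the next token. The layer sizes, image resolution and DSL vocabulary are illustrative placeholders, not UIzard’s actual configuration.

```python
# A minimal sketch of a pix2code-style encoder-decoder, assuming a tiny DSL
# vocabulary (e.g. "stack", "row", "btn", "<START>", "<END>") rather than
# UIzard's actual token set. Layer sizes and image resolution are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20      # a small DSL vocabulary keeps the search space small
SEQ_LEN = 48         # context window of previously generated tokens

# Vision encoder: a CNN turns the GUI screenshot into a feature vector.
image_in = layers.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
image_feat = layers.Dense(128, activation="relu")(layers.Flatten()(x))

# Language encoder: an LSTM summarises the DSL tokens generated so far.
tokens_in = layers.Input(shape=(SEQ_LEN,), dtype="int32")
t = layers.Embedding(VOCAB_SIZE, 64)(tokens_in)
token_feat = layers.LSTM(128)(t)

# Decoder: visual + language features give a distribution over the next token.
merged = layers.concatenate([image_feat, token_feat])
next_token = layers.Dense(VOCAB_SIZE, activation="softmax")(merged)

model = tf.keras.Model([image_in, tokens_in], next_token)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Inference: feed the screenshot plus the running token context, sample the
# next token, append it, and repeat until the <END> token is produced,
# yielding a whole code file for the target platform.
```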

Tony was asked whether he thought that the idea could be extended to include consumer websites, and we heard that ‘the long term vision is that you could have a designer hand draw the user interface with the owner - it would be the next step to generalise to hand drawn components.’

The user interface has been successfully tested on iOS, and an Android version is currently in progress, although they’re having problems creating all the buttons at the moment! This is very early work, but it’s a proof of concept that DL can be applied to this area, and UIzard will be working to improve it and bring a successful product to market.

Christopher Bonnet, Senior Machine Learning Researcher, alpha-i

Christopher and his team are looking at ML challenges through ‘the eyes of the probabilistic framework of Bayesian statistics thus transforming it into a statistical inference problem thereby increasing its predictive power,’ whilst also ‘leveraging DL methodologies to deliver accurate time series forecasts with their uncertainties.’ ‘Deep learning is very data hungry,’ and Christopher explained how they’re working with noisy datasets. The problem is that a standard model has limitations - it never knows what it doesn’t know. For example, if it’s only trained on cars and dogs but is then shown a human, it will classify the human as either a car or a dog; it won’t recognise that it’s something outside of its knowledge. For the machine to be able to identify a human, it needs to be trained by hand, and multiple decisions go into this, such as how to design the system.

The solution? A Bayesian DL system:

Distribution of network weights -> model uncertainty -> prediction uncertainty
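In textbook terms, that chain is the Bayesian predictive distribution: uncertainty over the network weights w, given the training data D, is integrated out to give uncertainty over the prediction y for a new input x. This is the standard formulation rather than alpha-i’s specific model:

```latex
p(y \mid x, \mathcal{D}) = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, \mathrm{d}w
```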

A Bayesian neural network has high uncertainty outside of its data range, which is why it’s used in applications where reliability is critical, such as autonomous vehicles, where you want to be more risk averse. Compared to traditional ML models, where there is no uncertainty estimate and you never know when the system is unsure, the Bayesian system tells you ‘I don’t know what’s going on here - I need more data’. Having an awareness that the machine doesn’t know what’s going on is a huge help, and although the output of a BNN may be the same, the model is more self-aware and also reports how large its prediction error is likely to be.
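One common, lightweight way to approximate this behaviour is Monte Carlo dropout: keep dropout switched on at prediction time and read the spread of repeated stochastic forward passes as an uncertainty estimate. The sketch below is a generic illustration of that idea, not alpha-i’s implementation; the architecture and data are made up.

```python
# A generic illustration of prediction uncertainty via Monte Carlo dropout,
# one common approximation to a Bayesian neural network. This is not
# alpha-i's implementation; the architecture and data here are made up.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(10,))
h = layers.Dense(64, activation="relu")(inputs)
h = layers.Dropout(0.2)(h)
h = layers.Dense(64, activation="relu")(h)
h = layers.Dropout(0.2)(h)
outputs = layers.Dense(1)(h)
model = tf.keras.Model(inputs, outputs)
# (Train the model on your time series features/targets here.)

def predict_with_uncertainty(model, x, n_samples=100):
    """Repeat stochastic forward passes (dropout left on) and return mean/std."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x_new = np.random.randn(5, 10).astype("float32")
mean, std = predict_with_uncertainty(model, x_new)
# A large std flags inputs far from the training data: the network is in
# effect saying "I don't know what's going on here - I need more data."
```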

In financial markets, this is being implemented to predict stock market movements and returns: if more reliable predictions were used there would be fewer opportunities, but each would be taken with higher certainty about the movement, and the risks being taken would be far better informed. In terms of prediction accuracy, the model Christopher is working with outputs a probability distribution, which is then applied to and compared with the distribution of past data - a regression model.

Antoine Amann, CEO & Founder, Echobox


Social media management is a time consuming yet important part of the publishing world. When Antoine worked at The Guardian, he realised that the available content management tools were underdeveloped and inefficient, causing a huge time drain on staff who would be better served spending their time on other tasks. ‘Artificial intelligence is set to revolutionise social media marketing and fundamentally change how content creators interact with data.’ There are plenty of platforms that allow you to schedule posts, but the insights they offer are often vague and unactionable.

Echobox takes social media management one step further by using DL to understand the meaning of content. Antoine explained how the model ‘reads through all sentences in an article and adds up every single word to pick the best sentence to be used as a title in a social post to generate the most reach’. Rather than the journalist having to sift through the content, Echobox does all the work. Antoine asked the room whether attendees thought they would be able to distinguish the machine-generated strapline from the human-selected one - examples were shown, and the two were indistinguishable. Currently, The Guardian, Le Monde and New Scientist are using Echobox to alleviate their social media stresses.
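As a toy illustration of the ‘adds up every single word’ idea, the sketch below scores each sentence by summing per-word scores and picks the highest-scoring sentence as the social post title. The word scores are invented for the example; Echobox’s real model learns these signals from engagement data and is far more sophisticated.

```python
# A toy illustration of the "adds up every single word" idea: score each
# sentence by its words and pick the best one as the social post title.
# The per-word scores below are invented; Echobox's model learns word-level
# signals from real engagement data.
import re

word_scores = {"exclusive": 2.5, "revealed": 2.0, "election": 1.8,
               "results": 1.2, "dramatic": 1.5, "the": 0.0, "a": 0.0}

def best_title(article_text: str) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", article_text.strip())
    def score(sentence: str) -> float:
        words = re.findall(r"[a-z']+", sentence.lower())
        # Normalise by length so long sentences don't win by default.
        return sum(word_scores.get(w, 0.1) for w in words) / max(len(words), 1)
    return max(sentences, key=score)

article = ("The council met on Tuesday. Exclusive results of the election "
           "revealed a dramatic swing. Turnout figures will follow.")
print(best_title(article))  # -> the 'Exclusive results ...' sentence
```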

But how does it work?

  • Uses comprehensive public and private data - any algorithm you build is only ever as good as the dataset you have
  • Predicts virality with DL and neural networks
  • Predicts timing with genetic algorithms - posting times are treated minute by minute: Echobox analyses all the content clients choose to share, tests candidate times with a genetic algorithm, and concludes which days and times generate the most reach (see the sketch after this list)
  • Understands the meaning of content and audience - it analyses the keywords, titles and content to predict what type of article it is, e.g. breaking news, rather than relying on a human to define it
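The genetic-algorithm step can be sketched as follows: treat each candidate posting minute as an individual, score it against historical reach, and evolve the population via selection, crossover and mutation. The fitness curve below is synthetic; Echobox’s real system evaluates candidates against clients’ actual engagement data.

```python
# A minimal sketch of the genetic-algorithm idea behind timing prediction:
# evolve a population of candidate posting minutes towards the ones that
# historically generated the most reach. The fitness curve is synthetic;
# Echobox's real system scores candidates against clients' engagement data.
import math
import random

MINUTES_PER_DAY = 24 * 60

def fitness(minute: int) -> float:
    # Stand-in for "reach generated when posting at this minute of the day":
    # a fake curve that peaks mid-morning and again in the early evening.
    return (math.exp(-((minute - 540) / 120) ** 2)
            + 0.8 * math.exp(-((minute - 1080) / 150) ** 2))

def evolve(pop_size=50, generations=40, mutation=30):
    population = [random.randrange(MINUTES_PER_DAY) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]                  # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                               # crossover
            child = (child + random.randint(-mutation, mutation)) % MINUTES_PER_DAY
            children.append(child)                             # mutation
        population = parents + children
    best = max(population, key=fitness)
    return divmod(best, 60)                                    # (hour, minute)

print("Suggested posting time: %02d:%02d" % evolve())
```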

Each month, Echobox posts generate over 10,000,000,000 impressions for users all over the world, and the system works in over 20 languages, including Hebrew and Persian. The results are very impressive: clients see a 57% average increase in traffic, and last year alone this translated into a £13,000,000 saving for publishers.

Echobox was used to predict traffic around the French election, and Antoine was asked for his take on the ethics of this. He explained that ‘the reason the French election traffic worked that well was because we had lots of French clients - the more data, the easier it is to predict. A large sample size of the population.’ Echobox only shares the content of the client - it’s written and vetted for the client’s audience. The company started 4 years ago, has successfully completed 2 rounds of funding and is now a team of just under 30, so it will be interesting to see how its growth and impact continue.

Ed Newton-Rex, Founder & CEO, Jukedeck

In the run up to the Deep Learning Summit, we spoke with Ed who discussed his machine learning driven music composition platform that can create original music using AI.

Machine-made music has been the subject of speculation for a very long time: “Machines might compose elaborate and scientific pieces of music of any degree of complexity” - Ada Lovelace, 1843.

There’s been a lot of apprehension around machines and creativity, with concern raised over art losing its emotion, but computer-composed music isn’t a new idea. In the 1950s the first attempts were made to build rule-based systems to compose music, followed by Markov chains and evolutionary algorithms. Systems were trained to take in short sequences of chords and to predict which chords or sequences should follow.

Jukedeck have taken this one step further. Ed demonstrated how the model is trained on Bach chorales - essentially hymn tunes, 371 pieces written in 4 distinct parts in a consistent style - which made them a sensible corpus to train the machine on. The neural network can then pick up patterns that are true to Bach and recreate them in the same style, breaking the music down by structure and learning elements such as the circle of fifths.
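In spirit, this is next-token prediction over musical symbols. The sketch below shows the general pattern - an LSTM trained to predict the next chord or note token from the preceding context - using placeholder data; it is not Jukedeck’s pipeline or the real chorale corpus.

```python
# A minimal sketch of the general pattern Ed described: a recurrent network
# trained to predict the next musical token (a chord or note symbol) from
# the preceding context. The vocabulary, corpus and shapes are placeholders,
# not Jukedeck's pipeline or the 371 Bach chorales themselves.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 100   # hypothetical number of distinct chord/note tokens
SEQ_LEN = 32       # how much musical context the network sees at once

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),
    layers.LSTM(256),
    layers.Dense(VOCAB_SIZE, activation="softmax"),  # distribution over next token
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy data standing in for tokenised chorales: context windows and targets.
X = np.random.randint(0, VOCAB_SIZE, size=(1000, SEQ_LEN))
y = np.random.randint(0, VOCAB_SIZE, size=(1000,))
model.fit(X, y, epochs=1, verbose=0)

# Generation: sample the predicted next token, append it, slide the window
# forward and repeat - the improvise-as-you-go pattern that makes long-term
# structure hard, as Ed noted.
context = X[:1]
probs = model.predict(context, verbose=0)[0].astype("float64")
probs /= probs.sum()
next_token = int(np.random.choice(VOCAB_SIZE, p=probs))
```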

Ed then showed us how audio synthesis has been created where both the score and the sound are generated by neural networks. This, however, poses several challenges, both practical and musical. There is an issue with long-term memory: it’s easier for neural networks to improvise than to remember and build on old ideas. Additionally, with music, ‘how do we actually judge what’s good? Your idea is different to mine - it’s all subjective in music.’ This leads us back to the emotional aspect of music - machines can’t create emotion, but will they be able to understand it? Perhaps this is a challenge that’s yet to be faced. And it’s not only emotion that’s required: when you think about, say, three contrasting pieces of music, they’re not just different because of their chord sequences - think of the influences of culture, environment and personal circumstances that shape a composition.

Once Jukedeck are able to create a system that has an element of creativity, rather than one that simply weighs up a variety of factors, the outcome for their clients will be a ‘personal composer in their pocket to fit their mood, taste, and calendar.’ As soon as you have an AI that understands music and art, you can use these tools to engage people in learning and creating music.

The startup ecosystem is booming, and next week in Montreal we’ll be hearing from several startups who are thriving in the ‘Silicon Valley of AI’. There are fewer than 50 tickets remaining, so register now to guarantee your place at the summit and learn about cutting-edge AI advancements as well as business applications.