With billions of dollars being invested in AI, we're routinely finding these technologies in our everyday lives: in smartphones, cars, household appliances, and our finances. At the Machine Intelligence Summit in Amsterdam today we heard from industry experts and leading minds in machine intelligence, who discussed how machines now have 'abilities we never expected' and can learn through experience, much the way humans do. In the long run, machines, like their human creators, will be able to think for themselves.

@maria_axente:  Excited to learn the latest developments in #machinelearning @reworkMI #reworkMI

@katramadosi: Great to be back at a @reworkAI event. Looking forward to the next two days. @DFKI goes first #reworkMI

'Many people are starting to ask what a world with intelligent computers will look like,' and today's Summit invited us to delve into specific advances across a variety of fields applying machine intelligence to their work.

There's a hypothesis, and sometimes a fear, that machines are going to become hyper-intelligent and make many human roles redundant across multiple industries, but as Steve Wozniak from Apple mentioned, ‘that is so far off it’s an unrealistic fear at this stage. It would take decades and decades.’ This, however, doesn’t mean that companies aren’t pushing to advance their techniques to resemble human intelligence. DFKI, for example, are currently exploring the latent visual space between adjectives with adversarial networks, an area which even humans struggle with.


Damian Borth, DFKI, Exploring the Latent Visual Space Between Adjectives With Generative Adversarial Networks

Damian Borth from DFKI explained how image and video classification is often driven by textual context, which can be misleading. DFKI are overcoming this by learning to understand the adjectives in an image and labelling them with the correct and relevant sentiment. ‘These concepts need to be linked to emotion’, so that once you detect a selection of adjectives, you can infer the sentiment, e.g. ‘beautiful sky, great view’ = positive sentiment, and ‘derelict building, violent seas’ = negative sentiment.

To capture transitions between these in videos and across images, DFKI are using GANs to define the classifications independently of the nouns, and then training them with deep learning. Initially they used a ‘non-deep learning model which delivered a result of 0.66, but after employing an end-to-end deep learning approach a result of 0.84 was delivered’. The additional layers and increased analysis delivered far superior results.

The challenge and goal for DFKI is to identify whether it’s possible to learn the parameters of the original distribution given the observed datasets, and GANs are part of the solution. They are composed of a generator and a discriminator: ‘the generator learns that, for example, in a bedroom you need certain items such as a bed, a wardrobe etc., and through this training the machine is able to generate a realistic-looking bedroom’. A GAN is trained for each type of image, and every individual noise vector generates one specific synthetic image. DFKI are then able to define new operators as combinations, e.g. ‘give me more cloudy sky’, or construct a transition between ‘calm sky’ and ‘pretty sky’. To make this transition, you need to detect the adjectives associated with each image to connect the visual space with natural language. You want a transition of adjectives, but in this case it shouldn’t be given by linguistics, rather by the data and the GAN.

After finishing his discussion, the floor was opened to questions, where we heard more about the sensitivity and errors in GANs, and how DFKI are overcoming these through their structured approach and testing.
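To make the idea of ‘walking’ the latent space concrete, here is a minimal sketch in PyTorch: a toy generator and randomly drawn noise vectors stand in for DFKI's trained models, and linear interpolation between two latent vectors produces the adjective-to-adjective transition described above.

```python
# Minimal sketch of latent-space interpolation between two adjective-styled
# images. The generator architecture, sizes, and noise vectors are invented
# stand-ins, not DFKI's actual models.
import torch

class Generator(torch.nn.Module):
    """Toy DCGAN-style generator: 100-d noise vector -> 16x16 RGB image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(128, 64, 4, 2, 1), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(64, 3, 4, 2, 1), torch.nn.Tanh(),
        )

    def forward(self, z):
        # Reshape the flat noise vector to (batch, z_dim, 1, 1) for the convs.
        return self.net(z.view(z.size(0), -1, 1, 1))

G = Generator()  # in practice, weights come from a GAN trained per image type

# Noise vectors that (after training) would render a 'calm sky' and a 'stormy sky'.
z_calm, z_stormy = torch.randn(1, 100), torch.randn(1, 100)

# Linear interpolation in latent space: each intermediate vector yields one
# synthetic image, and the sequence forms the transition between adjectives.
for alpha in torch.linspace(0.0, 1.0, steps=8):
    z = (1 - alpha) * z_calm + alpha * z_stormy
    image = G(z)  # tensor of shape (1, 3, 16, 16)
```

The key point is that the transition is driven by the data and the GAN's latent geometry, not by a linguistic rule connecting the two adjectives.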

Aleksandr Chuklin, Google Research Europe, Augmenting Search Quality Ratings with Logs Data

Google are constantly working on optimising and improving their search results, and Aleksandr Chuklin, Software Engineer at Google Research Europe, spoke about the machine learning model they are currently working with to leverage log data to build more accurate rater-based quality metrics. After giving a brief overview of how search results are currently delivered, and joking that we ‘all know search engines, I hope!’, he explained how incredibly important it is for Google to identify whether their users are happy with their search results. Google ‘have a whole lot of logs, but don’t know how to interpret them.’ This log data is easily collectable at scale, so Google combine it with a limited-sized set of raters’ labels to estimate user satisfaction for previously unseen document layouts at no additional cost. Once this has been done, they can ‘run it, train the model, and then reuse it over and over again. After the model is trained we can experiment with other layouts at zero cost.’ It’s also important for them to identify where users’ attention lies on a page in order to optimise the layout of their most valuable results, so they use hotspots signalling where cursors spend the majority of their time to place results in the best possible position.
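As a rough illustration of that idea (this is not Google's actual pipeline; the feature names and data below are invented), one can train a regressor on the small rater-labelled slice of the logs and then apply it to the unlabelled bulk at no extra labelling cost:

```python
# Hedged sketch: learn a mapping from cheap, large-scale log signals to
# expensive human rater judgements, then reuse the trained model to score
# new sessions and layouts "at zero cost".
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Log-derived features per session: e.g. [clicked_top_result, dwell_seconds,
# scroll_depth, reformulated_query] -- all cheap to collect at scale.
X_logs = rng.random((500, 4))

# A small rater-labelled subset: satisfaction grade in [0, 1].
X_rated, y_rated = X_logs[:100], rng.random(100)

model = GradientBoostingRegressor().fit(X_rated, y_rated)

# Estimate satisfaction for the unseen, unlabelled majority of the logs.
estimated_satisfaction = model.predict(X_logs[100:])
print(estimated_satisfaction.mean())
```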

‘Wouldn’t it be great to know where the users’ attention was going to be? Before showing a page to a user, we want to predict where their attention will lie, and as a result of that put the results in the optimum position.’
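A toy version of that prediction step might look like the following, with made-up per-slot layout features standing in for whatever signals the real system uses:

```python
# Illustrative sketch only: predict, per result slot, the probability that
# the cursor "hotspot" lands there, so high-value results can be placed
# where attention is expected. Features and the toy pattern are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Per-slot layout features: [vertical_position, has_thumbnail, snippet_length].
X_slots = rng.random((1000, 3))
# Historical hotspot data: 1 if the cursor dwelt on this slot, else 0.
y_hotspot = (X_slots[:, 0] < 0.3).astype(int)  # toy pattern: attention skews to the top

clf = LogisticRegression().fit(X_slots, y_hotspot)

# For a candidate layout, rank slots by predicted attention before rendering.
candidate_layout = rng.random((10, 3))
attention = clf.predict_proba(candidate_layout)[:, 1]
best_slot = int(np.argmax(attention))
```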

Google’s search team is constantly pushing forward: they’re currently working on a session-level model that spans multiple queries, and on leveraging touch and scroll signals from mobile devices, going beyond search.


Sven Behnke, University of Bonn, Learning Semantic Environment Perception for Cognitive Robots

We next heard from Sven Behnke from the University of Bonn, who is currently working on semantic environment perception for cognitive robots; he explained the importance of robots understanding their environment in order to act in a goal-directed manner. To move around in the 3D world, robots need ‘a 3D laser scanner, a dual wide-angle stereo camera, ultrasound, and plenty of other features’ that allow them to understand the semantics of the environment. Sven spoke about the detection and recognition of objects and showed a short video of an exploration robot performing tasks in a simulated Mars environment, using deep learning methods to manage the terrain and identify the best routes by categorising surfaces as well as detecting objects and obstacles. Using a combination of deep learning, random forests, and transfer learning, they are able to create 3D semantic maps of the environment. These robots can operate as solutions to real-world problems: working as domestic assistants, exploring space, and assisting in search and rescue.
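As one hedged illustration of how the random-forest ingredient might fit in (the features and terrain classes below are invented for the example), a classifier can label terrain patches from laser-scan features before they are fused into a 3D semantic map:

```python
# Sketch of terrain categorisation with a random forest, one component of a
# semantic-mapping pipeline. Feature and class definitions are illustrative
# assumptions, not the University of Bonn's actual system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Per-patch features from the 3D laser scan: e.g. [mean_height,
# height_variance, surface_normal_z, local_roughness].
X_patches = rng.random((800, 4))
# Toy labels: 0 = traversable, 1 = rough, 2 = obstacle.
y_labels = rng.integers(0, 3, size=800)

forest = RandomForestClassifier(n_estimators=100).fit(X_patches, y_labels)

# Label new patches; a route planner can then prefer 'traversable' cells.
new_patches = rng.random((50, 4))
terrain_class = forest.predict(new_patches)
```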

AI is a constant point of discussion, but what is the ratio of hype to real progress?


Tijmen Blankevoort, Scyfer, Human in the Loop / Active Learning AI

Plenty of companies are applying AI to their industries to solve real-world problems, and this afternoon Tijmen Blankevoort, CTO of Scyfer, spoke about his work at the spin-off company of the University of Amsterdam.

‘We are working with deep learning, and all the spreadsheets that you use, we can automate that! I want to enable anyone to teach an AI to do a task.’

They specialise in bringing AI to business, advising companies worldwide on the best and most practical ways to apply AI and ML to improve performance. Tijmen explained how deep learning is deployed as a problem solver, with its simulation of human processing and intelligence driving efficiency. Scyfer promise their clients ‘the best results possible’, and it is ML that is making this goal a reality.
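The ‘human in the loop / active learning’ idea in the session title can be sketched in a few lines: the model repeatedly asks a human to label only the examples it is least certain about, so far fewer labels are needed. Everything below (data, model choice, query size) is illustrative, not Scyfer's implementation.

```python
# Uncertainty-sampling active learning loop with a simulated human oracle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_pool = rng.random((1000, 5))
true_labels = (X_pool.sum(axis=1) > 2.5).astype(int)  # stands in for the human

# Seed set: five examples of each class, labelled by the "human" up front.
labelled = list(np.where(true_labels == 0)[0][:5]) + \
           list(np.where(true_labels == 1)[0][:5])

for _ in range(5):  # five query rounds
    model = LogisticRegression().fit(X_pool[labelled], true_labels[labelled])
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)  # close to 0.5 -> model is unsure
    # Ask the human to label the 10 most uncertain not-yet-labelled points.
    candidates = [i for i in np.argsort(uncertainty) if i not in labelled]
    labelled.extend(candidates[:10])

print(f"labelled {len(labelled)} of {len(X_pool)} examples")
```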

These technologies, implemented in image classification, search engine optimisation, cognitive robots, business, and other AI products, are considered revolutionary, but for the next generation they will have always existed: AI will be their co-worker and a ubiquitous part of their lives.

Couldn’t make it to Amsterdam?

Check out our Women in Machine Intelligence Dinners which are running alongside our Global Deep Learning Summit Series throughout the Autumn and early next year:

Women in Machine Intelligence Dinner, Montreal, 11 October 2017
Women in Machine Intelligence & Healthcare Dinner, London, 15 November 2017
Women in Machine Intelligence Dinner, San Francisco, 23 January 2018