Back in San Francisco for the 5th annual Deep Learning Summit, the first day has just wrapped up, with attendees at our networking drinks session discussing the presentations, workshops and other sessions they’ve attended throughout the day.

“I've been coming to the summits for 5 years now and it's always good quality. It's getting bigger every year and there are always some great talks. Ian's presentation was my favourite, but all speakers have been really excellent.” - Raj Neervannan, Alphasense

This year, for the first time, the Deep Learning and AI Assistant tracks were joined by 8 new stages: Education & AI, Industry Applications, Connect, Environment & Sustainability, Ethics & Social Responsibility, Futurescaping, Investors & Startups, and Technical Labs. Across these stages, we’ve heard how AI is disrupting countless industries, and explored themes such as encouraging the next generation of talent into technical careers and minimizing bias in AI.

Last night, we opened the doors of the Hyatt Regency for attendees to collect their passes and kick off the networking early. Nina D’Amato from the San Francisco Department of Technology welcomed attendees to the city and explained how “the department is an enterprise information and technology services organization that supports approximately 35,000 employees and 56 departments of the City and County of San Francisco.” Knowing the summit is held in a progressive and supportive location for AI and technology is one of the reasons the event is growing year on year, and welcoming 800 attendees and 100 speakers (our biggest summit to date) was a real testament to this.

On the Deep Learning stage, Anirudh Koul from Aira welcomed everyone and set the bar high, introducing some of the experts joining us from Google Brain, OpenAI and Uber AI Labs, amongst many other global leaders.

‘When will AI steal our jobs, when will AI be better than us, when will AI kill us? We hear this all the time, but this won’t happen! It can make us more productive, it can improve our abilities, and it should be our intention to design and use it for good, to help us do more! The best way to predict the future is to invent it.’ Anirudh Koul, Aira

On the AI Assistant stage, Gokhan Tur, Director of Conversational AI at Uber AI Labs, spoke about his work in human/machine conversational language understanding systems. He explained how, 'whilst conversational AI has always been the holy grail task for many scientists since the last several decades’, there have been tremendous advancements from the archaic language understanding systems used in the 80s to the AI assistants we have today. Gokhan presented ‘an overview of modern industrial and academic goal-oriented conversational systems, showing directions in our quest to building ultimate AI machines talking to humans.’

“Since Siri, the spectrum of assistants is getting bigger and bigger. It is hard to talk about language understanding because there is no one definition. People come to this field from different backgrounds (e.g. NLP, semantics) to create conversational AI assistants.”

A key focus of this summit is encouraging the next generation of experts into AI careers whilst ensuring that these budding experts are given equal opportunities regardless of their gender or background. On the Education and AI stage, we heard from Bonnie Li, 17-year-old Research Intern & Machine Learning Developer at The Knowledge Society. Bonnie told us that she is ‘passionate about pushing the current boundaries of the field’, and hopes that her work (currently under Yoshua Bengio) will ‘help lead us closer to artificial general intelligence’.

"Advancements in education and AI are very helpful, they have allowed me, a 17-year-old, to get into the field and do many experiments! You would think that we are well on the way to AI general intelligence with deep reinforcement learning (in popular thinking) however it is important to note still the inability of machines to adapt and generalise.”

Bonnie also joined us for a short video interview where she discussed her journey into AI as well as her aspirations for the future. The video will be made available after the summit on the RE•WORK video platform.

Returning to a RE•WORK summit to share his expert knowledge for the 5th year running, Ian Goodfellow, creator of GANs and Staff Research Scientist at Google Brain, presented his most recent research in adversarial machine learning. Ian shared how ‘machine learning algorithms are based on optimization: given a cost function, the algorithm adapts the parameters to reduce the cost. Adversarial machine learning is instead based on game theory: multiple "players" compete to each reduce their own cost, often at the expense of other players’. He explained how adversarial machine learning is related to many active machine learning research areas:

‘In the last 4 years we’ve come a long way - we’re now able to make incredible images and generate these completely from scratch using GANs. For example, we can convert an image of ‘driving in the daytime’ to ‘driving in the nighttime’ without any labels or supervision. If we had to train this on real-world instances, we’d have to get a car to come back to the same location at night to train it in the same location in a different environment. This is a very long process! By using GANs we can train it to simulate an image from other scenarios it’s learned.’
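Ian’s framing of adversarial ML as a game can be made concrete with a toy sketch of the two competing GAN costs. This is purely illustrative (the scores below are made-up discriminator outputs, not a real model): each player’s loss falls at the other player’s expense.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real images scored near 1, fakes near 0
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants its fakes scored as real, i.e. d_fake near 1
    return -np.mean(np.log(d_fake))

# Toy discriminator scores: early in training fakes are easy to spot,
# later the generator fools the discriminator about half the time
d_real = np.full(4, 0.9)
d_fake_early = np.full(4, 0.1)
d_fake_late = np.full(4, 0.5)

# As the generator improves, its cost falls...
assert generator_loss(d_fake_late) < generator_loss(d_fake_early)
# ...at the discriminator's expense: its cost rises
assert discriminator_loss(d_real, d_fake_late) > discriminator_loss(d_real, d_fake_early)
```

In the day-to-night example, the generator is an image-to-image network and the discriminator only judges whether a night image looks real, which is why no paired labels are needed.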

Ian also spoke about the industry applications of adversarial machine learning and mentioned creating real-world objects for use in dentistry (exact replica false teeth etc.), as well as the potential for its use in neuroscience.

Industry applications of AI are something that companies of all shapes and sizes are keen to explore, and in this morning’s sessions we heard from the likes of Walmart Labs, GE and Dropbox amongst others, who spoke about some of the breakthrough challenges in industry:

  • Prakhar Mehrotra, Walmart Labs: “There are experts at Walmart who know how retail works inside out, so I think we’re a long way from end to end AI run stores because it will take a long time for machines to have the knowledge that the experts have here. In retail, everything is long term - we’ve already planned 2019 Christmas, so imagine building a model for this that captures our long term goals which makes forecasting challenging.”
  • Ashish Bansal, Twitter: “With collaborative filtering, at Twitter we have a unique advantage due to user followers - it’s easier to detect user similarity, because the user tells us. Content-based filtering is much harder at Twitter scale - the scope for characters is small and you have 46 words max per tweet, and they can also be multilingual.”
  • Vivek Thakral, GE: ‘What business problem do we want to solve? Cash problem - our customers aren't paying us on time. Admin problems - we have invoices with misspelled customer names, incorrect shipping or billing addresses. These are real problems we want to solve, so we’re working with SAP and Oracle to try and overcome these.’
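Ashish’s point about the follow graph can be illustrated with a tiny numpy sketch: represent each user as a binary vector over the accounts they follow, and similarity falls out of a simple cosine. The follow matrix here is entirely made up for illustration.

```python
import numpy as np

# Hypothetical follow matrix: rows = users, columns = accounts,
# 1 means the user follows that account. Users state their interests
# directly by following, which is what makes similarity easy to detect.
follows = np.array([
    [1, 1, 0, 1, 0],   # user A
    [1, 1, 0, 0, 0],   # user B: follows mostly the same accounts as A
    [0, 0, 1, 0, 1],   # user C: follows different accounts
], dtype=float)

def cosine_sim(u, v):
    # Cosine similarity between two follow vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A and B overlap heavily; A and C not at all
assert cosine_sim(follows[0], follows[1]) > cosine_sim(follows[0], follows[2])
```

Content-based similarity, by contrast, would have to be inferred from short, possibly multilingual tweet text, which is the harder problem Ashish describes.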

Whilst presentations and workshops were taking place, up in the Interactive Corner attendees were invited to share their opinions on some key questions facing society regarding how AI and technology can help solve some of these challenges.

During the lunch, coffee breaks and networking sessions, it was great to hear from attendees which sessions they’d been enjoying, as well as what they thought about the summit so far:

  • “My favourite parts of the day so far have been Jeff Clune's talk as well as Ian's. I've also been able to catch up with some old friends, which has been great.” Aravind Srinivas, UC Berkeley
  • “A very well organised event, and going very well. Some very interesting people here! I have a talk later focussed around NLP, so I'm looking forward to that.” Rahul, Alexa AI
  • “I work for a Japanese company and have come to learn more about machine learning and report back. I have met some very interesting startups and will be looking at potentially working with some going forward.” Junpei Ishikawa, NTT Comware

What else did we learn today?

Deep Learning Stage

Tejas Kulkarni, DeepMind

“Why are infants better at sensory-motor tasks than our current AI systems?” was the question Tejas opened with. He spoke about object-oriented perception and control, and presented unsupervised approaches integrating deep reinforcement learning and probabilistic programming to learn about objects and goal-directed control grounded in them.

How do we model visual objects? There are three components: instance segmentation, 3D interpretation, and recognition. Object detection lets us know there’s an object, and instance segmentation looks at the 3D components of the image to mark what the object might be. What we’re looking at is, ‘can we start modelling instances without any labels?’

Jeff Clune, Uber AI Labs

Jeff spoke about Go-Explore, A New Type of Algorithm for Hard-exploration Problems.

@MikeDShepherd:
@jeffclune does a phenomenal job at #ReworkDL today explaining how Go-Explore jumps leaps and bounds ahead of existing RL methods today. Great work at Uber AI Labs and http://EvolvingAI.org

Ilya Sutskever, OpenAI

As co-founder and chief scientist at OpenAI, Ilya spoke about pure Reinforcement Learning and how it has been applied only to simple games and simple robotics.

Yixuan Li, Facebook AI

At Facebook, hundreds of millions of users interact with billions of pieces of visual content every day. By understanding what's in an image, its systems can help connect users with the things that matter most to them. Speaking about Advancing State-of-the-art Image Recognition with Deep Learning on Hashtags, Yixuan said:

“I can’t put an exact number on how many hashtags are used on Instagram every day, but it’s in the hundreds of millions. There are noisy hashtags such as #love which doesn’t define an object, so we face challenges here”.

AI Assistant Stage

Cathy Pearl, Google

Joining us for the fourth year in a row, Cathy has moved into a new role as Head of Conversation Design Outreach in the last year. Raising the question ‘why aren’t our AI assistants smarter?’, Cathy explored the challenges of building these assistants and looked at what the future might bring:

@deborah_who
Cathy Pearl killing it here at #reworkDL. MYTH: Until we have AI, our VAs will seem dumb. REALITY: Many issues are solvable with ✨good design✨ & a bit of coding 👏👏👏 @cpearl42 @reworkdl

Rushin Shah, Facebook

Working on Dialog Management with a Software 2.0 Stack at Facebook, Rushin is ‘quite excited about the application of reinforcement learning techniques to train conversational agents’. His talk presented an approach to reliably scale to the long tail of conversational interactions using a combination of software development and machine learning.

‘Traditional NLU is limited. We need to go beyond intents and slots! First we devise a representation, then we ask what the right modelling approach is to actually predict these representations.’

Deborah Harrison, Microsoft

Conversations have extraordinary power. They can harm or heal, obfuscate or enlighten, generate confusion or clarity. As well as sharing her work as Sr. Conversational UI Design Manager in her presentation, Deborah joined us for an episode of the Women in AI podcast, where she spoke about her career in AI and how she started out.

Cathy Pearl, fellow speaker at the summit, praised Deborah’s work:
@cpearl42:  Don't trap your customer support bot users. Include "I don't know" as an option, and give them the chance to speak to an agent. & Don't be arrogant--your team doesn't represent ALL views. @deborah_who #reworkAI #reworkDL

Connect Stage

Thomas Simonini, Deep Reinforcement Learning Course

In one of the most highly anticipated workshops of the day, Thomas explained how Curiosity Driven Learning is one of the most exciting and promising strategies in deep reinforcement learning. It involves ‘creating agents that are able to produce rewards and learn from them. In this workshop, you’ll learn what curiosity is, how it works, and understand the process of how an agent generates this intrinsic reward, using a trained agent in a video game environment.’

‘Agents struggle to adapt to new levels in games. They are far, far more effective in familiar states, such as in Super Mario Bros.’
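The intrinsic-reward idea behind curiosity-driven learning can be sketched in a few lines of numpy: the agent is rewarded by the prediction error of a forward model of the environment, so transitions it has already learned to predict stop paying out. The linear model and all names here are illustrative assumptions, not Thomas’s actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))  # toy linear forward model

def intrinsic_reward(state, next_state):
    # Curiosity reward = how badly the agent predicts what happens next
    predicted = state @ W
    return float(np.mean((predicted - next_state) ** 2))

def train_forward_model(state, next_state, lr=0.05):
    # One gradient step reducing the forward model's prediction error
    global W
    error = state @ W - next_state
    W -= lr * 2.0 / len(state) * np.outer(state, error)

state = rng.normal(size=4)
next_state = rng.normal(size=4)

before = intrinsic_reward(state, next_state)
for _ in range(100):
    train_forward_model(state, next_state)
after = intrinsic_reward(state, next_state)

# A familiar transition becomes predictable, so its reward shrinks;
# a novel state (e.g. a new game level) would still surprise the model
assert after < before
```

This is why agents in familiar states explore less: the curiosity signal there has been trained away, while unseen levels still generate large prediction errors.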

Panel: How do we Encourage Women to Join Careers in STEM and What are the Next Steps?

Dora Jambor, Machine Learning Engineer at Shopify and Research Scientist at Google Brain/Cornell University; Catherine Lu, Principal at Spike Ventures; and Alicia Kavelaars of OffWorld joined a session to explore the importance of diversity and a female presence in STEM:

“I think it's very important that when you want to pursue something including STEM you just need to pursue it and know that you're going to achieve it.” Alicia Kavelaars, OffWorld

“When I used to go to Math Club at School, I would be the only girl there. It is really great that this is slowly changing.” Catherine Lu, Spike Ventures

Environment & Sustainability Stage

Hassan Murad, Intuitive AI

Intuitive’s vision is to create a zero-waste world by developing intelligence that helps buildings and spaces track their waste in real time and nudges users to sort their waste correctly. “Oscar is designed to have an enormous social impact by targeting the issue of public empathy when it comes to recycling at the source and engaging users by nudging them to separate their waste items at the point of disposal.”

David Kriegman, UCSD

Research from the UCSD Computer Vision for Coral Ecology Project has led to new cameras, algorithms, software, and services. He spoke about the importance of coral reefs and how AI can help conserve the oceans: “They are incredibly biodiverse, they’re the rainforest of the underwater world. They only cover 0.1% of the ocean but house 25% of species. They also protect coastlines from storms.”

Tomorrow we’re excited to see the Ethics & Social Responsibility Stage, the Technical Labs Stage, the Investors and Startups Stage, and the Futurescaping Stage catalysing some really exciting topics.