Today saw attendees joining RE•WORK in San Francisco for our much-anticipated Deep Learning Summit and AI Assistant Summit. Four years ago, in November 2014, when we hosted our first summit in San Francisco, there was a single track focusing on the Internet of Things, with 250 attendees. Each year the popularity of the event has grown, and today marked the biggest RE•WORK summit to date, with over 100 speakers, 20 exhibitors, and 750 attendees from 20+ countries.

Each year, leading minds in AI come together in San Francisco to share their latest advancements in the field, network with like-minded experts, and learn from each other to stay at the forefront of progress.

"We have been coming to the RE•WORK summits for many years, and we are seeing how the innovations are becoming apparent in enterprise." Maithili Mavinkurve, Sightline Innovation

On the Deep Learning track, Mariya Yao from TOPBOTS, our compère for the day, began by welcoming attendees and encouraging everyone to join the conversation using the #reworkAI and #reworkDL hashtags.

The first session of presentations focused on the theory and applications of deep learning, where we heard from thought leaders Yaniv Taigman, Facebook AI Research (FAIR), Keith Adams, Slack, and Ian Goodfellow, Google Brain.

Discussing personalised generative models, Yaniv explained that their ‘goal is to make human-computer interactions more personal. Examples of these personalised generative models stretch to personalised voice assistants, avatars and virtual reality.’ He explained that you could have anyone's voice on your home assistant, making it friendlier, and that you can build avatars and virtual reality experiences, with personalisation tied to an identity. So how do you train these voices? Yaniv spoke about how they use ‘in the wild’ sampling combined with a multi-speaker network, as well as intonation via priming to make the sentence structure sound more realistic.

At Slack, Keith works on embedding discrete data in a continuous, moderate-dimensional space to learn representations from many domains. He spoke about the embeddings learned from text graphs, and how human-created tags can support information retrieval, recommendations, classification and subjective human insight. Slack are using StarSpace, a new open-source supervised embedding framework, to learn representations not only of text but also of their users.
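The core trick behind supervised embedding frameworks like StarSpace can be sketched in a few lines. The toy NumPy example below is our own illustration, not Slack's code or the actual StarSpace implementation: documents and labels are embedded in the same space, a document's vector is the sum of its word vectors, and a margin ranking loss pushes each document closer to its true label than to a randomly sampled negative. The data and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labeled documents (invented for illustration).
docs = [("deploy server crash", "ops"),
        ("ship feature sprint", "product"),
        ("server outage alert", "ops"),
        ("roadmap feature launch", "product")]
words = sorted({w for text, _ in docs for w in text.split()})
labels = sorted({l for _, l in docs})
w_idx = {w: i for i, w in enumerate(words)}
l_idx = {l: i for i, l in enumerate(labels)}

dim, lr, margin = 8, 0.1, 0.5
W = rng.normal(0, 0.1, (len(words), dim))   # word embeddings
L = rng.normal(0, 0.1, (len(labels), dim))  # label embeddings in the SAME space

def embed(text):
    """A document is embedded as the sum of its word vectors."""
    ids = [w_idx[w] for w in text.split()]
    return W[ids].sum(axis=0), ids

for epoch in range(200):
    for text, label in docs:
        doc_vec, ids = embed(text)
        pos = l_idx[label]
        neg = rng.integers(len(labels))      # sample a negative label
        if neg == pos:
            continue
        # Margin ranking loss: want sim(doc, pos) > sim(doc, neg) + margin.
        loss = margin - doc_vec @ L[pos] + doc_vec @ L[neg]
        if loss > 0:
            # Pull the true label closer, push the negative away.
            L[pos] += lr * doc_vec
            L[neg] -= lr * doc_vec
            for i in ids:
                W[i] += lr * (L[pos] - L[neg])

def predict(text):
    """Classify by nearest label embedding in the shared space."""
    doc_vec, _ = embed(text)
    return labels[int(np.argmax(L @ doc_vec))]
```

Because labels live in the same space as words, the same machinery extends from text classification to embedding users, tags, or graph nodes, which is what makes the approach attractive beyond plain text.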

@keithmadams It was an honour sharing the stage with @goodfellow_ian and @taigman, and getting to meet attendees at #reworkDL this morning. For the StarSpace-curious: code … and paper

Mariya then introduced Ian Goodfellow, inventor of the Generative Adversarial Network and lead author of the MIT Press textbook Deep Learning, who was a much-anticipated speaker of the morning:

‘I’m excited to introduce Ian Goodfellow, inventor of GANs. I’m still waiting for someone to come up with the shenaniGAN, and I’m sure it’ll be one of you guys here!’

@kalpitsmehta1 Amazing talk on GANs and its use cases. Thanks Ian Goodfellow. #reworkDL

Ian explained that GANs are only one approach to generative modeling, 'but the idea I came up with is that there are two different models that compete with each other in a game setting, so one player is forced to create the sample': the generator and the discriminator. ‘Think of it as counterfeiters and the police. In the game, the discriminator network is like the police and looks at the data to see if it’s real or fake; we train the model to identify the real data.’ What’s different about GANs, Ian explained, is that ‘we train the generator to fool the discriminator. Over time, as each player gets more and more skilful, the generator is forced to make more and more realistic images.’

The results speak for themselves: ‘We can now create imaginary celebrities, and turn streets from day to night, without specific labeled examples.’
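The two-player game Ian describes can be sketched numerically. Below is a deliberately tiny, hypothetical 1-D GAN in plain NumPy, our own illustration rather than anything from the talk: a linear generator tries to map noise onto samples from N(3, 0.5), a logistic discriminator tries to tell real from fake, and the gradients of both players are written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "real" data: samples from N(3, 0.5). The generator must learn
# to map noise z ~ N(0, 1) onto this distribution.
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c):
# deliberately tiny linear "networks" so the two-player game is visible.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    x = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b                      # fake samples

    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0. ---
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    grad_w = np.mean((d_real - 1) * x) + np.mean(d_fake * g)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: fool D via the non-saturating loss -log D(G(z)). ---
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean((d_fake - 1) * w * z)
    grad_b = np.mean((d_fake - 1) * w)
    a -= lr * grad_a
    b -= lr * grad_b

samples = a * rng.normal(0.0, 1.0, 1000) + b
```

After training, the generated samples cluster around the real data's mean of 3: neither player is ever shown an explicit target, yet the adversarial pressure alone drags the generator onto the data distribution.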

Over on the AI Assistant track, we learned about building a conversational agent overnight with dialogue self-play from Pararth Shah from Google. Currently, AI assistants are developed and deployed in separate phases. First, designers, engineers and researchers create an agent using the latest tools and frameworks, and then the agent is deployed to chat with actual users. This approach creates inflexible agents that are restricted to skills that were encoded by the developers or inferred from the training data. So, how do Google quickly build and teach an agent?

‘Dialogue is a collaborative game between two agents: you have a user that has a goal, and a system agent that doesn’t know the goal. The user wants to access the APIs from the agent, and the aim is to reach a final state where the user knows their answer and the agent has provided it. When you talk about dialogue there are two levels: semantics (meaning) and surface forms; together these form a dialogue outline. E.g. the user makes their request and the agent responds with assistance.’ In his presentation, Pararth explained how they’re building an AI agent to overcome this.
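The ‘collaborative game’ Pararth describes can be mocked up with two scripted agents. The sketch below is a hypothetical toy, not Google's framework: a simulated user holds a hidden goal, the system agent asks for missing slots and calls a stub API, and the transcript of semantic-level acts is the dialogue outline that self-play generates.

```python
import random

# Hypothetical slot schema and values (invented for illustration).
SLOTS = {"cuisine": ["thai", "sushi", "tacos"], "time": ["6pm", "7pm", "8pm"]}

def book_api(cuisine, time):
    """Stub API the system agent exposes (always succeeds here)."""
    return f"Table for {cuisine} at {time} confirmed"

def self_play(rng):
    """Play one dialogue between a simulated user and the system agent,
    returning the user's goal and the outline: a list of semantic acts."""
    goal = {slot: rng.choice(vals) for slot, vals in SLOTS.items()}
    known, outline = {}, [("user", "request", "book_restaurant")]
    while set(known) != set(SLOTS):
        slot = next(s for s in SLOTS if s not in known)  # system picks a missing slot
        outline.append(("system", "ask", slot))
        known[slot] = goal[slot]                         # user answers from its goal
        outline.append(("user", "inform", f"{slot}={goal[slot]}"))
    outline.append(("system", "api_call", book_api(known["cuisine"], known["time"])))
    return goal, outline

rng = random.Random(0)
goal, outline = self_play(rng)
```

Running self-play many times with different sampled goals yields a corpus of outlines at the semantic level; surface-form utterances can then be layered on top, which is what makes the approach fast to bootstrap.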


As well as more presentations throughout the afternoon from the likes of Alok Kothari from Apple, Alison Darcy from Woebot, and Peter Carr from Disney Research, we held interviews, workshops, and startup pitching sessions.

James Vlahos from WIRED sat down with Ian Goodfellow, Yves Raimond and Justin Basillico from Netflix, Eli David from DeepInstinct, and Pararth Shah from Google to discuss their work and views on the current landscape of AI. The interviews will be available to watch on the Video Hub soon, so register now to check them out! We also held RE•WORK interviews throughout the day and heard some interesting expert insights!

'It's interesting how people view the problem of NLU in different ways, and how they're approaching it. You get new understanding from hearing how other industries are handling their issues.' Alok Kothari, Apple


We also had the chance to record some new episodes of the Women in AI podcast between today’s activities. We spoke with Alison Darcy, CEO of Woebot, a bot that’s there for you 24/7 to help manage mental health, and Cathy Pearl, who’s working at Sensely to help manage patients’ daily health check-ins.

Alison explained how the global mental health crisis is growing more rapidly than previously anticipated, and sometimes patients don’t feel comfortable talking to friends, family, or even doctors, say in the middle of the night. This is where Woebot comes in. To learn more and hear from Alison and Cathy, subscribe to the podcast here.


Over on the workshop track, Accenture were sharing their expertise on designing ethical AI systems. ‘The imperative for ethical design is clear – but how do we move from theory to practice?' In this workshop, Accenture's Responsible AI lead, Rumman Chowdhury, presented attendees with a challenge: you’re the chief data scientist, and the chief HR officer approaches you with the idea of integrating AI. So what do you have to consider? How can AI parse through thousands of resumes to find the right one? We want quality! We also need to consider employee satisfaction: how can we measure it? How do we deal with promotions and raises? Participants worked in groups to come up with solutions, considering the ethical implications of each decision.


Towards the end of the day, just before networking drinks, was the opportunity for startups to give a 5-minute product demonstration or pitch. We heard from some fantastic companies with really exciting technologies. We learned how SmartEar have built a voice-controlled, fully integrated communication platform for enterprise that provides the power of an intelligent messaging assistant in the ear, and how Dashbot increase user engagement, acquisition, and monetization through actionable bot analytics.

What do our attendees have to say so far?

Amazing how Peter Carr of @DisneyResearch shows how deep imitation learning can be applied to basketball games to enable ghosted players to anticipate movements of teammates & opposition


Oh, this is awesome.  A big challenge in reinforcement learning is providing differentiable and continuous rewards to the learning agent. Typically, rewards are provided only when a task is achieved, which results in slow learning. Meta learning=wonderful. #reworkDL

@VianBakir1 Fab talk from Prof Maja Matarić on #embodied robots for social good. Inspirational. We need more real world studies, ethnographers! #reworkAI

@cpearl42 Lionel Cordesses from @RenaultSV discussing their PoC process to connect the #AmazonEcho to the LEAF so people can ask things like "Is my LEAF charging?" and "Do I have enough miles to get to SF?" #ReworkAI


The components of natural conversation are much more complicated than just parsing or direct translation; it involves cognitive belief modeling and naturalistic dialogue management. #nlp #reworkai #speech #ai #understanding @EricSaund

'The Netflix talk was great. I was looking for someone to explain exactly how ML is used in big software, and they explained it so well. You guys are amazing; to put on an event of this magnitude and have it go without any glitches is amazing.' Nellian Solaiappan, Reinvent Inc., Deep Learning Summit Attendee

"I enjoyed Ian & Ilya's talk, because meta learning and self-play is an area not written about much or readily available; it was really interesting to learn about." Alex Smith, TheDevMasters

We’re excited to be back tomorrow with expert speakers such as Daphne Koller from Calico, Karthik Ramasamy from Uber, Amy Gershkoff from Ancestry and many more.

Couldn't make it to San Francisco but want to learn more? Sign up to receive post-event video access!