Today in San Francisco, global leaders in Artificial Intelligence and Deep Learning came together for the Deep Learning for Robotics Summit and AI in Industrial Automation Summit. Throughout the day, 300 attendees and 60 speakers engaged in presentations, workshops, networking, and demonstrations. We were joined by the likes of PepsiCo, Caterpillar, Google, Facebook, NASA and many more leading global companies.

Each RE•WORK summit welcomes a diverse audience, and today’s attendees ranged from founders and CEOs to data scientists, professors, PhD students, ethics officers and many more. During registration and breakfast, Konstantin Domnitser, Software Engineer at Risk Management Solutions, said: "I'm most interested in understanding the spectrum of machine learning applications in robotics. I'm aware of the success of deep learning for computer vision, but I'm really excited to find out what other types of work in robotics can benefit from these techniques."

We encouraged attendees to share their day on social media, and several conversations were started before the presentations got into full swing:

@pabbeel Amazing line-up at the @teamrework Deep Learning for Robotics Summit the next two days!  Including lots of @berkeley_ai: Abhishek Gupta, Gregory Kahn, Josh Tobin, Georgia Gkioxiari, @animesh_garg, @xbpeng4, @haarnoja, Michael Laskey, Aviv Tamar.

@Sirkasam Very excited to attend the AI Industrial Summit in SFO #REWORKAuto #ReworkHealth #ReworkDL . Can't wait to showcase the easy to use, super fast no code AI & IOT Platform from @NumtraLLC #Numtra #NoCode

On the Industrial Automation track, the morning kicked off by exploring the Current AI Landscape. Compere for the day Hariharan Ananthanarayanan, Robotics Engineer at the San Francisco based startup Osaro, opened the morning:

“In my research for the past decade, I’ve been focusing on motion planning for robots, trying to get them to mimic human capabilities. I’ve been in the industry for a decade, and there’s a whole range of consumer robotics coming into the space now; the whole landscape of automation has shifted to data-driven methods, and we’re now in the heat of the moment trying to impact automation in a positive way.”

The first speaker of the day, Shahmeer Mirza, Senior R&D Engineer at PepsiCo, shared his current work on machine learning in the wild and scaling automation applications from prototype to plant floor. Shahmeer opened by reflecting on a quote from Andrew Ng: “Anything a human can do with at most 1 second of thought can probably now or soon be automated by AI.” The question now becomes ‘how do we get there?’ PepsiCo is huge, and many of its assets came from acquisitions, each with different sets of data and different challenges. Shahmeer took us through the typical challenges, how PepsiCo is dealing with this data, and a framework for machine learning in industrial automation going forward.

“AI without data is like a bike without wheels - if you don’t have the data in place, the algorithms won’t get you anywhere. Applied AI gives you three things: machine learning, domain knowledge and data - if any one of those is missing, the whole operation falls over.”

@CATALAIZE Interesting Business Case study of how @PepsiCo applies #AI #ML #IoT to monitor messy #potatoes peeling & image quality check every Potato Chip! @ShahmeerMirza #ReworkAuto for #food #manufacturing & impacts #cx

On the Robotics track, the morning was in full swing with Gregory Kahn, PhD student at UC Berkeley, sharing his work on real-world reinforcement learning for mobile robotics. He gave us an example of how they built a car to navigate its way through the halls at UC Berkeley; four steps needed to be taken: ‘defining inputs and outputs, designing how to self-supervise the outputs, defining the cost function (minimise collision) and designing the GCG model - we used a deep neural network here’. Given the model, Gregory explained how they use reinforcement learning by training the model, forming a cost (minimise collision) and finally executing the policy to minimise that cost. We then saw how the model initially experiences failures but learns from its mistakes, and keeps ‘messing up’ until it has learned how to navigate the halls.
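For readers who want to see the shape of that loop, here is a minimal sketch in PyTorch of the four steps Gregory described. It is an illustration under assumptions rather than the actual GCG implementation: the observation and action dimensions, the network, the horizon and the random ‘rollout’ data are all hypothetical stand-ins.

```python
# Minimal sketch of a self-supervised collision-avoidance loop.
# All dimensions, the network and the data are illustrative assumptions,
# not the real GCG code from the talk.
import torch
import torch.nn as nn

HORIZON = 8            # steps ahead the model predicts
OBS_DIM, ACT_DIM = 16, 2

# Step 1: define inputs (observation + planned action sequence) and
# outputs (a collision prediction for each step over the horizon).
model = nn.Sequential(
    nn.Linear(OBS_DIM + HORIZON * ACT_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, HORIZON),   # logits: will this step collide?
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(obs, actions, collided):
    """Steps 2-3: self-supervised labels (the robot's own logged bump
    events) train the model; the cost is simply predicted collision."""
    logits = model(torch.cat([obs, actions.flatten(1)], dim=1))
    loss = bce(logits, collided)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def plan(obs, n_candidates=128):
    """Step 4: execute the policy by picking the candidate action
    sequence with the lowest predicted collision cost."""
    cands = torch.randn(n_candidates, HORIZON, ACT_DIM)
    inp = torch.cat([obs.expand(n_candidates, -1), cands.flatten(1)], dim=1)
    cost = torch.sigmoid(model(inp)).sum(dim=1)   # expected collisions
    return cands[cost.argmin()]

# Fake logged experience standing in for real rollouts in the halls.
obs = torch.randn(32, OBS_DIM)
actions = torch.randn(32, HORIZON, ACT_DIM)
collided = torch.randint(0, 2, (32, HORIZON)).float()
print(train_step(obs, actions, collided))
print(plan(torch.randn(1, OBS_DIM)).shape)
```

The appeal of the setup is exactly what Gregory emphasised: the collision labels come from the robot’s own experience, so no human annotation is needed and every ‘mess up’ becomes training data.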

What else did we learn in this morning’s sessions?

Georgia Gkioxari, Research Scientist at Facebook AI Research spoke about embodied vision:

“We need to understand and detect all elements of an image - what people are doing, who they’re doing it with, even maybe what they’re feeling. There’s been tremendous progress in visual recognition - we’ve seen this much success due to the large scale and variety of data sets - they contain hundreds of thousands of examples with annotations. These arrive from the web, so are they actually representative of the world that we live in? Humans don’t just passively process data - we explore, and so we need to teach intelligent agents to develop an intelligence to understand the concept of learning interactively.”

Lionel Cordesses, AI Senior Team Manager at Renault Innovation SV shared his work on machine learning through analogies formalised with tools from category theory:

“One car is manufactured every 3 seconds at Renault, Nissan, Mitsubishi. Even for 10 million cars, we will only have 10 defective parts, so it’s challenging for us to train machine learning for defective parts because there aren’t many! We work on research - now we have limited data which is a rare challenge. Our target was clear - let’s try to design something that would use a limited data set and limited time.”

Corey Lynch, Research Resident at Google Brain spoke on Self-supervised Imitation:

"Imitation learning is more learning from demonstration - we want robots to be able to learn without actively requiring demonstration. When a baby learns from the TV, the TV doesn’t know it’s being imitated, it’s not doing an active demonstration. The abstract features you learn in a visual representation scenario are only as good as the labels you provide. It’s hard to weigh what’s actually most important. The second choice here might be unsupervised learning where you take an input image, encode it in continuous distribution, and reconstruct the original image. You can now avoid labels, and the image itself is the synthetic label driver. You don’t see people doing this in robotics because it’s challenging. The most appealing learning for us is self supervised learning where  you combine the rich features of supervised learning, but without the labels"

@atg_abhishek Disruptive opportunities vs deceptive disappointment when humans think linearly while change happens exponentially - wise words from T from @FellowRobots! Definitely going to bring this back to #Montreal #reworkauto @reworkauto

Throughout the day, we also recorded several episodes of the Women in AI Podcast, and filmed exclusive speaker interviews which will be available on the RE•WORK Digital Content Hub. George Lawton from TechTarget interviewed Greg Kinsey from Hitachi, Andy Zeng from Google Brain and Princeton University, and Jeremy Marvel from NIST, whilst Tony Peng from Synced spoke with Dan Yu from Siemens Corporate Technology, Jeff Clune from Uber AI Labs, and Karol Hausman from Google Brain. On the podcast, we had some exciting conversations about the current landscape of AI and deep learning, as well as how we can encourage more girls and women into the industry. In the coming weeks you’ll be able to listen to episodes with Georgia Gkioxari from Facebook AI Research, Jana Kosecka from Google, Ayanna Howard from Georgia Institute of Technology and Fiona McEvoy from YouTheData.

More presentation highlights:

Animesh Garg, Postdoctoral Researcher in Robotics and Deep Learning at Stanford University: “We want systems to work outside of the lab, in the messiness of the real world, and to go beyond and perform the tasks at a complexity that has not been shown today”.

Jana Kosecka, Professor in the Department of Computer Science at George Mason University & Visiting Research Scientist at Google: “One of the big challenges is that we are interested in detecting particular instances of objects, for which there is very little training data available.”

Jeremy Marvel, Research Scientist & Project Leader at NIST: “There really is no such thing as a collaborative robot.”

Lawson Wong, Senior Research Associate, Brown University: “We start off with an input, either through speech or text. The robot doesn’t understand this... maybe it has skills or a dedicated skill, or a goal. Maybe more general than that, you just want it to programme itself.”

The afternoon sessions launched with the panel discussion: Global Policy Surrounding AI & Autonomous Systems. The moderator, Samiron Ray from CometLabs, started by introducing the guests in the discussion: Michael Hayes from the Consumer Technology Association, Gretchen Greene from MIT Media Lab, and Abhishek Gupta from District 3 & McGill.

Samiron: There’s a lot of public conversation around AI & social consequences. What do you think is missing right now?

Gretchen: I worked in credit at a time before people talked about data science. There must have been some pushback, because regulations did come in, but there wasn’t a big public outcry in the way there is about artificial intelligence! For example, your credit score is used in so many ways that impact you - it can stop you from getting a mortgage, a loan, a rental - but people don’t seem to question the way that data is used. There are other similar things that aren’t getting as much attention. We need to ask, what’s actually different about AI?

Michael: We need to make sure that when we’re discussing regulations, the discussion is grounded in reality - we need to think about what’s actually going to be used in the real world and create regulations based on that. We need to look at real issues rather than hypothetical consequences that are far, far away; we want the public discourse to focus on where AI will actually impact your life.

Abhishek: All of my work is grounded in reality, and one of the things that’s missing in public discourse is the way it’s expressed. For example, when we look at articles related to AI, most of them have a picture of a robot on the cover. People then think about these systems as embodiments of intelligence, but in most places machine learning is employed in the background as an add-on, rather than something dominating and replacing humans. We need to recognise the capabilities and limitations rather than looking towards creating regulations for AGI, which is so far off.

The panel went on to explore all aspects of policy within AI, and Michael explained that what we really need to think about is whether the technology is benefiting society or not. ‘That’s the main question. I don’t know if people are thinking about their concerns in relation to society. We’ve got real issues around bias and potential job displacement - these are the issues we should focus on from a regulatory perspective. We should look at benefits vs risks.’

Talent & Talk

During the afternoon coffee break, we opened the floor to companies or individuals looking for new talent in AI, deep learning and machine learning roles, and heard from some really exciting companies looking to hire.

Ascent Robotics, based in Tokyo, are hiring across machine learning, robotics, software engineering, automation, DevOps and more.

Kenedy Brown from Nlite.AI, based in Portland, explained how they’re currently building their U.S. team and looking for a lead data scientist and a junior ML engineer. They’re solving a broad set of problems, from predictive maintenance to supply chain optimisation, so there’s a huge range of work there.

Sergue Nikolskiy from Glidewell Dental, the largest dental lab in the world, opened by asking 'What does dentistry have to do with AI? There are so many things we’re doing with AI today - we make 10,000 unique crowns a day, so we use machine learning and we’re making crowns using neural networks.' Sergue and his team are looking for people 'who like good weather and good waves to join our team in California.'

The final session of the day brought together Abhishek Gupta from District 3 and McGill University, Andrew Grotto from Hoover Institution & CISAC, and Jeremy Marvel from NIST who discussed Industry 4.0 & Cybersecurity: How do we Safeguard our Data and Manage Risk? Here are some of the questions we covered:

  • What do you see as the current landscape of cybersecurity, and how is the industrial landscape affected?
  • When we’re looking at embedded devices, they come with limited computing power - how can we manage risk on those sorts of devices?
  • When rolling out a new product comes at the risk of cybersecurity, where do we draw the line?
  • Thinking about things like passwords, we’re moving into an era of biometrics like fingerprints - once someone has this information, it’s forever. You can’t change your fingerprints.
  • What are some of the potential consequences of using AI from a cyber offensive perspective?

To bring day one to a close, attendees from both the AI in Industrial Automation Summit and the Deep Learning for Robotics Summit came together for networking over wine and beer. We had some great conversations and heard about people’s highlights of both the event today, and working in AI more generally.

Georgia Gkioxari, Facebook AI Research: “My favourite thing about working in AI is the collaborative environment. I’m very fortunate to have worked in labs with lots of women, but it’s important to encourage girls into the industry from a young age, and make women feel that they can do whatever they want. AI isn’t just for boys!”

Dalia Vitkauskaite, Volunteer: “The best thing about working in deep learning? I believe it is the opportunity to build the future. Deep learning is a great area to be in right now, as we have access to huge amounts of data and powerful computers. However, it is just a tiny step towards something bigger, something we probably cannot imagine right now. And I find it very exciting.”

Joel Trudeau, Physics Researcher, Dawson College: “I’m looking to understand how people are using reinforcement learning in novel applications. The whole first session was very interesting. Greg Kahn’s talk about the experiments at UC Berkeley was super interesting!”

Beiming Lu, Graduate Student, University of San Francisco: “I’m trying to understand how people are using the technology and applying it in a practical sense and in a business setting, rather than just with an academic focus.”

Rick Schweet, President, Deep Vision Data: “This seems like the perfect event for us - the smaller events where the attendees are focused exactly on the work we do are much more beneficial than the big events where only a small percentage of people are interested.”

Radhen Patel, Graduate Research Assistant, University of Colorado: "I’m looking forward to hearing about Deep Learning for tactile sensing as people are not using it as much as they should, especially for manipulation.”

Benjamin Blumenfeld, Software Engineer, Independent: “I’m really excited to be here as a novice in ML and AI. I’m a software engineer and excited to learn about abstracting some of these complicated topics to use in my application.”

We’ll be back tomorrow to hear from more experts in the field, as well as joining interactive workshops, robot demonstrations, more interviews and panel discussions. Keep up to date with the event on @reworkrobotics and @reworkauto, and join the conversation with #reworkROBOTICS and #reworkAUTO.