One Summit, three tracks, eight stages. RE•WORK’s Deep Learning Summit has grown year on year, from 228 attendees at our first event in January 2015 to nearly 1,000 attendees, speakers and exhibitors in 2018. That growth is a testament not only to the excellent and fascinating work of our speakers, but also to the growing interest in and application of AI.
In January, attendees from over 20 countries will meet to learn from industry experts in speech and pattern recognition, neural networks, image analysis, and NLP, and to explore how deep learning will impact every industry. As Eli David put it at the Deep Learning Summit in San Francisco in 2018: “within the next decade any company that will not heavily rely on deep learning will be left behind.”
Looking back at our first ever Deep Learning Summit, we are taking a look at where five of our original speakers are now.
Ian Goodfellow is a Senior Staff Research Scientist at Google Brain. He is the lead author of the MIT Press textbook Deep Learning. In addition to generative models, he also studies security and privacy for machine learning. He has contributed to open source libraries including TensorFlow, Theano, and Pylearn2. He obtained a PhD from the University of Montreal in Yoshua Bengio's lab, and an MSc from Stanford University, where he studied deep learning and computer vision with Andrew Ng. He is generally interested in all things deep learning. Ian will be presenting at the upcoming Deep Learning Summit in January; make sure not to miss it by signing up here now.
Check out Ian's presentation from the Deep Learning Summit in 2015 here.
Charlie Tang obtained his PhD in Machine Learning from the University of Toronto in 2015, advised by Geoffrey Hinton and Ruslan Salakhutdinov. His thesis focused on various aspects of deep learning technology. Charlie also holds a Bachelor's in Mechatronics Engineering and a Master's in Computer Science from the University of Waterloo. After his PhD, Charlie co-founded a startup with Ruslan Salakhutdinov and Nitish Srivastava focused on the application of deep learning based vision algorithms. Currently, Charlie is a research scientist at Apple Inc. His research interests include deep learning, vision, neuroscience and robotics.
Richard Socher is Chief Scientist at Salesforce. He was previously the CEO and founder of MetaMind, a startup that sought to improve artificial intelligence and make it widely accessible. He obtained his PhD from Stanford, working on deep learning with Chris Manning and Andrew Ng, and won the best Stanford CS PhD thesis award. He is interested in developing new AI models that perform well across multiple different tasks in natural language processing and computer vision.
Watch Richard's presentation from 2015 here.
Marian Bartlett, Ph.D., is a Research Scientist at Apple Inc. Marian is a pioneer in the field of machine learning and computer vision for face analysis. She and her colleagues developed software that automatically detects facial expressions of the seven primary emotions, as well as individual facial muscle movements, in collaboration with Paul Ekman, a founder of the science of facial behavior. The potential for this technology is far-reaching, across healthcare, education, advertising, and retail. The technology was awarded best new product by CONNECT, San Diego, in 2013, and Marian was a winner of the 2014 Women Who Mean Business Award from the San Diego Business Journal. Marian received her Ph.D. in Cognitive Science and Psychology from the University of California, San Diego, and her B.A. in Mathematics and Computer Science from Middlebury College.
Take a look at Marian's presentation from 2015 here.
Nitish Srivastava is a Machine Learning Engineer at Apple Inc. He is interested in using machine learning to create representations for images and videos that can help solve computer vision problems, and is currently working on object detection and action recognition. He is also interested in combining multiple data modalities into joint representations that can be used for cross-modal information retrieval. In addition, he helped develop a new regularization technique, dropout, that makes it possible to train very large and deep neural networks without overfitting.
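For readers unfamiliar with that regularization technique, dropout randomly zeroes out units during training so the network cannot rely on any single co-adapted feature. Below is a minimal NumPy sketch of the common "inverted dropout" formulation (the function name and parameters are illustrative, not taken from Srivastava's own code):

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=None):
    """Inverted dropout: during training, zero each unit with probability p
    and scale the survivors by 1/(1-p), so the expected activation matches
    the test-time behavior. At test time, the input passes through unchanged."""
    if not train or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p  # keep a unit with probability 1 - p
    return x * mask / (1.0 - p)
```

At test time no units are dropped and no scaling is applied, which is why the training-time rescaling by 1/(1-p) is needed: it keeps the expected magnitude of activations consistent between the two modes.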