Engineering workloads have always been a challenge for data scientists. Training models on GPUs is time-consuming, and solutions are always needed to speed up the process. Valohai, an exhibitor at the Deep Learning Summit, San Francisco, offers a deep learning management platform that lets data scientists deploy experiments at scale with a single click. We caught up with Joanna Purosto, CMO at Valohai, to get a sneak peek at what they are planning for the summit.
Give us an overview of Valohai & your role there.
Valohai is a Deep Learning Management Platform that automates deep learning infrastructure for data science teams.
The benefits revolve around three main topics:
- Automating machine orchestration: Scale models to hundreds of CPUs or GPUs at the click of a button.
- Version control: Create an audit trail and reproduce any previous run with built-in version control for input data, hyperparameters, training algorithms and environments.
- Pipeline coordination: Manage the entire ML pipeline with automatic coordination, from feature extraction and training to deployment and inference.
I am the CMO of Valohai. My goal is to maintain a dialogue with data scientists across different channels and make sure that our platform addresses the current needs of enterprise data science teams.
What are some of the main obstacles when managing an entire deep learning pipeline?
This comes back to the three main benefits of our platform from the first question. Based on hundreds of conversations with data science teams, we have found three main obstacles in production-level machine learning.
The first obstacle is related to machine orchestration. Currently, data scientists need to SSH into a server, install the latest NVIDIA drivers and Python dependencies, and set up clusters of up to 100 GPUs hosting Docker containers just to get training running. We have automated this process: with the push of a button in our UI (or via our command line client), training can be started in the cloud (AWS, Azure or Google) or on on-premise hardware. Valohai also shuts down the instances when the training is done.
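To make the contrast concrete, here is a rough sketch of the manual orchestration a platform like this automates: launching a GPU instance that terminates itself once training finishes. It uses the real boto3 EC2 API, but the AMI ID, instance type and Docker image are hypothetical placeholders, not anything Valohai-specific.

```python
# Illustrative only: the launch-and-teardown boilerplate that a managed
# platform automates. AMI ID, instance type and image are hypothetical
# placeholders, not Valohai internals.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# User data script: run training in a container, then power off.
# With InstanceInitiatedShutdownBehavior="terminate", the poweroff
# terminates the instance, so idle GPU time is not billed.
user_data = """#!/bin/bash
docker run --gpus all my-registry/train:latest python train.py
poweroff
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical deep learning AMI
    InstanceType="p3.2xlarge",        # single-GPU training instance
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    InstanceInitiatedShutdownBehavior="terminate",
)
print("Launched:", response["Instances"][0]["InstanceId"])
```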
Secondly, there is version control and compliance. Many data science teams lack systematic archives for their models. Advanced teams might store versions of their code and models, but rarely anything more. Valohai automatically stores the full context of each experiment: the input data, hyperparameters, training code and execution environment. Proper version control helps significantly when teams want to reproduce models or develop them further, and in some compliance scenarios companies must be able to prove which model is in production and how it was built.
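As a rough illustration of what "reproduce any previous run" requires, here is a minimal Python sketch of the per-experiment metadata such a system has to capture. The field names and snapshot helper are my own invention, not Valohai's actual schema.

```python
# A minimal sketch of per-experiment metadata needed for reproducibility.
# Field names are illustrative, not Valohai's actual schema.
import hashlib
import json
import subprocess
from dataclasses import dataclass, asdict

@dataclass
class RunRecord:
    code_version: str      # exact git commit of the training code
    data_checksum: str     # hash of the input dataset
    environment: str       # Docker image pinning drivers and libraries
    hyperparameters: dict  # everything needed to re-run the training

def snapshot(data_path: str, image: str, hyperparameters: dict) -> RunRecord:
    commit = subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True
    ).strip()
    with open(data_path, "rb") as f:
        checksum = hashlib.sha256(f.read()).hexdigest()
    return RunRecord(commit, checksum, image, hyperparameters)

record = snapshot("train.csv", "pytorch/pytorch:1.0-cuda10.0", {"lr": 0.001})
print(json.dumps(asdict(record), indent=2))  # one entry in the audit trail
```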
A third facet is managing the whole machine learning development pipeline, from feature extraction to deployment. Google researchers have found that in production-level machine learning systems, only a small fraction, around 5%, is actual machine learning code; the remaining 95% is glue code and supporting infrastructure. Imagine what would happen if data scientists could turn that ratio around and spend most of their time on actual model building! Obviously, it would have a huge impact on go-to-market time.
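To make the glue-code ratio concrete, here is a hedged sketch of a three-stage pipeline in which the training function is the only "real" ML code and everything else is coordination. The stage contents are invented placeholders, not Valohai APIs.

```python
# Illustrative pipeline skeleton: in real production systems the code that
# wires these stages together typically dwarfs the model code itself.
# All stage contents are invented placeholders.
from typing import List

def extract_features(raw: List[float]) -> List[float]:
    # glue: cleaning and normalising the raw data
    peak = max(raw)
    return [x / peak for x in raw]

def train(features: List[float], lr: float) -> float:
    # the actual ML code: here a stand-in one-parameter "model"
    return sum(features) * lr

def deploy(model: float) -> str:
    # glue: packaging the trained model behind an inference endpoint
    return f"deployed model with weight {model:.3f}"

def run_pipeline(raw: List[float], lr: float) -> str:
    return deploy(train(extract_features(raw), lr))

print(run_pipeline([1.0, 2.0, 4.0], lr=0.01))
```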
Which developments in deep learning are you most excited about, and which industries do you think will be most impacted?
I’m personally interested in the energy sector, since I believe that once we find a way to efficiently connect machine learning, new energy forms (like solar) and battery technology, we will truly disrupt electricity grids as we know them today.
In general, I’m also following the discussion about the ethical use of AI across different industries. To me, this does not mean only preventing machine learning models from becoming biased or rehashing well-known ethical problems, like why male employees tend to have higher salaries or why people from some ethnic backgrounds are treated differently than others. These are important topics to acknowledge, but they hardly bring any new insight to the discussion about the ethics of AI.
Instead, by the ethical use of AI, I mean solutions that help us become better as humankind. One example comes from Valohai’s customer Two Hat Security, which is building a machine vision model that recognizes sexual abuse material on the darknet. This is, by all measures, the morally and ethically right thing to do, and it is an example of using technology as a tool to be humane and, in this case, to help sexually abused children.
Would you advise a career in AI, and what are the key skills that you think are needed for such roles?
I think the key skill is to learn new skills and adapt fast.
The AI industry is evolving at an ever-accelerating pace. New technologies emerge and old ones get updated; Valohai is one example of a technology that didn’t exist a couple of years ago. Companies and individuals need to be on the front line testing new technologies, and they also need a clear understanding of their industry vertical to identify opportunities to innovate and develop solutions further.
If you are a business person, learn the basics of Python and Data Science.
If you are a Data Scientist, have multiple conversations with your end users.
You’ll be joining us at the Deep Learning Summit in San Francisco. Tell us a little about what we can expect to see at your exhibit table.
At our booth, you can try Valohai in action and see how easy it is to get a training run up and running.
Join us in San Francisco to learn more from our exciting group of exhibitors. Sign up here now.