Deep learning algorithms are now transitioning from proof-of-concept and academic research to production deployments in industry. Businesses may be keen to apply AI to their work, but unable to execute their strategies due to a lack of funding, staff, or knowledge within their teams. This, however, should not be an obstacle for smaller businesses: Intel’s Nervana Cloud enables businesses to develop custom deep learning software.
Intel Nervana offers its deep learning expertise, coupled with a deep learning platform that enables data scientists to schedule training jobs, track their experiments, and test deployment plans.
Companies benefiting from Intel Nervana include NASA, which is using deep learning to create better moon maps; the system crunches the numbers NASA collects and converts them into useful data.
At the Deep Learning Summit in Montreal, October 10 & 11, Staff Algorithms Engineer Hanlin Tang will discuss ‘Deep Learning in Production’, touching on lessons learned in engineering algorithms at scale, from model development to inference optimization to performance monitoring in the field, as well as presenting the Intel Nervana portfolio.
When we spoke with Hanlin, he explained that, as well as leading several AI projects for both commercial clients and U.S. federal agencies, his team conducts applied research in deep learning, particularly in industrial applications such as satellite imagery that have received little academic focus. The team also provides algorithms guidance for Intel Nervana’s deep learning silicon.
With a background in neuroscience, Hanlin began his work in deep learning in graduate school, working on ‘connecting neural data of the human brain with biologically-inspired computer vision models that are the precursor to today’s deep neural networks.’ He explained that the immature state of deep learning led him to realize that it is the tool builders, of both software and hardware, who will have the greatest impact in the near future. This led him to join the deep learning hardware startup Nervana Systems before it was acquired by Intel.
Intel Nervana optimizations have sped up deep learning inference by 50-100x on Intel Architecture, allowing for deployment at scale. They are also working on custom silicon designed specifically for training deep neural networks, which they believe will not only reduce training time but also enable new algorithmic capabilities.
Deep learning has been advancing at an astounding pace in recent years, with its impact becoming prevalent and beneficial across multiple industries. Andrew Ng even commented recently that ‘AI is the new Electricity’.
But what is it that’s enabled recent advancements in DL?
According to Hanlin, ‘the less acknowledged enabler of deep learning are the crowd-sourcing platforms that were critical for the human labeling of massive curated datasets like ImageNet. Novel academic datasets contributed to the community are still largely reliant on these platforms.’ AI has the potential to assist in countless industrial applications, from the production of autonomous vehicles, to automated monitoring of critical physical infrastructure, to enabling faster and more accurate medical diagnosis.
Deep learning, however, still has a long way to go and problems to solve. ‘Even though Fortune 100 companies have access to enormous amounts of data, the availability of human annotated data for industrial applications is still difficult to come by.’ Hanlin predicts that once companies begin to curate labeled data and build data collection and processing into their systems, an even more dramatic explosion of deep learning applications across all facets of industry will follow.