Deep learning and convolutional neural networks have become state-of-the-art techniques for solving many computer vision problems, but the compute intensity and large memory requirements of these algorithms make it challenging to realise them on low-power embedded platforms. At the Deep Learning Summit in Singapore, Gopalakrishna Hegde will be speaking about these challenges, as well as presenting ongoing research in energy-efficient acceleration of deep learning algorithms, accelerators for neural networks, and network compression techniques. Gopalakrishna is a Research Associate at the School of Computer Science & Engineering at Nanyang Technological University, where he works on the feasibility analysis and acceleration of deep learning algorithms on low-power embedded platforms. I spoke to him ahead of the summit in October to learn more about recent advancements and future progress in deep learning.
What do you feel are the leading factors enabling recent advancements in deep learning?
Deep learning is a data-driven approach, and its applications require large amounts of computing and memory resources; even modern CPUs struggle to achieve real-time performance on them. From this perspective, the leading factors enabling recent advancements in deep learning are the availability of large amounts of data and of parallel computing hardware such as GPUs.
When the pioneers in this research area showed that deep neural networks can achieve far better performance on computer vision tasks than traditional machine learning algorithms, industry shifted its focus towards them. Large industrial investments in this area are enabling more and more researchers to participate in deep learning research. The open-source contributions of AI industry leaders such as Google, IBM and Facebook, together with massive contributions from the wider open-source community, have produced numerous deep learning software frameworks and tools that enable fast-paced development in this area.
Which industries do you think will be disrupted by deep learning in the future, and how?
In short, without confining this to any particular sector, the majority of today's computer automation tasks are going to feel the influence of deep learning in the near future: information technology, industrial automation, automotive, biomedical, education, and the list goes on. As we gain more data and faster computing infrastructure, deep learning algorithms will outperform existing approaches on the majority of automation tasks, and this will force industries to adopt the new technology.
What do you feel is essential to future progress of the field?
Research and engineering should be combined for better use of deep learning algorithms in real-world applications. Knowledge sharing and collaboration are essential, at least within the research community. It is also important that industry focuses on efficient realisation of these compute-intensive algorithms so that they can be moved to edge devices.
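One common compression technique in this vein is post-training weight quantization, which stores parameters in 8-bit integers instead of 32-bit floats to shrink memory and bandwidth needs on edge devices. The sketch below is a minimal, illustrative NumPy implementation of symmetric per-tensor quantization, not a description of any specific accelerator or of Hegde's own methods:

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Symmetric linear quantization of a float weight tensor to signed ints."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax        # one scale factor for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_weights(q, scale):
    """Recover approximate float weights from the quantized tensor."""
    return q.astype(np.float32) * scale

# A toy 32-bit weight matrix: storing it as int8 cuts memory roughly 4x.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_weights(w)
w_hat = dequantize_weights(q, scale)
print("max reconstruction error:", np.max(np.abs(w - w_hat)))
```

With a symmetric scheme the largest reconstruction error is bounded by half the scale step, which is why 8-bit weights often preserve accuracy well enough for inference on embedded hardware; production toolchains typically add per-channel scales and calibration on real data.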
What advice would you give someone who would like to work in this field?
This technology has shown great potential even in the early stages of its evolution, so it is a good time to get involved with the field. Brush up on your conventional machine learning skills and you are ready to jump in.
Gopalakrishna Hegde will be speaking at the Deep Learning Summit in Singapore on 20-21 October. Other speakers include Brian Cheung, Google Brain; Modar Alaoui, Eyeris; Pradeep Kumar, Lenovo; and Vassilios Vonikakis, Advanced Digital Sciences Center.
Early Bird tickets are available until the end of Friday 26 August; book now to reserve your pass at a discounted rate! For more information and to register, please visit the website here. Take advantage of our Summer Special discount: enter the code SUMMER20 when registering for any of our upcoming summits for 20% off tickets. View all upcoming events here.