Twelve computational graphs for Christmas

A graph is the best way to describe the models you create in a machine learning system, and most machine learning frameworks, such as Google's TensorFlow, use graphs. These computational graphs are made up of vertices, which perform operations, connected by edges, which describe the communication paths between the vertices.
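
To make the vertices-and-edges picture concrete, here is a minimal sketch using TensorFlow's graph-construction API (written against tf.compat.v1 so it runs on current TensorFlow); the tiny graph and the operation names are purely illustrative:

```python
# A minimal computational graph: operations are vertices,
# the tensors flowing between them are edges.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Vertices: each operation (placeholder, matmul, add, tanh) is a node.
    x = tf.placeholder(tf.float32, shape=(1, 4), name="x")
    w = tf.constant([[1.0], [2.0], [3.0], [4.0]], name="w")
    y = tf.tanh(tf.matmul(x, w) + 1.0, name="y")

# Edges: each operation lists the tensors (communication paths) it consumes.
for op in graph.get_operations():
    print(op.name, "<-", [t.name for t in op.inputs])
```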

The graph abstraction is attractive because it makes no assumptions about the structure of the computation, and it breaks the computation into component pieces that a highly parallel processor such as the Graphcore IPU can exploit for performance.

Our C++ software framework, called Poplar, interfaces seamlessly with leading machine learning frameworks such as TensorFlow. Poplar automatically converts the graph output by a framework into an extremely detailed computational graph of its own, and from that produces the code and the communication patterns that will run on the IPU.
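
Poplar's actual lowering is far more sophisticated, but a toy sketch can illustrate the general idea: place each vertex of the graph on a processor tile, and turn every edge that crosses a tile boundary into communication. Everything below, including the example graph and the round-robin placement, is a hypothetical illustration rather than Poplar's algorithm:

```python
# A toy illustration (not Poplar's algorithm) of turning a computational
# graph into per-tile work plus a communication schedule.

graph = {            # op -> list of ops it consumes (hypothetical example)
    "matmul": [],
    "bias":   [],
    "add":    ["matmul", "bias"],
    "tanh":   ["add"],
}
num_tiles = 2

# Naive round-robin placement: assign each vertex to a tile.
placement = {op: i % num_tiles for i, op in enumerate(graph)}

# Any edge whose endpoints sit on different tiles becomes communication.
exchanges = [
    (src, dst)
    for dst, srcs in graph.items()
    for src in srcs
    if placement[src] != placement[dst]
]

for tile in range(num_tiles):
    ops = [op for op, t in placement.items() if t == tile]
    print(f"tile {tile}: compute {ops}")
print("exchange phase:", exchanges)
```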

Poplar is not a language; it is a low-level tensor graph framework that uses the same type of programming model you find in TensorFlow and other machine learning frameworks.

If you want to keep things really simple, you can just write your machine learning model description in TensorFlow and let Poplar compile and run it on the IPU. If you want more control, you can look at the library elements and modify or extend them to create new types of machine learning models or deep neural networks.
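
As a rough sketch of that simple path, assuming a TensorFlow device-placement mechanism for the IPU: the "/device:IPU:0" device string below is illustrative, and soft placement lets the example fall back to the CPU on a machine without an IPU.

```python
# Sketch: describe the model in TensorFlow, let the backend target the
# accelerator. The IPU device string is an assumption for illustration.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.device("/device:IPU:0"):   # hypothetical placement target
    x = tf.placeholder(tf.float32, shape=(None, 784), name="x")
    w = tf.get_variable("w", shape=(784, 10))
    logits = tf.matmul(x, w)

# allow_soft_placement falls back to an available device if no IPU exists.
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(logits, {x: [[0.0] * 784]}))
```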

The 12 beautiful visualizations here show how our Poplar software represents the graph model for the IPU processor. They are orders of magnitude more detailed than the way neural network structures have previously been represented. While their utility at the moment is mostly visual, graph visualization is certainly something we will revisit as an active research topic. Dave Lacey will be talking more about graph computing at the RE•WORK Deep Learning Summit in San Francisco this January 25 & 26. For now, just enjoy these visualizations...