Groq ISCA 2022 Paper

Download our award-winning ISCA 2022 paper, "A Software-defined Tensor Streaming Multiprocessor for Large-scale Machine Learning."

We describe our novel commercial software-defined approach to large-scale interconnection networks of tensor streaming processor (TSP) elements. The system architecture includes the packaging, routing, and flow control of the interconnection network of TSPs.

We describe the communication and synchronization primitives of a bandwidth-rich substrate for global communication. This scalable communication fabric provides the backbone for large-scale systems based on a software-defined Dragonfly topology, ultimately yielding a parallel machine learning system with the elasticity to support a variety of workloads, both training and inference. We extend the TSP's producer-consumer stream programming model to include global memory, implemented as logically shared but physically distributed on-chip SRAM. Each TSP contributes 220 MiB to the global memory capacity, with the maximum capacity limited only by the network's scale, that is, the maximum number of endpoints in the system.
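To make the "logically shared, physically distributed" idea concrete, here is a minimal sketch of how a flat global address could map onto per-chip SRAM partitions. This is an illustrative model only, assuming a simple contiguous partitioning; the names and the mapping scheme are ours, not Groq's API.

```python
# Hedged sketch: locating a flat global address in a logically shared,
# physically distributed memory built from 220 MiB SRAM partitions.
PER_TSP_BYTES = 220 * 1024 * 1024  # 220 MiB contributed by each TSP

def locate(global_addr: int, num_endpoints: int) -> tuple[int, int]:
    """Return (endpoint_id, local_offset) for a flat global address,
    assuming each endpoint owns one contiguous 220 MiB slice."""
    if not 0 <= global_addr < num_endpoints * PER_TSP_BYTES:
        raise ValueError("address outside global memory")
    return divmod(global_addr, PER_TSP_BYTES)

# Example: byte 1 of endpoint 3's SRAM slice in an 8-chip system
endpoint, offset = locate(3 * PER_TSP_BYTES + 1, 8)
```

Under this model, total capacity grows linearly with the number of endpoints, which is why the paper states the maximum capacity is bounded only by the network's scale.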

The TSP acts as both a processing element (endpoint) and a network switch for moving tensors across the communication links. We describe a novel software-controlled networking approach that avoids the latency variation introduced by dynamic contention for network links. We characterize the network's topology, routing, and flow control; the resulting fabric supports a large-scale parallel machine learning system with up to 10,440 TSPs and more than 2 TB of global memory, accessible with less than 3 microseconds of end-to-end system latency.
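The aggregate-capacity figure follows directly from the per-chip contribution; a quick back-of-the-envelope check (our arithmetic, using the numbers quoted above):

```python
# Sanity check of the aggregate capacity claim:
# 10,440 TSPs, each contributing 220 MiB of SRAM.
NUM_TSPS = 10_440
PER_TSP_MIB = 220

total_bytes = NUM_TSPS * PER_TSP_MIB * 2**20  # MiB -> bytes
total_tb = total_bytes / 1e12                 # decimal terabytes
print(f"{total_tb:.2f} TB")                   # roughly 2.41 TB
```

At about 2.41 TB, the system comfortably exceeds the "more than 2 TB of global memory" stated in the abstract.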
