Evaluating the Impact of Spiking Neural Network Traffic on Extreme-Scale Hybrid Systems
Abstract: As we approach the limits of Moore's law, there is increasing interest in non-von Neumann architectures, such as neuromorphic computing, that offer improved compute efficiency and low-power operation. Spiking neural network (SNN) applications have so far shown very promising results running on a number of processors, motivating the desire to scale to even larger systems with hundreds or even thousands of neuromorphic processors. Since these architectures do not currently exist in large configurations, we use simulation to scale real neuromorphic applications from a single neuromorphic chip to thousands of chips in an HPC-class system. Furthermore, we use a novel simulation workflow to perform a full-scale system analysis of network performance and of the interaction between neuromorphic workloads and traditional CPU workloads in a hybrid supercomputer environment. On average, we find that Slim Fly, Fat-Tree, Dragonfly-1D, and Dragonfly-2D are 45%, 46%, 76%, and 83% faster, respectively, than the worst-performing topology for both convolutional and Hopfield NN workloads running alongside CPU workloads. Running in parallel with CPU workloads translates to an average slowdown of 21% for Hopfield-type workloads and 184% for convolutional NN workloads across all HPC network topologies.
Published in: The 9th International Workshop on Performance Modeling, Benchmarking, and Simulation of High-Performance Computer Systems (PMBS18)