SC18 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

MATEDOR: MAtrix, TEnsor, and Deep-Learning Optimized Routines


Authors: Ahmad Abdelfattah (University of Tennessee), Jack Dongarra (University of Tennessee), Stanimire Tomov (University of Tennessee), Ichitaro Yamazaki (University of Tennessee), Azzam Haidar (Nvidia Corporation)

Abstract: The MAtrix, TEnsor, and Deep-learning Optimized Routines (MATEDOR) project develops software technologies and standard APIs, along with a sustainable and portable library, for large-scale computations that can be broken down into very small matrix or tensor computations. The main target of MATEDOR is to accelerate applications from important fields that fit this profile, including deep learning, data mining, astrophysics, image and signal processing, hydrodynamics, and more.

MATEDOR is a high-performance numerical library of batched linear algebra routines autotuned for modern processor architectures and system designs. The library includes LAPACK-compliant routines that target many small dense problems, as well as tensor and application-specific operations, e.g., for deep learning. These routines are built, as much as possible, from calls to batched BLAS routines and their counterparts required in sparse computation contexts.
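To make the batched idea concrete, the sketch below shows a fixed-size batched GEMM in the pointer-array style common to batched BLAS interfaces (as in cuBLAS and MAGMA batched routines): one call applies the same small matrix multiply across a whole batch. This is an illustrative reference implementation, not the MATEDOR API; the function name, argument order, and row-major layout are assumptions for the example.

```c
#include <assert.h>

/* Illustrative sketch (not the MATEDOR API): fixed-size batched GEMM.
   Computes C[i] = alpha * A[i] * B[i] + beta * C[i] for
   i = 0 .. batch_count-1, where each A[i] is m x k, each B[i] is k x n,
   and each C[i] is m x n, all stored row-major. The pointer-array
   argument style mirrors batched BLAS interfaces such as
   cublasDgemmBatched. */
void dgemm_batched(int m, int n, int k, double alpha,
                   double **A_array, double **B_array,
                   double beta, double **C_array, int batch_count)
{
    for (int b = 0; b < batch_count; ++b) {
        const double *A = A_array[b];
        const double *B = B_array[b];
        double *C = C_array[b];
        /* Naive triple loop over one small problem in the batch. */
        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < n; ++j) {
                double acc = 0.0;
                for (int p = 0; p < k; ++p)
                    acc += A[i * k + p] * B[p * n + j];
                C[i * n + j] = alpha * acc + beta * C[i * n + j];
            }
        }
    }
}
```

A tuned batched routine replaces the outer loop with parallel execution (e.g., one GPU thread block per problem), which is where the autotuning described above pays off for very small matrices.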


Best Poster Finalist (BP): no


