Authors:
Abstract: Current AI benchmarks suffer from several drawbacks. First, they are fixed once selected and cannot adapt to emerging changes in deep learning (DL) algorithms. Second, they contain tens to hundreds of applications and therefore have very long running times. Third, they are mainly drawn from open sources, which are restricted by copyright and are not representative of proprietary applications. To address these drawbacks, this work proposes a synthetic benchmark framework that generates a small number of benchmarks that best represent a broad range of applications, using their profiled workload characteristics. The synthetic benchmarks can adapt to new DL algorithms by re-profiling new applications and updating themselves, greatly reduce the number of benchmark tests and the running time, and closely represent the DL applications of interest. The framework is validated with log data profiled from DL models running on the Alibaba AI platform and is shown to be representative of real workload characteristics.
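The abstract does not specify how the representative benchmarks are chosen from the profiled workload characteristics. One plausible approach is a clustering-based selection: treat each application's profiled metrics as a feature vector, cluster the vectors, and keep the application nearest each cluster center as a representative benchmark. The sketch below illustrates this idea with a minimal k-means in plain Python; the feature values and the function name `select_representatives` are hypothetical, not from the poster.

```python
def select_representatives(profiles, k, iters=20):
    """Pick k representative workload profiles via a simple k-means.

    profiles: list of equal-length feature vectors (e.g., hypothetical
    profiled metrics such as compute intensity and memory intensity).
    Returns sorted indices of the profiles nearest each cluster centroid.
    """
    # Deterministic initialization: first k profiles serve as seeds.
    centroids = [list(p) for p in profiles[:k]]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for _ in range(iters):
        # Assign each profile to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in profiles:
            j = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[j].append(p)
        # Move each centroid to the mean of its assigned profiles.
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]

    # The real profile closest to each centroid becomes a representative.
    reps = {min(range(len(profiles)), key=lambda i: dist2(profiles[i], c))
            for c in centroids}
    return sorted(reps)

# Hypothetical profiles: [compute intensity, memory intensity].
profiles = [[1.0, 0.1], [1.1, 0.2], [0.2, 1.0],
            [0.1, 1.1], [0.9, 0.15], [0.15, 0.9]]
print(select_representatives(profiles, k=2))  # → [0, 2]
```

With these toy vectors the six applications fall into a compute-bound group and a memory-bound group, and two representatives (one per group) stand in for all six, which mirrors the abstract's goal of reducing the number of benchmark tests while preserving workload coverage.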
Best Poster Finalist (BP): no