SC18 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

MLModelScope: Evaluate and Measure Machine Learning Models within AI Pipelines

Authors: Abdul Dakkak (University of Illinois), Cheng Li (University of Illinois), Wen-mei Hwu (University of Illinois), Jinjun Xiong (IBM)

Abstract: The current landscape of Machine Learning (ML) and Deep Learning (DL) is rife with non-uniform frameworks, models, and system stacks, but lacks standard tools to facilitate the evaluation and measurement of models. In the absence of such tools, the current practice for evaluating and comparing the benefits of proposed AI innovations (be they hardware or software) on end-to-end AI pipelines is both arduous and error-prone, stifling the adoption of those innovations. We propose MLModelScope, a hardware/software-agnostic platform to facilitate the evaluation, measurement, and introspection of ML models within AI pipelines. MLModelScope aids application developers in discovering and experimenting with models, data scientists in replicating and evaluating models for publication, and system architects in understanding the performance of AI workloads.

Best Poster Finalist (BP): no

