Auto-Tuning TensorFlow Threading Model for CPU Backend
Authors: Niranjan Hasabnis (Intel Corporation)
Abstract: TensorFlow is a popular deep learning framework used to solve machine learning and deep learning problems such as image classification and speech recognition. It allows users to train neural network models or deploy them for inference using GPUs, CPUs, and custom-designed hardware such as TPUs. Even though TensorFlow supports a variety of optimized backends, realizing the best performance from a backend requires additional effort. In particular, getting the best performance from a CPU backend requires tuning its threading model. Unfortunately, the prevailing tuning approach today is manual, tedious, time-consuming, and, more importantly, may not guarantee the best performance.
In this paper, we develop an automatic approach, called TENSORTUNER, to search for optimal parameter settings of TensorFlow’s threading model for CPU backends. We evaluate TENSORTUNER on both the Eigen and Intel MKL CPU backends using a set of neural networks from TensorFlow’s benchmarking suite. Our evaluation results demonstrate that the parameter settings found by TENSORTUNER deliver 2% to 123% performance improvement for the Eigen CPU backend and 1.5% to 28% performance improvement for the MKL CPU backend over the performance obtained with their best-known parameter settings. These results highlight that the default parameter settings of the Eigen CPU backend are not ideal, and that even the settings of the carefully hand-tuned MKL backend are sub-optimal. Our evaluations also revealed that TENSORTUNER is efficient at finding the optimal settings: it converges quickly by pruning more than 90% of the parameter search space.
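To make the tuning problem concrete, the sketch below enumerates candidate settings of TensorFlow's CPU threading-model parameters (intra-op threads, inter-op threads, and, for an MKL build, `OMP_NUM_THREADS`) and keeps the best-scoring one. This is an illustrative exhaustive baseline only: `run_benchmark` is a stand-in stub (real tuning would time actual training steps), the peak at 16 threads is a hypothetical core count, and TENSORTUNER itself prunes the search space rather than enumerating it.

```python
import itertools

def run_benchmark(intra_op, inter_op, omp_threads):
    # Stub objective standing in for measured throughput: pretend
    # performance peaks when intra-op and OMP threads match a
    # hypothetical 16 physical cores, with 2 inter-op threads.
    return 100.0 - abs(intra_op - 16) - abs(inter_op - 2) - abs(omp_threads - 16)

def exhaustive_search(intra_vals, inter_vals, omp_vals):
    # Naive baseline: try every combination and keep the best score.
    best, best_score = None, float("-inf")
    for intra, inter, omp in itertools.product(intra_vals, inter_vals, omp_vals):
        score = run_benchmark(intra, inter, omp)
        if score > best_score:
            best, best_score = (intra, inter, omp), score
    return best, best_score

best, score = exhaustive_search([4, 8, 16, 32], [1, 2, 4], [8, 16, 32])
print(best)  # the winning (intra_op, inter_op, OMP_NUM_THREADS) triple
```

In a real run, the winning triple would then be applied through TensorFlow's session/threading configuration (e.g. `intra_op_parallelism_threads`) and, for MKL, the `OMP_NUM_THREADS` environment variable. Even this toy grid has 36 points per network, which is why pruning the space, as TENSORTUNER does, matters.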
Back to Machine Learning in HPC Environments Archive Listing