Description

Deep learning researchers are increasingly using Jupyter notebooks to implement interactive, reproducible workflows. Such solutions are typically deployed on small-scale (e.g., single-server) computing systems. However, as the sizes and complexities of datasets and associated neural network models increase, distributed systems become important for training and evaluating models in a feasible amount of time. In this poster, we describe our work on Jupyter notebook solutions for distributed training and hyper-parameter optimization of deep neural networks on high-performance computing systems.