Student:
Supervisor: Christoph Csallner (University of Texas, Arlington)
Abstract: Deep learning has recently emerged as a powerful technique for many tasks, including image classification. A key bottleneck of deep learning is that training takes a long time, since state-of-the-art deep neural networks have millions of parameters and hundreds of hidden layers. The early layers of these networks have the fewest parameters but account for most of the computation.
In this work, we reduce training time by progressively freezing hidden layers: we pre-compute their output and exclude them from both the forward and backward passes in subsequent training iterations. We compare this technique to the most closely related approach for speeding up neural network training.
Through experiments on two widely used image-classification datasets, we empirically demonstrate that our approach can save up to 25% of wall-clock training time with no loss in accuracy.
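The abstract describes the freezing mechanism only at a high level, so below is a minimal sketch of progressive layer freezing in PyTorch-style Python. It is not the authors' implementation: the toy model, its split into stages, the fixed freeze_every schedule, and the in-memory activation cache are all illustrative assumptions; the paper's actual freezing criterion and caching strategy may differ.

```python
# Minimal sketch (not the authors' code) of progressive layer freezing:
# once a prefix of layers is frozen, its parameters receive no gradients
# (backward-pass savings) and its outputs are cached so later epochs
# skip its forward pass as well (forward-pass savings).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy model split into stages so early stages can be frozen independently."""
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 32, 10)),  # 32x32 inputs assumed
        ])

    def forward(self, x, start_stage=0):
        # When early stages are frozen, the caller passes their cached output
        # and the forward pass starts at `start_stage`.
        for stage in self.stages[start_stage:]:
            x = stage(x)
        return x

def freeze_stage(model, idx):
    """Exclude one stage from gradient computation."""
    for p in model.stages[idx].parameters():
        p.requires_grad_(False)

def train(model, loader, epochs=6, freeze_every=2):
    # `loader` is assumed to yield batches in a fixed order (no shuffling),
    # so the batch index is a stable key for the activation cache.
    opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.01)
    frozen = 0   # number of leading stages currently frozen
    cache = {}   # batch index -> pre-computed output of the frozen prefix

    for epoch in range(epochs):
        for i, (x, y) in enumerate(loader):
            if frozen > 0:
                if i not in cache:
                    with torch.no_grad():
                        h = x
                        for stage in model.stages[:frozen]:
                            h = stage(h)
                        cache[i] = h
                h = cache[i]  # reuse pre-computed activations in later epochs
            else:
                h = x
            out = model(h, start_stage=frozen)
            loss = F.cross_entropy(out, y)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Progressively freeze the next earliest stage on a fixed schedule
        # (the real freezing criterion is a design choice of the paper).
        if (epoch + 1) % freeze_every == 0 and frozen < len(model.stages) - 1:
            freeze_stage(model, frozen)
            frozen += 1
            cache.clear()  # the frozen prefix changed, so recompute lazily
            opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.01)
```

Caching the frozen prefix's outputs trades memory for the forward-pass savings the abstract mentions; for large datasets such a cache would more plausibly live on disk than in RAM.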
ACM-SRC Semi-Finalist: no
Poster: PDF
Poster Summary: PDF
Reproducibility Description Appendix: PDF