A Study on Checkpoints Compression for Adjoint Computation | SC18 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

The 4th International Workshop on Data Reduction for Big Scientific Data (DRBSD-4)


A Study on Checkpoints Compression for Adjoint Computation

Authors: Kai-Yuan Hou (Northwestern University)

Abstract: When we want to understand the sensitivity of a simulation model with respect to an input value or to optimize an objective function, the gradient usually provides a good hint. The adjoint state method is a widely used numerical method for computing the gradient of a function. It decomposes the function into a sequence of basic operations, performs a forward sweep to evaluate the function, and then performs a backward sweep that applies the chain rule iteratively to calculate the gradient. One limitation of the adjoint state method is that all intermediate values from the forward sweep are needed by the backward sweep. Because memory is limited, we usually keep only a portion of those values, called checkpoints, in memory. The remaining values are either stored on the hard disk or recomputed from the nearest checkpoint whenever needed. In this work, we seek to compress the intermediate values in order to better utilize the limited memory and to speed up I/O when checkpointing to the hard disk.
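The sketch below is only a rough illustration of the setting the abstract describes, not the authors' implementation: a forward sweep that stores every `checkpoint_every`-th intermediate state as a checkpoint, and a backward sweep that recomputes the missing states from the nearest checkpoint before applying the chain rule. The function names (`forward_step`, `adjoint_with_checkpoints`) and the specific operation chosen are illustrative assumptions.

```python
import numpy as np

def forward_step(x):
    """One basic operation of the forward sweep (illustrative only)."""
    return np.sin(x) + 0.5 * x

def forward_step_grad(x):
    """Local derivative of forward_step, used by the chain rule."""
    return np.cos(x) + 0.5

def adjoint_with_checkpoints(x0, n_steps, checkpoint_every):
    """Evaluate the chained function and its gradient with checkpointing."""
    # Forward sweep: keep only a subset of intermediate states (checkpoints).
    checkpoints = {0: x0}
    x = x0
    for i in range(n_steps):
        x = forward_step(x)
        if (i + 1) % checkpoint_every == 0:
            checkpoints[i + 1] = x
    y = x  # function value

    # Backward sweep: accumulate dy/dx0 by the chain rule, newest step first.
    grad = 1.0
    for i in reversed(range(n_steps)):
        # Recover the state entering step i: restart from the nearest
        # checkpoint at or before i and recompute forward up to i.
        base = max(k for k in checkpoints if k <= i)
        xi = checkpoints[base]
        for _ in range(i - base):
            xi = forward_step(xi)
        grad *= forward_step_grad(xi)
    return y, grad

if __name__ == "__main__":
    y, g = adjoint_with_checkpoints(x0=0.3, n_steps=16, checkpoint_every=4)
    print(y, g)
```

Storing fewer checkpoints trades extra recomputation in the backward sweep for lower memory use; compressing the checkpoints, as studied in this work, attacks the same memory and I/O limits from the other direction.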


