Understanding Application Recomputability without Crash Consistency in Non-Volatile Memory
Abstract: Emerging non-volatile memory (NVM) is a promising candidate for main memory because of its good performance, density, and energy efficiency. Leveraging the non-volatility of NVM as main memory, we can recover data objects and resume application computation (recomputation) after an application crash. Existing work studies how to ensure that data objects stored in NVM can be recovered to a consistent version during system recovery, a property referred to as crash consistency. However, enabling crash consistency often requires program modification and incurs large runtime overhead.
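A toy sketch (an assumed scenario, not taken from the paper) of why crash consistency matters: stores may reach NVM out of program order, so a crash between two related updates can leave a persistent data object violating its own invariant. The record layout and field names here are hypothetical.

```python
# Persistent record whose invariant is that `length` matches the payload.
record = {"length": 3, "payload": "abc"}   # consistent initial state

def crashy_update(rec, new_payload):
    """Update that 'crashes' after persisting the length field but
    before persisting the payload, leaving an inconsistent record."""
    rec["length"] = len(new_payload)    # this store is persisted ...
    raise RuntimeError("crash")         # ... then the system crashes
    rec["payload"] = new_payload        # this store never reaches NVM

try:
    crashy_update(record, "hello")
except RuntimeError:
    pass  # simulate system recovery after the crash

# Recovery sees length == 5 but the stale 3-byte payload: inconsistent.
consistent = record["length"] == len(record["payload"])
print(consistent)  # False
```

Crash-consistency mechanisms (e.g., logging or cache-line flush ordering) exist precisely to rule out such interleavings, at the cost the abstract describes.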
In this paper, we take a different view of application recomputation in NVM. Rather than enforcing consistency of data objects, we aim to understand whether an application is recomputable given possibly inconsistent data objects in NVM. We introduce a PIN-based simulation tool, NVC, to study application recomputability in NVM without crash consistency. The tool allows the user to randomly trigger application crashes and then perform postmortem analysis (i.e., analysis of data consistency) on data values in caches and memory. We use NVC to study a set of applications and reveal that some of them are inherently tolerant of inconsistent data objects; we analyze the reasons for this in detail. We also study an optimization technique that accelerates NVC's simulation, allowing us to apply NVC to data-intensive applications with large data sets.
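The kind of inherent tolerance the abstract alludes to can be sketched with a toy experiment (my own illustration, not the NVC tool or its workloads): an iterative solver that is a contraction converges to the same fixed point even when recomputation resumes from a snapshot in which one update was lost to a simulated crash. The `relax` kernel and crash model below are assumptions for illustration.

```python
import random

def relax(state, b, iters):
    """Jacobi-style contraction on a ring: x_i <- 0.25*(x_{i-1}+x_{i+1}) + b_i.
    Spectral radius < 1, so it converges to a unique fixed point from
    any starting state -- including an inconsistent one."""
    n = len(state)
    for _ in range(iters):
        state = [0.25 * (state[(i - 1) % n] + state[(i + 1) % n]) + b[i]
                 for i in range(n)]
    return state

def run_with_crash(initial, b, crash_at, resume_iters):
    """Run until a simulated crash that loses one element's update
    (its old value persists), then resume recomputation from the
    inconsistent snapshot."""
    state = relax(list(initial), b, crash_at)
    lost = random.randrange(len(state))
    state[lost] = initial[lost]          # lost update: stale value in NVM
    return relax(state, b, resume_iters)

random.seed(0)
initial = [0.0, 4.0, 8.0, 4.0]
b = [1.0, 2.0, 3.0, 2.0]
clean = relax(list(initial), b, 200)
crashed = run_with_crash(initial, b, crash_at=37, resume_iters=200)
# Self-correcting computation: both runs reach the same fixed point.
print(all(abs(a - c) < 1e-6 for a, c in zip(clean, crashed)))  # True
```

Applications without such self-correcting structure (e.g., those whose invariants are permanently broken by a lost update, as in pointer-based data structures) would instead need the crash-consistency mechanisms discussed above.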
Back to MCHPC’18: Workshop on Memory Centric High Performance Computing Archive Listing