Description

Experiments are a key component of systems and HPC-related research. They help validate new ideas and concepts. Sharing and reproducing experiments, however, is challenging, especially when computational experiments reside in multiple computing environments, are scattered across multiple directories, are disconnected from each other, or lack sufficient documentation.
In this paper, we show how sharing, porting, and reproducing distributed and iterative experiments can be simplified by using an automatic containerization tool to capture and repeat an experiment, together with a convention for organizing repeated runs of an experiment. Using a simulation-analysis workflow, we show how semantically organized containers can help a reviewer find all experiments behind a given result and re-execute them with a fail-proof guarantee. We discuss the outstanding challenges of adopting this method as an artifact evaluation mechanism.