SC18 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Innovating the Network for Data Intensive Science (INDIS)


Analysis of CPU Pinning and Storage Configuration in 100 Gbps Network Data Transfer

Abstract: A common bottleneck in high-speed network data transfers is a shortage of CPU resources, and a number of techniques and solutions have been proposed to reduce the CPU load of data transfer. Two such techniques are optimizing core-affinity settings on a Non-Uniform Memory Access (NUMA) system and using NVMe over Fabrics. Our assumption is that binding processes to the local processor improves the overall performance of high-speed network data transfers compared to binding them to individual cores or leaving them unbound. Furthermore, NVMe over Fabrics reduces CPU utilization while requiring fewer processes. To evaluate these assumptions, we performed a series of experiments with different core-affinity and storage settings. We found evidence that binding processes to the local processor, rather than to individual cores, improves file transfer performance in most use cases, and that NVMe over Fabrics transfers files more efficiently than traditional file transfers over Local Area Networks (LANs). We reached the maximum SSD performance threshold using 32 transfer processes with traditional file transfers, but only 8 processes, with reduced CPU utilization, using NVMe over Fabrics.
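The core-affinity setting described in the abstract can be sketched with the Linux `sched_setaffinity` call, exposed in Python's standard `os` module. This is an illustrative sketch, not the paper's tooling: the choice of which CPUs constitute the "local" NUMA node is an assumption here, and on a real system it would be read from `/sys/devices/system/node/node<N>/cpulist` for the node nearest the NIC.

```python
import os

def pin_to_local_cpus(pid, cpus):
    """Bind a process to a set of logical CPUs (Linux only).

    Binding a transfer process to all CPUs of the NUMA node local to the
    NIC -- rather than to one fixed core -- is the "local processor"
    affinity setting the abstract compares against per-core binding and
    no binding at all.
    """
    os.sched_setaffinity(pid, cpus)   # restrict scheduling to `cpus`
    return os.sched_getaffinity(pid)  # return the mask now in effect

# Example: pin this process (pid 0 means "self") to the first CPU it is
# currently allowed to run on; a real setup would pass the NIC-local
# node's full CPU list instead.
available = sorted(os.sched_getaffinity(0))
mask = pin_to_local_cpus(0, {available[0]})
```

Tools such as `numactl --cpunodebind` achieve the same binding from the command line without modifying the transfer application.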
