Network Research Exhibition
The SC Conference Series is a test bed for cutting-edge developments in high-performance networking, computing, storage, and analysis.
SCinet invites proposals for the 2018 Network Research Exhibition (NRE) at the SC conference. NRE demonstrations leverage the advanced capabilities of SCinet, SC’s dedicated high-capacity network. For the duration of the conference, SCinet is the fastest and most powerful network in the world.
Network researchers and professionals from government, education, research, and industry are invited to submit proposals for demonstrations and experiments that display innovation in emerging network hardware, protocols, and advanced network-intensive scientific applications.
Each year, a selection of NRE participants are invited to share the results of their demos and experiments from the preceding year’s conference as part of the Innovating the Network for Data-Intensive Science (INDIS) workshop.
Accepted NRE Demos
NRE-1
SCinet Multi 100G Data Transfer Node for Multi Tenant Production Environment
Location: Booth 2450
Multi-tenant services and user environments for high-performance networking and data-intensive workflows present emerging network service challenges. These challenges are even more complex in the dynamic experimental environment of the SC conference series. This NRE describes approaches to address them.
Download the final NRE-1 demo submission (PDF)
NRE-2
Demonstrations of 400 Gbps Disk-to-Disk WAN File Transfers using iWARP and NVMe-oF
Location: Booth 2851
NASA requires the processing and exchange of ever-increasing volumes of scientific data, so NASA networks must scale up to ever-increasing speeds, with 100 Gigabit per second (Gbps) networks being the current challenge. However, it is not sufficient to simply have 100 Gbps network pipes, since normal data transfer rates would not even fill a 1 Gbps pipe. The NASA Goddard High End Computer Networking (HECN) team will demonstrate systems and techniques to achieve near 400 Gbps line-rate disk-to-disk data transfers between a high-performance 4x100G NVMe server at SC18 and a pair of high-performance 2x100G NVMe servers across two national wide-area 4x100G network paths, utilizing NVMe-oF (NVMe over Fabrics) and iWARP (Internet Wide Area RDMA Protocol) to transfer the data between the servers' NVMe drives.
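As a rough, back-of-the-envelope illustration (not part of the submission; the per-drive read rate below is an assumption), the number of NVMe drives needed to keep four 100 Gbps paths full can be estimated as follows:

```python
# Rough sizing estimate for a 4x100G disk-to-disk transfer (illustrative only).
TARGET_GBPS = 400                # four 100 Gbps WAN paths
DRIVE_READ_GBPS = 3.0 * 8        # assumed ~3 GB/s sequential read per NVMe drive

drives_needed = TARGET_GBPS / DRIVE_READ_GBPS
print(f"Approx. NVMe drives needed to sustain {TARGET_GBPS} Gbps: {drives_needed:.1f}")
# => roughly 17 drives, before protocol and file-system overhead
```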
Download the final NRE-2 demo submission (PDF)
NRE-3
Building User Friendly Data Transfer Nodes
Location: Booth 2041
A Data Transfer Node (DTN) is a hardware and software system that allows for fast and efficient transport of data over long distances. DTNs support data-intensive science such as that found in the high energy physics (HEP) community. This demonstration will show how multiple data streams can be transferred across the globe using DTNs and high-performance networks.
Download the final NRE-3 demo submission (PDF)
NRE-4
AutoGOLE
Location: Booth 2041
The Global Lambda Integrated Facility (GLIF) Automated GOLE (AutoGOLE) is a worldwide collaboration of GLIF Open Lightpath Exchanges (GOLEs) and R&E networks to deliver network services end-to-end in a fully automated way, wherein connection requests are handled through the Network Service Interface Connection Service (NSI-CS).
Download the final NRE-4 demo submission (PDF)
NRE-5
IRNC Software Defined Exchange (SDX) Services Integrated With 100 Gbps Data Transfer Nodes (DTNs) for Petascale Science
Location: Booth 2851
iCAIR is designing, developing, implementing and experimenting with an international Software Defined Exchange (SDX) at the StarLight International/National Communications Exchange Facility, which integrates services based on 100 Gbps Data Transfer Nodes (DTNs) for Wide Area Networks (WANs), including trans-oceanic WANs, to provide high performance transport services for petascale science, controlled using Software Defined Networking (SDN) techniques. These SDN enabled DTN services are being designed specifically to optimize capabilities for supporting large scale, high capacity, high performance, reliable, high quality, sustained individual data streams for science research.
Download the final NRE-5 demo submission (PDF)
NRE-6
DTNs Closely Integrated with WAN High Performance 100 Gbps Optical Channels
Location: Booth 2851
Data Transfer Nodes (DTNs) are primarily used with L3 services, and in some cases with L2 services. This research project is exploring ways to directly integrate DTNs with 100 Gbps WAN channels based on optical networking. This project is using an international 100 Gbps testbed that has been designed, implemented and operated by Ciena. Recent developments will be showcased through demonstrations at SC18.
Download the final NRE-6 demo submission (PDF)
NRE-7
Applying P4 To Supporting Data Intensive Science Workflows On Large Scale Networks
Location: Booth 2851
These demonstrations will show how P4 can be used to support large-scale data-intensive science workflows on high-capacity, high-performance WANs and LANs. It has long been recognized that managing such workflows requires specialized programmable networking techniques. Recently, many of these techniques have been based on Software Defined Networking (SDN) architecture and technology using the OpenFlow protocol. However, OpenFlow has multiple limitations, and consequently network researchers have begun to develop true programming languages that provide more agile, dynamic network services and capabilities. P4 is an emerging language for programmable networks that has the potential to address many of the issues involved in supporting these workflows.
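P4 itself is a domain-specific language for packet-processing pipelines; the core match-action idea it programs can be sketched conceptually in a few lines of Python (the table entries, header fields, and actions below are invented for illustration and are not an actual P4 program):

```python
# Conceptual model of the match-action abstraction that P4 programs express.
# Table entries, header fields, and actions are invented for illustration.
def forward(pkt, port):      return ("forward", port)
def rate_limit(pkt, mbps):   return ("rate_limit", mbps)
def drop(pkt):               return ("drop", None)

# Exact-match table keyed on (dst_ip, dscp): science flows get a dedicated port.
table = {
    ("192.0.2.10", 46): lambda p: forward(p, port=7),      # expedited science flow
    ("192.0.2.10", 0):  lambda p: rate_limit(p, mbps=500),  # best-effort traffic
}

def apply(pkt):
    action = table.get((pkt["dst_ip"], pkt["dscp"]), drop)  # table miss: default drop
    return action(pkt)

print(apply({"dst_ip": "192.0.2.10", "dscp": 46}))          # ('forward', 7)
```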
Download the final NRE-7 demo submission (PDF)
NRE-8
International WAN High Performance Data Transfer Services Integrated With 100 Gbps Data Transfer Nodes for Petascale Science (PetaTrans)
Location: Booth 2851
PetaTrans (high-performance transport for petascale science) is a research project, based on 100 Gbps Data Transfer Nodes (DTNs), directed at improving large-scale WAN services for high-performance, long-duration, large-capacity single data flows. iCAIR is designing, developing, and experimenting with multiple designs and configurations of 100 Gbps DTNs over 100 Gbps Wide Area Networks (WANs), including Software Defined WANs and especially trans-oceanic WANs. Recent developments will be showcased through demonstrations at SC18.
Download the final NRE-8 demo submission (PDF)
NRE-9
IRNC Software Defined Exchange (SDX) Services Integrated With 100 Gbps Data Transfer Nodes (DTNs) for Petascale Science
Location: Booth 2851
iCAIR is designing, developing, implementing and experimenting with an international Software Defined Exchange (SDX) at the StarLight International/National Communications Exchange Facility, which integrates services based on 100 Gbps Data Transfer Nodes (DTNs) for Wide Area Networks (WANs), including trans-oceanic WANs, to provide high performance transport services for petascale science, controlled using Software Defined Networking (SDN) techniques. These SDN enabled DTN services are being designed specifically to optimize capabilities for supporting large scale, high capacity, high performance, reliable, high quality, sustained individual data streams for science research.
Download the final NRE-9 demo submission (PDF)
NRE-10
mdtmFTP: A High-Performance Data Transfer Tool
Location: Booth 2851
To address challenges in high-performance data movement for large-scale science, the Fermilab network research group has developed mdtmFTP, a high-performance data transfer tool that optimizes data transfer on multicore platforms. mdtmFTP has a number of advanced features. First, it adopts a pipelined I/O design: data transfer tasks are carried out in a pipelined manner across multiple cores, and dedicated threads are spawned to perform network and disk I/O operations in parallel. Second, mdtmFTP uses multicore-aware data transfer middleware (MDTM) to schedule an optimal core for each thread, based on the system configuration, in order to optimize throughput across the underlying multicore platform. Third, mdtmFTP implements a large virtual file mechanism to efficiently handle lots-of-small-files (LOSF) situations. Finally, mdtmFTP utilizes optimization mechanisms such as zero copy, asynchronous I/O, batch processing, and pre-allocated buffer pools to maximize performance.
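The pipelined, multi-threaded I/O design described above can be sketched in simplified form (a minimal illustration, not mdtmFTP code; core pinning, zero copy, and LOSF handling are omitted):

```python
# Minimal sketch of pipelined disk-read / network-send threads with a bounded queue.
# Illustrative only; mdtmFTP additionally pins threads to cores and uses zero-copy I/O.
import queue
import threading

CHUNK = 4 * 1024 * 1024           # 4 MiB read granularity (assumed)
buffers = queue.Queue(maxsize=8)  # bounded queue keeps reader and sender in lockstep

def reader(path):
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            buffers.put(chunk)
    buffers.put(None)             # sentinel: end of file

def sender(send_fn):
    while (chunk := buffers.get()) is not None:
        send_fn(chunk)            # e.g. socket.sendall() in a real transfer

if __name__ == "__main__":
    sent = []
    t1 = threading.Thread(target=reader, args=(__file__,))
    t2 = threading.Thread(target=sender, args=(sent.append,))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(f"pipelined {sum(len(c) for c in sent)} bytes")
```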
Download the final NRE-10 demo submission (PDF)
NRE-11
Toward an NRP, xRP: Super-Channels and SDXs for Data-Intensive Science
Location: Booth not specified
A consortium has been established to design, develop, implement, and operate an end-to-end data transport service for data-intensive science within the framework of a large-scale distributed Science DMZ. Many science research communities require simple data transport across multi-domain WANs and LANs over end-to-end 100 Gbps connections, for example, to enable virtual co-location of data with computing, analytic systems, instruments, and visualization systems. Provisioning such end-to-end disk-to-disk services is a challenge because the data must be transported across multiple types of networks and network domains: LANs, campus networks, and WANs, including regional, national, and international networks. The consortium addresses this need by leveraging the existing ESnet Science DMZ architecture. A first step was to create a regional Science DMZ, the Pacific Research Platform (PRP), an initiative led by Larry Smarr and Tom DeFanti. Partners include multiple California universities, Pacific Wave, CENIC, the StarLight International/National Communications Exchange Facility, and the Metropolitan Research and Education Network (MREN), a regional network serving seven states in the upper Midwest. More recently, two workshops held at Montana State University in Bozeman, Montana, have focused on the concept of a National Research Platform (NRP). In addition, a Global Research Platform is being developed with multiple international partners.
Download the final NRE-11 demo submission (PDF)
NRE-12
Bridging Talents through SAGE2 Collaboration over SDN/IP
Location: Booth 3643
The key to fostering innovation is enabling talented people to collaborate freely on promising ideas. SAGE2 has long been one of the best visual collaboration tools for this purpose. Meanwhile, SDN/IP is a promising technology that combines the flexibility of software-defined networking (SDN) with the ubiquity of traditional IP networks. To take advantage of the agility and versatility of SDN/IP and to help scientists and network researchers carry out international collaborations, an SDN/IP testbed has been established between multiple sites in Taiwan, Japan, and the USA, with SAGE2 nodes at each participating institution communicating over the testbed.
Download the final NRE-12 demo submission (PDF)
NRE-14
SENSE: SDN for End-to-end Networked Science at the Exascale
Location: Booths 4211, 1413
The Software-defined network for End-to-end Networked Science at Exascale (SENSE) research project is building smart network services to accelerate scientific discovery in the era of ‘big data’ driven by Exascale, cloud computing, machine learning and AI. The project’s architecture, models, and demonstrated prototype define the mechanisms needed to dynamically build end-to-end virtual guaranteed networks across administrative domains, with no manual intervention. In addition, a highly intuitive ‘intent’ based interface, as defined by the project, allows applications to express their high-level service requirements, and an intelligent, scalable model-based software orchestrator converts that intent into appropriate network services, configured across multiple types of devices. The significance of these capabilities is the ability for science applications to manage the network as a first-class schedulable resource akin to instruments, compute, and storage, to enable well defined and highly tuned complex workflows that require close coupling of resources spread across a vast geographic footprint such as those used in science domains like high-energy physics and basic energy sciences.
Download the final NRE-14 demo submission (PDF)
NRE-16
Global Petascale to Exascale Science Workflows Accelerated by Next Generation SDN Architectures and Applications
Location: Booth 1413
We will demonstrate several major advances in software-defined and Terabit/sec networks, intelligent global operations and monitoring systems, workflow optimization methodologies with real-time analytics, and state-of-the-art long-distance data transfer methods, tools, and server designs, to meet the challenges faced by leading-edge data-intensive experimental programs in high energy physics, astrophysics, and climate science, including the Large Hadron Collider (LHC), the Large Synoptic Survey Telescope (LSST), the Linac Coherent Light Source (LCLS-II), the Earth System Grid Federation, and others. Several of the SC18 demonstrations will include a fundamentally new concept of “consistent network operations,” in which stable, load-balanced, high-throughput workflows cross optimally chosen network paths, up to preset high-water marks to accommodate other traffic, provided by autonomous site-resident services dynamically interacting with network-resident services in response to demands from the science programs’ principal data distribution and management systems. This will be empowered by end-to-end SDN methods extending all the way to auto-configured Data Transfer Nodes (DTNs), including intent-based networking APIs combined with transfer applications such as Caltech’s open source TCP-based FDT, which has been shown to match 100G long-distance paths at wire speed in production networks. During the demos, the data flows will be steered across regional, continental, and transoceanic wide area networks through the orchestration software, controllers, and automated virtualization software stacks developed in the SENSE, PRP, AmLight, Kytos, and other collaborating projects. The DTNs employed will use the latest high-throughput SSDs and flow control methods at the edges, such as FireQoS and/or Open vSwitch, complemented by NVMe-over-fabrics installations in some locations.
Download the final NRE-16 demo submission (PDF)
NRE-17
Large Synoptic Survey Telescope (LSST) Real Time Low Latency Transfers for Scientific Processing Demonstrations
Location: Booth not specified
At SC18 in Dallas, Texas, we plan to experiment with data transfer rates using 100 Gbps FIONA Data Transfer Nodes (DTNs) in Chile and Illinois. The demos aim to achieve three goals. First, we will demonstrate real-time, low-latency transfers for scientific processing of multi-gigabyte images from the LSST base station site in La Serena, Chile, flowing over the REUNA Chilean National Research & Education Network (NREN), as well as ANSP and RNP Brazilian national circuits and the AmLight-ExP Atlantic and Pacific Ring, through AMPATH2 to StarLight and NCSA. Second, we will simulate operational and data-quality traffic to SLAC, Tucson, and other sites, including the Dallas show floor. Third, we will stress test the AmLight ExP network to simulate the LSST annual multi-petabyte Data Release from NCSA to La Serena at rates consistent with those required for LSST operations.
Download the final NRE-17 demo submission (PDF)
NRE-18
Americas Lightpaths Express and Protect Enhances Infrastructure for Research and Education
Location: Booth not specified
Americas Lightpaths Express and Protect (AmLight ExP) enables research and education among the people of the Americas through the operation of production infrastructure for communication and collaboration between the U.S. and Western Hemisphere science and engineering research and education communities. AmLight ExP follows a hybrid network strategy that combines optical spectrum (Express) and leased capacity (Protect) to provide a reliable, leading-edge, diverse network infrastructure for research and education.
Download the final NRE-18 demo submission (PDF)
NRE-19
SDN Federation Protocol: Toward Fine-Grained Interdomain Routing
Location: Booth 1413
Member networks of collaborative data sciences are increasingly deploying software-defined networking (SDN) within their own networks, but they are still interconnected by the Border Gateway Protocol (BGP). The inconsistency between the fine-grained SDN policies used within member networks and the coarse-grained (i.e., destination-IP-based) routing information exchanged between them can lead to serious issues, including black holes, reduced reachability, and forwarding loops. We design SFP, the first pull-based, fully distributed, fine-grained interdomain routing protocol, in which member networks query neighbors for routing information of interest. We design SFP as a modular extension of BGP and develop two novel techniques, on-demand information dissemination and an efficient algorithm called MaxODI, to address the potential scalability concerns of SFP.
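The pull-based idea can be illustrated conceptually: rather than receiving every destination prefix a neighbor advertises, a member network asks only for routes matching its fine-grained interest. The record fields and values below are invented for illustration and are not the SFP message format:

```python
# Conceptual illustration of pull-based, fine-grained route retrieval.
# Not the SFP protocol itself; the route records below are invented.
NEIGHBOR_ROUTES = [
    {"prefix": "203.0.113.0/24", "app": "lhc-transfer", "next_hop": "peer-a"},
    {"prefix": "203.0.113.0/24", "app": "default",      "next_hop": "peer-b"},
    {"prefix": "198.51.100.0/24", "app": "default",     "next_hop": "peer-b"},
]

def query(neighbor_routes, prefix, app):
    """Pull only the routes matching this network's interest (prefix + application)."""
    return [r for r in neighbor_routes if r["prefix"] == prefix and r["app"] == app]

# A member network asks its neighbor only about LHC transfer traffic to one prefix.
print(query(NEIGHBOR_ROUTES, "203.0.113.0/24", "lhc-transfer"))
```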
Download the final NRE-19 demo submission (PDF)
NRE-20
Mercator: Multi-domain Network State Abstraction
Location: Booth 1413
Multi-domain network resource reservation systems are being deployed, driven by the demand for, and substantial benefits of, predictable network resources. However, a major shortcoming of existing systems is their coarse granularity, a consequence of the participating networks’ reluctance to reveal sensitive information, which can result in substantial inefficiencies. We present Mercator, a novel multi-domain network resource discovery system that provides fine-grained, global network resource information for collaborative sciences. The foundation of Mercator is a resource abstraction through algebraic-expression enumeration (i.e., linear inequalities and equations) as a compact representation of the available bandwidth in multi-domain networks. We also develop an obfuscating protocol in Mercator to address privacy concerns by ensuring that no participant can associate the algebraic expressions with the corresponding member networks, and we introduce a super-set projection technique to increase Mercator’s scalability.
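A toy example of the abstraction (flow names, coefficients, and capacities invented): available bandwidth on hidden internal links is exposed only as linear inequalities over externally visible flows, so a candidate reservation can be checked for feasibility without revealing the topology:

```python
# Toy illustration of the resource-abstraction idea: each inequality
#   sum(coeff[flow] * rate[flow]) <= capacity
# constrains externally visible flows without revealing the internal link it came from.
# Coefficients and capacities below are invented for illustration.
constraints = [
    ({"flowA": 1, "flowB": 1}, 100.0),  # flows A and B share some hidden 100 Gbps link
    ({"flowB": 1, "flowC": 1},  40.0),  # flows B and C share another 40 Gbps link
]

def feasible(rates):
    """Check a candidate reservation (Gbps per flow) against the abstracted constraints."""
    return all(sum(c.get(f, 0) * r for f, r in rates.items()) <= cap
               for c, cap in constraints)

print(feasible({"flowA": 60, "flowB": 30, "flowC": 5}))   # True: fits both inequalities
print(feasible({"flowA": 60, "flowB": 30, "flowC": 20}))  # False: second link oversubscribed
```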
Download the final NRE-20 demo submission (PDF)
NRE-21
Trident: Unified SDN Programming Framework with Automatic Updates
Location: Booth 1413
Data-intensive collaborative data sciences can benefit substantially from software-defined networking (SDN) and network functions (NF). Unified SDN programming, which integrates states of network functions into SDN control plane programming, brings these two technologies together. However, integrating asynchronous, continuously changing states of network functions into SDN can introduce basic complexities: (1) how to naturally integrate network function state into SDN programming; (2) how to flexibly construct consistent, correlated routes to utilize network function state; and (3) how to handle dynamicity of unified SDN programming. We design Trident, the first unified SDN programming framework that introduces programming primitives including stream attributes, route algebra and live variables to remove these complexities.
Download the final NRE-21 demo submission (PDF)
NRE-23
UCSD/Qualcomm Institute GPU Challenge – Nautilus and CHASE-CI services: At-scale Machine Learning and Analysis With NASA MERRA Data
Location: Booth not specified
Rapid object segmentation for evolving time-and-space earth science data is challenging. Often in the earth sciences, objects of interest (e.g., rain clouds, flash floods, droughts) are not clearly defined and change rapidly and dynamically in time. Specialized algorithms are needed to identify, locate, and track these types of phenomena. In this GPU Challenge demonstration, we use a Jupyter Notebook connected to Nautilus and CHASE-CI services to access the new Kubernetes GPU cluster and TPU resources and perform rapid object segmentation by applying a machine learning approach, Flood-Filling Networks (FFN) [Januszewski et al., 2016], to the NASA MERRA v2 data stored on a UCSD FIONA and accessible via a THREDDS server on the Pacific Research Platform. These resources provide an automated algorithm development and deployment platform across the CHASE-CI Kubernetes GPU cluster.
Download the final NRE-23 demo submission (PDF)
NRE-26
BigData Express: A Scalable and High-Performance Data Transfer Platform
Location: Booths 2851, 4211
Big data has emerged as a driving force for scientific discoveries. To meet data transfer challenges in the big data era, DOE’s Advanced Scientific Computing Research (ASCR) office has funded the BigData Express project. BigData Express is targeted at providing schedulable, predictable, and high-performance data transfer services for DOE’s large-scale science computing facilities and their collaborators. In this demo, we use the BigData Express software to demonstrate bulk data movement over wide area networks.
Download the final NRE-26 demo submission (PDF)
NRE-27
Poseidon: A Machine Learning Approach to Network Device Role and Behavior Identification
Location: Booth 2450
Effective network security relies heavily upon the situational awareness of the network operator. At a minimum, addressing risks requires the operator to answer two basic questions in near real time: what is on the network, and what is it doing? In practice, answering these questions accurately has proven to be a challenge in and of itself; doing so at scale in a reasonable, automated fashion can be debilitating to all but the smallest of organizations. Poseidon is an open-source project focused on leveraging software-defined networks (SDNs) and machine learning (ML) techniques to answer these two questions.
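As a hedged illustration of the general approach (not Poseidon’s actual feature set, models, or code), a classifier can be trained on per-device traffic features to predict a role label:

```python
# Toy illustration of ML-based device-role classification from traffic features.
# Feature choices and labels are invented; Poseidon's real pipeline differs.
from sklearn.ensemble import RandomForestClassifier

# Features per device: [mean packets/s, fraction of DNS traffic, distinct peers/hour]
X = [[900, 0.01, 400], [850, 0.02, 350],   # busy servers
     [20, 0.30, 5],    [15, 0.25, 8],      # desktop workstations
     [5,  0.05, 2],    [7,  0.04, 3]]      # printers / IoT-like devices
y = ["server", "server", "workstation", "workstation", "printer", "printer"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[800, 0.02, 300], [10, 0.28, 6]]))  # expected: ['server' 'workstation']
```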
Download the final NRE-27 demo submission (PDF)
NRE-28
Resilient Distributed Processing
Location: Booth 2851
This demonstration will show the dynamic arrangement of widely distributed processing of large volumes of data across a set of compute and network resources, organized in response to resource availability and changing application demands. A real-time processing pipeline will be demonstrated from SC18 to the Naval Research Laboratory in Washington, DC, and back to SC18, while high-volume bulk data is transferred concurrently across the same data paths. A software-controlled network will be assembled using a number of switches, four 100G connections from DC to Denver, two 100G connections from StarLight, and one 100G connection from NERSC. We plan to show rapid deployment and redeployment, real-time monitoring, and QoS management of these application data flows with very different network demands. Technologies we intend to leverage include SDN, RDMA, RoCE, NVMe, GPU acceleration, and others.
Download the final NRE-28 demo submission (PDF)
NRE-30
Supporting Scientific Data-intensive Collaborations With Distributed Storage And Edge Services
Location: Booths 1204, 2853, 1413
Scientific collaboration with large or diverse data can be a challenging problem which diverts time from core research goals. Excellent network connectivity is central to enabling effective infrastructures supporting science. The University of Michigan and partners at Michigan State, Wayne State, Indiana University, University of Chicago, University of Utah and others will be demonstrating capabilities in this area from the OSiRIS, SLATE and AGLT2 projects.
Download the final NRE-30 demo submission (PDF)
NRE-31
Programmable Privacy-Preserving Network Measurement for Network Usage Analysis and Troubleshooting
Location: Booth not specified
Network measurement and monitoring are instrumental to network operations, planning, and troubleshooting. However, increasing line rates (100+ Gbps), changing measurement targets and metrics, privacy concerns, and policy differences across multiple R&E network domains have introduced tremendous challenges in operating such high-speed heterogeneous networks, understanding traffic patterns, providing for resource optimization, and locating and resolving network issues. There is strong demand for a flexible, high-performance measurement instrument that can empower network operators to achieve the versatile objectives of effective network management and resource provisioning. In this demonstration, we present AMIS: Advanced Measurement Instrument and Services, which achieves programmable, flow-granularity, and event-driven network measurement, sustains scalable line rates, meets evolving measurement objectives, and derives knowledge for network advancement.
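A minimal sketch of the flow-granularity aggregation idea (illustrative only; the record format is invented, and AMIS performs this at line rate rather than in Python):

```python
# Toy flow-granularity aggregation: roll packet records up into per-flow counters.
# Record fields are invented for illustration.
from collections import defaultdict

packets = [
    {"src": "10.1.1.5", "dst": "192.0.2.9", "sport": 4444, "dport": 443, "bytes": 1500},
    {"src": "10.1.1.5", "dst": "192.0.2.9", "sport": 4444, "dport": 443, "bytes": 1500},
    {"src": "10.2.2.7", "dst": "192.0.2.9", "sport": 5555, "dport": 80,  "bytes": 600},
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for p in packets:
    key = (p["src"], p["dst"], p["sport"], p["dport"])
    flows[key]["packets"] += 1
    flows[key]["bytes"] += p["bytes"]

for key, stats in flows.items():
    print(key, stats)
```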
Download the final NRE-31 demo submission (PDF)
NRE-32
Expressways from Scientific Instrument to Supercomputer: A Prototype Demonstration
Location: Booth not specified
Moving data from modern scientific instruments to supercomputing facilities presents many challenges. For instance, extreme-scale scientific simulations and experiments can generate much more data than can be stored and analyzed efficiently at a single site. Moreover, data movement must be finished within tight schedules. Another example is the growing number of scientific fields that require the ability to analyze data in near real time, so that results from one experiment can guide selection of the next, or even influence the course of a single experiment. Expressways encompasses two demonstrations. First, we build upon our SC16 demonstration of moving a petabyte in a day and present novel techniques for moving a petabyte in half a day. Second, we present a reference architecture and tools for establishing network circuits from scientific instrument to supercomputer that will enable streaming data analysis.
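For scale, the half-day target implies a sustained rate on the order of a couple of hundred gigabits per second; a quick calculation (illustrative, protocol overhead ignored) gives the figure:

```python
# Sustained throughput needed to move 1 PB in half a day (overhead ignored).
PETABYTE_BITS = 1e15 * 8          # 1 PB expressed in bits
HALF_DAY_SECONDS = 12 * 3600

rate_gbps = PETABYTE_BITS / HALF_DAY_SECONDS / 1e9
print(f"Required sustained rate: {rate_gbps:.0f} Gbps")  # roughly 185 Gbps
```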
Download the final NRE-32 demo submission (PDF)
NRE-33
Tracking Network Events with Write Optimized Data Structures
Location: Booth 2450
The basic action of two IP addresses communicating is still a critical part of most security investigations. Typically, security tools focus on logging torrents of security events. Some more advanced environments will try to send the logs to a variety of databases; unfortunately, when faced with indexing billions of events, such databases are usually unable to keep up with the rate of network traffic. As a result, security monitors typically log with little to no indexing. Write-optimized data structures (WODS) provide a novel alternative to traditional data structures: they use RAM to aggregate multiple insertions into a single write and as a result are able to ingest data 10 to 100 times faster while still answering queries in a timely manner. Our Diventi tool uses a write-optimized B-tree known as a Be-tree to index layer 3 network activity from either Bro connection logs or NetFlow data. In 2017 our tool was able to index all Bro IDS-monitored traffic at rates above 100,000 events per second, typically answering queries in milliseconds. This year Diventi will connect directly with the SCinet security team’s core tap and aggregation infrastructure, ingesting NetFlow records directly from the Ixia switch that will be doing the security monitoring. Working closely with the network security team, Diventi will provide sub-second query results to help security responders identify which IPs were communicating at what times.
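The write-optimization idea, buffering many insertions in memory and writing them out as large sorted batches, can be sketched as follows (a simplified illustration of the batching principle, not Diventi’s Be-tree):

```python
# Simplified illustration of write-optimized indexing: buffer inserts in RAM,
# flush them as sorted batches, and answer queries from the buffer plus the batches.
# This is not Diventi's Be-tree, just the batching idea behind it.
import bisect

class BufferedIndex:
    def __init__(self, flush_threshold=4):
        self.buffer, self.segments, self.flush_threshold = [], [], flush_threshold

    def insert(self, key, value):
        self.buffer.append((key, value))
        if len(self.buffer) >= self.flush_threshold:  # one big write instead of many
            self.segments.append(sorted(self.buffer))
            self.buffer = []

    def query(self, key):
        hits = [v for k, v in self.buffer if k == key]
        for seg in self.segments:                     # binary search each sorted batch
            i = bisect.bisect_left(seg, (key,))
            while i < len(seg) and seg[i][0] == key:
                hits.append(seg[i][1]); i += 1
        return hits

idx = BufferedIndex()
for event in [("10.0.0.1", "t1"), ("10.0.0.2", "t2"), ("10.0.0.1", "t3"),
              ("10.0.0.3", "t4"), ("10.0.0.1", "t5")]:
    idx.insert(*event)
print(idx.query("10.0.0.1"))  # timestamps recorded for 10.0.0.1 (buffer + flushed batch)
```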
Download the final NRE-33 demo submission (PDF)
NRE-34
Automated Tensor Analysis For Deep Network Visibility
Location: Booth 2450
We will demonstrate a usable and scalable network security workflow based on ENSIGN, a high-performance data analytics tool based on tensor decompositions, that can analyze huge volumes of network data and provide actionable insights into the network. The enhanced workflow provided by ENSIGN assists in identifying actors who craft their actions to subvert signature-based detection methods and automates much of the labor intensive forensic process of connecting isolated incidents into a coherent attack profile. This approach complements traditional workflows that focus on highlighting individual suspicious activities.
Download the final NRE-34 demo submission (PDF)
NRE-35
Wide Area Workflows at 400 Gbps
Location: Booth 2400
The increasing rate of data production from digital instruments and simulations makes it harder and harder to replicate that data, due to the limitations of local resources. To address situations like these, as part of the NSF-funded Data Capacitor project, Indiana University (IU) worked with Oak Ridge National Laboratory (ORNL) in 2006 to examine the feasibility of using the Lustre file system across 10 Gbps networks to compute in place. The success of these efforts led to the deployment of a Lustre WAN file system in 2009 that allowed researchers to compute against their data across distance and was made available to the NSF TeraGrid project. These efforts continue to have relevance today: at IU, a physics laboratory located across the Bloomington campus from the data center mounts a Lustre file system on its cluster to compute against data that is local to the university’s larger supercomputing resources, and researchers at the Indianapolis campus run Docker on their local workstations to compute against that same file system. The Pittsburgh Supercomputing Center (PSC) mounts the Lustre file system on its Bridges supercomputer to support the efforts of the National Center for Genome Analysis Support, shifting computational workload from IU’s resources to Bridges without moving data.
Download the final NRE-35 demo submission (PDF)
NRE-36
University of Southern California 400G Ethernet Switches
Location: Booths 1403, 1413, 2450
The decentralization of the Internet and increased collaboration in large-scale science projects have increased the demand for higher throughput to move large datasets. Traditional applications that rely only on the TCP protocol suffer from CPU saturation at high bit rates. With the help of SCinet, national and regional network providers, AutoGOLE path provisioning, and the Pacific Research Platform, we intend to demonstrate large-scale disk-to-disk data transfers using data transfer nodes (DTNs). We will demonstrate highly optimized, first-of-their-kind data transfer nodes that take advantage of RDMA over an IP network at 100 Gbps and 200 Gbps.
Download the final NRE-36 demo submission (PDF)
NRE-37
Scalable Network Visualization with RCPs
Location: Booth 327
The demo uses the Reconfigurable Communication Processor (RCP) developed by ALAXALA Networks. The RCP is a router with an edge server, consisting of Reconfigurable Service Modules (RSMs) and Reconfigurable Processing Modules (RPMs). RPMs are FPGA/ASIC-based hardware and can be reconfigured as Flex-Counters or as data planes for IP, MPLS, or EoE (Ethernet over Ethernet). The demo uses both a mirror of traffic actually flowing at SC18 (10 Gbps) and generated test traffic (100 Gbps). The collected traffic statistics are aggregated in real time into 240 per-country counters, realized with Flex-Counters in the reconfigured RCP. The demo also shows that these steps can be carried out with multiple RCPs in a resource pool. NIRVANA-Kai will receive the Flex-Counter results via SNMP and visualize them on the monitor at 100 Gbps wire speed in real time.
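The per-country aggregation step can be pictured in software terms (a conceptual analogue only; the prefix-to-country table is invented, and the real RCP performs this in FPGA/ASIC hardware at line rate):

```python
# Conceptual software analogue of per-country traffic counters.
# The prefix-to-country table is invented for illustration.
from collections import Counter
import ipaddress

COUNTRY_PREFIXES = {  # hypothetical mapping of source prefixes to country codes
    "JP": ipaddress.ip_network("203.0.113.0/25"),
    "US": ipaddress.ip_network("203.0.113.128/25"),
}

def country_of(src_ip):
    addr = ipaddress.ip_address(src_ip)
    return next((cc for cc, net in COUNTRY_PREFIXES.items() if addr in net), "other")

counters = Counter()
for src, nbytes in [("203.0.113.10", 1500), ("203.0.113.200", 900), ("198.51.100.1", 64)]:
    counters[country_of(src)] += nbytes

print(dict(counters))  # e.g. {'JP': 1500, 'US': 900, 'other': 64}
```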
Download the final NRE-37 demo submission (PDF)
2018 NRE Topics
Topics for the 2018 Network Research Exhibition demos and experiments include:
- Software-defined networking
- Novel network architecture
- Switching and routing
- Alternative data transfer protocols
- Network monitoring, management, and control
- Network security, encryption, and resilience
- Open clouds and storage area networks
- HPC-related use of GENI Racks
Important Dates
- May 11, 2018 – SCinet NRE submissions open
- June 1, 2018 – NRE Preliminary Abstracts due
- August 3, 2018 – NRE Network Requirements due
- October 26, 2018 – NRE Final Publishable Submissions due
- February 1, 2019 – NRE Research Results and Findings due
Submit an NRE Proposal
The NRE proposal submission process includes four required components. Each component must be submitted via Dropbox.
1. Preliminary Abstract (PA)
Deadline: Friday, June 1, 2018
The PA is a one-page document outlining the high-level goals and activities related to the proposal. Details on sources and network needs should be included. Failure to submit an abstract at this stage will exclude the demonstration from NRE consideration for SC18.
Download the Preliminary Abstract template (18 KB Word Doc)
2. Network Requirements (NR)
Deadline: Friday, August 3, 2018
The NR document outlines the sources (city, state, country of origin), bandwidth requirements, VLANs identified, and destination booth drops. This document will seed the provisioning process for the network in the Exhibits Hall and will allow for more consistent interaction with our telecommunication carriers. Failure to submit requirements on time will impact SCinet’s ability to deliver the requested network in a timely fashion. Failure to submit any requirements will exclude the demonstration from NRE consideration for SC18, regardless of the PA submission. Please note that submissions can be edited as specific plans for the demonstration are finalized.
3. Final Publishable Submission (FPS)
Deadline: Friday, October 26, 2018
The FPS is a two- to three-page submission with all relevant details of the activities, drawings, and other research details. This document will be published on the SC18 website. Final submissions will be reviewed and officially accepted. This acceptance includes the explicit posting of the FPS on the SC18 website and inclusion in any relevant communications from SCinet to conference attendees, sponsors, etc.
4. Research Results and Findings (RRF)
Deadline: Friday, February 1, 2019
The RRF is a new component of the submission process that includes the submission of findings and results of the demonstrations. These will be collected and provided to the Innovating the Network for Data-Intensive Science (INDIS) workshop team for inclusion in the workshop at SC19. The handoff to INDIS will include the Final Publishable Submission, as well as the read-out of the demonstration results.