SC18 History Makers – Marking SC’s 30th Anniversary – Part 2 (Tue, 11 Dec 2018) https://sc18.supercomputing.org/sc18-history-makers-marking-scs-30th-anniversary-part-2/

The second in a series! The following are exhibitor partner organizations responding to the “How did your organization make supercomputing history?” survey, as compiled by the 30th Anniversary team. Thanks to all our exhibitors for their participation and collaboration over the past three decades, and special thanks to those who filled out the survey.

 

Name of Individual or Organization: Cornell University Center for Advanced Computing (CAC)

Type of organization: Academic

Number of years exhibited at SC? 29

Description of the contribution:

In 1985, Cornell University was selected as one of the first four supercomputing centers sponsored by the National Science Foundation. Cornell’s NSF supercomputing center was established by Nobel Laureate Kenneth Wilson, who inspired the scientific community with the notion that computation stands equal with theory and experiment in scientific inquiry.

Wilson was among the first to use the term Grand Challenge Problems to describe fundamental problems in science and engineering that have broad economic and/or scientific impact and whose solution can be advanced by applying high performance computing techniques and resources (R. Tapia et al.). Wilson and others also coined the term computational science to refer to the search for new discoveries using computation as the main method.

This idea was so powerful that it led to the U.S. Congress passing into law the High Performance Computing and Communication Initiative to stimulate scientific innovations through high-performance computation (P. Denning). Image: Rhodes Hall, home of Cornell University Center for Advanced Computing

 

Name of Individual or Organization: OpenMP Architecture Review Board

Company: OpenMP Architecture Review Board

Type of organization: Standards Body

Number of years exhibited at SC? 15

Description of the contribution:

The OpenMP API was launched in 1997 and has remained a strong presence in the industry. The OpenMP Architecture Review Board established and evolved the shared-memory programming model of choice for on-node multi-threading to complement the Message Passing Interface.

 

Name of Individual or Organization: RIKEN Center for Computational Science

Type of organization: Academic; National Lab

Description of the contribution:

RIKEN, in collaboration with Fujitsu, developed the K computer, the first machine to break the 10-petaflop barrier in 2011.

 

Name of Individual or Organization: JAXA: Japan Aerospace Exploration Agency

Type of organization: National Lab

Number of years exhibited at SC? 24

Description of the contribution:

Developed the world’s first parallel vector supercomputer, the NWT (Numerical Wind Tunnel).

 

Name of Individual or Organization: PIONIER – Polish Optical Internet/PSNC – Poznan Supercomputing and Networking Center

Company: PIONIER – Polish Optical Internet/PSNC – Poznan Supercomputing and Networking Center

Type of organization: Supercomputing and Networking Center

Number of years exhibited at SC? 14

Description of the contribution:

PSNC (Poznan Supercomputing and Networking Center), the operator of the PIONIER Polish Optical Internet, introduced its supercomputer to the TOP500 list for the first time just two years after its foundation in 1993: Tulip, an SGI Power Challenge XL, ranked 486th on the June 1995 TOP500 list and 492nd on the June 1996 TOP500 list.

 

Name of Individual or Organization: Aggregate.Org/University of Kentucky

Type of organization: Academic

Number of years exhibited at SC? 25

Description of the contribution:

In February 1994, in the Purdue University School of Electrical and Computer Engineering, Professor Hank Dietz’s research group (now known as Aggregate.Org and based at the University of Kentucky) built the world’s first Linux PC cluster supercomputer to test their new aggregate function communication model and the open-source hardware that implemented it. It augmented 10 Mb/s Ethernet with PAPERS (Purdue’s Adapter for Parallel Execution and Rapid Synchronization) hardware, which communicated between nodes without an OS call, with total latency as low as 3 µs for operations like barrier synchronization, orders of magnitude faster than most conventional supercomputers.

There was even a timesharing, gang-scheduled meta-OS called PEN (PAPERS ENvironment). The combined photo here shows the first Linux PC cluster and the original PAPERS hardware on the left. However, that cluster was regularly assembled and disassembled so that courses could use the PCs in that lab as workstations. The image on the right is the first permanent Linux PC cluster, complete with a four-display video wall showing a simple CFD simulation. Each of these first two clusters contained just four 486 PCs.

 

Name of Individual or Organization: Juelich Supercomputing Centre

Type of organization: National Lab

Number of years exhibited at SC? 20

Description of the contribution:

During SC 2009, the German supercomputer QPACE became No. 1 on the Green500 list. It was developed by an academic consortium of universities and research centres, together with several companies including the IBM Research and Development Centre in Böblingen, as part of a government-funded research collaboration. The high ranking was achieved by using a very power-efficient processor in combination with a new, innovative auto-tuning mechanism for voltages, as well as a novel liquid-cooling architecture. One 4-rack QPACE system with an aggregate peak performance of more than 100 Tflop/s ran at the Juelich Supercomputing Centre (JSC) for more than 7 years. It was primarily used for Lattice QCD research.

 

Name of Individual or Organization: Numerical Algorithms Group

Company: Numerical Algorithms Group

Type of organization: Not-for-Profit

Number of years exhibited at SC? 30

Description of the contribution:

The Numerical Algorithms Group (NAG) holds a rare position as one of the small handful of organizations that have exhibited at every SC since the first one in Orlando in 1988.

 

Name of Individual or Organization: Kitware, Inc.

Company: Kitware, Inc.

Type of organization: Vendor/Industry

Number of years exhibited at SC? 7

Description of the contribution:

VTK has been providing 3D computer graphics, image processing, and visualization capabilities under an Open Source license for over 20 years. First created in 1993 and maintained by Kitware, this toolkit has had over 400 contributors, with 108 contributors in the most recent release.

 

Name of Individual or Organization: Science and Technology Facilities Council

Company: Science and Technology Facilities Council

Type of organization: Academic; National Lab; International Lab

Number of years exhibited at SC? 19

Description of the contribution:

The Council for the Central Laboratory of the Research Councils (CCLRC) e-Science Centre was formed in 2001 as part of a UK Government initiative to fund a programme of e-infrastructure development for science. This programme comprised a wide range of resources, people, and e-science centres. The initial programme budget was £120 million (US$157 million) over three years. Of this, £75 million (US$98 million) was to be spent on grid application pilots, with another £35 million (US$46 million) for developing industrial-strength Grid middleware.

From 2004 to 2007, CCLRC ran the UK National e-Science presence at SC, in collaboration with the National e-Science Centre. We had a 40′ x 40′ booth with live demonstrations and presentations from research groups across the UK, attracting over 3,000 visitors during this period. The e-Science programme was the forerunner to developments in what is now the modern cloud. A good example of the journey from grid to cloud is our analysis of experimental data from the Large Hadron Collider at CERN.

Since those early days we have gone from strength to strength and, following the merger of the e-Science and Computational Science and Engineering Departments and a name change, we are now STFC’s Scientific Computing Department (SCD), one of the largest in Europe, located across two UK sites. SCD manages technically advanced high performance computing facilities, services, and infrastructure, supporting some of the UK’s most advanced scientific facilities. Our staff offer essential expertise, services, and products that help the scientific community make vital discoveries and deliver progress.

 

Name of Individual or Organization: Pittsburgh Supercomputing Center

Company: Pittsburgh Supercomputing Center

Type of organization: Academic, National Supercomputing Center

Number of years exhibited at SC? 30

Description of the contribution:

Innovations in supercomputing systems at PSC: LeMieux, the Terascale Computing System, was the first system to sustain a teraflop. The PSC-developed Bridges architecture pioneered the convergence of HPC, AI, and big data; Bridges is designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users.

 

Name of Individual or Organization: Mississippi State University

Type of organization: Academic

Number of years exhibited at SC? 25

Description of the contribution:

Known for fostering a number of historic contributions in grid generation, high performance computing, and programming, Mississippi State University began its long-running involvement in commodity-based, clustered computing in 1987. In that year, DARPA funded an MSU project, called MADEM (Mapped Array Differential Equation Machine), to build its first computing cluster. This 8-node cluster was based on the Sun 4/110 workstation and utilized the native Ethernet connection of each node as the interconnect. By 1992, research activities had evolved to support the development of an 8-node cluster based on the Sun SPARCstation 2 workstation, known as the MSPARC/8.

This cluster utilized an interconnect based on network interface cards developed at MSU, and it was the first of three clusters to be built at MSU based on the Sun SPARC processor architecture. In June 1993, the first components were purchased for what would be known as the SuperMSPARC. This cluster was initially designed to utilize the Sun SuperSPARC 4-processor modules, but due to production issues with the modules, the MSU research team opted to utilize HyperSPARC processor modules instead. Each node of the SuperMSPARC cluster contains four 90 MHz HyperSPARC processors and 288 MB of RAM. The nodes are interconnected via 10 Mb/s Ethernet, 155 Mb/s (OC3) ATM, and Myrinet.

This system also has a custom-built communications midplane with SBUS adapter cards for monitoring interprocess communications. The SuperMSPARC cluster is on display in the Mississippi State University booth (Booth #2443). In December 1999, the fourth generation SPARC-based cluster at MSU was built utilizing Sun UltraSPARC II processors.

 

Name of Individual or Organization: Charmworks, Inc.

Type of organization: Vendor/Industry

Number of years exhibited at SC? 3

Description of the contribution:

In 1993, the initial version of Charm++ was released. Charm++ is a parallel programming language based on C++ designed by Prof. Laxmikant Kale, and developed in the Parallel Programming Laboratory at the University of Illinois at Urbana-Champaign. In 2014, Charmworks, Inc. began to license and support Charm++ for commercial use. Charm++ is an asynchronous task-based parallel programming model with dynamic runtime support for load balancing and fault tolerance among other features. It has been used to develop and scale scientific applications such as NAMD, OpenAtom, and ChaNGa, among others, up to full petascale systems.

 

Name of Individual or Organization: Indiana University

Type of organization: Academic

Description of the contribution:

In his 1953 State of the University Address, President Herman B Wells calls for creation of a research computing center because, “many complicated problems in the physical, biological, and social sciences, in business, and in education require the employment of modern high-speed computing machines for practical solution.”

 

Name of Individual or Organization: King Abdullah University of Science and Technology

Type of organization: Academic

Number of years exhibited at SC? 11

Description of the contribution:

The new Penguin Computing Accelion™ managed data transfer and access platform set a data transfer record of a petabyte in a day (2018).

 

Name of Individual or Organization: NCSA

Type of organization: Academic

Number of years exhibited at SC? 20

Description of the contribution:

Four of the 30 SC General Chairs have a connection to NCSA, and three are current NCSA employees: Bill Kramer (2005), Scott Lathrop (2011), Bill Gropp (2013), and Jackie Kern (2015).

 

Name of Individual or Organization: NetApp

Type of organization: Vendor/Industry

Number of years exhibited at SC? 10

Description of the contribution:

Provided the requirements and design criteria that drove vendors to develop near-line drives in 2003. Seagate provided the first near-line drive, labelled “Tonka,” in 2005. Other vendors followed.


Making History at SC18: Production Traffic Passes Through First 400 Gigabit Ethernet Metro Link in Texas (Sat, 17 Nov 2018)

SCinet, the dedicated high-capacity network infrastructure supporting the SC Conference, today confirmed that it achieved a new milestone: successfully passing production traffic from national and international networks to Dallas over a single 400 gigabit Ethernet link spanning a metro distance of approximately 6 miles. DataBank, Internet2, Juniper Networks, Lonestar Education and Research Network (LEARN), and Pacific Wave contributed to the success of this project, a first for an advanced research and education network.

The achievement is a mark of the collaboration and innovation that fuels SCinet’s reputation for pushing the boundaries of high-performance networking. SCinet is planned, built, and operated by a team of 225 volunteers. The unique multi-vendor installation consists of $52 million in hardware, software, and services contributed by 40 industry-leading contributors.

“SCinet brings together not only national and international network engineers, but also a multitude of research and education networks, in support of demonstrations at SC18 that are often precursors to some of the most demanding scientific challenges ahead of us,” said JP Velders, SCinet routing team co-lead. “Near-future infrastructure needs for scientific projects like the Square Kilometer Array (SKA) and the Large Synoptic Survey Telescope (LSST) have projected bandwidth needs that far surpass the 400 gigabit per second mark. Providing a testbed for some of the newest technologies within SCinet is of vital importance to the high-performance computing and research community.”

Internet2, LEARN and Pacific Wave delivered the wide area network circuits connected to this engineering feat at the SC Conference in the Kay Bailey Hutchison Convention Center Dallas. To ensure this connectivity, DataBank provided the necessary fiber cross-connects, rack space and power in Dallas. The production traffic ran clean on a 400 gigabit Ethernet link through a CALIENT S320 optical cross-connect between two Juniper devices: QFX10003-80C and PTX10003-80C.

“When you observe the evolution of networking technologies, engineers are always looking to increase network capacity to support the evolving needs of data-intensive research,” added Matthew Zekauskas, SCinet DevOps team co-lead. “The use of 400 gigabit Ethernet devices has been tested in labs with controlled settings, so the ability to run it in a real-world setting at SC18 allows us to understand both the limitations and opportunities that help us continue to mature the technology driving this innovation.”

SC18 History Makers – Marking SC’s 30th Anniversary (Fri, 16 Nov 2018)

The following are partner organizations responding to the “How did your organization make supercomputing history?” survey, as compiled by the 30th Anniversary team. Check back to the blog for additional installments.

Name of Individual or Organization: IBM

Name: Celeste Bishop

Type of organization: Academic; Vendor/Industry

Number of years exhibited at SC? 10

Description of the contribution:

In the 1990s, as high-performance computing (HPC) gained traction in the enterprise, HPC infrastructure was required to serve multiple users working on multiple projects against business priorities. Out of this was born the need to manage the growing resources used to handle HPC workloads in more complex, heterogeneous environments, ushering in distributed cluster computing. In 1992, based on research at the University of Toronto, Load Sharing Facility (LSF), an HPC workload manager for distributed clusters, was released by Platform Computing.

LSF has evolved from a simple workload scheduler to a broad family of products designed to improve the productivity of today’s HPC users through easy-to-use desktop, web, and mobile interfaces. Acquired by IBM in 2012, IBM Spectrum LSF has continued to evolve, making significant advances in scalability and performance while simplifying management and use of complex HPC infrastructures, including multi-cloud, accelerated computing (GPUs), and containerized workloads. In 2017, LSF celebrated 25 years with a new release providing a tightly integrated solution for demanding, mission-critical HPC environments that helps increase both user productivity and hardware utilization while decreasing management costs.

For over 25 years IBM Spectrum LSF has been managing many of the workloads that make our everyday lives possible, from playing a key role in sequencing the first human genome, to supporting some of the most powerful supercomputers, including Summit and Sierra systems.

Name of Individual or Organization: San Diego Supercomputer Center, UC San Diego

Name: Jan Zverina

Type of organization: Academic; National Lab

Booth number: 3013

Number of years exhibited at SC? 30

Description of the contribution:

1985 – SDSC opens its doors with its first supercomputer, a CRAY X-MP/48; the first calculations on the system are made by Herbert Hamber of UC Irvine. 1989 – SDSC stands up a CRAY Y-MP8/864. 1990 – SDSC acquires a 256-node nCUBE 2 parallel computer. 1993 – SDSC acquires a CRAY C90. 1996 – SDSC acquires a CRAY T3E. 1999 – Supercomputer simulations led by UC San Diego’s J. Andrew McCammon reveal a new anti-HIV strategy, leading to a new HIV drug from Merck, the most important HIV drug in a decade. 2000 – SDSC acquires an IBM Blue Horizon. 2004 – SDSC acquires an IBM eServer Blue Gene. 2009, 2011 – SDSC launches ‘Gordon’, which debuts as the 48th fastest on the TOP500 list.

Name of Individual or Organization: Corsa Technology

Name: Carolyn Raab

Type of organization: Vendor/Industry

Booth number: 3043

Number of years exhibited at SC? 4

Description of the contribution:

In 2016, leveraging software-defined networking (SDN), Corsa created a SCinet Network Research Exhibition (NRE) which allowed large data transfers to share “bandwidth on demand,” thus avoiding data collisions and lost data on the network. Within SCinet, the Corsa DP2400 SDN switching platform provided shared 100 Gbps WAN access to multiple exhibitors located throughout the SC16 show floor. Participating exhibitors accessed a 100 Gbps ring topology to reach one of two wide area networks: Internet2 and ESnet. Each exhibitor was able to schedule their high-bandwidth research experiments into assured forwarding traffic classes, guaranteeing them the bandwidth they needed for as long as they needed it. The SCinet Network Operations Center (NOC) controlled the Corsa equipment using OpenFlow. Participating data-intensive science demos using SCinet and Corsa included: Pittsburgh Supercomputing Center (PSC), GEANT DynPAC, iCAIR DTN, and RNP to AMPATH.

Name of Individual or Organization: Hewlett Packard Enterprise

Name: Stephanie Whalen

Type of organization: Information Technology (IT)

Booth number: 2429

Number of years exhibited at SC? 30

Description of the contribution:

HPE Historical Contribution to SC History 1: In July 2000, Compaq Computer Corporation (now HPE) provided the lion’s share of the technology for what may amount to one of the most significant IT projects of 2000: mapping the human genome. Harnessing the power of more than 200 AlphaServers and a 70- to 80-terabyte database, scientists were able to map the 3.12 billion base pairs of DNA that make up the genetic blueprint of the human body. Source: Computerworld: Compaq, Celera Map the Human Genome, July 6, 2000. https://www.computerworld.com.au/article/93001/compaq_celera_map_human_genome/

 

 

SC18 Supercomputing Conference Sets Records in Dallas, Texas (Thu, 15 Nov 2018)

DALLAS – SC18 marks the 30th anniversary of the annual international conference of high performance computing, networking, storage and analysis. It celebrates the contributions of researchers and scientists – from those just starting their careers to those whose contributions have made lasting impacts.

The conference drew a record-breaking 13,071 (as of 11/15/18) attendees and featured a technical program spanning six days – making it the largest SC conference of all time. In all, the conference and exhibition infused the local economy with more than $40 million in revenue according to the local Dallas Convention Bureau.

SC18 General Chair Ralph McEldowney

“From our volunteers to our exhibitors to our students and attendees – SC18 was inspirational,” said SC18 General Chair Ralph McEldowney. “Whether it was in technical sessions or on the exhibit floor, SC18 inspired people with the best in research, technology, and information sharing.”

Conference Themes:

  • Inspiring the Next Generation & Diversity   
    • This also marked the 4th year of the early career track.
    • A conference highlight related to this theme was the Student Cluster Competition, which featured 15 international student teams competing in a non-stop, “Iron Chef”-style challenge to complete a real-world scientific workload while impressing conference attendees and judges with their HPC knowledge.
    • A shrinking labor force is one of the major shortfalls in the STEM industry. The SC Conference brought together all nationalities, ethnicities, genders, and technical capabilities with the goal of sparking new ideas on how to attract more women, minorities, and young people to HPC.
  • Inspiring the World 
    • HPC has the power and promise to solve the world’s most difficult challenges. From hurricane and earthquake predictions to solving global hunger challenges, a main focus of the conference was to demonstrate how the HPC industry is using supercomputing to help make the world a better place.
  • Inspiring the Future of Technology
    • HPC is powering the advancement of artificial intelligence. In technical sessions and on the exhibit floor, the conference explored how HPC is helping AI bring improvements in societies, economies, and organizations.
U.S. Under Secretary of Science Paul M. Dabbar

SC18 Hosts U.S. Under Secretary of Science

U.S. Under Secretary of Science Paul M. Dabbar serves as the Department’s principal advisor on fundamental scientific research in high energy and nuclear physics; advanced computing; fusion; and biological and environmental research; and has direct management over many of DOE’s national labs that run data-intensive experiments.

Secretary of Energy Rick Perry has made supercomputing a top priority for DOE, providing $1.8 billion for the development of exascale supercomputers at DOE’s national laboratories – systems capable of a billion-billion (10^18) double-precision floating-point operations per second. Supercomputing will transform energy research, scientific discovery, economic competitiveness, and national security.

SC18 SCinet Chair Jason Zurawski

World’s Fastest Network

SCinet gave SC18 attendees the chance to experience the world’s fastest temporary network, delivering 4.02 terabits per second of network capacity to the Kay Bailey Hutchison Convention Center Dallas this week.

In preparation, volunteers installed more than 67 miles of fiber optic cable, including two miles of new underground fiber that now connects the convention center to a downtown Dallas data center. After this year’s conference concludes, that underground fiber will remain in place for the benefit of the city of Dallas.

To deliver WiFi for all attendees across one million square feet of exhibit space, volunteers also installed 300 wireless access points in just one week.

SCinet is made possible by the contributions of 40 industry-leading vendors, who in total donated $52 million in hardware, software, and services.

Exhibit Records

According to Christy Adkinson, SC18 Exhibits Chair from Cray Inc., the SC18 Exhibition broke several records, including the largest research booth space at 65,000 sq. ft. and more industry exhibitors than ever. It also set records for the most total exhibitors (364) and the largest exhibit space ever, with over 150,000 sq. ft. occupied. It also featured the first-ever “Start-Up” Pavilion, giving small companies just entering the field an economical way to have a presence at the conference. Finally, SC18 featured more first-time exhibitors than ever.

Content Highlights

For the fifth year, SC featured an opening plenary with a moderated discussion. This year it was “HPC and Artificial Intelligence – Helping to Solve Humanity’s Grand Challenges,” focusing on how high performance computing is revolutionizing the way we address and manage global issues, from solving global food security challenges to preventing epidemics and understanding the impact of environmental health on urban centers.

The keynote speaker, Erik Brynjolfsson of MIT, highlighted how machines are transforming the role of human decision-making, how digital platforms allow the products and services of others to be sold and brokered, and the proliferation of ideas obtained from the general public rather than from experts at the core of the business.

The Technical Program again offered the highest-quality original HPC research from around the globe. Among the highlights were 2 Best Paper Finalists, 5 Best Student Paper Finalists, and 6 Gordon Bell Finalists.

Other highlights included 118 Posters, 38 Workshops, 35 Tutorials, 68 Papers, 15 Panels, 12 Invited Speakers, 9 HPC Impact Showcase talks, 7 Emerging Tech Presentations, and 16 Doctoral Showcases. These represent the best of the best in a wide variety of research topics in HPC.

SCinet Must See: How to Build the World’s Fastest Temporary Computer Network Time Lapse (Thu, 15 Nov 2018)

This video shows how SCinet was built in Dallas, Texas for SC18 – the international conference for high performance computing, networking, storage, and analysis. SCinet combines global volunteers from 85 institutions with international graduate students to plan and assemble $52 million in contributed gear. The equipment was partially assembled in the Kay Bailey Hutchison arena and then deftly and safely moved to the exhibit hall in 14 equipment racks weighing 4.25 tons.

The SCinet network operations center (NOC) in the heart of the exhibit hall seats 17 interdisciplinary teams of 225 volunteer experts, leaders, and scholars. Their efforts connect 25 cities on four continents with data-intensive demos and experiments, in addition to serving the 13,000 attendees at SC18 (an attendance record), at a record-breaking speed of 4.02 terabits per second.

Media contacts: Brian Ban (773-454-7423) or Amber Rasche (605-359-3612)

“Blow-the-Doors off the Network” Event to Demonstrate the Power of SCinet (Tue, 13 Nov 2018)

SC exhibitors are invited to participate in a “Blow-the-Doors Off the Network” event this week to demonstrate the power of SCinet, the high-capacity network built to support the revolutionary applications and experiments that are a hallmark of the SC conference.

Exhibitors with 100 Gbps connections to their booths are invited to blast peak traffic to and from the exhibit hall floor to illustrate the power of SCinet connectivity within the Kay Bailey Hutchison Convention Center and to other sites around the world. The event is scheduled from noon to 1 p.m. Thursday, Nov. 15.

Attendees and exhibitors can see the traffic peak in real time during the event using the SC18 Total Traffic Dashboard.

Inaugural SCinet Spirit of Innovation Award Recognizes Five Contributors’ Role in Supporting 800 Gigabits per Second of Bandwidth for SC18 (Mon, 12 Nov 2018)

Today marks the official start of the SC Conference in Dallas, bringing together members of the international high performance computing, networking, storage, and analysis community to share the latest research, technologies, and demonstrations.

Exhibitors and visitors at the SC Conference this year will enjoy access to 4.02 terabits per second of bandwidth across 1 million square feet in the Kay Bailey Hutchison Convention Center Dallas thanks to SCinet, the dedicated high-capacity network for the conference.

SCinet is a collaborative effort by 225 volunteer experts from 85 organizations that span industry, academia, and government. This spirit of collaboration is a major driver for the success and innovation that SCinet delivers on a yearly basis.

Five organizations in particular were instrumental in delivering 800 gigabits per second of long-haul bandwidth to the SC Conference this year, and they are being recognized with the SCinet Spirit of Innovation Award: Ciena®, Energy Sciences Network (ESnet), Internet2, Lonestar Education and Research Network (LEARN), and Utah Education and Telehealth Network (UETN).

Jason Zurawski, SCinet Chair

“The winners of the inaugural SCinet Spirit of Innovation Award have embraced the spirit of collaboration and cooperation that showcases the best there is to offer in demonstrating, implementing, and operating leading-edge solutions to challenging problems,” said Jason Zurawski, SCinet chair. “This project is truly special to SCinet, and we are all encouraged by and appreciative of their efforts to showcase partnership and innovation.”

To deliver a bandwidth of 800 gigabits per second, volunteers from all five organizations spent six months planning, testing, and delivering the necessary connections to the Kay Bailey Hutchison Convention Center Dallas.

The path between Chicago and Dallas uses Ciena’s WaveLogic 3 Extreme modems without optical-electrical-optical (OEO) regeneration, reducing equipment cost and latency. The modems run over spectrum and fiber paths provided by ESnet, Internet2, LEARN, and UETN. This was achieved without the aid of Raman amplification or any other modifications to the existing line system.

The metro configuration uses Ciena’s WaveLogic Ai technology to deliver 400 gigabits per second per wave for increased spectral density and decreased cost per bit. This was all made possible with the loan of server hardware from ESnet, LEARN, and UETN as well as the space and power donations by both Internet2 and LEARN.

SCinet contributors donate millions of dollars in equipment, software, and services that are needed to build and support the network each year for the SC Conference. At this year’s conference in Dallas, it’s estimated that contributions from 40 organizations will total $52 million.

Ciena, ESnet, Internet2, LEARN, and UETN are being recognized at a private ceremony before today’s opening gala celebration at 7 p.m. CT.

Record-Breaking SCinet High-Capacity Network Goes Live for SC18 in Dallas https://sc18.supercomputing.org/5607-2/ https://sc18.supercomputing.org/5607-2/#respond Sun, 11 Nov 2018 23:45:26 +0000 https://sc18.supercomputing.org/?p=5607 ...]]>
About 50 volunteers and more than $52 million (USD) worth of equipment and resources arrived in Dallas in mid-October to start building SCinet. Volunteers unpacked, inventoried, installed, configured, and tested all of the equipment and prepared to move the finished racks into the Kay Bailey Hutchison Convention Center.

A record-breaking 4.02 Tbps of network bandwidth will deliver unparalleled capacity to the attendees and demonstrations of SC18 this week. SCinet – SC’s dedicated high-capacity network – has a reputation for pushing the boundaries of high performance networking.

And it all comes down to this: when networking limitations are removed from the equation, what’s possible in the realm of scientific discovery?

Jason Zurawski, SC18 SCinet Chair

“Our global team of 225 volunteers, from 85 volunteer organizations and 40 contributing partners have worked incredibly hard to design, build, and operate this network,” said Jason Zurawski of Energy Sciences Network (ESnet) who is this year’s SCinet committee chair. “And we cannot wait to witness the innovations that will occur this week in Dallas.”

Among those will be more than 30 network research demonstrations and experiments, some of which will be showcased in the conference’s Innovating the Network for Data-Intensive Science workshop on November 11. In support of those activities, SCinet will connect SC18 in Dallas to 27 cities in the U.S. and around the world, its reach stretching across four continents to research labs and universities as far away as Amsterdam, Daejeon, Mumbai, and São Paulo.

SCinet volunteers arrived in Dallas mid-October to start building the network and returned last week to complete the final installation, which took more than a year to plan. Evidence of this volunteer-built, multi-contributor collaboration exists throughout the Kay Bailey Hutchison Convention Center Dallas.

Attendees can see SCinet on display (booth #2450) when the conference exhibit hall opens November 12th. A stage surrounded by plexiglass houses nine equipment racks, weighing in at more than 6,400 lbs. That’s a major portion of the $52 million in cutting-edge equipment and resources provided by 40 SCinet contributors.

Overhead and underfoot are 67 miles of fiber optic cable, including just over 2 miles of new underground fiber that now connects the convention center to a downtown Dallas data center (watch this video to see the installation in action*). Across 1 million square feet are 300 wireless access points, which SCinet volunteers installed just last week to ensure attendees stay connected wherever they roam during SC.

*The video reflects anticipated numbers available at the time it was created.

About SC18

SC18 is the premier international conference showcasing the many ways high performance computing, networking, storage, and analysis lead to advances in scientific discovery, research, education, and commerce. The annual event, created and sponsored by the IEEE Computer Society and ACM (Association for Computing Machinery), attracts thousands of HPC professionals and educators from around the globe to participate in its complete technical education program, workshops, tutorials, a world-class exhibit area, demonstrations, and the world’s fastest temporary computer network.

SC18 Hosts U.S. Department of Energy Under Secretary of Science Paul M. Dabbar https://sc18.supercomputing.org/sc18-hosts-u-s-department-of-energy-under-secretary-of-science-paul-m-dabbar/ https://sc18.supercomputing.org/sc18-hosts-u-s-department-of-energy-under-secretary-of-science-paul-m-dabbar/#respond Fri, 09 Nov 2018 20:59:26 +0000 https://sc18.supercomputing.org/?p=5541 ...]]>
Paul M. Dabbar

During the 30th annual SC conference, SC18 will host U.S. Under Secretary of Science Paul M. Dabbar from November 13–14. He serves as the Department’s principal advisor on fundamental scientific research, including high energy and nuclear physics, advanced computing, fusion, and biological and environmental research, and he has direct management responsibility for many of DOE’s national labs that run data-intensive experiments.

HPC is impacting our everyday lives, from creating lighter-weight metals and alloys that enable cleaner, more efficient engines to delivering warnings of approaching natural disasters that can save hundreds of lives and millions of dollars. Those pioneering this technology will gather in Dallas for the world’s largest conference dedicated to supercomputing breakthroughs.

Secretary of Energy Rick Perry has made supercomputing a top priority for DOE, providing $1.8 billion for the development of exascale supercomputers at DOE’s national laboratories – systems capable of a billion billion double-precision floating point operations per second. Supercomputing will transform energy research, scientific discovery, economic competitiveness, and national security.

Tuesday, November 13, 5:15 pm–7 pm, Exhibit Hall B

Under Secretary Dabbar will provide remarks at the “TOP500” session – where the world’s 500 fastest supercomputers are revealed and discussed alongside major trends in supercomputer technology.

Wednesday, November 14, 10 am–12 pm, Exhibit Hall C-F

The Under Secretary will tour an Exhibit Hall of 365+ organizations filling 550,000 square feet of space with booths from government agencies like DOE and NASA, top computing companies, and universities and other organizations that run supercomputers or maintain major computer science research programs.

@ SC18 Everything’s Bigger in Texas – This Year, SCinet Diamond Level Contributors’ Generosity is No Small Feat https://sc18.supercomputing.org/diamond/ https://sc18.supercomputing.org/diamond/#respond Thu, 08 Nov 2018 17:18:32 +0000 https://sc18.supercomputing.org/?p=5529 ...]]> In just a few days, a group of 225 volunteers will have completed the buildout of SCinet, the world’s fastest temporary network, which supports the SC Conference taking place in Dallas from November 11 to 16, 2018.

SCinet is a collaborative effort by volunteer experts from 85 organizations that span industry, academia, and government. It takes one year to plan, one month to build, one week to operate, and less than 24 hours to tear down. It offers an unprecedented amount of bandwidth — an anticipated 4.02 terabits per second for SC18 — within the conference exhibit hall and connects the convention center to the broader internet.

What makes SCinet possible? SCinet contributors donate millions of dollars in equipment, software, and services that are needed to build and support the network each year for the SC Conference.

At this year’s conference in Dallas, it’s estimated that contributions from 40 organizations will total $52 million. Three companies each reached the highest level of contributions with donations of resources worth $5 million apiece: CenturyLink, Cisco, and Juniper.

“SCinet can only flourish due to the incredible generosity of our contributing partners,” said Jason Zurawski, SCinet chair and science engagement engineer at the Energy Sciences Network (ESnet). “CenturyLink, Cisco, and Juniper have all gone above and beyond to ensure the success of SCinet this year through the donation of hardware, software, services, and the most important of resources: time.”

CenturyLink has been instrumental in supporting the demonstrations that push the envelope on cutting edge research. Their assistance has enabled 19 wide area network connections to cities that include Chicago, Los Angeles, Miami, Seattle, Sunnyvale, and Washington, D.C.

Cisco remains a steadfast partner in the operation and design of SCinet, part of its organizational commitment to supporting research and education through advances in networking, data center, security, automation, and programmability aimed at improving and accelerating innovation. Its hardware contributions span many different use cases, from metropolitan networking to core routers and switches, as well as hundreds of access points that SCinet will use to provide wireless internet across 1 million square feet of the Kay Bailey Hutchison Convention Center Dallas.

Juniper Networks continues to lead the way in delivering cutting-edge network hardware for use in SCinet. By supporting SCinet with prototype devices, Juniper gives SCinet volunteers and users exposure to the next generation of transport capabilities at 400 gigabits per second.

With SCinet contributors’ support, members of the HPC community participating at the SC Conference are able to demonstrate the advanced computing resources of their home institutions and showcase the revolutionary applications and experiments that are a hallmark of SC.
