
Cyberinfrastructure Resources

High-Performance Computing

The SDSU Division of Technology and Security (DTS) manages an approximately 240 TFLOPS cluster and parallel file system. Funding for this resource came from an NSF MRI program award: “MRI: Acquisition of a High-Performance Cluster to Enable Advanced Bioscience and Engineering Research,” NSF 15-504, MRI Award 1726946. The cluster includes a substantial general compute pool, high-memory (3 TB) nodes, and NVIDIA GPU nodes to support CPU-, high-memory-, and GPU-intensive computing. The parallel file system provides 1.2 PB of high-speed storage for data-intensive compute jobs within the cluster. The cluster leverages three networks: 100 Gbps InfiniBand (cluster data and application processing), 10 Gbps Ethernet (science data transfers), and 1 Gbps Ethernet (cluster management).
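Purely as an illustrative sketch (the cluster's scheduler and installed software stack are not described here), a data-parallel job of the kind this fabric and file system are built to support might look like the following mpi4py fragment; the package availability, data sizes, and aggregation step are assumptions, not SDSU-specific documentation.

    # Illustrative only: a small MPI-style data-parallel job.
    # Assumes an MPI installation with the mpi4py and numpy packages available.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank works on its own slice of a (hypothetical) large dataset,
    # standing in for data read from the parallel file system.
    rng = np.random.default_rng(seed=rank)
    local_chunk = rng.random(1_000_000)
    local_sum = local_chunk.sum()

    # Combine the partial results on rank 0 over the interconnect.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"Aggregated result across {size} ranks: {total:.2f}")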

Network

In 2018, the state of South Dakota implemented a high-speed fiber-optic network, which brought 100 Gbps of broadband connectivity to support research activities within the state, regional (Great Plains Network Research Platform), and national (Internet2) research networks. In the fourth quarter of 2017, the SDSU Division of Technology and Security (DTS) upgraded its local area network infrastructure to support 100 Gbps of data transmission capacity within its core network router system. This work also included upgrading the sub-core switching to 40 Gbps and all building network switching to a minimum of 10 Gbps of connectivity.

Science DMZ

In 2014, the South Dakota State University (SDSU) Division of Technology and Security (DTS) was awarded an NSF CC* Cyberinfrastructure Program grant: “CC*IIE Networking Infrastructure: Building a Science DMZ and Enhancing Science Data Movement to Support Data-Intensive Computational Research at South Dakota State University,” NSF 14-521, CC*IIE Project Reference: 1440622. The project's goal is to enhance cross-institutional research collaboration by deploying a high-speed, low-friction Science DMZ network. Upgrade work in 2018 included the implementation of a robust data management and sharing service (Globus) and a new 100 Gbps Flash I/O Network Appliance (FIONA) data transfer node, an integral element for the inclusion of SDSU in the Great Plains Network (GPN) Research Platform.
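As an illustration only (not SDSU-specific documentation), the following minimal sketch shows how a researcher might drive a transfer to a data transfer node such as the FIONA through the Globus Python SDK; the endpoint UUIDs, paths, and client ID are placeholders.

    # Minimal sketch of a Globus transfer using the globus-sdk Python package.
    # Endpoint UUIDs, paths, and the client ID below are placeholders.
    import globus_sdk

    CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"     # placeholder
    SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"       # e.g., a lab workstation collection
    DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"  # e.g., the campus data transfer node

    # Interactive (native app) login to obtain a transfer token.
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow()
    print("Log in at:", auth_client.oauth2_get_authorize_url())
    auth_code = input("Paste the authorization code here: ").strip()
    tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    # Build and submit a transfer task: recursively copy a directory between endpoints.
    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
    )
    task_data = globus_sdk.TransferData(
        tc, SRC_ENDPOINT, DST_ENDPOINT,
        label="example transfer", sync_level="checksum"
    )
    task_data.add_item("/data/run01/", "/project/run01/", recursive=True)
    result = tc.submit_transfer(task_data)
    print("Submitted Globus task:", result["task_id"])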

Storage/Archival Services

SDSU DTS manages a General Parallel File System (GPFS) and various high-capacity block storage area network systems that provide enterprise-class storage for single-instance servers and cluster systems. Raw data, metadata, and research products are archived and made publicly available within SDSU's Open PRAIRIE repository, a service contracted between SDSU and bepress (Digital Commons).

Data Center Support Services

SDSU DTS houses all servers, storage, and central networking equipment for the Brookings, SD campus within Morrill Hall, rooms 112 and 114. The rooms are configured in a hot-aisle/cold-aisle layout, with all cabling run in overhead trays. The raised flooring panels are rated for a 1000 PSI concentrated load. The GDP panel supplies all power within the data center, is rated at 1000 amps, and is equipped with a transient voltage surge suppressor (TVSS) rated at 160 kA. UPS-1 and UPS-2 are each rated at 160 kVA (144 kW). Generator 1 and Generator 2 are each rated at 150 kW. Two DX CRAC units provide 22 tons and 16 tons of cooling capacity (38 tons total), and three chilled-water CRAC units provide 8 tons of cooling capacity each (24 tons total).

DTS Research Cyberinfrastructure Expertise

SDSU DTS permanent staff support for HPC includes an Assistant Vice President, Director, Research High-Performance Computing Specialist, HPC Software Development Engineer, Cyberinfrastructure Engineering Specialist, and Systems Administrator. Graduate student workers are also employed to help support research computing applications and programming.

If you have any questions, please contact SDSU HPC.