KSU HPC Facilities and Resources (Fall 2023)
The KSU HPC computing resources represent the University's commitment to research computing. The KSU HPC is a Red Hat Linux-based cluster that offers a total capacity of over 50 teraflops to KSU faculty researchers and their teams. The cluster consists of around 50 nodes with 118 processors, 1,704 cores (excluding GPU cores), and 16.7 TB of RAM, and it has both CPU and GPU capabilities. Queues are available for standard, high-memory, and GPU jobs. The HPC is built on a fast network for data and interconnect traffic. A large storage array is provided for user home directories, and fast storage is available for use during job runtime. Power for cooling and servers is backed by battery systems and natural-gas generators. On- and off-campus access to the cluster is allowed only through secure protocols.
Software is provided through environment modules, which make multiple versions of the same
software available while avoiding dependency conflicts. There are 150 software programs
available, including titles for astronomy, biology, chemistry, mathematics, statistics,
engineering, and programming languages. Some popular titles include Gaussian, MATLAB,
Mathematica, R, TensorFlow, COMSOL, HH-Suite, MAFFT, LAMMPS, OpenFOAM, PHYLIP, and Trinity.
Cluster management and job scheduling software is used to provide free access to this
shared resource.
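
As a concrete illustration of how the environment modules and the GPU queue are used together, the following minimal Python sketch assumes that a TensorFlow-capable Python module has already been loaded (exact module names are site-specific) and that the script runs inside a job allocated on one of the gpu-queue nodes; it simply verifies that the node's NVIDIA V100S devices are visible to TensorFlow.

    # Minimal GPU-visibility check (illustrative only). Assumes a Python
    # installation with TensorFlow has been loaded via an environment module
    # (module names vary by site) and that the script runs inside a job
    # on one of the gpu-queue nodes.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print(f"Visible GPUs: {len(gpus)}")  # gpu-queue nodes each hold 4x NVIDIA V100S

    for gpu in gpus:
        details = tf.config.experimental.get_device_details(gpu)
        print(gpu.name, details.get("device_name", "unknown"))
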
KSU has established a high-speed pathway to Internet2 and other heavily used commercial content providers. The Kennesaw and Marietta campuses are now directly connected through SoX to Internet2 and have established connections for both the Regional Research and Education Network (R&E) routes and the Internet2 Peer Exchange (I2PX) routes. The current connection speed is 10 Gb/s. This connection allows for rapid sharing of large amounts of data between KSU and other participating research institutions worldwide. The connection is now available to on-campus researchers, and traffic that can be routed through it is routed automatically.
KSU recommends that users of the university-level HPC include the following
acknowledgement statement and cite it using the appropriate citation format: "This work
was supported in part by research computing resources and technical expertise via a
partnership between Kennesaw State University's Office of the Vice President for Research
and the Office of the CIO and Vice President for Information Technology [1]."
HPC Cluster Node Details
QUEUE   NODES    CPUS                           CORES              RAM (GB)
batch   34-51    2x Xeon Gold 6148 (2.4 GHz)    40                 192
batch   52-70    2x Xeon Gold 6126 (2.6 GHz)    24                 192
batch   71-77    4x Xeon Gold 6226 (2.7 GHz)    48                 768
himem   78       4x Xeon Gold 6226 (2.7 GHz)    48                 1,537
gpu     79-82    GPU: 4x NVIDIA V100S           5,120 cores each   768
Total (47 nodes)                 118 processors 1,704              16,705