- Dataset: 6n4o
- Software: GROMACS.cuda.mpi (gromacs/2024.4-gofbc-2023a-avx512)
- Resource: 1 task, 16 cores, 1 node, 1 GPU, with NVLink
- CPU: Xeon Gold 6448Y (Sapphire Rapids), 2.1 GHz
- GPU: NVIDIA H100-SXM5-80GB, 16 cores/GPU
- Simulation speed: 161.046 ns/day
- Efficiency: 100.0 %
- Site: Rorqual
- Date: Aug. 2, 2025, 12:34 p.m.
- Submission script:
#!/bin/bash
#SBATCH -A def-svassili
#SBATCH --mem-per-cpu=2000 --time=1:0:0 -c16 --ntasks=1 --gpus=h100:1
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 cuda/12.2 gromacs/2024.4
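The captured script ends after the module load, without the actual run line. A minimal mdrun invocation consistent with the resource request above might look like the following sketch; the input name `benchmark.tpr` and the choice of GPU offload flags are assumptions, not taken from the original job.

```shell
# Hedged sketch: single task, 16 OpenMP threads, one GPU, with nonbonded,
# PME, bonded, and update work offloaded to the GPU. "benchmark.tpr" is an
# assumed input file name.
srun gmx mdrun -ntomp "${SLURM_CPUS_PER_TASK:-16}" \
     -nb gpu -pme gpu -bonded gpu -update gpu \
     -s benchmark.tpr -deffnm benchmark
```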
- Notes:
Job performance fluctuated across repeated runs; the slowest trial (91.281 ns/day) was roughly 43 % below the best (161.046 ns/day).
Trial speeds (ns/day):
161.046 160.144 159.880 156.724 156.060 152.344 91.281
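The spread can be quantified directly from the trials listed above, for example with a short awk one-liner (values copied verbatim from the list):

```shell
# Summarize the trial speeds (ns/day) reported above: mean, min, max.
speeds="161.046 160.144 159.880 156.724 156.060 152.344 91.281"
echo "$speeds" | tr ' ' '\n' | awk '
  { sum += $1; if (NR == 1 || $1 < min) min = $1; if ($1 > max) max = $1 }
  END { printf "mean=%.2f min=%.3f max=%.3f\n", sum/NR, min, max }'
```

The outlier at 91.281 ns/day drags the mean well below the median, which is worth keeping in mind when comparing sites.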
- Simulation input file:
title = benchmark
; Run parameters
integrator = md
nsteps = 400000
dt = 0.001
; Output control
nstxout = 0
nstvout = 0
nstfout = 0
nstenergy = 10000
nstlog = 10000
nstxout-compressed = 50000
compressed-x-grps = System
; Bond parameters
continuation = yes
constraint_algorithm = lincs
constraints = h-bonds
; Neighborsearching
cutoff-scheme = Verlet
ns_type = grid
nstlist = 10
rcoulomb = 0.8
rvdw = 0.8
DispCorr = Ener ; analytic VDW correction
; Electrostatics
coulombtype = PME
pme_order = 4
fourier-nx = 144
fourier-ny = 144
fourier-nz = 144
; Temperature coupling is on
tcoupl = V-rescale
tc-grps = system
tau_t = 0.1
ref_t = 300
; Pressure coupling is on
pcoupl = Parrinello-Rahman
pcoupltype = isotropic
tau_p = 2.0
ref_p = 1.0
compressibility = 4.5e-5
; Periodic boundary conditions
pbc = xyz
; Velocity generation
gen_vel = no
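For reference, the input runs 400 000 steps at dt = 0.001 ps, i.e. 0.4 ns of simulated time; at the reported 161.046 ns/day that is about 215 seconds of wall time. A quick awk check of the arithmetic:

```shell
# Wall-time estimate for this benchmark: 400000 steps x 0.001 ps = 0.4 ns,
# divided by the reported speed of 161.046 ns/day, converted to seconds.
awk 'BEGIN { ns = 400000 * 0.001 / 1000;          # 0.4 ns simulated
             printf "%.0f s\n", ns / 161.046 * 86400 }'
```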