MOLECULAR DYNAMICS PERFORMANCE GUIDE - Digital Research Alliance of Canada

SOFTWARE DETAILS

ID=10, NAMD3.cuda

  • Module/Version: binary_pack/3.0a9
  • Toolchain/Version: -/-
  • CPU instruction set: -
  • Example job submission script (a sketch of the NAMD input file it reads is given after this list):
    #!/bin/bash
    #SBATCH -c2 --gres=gpu:2
    #SBATCH --mem-per-cpu=2000 --time=1:0:0
    NAMDHOME=$HOME/NAMD_3.0alpha9_Linux-x86_64-multicore-CUDA
    $NAMDHOME/namd3 +p${SLURM_CPUS_PER_TASK} +idlepoll namd3.in
  • Benchmark submission script (the ns/day conversion in the final awk command is illustrated after this list):
    #!/bin/bash
    #SBATCH -c2 --gres=gpu:v100:2
    #SBATCH --mem-per-cpu=2000 --time=1:0:0
    # Usage: sbatch submit.cuda.sh [number_of_steps]
    INPFILE=namd.in
    #------- End of user input ------
    STEPS=$1
    TMPFILE=tf_${SLURM_CPUS_PER_TASK}
    cp $INPFILE run_${SLURM_CPUS_PER_TASK}.in
    echo numsteps $STEPS >> run_${SLURM_CPUS_PER_TASK}.in
    echo "CUDASOAintegrate on" >> run_${SLURM_CPUS_PER_TASK}.in
    NAMDHOME=$HOME/NAMD_3.0alpha9_Linux-x86_64-multicore-CUDA
    echo ${SLURM_NODELIST} running on ${SLURM_CPUS_PER_TASK} cores
    grep "model name" /proc/cpuinfo | uniq
    nvidia-smi -L
    # Run the simulation three times, appending the timings so all three are averaged below
    $NAMDHOME/namd3 +p${SLURM_CPUS_PER_TASK} +idlepoll run_${SLURM_CPUS_PER_TASK}.in > $TMPFILE
    $NAMDHOME/namd3 +p${SLURM_CPUS_PER_TASK} +idlepoll run_${SLURM_CPUS_PER_TASK}.in >> $TMPFILE
    $NAMDHOME/namd3 +p${SLURM_CPUS_PER_TASK} +idlepoll run_${SLURM_CPUS_PER_TASK}.in >> $TMPFILE
    # Print the average ns/day over the three runs
    echo -n "ns/day: "
    grep CPUTime $TMPFILE | cut -f5 -d " " | awk -v steps=$STEPS '{ total += $1; count++ } END { print count*3.6*2.4*steps*0.01/total }'
    rm -f tf_${SLURM_CPUS_PER_TASK} *.restart.* *.old *.BAK run_${SLURM_CPUS_PER_TASK}.in
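
Neither script includes the NAMD input file it reads (namd3.in in the example script, namd.in in the benchmark script), which describes the simulated system itself. The following is only a hypothetical, minimal sketch of what such a base input might contain; the structure, coordinate and parameter file names, cell dimensions and force-field choices are placeholders that must be replaced with those of the actual benchmark system. Note that the benchmark script appends numsteps and "CUDASOAintegrate on" itself, so the base file should not set them.

    # Hypothetical minimal base input (all file names and dimensions are placeholders)
    structure          system.psf
    coordinates        system.pdb
    paraTypeCharmm     on
    parameters         par_all36_prot.prm
    temperature        300
    # periodic cell and PME electrostatics
    cellBasisVector1   80.0  0.0  0.0
    cellBasisVector2    0.0 80.0  0.0
    cellBasisVector3    0.0  0.0 80.0
    PME                yes
    PMEGridSpacing     1.0
    # short-range non-bonded settings
    cutoff             12.0
    switching          on
    switchdist         10.0
    pairlistdist       14.0
    # 1 fs timestep with rigid water
    timestep           1.0
    rigidBonds         water
    # Langevin thermostat
    langevin           on
    langevinDamping    1
    langevinTemp       300
    # output
    outputName         benchmark
    outputEnergies     1000
    # numsteps and "CUDASOAintegrate on" are appended by the benchmark script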
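
The constant in the final awk command, 3.6 * 2.4 * 0.01 = 0.0864, equals 86400 s/day multiplied by 10^-6 ns/fs, so the reported number is ns/day under the assumption of a 1 fs timestep (as in the sketch above): ns/day = 0.0864 * numsteps / (average time in seconds). The snippet below reproduces that arithmetic with hypothetical values (10000 steps, 120 s average time):

    # Hypothetical values: a 10000-step run timed at 120 s on average
    STEPS=10000
    MEAN_SECONDS=120
    # ns/day = steps * 1 fs * 1e-6 ns/fs * 86400 s/day / seconds = 0.0864 * steps / seconds
    echo "$STEPS $MEAN_SECONDS" | awk '{ print 3.6*2.4*0.01*$1/$2 }'
    # prints 7.2 (ns/day) for these inputs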