MOLECULAR DYNAMICS PERFORMANCE GUIDE - Digital Research Alliance of Canada

ADVANCED SEARCH

APPLIED FILTERS: NAMD2
    EXPLORING THE DATABASE

  • Default view
    When this page is first viewed, no filters are applied: all benchmarks are selected and sorted by simulation speed. The chart on the right displays only the top 30 benchmarks for clarity.
  • Selecting benchmarks
    A subset of benchmarks can be selected using a custom chain of filters. Selected database entries can be downloaded as CSV files for further analysis or viewed in the Benchmark Details table at the bottom of the page.
  • Detailed views
    A detailed view of each database entry can be accessed from the Benchmark ID and Software ID search forms. Detailed views include submission commands and simulation input files. View example: PMEMD @Narval (benchmark ID=46).
  • Parallel efficiency
    Efficiency is computed as PS / (SS * N), where PS is the speed of the parallel program, SS is the speed of the serial program, and N is the number of CPUs or GPUs (a worked sketch follows this list).
  • Viewing parallel speedup and efficiency
    To view a graph of parallel speedup and efficiency versus the number of CPU/GPU equivalents, select only one software package and one cluster. View example: GROMACS @Narval.

  • Viewing QM/MM benchmarks
    To view QM/MM benchmarks, select simulation system 4cg1.
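
As an illustration of the efficiency formula above, here is a minimal Python sketch (the function names and sample speeds are illustrative, not values from the database) that computes speedup and parallel efficiency from a serial and a parallel benchmark speed:

```python
def speedup(parallel_speed, serial_speed):
    """Speedup of the parallel run relative to the serial (single CPU/GPU) run."""
    return parallel_speed / serial_speed


def parallel_efficiency(parallel_speed, serial_speed, n_units):
    """Efficiency = PS / (SS * N), speeds in ns/day, N = number of CPUs or GPUs."""
    return parallel_speed / (serial_speed * n_units)


# Illustrative numbers only: 2.0 ns/day on 1 core vs. 100.0 ns/day on 64 cores.
print(speedup(100.0, 2.0))                   # 50.0x speedup
print(parallel_efficiency(100.0, 2.0, 64))   # 0.78125 -> ~78% parallel efficiency
```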

Performance Chart For Selected Benchmarks

*Data updated Sept. 7, 2023

Cost Of CPU-only Simulations

*Data updated Sept. 7, 2023

    OPTIMIZING CPU USAGE

  • Submitting CPU-only simulations
    CPU-only simulations reach performance comparable to GPU-accelerated ones only when hundreds of CPU cores are used. It is not uncommon for such jobs to wait in the queue for up to several days before such a significant resource becomes available, especially if a long run time is requested.
  • Benchmarking CPU-only MD Engines
    We calculate CPU usage in core-equivalent years. A core equivalent is a bundle made up of a single core and some memory associated with it. For most systems, one core equivalent includes 4000M of memory per core (a worked cost sketch follows below).
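
To make the cost metric concrete, below is a minimal Python sketch of how CPU years per microsecond can be derived from a benchmark speed and core count, and how a memory request above the 4000M bundle inflates the core-equivalent count. The function names are illustrative, and the core-equivalent rule is an assumption about the accounting rather than code from the database; the CPU-years arithmetic is consistent with the CPUY column in the table below.

```python
def cpu_years_per_microsecond(speed_ns_per_day, n_cores):
    """CPU years needed to simulate 1 microsecond (1000 ns) at the given speed."""
    wall_days = 1000.0 / speed_ns_per_day      # wall-clock days to reach 1 microsecond
    return wall_days * n_cores / 365.0         # core-days converted to core-years


def core_equivalents(n_cores, mem_per_core_mb, bundle_mb=4000):
    """Assumed charging rule: a job counts as the larger of its core count or its
    total memory divided by the 4000M bundled with each core equivalent."""
    return max(n_cores, n_cores * mem_per_core_mb / bundle_mb)


# Benchmark ID 175 below: 6.95 ns/day on 128 cores -> ~50.5 CPU years per microsecond.
print(round(cpu_years_per_microsecond(6.95, 128), 2))    # 50.46
# A hypothetical job requesting 8000M per core on 64 cores counts as 128 core equivalents.
print(core_equivalents(64, 8000))                        # 128.0
```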

Cost Of GPU-accelerated Simulations

*Data updated Sept. 7, 2023
    OPTIMIZING GPU USAGE

  • Parallel scaling to multiple GPUs
    Parallel scaling to multiple GPUs depends strongly on the combination of software, hardware, and simulation parameters. Often simulations do not run faster on multiple GPUs (PMEMD @Cedar example). Simulations on nodes with a direct interconnect between GPUs (NVLink) are more likely to benefit from multiple GPUs, but efficiency decreases and cost goes up with the number of GPUs (NAMD3 @Cedar example); the sketch after this list illustrates the cost arithmetic.
  • Benchmarking GPU-accelerated MD Engines
    For benchmarking we use the optimal number of cores per GPU: the number needed for the fastest simulation, but not exceeding the maximum number of CPU cores per GPU in a GPU equivalent.
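
The same arithmetic shows why multi-GPU runs can cost more per microsecond even when they are only marginally slower or faster. Below is a minimal Python sketch (illustrative function names; assumed formulas inferred from the definitions on this page, with speeds taken from benchmarks 56 and 58 in the table below):

```python
def gpu_years_per_microsecond(speed_ns_per_day, n_gpus):
    """GPU years needed to simulate 1 microsecond (1000 ns) at the given speed."""
    return (1000.0 / speed_ns_per_day) * n_gpus / 365.0


def multi_gpu_efficiency(multi_gpu_speed, single_gpu_speed, n_gpus):
    """Efficiency of an n-GPU run relative to the 1-GPU run: PS / (SS * N)."""
    return multi_gpu_speed / (single_gpu_speed * n_gpus)


# Benchmarks 56 and 58 below: 7.63 ns/day on 1 A100 vs. 7.18 ns/day on 2 A100s.
print(round(gpu_years_per_microsecond(7.63, 1), 3))    # ~0.359 GPU years per microsecond
print(round(gpu_years_per_microsecond(7.18, 2), 3))    # ~0.763, roughly double the cost
print(round(multi_gpu_efficiency(7.18, 7.63, 2), 2))   # ~0.47, poor two-GPU efficiency
```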

BENCHMARK RESULTS

CPUY: CPU years per 1-microsecond simulation | GPUY: GPU years per 1-microsecond simulation | T: tasks | C: cores | N: nodes. Speed is in ns/day. Integration step = 1 fs. Measured with dataset 6n4o (239,131 atoms).

*More information is available by clicking ID in the table below
ID  | Software       | Module              | Toolchain    | Arch   | Data | Speed    | CPU              | CPUeff | CPUY  | GPUY  | T   | C  | N | GPU          | NVLink | Site
56  | NAMD2.cuda     | namd-multicore/2.14 | iccifortcuda | avx2   | 6n4o | 7.63e+00 | EPYC 7413        | 87.4   | 0.0   | 0.359 | 1   | 8  | 1 | 1 A100-SXM4  | Yes    | Narval
121 | NAMD2.cuda     | namd-multicore/2.14 | iimklc       | avx2   | 6n4o | 7.48e+00 | Xeon Silver 4216 | 83.1   | 0.0   | 0.732 | 1   | 16 | 1 | 2 V100-SXM2  | Yes    | Cedar
58  | NAMD2.cuda     | namd-multicore/2.14 | iccifortcuda | avx2   | 6n4o | 7.18e+00 | EPYC 7413        | 41.1   | 0.0   | 0.764 | 1   | 16 | 1 | 2 A100-SXM4  | Yes    | Narval
116 | NAMD2.cuda     | namd-multicore/2.14 | iimklc       | avx2   | 6n4o | 7.15e+00 | Xeon E5-2650     | 51.4   | 0.0   | 1.533 | 1   | 24 | 1 | 4 P100-PCIE  | No     | Cedar
175 | NAMD2.ucx      | namd-ucx/2.14       | iimkl        | avx2   | 6n4o | 6.95e+00 | Xeon E5-2683     | 72.4   | 50.46 | 0.0   | 128 | 1  | 4 | 0            | No     | Graham
135 | NAMD2.cuda.ucx | namd-ucx-smp/2.14   | iccifortcuda | avx2   | 6n4o | 6.78e+00 | EPYC 7413        | 39.1   | 0.0   | 0.809 | 2   | 12 | 1 | 2 A100-SXM4  | Yes    | Narval
119 | NAMD2.cuda.ofi | namd-ofi-smp/2.14   | iimklc       | avx2   | 6n4o | 6.33e+00 | Xeon E5-2650     | 46.8   | 0.0   | 1.731 | 4   | 6  | 1 | 4 P100-PCIE  | No     | Cedar
57  | NAMD2.cuda     | namd-multicore/2.14 | iccifortcuda | avx2   | 6n4o | 6.23e+00 | EPYC 7413        | 35.7   | 0.0   | 0.88  | 1   | 8  | 1 | 2 A100-SXM4  | Yes    | Narval
3   | NAMD2.cuda     | namd-multicore/2.14 | iimklc       | avx512 | 6n4o | 6.17e+00 | Xeon Gold 6148   | 100.0  | 0.0   | 0.444 | 1   | 10 | 1 | 1 V100-PCIE  | No     | Siku
120 | NAMD2.cuda.ofi | namd-ofi-smp/2.14   | iimklc       | avx2   | 6n4o | 6.15e+00 | Xeon E5-2650     | 22.7   | 0.0   | 3.562 | 8   | 6  | 2 | 8 P100-PCIE  | No     | Cedar
Date Updated: Sept. 7, 2023, 12:31 a.m.