Example SLURM script

NOTE: gpunode enforces a 48-hour limit on the length of interactive jobs.
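For example, an interactive session that stays within this limit might be requested with srun (the partition, node, and time values shown here are only illustrative; adjust them to your needs):

srun -p GPU -w gpunode00 -t 24:00:00 --pty bash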

Find more details about how to get started with SLURM at


An actual example. Copy the following script into a separate file (here called example.slurm, to match the copy step below), with no blank lines above the #!/bin/bash line:

#!/bin/bash
# provide a short name
#SBATCH --job-name=MY_JOB
# specify a time limit (100-0 means 100 days, in SLURM's days-hours format)
#SBATCH -t 100-0
# specify the number of tasks (cores) to use
#SBATCH -n 64
# specify the partition to use (options are default, GPU, and desperate)
#SBATCH -p GPU
# specify a specific node to use
#SBATCH --nodelist=gpunode00

# create a new directory named with the SLURM job ID (assigned when the
# job is submitted). Be sure to use a directory on a filesystem with
# high-speed I/O
export WRK_DIR=$PWD/job-$SLURM_JOB_ID
mkdir -p $WRK_DIR

# set up spack environment variables
source /usr/local/spack/share/spack/setup-env.sh

# load any special package environments (check spack package info)
spack load /rmsur
# load OpenMPI%GCC
spack load /ej4xe2g
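# NOTE: the hashes used above (/rmsur and /ej4xe2g) are specific to this
# cluster's spack installation. To look up the hash of an installed
# package, use "spack find -l", for example:
#   spack find -l openmpi
# (the package name here is only an example)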

# copy any files needed into $WRK_DIR
cp $PWD/DISP.supercell_template.in.001 $PWD/example.slurm *UPF $WRK_DIR

# change into $WRK_DIR
cd $WRK_DIR

# set the number of OpenMP threads per MPI task
export OMP_NUM_THREADS=1

# issue the "run" command
mpirun -n $SLURM_NTASKS \
  pw.x < /scratch/pwc2a/testing/QuantumEspresso/DISP.supercell_template.in.001 \
  > $SLURM_JOB_ID.out

# delete any files that don't need to remain in the directory after
# the job is complete
rm *UPF
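
Once the script is saved as example.slurm (the file name used in the copy step above), a typical way to submit it and check on the job is:

sbatch example.slurm
squeue -u $USER      # check the job's status in the queue
scancel <jobid>      # cancel the job if needed, using the ID printed by sbatch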