Partition organization

A new partition organization has been in place in EXPLOR since July 25, 2022. The new layout is shown below:

Screenshot 1: overview of the new partition layout

Attention

The general allocation in the std partition now includes the nodes of the former freeXXX partition. If you do not restrict the nodes your job can use, it may be terminated (preempted) in favor of higher-priority jobs belonging to the owners of the MyXXX nodes. To avoid this, select suitable nodes while excluding the private nodes; for instructions, see items (4.3) and (5).

Association table

Partition | Hostname | # nodes | # CPU/node | CPU memory (Mb) | Node memory (Gb) | FEATURES
debug | cna[01] | 1 | 64 | 2000 | 118 | BROADWELL,OPA,INTEL
std | cna[02...64] | 53 | 32 | 3750 | 118 | BROADWELL,OPA,INTEL
std | cnb[01...62] | 30 | 32 | 3750 | 118 | BROADWELL,OPA,INTEL
std | cnc[01...64] | 51 | 32 | 5625 | 182 | SKYLAKE,OPA,INTEL
std | cnd[01...12] | 11 | 16 | 3750 | 54 | IVY,IB,INTEL
std | cne[01...16] | 16 | 8 | 15000 | 118 | BROADWELL,INTEL,HF
std | cnf[01...08] | 5 | 8 | 12000 | 86 | SKYLAKE,OPA,INTEL,HF
std | cng[01...12] | 8 | 40 | 4500 | 182 | CASCADELAKE,OPA,INTEL
std | cnh[01...02] | 2 | 8 | 94000 | 758 | CASCADELAKE,OPA,INTEL,HF
std | cni[01...16] | 16 | 40 | 9000 | 354 | CASCADELAKE,IB,INTEL
std | cni[16...32] | 16 | 40 | 4500 | 182 | CASCADELAKE,IB,INTEL
std | cnj[01...64] | 64 | 48 | 5200 | 246 | EPYC3,IB,AMD
std | cnk[01...08] | 8 | 8 | 22500 | 182 | CASCADELAKE,IB,INTEL,HF
std | cnl[01...04] | 4 | 24 | 241 | 241 | EPYC4,IB,AMD
std | cnl[05...18] | 14 | 32 | 10000 | 500 | EPYC4,IB,AMD
gpu | gpb[01...06] | 6 | 32 | 3750 | 118 | BROADWELL,OPA,P1000,INTEL
gpu | gpc[01...04] | 4 | 32 | 2000 | 54 | BROADWELL,OPA,GTX1080TI,INTEL
gpu | gpd[01...03] | 3 | 24 | 3600 | 86 | CASCADELAKE,OPA,T4,INTEL
gpu | gpe[01...02] | 2 | 40 | 4500 | 182 | CASCADELAKE,IB,RTX6000,INTEL
gpu | gpf[01] | 1 | 32 | 3750 | 118 | CASCADELAKE,L40,INTEL
myXXX | - | - | - | - | - | -

myXXX associations table

Partition | Hostname | # nodes | # CPU/node | Memory (Gb) | Preemption
mysky | cnc[53,55-57,59-63] | 9 | 32 | 182 | Yes
myhf | cnf[02-06] | 5 | 8 | 86 | Yes
mycas | cng[01-02,04-05,07,09,11-12] | 8 | 40 | 182 | Yes
mylhf | cnh[01-02] | 2 | 8 | 758 | Yes
mylemta | cni[01-16] | 16 | 40 | 354 | Yes
mystdcas | cni[16-24,29-32] | 8 | 40 | 182 | Yes
mystdepyc | cnj[01-64] | 64 | 48 | 246 | No
myhfcas | cnk[01-08] | 8 | 8 | 182 | Yes
mylpct | cnl[02-04] | 3 | 24 | 241 | Yes
mygeo | cnl[05-18] | 14 | 32 | 500 | No
myt4 | gpd[01-03] | 3 | 24 | 86 | Yes
myrtx6000 | gpe[01-02] | 2 | 40 | 182 | Yes
mylpctgpu | gpf[01] | 1 | 32 | 118 | Yes
Note

The hostnames cnXX correspond to CPU nodes and gpXX to GPU nodes.
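
You can check this information directly on the cluster: sinfo can list the nodes of a partition together with their CPU count, memory, and FEATURES. A minimal example (the column widths are only a formatting choice):

# List the nodes of the std partition with their CPUs, memory (Mb) and FEATURES
sinfo --partition=std --Node --format="%12N %5c %8m %f"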

Submission instructions

(1) All submissions must always include

#SBATCH --account=MY_GROUP

or

#SBATCH -A MY_GROUP

(1.1) Special information – MyXXX submission with a different project association

To use a different project association when you have multiple projects, remove the option #SBATCH -A/--account from your script and add it externally on the command line:

sbatch --account MY_GROUP my_subm_script.slurm

or

sbatch -A MY_GROUP my_subm_script.slurm

MY_GROUP should be your project identifier; you can verify it in your terminal prompt:

[<user>@vm-<MY_GROUP> ~]
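
If you hold several projects and are unsure which account names you can submit with, the SLURM accounting database can usually be queried directly. This is a sketch assuming sacctmgr queries are allowed for regular users on EXPLOR:

# List the accounts (projects) associated with your user
sacctmgr show associations user=$USER format=Account,User,Partition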

(2) General cases where you do not need a special machine

Attention

The general allocation in the std partition now includes the nodes of the former freeXXX partition. If you do not restrict the nodes your job can use, it may be terminated (preempted) in favor of higher-priority jobs belonging to the owners of the MyXXX nodes. To avoid this, select suitable nodes while excluding the private nodes; for instructions, see item (4).

Important

The private MyXXX partitions that are non-preemptible are: mystdepyc (cnj[01-64]) and mygeo (cnl[05-18]).

(2.1) any type of machine in std

#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

or

#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4
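
Put together, a complete submission script for this case could look like the sketch below; the wall-time limit, module name and executable are placeholders to adapt to your own application:

#!/bin/bash
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=01:00:00

# Load your software environment (placeholder module name)
# module load my_application

# Run the 4 requested tasks (replace ./my_app with your executable)
srun ./my_app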

(2.2) any type of machine in gpu

#SBATCH --account=MY_GROUP
#SBATCH --partition=gpu
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2

or

#SBATCH -A MY_GROUP
#SBATCH -p gpu
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --gres=gpu:2
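
As an illustration, a full GPU script assembled from these directives might look as follows; nvidia-smi is only used to confirm that the two requested GPUs are visible, and the executable is a placeholder:

#!/bin/bash
#SBATCH --account=MY_GROUP
#SBATCH --partition=gpu
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2
#SBATCH --time=01:00:00

# Show the GPUs allocated to this job
nvidia-smi

# Run your GPU application (placeholder executable)
# srun ./my_gpu_app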

(3) Hosted hardware – continues as before

The mystdcasXXX partitions are now accessible through a single mystdcas partition:

(mystdcaslemta, mystdcasijl, mystdcascrm2) ==> mystdcas

For example, to submit to a hosted partition such as mycas:

#SBATCH --account=MY_GROUP
#SBATCH --partition=mycas
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

or

#SBATCH -A MY_GROUP
#SBATCH -p mycas
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4
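
Before submitting to a hosted partition, you can inspect its definition and current availability; for example, for mycas:

# Show the nodes, limits and allowed accounts of the mycas partition
scontrol show partition mycas

# List its nodes that are currently idle
sinfo --partition=mycas --states=idle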

(4) Precise node selection

Selecting specific nodes is done via the FEATURES shown in the association table above. Examples:

#SBATCH --constraint=SOMETHING_FROM_FEATURES
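
The FEATURE strings accepted by --constraint are those advertised by the nodes themselves, so you can list them on the cluster instead of relying only on the table above:

# List the distinct FEATURE sets in the std partition and how many nodes carry each
sinfo --partition=std --format="%60f %D"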

(4.1) Select nodes from the former sky partition

#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --constraint=SKYLAKE,OPA,INTEL
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

or

#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -C SKYLAKE,OPA,INTEL
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4

Screenshot 3
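
Once submitted, you can verify that the constraint was recorded and see which node(s) the job received; <JOBID> is a placeholder for your job number:

# Show the requested features and the allocated node list of a job
scontrol show job <JOBID> | grep -E "Features|NodeList"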

(4.2) Select nodes from the former p100 partition

#SBATCH --account=MY_GROUP
#SBATCH --partition=gpu
#SBATCH --constraint=BROADWELL,OPA,P100,INTEL
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2

or

#SBATCH -A MY_GROUP
#SBATCH -p gpu
#SBATCH -C BROADWELL,OPA,P100,INTEL
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --gres=gpu:2

Screenshot 4
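
To check which GPU nodes and FEATURES are currently available (and therefore which constraint strings make sense), sinfo can also display the generic resources (GRES):

# List gpu-partition nodes with their GPUs (GRES) and FEATURES
sinfo --partition=gpu --Node --format="%10N %14G %f"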

(4.3) Exclude all nodes from the former freeXXX/MyXXX machines and select all other legacy nodes (std, sky, ivy, hf)

#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --constraint=NOPREEMPT
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

or

#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -C NOPREEMPT
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4

Screenshot 5
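
To see exactly which std nodes advertise the NOPREEMPT feature (i.e. the nodes where your job cannot be preempted), filter the node list on that feature:

# List the std nodes carrying the NOPREEMPT feature
sinfo --partition=std --Node --format="%10N %f" | grep NOPREEMPT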

(5) Restart/requeue preempted jobs

This feature is not intended for jobs that end with an error; it is for jobs that have been removed from execution by the preemption rule. If you want to submit to all machines of the std partition, accepting that on some nodes your job may be preempted in favor of a higher-priority job, you can use the requeue feature (--requeue):

#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --requeue
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
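
A requeue-aware script usually also appends to its output file instead of overwriting it and can detect a restart through the SLURM_RESTART_COUNT environment variable. The sketch below illustrates this; the restart handling (for example reloading a checkpoint) and the executable are placeholders:

#!/bin/bash
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --requeue
#SBATCH --open-mode=append
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

# SLURM_RESTART_COUNT is greater than 0 when the job has been requeued (e.g. after preemption)
if [ "${SLURM_RESTART_COUNT:-0}" -gt 0 ]; then
    echo "Job ${SLURM_JOB_ID} restarted ${SLURM_RESTART_COUNT} time(s); resuming"
    # e.g. restart from the latest checkpoint written by your application
fi

srun ./my_app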