Organization of partitions
Since July 25, 2022, a new partition organization has been implemented in EXPLOR. The new organization is shown below:
Attention
Please note: the general allocation in the
Table of associations
Partition | Hostname | # Nodes | # CPUs | Memory (GB) | Features |
---|---|---|---|---|---|
debug | cna[01] | 1 | 64 | 118 | BROADWELL,OPA,INTEL |
std | cna[02...64] | 55 | 32 | 118 | BROADWELL,OPA,INTEL |
std | cnb[01...62] | 35 | 32 | 118 | BROADWELL,OPA,INTEL |
std | cnc[01...64] | 59 | 32 | 182 | SKYLAKE,OPA,INTEL |
std | cnd[01...12] | 11 | 16 | 54 | IVY,IB,INTEL |
std | cne[01...16] | 16 | 8 | 118 | BROADWELL,INTEL,HF |
std | cnf[01...08] | 7 | 8 | 86 | SKYLAKE,OPA,INTEL,HF |
std | cng[01...12] | 9 | 40 | 182 | CASCADELAKE,OPA,INTEL |
std | cnh[01...02] | 2 | 8 | 758 | CASCADELAKE,OPA,INTEL,HF |
std | cni[01...16] | 16 | 40 | 354 | CASCADELAKE,IB,INTEL |
std | cni[16...32] | 16 | 40 | 182 | CASCADELAKE,IB,INTEL |
std | cnj[01...64] | 64 | 48 | 246 | EPYC3,IB,AMD |
std | cnk[01...08] | 8 | 8 | 182 | CASCADELAKE,IB,INTEL,HF |
std | cnl[01...04] | 4 | 24 | 241 | EPYC4,IB,AMD |
std | cnl[05...18] | 14 | 32 | 500 | EPYC4,IB,AMD |
gpu | gpb[01...06] | 6 | 32 | 118 | BROADWELL,OPA,P1000,INTEL |
gpu | gpc[01...04] | 4 | 32 | 54 | BROADWELL,OPA,GTX1080TI,INTEL |
gpu | gpd[01...03] | 3 | 24 | 86 | CASCADELAKE,OPA,T4,INTEL |
gpu | gpe[01...02] | 2 | 40 | 182 | CASCADELAKE,IB,RTX6000,INTEL |
gpu | gpf[01] | 1 | 32 | 118 | CASCADELAKE,L40,INTEL |
myXXX | see the table of myXXX associations below | -- | -- | -- | -- |
Table of myXXX Associations
Partition | Hostname | # Nodes | # CPUs | Memory (GB) | Preemptible |
---|---|---|---|---|---|
mysky | cnc[53-64] | 12 | 32 | 182 | Yes |
myhf | cnf[01-07] | 7 | 8 | 86 | Yes |
mycas | cng[01-02,04-05,07-12] | 9 | 40 | 182 | Yes |
mylhf | cnh[01-02] | 2 | 8 | 758 | Yes |
mylemta | cni[01-16] | 16 | 40 | 354 | Yes |
mystdcas | cni[16-24,29-32] | 8 | 40 | 182 | Yes |
mystdepyc | cnj[01-64] | 64 | 48 | 246 | No |
myhfcas | cnk[01-08] | 8 | 8 | 182 | Yes |
mylpct | cnl[02-04] | 3 | 24 | 241 | Yes |
mygeo | cnl[05-18] | 14 | 32 | 500 | No |
myt4 | gpd[01-03] | 3 | 24 | 86 | Yes |
myrtx6000 | gpe[01-02] | 2 | 40 | 182 | Yes |
mylpctgpu | gpf[01] | 1 | 32 | 118 | Yes |
Note
Node hostnames beginning with cn (cnXX) are CPU compute nodes, while those beginning with gp (gpXX) are GPU nodes.
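The tables above reflect the configuration at the time of writing. The live layout can be checked from a login node, for example with standard sinfo format options (partition, node count, CPUs per node, memory in MB, features):
% sinfo -o "%P %D %c %m %f"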
Submission instructions
(1) All submissions must always contain an account directive:
#SBATCH --account=MY_GROUP
or
#SBATCH -A MY_GROUP
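For context, a minimal complete submission script using this directive might look as follows (a sketch assuming an environment-modules setup; my_module and my_program are placeholders to replace with your own software and executable):
#!/bin/bash
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
# load the software environment (placeholder module name)
module load my_module
# launch the program on the allocated resources (placeholder executable)
srun ./my_program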
(1.1) Special information – myXXX submission with a different project association
If you belong to several projects and want to submit under a different project association, remove the #SBATCH -A/--account
directive from your script and pass the account on the command line instead:
% sbatch --account MY_GROUP my_subm_script.slurm
or
% sbatch -A MY_GROUP my_subm_script.slurm
MY_GROUP is your project ID, which you can read from your terminal prompt:
[<user>@vm-<MY_GROUP> ~]
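If you are unsure of the exact account names for your projects, the associations recorded in Slurm can be listed directly (a sacctmgr sketch; the fields shown depend on the site's accounting configuration):
% sacctmgr show associations where user=$USER format=Account,Partition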
(2) In general, when you do not need a specific type of machine
Important
The non-preemptible private myXXX partitions are mystdepyc (cnj[01-64]) and mygeo (cnl[05-18]).
(2.1) Any type of machine in the std partition
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
or
#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4
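Once saved (for example as job_std.slurm, a placeholder name), the script is submitted with sbatch and the job can then be followed with squeue:
% sbatch job_std.slurm
% squeue -u $USER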
(2.2) Any type of machine in the gpu partition
#SBATCH --account=MY_GROUP
#SBATCH --partition=gpu
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2
or
#SBATCH -A MY_GROUP
#SBATCH -p gpu
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --gres=gpu:2
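Inside a GPU job it can be useful to confirm which devices were actually allocated. The lines below could be appended to the body of the script, assuming nvidia-smi is available on the GPU nodes and that Slurm exports CUDA_VISIBLE_DEVICES for --gres jobs:
# show the GPUs visible to this job
echo "CUDA_VISIBLE_DEVICES = ${CUDA_VISIBLE_DEVICES:-unset}"
nvidia-smi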
(3) Hosted hardware – continue as before
The former mystdcasXXX partitions are now accessed through a single mystdcas partition:
(mystdcaslemta, mystdcasijl, mystdcascrm2) ==> mystdcas
#SBATCH --account=MY_GROUP
#SBATCH --partition=mycas
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
or
#SBATCH -A MY_GROUP
#SBATCH -p mycas
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4
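Before submitting to a private myXXX partition, its current state and load can be checked with the usual Slurm commands, e.g. for mycas:
% sinfo -p mycas
% squeue -p mycas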
(4) Precise selection of nodes
Specific nodes are selected using the FEATURES shown in the association table above. See examples below:
#SBATCH --constraint=SOMETHING_FROM_FEATURES
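The features advertised by each node can be listed directly, which helps when composing a --constraint expression (node-oriented sinfo output; %N is the node name and %f its feature list):
% sinfo -N -p std -o "%N %f"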
(4.1) Selection of nodes from the old sky partition
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --constraint=SKYLAKE,OPA,INTEL
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
or
#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -C SKYLAKE,OPA,INTEL
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4
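If a job can run on more than one processor generation, Slurm also accepts constraints combined with a logical OR (the | operator); for example, to accept either Skylake or Cascade Lake nodes (quotes are needed when the expression is given on the command line):
#SBATCH --constraint="SKYLAKE|CASCADELAKE"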
(4.2) Selection of GPU nodes from an old partition
#SBATCH --account=MY_GROUP
#SBATCH --partition=gpu
#SBATCH --constraint=BROADWELL,OPA,P100,INTEL
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2
or
#SBATCH -A MY_GROUP
#SBATCH -p gpu
#SBATCH -C BROADWELL,OPA,P100,INTEL
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --gres=gpu:2
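To see which GPU nodes match a given feature set, the gpu partition can be inspected together with its generic resources (%G prints the GRES, i.e. the GPU count and, where configured in GRES, the GPU type):
% sinfo -N -p gpu -o "%N %f %G"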
(4.3) Excluding all nodes of the old freeXXX/myXXX machines and selecting all the other old nodes (std, sky, ivy, hf)
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --constraint=NOPREEMPT
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
or
#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -C NOPREEMPT
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4
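The std nodes carrying the NOPREEMPT feature, i.e. those that never belong to a private myXXX partition, can be listed with the same kind of query:
% sinfo -N -p std -o "%N %f" | grep NOPREEMPT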
(5) Restart/requeue preempted jobs
This is not a feature for jobs that terminate with an error.
It is a feature for jobs that may have been removed from execution by preemption; adding --requeue asks Slurm to put such jobs back in the queue automatically.
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --requeue
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
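After the fact, the accounting records show whether a job was preempted and restarted; for example (JOBID is a placeholder for the numeric job ID):
% sacct -j JOBID --format=JobID,State,Start,End,NodeList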