Organization of partitions

Since July 25, 2022, a new partition organization has been implemented in EXPLOR. The new organization is shown below:

Screenshot1

Please note: the general std and gpu partitions now include the nodes of the former freeXXX partitions. These nodes are privately owned, so if you do not select nodes explicitly, your job may be preempted (terminated) to give priority to jobs submitted by the owners through the MyXXX partitions. To avoid this, restrict your job to non-private nodes as explained in items (4.3) and (5).


Table of associations

Before July 25, 2022 | After July 25, 2022 | Hostname   | # nodes | # CPU | FEATURE
std                  | std                 | cna[01-64] | 64      | 32    | BROADWELL,OPA,INTEL
std                  | std                 | cnb[01-64] | 64      | 32    | BROADWELL,OPA,INTEL
sky                  | std                 | cnc[01-64] | 64      | 32    | SKYLAKE,OPA,INTEL
ivy                  | std                 | cnd[01-12] | 12      | 16    | IVY,IB,INTEL
hf                   | std                 | cne[01-16] | 16      | 8     | BROADWELL,INTEL,HF
freehf               | std                 | cnf[01-08] | 8       | 8     | SKYLAKE,OPA,INTEL,HF
freecas              | std                 | cng[01-12] | 12      | 40    | CASCADELAKE,OPA,INTEL
freelhf              | std                 | cnh[01-02] | 2       | 8     | CASCADELAKE,OPA,INTEL,HF
freestdcas           | std                 | cni[01-24] | 24      | 40    | CASCADELAKE,IB,INTEL
freestdepyc          | std                 | cnj[01-64] | 64      | 48    | EPYC3,IB,AMD
freehfcas            | std                 | cnk[01-08] | 8       | 8     | CASCADELAKE,IB,INTEL,HF
--                   | std                 | cnl[01-18] | 18      | 32    | EPYC4,IB,AMD
p100                 | gpu                 | gpb[01-06] | 6       | 32    | BROADWELL,OPA,P100,INTEL
gtx                  | gpu                 | gpc[01-04] | 4       | 32    | BROADWELL,OPA,GTX1080TI,INTEL
freet4               | gpu                 | gpd[01-03] | 3       | 24    | CASCADELAKE,OPA,T4,INTEL
freertx6000std       | gpu                 | gpe[01-02] | 2       | 40    | CASCADELAKE,IB,RTX6000,INTEL
--                   | gpu                 | gpf[01]    | 1       | 32    | CASCADELAKE,L40,INTEL
myXXX                | myXXX               | ---        | --      | --    | ---------

Note: the mystdcasXXX partitions are now merged into a single mystdcas partition:
(mystdcaslemta, mystdcasijl, mystdcascrm2) ==> mystdcas


Submission instructions

(1) All submissions must always contain

#SBATCH --account=MY_GROUP

or

#SBATCH -A MY_GROUP


(1.1) Special information – MyXXX submission with a different project association

If you are associated with several projects and want to submit under a different one, remove the #SBATCH -A/--account line from your script and pass the account on the sbatch command line instead:

% sbatch --account MY_GROUP my_subm_script.slurm

or

% sbatch -A MY_GROUP my_subm_script.slurm

MY_GROUP is your project ID, which appears in your terminal prompt:

[<user>@vm-<MY_GROUP> ~]
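
If you are not sure which account names Slurm accepts for your user, you can list your associations yourself; a quick check, assuming the sacctmgr client is available on the login nodes:

# List the Slurm accounts (project IDs) and partitions associated with your user
sacctmgr show associations user=$USER format=Account%20,Partition%20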


(2) In general, when you do not need a specific machine

Please note: the general partitions now include the nodes of the former freeXXX partitions. If you do not select nodes explicitly, your job may be preempted (terminated) to give priority to jobs from the owners of MyXXX nodes. To avoid this, restrict your job to non-private nodes as explained in item (4).


(2.1) All machine types in the std partition

#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

or

#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4
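
Put together in a single file, a minimal std submission script could look like the sketch below (my_program is a placeholder executable; load the modules your application needs before the srun line):

#!/bin/bash
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

# Run the 4 requested tasks (my_program is a placeholder for your own executable)
srun ./my_program

Submit it from a login node with: % sbatch my_subm_script.slurm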


(2.2) All machine types in the gpu partition

#SBATCH --account=MY_GROUP
#SBATCH --partition=gpu
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2

or

#SBATCH -A MY_GROUP
#SBATCH -p gpu
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --gres=gpu:2
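
As a complete file, a minimal GPU submission script could look like the sketch below (my_gpu_program is a placeholder; Slurm makes the two GPUs requested with --gres visible to the job):

#!/bin/bash
#SBATCH --account=MY_GROUP
#SBATCH --partition=gpu
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2

# One task with two GPUs allocated by Slurm (my_gpu_program is a placeholder)
srun ./my_gpu_program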


(3) Hosted hardware (MyXXX) – works as before

The mystdcasXXX partitions are now merged into a single mystdcas partition:

(mystdcaslemta, mystdcasijl, mystdcascrm2) ==> mystdcas

#SBATCH --account=MY_GROUP
#SBATCH --partition=mycas
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

or

#SBATCH -A MY_GROUP
#SBATCH -p mycas
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4
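
Before submitting, you can check the definition of the private partition used in your script (nodes, limits, state); a quick check, using the mycas name from the example above:

# Display the nodes and limits of the mycas partition
scontrol show partition mycas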


(4) Precise selection of nodes

Specific nodes are selected using the FEATURES shown in the association table above. See examples below:

#SBATCH --constraint=SOMETHING_FROM_FEATURES
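
The FEATURE strings accepted by --constraint are the node features declared in Slurm, so you can also list them directly on the cluster (the field widths below are only for readability):

# Show, for the std and gpu partitions, each group of nodes and its feature list
sinfo -p std,gpu -o "%12P %30N %f"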


(4.1) Selecting nodes from the former sky partition

#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --constraint=SKYLAKE,OPA,INTEL
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

or

#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -C SKYLAKE,OPA,INTEL
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4


Screenshot3
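
You can also check a single node to confirm that it carries the features used in the constraint (cnc01 is taken here only because it belongs to the former sky machines):

# Display the feature list of one node of the former sky partition
scontrol show node cnc01 | grep -i features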

(4.2) Selecting nodes from the former p100 partition

#SBATCH --account=MY_GROUP
#SBATCH --partition=gpu
#SBATCH --constraint=BROADWELL,OPA,P100,INTEL
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:2

or

#SBATCH -A MY_GROUP
#SBATCH -p gpu
#SBATCH -C BROADWELL,OPA,P100,INTEL
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --gres=gpu:2


Screenshot4
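
For GPU nodes it can help to display the generic resources (GRES) next to the features, to see how many GPUs of each type a node offers:

# %G prints the GRES (GPU type and count) of each node in the gpu partition
sinfo -p gpu -o "%12N %25G %f"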

(4.3) Excluding the nodes of the former freeXXX/MyXXX machines and selecting only the other former nodes (std, sky, ivy, hf)

#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --constraint=NOPREEMPT
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

or

#SBATCH -A MY_GROUP
#SBATCH -p std
#SBATCH -C NOPREEMPT
#SBATCH -J Test
#SBATCH -N 1
#SBATCH -n 4

Screenshot5

(5) Restarting/requeuing preempted jobs

This feature is not for jobs that end with an error; it is for jobs that were removed from execution by the preemption rule. If you want to submit to all machines of the std partition, accepting that on some nodes your job may be killed to give way to a higher-priority job, you can ask Slurm to put the job back in the queue with --requeue:

#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --requeue
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4
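
A complete requeue-enabled script could look like the sketch below. The --open-mode=append option is optional but keeps the output of all attempts in the same file instead of truncating it at each restart; note that a requeued job starts again from the beginning unless your application writes its own checkpoints (my_program is a placeholder):

#!/bin/bash
#SBATCH --account=MY_GROUP
#SBATCH --partition=std
#SBATCH --requeue
#SBATCH --open-mode=append
#SBATCH --job-name=Test
#SBATCH --nodes=1
#SBATCH --ntasks=4

# If the job is preempted on a former freeXXX/MyXXX node, Slurm puts it back
# in the queue and it restarts from the beginning on the next available nodes.
srun ./my_program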