
Calculation resources

Description of the available calculation resources

EXPLOR Hardware

| Partition | Node-id | Model | CPU | GPU | # of nodes | # of cores / node | # of total cores | Available memory / node | Default memory / core | Network | Total TFlops |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| std | cna[02-64], cnb[01-64] | DELL C6320 | Intel Xeon E5-2683 v4, 2.1 GHz, AVX2 | n/a | 84 | 32 | 2688 | 120 GB | 3.75 GB | Omni-Path 100 Gb/s | 32.2 |
| std | cnc[01-11,13-48] | DELL C6420 | Intel Xeon Gold 6130, 2.1 GHz, AVX512 | n/a | 42 | 32 | 1344 | 180 GB | 5.625 GB | Omni-Path 100 Gb/s | 16.6 |
| std | cnd[01-04,06-12] | DELL C6220 | Intel Xeon E5-2640 v2, 2.0 GHz, AVX | n/a | 11 | 16 | 192 | 60 GB | 3.75 GB | InfiniBand 40 Gb/s | 0.8 |
| std | cne[01-16] | DELL R630 | Intel Xeon E5-2637 v4, 3.5 GHz, AVX2 | n/a | 16 | 8 | 128 | 120 GB | 15.0 GB | Ethernet 10 Gb/s | 0.4 |
| std | cni[25-28] | Dell C6420 | Intel Xeon Gold 6230, 2.1 GHz, AVX512 | n/a | 4 | 40 | 160 | 180 GB | 4.5 GB | InfiniBand 100 Gb/s | 3.0 |
| std | cnl[01] | Dell R6615 | AMD EPYC 9254, 2.9 GHz, AVX2 | n/a | 1 | 24 | 24 | 240 GB | 10.0 GB | InfiniBand 100 Gb/s | 0.6 |
| gpu | gpb[01-06] | DELL C4130 | Intel Xeon E5-2683 v4, 2.1 GHz, AVX | NVIDIA Tesla P100, 4 GPUs/node | 6 | 32 | 192 | 120 GB | 3.75 GB | Omni-Path 100 Gb/s | 65.1 |
| gpu | gpc[01-04] | DELL T630 | Intel Xeon E5-2683 v4, 2.1 GHz, AVX | GeForce GTX 1080 Ti, 2 GPUs/node | 4 | 32 | 128 | 64 GB | 2.0 GB | Omni-Path 100 Gb/s | 46.6 |
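Partition names and node states can be checked directly from the login node with standard Slurm commands. A minimal sketch, assuming the usual Slurm client tools are available (the format string is illustrative):

```bash
# Summary of all partitions, their time limits and node counts.
sinfo -s

# Per-node view of the std partition: node name, cores, memory (MB), features.
sinfo -p std -N -o "%N %c %m %f"
```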

Hosted Hardware

The mesocentre is also intended to host hardware. Contact us at explor-contact@univ-lorraine.fr for any further information about hosting hardware.

| Partition | Node-id | Model | CPU | GPU | # of nodes | # of cores / node | # of total cores | Available memory / node | Default memory / core | Network | Total TFlops (DP) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mysky | cnc[53-64] | DELL C6420 | Intel Xeon Gold 6130, 2.1 GHz, AVX512 | n/a | 9 | 32 | 288 | 180 GB | 5.6 GB | Omni-Path 100 Gb/s | 4.2 |
| myhf | cnf[02-06] | DELL C6420 | Intel Xeon Gold 5122, 3.6 GHz, AVX512 | n/a | 5 | 8 | 40 | 96 GB | 12.0 GB | Omni-Path 100 Gb/s | 0.2 |
| mycas | cng[01-05,07-09,11-12] | DELL C6820 | Intel Xeon Gold 6230, 2.1 GHz, AVX512 | n/a | 8 | 40 | 320 | 180 GB | 4.50 GB | Omni-Path 100 Gb/s | 6.7 |
| mylhf | cnh[01-02] | DELL R640 | Intel Xeon Gold 5222, 3.8 GHz, AVX512 | n/a | 2 | 8 | 16 | 758 GB | 94.8 GB | Omni-Path 100 Gb/s | 0.1 |
| mylemta | cni[01-16] | Dell C6420 | Intel Xeon Gold 6230, 2.1 GHz, AVX512 | n/a | 16 | 40 | 640 | 180 GB | 4.50 GB | InfiniBand 100 Gb/s | 12.1 |
| mystdcas | cni[17-24,29-32] | Dell C6420 | Intel Xeon Gold 6230, 2.10 GHz, AVX512 | n/a | 12 | 40 | 480 | 180 GB | 4.50 GB | InfiniBand 100 Gb/s | 9.1 |
| mystdepyc | cnj[01-64] | DELL 6425 | AMD EPYC 7413, 2.65 GHz, AVX2 | n/a | 64 | 48 | 3072 | 246 GB | 5.1 GB | InfiniBand 100 Gb/s | 123.5 |
| myhfcas | cnk[01-08] | Dell R640 | Intel Xeon Gold 5222, 3.8 GHz, AVX512 | n/a | 8 | 8 | 64 | 180 GB | 22.5 GB | InfiniBand 100 Gb/s | 0.2 |
| mylpct | cnl[02-04] | Dell R6615 | AMD EPYC 9254, 2.9 GHz, AVX512 | n/a | 3 | 24 | 72 | 240 GB | 10.0 GB | InfiniBand 100 Gb/s | 1.9 |
| mygeo | cnl[05-18] | Dell R6615 | AMD EPYC 9354P, 3.2 GHz, AVX512 | n/a | 14 | 32 | 448 | 512 GB | 16.0 GB | InfiniBand 100 Gb/s | 11.4 |
| myt4 | gpd[01-03] | DELL R740 | Intel Xeon Gold 6126, 2.6 GHz, AVX512 | NVIDIA Tesla T4, 3 GPUs/node | 3 | 24 | 96 | 86 GB | 3.6 GB | InfiniBand 100 Gb/s | 25.0 |
| myrtx6000 | gpe[01-02] | Dell R740 | Intel Xeon Gold 6230, 2.1 GHz, AVX512 | NVIDIA RTX 6000, 2 GPUs/node | 2 | 40 | 80 | 182 GB | 4.55 GB | InfiniBand 100 Gb/s | 31.0 |
| mylpctgpu | gpf[01] | Dell R760xa | Intel Xeon Gold 5218, 2.3 GHz, AVX512 | NVIDIA L40, 4 GPUs/node | 1 | 32 | 32 | 120 GB | 3.75 GB | — | 90.9 |

Details of GPU nodes

| Node | Partition | CPU cores | GPU cards | Memory / node (GB) | Memory / GPU card (GB) | CUDA ver. |
| --- | --- | --- | --- | --- | --- | --- |
| gpb01 | gpu | 32 | 4 | 118 | 16.0 | 12.2 |
| gpb02 | gpu | 32 | 4 | 118 | 16.0 | 12.2 |
| gpb03 | gpu | 32 | 4 | 118 | 16.0 | 12.2 |
| gpb04 | gpu | 32 | 4 | 118 | 16.0 | 12.2 |
| gpb05 | gpu | 32 | 4 | 118 | 16.0 | 12.2 |
| gpb06 | gpu | 32 | 4 | 118 | 16.0 | 12.2 |
| gpc01 | gpu | 32 | 2 | 54 | 11.0 | 12.2 |
| gpc02 | gpu | 32 | 2 | 54 | 11.0 | 12.2 |
| gpc03 | gpu | 32 | 2 | 54 | 11.0 | 10.1 |
| gpc04 | gpu | 32 | 2 | 54 | 11.0 | 10.1 |
| gpd01 | gpu | 24 | 3 | 86 | 15.0 | 12.2 |
| gpd02 | gpu | 24 | 3 | 86 | 15.0 | 12.2 |
| gpd03 | gpu | 24 | 3 | 86 | 15.0 | 12.2 |
| gpe01 | gpu | 40 | 2 | 182 | 22.5 | 12.2 |
| gpe02 | gpu | 40 | 2 | 182 | 22.5 | 12.2 |
| gpf01 | gpu | 32 | 4 | 118 | 44.9 | 12.4 |
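If, as on most Slurm clusters, the GPU cards listed above are exposed through the GRES plugin, a batch job can request them explicitly. A minimal sketch, assuming the GRES name is `gpu` (check the site documentation for the exact syntax):

```bash
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu        # GPU partition from the tables above
#SBATCH --nodes=1
#SBATCH --gres=gpu:1           # one GPU card, assuming GRES type "gpu"
#SBATCH --time=0-02:00:00      # 2 hours, well within the duration limits below

# Print the GPU(s) visible to the job; replace with your real application.
nvidia-smi
```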

Global Computing Power

Total Power: 480 TFlops

| # of nodes | # of cores | CPU power (TFlops) | GPU power (TFlops) |
| --- | --- | --- | --- |
| 315 | 10496 | 220 | 260 |

Intel Skylake Processors (6th generation):

The "skylake" nodes cnc[01-64] and cnf[02-06] include 56 nodes connected on a low-latency 100GB network. Each node has two Skylake processors Intel Xeon Gold 6130 Processor (each processor has 16 cores clocked at 2.10 GHz, i.e., 32 cores per node) and 192 GB of RAM. This configuration is suitable for massively parallel applications.

Important

The only real difference between Skylake (6th generation) and Broadwell (5th generation) processors is that Skylake implements more efficient computation instructions. Therefore Skylake can perform more operations per second (flops) than Broadwell — provided programs are recompiled with the proper options to enable use of the new registers (AVX512).
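Concretely, with GCC this amounts to choosing the architecture flag at compile time; the flags below are illustrative, not the site's documented toolchain:

```bash
# Broadwell build: AVX2 instructions, runs on all std nodes.
gcc -O3 -march=broadwell -o app_avx2 app.c

# Skylake build: enables the AVX512 registers and instructions,
# but will only run on AVX512-capable nodes (cnc, cnf, cng, cni, ...).
gcc -O3 -march=skylake-avx512 -o app_avx512 app.c
```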

Intel Broadwell Processors (5th generation):

The "standard" nodes cn(a/b)[01-64] include 83 nodes connected on a low-latency 100GB network. Each node has two Broadwell processors Intel Xeon E5-2683 v4 (each processor has 16 cores clocked at 2.10 GHz, i.e., 32 cores per node) and 128 GB of RAM. This configuration is suitable for massively parallel applications.

The "high-frequency" nodes cne[01-16] include 16 nodes connected on a 10GB Ethernet network. Each node has two Intel Xeon E5-2637 v4 processors (each with 4 cores clocked at 3.5 GHz) and 128 GB of RAM. These nodes are suited for serial or lightly parallelized applications.

The GPU nodes gpb[01-06] comprise 6 nodes connected by a low-latency 100 Gb/s Omni-Path network. Each node has two Intel Xeon E5-2683 v4 Broadwell processors (16 cores at 2.10 GHz each, i.e. 32 cores per node), 128 GB of RAM and 4 NVIDIA Tesla P100 cards.

The GPU "GTX" nodes gpc[01-04] include 4 nodes connected on a low-latency 100GB network. Each node has two Broadwell processors Intel Xeon E5-2683 v4 (each with 16 cores at 2.10 GHz, i.e., 32 cores per node), 128 GB RAM and 2 GEFORCE GTX 1080 Ti cards.

Intel Ivy Bridge Processors (3rd generation):

The cnd[01-04,06-12] nodes comprise 11 nodes connected by a low-latency 40 Gb/s InfiniBand network. Each node has two Intel Xeon E5-2640 v2 processors (8 cores per processor, clocked at 2.0 GHz) and 64 GB of RAM. These nodes are suitable for parallel applications.

Attention

The general allocation in the std partition now includes nodes from the exclusive partitions (myXXX). If you do not select specific nodes, your job may be preempted by higher-priority jobs belonging to the owners of the myXXX nodes. To avoid this, select appropriate nodes while excluding the private ones, as shown in the sketch below. For instructions, please check items 4.3 and 5.
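One generic way to keep a std job off the private nodes is Slurm's `--exclude` option. A hedged sketch — the excluded node list is illustrative, since which myXXX nodes are folded into std must be checked on site:

```bash
# Submit to std while excluding (for example) the hosted cnj nodes,
# so the job cannot be preempted by their owners' higher-priority jobs.
sbatch --partition=std --exclude=cnj[01-64] job.sh
```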

Job duration and resource limits

The following resource and time limits apply at the mesocentre to each submitted job:

| Max # of nodes / job | Max duration / job (days) |
| --- | --- |
| 1 | 16 |
| 2 | 16 |
| 4 | 8 |
| 8 | 4 |
| 16 | 4 |
| 32 | 2 |

The maximum number of active jobs is determined by the total number of CPUs allocated, which is limited to 2048 per user. Jobs submitted beyond this limit will be placed in a queue and processed once earlier jobs have finished.
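The CPUs currently counted against this 2048-CPU cap can be inspected with standard Slurm commands; a sketch:

```bash
# Job ID and allocated CPU count (%C) for your running jobs.
squeue -u "$USER" -t RUNNING -o "%.10i %.6C"
```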

Note

Job submission will be refused if the maximum duration is not specified or if the requested resources exceed the limits above.
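In practice, every submission therefore needs an explicit `--time` value consistent with the limits table above; for example:

```bash
# 8 nodes => at most 4 days; Slurm's time format is D-HH:MM:SS.
sbatch --nodes=8 --time=4-00:00:00 job.sh
```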