Fluent
Fluent/Abaqus License
Users of Fluent/Abaqus software on EXPLOR must declare the number of licenses in their submission scripts. The number of licenses requested must match the number of MPI processes the script will run. On EXPLOR, the maximum number of active licenses is 320 (one per MPI process). This pool of 320 licenses is shared by all EXPLOR users; licenses are enforced as a SLURM resource when JOBs are allocated. Processes running without the required license "TAG" (Fluent/Abaqus) can be terminated without warning.
- Fluent – maximum licenses: 320
- Abaqus – maximum licenses: 320
Example: Fluent JOB with 64 MPI processes
#SBATCH --licenses=fluent:64
or
#SBATCH -L fluent:64
Example: Abaqus JOB with 128 MPI processes
#SBATCH --licenses=abaqus:128
or
#SBATCH -L abaqus:128
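The license request can also be supplied on the command line at submission time instead of inside the script; a flag given to sbatch overrides the corresponding #SBATCH directive. A minimal sketch, assuming a submission script named job.sh (a hypothetical file name):

```shell
# Request the license TAG on the sbatch command line
# (overrides any #SBATCH --licenses directive in job.sh).
sbatch -L fluent:64 job.sh

# Interactive allocations can reserve licenses the same way:
salloc -L abaqus:8 -n 8
```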
When submitting a JOB that requests a license TAG, the JOB may not start immediately even if compute nodes are available: this means all Fluent/Abaqus licenses are currently in use. The JOB then waits until the requested license resource is free.
You can check license status with:
scontrol show lic fluent
LicenseName=fluent
Total=320 Used=0 Free=320 Reserved=0 Remote=no
scontrol show lic abaqus
LicenseName=abaqus
Total=320 Used=0 Free=320 Reserved=0 Remote=no
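To see whether one of your own JOBs is being held back by the license pool rather than by node availability, you can query the pending reason with squeue; SLURM reports jobs waiting on a license resource with the reason "Licenses". A sketch using squeue's format options:

```shell
# List your jobs with job ID, partition, state, and pending reason.
# A JOB blocked on a license TAG shows Reason=Licenses.
squeue -u $USER -o "%.10i %.9P %.8T %.12r"
```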
Example of a Fluent JOB script
#!/bin/bash
#SBATCH --account=<group>
#SBATCH --partition=std
#SBATCH --job-name=Test
#SBATCH --output=slurm-%x.%N.%j.out
#SBATCH --error=slurm-%x.%N.%j.err
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=1
#SBATCH --time=04:00:00
#SBATCH --licenses=fluent:16
env | grep -i slurm
echo ""
echo " JOB started: $(date) "
echo ""
cd $SLURM_SUBMIT_DIR
# Create a temporary working directory
# do not run your JOB in $HOME or $SLURM_SUBMIT_DIR
WORKDIR="$SCRATCHDIR/job.$SLURM_JOB_ID.$USER"
mkdir -p $WORKDIR
echo ""
echo " WORKDIR: $WORKDIR"
echo ""
module purge
module load ansys/24.2
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
ulimit -s unlimited
unset I_MPI_PMI_LIBRARY
srun hostname -s | sort -V > $WORKDIR/hosts.file
cp $SLURM_SUBMIT_DIR/* $WORKDIR
cd $WORKDIR
# you may change the solver version (2ddp below; e.g. 3ddp for 3-D double precision)
# you may change the input.jou and/or output.jou file names
# the remaining options are required for parallel execution: do not change them
# unless you really need to
fluent 2ddp -g -mpi=intel -cnf=$WORKDIR/hosts.file -t$SLURM_NTASKS -i input.jou > output.jou
cp $WORKDIR/* $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR
# removing the $WORKDIR
rm $WORKDIR/*
rmdir $WORKDIR
echo ""
echo " JOB finished: $(date) "
echo ""