Fluent/Abaqus Licenses

Users of the Fluent/Abaqus software on EXPLOR must request the number of licenses in their submission scripts. The number of licenses requested must match the number of MPI processes launched by the script. On EXPLOR, the maximum number of active licenses has been set to 320 MPI processes per product, and this pool is shared by all EXPLOR users. Licenses are now a resource that SLURM takes into account when allocating user JOBs. Processes running without the license "TAG" of these products (Fluent/Abaqus) may be killed without warning.

Fluent – max licenses: 320
Abaqus – max licenses: 320

Ex. Fluent JOB with 64 MPI processes

#SBATCH --licenses=fluent:64

or

#SBATCH -L fluent:64

Ex. Abaqus JOB with 128 MPI processes

#SBATCH --licenses=abaqus:128

or

#SBATCH -L abaqus:128
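
Since the license count must match the number of MPI processes, the license directive is normally paired with --ntasks in the same job header. A minimal header sketch (the account name and the values are placeholders to adapt to your own JOB):

#SBATCH --account=<group>
#SBATCH --ntasks=64
#SBATCH --licenses=fluent:64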

When a JOB is submitted with a license "TAG", it may not start immediately even if nodes are available: this means that all Fluent/Abaqus licenses are already in use. In that case the JOB waits in the queue for the resource, i.e. the number of licenses it requested via the license "TAG". The number of licenses in use can be checked with the following commands:

%scontrol show lic fluent
LicenseName=fluent
    Total=320 Used=0 Free=320 Reserved=0 Remote=no

%scontrol show lic abaqus
LicenseName=abaqus
    Total=320 Used=0 Free=320 Reserved=0 Remote=no
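
If a submitted JOB stays in the PENDING state because all licenses are in use, squeue reports the reason "(Licenses)" in its NODELIST(REASON) column:

%squeue -u $USER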

Example of a Fluent JOB script

#!/bin/bash

#SBATCH --account=<group>
#SBATCH --partition=std
#SBATCH --job-name=Test
#SBATCH --output=slurm-%x.%N.%j.out 
#SBATCH --error=slurm-%x.%N.%j.err 
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=1
#SBATCH --time=04:00:00
#SBATCH --licenses=fluent:16

  env | grep -i slurm 

  echo ""
  echo "  JOB started: $(date) "
  echo ""

  cd $SLURM_SUBMIT_DIR

  # Creating temporary directory 
  # do not run your JOB in the HOME or $SLURM_SUBMIT_DIR 
  WORKDIR="$SCRATCHDIR/job.$SLURM_JOB_ID.$USER"
  mkdir -p $WORKDIR

  echo ""
  echo "  WORKDIR: $WORKDIR"
  echo ""

  module purge
  module load ansys/24.2

  export OMP_NUM_THREADS=1
  export MKL_NUM_THREADS=1

  ulimit -s unlimited

  unset I_MPI_PMI_LIBRARY

  srun hostname -s | sort -V > $WORKDIR/hosts.file

  cp $SLURM_SUBMIT_DIR/* $WORKDIR

  cd $WORKDIR

  # you may change the solver version (2ddp here; use 3ddp for 3D double precision)
  # you may change the input.jou and/or output.jou names
  # the other options are required to run in parallel: do not change them
  # unless you really need to
  fluent 2ddp -g -mpi=intel -cnf=$WORKDIR/hosts.file -t$SLURM_NTASKS -i input.jou > output.jou

  cp $WORKDIR/* $SLURM_SUBMIT_DIR

  cd $SLURM_SUBMIT_DIR

  # removing the $WORKDIR 
  rm $WORKDIR/*
  rmdir $WORKDIR

  echo ""
  echo "  JOB finished: $(date) "
  echo ""