Build ~ LLNL Dane
- ssh onto Dane and request a node
```bash
ssh -l USERNAME dane.llnl.gov
# Get onto a compute node
# https://hpc.llnl.gov/training/tutorials/livermore-computing-psaap3-quick-start-tutorial
salloc -N 1 --time=60:00 -p pdebug
```
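If you want to confirm the allocation (or see how much time remains on it), a couple of generic Slurm queries, not specific to this guide, can help; `USERNAME` and `JOBID` are placeholders.
```bash
# List your current jobs/allocations
squeue -u USERNAME
# Show time remaining and assigned nodes for a specific job
squeue -j JOBID -o "%.10L %N"
```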
- Load the required modules
- If using clang
```bash
module load clang/14.0.6-magic
module load cmake/3.25.2
```
- If using intel
```bash
module load intel-classic/2021.6.0-magic
module load cmake/3.25.2
```
- If using gcc
```bash
module load gcc/10.3.1-magic
module load cmake/3.26.3
module load mkl
```
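Whichever compiler you choose, it can be worth a quick sanity check that the expected modules are loaded before building; a minimal, optional check could look like:
```bash
# Show the modules currently loaded in this shell
module list
# Confirm the MPI compiler wrappers the PETSc configure below expects are on the PATH
which mpicc mpicxx mpifort
```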
- Clone PETSc
```bash
cd /usr/workspace/USERNAME
git clone https://gitlab.com/petsc/petsc.git
cd petsc
git checkout main
# git checkout 70dc2cbd0e2406f6d638a3fe6bb01400f0fd5fdc # last petsc commit that seemed to work as of 06/04/2025
export CMAKE_BUILD_PARALLEL_LEVEL=112
```
Writing files from an ablate built in /usr/workspace/ is not easy, so it is recommended to build ablate and petsc at /p/lustre2/USERNAME/. This can be an issue if you are using CLion to edit your ablate directory and try to run tests that require IO.
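If the /usr/workspace/ I/O limitation matters for your workflow, the same steps can be repeated under /p/lustre2/USERNAME/ instead; a sketch (paths are illustrative) is:
```bash
# Illustrative only: clone and build PETSc (and later ABLATE) on the Lustre file system
mkdir -p /p/lustre2/USERNAME
cd /p/lustre2/USERNAME
git clone https://gitlab.com/petsc/petsc.git
cd petsc
git checkout main
export CMAKE_BUILD_PARALLEL_LEVEL=112
# The configure/build steps below are unchanged; just point PETSC_DIR here later, e.g.
# export PETSC_DIR="/p/lustre2/USERNAME/petsc"
```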
- Configure/Build PETSc
```bash
./configure PETSC_ARCH=arch-ablate-opt --with-debugging=0 --with-make-np=112 --download-ctetgen --download-tetgen --download-egads --download-hdf5 --download-metis --download-mumps --download-parmetis --download-scalapack --download-slepc --download-suitesparse --download-superlu_dist --download-triangle --download-zlib --download-opencascade --with-libpng --with-64-bit-indices=1 --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpifort --download-kokkos --with-blaslapack-dir=/usr/tce/packages/mkl/mkl-2022.1.0/lib/intel64/ --download-kokkos-commit=3.7.01
# Build PETSc following the on-screen directions
```
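The build and run steps below also mention an `arch-ablate-debug` arch. If you want a debug PETSc build as well, one possible variant (not verified on Dane) is the same configure line with debugging enabled and a different `PETSC_ARCH`:
```bash
# Sketch of a debug configure: identical to the opt configure above except for
# PETSC_ARCH and --with-debugging
./configure PETSC_ARCH=arch-ablate-debug --with-debugging=1 --with-make-np=112 \
  --download-ctetgen --download-tetgen --download-egads --download-hdf5 --download-metis \
  --download-mumps --download-parmetis --download-scalapack --download-slepc \
  --download-suitesparse --download-superlu_dist --download-triangle --download-zlib \
  --download-opencascade --with-libpng --with-64-bit-indices=1 \
  --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpifort --download-kokkos \
  --with-blaslapack-dir=/usr/tce/packages/mkl/mkl-2022.1.0/lib/intel64/ \
  --download-kokkos-commit=3.7.01
```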
- Clone ABLATE
```bash
cd /usr/workspace/USERNAME
# if using the latest main branch of ablate
git clone https://github.com/UBCHREST/ablate.git
# or your fork
git clone https://github.com/USERNAME/ablate.git
```
- Configure and Build
```bash
# make petsc findable. This needs to be run every time before ABLATE is built.
export PETSC_DIR="/usr/workspace/USERNAME/petsc" # UPDATE to the real path of petsc
export PETSC_ARCH="arch-ablate-opt" # arch-ablate-debug or arch-ablate-opt
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"
# Include the bin directory to access mpi commands
# export PATH="${PETSC_DIR}/${PETSC_ARCH}/bin:$PATH"
# make the build directory
mkdir ablateOpt
cd ablateOpt
# configure
cmake -DCMAKE_BUILD_TYPE=Release -B . -S ../ablate
# make
make -j 8
```
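Because the PETSc environment variables above must be re-exported in every new shell, it can be convenient to keep them in a small script and `source` it before configuring, building, or running ABLATE. The file name `ablateEnv.sh` is just an example:
```bash
# ablateEnv.sh (example name): source this before building or running ABLATE
export PETSC_DIR="/usr/workspace/USERNAME/petsc"    # UPDATE to the real path of petsc
export PETSC_ARCH="arch-ablate-opt"                 # arch-ablate-debug or arch-ablate-opt
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"
export PATH="${PETSC_DIR}/${PETSC_ARCH}/bin:$PATH"  # mpi commands from the PETSc arch
```
Then run `source ablateEnv.sh` in each new shell (and in batch scripts) instead of retyping the exports.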
- Run Tests
```bash
export TEST_MPI_COMMAND=srun
ctest
```
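The full suite can take a while. If needed, standard ctest options can filter tests or show more detail; the name pattern below is only an example:
```bash
# Run only tests whose names match a pattern, printing output for any failures
ctest -R compressibleFlow --output-on-failure
# Re-run just the tests that failed in the previous invocation
ctest --rerun-failed
```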
- Running a simulation
This is a sample of running a provided example. Adjust the paths/inputs for your simulation or change the pool. Job limits can be found by entering the `news job.lim.dane` command on dane.
```bash
#!/bin/bash
#### See https://hpc.llnl.gov/training/tutorials/slurm-and-moab#LC
##### These lines are for Slurm
#SBATCH -N 1
#SBATCH -J ablateTest
#SBATCH -t 00:60:00
#SBATCH -p pdebug
#SBATCH --mail-type=ALL
#SBATCH -A sunyb
##### Load Required modules
# clang
module load clang/14.0.6-magic
module load cmake/3.25.2
# or intel
# module load intel-classic/2021.6.0-magic
# module load cmake/3.25.2
# or gcc
# module load gcc/10.3.1-magic
# module load cmake/3.26.3
# module load mkl
# Load PETSC ENV
export PETSC_DIR="/usr/workspace/USERNAME/petsc"
export PETSC_ARCH="arch-ablate-opt" # arch-ablate-debug or arch-ablate-opt
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"
# Include the bin directory to access mpi commands
export PATH="${PETSC_DIR}/${PETSC_ARCH}/bin:$PATH"
# Get the input file
wget https://raw.githubusercontent.com/UBCHREST/ablate/main/tests/integrationTests/inputs/compressibleFlow/compressibleFlowVortexLodi.yaml
export INPUT_FILE=$(pwd)/compressibleFlowVortexLodi.yaml
# Make a temp directory so that tchem has a place to vomit its files
mkdir tmp_$SLURM_JOBID
cd tmp_$SLURM_JOBID
##### Launch parallel job using srun. The number of processes should be at most 112*nodes on Dane
srun -n36 /usr/workspace/USERNAME/ablateOpt/ablate --input $INPUT_FILE \
-yaml::environment::title flow_$SLURM_JOBID -yaml::timestepper::domain::faces [80,80]
echo 'Running with Spindle'
spindle --location=/var/tmp/USERNAME srun -n36 /usr/workspace/USERNAME/ablateOpt/ablate --input $INPUT_FILE \
-yaml::environment::title flow_$SLURM_JOBID -yaml::timestepper::domain::faces [80,80]
echo 'Done'
```
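Assuming the script above has been saved to a file (the name `ablateJob.sh` is arbitrary), it can be submitted and monitored with the usual Slurm commands:
```bash
# Submit the batch script; Slurm prints the assigned job ID
sbatch ablateJob.sh
# Check the job's state in the queue
squeue -u USERNAME
# By default Slurm writes output to slurm-<jobid>.out in the submission directory
ls slurm-*.out
```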
The best way to visualize ABLATE results on Dane is with the LLNL virtual server and VisIt.
- Set up your local VNC client and connect to the VNC server following these directions.
- Once connected, in your VNC client open the Linux terminal and ssh onto dane:
```bash
ssh -X dane.llnl.gov
```
- Once connected to dane, open VisIt using the command `/usr/gapps/visit/bin/visit`. Using a `-v` flag followed by a version number allows selecting from the installed versions of VisIt. Available VisIt versions are listed by their version number within the `/usr/gapps/visit/bin/visit` directory. A sample version selection is shown below. After launching, follow the on-screen directions to select a queue. Additional details can be found in LLNL's VisIt documentation.
```bash
/usr/gapps/visit/bin/visit -v 3.3.3
```
- VisIt can now be used similarly to a local install.
ABLATE results can also be viewed with ParaView on the cluster.
- Type `module avail paraview` to see the available options. You can specify a particular version in the module command, e.g., `module load paraview/5.11.0`. Note that your desktop installation of ParaView should match the version you are running on the LC cluster! If you aren't sure what version you are using, you can open the Help > About menu item within the ParaView application. On the cluster, you can run `pvserver -V` to retrieve the version information.
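For reference, the commands mentioned in this step look like the following on the cluster (the version number is only an example; pick one listed by `module avail`):
```bash
# See which ParaView builds are installed
module avail paraview
# Load a specific version (example; it should match your desktop client)
module load paraview/5.11.0
# Print the server version to compare against your desktop ParaView
pvserver -V
```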
- Allocate Resources, e.g., to request two nodes for 60 min:
```bash
salloc -N 2 -t 60
```
- Start Server(s) and Create SSH Tunnel. Specify the number of pvserver tasks, e.g., to request 16 tasks:
```bash
pvserver_parallel 16
```
You will need an SSH tunnel FROM your desktop TO this cluster. To create a tunnel, run one of the following ON YOUR DESKTOP machine:
For Mac/Linux:
```bash
ssh -L 11111:dane####:11111 USERNAME@dane.llnl.gov
```
For Windows with Plink (PuTTY Link):
```bash
plink -L 11111:dane####:11111 USERNAME@dane.llnl.gov
```
- Connect Desktop ParaView Client. Once you have one or more pvserver tasks running on the cluster, you can connect from your desktop client. Open ParaView on your desktop (if you don't already have it running). Then, click on the Connect icon, or select File > Connect from the menus. Here you can set up a new server configuration.
- Enter the values as shown here to configure the connection and save it.
  - Server Type: Client / Server
  - Host: localhost
  - Port: 11111