# Basic surface water mass transformation for MPAS runs

The core surface water mass transformation (WMT) calculations are located in [`watermasstools.py`](https://github.com/MPAS-Dev/MPAS-QuickViz/blob/master/ocean/AMOC/watermassanalysis/modules/watermasstools.py). These calculations have one additional dependency beyond the `e3sm_unified` environment:

 - [`fastjmd95`](https://github.com/xgcm/fastjmd95) -- a Numba-accelerated package for the Jackett and McDougall (1995) equation of state

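Surface WMT rests on the surface density flux, which combines the heat flux scaled by the temperature derivative of density and the freshwater/salt flux scaled by the salinity derivative, so the equation-of-state derivatives are needed alongside density itself. Here is a minimal sketch of the corresponding `fastjmd95` calls (the sample values are illustrative only; the actual calculation lives in `watermasstools.py`):

```
import numpy as np
from fastjmd95 import rho, drhodt, drhods

s = np.array([33.0, 34.5, 36.0])  # practical salinity (PSU)
t = np.array([-1.0, 10.0, 25.0])  # potential temperature (deg C)
p = 0.0                           # surface pressure (dbar)

sigma0 = rho(s, t, p) - 1000.0  # surface-referenced potential density anomaly (kg/m3)
dRhodT = drhodt(s, t, p)        # d(rho)/dT, scales the heat flux contribution
dRhodS = drhods(s, t, p)        # d(rho)/dS, scales the freshwater/salt flux contribution
print(sigma0, dRhodT, dRhodS)
```
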
### Serial usage

The command-line executable module `basic_surface_wmt.py` is a postprocessing wrapper around the core WMT functions. This module is set up to accomplish two tasks:

 1. Build a basic coordinates file: `python basic_surface_wmt.py -c [MESHNAME]`
 2. Process a single monthly results file: `python basic_surface_wmt.py -f [FILENUMBER] [MESHNAME]`

Here `MESHNAME` is either `LR` (`EC30to60E2r2`) or `HR` (`oRRS18to6v3`). Both options use the CORE-II E3SMv2 G-cases with the `20210421_sim7` tag. Different runs can be specified in `parameters.yaml` (small changes to `basic_surface_wmt.py` itself will probably also be required).
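
For example, a first serial pass on the low-resolution mesh might look like this (`-c` once to build the coordinates, then one call per monthly file):

```
# Build the LR coordinates file, then process the first monthly file (index 0)
python basic_surface_wmt.py -c LR
python basic_surface_wmt.py -f 0 LR
```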

### Parallel usage

The workflow is set up to process single monthly files so that each serial task can be distributed across the CPUs of a single node. I use GNU Parallel for this. The general workflow is as follows:

```
#!/bin/bash
#SBATCH --job-name=JOB_NAME
#SBATCH --qos=regular
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64
#SBATCH --cpus-per-task=1
#SBATCH --constraint=cpu
#SBATCH --exclusive
#SBATCH --output=JOB_NAME.o%j
#SBATCH --time=1:00:00
#SBATCH --account=ACCOUNT

source $HOME/.bashrc
module load parallel
conda activate ENV_NAME

meshName=LR
savePath="/path/to/${meshName}"
mkdir -p "${savePath}/monthly_files"
mkdir -p "${savePath}/concatenated"

# Calculate the coords file (the -c flag triggers the coordinate calculation)
python basic_surface_wmt.py -p "${savePath}" -c "${meshName}"

# GNU Parallel runs the serial monthly processing across all CPUs on the node
PARALLEL_OPTS="-N 1 --delay .2 -j $SLURM_NTASKS --joblog parallel-${SLURM_JOBID}.log"

# GNU Parallel distributes the individual srun calls, so each srun launches a
# single task on a single node
SRUN_OPTS="-N 1 -n 1 --exclusive"

# Process the monthly files; $(seq 0 119) covers the first 10 years
parallel $PARALLEL_OPTS srun $SRUN_OPTS \
    python basic_surface_wmt.py -p "${savePath}" -f {} "${meshName}" ::: $(seq 0 119)

# Concatenate the monthly files
ncrcat -h "${savePath}/monthly_files/${meshName}_WMT1D"* \
    "${savePath}/concatenated/${meshName}_WMT1D_years1-10.nc"
ncrcat -h "${savePath}/monthly_files/${meshName}_WMT2D"* \
    "${savePath}/concatenated/${meshName}_WMT2D_years1-10.nc"
```
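
Once the job completes, the GNU Parallel joblog (`parallel-${SLURM_JOBID}.log`) records an exit status for each monthly task, which is useful for spotting and re-running failures. As a quick sanity check, the concatenated output can be opened with `xarray` (available in `e3sm_unified`); the path below follows the script above:

```
import xarray as xr

# Concatenated 1D output from the script above (LR mesh, first 10 years);
# 120 monthly records are expected from $(seq 0 119)
ds = xr.open_dataset("/path/to/LR/concatenated/LR_WMT1D_years1-10.nc")
print(ds)
```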

### Visualization

A basic visualization example can be found in `basic_surface_wmt.ipynb`.
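
For a quick look outside the notebook, the time-mean 1D transformation can be plotted along the lines below. The variable and dimension names here are placeholders; inspect the output file or the notebook for the actual names:

```
import xarray as xr
import matplotlib.pyplot as plt

ds = xr.open_dataset("/path/to/LR/concatenated/LR_WMT1D_years1-10.nc")

# "waterMassTransformation" and "time" are placeholder names -- check
# ds.data_vars and ds.dims for the real ones; the time mean gives the
# mean transformation as a function of density bin
ds["waterMassTransformation"].mean(dim="time").plot()
plt.xlabel("potential density bin")
plt.ylabel("transformation rate")
plt.savefig("wmt_curve_LR.png", dpi=150)
```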