diff --git a/docs/Batch_Computing/Batch_Computing_Guide.md b/docs/Batch_Computing/Batch_Computing_Guide.md
index 032bea4ae..38d71b3fa 100644
--- a/docs/Batch_Computing/Batch_Computing_Guide.md
+++ b/docs/Batch_Computing/Batch_Computing_Guide.md
@@ -122,7 +122,7 @@ You can find more details on its use on the [Slurm Documentation](https://slurm.

 ### Checking job efficiency

-After a job has completed you can get the basic usage information using `nn_seff `.
+After a job has completed you can get the basic usage information using `seff `.
 This will return an output as below:

 ``` out
diff --git a/docs/Batch_Computing/Checking_resource_usage.md b/docs/Batch_Computing/Checking_resource_usage.md
index 6fab9fc88..be479bfbc 100644
--- a/docs/Batch_Computing/Checking_resource_usage.md
+++ b/docs/Batch_Computing/Checking_resource_usage.md
@@ -3,6 +3,7 @@ created_at: '2022-02-15T01:13:51Z'
 tags:
 - slurm
 - accounting
+status: deprecated
 ---

 To check your project's usage of Slurm-managed resources, you can use
diff --git a/docs/Getting_Started/FAQs/Ive_run_out_of_storage_space.md b/docs/Getting_Started/FAQs/Ive_run_out_of_storage_space.md
index 6fbf48cbf..73e47cd08 100644
--- a/docs/Getting_Started/FAQs/Ive_run_out_of_storage_space.md
+++ b/docs/Getting_Started/FAQs/Ive_run_out_of_storage_space.md
@@ -11,7 +11,7 @@ There are two tracked resources in the Mahuika filesystem, *disk space* and
 Trying to write to a filesystem over its inode or disk quota will cause
 an error (and probably kill your job).

-Current file-count and disk space can be found using `nn_storage_quota`.
+Current file-count and disk space can be found using `storage_quota`.

 ```sh
 Quota_Location                    AvailableGiB      UsedGiB  Use%
@@ -22,7 +22,7 @@ nobackup_nesi99999                      6.833T

 !!! note
     There is a delay between making changes to a filesystem and seeing the
-    change in `nn_storage_quota`, immediate file count and disk space can
+    change in `storage_quota`, immediate file count and disk space can
     be found using the commands `du --inodes` and `du -h` respectively.

 There are a few ways to deal with file count problems
diff --git a/docs/Interactive_Computing/OnDemand/ood_troubleshooting.md b/docs/Interactive_Computing/OnDemand/ood_troubleshooting.md
index fc15a22c2..c005d6a4b 100644
--- a/docs/Interactive_Computing/OnDemand/ood_troubleshooting.md
+++ b/docs/Interactive_Computing/OnDemand/ood_troubleshooting.md
@@ -37,10 +37,10 @@ To resolve this issue:
 ![NeSI_OnDemand_No_Space_Left_1.png](../../assets/images/NeSI_OnDemand_No_Space_Left_2.png)

 2. Log in to your NeSI account through the terminal.
-3. Type into the terminal ```nn_storage_quota```. This will show the amount of space in your `home`, `project`, and `nobackup` directories. You will see that your `home` directory is full.
+3. Type into the terminal ```storage_quota```. This will show the amount of space in your `home`, `project`, and `nobackup` directories. You will see that your `home` directory is full.

 ```sh
-    username@login03:~$ nn_storage_quota
+    username@login03:~$ storage_quota
 Quota_Location                    AvailableGiB      UsedGiB  Use%
 home_username                               20           20   100
 ```
diff --git a/docs/Software/Available_Applications/COMSOL.md b/docs/Software/Available_Applications/COMSOL.md
index 8136c026a..cc2c233d2 100644
--- a/docs/Software/Available_Applications/COMSOL.md
+++ b/docs/Software/Available_Applications/COMSOL.md
@@ -164,7 +164,7 @@ Multithreading will benefit jobs using less than

 ## Tmpdir

-If you find yourself receiving the error 'Disk quota exceeded', yet `nn_storage_quota` shows plenty of room in your filesystem, you may be running out of tmpdir.
+If you find yourself receiving the error 'Disk quota exceeded', yet `storage_quota` shows plenty of room in your filesystem, you may be running out of tmpdir.
 This can be fixed by using the `--tmpdir` flag in the comsol command line, e.g. `comsol --tmpdir /nesi/nobackup/nesi99991/comsoltmp`, or by exporting `TMPDIR` before running the command, e.g. `export TMPDIR=/nesi/nobackup/nesi99991/comsoltmp`.
 You may also want to set this at the Java level with `export _JAVA_OPTIONS=-Djava.io.tmpdir=/nesi/nobackup/nesi99991/comsoltmp`

diff --git a/docs/Software/Available_Applications/Nextflow.md b/docs/Software/Available_Applications/Nextflow.md
index 53022fe91..2999ceb9a 100644
--- a/docs/Software/Available_Applications/Nextflow.md
+++ b/docs/Software/Available_Applications/Nextflow.md
@@ -302,7 +302,7 @@ Duration    : 11h 16m 47s
 CPU hours   : 319.6

 ```bash
-> nn_seff 
+> seff 
 Cluster: hpc
 Job ID: 3034402
 State: ['COMPLETED']
@@ -336,7 +336,7 @@ Labeled processes (list below) could submit via slurm array requesting 12 CPUs,
 - `QUALIMAP_RNASEQ`

 ```bash
-nn_seff 
+seff 
 Cluster: hpc
 Job ID: 3059728
 State: ['OUT_OF_MEMORY']
diff --git a/docs/Software/Available_Applications/OpenFOAM.md b/docs/Software/Available_Applications/OpenFOAM.md
index 2ae216767..b9db0111f 100644
--- a/docs/Software/Available_Applications/OpenFOAM.md
+++ b/docs/Software/Available_Applications/OpenFOAM.md
@@ -100,9 +100,9 @@ There are a few ways to mitigate this use and I/O load.

 - **Monitor Filesystem**

-    The command `nn_storage_quota` should be used to track filesystem
+    The command `storage_quota` should be used to track filesystem
     usage. There is a delay between making changes to a filesystem and
-    seeing it on `nn_storage_quota`.
+    seeing it on `storage_quota`.

     ```sh
     Filesystem      Available  Used  Use%  Inodes  IUsed  IUse%
diff --git a/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md b/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md
index a7c3ab39a..fd0f616d6 100644
--- a/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md
+++ b/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md
@@ -12,12 +12,12 @@ completion, this way you can improve your job specifications in the future.

 Once your job has finished check the relevant details using the
 tools:
-`nn_seff` or `sacct` For example:
+`seff` or `sacct` For example:

-### Using `nn_seff`
+### Using `seff`

 ```bash
-nn_seff 30479534
+seff 30479534
 ```

 ```txt
@@ -36,7 +36,7 @@ Mem Efficiency: 10.84% 111.00 MB of 1.00 GB

 Notice that the CPU efficiency was high but the memory efficiency was low and consideration should be given to reducing memory requests for similar jobs. If in doubt, please contact [support@nesi.org.nz](mailto:support@nesi.org.nz) for guidance.

-_nn_seff_ is based on the same numbers as are shown by _sacct_.
+_seff_ is based on the same numbers as are shown by _sacct_.

 ### Using `sacct`

diff --git a/docs/Storage/Filesystems_and_Quotas.md b/docs/Storage/Filesystems_and_Quotas.md
index c393dd7fc..27e053649 100644
--- a/docs/Storage/Filesystems_and_Quotas.md
+++ b/docs/Storage/Filesystems_and_Quotas.md
@@ -10,10 +10,10 @@
 You may query your actual usage and disk allocations using the following
 command:
 ```sh
-    nn_storage_quota
+    storage_quota
 ```

-The values for `nn_storage_quota` are updated approximately every hour
+The values for `storage_quota` are updated approximately every hour
 and cached between updates.

 ![neSI\_filetree.svg](../assets/images/NeSI_File_Systems_and_Quotas.png)
diff --git a/docs/Tutorials/Introduction_To_HPC/Bash_Shell.md b/docs/Tutorials/Introduction_To_HPC/Bash_Shell.md
index 71232d4ff..fdf3ed1ef 100644
--- a/docs/Tutorials/Introduction_To_HPC/Bash_Shell.md
+++ b/docs/Tutorials/Introduction_To_HPC/Bash_Shell.md
@@ -85,7 +85,7 @@ When your space is locked you will need to move or remove data.
 Also note that none of the nobackup space is being used, a smart idea would be to move data from `project` to `nobackup`.

 !!! note
-    `nn_storage_quota` uses cached data, and so will not immediately show changes to storage use.
+    `storage_quota` uses cached data, and so will not immediately show changes to storage use.

 For more details on our persistent and nobackup storage systems, including data retention and the nobackup auto-delete schedule, please see our [Filesystem and Quota](../../Storage/Filesystems_and_Quotas.md) documentation.

diff --git a/docs/Tutorials/Introduction_To_HPC/Resources.md b/docs/Tutorials/Introduction_To_HPC/Resources.md
index f5a36c718..9d00621c6 100644
--- a/docs/Tutorials/Introduction_To_HPC/Resources.md
+++ b/docs/Tutorials/Introduction_To_HPC/Resources.md
@@ -179,10 +179,10 @@ _48 seconds_ used out of _15 minutes_ requested give a time efficiency of about

 b. Memory efficiency is `( 14 / 32 ) x 100` or around **43%**.

-For convenience, Mahuika has provided the command `nn_seff ` to calculate **S**lurm **Eff**iciency (all Mahuika commands start with `nn_`, for **N**eSI **N**IWA).
+For convenience, Mahuika has provided the command `seff ` to calculate **S**lurm **Eff**iciency.

 ```sh
-    nn_seff 
+    seff 
 ```

 ```out
diff --git a/docs/Tutorials/Introduction_To_HPC/Scaling.md b/docs/Tutorials/Introduction_To_HPC/Scaling.md
index c52b61bae..2fc11a4c7 100644
--- a/docs/Tutorials/Introduction_To_HPC/Scaling.md
+++ b/docs/Tutorials/Introduction_To_HPC/Scaling.md
@@ -43,9 +43,9 @@ It is worth noting that Amdahl's law assumes all other elements of scaling are h
        seconds will not have it's memory use recorded.
        Submit the job with `sbatch --acctg-freq 1 example_job.sl`.
     4. Watch the job with `squeue --me` or `watch squeue --me`.
-    5. On completion of job, use `nn_seff `.
+    5. On completion of job, use `seff `.
     6. Record the jobs "Elapsed", "TotalCPU", and "Memory" values in the spreadsheet.
-        (Hint: They are the first numbers after the percentage efficiency in output of `nn_seff`).
+        (Hint: They are the first numbers after the percentage efficiency in output of `seff`).
 Make sure you have entered the values in the correct format and there is a tick next to each entry.

 ![Correctly entered data in spreadsheet](../../assets/images/correct-spreadsheet-entry.png)