2 changes: 1 addition & 1 deletion docs/Batch_Computing/Batch_Computing_Guide.md
@@ -122,7 +122,7 @@ You can find more details on its use on the [Slurm Documentation](https://slurm.

### Checking job efficiency

-After a job has completed you can get the basic usage information using `nn_seff <job-id>`.
+After a job has completed you can get the basic usage information using `seff <job-id>`.
This will return an output as below:

``` out
1 change: 1 addition & 0 deletions docs/Batch_Computing/Checking_resource_usage.md
@@ -3,6 +3,7 @@ created_at: '2022-02-15T01:13:51Z'
tags:
- slurm
- accounting
+status: deprecated
---

To check your project's usage of Slurm-managed resources, you can use
4 changes: 2 additions & 2 deletions docs/Getting_Started/FAQs/Ive_run_out_of_storage_space.md
@@ -11,7 +11,7 @@ There are two tracked resources in the Mahuika filesystem, *disk space* and
Trying to write to a filesystem over its inode or disk quota will cause
an error (and probably kill your job).

-Current file-count and disk space can be found using `nn_storage_quota`.
+Current file-count and disk space can be found using `storage_quota`.

```sh
Quota_Location AvailableGiB UsedGiB Use%
@@ -22,7 +22,7 @@ nobackup_nesi99999 6.833T

!!! note
There is a delay between making changes to a filesystem and seeing the
-    change in `nn_storage_quota`, immediate file count and disk space can
+    change in `storage_quota`, immediate file count and disk space can
be found using the commands `du --inodes` and `du -h` respectively.
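
The note's immediate check can be sketched as a small helper around plain GNU `du` (the `--inodes` option needs coreutils 8.22 or newer; the directory argument is whatever path you want to inspect):

```shell
# usage_snapshot DIR: print the immediate file count and disk usage of DIR,
# bypassing the hourly quota cache by asking the filesystem directly.
usage_snapshot() {
    dir=$1
    printf 'inodes: %s\n' "$(du -s --inodes "$dir" | cut -f1)"
    printf 'size:   %s\n' "$(du -sh "$dir" | cut -f1)"
}

usage_snapshot "$HOME"
```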

There are a few ways to deal with file count problems
4 changes: 2 additions & 2 deletions docs/Interactive_Computing/OnDemand/ood_troubleshooting.md
@@ -37,10 +37,10 @@ To resolve this issue:
![NeSI_OnDemand_No_Space_Left_1.png](../../assets/images/NeSI_OnDemand_No_Space_Left_2.png)

2. Log in to your NeSI account through the terminal.
-3. Type into the terminal ```nn_storage_quota```. This will show the amount of space in your `home`, `project`, and `nobackup` directories. You will see that your `home` directory is full.
+3. Type into the terminal ```storage_quota```. This will show the amount of space in your `home`, `project`, and `nobackup` directories. You will see that your `home` directory is full.

```sh
-username@login03:~$ nn_storage_quota
+username@login03:~$ storage_quota
Quota_Location AvailableGiB UsedGiB Use%
home_username 20 20 100
```
2 changes: 1 addition & 1 deletion docs/Software/Available_Applications/COMSOL.md
@@ -164,7 +164,7 @@ Multithreading will benefit jobs using less than

## Tmpdir

-If you find yourself receiving the error 'Disk quota exceeded', yet `nn_storage_quota` shows plenty of room in your filesystem, you may be running out of tmpdir.
+If you find yourself receiving the error 'Disk quota exceeded', yet `storage_quota` shows plenty of room in your filesystem, you may be running out of tmpdir.
This can be fixed by using the `--tmpdir` flag in the comsol command line, e.g. `comsol --tmpdir /nesi/nobackup/nesi99991/comsoltmp`, or by exporting `TMPDIR` before running the command, e.g. `export TMPDIR=/nesi/nobackup/nesi99991/comsoltmp`.

You may also want to set this at the Java level with `export _JAVA_OPTIONS=-Djava.io.tmpdir=/nesi/nobackup/nesi99991/comsoltmp`
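
In practice the tmpdir redirection is just a couple of exports before the `comsol` invocation; a minimal sketch using the illustrative path from above (substitute a directory your project can write to):

```shell
# Send temporary files to a filesystem with headroom before launching COMSOL.
# The path below is the docs' example, not a requirement.
export TMPDIR=/nesi/nobackup/nesi99991/comsoltmp
export _JAVA_OPTIONS="-Djava.io.tmpdir=$TMPDIR"
# comsol batch -inputfile model.mph   # COMSOL would now write scratch data under $TMPDIR
```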
4 changes: 2 additions & 2 deletions docs/Software/Available_Applications/Nextflow.md
@@ -302,7 +302,7 @@ Duration : 11h 16m 47s
CPU hours : 319.6

```bash
-> nn_seff <job-id>
+> seff <job-id>
Cluster: hpc
Job ID: 3034402
State: ['COMPLETED']
@@ -336,7 +336,7 @@ Labeled processes (list below) could submit via slurm array requesting 12 CPUs,
- `QUALIMAP_RNASEQ`

```bash
-nn_seff <job-id>
+seff <job-id>
Cluster: hpc
Job ID: 3059728
State: ['OUT_OF_MEMORY']
4 changes: 2 additions & 2 deletions docs/Software/Available_Applications/OpenFOAM.md
@@ -100,9 +100,9 @@ There are a few ways to mitigate this
use and I/O load.

- **Monitor Filesystem**
-    The command `nn_storage_quota` should be used to track filesystem
+    The command `storage_quota` should be used to track filesystem
usage. There is a delay between making changes to a filesystem and
-    seeing it on `nn_storage_quota`.
+    seeing it on `storage_quota`.

```sh
Filesystem Available Used Use% Inodes IUsed IUse%
@@ -12,12 +12,12 @@ completion, this way you can improve your job specifications in the
future.

Once your job has finished check the relevant details using the tools:
-`nn_seff` or `sacct` For example:
+`seff` or `sacct` For example:

-### Using `nn_seff`
+### Using `seff`

```bash
-nn_seff 30479534
+seff 30479534
```

```txt
@@ -36,7 +36,7 @@ Mem Efficiency: 10.84% 111.00 MB of 1.00 GB
Notice that the CPU efficiency was high but the memory efficiency was low and consideration should be given to reducing memory requests
for similar jobs. If in doubt, please contact [[email protected]](mailto:[email protected]) for guidance.

-_nn_seff_ is based on the same numbers as are shown by _sacct_.
+_seff_ is based on the same numbers as are shown by _sacct_.

### Using `sacct`

4 changes: 2 additions & 2 deletions docs/Storage/Filesystems_and_Quotas.md
@@ -10,10 +10,10 @@ You may query your actual usage and disk allocations using the following
command:

```sh
-nn_storage_quota
+storage_quota
```

-The values for `nn_storage_quota` are updated approximately every hour
+The values for `storage_quota` are updated approximately every hour
and cached between updates.

![neSI\_filetree.svg](../assets/images/NeSI_File_Systems_and_Quotas.png)
2 changes: 1 addition & 1 deletion docs/Tutorials/Introduction_To_HPC/Bash_Shell.md
@@ -85,7 +85,7 @@ When your space is locked you will need to move or remove data.
Also note that none of the nobackup space is being used, a smart idea would be to move data from `project` to `nobackup`.

!!! note
-    `nn_storage_quota` uses cached data, and so will not immediately show changes to storage use.
+    `storage_quota` uses cached data, and so will not immediately show changes to storage use.

For more details on our persistent and nobackup storage systems, including data retention and the nobackup auto-delete schedule,
please see our [Filesystem and Quota](../../Storage/Filesystems_and_Quotas.md) documentation.
4 changes: 2 additions & 2 deletions docs/Tutorials/Introduction_To_HPC/Resources.md
@@ -179,10 +179,10 @@ _48 seconds_ used out of _15 minutes_ requested give a time efficiency of about

b. Memory efficiency is `( 14 / 32 ) x 100` or around **43%**.
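
The efficiency figures above are just `100 * used / requested`; the worked example's numbers can be checked with shell arithmetic:

```shell
# Time efficiency: 48 s used of 15 min requested.
elapsed=48
requested=$((15 * 60))
echo "time efficiency: $((100 * elapsed / requested))%"    # prints "time efficiency: 5%"

# Memory efficiency: 14 GB used of 32 GB requested.
mem_used=14
mem_req=32
echo "memory efficiency: $((100 * mem_used / mem_req))%"   # prints "memory efficiency: 43%"
```

(Integer division truncates 5.33% and 43.75% down to 5% and 43%.)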

-For convenience, Mahuika has provided the command `nn_seff <jobid>` to calculate **S**lurm **Eff**iciency (all Mahuika commands start with `nn_`, for **N**eSI **N**IWA).
+For convenience, Mahuika has provided the command `seff <jobid>` to calculate **S**lurm **Eff**iciency.

```sh
-nn_seff <jobid>
+seff <jobid>
```

```out
4 changes: 2 additions & 2 deletions docs/Tutorials/Introduction_To_HPC/Scaling.md
@@ -43,9 +43,9 @@ It is worth noting that Amdahl's law assumes all other elements of scaling are h
seconds will not have it's memory use recorded.
Submit the job with `sbatch --acctg-freq 1 example_job.sl`.
4. Watch the job with `squeue --me` or `watch squeue --me`.
-5. On completion of job, use `nn_seff <job-id>`.
+5. On completion of job, use `seff <job-id>`.
6. Record the jobs "Elapsed", "TotalCPU", and "Memory" values in the spreadsheet.
-    (Hint: They are the first numbers after the percentage efficiency in output of `nn_seff`).
+    (Hint: They are the first numbers after the percentage efficiency in output of `seff`).
Make sure you have entered the values in the correct format and there is a tick next to each entry.
![Correctly entered data in spreadsheet](../../assets/images/correct-spreadsheet-entry.png)
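
Steps 5 and 6 amount to reading a few fields out of the efficiency report; as a sketch, the value after each percentage can be pulled out with `awk`, shown here against an illustrative line (made-up values, not real job data) modeled on the "Mem Efficiency: 10.84% 111.00 MB of 1.00 GB" format shown earlier:

```shell
# Illustrative efficiency-report line; the values are invented for the example.
report='Mem Efficiency: 43.75% 14.00 GB of 32.00 GB'

# For a "Name: NN.NN% value ..." line, print the name and the first value
# after the percentage (the number the exercise asks you to record).
printf '%s\n' "$report" | awk -F': ' '{split($2, f, " "); print $1, "->", f[2]}'
# prints: Mem Efficiency -> 14.00
```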
