Merged

88 commits
79c88ea
Added interative visualization for easier debugging and interpreting …
rohansaw Jun 4, 2025
5f44688
added visualization generation to snakemake
rohansaw Jun 4, 2025
9110e85
minor changes
rohansaw Jun 5, 2025
9ceec64
Quickfix - Documentation updates, simplified setup helper, inclusion …
rohansaw Jun 6, 2025
6bcbcd5
added fallback from collection1 for datespan when not available. Adju…
rohansaw Jun 6, 2025
5891210
Merge remote-tracking branch 'origin/develop' into fix/stac-collectio…
rohansaw Jun 6, 2025
67035a3
Added output that aggregates all preds +_groundtruth to combined csvs…
rohansaw Jun 12, 2025
1ea3370
save setuphelper local
rohansaw Jun 13, 2025
09bd994
Merge branch 'setuphelper' into feature/more-visualizations
rohansaw Jun 13, 2025
8cd49c8
Lots of messy changes for experiments before LPS - will be refactored
rohansaw Jun 18, 2025
04f3982
WIP improving interactive map - adding error visualizations for easie…
rohansaw Jun 18, 2025
07627e1
Adding nasapower gridalignment option to pipeline
rohansaw Jun 18, 2025
b88a558
refactoring file structure to be more meaningfull
rohansaw Jun 18, 2025
8c3a043
Refactoring folder structure and only keepinga single max lai band to…
rohansaw Jun 18, 2025
111cbef
Merge branch 'feature/more-visualizations' into feature/multiyear-pre…
rohansaw Jun 18, 2025
96a5ce4
Outsourcing stac downlaoder to external backage for decoupling and re…
rohansaw Jun 30, 2025
dd7eb0b
Merge branch 'feature/more-visualizations' into feature/multiyear-pre…
rohansaw Jun 30, 2025
23087d7
cleanup
rohansaw Jun 30, 2025
0ea88fb
Merge branch 'feature/multiyear-preds-csvs' of github.com:JPLMLIA/ver…
rohansaw Jun 30, 2025
3d1e7a7
Merged with refactored structure
rohansaw Jun 30, 2025
6328af7
Merge branch 'feature/multiyear-preds-csvs' of github.com:JPLMLIA/ver…
rohansaw Jun 30, 2025
305cf83
Fixing snakemake pipeline and adding some dependancy versions
rohansaw Jul 1, 2025
3d80deb
Adjusting CRS to EPSG:4326 for imagery search through STAC
rohansaw Jul 9, 2025
cc55752
harmonizing imagery to conforgm shift < processing baseline 4.0 alrea…
rohansaw Jul 9, 2025
ae364fd
WIP improving imagery download
rohansaw Jul 9, 2025
e66ed07
Merge branch 'feature/multiyear-preds-csvs' of github.com:JPLMLIA/ver…
rohansaw Jul 9, 2025
fe3e5b0
WIP runnnig vercye via simple cli
rohansaw Jul 9, 2025
691589a
Added chirps downloader to cli. Added config validator to catch a bun…
rohansaw Jul 9, 2025
fb21ef7
Updating docs
rohansaw Jul 9, 2025
96ee6b0
Improving query inteersection by not creating a convex hull, but rath…
rohansaw Jul 10, 2025
3feb260
Updated integration of reference data, and including reference data i…
rohansaw Jul 10, 2025
4068b52
Minor changes
rohansaw Jul 10, 2025
9fe0f33
Update comment for offset in S2
rohansaw Jul 10, 2025
a2bea1d
Fixing snakemake integration of interactive visualization of results …
rohansaw Jul 10, 2025
fa190a2
Merge branch 'feature/multiyear-preds-csvs' of github.com:JPLMLIA/ver…
rohansaw Jul 10, 2025
b18f5e5
Merged fixed snakefile
rohansaw Jul 10, 2025
554f396
Running snakemake through subprocess instead of through python API in…
rohansaw Jul 11, 2025
abf9705
improved handling of lai generation with default path
rohansaw Jul 11, 2025
a243218
fixed lai env var name
rohansaw Jul 11, 2025
d80efd3
fix lai path
rohansaw Jul 11, 2025
7c8c22d
Fixing relate default paths and allowing to set keep_imagery param du…
rohansaw Jul 11, 2025
b0d7d27
remove unused weight param
rohansaw Jul 11, 2025
e3d38dd
Using relative paths in config validation aswell
rohansaw Jul 11, 2025
5934de1
Updated docs and .env_example added
rohansaw Jul 11, 2025
9f5ebe3
Updated setup instructions with tested python version
rohansaw Jul 11, 2025
253eaef
Updated docs
rohansaw Jul 11, 2025
741b718
Add actual git clone to APSIM docs
rohansaw Jul 11, 2025
04c4b9d
Fixing baseline 4.0 offset
rohansaw Jul 11, 2025
16f52d5
Zipping interactive map with simulation pngs
rohansaw Jul 14, 2025
22676db
Updated MPC imagery download to match new STAC Downloader format
rohansaw Jul 15, 2025
0455402
Adding sanity check
rohansaw Jul 15, 2025
85e7703
Avoiding failing on empty items received
rohansaw Jul 15, 2025
7dc40f0
Updated docs and token generation for old GEE-based LAI generation
Jul 29, 2025
00b7493
fixed typo in token generation and updating docs
rohansaw Jul 30, 2025
8088066
Updating requirements with google cloud dependancies to reduce user e…
rohansaw Jul 30, 2025
c9f1c99
Updating map with hover information
Jul 30, 2025
d1e6890
Updating map with hover info
Jul 30, 2025
56a1232
Added rule to save full best simulations as csv for analysis and rese…
Jul 31, 2025
e98cd93
Merge branch 'feature/multiyear-preds-csvs' of https://github.com/JPL…
Jul 31, 2025
cb1b6fd
Updated docs
Jul 31, 2025
e6a7089
Added basic setup for setting fixed sowing dates instead of sowing wi…
Jul 31, 2025
a2f76b8
Fixed early return in json dict search
Jul 31, 2025
034c441
Added sowing date ingestion to Snakefile and example configs and upda…
Jul 31, 2025
6736a5e
Setup simple vercye webapp allowing to interacively run vercye yields…
Aug 4, 2025
4b59a55
refactoring to own routers per domain and adding lai retrieval and vi…
Aug 5, 2025
c23e0bf
Improving workflow handling in ui
Aug 6, 2025
4661f4e
Automatically fetching missing CHIRPS data from webui study run call …
rohansaw Aug 12, 2025
0447f86
Linting complete codebase and adding pre-commit hooks. Improving weba…
rohansaw Aug 29, 2025
8c2b5ed
Adding Lint & Test CI
rohansaw Aug 29, 2025
9ab37b6
fix lint
rohansaw Aug 29, 2025
ce5bc30
fix gdal version in CI
rohansaw Aug 29, 2025
ae0b287
Fix module name in test
rohansaw Aug 29, 2025
7563965
quickfix for testing
rohansaw Aug 29, 2025
9fc0636
Usability improvements
rohansaw Sep 2, 2025
b469de0
WIP HLS and setup improvements
rohansaw Sep 5, 2025
041bdee
Improving usability
rohansaw Sep 5, 2025
85e3b13
WIP improving usability
rohansaw Sep 5, 2025
acff728
Merge branch 'feature/HLS-LAI-generation' into develop
rohansaw Sep 5, 2025
fc20caa
Fixed np caching overwrite, and accumulation of results without groun…
rohansaw Oct 5, 2025
c88d7d6
Updating docs
rohansaw Oct 5, 2025
7cce50f
doc updates
rohansaw Oct 5, 2025
5ae7a3b
Fixing nodata value also set in profile in cloudmasking. Minor other …
rohansaw Oct 13, 2025
acaa347
Merge branch 'develop' of https://github.com/JPLMLIA/vercye_ops into …
rohansaw Oct 13, 2025
f4a8a84
Adding deps for xhtml2pdf to CI
rohansaw Oct 13, 2025
3d19bd4
removing invalid testcases that would need further preprocessing for …
rohansaw Oct 13, 2025
c1361ad
Updating documentation
rohansaw Oct 14, 2025
19028d9
Further updates to docs
rohansaw Oct 14, 2025
5978ec4
Consolidating changes from main
rohansaw Oct 14, 2025
29 changes: 17 additions & 12 deletions docs/docs/LAI/intro.md
Original file line number Diff line number Diff line change
@@ -1,7 +1,12 @@
# VeRCYe LAI Generation

VeRCYE is designed to identify best matching APSIM simulations with actual remotely sensed Leaf Area Index (LAI) data. This documentation covers the LAI data generation pipeline, which is a separate but essential component of the VeRCYE workflow. The LAI is not a true remotely sensed value, but is rather estimated from a
combination of bands using neural networks that were converted to Pytorch from the [Leaf Toolbox](https://github.com/rfernand387/LEAF-Toolbox).
VeRCYE is designed to identify the best-matching APSIM simulations based on actual remotely sensed Leaf Area Index (LAI) data. This documentation covers the LAI data generation pipeline, which is a separate but essential component of the VeRCYE workflow. The LAI is not a true remotely sensed value, but is rather estimated from a
combination of bands using neural networks that were converted to PyTorch from the [Leaf Toolbox](https://github.com/rfernand387/LEAF-Toolbox) developed by Fernandes et al.

Two different LAI models from the Leaf Toolbox were re-implemented more efficiently with PyTorch:
- The 20m general-purpose and strongly validated model.
- The 10m model, which should be considered experimental at the moment, as it has not yet been validated in depth. Check the LEAF Toolbox for details on the models.


The LAI creation pipeline described in this documentation transforms Sentinel-2 satellite imagery into LAI estimates that can be used by the VeRCYE algorithm. Currently only Sentinel-2 data is supported, but a version using Harmonized Landsat-Sentinel Imagery is planned.

Expand All @@ -16,7 +21,7 @@ We provide two methods for exporting remotely sensed imagery and deriving LAI pr

**A:** Exporting RS imagery from **Google Earth Engine**

**B:** Downloading RS imagery through an open source **STAC catalog** and data hosted on AWS.
**B:** Downloading RS imagery through an open source **STAC catalog** and data hosted on AWS/Azure.

**C:** Using your own LAI data

Expand All @@ -28,29 +33,29 @@ Google Drive or a Google Cloud Storage Bucket, from which it can be downloaded t

**Pro:**

- Directly export mosaics with bands resampled to the same resolution and CRS
- Directly export mosaics with bands resampled to the same resolution and CRS (EPSG:4326).
- Strong cloud masking algorithm (Google Cloud Score Plus)

**Con**

- Slow for large regions, due to limited number of parallel export processes
- Very slow for large regions due to the limited number of parallel export processes
- Data is exported to either Google Drive (free) or Google Cloud Storage (fees apply) and downloaded from there, which requires more manual setup that can be tedious, especially on remote systems.

### B: STAC & AWS Export
This approach queries a STAC catalog to identify all Sentinel-2 Tiles intersecting the region of interest within the timespan. The individual tiles are then downloaded from an AWS bucket.
### B: STAC Based Export
This approach queries a STAC catalog to identify all Sentinel-2 Tiles intersecting the region of interest within the timespan. The individual tiles are then downloaded from an AWS or Microsoft Planetary Computer (Azure) bucket.

You can choose between selecting data hosted by Element84 on AWS (`Sentinel-2 L2A Colection 1` ), in which all historial data was processed using `Processing Baseline 5.0`, however this collection is currently missing large timespans (e.g 2022,2023). Alternativeley, you can use the Microsoft Planetary Computer (`Sentinel-2 L2A`).
You can choose data hosted by Element84 on AWS (`Sentinel-2 L2A Collection 1`), in which all historical data was processed using `Processing Baseline 5.0`; however, this collection is currently missing large timespans (e.g. 2022 and parts of 2023). Alternatively, and as the default, we use the Microsoft Planetary Computer (`Sentinel-2 L2A`). The data is downloaded and processed to match the `S2_SR_Harmonized` collection in GEE.

**Pro**:

- Very fast download in HPC environment due to high level of parallelism
- Completely free download of data
- Harmonized to data in `Sentinel-2 L2A Colection 1` - all data processed using modern Baseline 5.0.

**Con**:

- Less Accurate Cloudmask in comparison to Google Cloud Score Plus. Cloud mask is based on SCL + S2-Cloudless.
- As of May 27th 2025, `Sentinel-2 L2A Colection 1` does not contain data for 2022 and parts of 2023. According to ESA this backfill is scheduled to be completed until Q2 2025.
- Less accurate cloud mask in comparison to Google Cloud Score Plus; the cloud mask is based on SCL.
- As of May 27th 2025, `Sentinel-2 L2A Collection 1` does not contain data for 2022 and parts of 2023. According to ESA this backfill is scheduled to be completed by Q2 2025; however, this still needs to be validated.
- Sentinel-2 data in MPC has inconsistent nodata values (they should all be 0, but sometimes no nodata value is set), which might hint at differences in processing as well.

### C: Bring your own LAI data
If you already have LAI data or are planning to generate it with a different pipeline, this is also possible. Simply ensure the file names match our required format. All files need to be located in a single folder and the filename needs to satisfy the following format:
Expand All @@ -61,4 +66,4 @@ If you already have LAI data or are planning to generate it with a different pip
- The `date` should be in the YYYY-MM-DD format.
- The `file extension` can be either `.vrt` or `.tif`

Additionally, you will have to ensure all your LAI files have exactly the same resolution and CRS and match the scale and offset as used in our inbuilt imagery.
Additionally, you will have to ensure all your LAI files have exactly the same resolution, CRS, and extent, and match the scale and offset used in our inbuilt imagery. Currently, all imagery is scaled by multiplying with 0.001 to convert the integer data to float.
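As a hedged illustration of the scale convention above, converting between stored integer pixels and physical LAI values looks like the following sketch. Only the 0.001 factor comes from the docs; the function names are our own, not part of the pipeline:

```python
# Sketch of the documented scale convention: LAI rasters store integers
# that become physical float values via a 0.001 factor. Function names
# here are illustrative, not the pipeline's actual API.
SCALE_FACTOR = 0.001

def to_physical_lai(raw_value: int) -> float:
    """Convert a stored integer LAI pixel to its physical float value."""
    return raw_value * SCALE_FACTOR

def to_stored_int(lai: float) -> int:
    """Inverse: scale a physical LAI value back to the stored integer."""
    return round(lai / SCALE_FACTOR)
```

When bringing your own LAI data, the stored values divided by 1000 should therefore yield plausible LAI magnitudes.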
4 changes: 2 additions & 2 deletions docs/docs/LAI/running.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,7 +2,7 @@
The pipeline produces LAI products for VERCYe and is intended for scaled deployment on servers or HPC with minimal human intervention. The Sentinel-2 LAI model is by Fernandes et al. from https://github.com/rfernand387/LEAF-Toolbox. We provide two methods for exporting remotely sensed imagery and deriving LAI products:

- **A:** Exporting RS imagery from **Google Earth Engine** (slow, more setup required, better cloudmasks)
- **B: **Downloading RS imagery through an open source **STAC catalog** and data hosted on AWS or MPC (fast, inferior cloudmasking).
- **B:** Downloading RS imagery through an open source **STAC catalog** and data hosted on AWS or MPC/Azure (fast, inferior cloudmasking).

The individual advantages are detailed in the [introduction](intro.md#lai-generation). This document details the instructions on how to download remotely sensed imagery and derive LAI data. For both approaches we provide pipelines that simply require specifying a configuration and then handle the complete process from exporting and downloading remotely sensed imagery to cloudmasking and deriving LAI estimates. Details of the individual components of the pipelines can be found in the readme of the corresponding folders.

Expand Down Expand Up @@ -151,7 +151,7 @@ keep_imagery: false

- `date_ranges`: Define multiple seasonal or arbitrary time windows to process (in YYYY-MM-DD format).

- `resolution`: Spatial resolution in meters. (Typically 10 or 20)
- `resolution`: Spatial resolution in meters (typically 10 or 20). When specifying 10m, a different model for 10m resolution is used, which has not yet been validated in depth!
- `geojson-path`: Path to your regions of interest geojson. Will create a bounding box for each geometry and query the intersecting tiles.
- `out_dir`: Output directory for all generated data.
- `region_out_prefix`: Prefix for the output VRT filenames - typically the name of the GeoJSON region.
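To make the `geojson-path` behaviour concrete, here is a minimal sketch (not the pipeline's actual code) of deriving a bounding box per GeoJSON geometry, the step that precedes querying intersecting tiles:

```python
# Illustrative only: compute a (minx, miny, maxx, maxy) bounding box for
# a GeoJSON Polygon/MultiPolygon, as done per geometry before the tile query.
def geometry_bbox(geometry: dict) -> tuple[float, float, float, float]:
    """Bounding box of a GeoJSON geometry in its native CRS."""
    def walk(coords):
        # Recurse through nested rings until we hit [x, y] coordinate pairs.
        if isinstance(coords[0], (int, float)):
            yield coords
        else:
            for c in coords:
                yield from walk(c)

    xs, ys = zip(*((x, y) for x, y, *_ in walk(geometry["coordinates"])))
    return (min(xs), min(ys), max(xs), max(ys))
```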
Expand Down
7 changes: 2 additions & 5 deletions docs/docs/Vercye/apsim.md
Original file line number Diff line number Diff line change
@@ -1,9 +1,6 @@
APSIM is an agricultural modelling framework that can simulate a variety of biophysical processes for differen crops.
APSIM is an agricultural modelling framework that can simulate a variety of biophysical processes for different crops.

VeRCYe relies on the APSIMX framework for generating various simulations in a realistic range.


**TODO needs to be updates - currently only copied Mark instructions, but they are incomplete**
VeRCYe relies on the APSIM Next-Gen framework for generating various simulations in a realistic range of input parameters (management practices, soils, water, etc.).

### Installing APSIMX
Visit [https://www.apsim.info](https://www.apsim.info) and make sure you have the proper license.
Expand Down
1 change: 1 addition & 0 deletions docs/docs/Vercye/architecture.md
Original file line number Diff line number Diff line change
Expand Up @@ -67,6 +67,7 @@ The complete logic is defined in `vecrye_ops/snakemake/Snakefile`.
### 4. Meteorological Data Acquisition

**Supported Sources**:

- **ERA5** (via Google Earth Engine): Max 10 concurrent jobs.
- **NASAPower**: Uses a global cache to avoid API rate limits.
- First job: One-time cache fill per region for its full date range (single job per region to avoid race conditions in cache write).
Expand Down
4 changes: 2 additions & 2 deletions docs/docs/Vercye/metdata.md
Original file line number Diff line number Diff line change
Expand Up @@ -34,8 +34,8 @@ Options:
--help Show this message and exit.
```

!Attention: This can amount to a few hundred GB of data when downloading many years of historical data. Therefore this is rather intended to be run on HPC environments.
!Attention: This can amount to 100+ GB of data when downloading many years of historical data. Therefore, this is intended to be run in HPC environments.

This will first try to download all final [CHIRPS v2.0 global daily products](https://data.chc.ucsb.edu/products/CHIRPS-2.0/global_daily/cogs/p05/) at 0.05 degrees resolution. For days without data available, the downloader will fall back to the [preliminary product](https://data.chc.ucsb.edu/products/CHIRPS-2.0/prelim/global_daily/tifs/p05/).

The VeRCYe pipeline will then read local regions from this global files during runtime.
The VeRCYe pipeline will then read local regions from these global files during runtime.
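The final-then-preliminary fallback can be pictured with a short sketch. The base URLs come from the links above, but the per-file directory layout and extensions are assumptions inferred from those listings, not taken from the actual downloader:

```python
# Sketch of the documented fallback order: try the final CHIRPS product
# first, then the preliminary product. Filename patterns are assumptions.
from datetime import date

FINAL_BASE = "https://data.chc.ucsb.edu/products/CHIRPS-2.0/global_daily/cogs/p05"
PRELIM_BASE = "https://data.chc.ucsb.edu/products/CHIRPS-2.0/prelim/global_daily/tifs/p05"

def candidate_urls(day: date) -> list[str]:
    """Download candidates for one day: final product first, prelim fallback second."""
    fname = f"chirps-v2.0.{day:%Y.%m.%d}"
    return [
        f"{FINAL_BASE}/{day.year}/{fname}.cog",
        f"{PRELIM_BASE}/{day.year}/{fname}.tif",
    ]
```

A downloader would attempt each URL in order and keep the first that succeeds.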
27 changes: 14 additions & 13 deletions docs/docs/Vercye/running.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
# Running VeRCYE manuall
# Running VeRCYE manually

This guide walks you through the process of setting up and running a yield study using our framework, which helps you simulate crop yields across different regions.

Expand Down Expand Up @@ -56,7 +56,7 @@ The full configuration options are documented in the [Inputs documentation (Sect

The **base directory** (your `study head dir`) organizes region-specific geometries and APSIM simulations by year, timepoint, and region [(See Details)](inputs.md).

Use the provided helper script (`prepare_yieldstudy.py`) to create this structure. For this, simply create an additional `setup_config.yaml` file in your base directory and fill it as described below. You can then run the setup helper with `python prepare_yieldstudy.py /path/to/basedirectory/setup_config.yaml`. For ease of use, start out with the example provided in `examples/setup_config.yaml`.
Use the provided helper script (`prepare_yieldstudy.py`) to create this structure. For this, simply create an additional `setup_config.yaml` file in your base directory and fill it as described below. For ease of use, start out with the example provided in `examples/setup_config.yaml`.

1. **Input shapefile & region names**

Expand All @@ -75,12 +75,16 @@ Use the provided helper script (`prepare_yieldstudy.py`) to create this structur

3. **APSIM configuration templates**

VeRCYe requires an APSIM template that will be adjusted for each region. This APSIM template defines the general simulations that should be run for each region, i.e. it defines the factorials of different input parameters that should be run. All of these should be manually configured based on expertise in the regions of interest.

Additionally, a custom precipitation-based script (provided by Yuval Sadeh) that MUST be embedded in the APSIM file is currently used to estimate likely sowing dates (the sowing window). However, if the true sowing date is known, it can also be injected into the pipeline; the sowing-window script still needs to be present in the file, as the start/end/force sowing dates are overwritten with the true sowing date and the factorial is disabled.

Rather than manually copying and editing an APSIM file for each year/region, the helper will:

1. Copy a template for each higher-level region (e.g. state) into every year’s folder.
2. Auto-adjust the simulation dates. NOTE: This will replace the `Models.Clock` parameter in the APSIM simulation with the value specified in the `run_config_template.yaml` under `apsim_params.time_bounds`. If you require different simulation start/end dates for various regions during a season, you will have to configure this manually in the APSIM files in the extracted directories.
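The date auto-adjustment in step 2 can be pictured with a rough sketch: `.apsimx` files are JSON trees, so rewriting every `Models.Clock` node amounts to a recursive walk. The field names below (`$type`, `Children`, `Start`, `End`) are assumptions for illustration, not the helper's actual implementation:

```python
# Illustrative only: rewrite Start/End on every Models.Clock node of an
# .apsimx JSON tree. Field names are assumed, not the helper's real code.
def set_clock_dates(node: dict, start: str, end: str) -> int:
    """Recursively set Start/End on all Clock nodes; returns number changed."""
    changed = 0
    if "Models.Clock" in str(node.get("$type", "")):
        node["Start"] = start
        node["End"] = end
        changed += 1
    for child in node.get("Children", []):
        changed += set_clock_dates(child, start, end)
    return changed
```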

Configure this by setting:
Configure this by setting the following parameters in the `setup_config.yaml` you created in the previous step:

- **`APSIM_TEMPLATE_PATHS_FILTER_COL_NAME`** Admin column that groups regions sharing a template (e.g. `NAME_1`).

Expand All @@ -99,13 +103,11 @@ Use the provided helper script (`prepare_yieldstudy.py`) to create this structur
```


Once all parameters are defined, run the notebook. It will:
Once all parameters are defined, run the preparation script with `python prepare_yieldstudy.py /path/to/basedirectory/setup_config.yaml`. It will:

- Create your `year/timepoint/region` directory tree under `OUTPUT_DIR`.
- Create your scaffolded `year/timepoint/region` directory tree under `OUTPUT_DIR`.
- Generate a final `run_config.yaml` that merges your Snakemake settings with the selected regions.

**Note**: Sometimes, you might want to add some custom conditionals or processing, that is why we have provided this code in a jupyter notebook. In that case make sure to read the [input documentation](inputs.md), to understand the required structure.

## 4. Adding Reported Validation Data

The VeRCYE pipeline can automatically generate validation metrics (e.g., R², RMSE) if reported data is available. To enable this, you must manually add validation data for each year.
Expand All @@ -116,12 +118,12 @@ Define aggregation levels in your `config file` under `eval_params.aggregation_l

For each year and aggregation level, create a CSV file named: `{year}/referencedata_{aggregation_name}-{year}.csv`, where aggregation_name matches the key in your config (case-sensitive!).

Example: For 2024 state-level data, the file should be: `basedirectory/2024/referencedata__State-2024.csv`
Example: For 2024 state-level data, the file should be: `basedirectory/2024/referencedata_State-2024.csv`
For simulation ROI-level data, use `primary` as the aggregation name: `basedirectory/2024/referencedata_primary-2024.csv`

**CSV Structure**
**CSV Structure of the validation data**

- `region`: Name matching GeoJSON folder (for `primary aggregation level`) or matching attribute table column values for custom aggregation level (Column as specified under `eval_params.aggregation_levels` in tour `config.yaml`)
- `region`: Name matching the GeoJSON folder name (for the `primary` aggregation level) or matching attribute table column values for a custom aggregation level (column as specified under `eval_params.aggregation_levels` in your `config.yaml`)
- `reported_mean_yield_kg_ha`: Mean yield in kg/ha
If unavailable, provide `reported_production_kg` instead. The mean yield will then be calculated using the cropmask area (note: subject to cropmask accuracy). If you do not have validation data for certain regions, simply do not include them in your CSV.
- If your reference data contains area, it is recommended to also include this under `reported_area_ha` even though this is not yet used in the evaluation pipeline.
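A minimal sketch of producing a validation CSV in the shape described above. The column names come from the docs; the region names and all numbers are invented for illustration:

```python
# Write an example referencedata CSV. Columns follow the documented schema;
# the regions and values below are made up for illustration.
import csv

rows = [
    {"region": "RegionA", "reported_mean_yield_kg_ha": 4100, "reported_area_ha": 120000},
    # No mean yield known for this region: provide production instead.
    {"region": "RegionB", "reported_production_kg": 980000000, "reported_area_ha": 250000},
]

fieldnames = ["region", "reported_mean_yield_kg_ha", "reported_production_kg", "reported_area_ha"]
with open("referencedata_State-2024.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```

Columns a region lacks are simply left empty, matching the guidance above to provide `reported_production_kg` only when the mean yield is unavailable.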
Expand Down Expand Up @@ -155,12 +157,11 @@ Once your setup is complete:

When the simulation completes, results will be available in your base directory. See the [Outputs Documentation](outputs.md) for details on interpreting the results.

To run the pipeline over the same region(s), either use Snakemake's `-F` flag or delete the log files at `vercye_ops/snakemake/logs_*`. Runtimes are in `vercye_ops/snakemake/benchmarks`.

## Troubleshooting and re-running

If your pipeline fails, you have a few options to re-run:
- If you want to force the re-execution of all rules, even if they have already completed successfully, you can add the `-F` flag to the run command above. This will invalidate all outputs and require rerunning them.
- You can delete output files; these files and all downstream affected rules will then be rerun.
- Recommended: If you have fixed the section of your code that caused the problems, simply rerun with the normal run command; only the rules that failed and their downstream dependencies will be run.

Check out the troubleshooting page.
Check out the [troubleshooting page](troubleshooting.md) for common errors.
4 changes: 3 additions & 1 deletion docs/docs/Vercye/troubleshooting.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,8 +2,10 @@ This section contains a few tips on what to do if you are encountering errors du

- `Missing input files for rule xyz`: Check the error output under `affected files`. This outlines the files that snakemake expects to be present, but which do not exist. You can manually check the directory to see whether they exist. Typically this points to an error in the configuration: for example, when a `region.geojson` is reported missing, this points to the base directory being incorrectly set up or the wrong path being provided for the base directory somewhere in the config.

- `Error in rule LAI_analysis`: An error related to not enough points or something similar typicallt indicates that in all of your LAI data there are not sufficient dates that meet the required minimum pixels without clouds for the specific region.
- `Error in rule LAI_analysis`: An error related to not enough points or something similar typically indicates that in all of your LAI data there are not sufficient dates that meet the required minimum of cloud-free pixels for the specific region.

However, this rarely should be the case when running with LAI data of multiple months (a typical season). Typically, this rather indicates that the `LAI parameters` were incorrectly set in the config. Check that the `lai_region`, `lai_resolution`, `lai_dir` and `file_ext` are correctly set.

- `Error in rule match_sim_real: KeyError: None`: Typically indicates that the APSIM simulation was externally interrupted or unexpectedly failed. In such a case you will have to find the `--db_path` option in the `shell` section in the tracelog and manually delete the `.db` file.

- `Errors related to evaluation`: Typically related to names in the validation data `.csv` file not matching those used internally by VeRCYe. For validation data at the same level as the simulation (`primary`), the names must match the cleaned names from VeRCYe (the folder names).
4 changes: 2 additions & 2 deletions docs/docs/Vercye/webapp.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@ The VeRCYe webapp is an interface to core functionality, wrapping the CLI utilit


### Setup
1. Ensure you have installed the VeRCYe core library as described in [](../index.md#vercye-library-setup).
1. Ensure you have installed the VeRCYe core library as described in the [setup instructions](../index.md#vercye-library-setup).
2. The webapp requires you to set a number of default folders, for example for the storage of cached outputs, the path to the APSIM installation, and others. For this, set the environment variables by copying `vercye_ops/.env_examples` to `vercye_ops/.env` and setting the actual values.
3. Navigate to `vercye_ops/vercye_webapp/`: `cd vercye_ops/vercye_webapp`.
4. Install the additional requirements for the webapp: Ensure you have loaded your environment from step 1 and run `pip install -r requirements.txt`.
5. To queue incoming jobs and allow workers to fetch jobs independantly, `redis` is used. Install redis for your system by following the [official instructions]().
5. To queue incoming jobs and allow workers to fetch jobs independently, `redis` is used. Install Redis for your system by following the [official instructions](https://redis.io/docs/latest/operate/oss_and_stack/install/archive/install-redis/).
4. You will now have to specify a few more environmental variables for the webapp. For this copy `vercye_webapp/.env_example` to `vercye_webapp/.env` and set the values.


Expand Down