
Using the Repo

pjames27 edited this page Jan 18, 2025 · 3 revisions

How do I use this repo?

The primary ways to use this repo are to:

  1. Test a coverage encoding.
  2. Prepare a coverage encoding for deployment.

In either case, this repo is designed to be used with the VSCode/Cursor extension for Epilog. Before continuing, you should do the following:

  1. Clone the repo.
  2. If you haven't already done so, install the VSCode or Cursor IDE.
  3. Install the Epilog VSCode/Cursor extension.

If you have questions about how to use the .epilogscript and .epilogbuild file types, see the documentation on the page for the extension.

It may be worth becoming familiar with the structure of the repo.

For instructions specific to each use case, see below.

Testing a coverage encoding

In order to test a coverage encoding, do the following:

  1. Ensure the "Universal" settings for the Epilog extension are as stated below. These settings specify which rules and data are always available, emulating the deployed system, in which some rules and data are always loaded.
    • The "Universal: Data" setting is the absolute path to the repo's "System-wide\world.hdf" file on your system.
    • The "Universal: Rules" setting is the absolute path to the repo's "System-wide\rules.hrf" file on your system.
  2. Navigate to the testing.epilogscript file for the coverage you wish to test.
    • The file should contain lines specifying (i) the ruleset to test, (ii) the dataset(s) to test, and (iii) the Epilog query to run on the ruleset and datasets.
      • The dataset(s) can be either a single .hdf file, or a folder containing one or more .hdf files. If it is the latter, the query will be run sequentially on each .hdf file in the folder.
      • The dataset folder should usually be "Example Datasets/". This contains the test cases for the coverage encoding.
  3. Open the "Output" pane in VSCode/Cursor and switch it to the "Epilog Language Server" channel.
  4. Run the "Epilog: Run Epilog Script" command.
    • The results of the query should appear in the Output pane. The filename of the dataset will appear before the results of running the query on that file.
    • Generally, if the name of the file ends with "not_covered{num}", the results should be "None.". If the name of the file instead ends with "covered{num}", the results should appear in a numbered list.
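The testing.epilogscript file described in step 2 is short. The sketch below is purely illustrative: the key names and the query are assumptions, not the extension's actual syntax, so consult the extension's documentation page for the real format. It shows only the three pieces of information the page says the file must carry: the ruleset, the dataset(s), and the query.

```
ruleset: main.hrf
dataset: Example Datasets/
query: covered(claim1, Reason)
```

Because the dataset line points at a folder here, the query would be run sequentially on each .hdf file in "Example Datasets/".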

Preparing a coverage encoding for deployment

We have decomposed the encodings into many files, but the systems to which we deploy the encodings usually want rulesets, metadata, etc. as single files. So, before deployment, we must consolidate each encoding from many files into a single file of each of a few different types.

Specifically, we need to generate the following four files:

  • the ruleset (a .hrf file)
  • the metadata (a .metadata file)
  • the world dataset (a .hdf file)
  • the berlitz for generating natural language explanations of Epilog derivations (a .hdf file)

To generate these files, do the following:

  1. Ensure the "Consolidate: Include Universal Files" setting for the Epilog extension is false. This will ensure the System-wide files that are referenced when running scripts are not included in the steps that follow.
  2. Navigate to the build.epilogbuild file for the coverage you wish to prepare for deployment.
    • The file should contain lines specifying versions of the four files above that are to be consolidated, and should specify that the consolidated files should be saved in the Deploy folder for that coverage.
    • Because the system to which the encodings are deployed is sensitive to the order of the metadata, we consolidate the metadata from the metadata_consolidation_order.metadata file rather than from the metadata.metadata file.
    • Because we usually already have versions of the four files in the Deploy folder when we consolidate, we often specify the optional line "overwrite: true", which allows the consolidation command to overwrite the existing files without checking with the user.
  3. Run the "Epilog: Consolidate contents of referenced files" command.
    • The consolidated files should now be in the Deploy folder.
  4. Navigate to the "Deploy/metadata.metadata" file that was just generated.
  5. Search and replace "replace_this_term" with the name of the coverage that is to be deployed.
    • The name of the coverage should be some underscore-separated sequence of characters ending in _coverage, and should already appear in the file somewhere as "superclass({coverage_name}, claim)". E.g. "family_planning_services_contraceptives_coverage".
    • This is necessary because the system to which the encodings are deployed needs to know which attributes the coverage has, but some are shared between different coverage types, so we need to specialize the shared metadata to each coverage type before deployment.
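The build.epilogbuild file from step 2 might look like the sketch below. This is purely illustrative: the key names and the source-to-destination notation are assumptions, not the extension's actual syntax (only the "overwrite: true" line is taken from this page), so consult the extension's documentation for the real format. The intent is just to show the four files being consolidated into the Deploy folder, with the metadata drawn from metadata_consolidation_order.metadata.

```
destination: Deploy/
overwrite: true
rules.hrf
metadata_consolidation_order.metadata
world.hdf
berlitz.hdf
```

Note that the metadata is consolidated from metadata_consolidation_order.metadata rather than metadata.metadata because the deployment target is sensitive to metadata order.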

The files should now be ready for deployment!
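The search-and-replace in step 5 can also be done from the command line. The sketch below uses GNU sed (on macOS/BSD, use `sed -i ''` instead of `sed -i`); it first creates a stand-in Deploy/metadata.metadata so the demo is self-contained, whereas in the real workflow you would run only the sed line against the generated file. The coverage name is the example given on this page.

```shell
# Demo setup: a minimal stand-in metadata file containing the placeholder.
# (In the real workflow, Deploy/metadata.metadata already exists; skip this.)
mkdir -p Deploy
printf 'superclass(replace_this_term, claim)\n' > Deploy/metadata.metadata

# Specialize the shared metadata to one coverage type (GNU sed in-place edit).
sed -i 's/replace_this_term/family_planning_services_contraceptives_coverage/g' Deploy/metadata.metadata

# Inspect the result.
cat Deploy/metadata.metadata
# superclass(family_planning_services_contraceptives_coverage, claim)
```

After the substitution, the placeholder should no longer appear anywhere in the file; a quick `grep replace_this_term Deploy/metadata.metadata` should print nothing.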
