diff --git a/README.md b/README.md index f5d08d7..23a454c 100644 --- a/README.md +++ b/README.md @@ -1,204 +1,634 @@ -# Installation +# JABS Postprocess -## Singularity Container +JABS-postprocess is a comprehensive Python toolkit for analyzing behavioral data from +the [JABS (JAX Animal Behavior System)](https://github.com/KumarLabJax/JABS-behavior-classifier) +computer vision pipeline. Transform video pose estimations and behavior predictions +into publication-ready tables and visualizations. -This code contains a [singularity definition file](vm/jabs-postprocess.def) for -assistance with installing the python environment. This environment supports both -generating behavior table files and plotting the data in python. +## Quick Start -Example building of the singularity image: +### Prerequisites +- Python 3.10 or higher +- Access to JABS project data (pose and behavior prediction files) + +### Installation + +This section covers installing JABS-postprocess using either PyPI or Docker. See the +[Development Installation](#development-installation) section for instructions on +installing from source. + +#### Option 1: From PyPI + +##### Using `venv` and `pip` +```bash + +# Create and activate virtual environment +python3 -m venv jabs_postprocess_env +source jabs_postprocess_env/bin/activate # On Windows: jabs_postprocess_env\Scripts\activate + +# Install JABS-postprocess +pip install jabs-postprocess ``` -singularity build --fakeroot jabs-postprocess.sif vm/jabs-postprocess.def + +##### Using `uv` (Recommended) +```bash + +# Create project directory and install with uv +mkdir my_jabs_analysis && cd my_jabs_analysis +uv add jabs-postprocess ``` -### Running Commands in Singularity -When using the Singularity container, the environment is set up with the command line -script on your path. You can run commands directly, with or without using the root -`jabs-postprocess` command. +```bash +# Or install globally with uv +uv tool install jabs-postprocess ``` -$ singularity run jabs-postprocess.sif --help -Usage: jabs-postprocess [OPTIONS] COMMAND [ARGS]... - -╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────────╮ -│ --install-completion Install completion for the current shell. │ -│ --show-completion Show completion for the current shell, to copy it or customize the installation. │ -│ --help Show this message and exit. │ -╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ -╭─ Commands ──────────────────────────────────────────────────────────────────────────────────────────────────────╮ -│ transform-bouts-to-bins Transform a bout file into a summary table. │ -│ create-snippet Create a video snippet from a JABS recording with optional behavior/pose rendering. │ -│ evaluate-ground-truth Evaluate classifier performance on densely annotated ground truth data. │ -│ generate-tables Generate behavior tables from JABS predictions. │ -│ heuristic-classify Process heuristic classification for behavior analysis. │ -╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ +You can also use poetry, or any other Python package manager of your choice. 
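+
+For example, to add it to an existing Poetry-managed project (illustrative; any package manager that can install from PyPI works the same way):
+
+```bash
+# Add jabs-postprocess as a dependency of your Poetry project
+poetry add jabs-postprocess
+```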
+
+#### Option 2: With Containerization
+##### Using Docker
+```bash
+
+docker pull aberger4/jabs-postprocess:latest
+docker run -it --rm aberger4/jabs-postprocess:latest jabs-postprocess --help
```
-For example:
+##### Using Singularity
+```bash
+
+# Clone source code
+git clone https://github.com/KumarLabJax/JABS-postprocess.git
+cd JABS-postprocess
+
+# Build container
+singularity build --fakeroot jabs-postprocess.sif vm/jabs-postprocess.def
+
+# Run commands through container
+singularity run jabs-postprocess.sif jabs-postprocess --help
```
-singularity run jabs-Postprocessing.sif jabs-postprocess generate-tables --help
+
+### Verify Installation
+```bash
+
+# Check if installation worked (adjust command based on your installation method)
+jabs-postprocess --help # If installed with pip/uv tool
+uv run jabs-postprocess --help # If using uv project
+poetry run jabs-postprocess --help # If using poetry
+
+# Should display available commands:
+# - generate-tables
+# - heuristic-classify
+# - create-snippet
+# - evaluate-ground-truth
+# - merge-tables
+# - add-bout-statistics
```
-## Virtual Environment
+## Key Concepts
-This project uses [`uv`](https://docs.astral.sh/uv/) for managing dependencies. You can use it to set up your virutal environment.
+Understanding these core concepts will help you effectively use JABS-postprocess:
-From the root of the repository, run:
+
+### Data Organization
+- **Project Folder**: Contains all data files for an experiment
+- **Pose Files**: `*_pose_est_v*.h5` - Body part coordinates from computer vision
+- **Behavior Files**: `*_behavior.h5` - ML classifier predictions for behaviors
+- **Feature Files**: `features.h5` - Optional engineered features (speed, distances, etc.)
+
+### Behavior Analysis Pipeline
```
-uv sync
+Raw Video → JABS Pose Detection → JABS Behavior Classification → JABS-Postprocess Tables → Analysis/Visualization
```
-### Pip Based Virtual Environment
-If you must use pip, you can create a virtual environment by running:
+### Bout-Based Analysis
+- **Bout**: Continuous sequence of the same behavior state
+- **Behavior States**:
+  - `1`: Behavior detected
+  - `0`: Not performing behavior
+  - `-1`: Missing pose data (no prediction possible)
+
+### Filtering Parameters
+Applied sequentially to clean up predictions:
-```python3 -m venv postprocess_venv
-source postprocess_venv/bin/activate
-pip3 install -r vm/requirements.txt
+1. **Interpolate Size** (`--interpolate_size`): Fill gaps in missing data ≤ N frames
+2. **Stitch Gap** (`--stitch_gap`): Merge behavior bouts separated by ≤ N frames
+3. **Min Bout Length** (`--min_bout_length`): Remove behavior bouts shorter than N frames
+
+Example: With `--interpolate_size 5 --stitch_gap 10 --min_bout_length 30`:
```
+Raw:    [Behavior-5frames] [Missing-3frames] [NotBehavior-8frames] [Behavior-10frames]
+Step 1: [Behavior-5frames] [Behavior-3frames] [NotBehavior-8frames] [Behavior-10frames]  # Interpolate
+Step 2: [Behavior-26frames]  # Stitch (5+3+8+10=26)
+Step 3: []  # Filter out (26 < 30)
```
-Only python3.10 has been tested.
+
+A minimal, illustrative Python sketch of these three steps is included at the end of this Key Concepts section.
+
+### Output Tables
-# Generating Behavior Tables
+**Bout Table** (`*_bouts.csv`): Raw bout-level data
-## Classifier-based Table Generation
+- Each row = one behavioral bout
+- Columns: animal_id, start_frame, duration, behavior_state, etc.
+
+**Summary Table** (`*_summaries.csv`): Time-binned summaries
+- Each row = one time bin (default: 60 minutes)
+- Columns: time_behavior, bout_count, distances_traveled, etc.
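+
+To make the three filtering steps concrete, here is a simplified, illustrative Python sketch. It is not the library's implementation (the real pipeline works on bout-level, run-length encoded data and splits removed gaps between the neighbouring bouts); it only shows the order of operations on a per-frame state vector:
+
+```python
+# Illustrative only: simplified version of interpolate -> stitch -> min-bout filtering
+# on a per-frame state vector (1 = behavior, 0 = not behavior, -1 = missing).
+import numpy as np
+
+
+def runs(states):
+    """Yield (start, length, value) for each run of identical values."""
+    start = 0
+    for i in range(1, len(states) + 1):
+        if i == len(states) or states[i] != states[start]:
+            yield start, i - start, states[start]
+            start = i
+
+
+def filter_states(states, interpolate_size, stitch_gap, min_bout_length):
+    states = np.asarray(states).copy()
+
+    # Step 1: fill short runs of missing data (simplified: copy the left neighbour)
+    for start, length, value in list(runs(states)):
+        if value == -1 and length <= interpolate_size and start > 0:
+            states[start:start + length] = states[start - 1]
+
+    # Step 2: stitch behavior bouts separated by short not-behavior gaps
+    for start, length, value in list(runs(states)):
+        end = start + length
+        if (value == 0 and length <= stitch_gap
+                and start > 0 and end < len(states)
+                and states[start - 1] == 1 and states[end] == 1):
+            states[start:end] = 1
+
+    # Step 3: drop behavior bouts shorter than the minimum length
+    for start, length, value in list(runs(states)):
+        if value == 1 and length < min_bout_length:
+            states[start:start + length] = 0
+    return states
+
+
+# Toy example mirroring the walkthrough above
+raw = [1] * 5 + [-1] * 3 + [0] * 8 + [1] * 10
+# All zeros: the stitched 26-frame bout is shorter than min_bout_length=30
+print(filter_states(raw, interpolate_size=5, stitch_gap=10, min_bout_length=30))
+```
+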
-**Note**: When using a uv based environment, use the `jabs-postprocess` command: +## Usage Examples +### Example 1: Basic Behavior Table Generation + +Generate tables for grooming and locomotion behaviors from classifier predictions: + +```bash + +# Navigate to your data directory +cd /path/to/your/experiment + +# Generate behavior tables for multiple behaviors +jabs-postprocess generate-tables \ + --project_folder ./my_experiment \ + --behavior grooming \ + --behavior locomotion \ + --out_prefix experiment_results \ + --out_bin_size 60 \ + --interpolate_size 5 \ + --stitch_gap 15 \ + --min_bout_length 30 ``` -uv run jabs-postprocess generate-tables \ - --project_folder /path/to/project/folder/ \ - --out_prefix results \ - --behavior Behavior_1 \ - --behavior Behavior_2 + +**Expected Output:** +``` +Generated tables for grooming: + Bout table: experiment_results_grooming_bouts.csv + Summary table: experiment_results_grooming_summaries.csv + ✓ Includes bout statistics + +Generated tables for locomotion: + Bout table: experiment_results_locomotion_bouts.csv + Summary table: experiment_results_locomotion_summaries.csv + ✓ Includes bout statistics ``` -This will generate 2 behavior table files per behavior detected in the project folder. You must include `--behavior BehaviorName` to generate a behavior table for each behavior. If you are unsure which behavior are available in a given project folder, you can check by intentionally guessing incorrectly. +**What this does:** +- Processes all pose/behavior files in `./my_experiment` +- Creates 4 CSV files (2 per behavior) +- Fills missing data gaps ≤ 5 frames +- Merges behavior bouts separated by ≤ 15 frames +- Removes behavior bouts shorter than 30 frames +- Bins data into 60-minute summaries +- Adds statistics like bout count, average duration, etc. 
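+
+After the run finishes, you can sanity-check the generated files directly from the shell. This is illustrative; the file names follow the `--out_prefix` and behavior names used above, and the first lines of each table hold run metadata (see the loading examples later in this README):
+
+```bash
+# Peek at the run metadata and the first few rows of each table
+head -n 5 experiment_results_grooming_bouts.csv
+head -n 5 experiment_results_grooming_summaries.csv
+
+# Rough count of bout rows (assumes two metadata lines plus one column-header line)
+tail -n +4 experiment_results_grooming_bouts.csv | wc -l
+```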
+ +### Example 2: Heuristic Classification Workflow + +Use pose-based features to classify freezing behavior (alternative to ML classifiers): + +```bash -To see all options with a short description, run: +# First, make sure you have exported features from JABS +# Then run heuristic classification +jabs-postprocess heuristic-classify \ + --project_folder ./fear_conditioning_experiment \ + --behavior_config freeze.yaml \ + --feature_folder features \ + --out_prefix freeze_analysis \ + --out_bin_size 30 \ + --min_bout_length 90 ``` -uv run jabs-postprocess generate-tables --help + +**Freeze behavior definition** (from `freeze.yaml`): +```yaml +# Mouse immobile for at least 3 seconds +definition: + all: + - less than: + - features/per_frame/point_speeds BASE_NECK speed + - 2.0 # pixels/frame + - less than: + - features/per_frame/point_speeds NOSE speed + - 2.0 + - less than: + - features/per_frame/point_speeds BASE_TAIL speed + - 2.0 ``` -## JABS-feature-based Table Generation +**Expected Output:** +``` +freeze_analysis_freeze_bouts.csv # Individual freezing bouts +freeze_analysis_freeze_summaries.csv # 30-minute time bins +``` +### Example 3: Video Snippet Creation with Behavior Overlay + +Extract video clips showing specific behaviors with pose and prediction overlays: + +```bash +# Create 30-second clip starting at 5 minutes with behavior overlay +jabs-postprocess create-snippet \ + --input_video ./experiment_videos/mouse_session_2023-06-15_14-30-00.mp4 \ + --output_video ./clips/grooming_example.mp4 \ + --start 300 \ + --duration 30 \ + --time_units second \ + --pose_file ./experiment_data/mouse_session_2023-06-15_14-30-00_pose_est_v5.h5 \ + --behavior_file ./experiment_data/mouse_session_2023-06-15_14-30-00_behavior.h5 \ + --render_pose \ + --overwrite ``` -uv run jabs-postprocess heuristic-classify \ - --behavior_config src/heuristic_classifiers/feeze.yaml \` - --project_folder /path/to/project/folder/ \ - --feature_folder /path/to/project/features/ + +**What this creates:** +- 30-second video clip (5:00-5:30) +- Pose keypoints overlaid on video +- Behavior predictions shown as colored regions +- Useful for validating classifier performance or creating figures + +**Advanced Example - Extract Multiple High-Confidence Behavior Bouts:** +```bash +# First generate behavior tables to identify good examples +jabs-postprocess generate-tables \ + --project_folder ./social_experiment \ + --behavior approach \ + --out_prefix social_analysis + +# Then examine the bout table to find interesting time periods +# Look for approach bouts with duration > 60 frames (2 seconds at 30fps) + +# Extract specific bout (manually identified from bout table) +jabs-postprocess create-snippet \ + --input_video ./social_experiment/videos/pair_A_2023-08-10_16-45-22.mp4 \ + --output_video ./analysis_clips/approach_bout_example.mp4 \ + --start 1847 \ + --duration 180 \ + --time_units frame \ + --pose_file ./social_experiment/pair_A_2023-08-10_16-45-22_pose_est_v5.h5 \ + --behavior_file ./social_experiment/pair_A_2023-08-10_16-45-22_behavior.h5 \ + --render_pose ``` -This will generate 2 behavior table files based on the threshold applied to the feature. Additional `--feature_key --relation --threshold ` can be used in succession to indicate all conditions at the same time (e.g. `feature_1 < threshold_1 AND feature_2 > threshold_2`). 
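+
+The "examine the bout table" step in the workflow above can be scripted. Below is a minimal, illustrative sketch: the file name follows the `--out_prefix` and behavior used above, and the column names (`is_behavior`, `start`, `duration`, `video_name`) are assumptions about the bout table output; check the CSV header and adjust if your version uses different names (e.g. `start_frame`, `behavior_state`):
+
+```python
+import pandas as pd
+
+# Bout tables begin with metadata lines; skip them before reading the header row
+bouts = pd.read_csv('social_analysis_approach_bouts.csv', skiprows=2)
+
+# Keep only behavior bouts longer than 60 frames (~2 seconds at 30 fps)
+candidates = bouts[(bouts['is_behavior'] == 1) & (bouts['duration'] > 60)]
+
+# Print candidates, longest first, ready to plug into
+# `create-snippet --start <frame> --duration <frames> --time_units frame`
+for _, row in candidates.sort_values('duration', ascending=False).iterrows():
+    print(f"{row['video_name']}: start={row['start']}, duration={row['duration']}")
+```
+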
+## Working with Generated Data + +### Loading Data in Python -To see all options with a short description, run: +```python +import pandas as pd +import matplotlib.pyplot as plt + +# Load bout-level data +bouts_df = pd.read_csv('experiment_results_grooming_bouts.csv', skiprows=2) +print(f"Found {len(bouts_df)} grooming bouts") + +# Load summary data +summary_df = pd.read_csv('experiment_results_grooming_summaries.csv', skiprows=2) + +# Basic analysis +print(f"Total grooming time: {bouts_df['duration'].sum()/30:.1f} seconds") # Assuming 30 fps +print(f"Average bout duration: {bouts_df['duration'].mean():.1f} frames") + +# Plot timeline +plt.figure(figsize=(12, 6)) +plt.scatter(pd.to_datetime(summary_df['time']), summary_df['bout_behavior']) +plt.xlabel('Time') +plt.ylabel('Grooming Bouts per Hour') +plt.title('Grooming Behavior Over Time') +plt.show() ``` -uv run jabs-postprocess heuristic-classify --help + +### Using Built-in Analysis Tools + +```python +# Use JABS-postprocess plotting utilities +from jabs_postprocess.analysis_utils.plots import generate_time_vs_feature_plot +from jabs_postprocess.analysis_utils.parse_table import read_ltm_summary_table + +# Read data with metadata +header_data, df = read_ltm_summary_table('experiment_results_grooming_summaries.csv') + +# Create time-series plots +plot = generate_time_vs_feature_plot( + df, + x_col='relative_exp_time', + y_col='bout_behavior', + title=f"{header_data['Behavior'][0]} Analysis" +) +plot.draw().show() ``` -## Notes on Filtering +## Available Heuristic Classifiers + +Pre-built behavior classifiers using pose features: + +| Behavior | Description | Key Features | Min Bout | +|----------|-------------|--------------|----------| +| **Locomotion** | Movement > 5 cm/s | `centroid_velocity_mag > 5.0` | 15 frames | +| **Freeze** | Immobility for ≥3 seconds | `neck_speed < 2.0 AND nose_speed < 2.0 AND tail_speed < 2.0` | 90 frames | +| **Wall Facing** | Oriented toward wall | Head direction + distance to wall | 5 frames | +| **Corner** | In corner region | Position in corner zones | 5 frames | +| **Periphery** | Along arena edges | Distance from center | 5 frames | -We apply filtering in 3 sequential stages using optional parameters. -All filtering is applied on bout-level data targeted at removing bouts in 1 of the 3 states (missing prediction, not behavior, behavior). When a block gets deleted, the borders are compared. When the borders match, the bout of the surrounding predictions get merged. When they borders mismatch, 50% of the deleted block gets assigned to each bordering. If the deleted duration is not divisible by 2, the later bout receives 1 more frame. +### Creating Custom Heuristic Classifiers -* `--max_interpolate_size` : The maximum size to delete missing predictions. Missing predictions below this size are assigned 50% to each of its bordering prediction blocks. -* `--stitch_gap` : The maximum size to delete not behavior predictions. This essentially attempts to merge together multiple behavior bouts. -* `--min_bout_length` : The maximum size to delete behavior predictions. Any behavior bout shorter than this length gets deleted. +```yaml +# Example: custom_rearing.yaml +behavior: Rearing -The order of deletions is: +interpolate: 5 +stitch: 10 +min_bout: 45 # 1.5 seconds at 30fps -1. Missing Predictions -2. Not Behavior -3. 
Behavior +definition: + all: + - greater than: + - features/per_frame/point_heights NOSE height + - 50.0 # Nose elevated above baseline + - less than: + - features/per_frame/centroid_velocity_mag centroid_velocity_mag + - 3.0 # Minimal movement during rearing +``` -## Data table extensions +```bash +# Use your custom classifier +jabs-postprocess heuristic-classify \ + --project_folder ./open_field_test \ + --behavior_config ./custom_rearing.yaml \ + --feature_folder features \ + --out_prefix rearing_analysis +``` -Lots of the functions used in generating these behavior tables were designed for potential re-use. Check out the functions inside [jabs_utils](jabs_utils/) if you wish to possibly extend the functionality of the generate behavior scripts. +## Common Issues and Solutions -# Data table format +### 1. "No behavior files found in project folder" -There are two behavior tables generated. Both contain a header line to store parameters used while calling the script. +**Problem**: The generate-tables command can't find behavior prediction files. -Some features are optional, because calculating them can be expensive. These options are noted with an asterisk (\*). While default behavior is to include them, they are not guaranteed. +**Solutions**: +```bash +# Check your project structure +ls -la /path/to/project/ +# Look for files ending in _behavior.h5 -## Header Data +# Common issues: +# - Wrong folder path +# - Files named differently +# - Files in subdirectories -Stores data pertaining to the script call that globally addresses all data within the table. +# If files are in subdirectories, try: +find /path/to/project/ -name "*_behavior.h5" -type f +``` + +**Expected Structure**: +``` +project_folder/ +├── mouse_session_2023-06-15_14-30-00_pose_est_v5.h5 +├── mouse_session_2023-06-15_14-30-00_behavior.h5 +├── mouse_session_2023-06-15_15-30-00_pose_est_v5.h5 +├── mouse_session_2023-06-15_15-30-00_behavior.h5 +└── ... +``` -* `Project Folder` : The folder the script searched for data -* `Behavior` : The behavior the script parsed -* `Interpolate Size` : The number of frames where missing data gets interpolated -* `Stitch Gap` : The number of frames in which predicted bouts were merged together -* `Min Bout Length` : The number of frames in which short bouts were omitted (filtered out) -* `Out Bin Size` : Time duration (minutes) used in binning the results +### 2. "Unknown behavior: grooming" -## Bout Table +**Problem**: The behavior name doesn't match what's in the behavior files. 
-The bout table contains a compressed RLE encoded format for each bout (post-filtering) +**Solutions**: +```bash +# Intentionally use wrong behavior to see available options +jabs-postprocess generate-tables \ + --project_folder ./my_project \ + --behavior wrong_name \ + --out_prefix test -* `animal_idx` : Animal index in the pose file (typically not used, see pose file documentation for indexing rules) -* `longterm_idx` : Identity of the mouse in the experiment (-1 reserved for unlinked animals, animals in experiment are index 0+) -* `exp_prefix` : Detected experiment ID -* `time` : Formatted time string in "%Y-%m-%d %H:%M:%S" of the time this bout was extracted from -* `video_name` : Name of the video this bout was extracted from -* `start` : Start in frame of the bout -* `duration` : Duration of the bout in frames -* `is_behavior` : State of the described bout - * `-1` : The mouse did not have a pose to create a prediction - * `0` : Not behavior prediction - * `1` : Behavior prediction -* `distance`\* : Distance traveled during bout +# Error message will show available behaviors like: +# Available behaviors: ['approach', 'locomotion', 'rearing', 'grooming'] -## Binned Table +# Use exact spelling +jabs-postprocess generate-tables \ + --project_folder ./my_project \ + --behavior approach \ + --out_prefix results +``` -The binned table contains summaries of the results in time-bins. +### 3. "FileExistsError: Output file already exists" -Summaries included: +**Problem**: Output files from previous runs exist. -* `longterm_idx` : Identity of the mouse in the experiment (-1 reserved for unlinked animals, animals in experiment are index 0+) -* `exp_prefix` : Detected experiment ID -* `time` : Formatted time string in "%Y-%m-%d %H:%M:%S" of the time bin -* `time_no_pred` : Count of frames where mouse did not have a predicted pose (missing data) -* `time_not_behavior` : Count of frames where mouse is not performing the behavior -* `time_behavior` : Count of frames where mouse is performing the behavior -* `bout_behavior` : Number of bouts where the mouse is performing the behavior - * Bouts are counted by the proportion in which bouts are contained in a time bin - * If a bout spans multiple time bins, it will be divided into both via the proportion of time - * Sum of bouts across bins produces the correct total count - * Note that bouts cannot span between video files -* `not_behavior_dist`\* : Total distance traveled during not behavior bouts -* `behavior_dist`\* : Total distance traveled during behavior bouts +**Solutions**: +```bash +# Option 1: Use --overwrite flag +jabs-postprocess generate-tables \ + --project_folder ./my_project \ + --behavior grooming \ + --out_prefix results \ + --overwrite -# Example Plotting Code +# Option 2: Change output prefix +jabs-postprocess generate-tables \ + --project_folder ./my_project \ + --behavior grooming \ + --out_prefix results_v2 -Since the data is in a "long" format, it is generally straight forward to generate plots using ggplot in R or plotnine in python. +# Option 3: Remove old files +rm results_grooming_*.csv +``` -Some example code for generating plots is located in [test_plot.py](test_plot.py). +### 4. "Empty behavior table generated" + +**Problem**: No behaviors detected after filtering. 
+
+**Solutions**:
+```bash
+# Try more permissive filtering: raise interpolate_size and stitch_gap,
+# lower min_bout_length
+jabs-postprocess generate-tables \
+    --project_folder ./my_project \
+    --behavior grooming \
+    --out_prefix debug \
+    --interpolate_size 10 \
+    --stitch_gap 30 \
+    --min_bout_length 5
+
+# Check the raw predictions first
+python -c "
+import h5py
+import numpy as np
+with h5py.File('path_to_behavior.h5', 'r') as f:
+    predictions = f['preds'][:]
+    print(f'Behavior predictions shape: {predictions.shape}')
+    print(f'Unique values: {np.unique(predictions)}')
+    print(f'Behavior frames: {np.sum(predictions > 0.5)}')
+"
```
-Additionally, there are a variety of helper functions located in [analysis_utils](analysis_utils/) for reading, manipulating, and generating plots of data using the data tables produced.
+### 5. "KeyError: 'preds' when reading behavior file"
-# Dense Ground Truth Performance Scripts
+**Problem**: Behavior file structure is unexpected.
-These scripts are still in the prototyping phase, but example methods of comparing predictions with a JABS annotated set of videos are available using:
+**Solutions**:
+```bash
+# Examine file structure
+python -c "
+import h5py
+with h5py.File('problematic_file.h5', 'r') as f:
+    print('Available keys:', list(f.keys()))
+    for key in f.keys():
+        print(f'{key}: {f[key].shape if hasattr(f[key], \"shape\") else \"group\"}')"
+```
+### 6. "Pose file and behavior file frame count mismatch"
+
+**Problem**: Different number of frames between pose and behavior data.
+
+**Solutions**:
+- Check if files are from the same recording session
+- Verify timestamps match exactly
+- Some videos may have been truncated during processing
+
+```bash
+# Compare frame counts
+python -c "
+import h5py
+with h5py.File('pose_file.h5', 'r') as f:
+    pose_frames = f['poseest'][()].shape[0]
+with h5py.File('behavior_file.h5', 'r') as f:
+    behavior_frames = f['preds'][()].shape[0]
+print(f'Pose frames: {pose_frames}, Behavior frames: {behavior_frames}')
+"
```
-uv run jabs-postprocess compare-gt --help
+
+### 7. "No features found" for heuristic classification
+
+**Problem**: Feature files missing or incomplete.
+
+**Solutions**:
+```bash
+# Check if features were exported from JABS
+ls -la features/
+# Should see: features.h5, centroid_velocity.csv, point_speeds.csv, etc.
+ +# Re-export features from JABS if missing +# See JABS documentation for feature export + +# Check specific feature availability +python -c " +import h5py +with h5py.File('features/features.h5', 'r') as f: + print('Available feature groups:', list(f.keys())) + if 'per_frame' in f: + print('Per-frame features:', list(f['per_frame'].keys())) +" ``` -# Video Clip Extraction +## Example Data and Configuration Files + +### Sample Heuristic Configurations +- [Locomotion classifier](src/jabs_postprocess/heuristic_classifiers/locomotion.yaml) - Movement detection +- [Freeze classifier](src/jabs_postprocess/heuristic_classifiers/freeze.yaml) - Immobility detection +- [Wall facing classifier](src/jabs_postprocess/heuristic_classifiers/wall_facing.yaml) - Spatial orientation +- [Corner classifier](src/jabs_postprocess/heuristic_classifiers/corner.yaml) - Corner preference +- [Periphery classifier](src/jabs_postprocess/heuristic_classifiers/periphery.yaml) - Thigmotaxis + +### Example Analysis Scripts +- [Basic plotting examples](examples/test_plot.py) - Time-series visualization +- [ID matching examples](examples/test_id_matching.py) - Multi-animal tracking +- [Video sampling](examples/sample_uncertain_vids.py) - Quality control workflows -To create video snippets based on an input video (optionally rendering behavior predictions): +### Data Structure Examples + +**Project Folder Layout**: +``` +my_experiment/ +├── videos/ # Original videos (optional) +│ ├── mouse_A_2023-06-15_14-30-00.mp4 +│ └── mouse_A_2023-06-15_15-30-00.mp4 +├── mouse_A_2023-06-15_14-30-00_pose_est_v5.h5 # Pose data +├── mouse_A_2023-06-15_14-30-00_behavior.h5 # Behavior predictions +├── mouse_A_2023-06-15_15-30-00_pose_est_v5.h5 +├── mouse_A_2023-06-15_15-30-00_behavior.h5 +└── features/ # Feature data (optional) + ├── features.h5 + ├── centroid_velocity.csv + └── point_speeds.csv +``` +## Advanced Usage + +### Batch Processing Multiple Experiments + +```bash +#!/bin/bash +# Process multiple experiment folders +experiments=("experiment_1" "experiment_2" "experiment_3") +behaviors=("approach" "grooming" "locomotion") + +for exp in "${experiments[@]}"; do + for behavior in "${behaviors[@]}"; do + echo "Processing $exp - $behavior" + jabs-postprocess generate-tables \ + --project_folder "./$exp" \ + --behavior "$behavior" \ + --out_prefix "${exp}_${behavior}" \ + --overwrite + done +done ``` -uv run jabs-postprocess create-snippets --help + +### Merging Results Across Experiments + +```bash +# Merge grooming results from multiple experiments +jabs-postprocess merge-multiple-tables \ + --table_folder ./results \ + --behaviors grooming \ + --table_pattern "*grooming_bouts.csv" \ + --output_prefix combined_grooming \ + --overwrite ``` -To sample uncertain videos from a project folder: +### Performance Evaluation Against Ground Truth +```bash +# Evaluate classifier performance +jabs-postprocess evaluate-ground-truth \ + --behavior grooming \ + --ground_truth_folder ./manually_annotated_data \ + --prediction_folder ./classifier_predictions \ + --results_folder ./evaluation_results ``` -uv run jabs-postprocess sample-uncertain --help + +### Development Installation + +For development or accessing the latest features, you can install from source: + +```bash +# Clone the repository +git clone https://github.com/KumarLabJax/JABS-postprocess.git +cd JABS-postprocess + +# Option 1: UV development install +uv sync +uv run jabs-postprocess --help + +# Option 2: Pip development install +python3 -m venv jabs_dev_env +source 
jabs_dev_env/bin/activate +pip install -e . + +# Option 3: Poetry development install +poetry install +poetry run jabs-postprocess --help ``` -For both of these commands, check the `--help` function for available filters. +## Getting Help + +- **Documentation**: See inline help with `jabs-postprocess COMMAND --help` (or use `uv run`/`poetry run` prefix based on your installation) +- **Issues**: Report problems at [GitHub Issues](https://github.com/KumarLabJax/JABS-postprocess/issues) +- **Examples**: Check the documentation and examples for working code patterns + +## Citation + +If you use JABS-postprocess in your research, please cite: + +```bibtex +@software{jabs_postprocess, + title = {JABS-postprocess: Behavioral Analysis Toolkit}, + author = {Kumar Lab, The Jackson Laboratory}, + url = {https://github.com/KumarLabJax/JABS-postprocess}, + year = {2024} +} +```