A reading management system controlled by a webpage "remote" and driven by voice recognition.

liturgy.display shows scripture/reading text on screen and advances slides automatically by counting the words recognized by the Vosk speech-recognition engine. It is designed for simple setup and offline/edge operation.
- Offline speech recognition (Vosk)
- Configurable words-per-slide to suit screen size and preference
- Simple configuration via a .env file
- Basic support for USCCB readings (no API key required)
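The word-count advancement described above can be sketched in a few lines. This is an illustrative model of the logic, not the project's actual API: recognized text (for example, a final result from Vosk) is fed in, words are counted, and the slide index advances each time the configured threshold is reached. The class and method names here are hypothetical.

```python
class SlideAdvancer:
    """Advance slides after a configurable number of recognized words.

    Illustrative sketch: names and structure are assumptions, not the
    project's real interface.
    """

    def __init__(self, words_per_slide=40):
        self.words_per_slide = words_per_slide
        self.word_count = 0   # words accumulated on the current slide
        self.slide_index = 0  # current slide position

    def feed(self, recognized_text):
        """Count words in a recognized utterance; advance on threshold."""
        self.word_count += len(recognized_text.split())
        # Advance one slide per full words-per-slide chunk, carrying the
        # remainder over to the next slide.
        while self.word_count >= self.words_per_slide:
            self.word_count -= self.words_per_slide
            self.slide_index += 1
        return self.slide_index
```

Leftover words carry over between utterances, so a phrase that straddles a slide boundary still counts toward the next advance.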
- Python 3.8+ (or adjust for your runtime)
- pip
- Microphone access on the host machine
- A Vosk model (downloaded and extracted locally)
Download models: https://alphacephei.com/vosk/models
- Clone the repository:

  ```shell
  git clone /path/to/liturgy.display
  cd liturgy.display
  ```

- (Optional) Create and activate a virtualenv:

  ```shell
  python -m venv venv
  source venv/bin/activate
  ```

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```
Create a .env file in the project root with at least:

```
WORDS_PER_SLIDE=40
MODEL_PATH=/your/path/to/vosk/model
```
- WORDS_PER_SLIDE: number of words shown per slide before auto-advance
- MODEL_PATH: path to the extracted Vosk model directory
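A minimal sketch of how these .env values might be loaded, assuming a hand-rolled parser (the project may well use a library such as python-dotenv instead; the `load_env` name and the fallback defaults are illustrative):

```python
import os

def load_env(path=".env"):
    """Read KEY=VALUE settings from a .env file (illustrative sketch).

    Blank lines and '#' comments are skipped. WORDS_PER_SLIDE falls back
    to 40 when unset; MODEL_PATH falls back to an empty string.
    """
    values = {}
    if os.path.exists(path):
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    return {
        "WORDS_PER_SLIDE": int(values.get("WORDS_PER_SLIDE", 40)),
        "MODEL_PATH": values.get("MODEL_PATH", ""),
    }
```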
Start the app (example):

```shell
python main.py
```
- Model not found: verify that MODEL_PATH points to the extracted model folder, not to the downloaded archive or its parent directory.
- Microphone issues: ensure OS microphone permissions are granted and the correct input device is selected.
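A quick way to debug the "model not found" case is a sanity check on MODEL_PATH. Extracted Vosk models normally contain subdirectories such as `am` and `conf`; the helper below (a hypothetical name, not part of the project) uses that layout as a heuristic:

```python
import os

def looks_like_vosk_model(path):
    """Heuristic check that a path points at an extracted Vosk model.

    Assumes the standard Vosk model layout, which includes 'am' and
    'conf' subdirectories. A common mistake is pointing MODEL_PATH at
    the parent folder or at the still-zipped archive.
    """
    if not os.path.isdir(path):
        return False
    entries = set(os.listdir(path))
    return "am" in entries and "conf" in entries
```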