A modern, open-source schedule builder for uOttawa.
> [!NOTE]
> You will need bun (v1.3.x) and Docker (latest) installed.
> Optional: install turbo and wrangler globally via bun, as sketched below.
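A minimal sketch of that optional global install, assuming you want the binaries available on your PATH via bun:

```sh
# Install turbo and wrangler as global binaries with bun
bun add --global turbo wrangler
```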
```sh
git clone https://github.com/g-raman/uenroll.git
cd uenroll
bun install
```

### Build dependencies
```sh
bun run build
```

### Run Postgres instance locally
> [!WARNING]
> Make sure the Docker engine is running.
```sh
bun run db:up
```

### Seed database
```sh
bun run db:seed
```

Running this command populates the database with seed data. See the Populate with real data section if you want more/better data, or alternatively the Request seed file section.
```sh
bun run dev:web
```

You can use the Drizzle client (recommended) to view the database and run SQL queries. You can also use any PostgreSQL-compatible database client of your choice (Beekeeper Studio, DBeaver, pgAdmin, etc.).
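For a quick look without a GUI client, you can also connect with psql, assuming the default credentials from the seed-restore step below and an unchanged compose.yml:

```sh
# Connect to the local Postgres instance started by `bun run db:up`
psql 'postgresql://postgres:postgres@localhost:5432/postgres'
```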
```sh
bun run db:studio
```

You can now open https://local.drizzle.studio in your web browser of choice.
> [!CAUTION]
> For Safari/Brave/macOS users, browser access to localhost is denied by default. You need to create a self-signed certificate for Drizzle Studio to work. For Brave, you also need to disable Shields for the site.
```sh
brew install mkcert
mkcert -install
```

Restart the studio and you should be able to view it now.
### Populate with real data

> [!CAUTION]
> The dev mode for Cloudflare Workflows is a little janky: you will see a lot of errors on screen. It's fine to ignore these; the scraper should pick up most of the course catalogue. There are some issues around concurrency, so some courses will be missing. Refer to the Request seed file section for accurate data.
If you'd like to see actual data from the uOttawa public course registry, you can run the scraper against your local database instance. Run the following commands (assuming the database is running):
```sh
bun run dev:scraper
```

The default port for triggering cron jobs is 8787.
If there are no port conflicts, run the following command to trigger the scraper.
```sh
curl http://localhost:8787/cdn-cgi/handler/scheduled
```

Depending on network bottlenecks, the scraping should be done within 5-10 minutes.
> [!CAUTION]
> The Postgres instance running inside Docker doesn't persist data to your disk. If you want to keep the real data you populate your local database with, modify the compose.yml file to persist data to disk (see the sketch below) so you don't have to keep re-running the scraper.
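A rough sketch of what that change could look like. The service name, image, and data path below are assumptions, so adjust them to match what's actually in compose.yml:

```yaml
services:
  postgres:              # assumed service name; match it to compose.yml
    image: postgres      # assumed image/tag
    ports:
      - "5432:5432"
    volumes:
      # Named volume so data survives container restarts and `docker compose down`.
      # The data directory may differ depending on the Postgres image version.
      - uenroll_pgdata:/var/lib/postgresql/data

volumes:
  uenroll_pgdata:
```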
### Request seed file

If you don't want to wait for the scraper to run, you can request the seed.sql file from me. Send me an e-mail.
Install PostgreSQL (version 18) via your package manager or from the website. We'll use the psql utility to load the contents of the file into the local database.

```sh
psql 'postgresql://postgres:postgres@localhost:5432/postgres' < seed.sql
```