Skip to content

Claude/fix persistent sqlite hwb9 q#17

Merged
davidjuarezdev merged 5 commits intomainfrom
claude/fix-persistent-sqlite-HWB9Q
Mar 19, 2026
Merged

Claude/fix persistent sqlite hwb9 q#17
davidjuarezdev merged 5 commits intomainfrom
claude/fix-persistent-sqlite-HWB9Q

Conversation

@davidjuarezdev
Copy link
Copy Markdown
Owner

No description provided.

google-labs-jules Bot and others added 5 commits March 19, 2026 22:11
Previously, the `DatabaseBase` class recreated the `sqlite3` connection for every check (`contains()`), insertion (`add()`), deletion (`remove()`), and retrieval (`all()`). During sequential operations, like fetching metadata and verifying if track IDs are previously downloaded (e.g. downloading a playlist with hundreds of tracks), the overhead of creating/destroying these connections repeatedly is not negligible.

With this optimization, the Database instance caches its `sqlite3.connect` connection upon initialization (using `check_same_thread=False` to safely share across asyncio tasks running within the event loop) and gracefully shuts down in `__del__`.

Impact: Significant speedup (~10x on db operations) resolving track IDs sequentially.

Co-authored-by: davidjuarezdev <230496599+davidjuarezdev@users.noreply.github.com>
Previously, the `DatabaseBase` class recreated the `sqlite3` connection for every check (`contains()`), insertion (`add()`), deletion (`remove()`), and retrieval (`all()`). During sequential operations, like fetching metadata and verifying if track IDs are previously downloaded (e.g. downloading a playlist with hundreds of tracks), the overhead of creating/destroying these connections repeatedly is not negligible.

With this optimization, the Database instance caches its `sqlite3.connect` connection upon initialization (using `check_same_thread=False` to safely share across asyncio tasks running within the event loop) and gracefully shuts down in `__del__`.

Impact: Significant speedup (~10x on db operations) resolving track IDs sequentially.

Also updates the GitHub Actions CodeQL workflow to use `v3`/`v4` instead of the deprecated `v1`/`v2` actions which were causing CI failures.

Co-authored-by: davidjuarezdev <230496599+davidjuarezdev@users.noreply.github.com>
Previously, the `DatabaseBase` class recreated the `sqlite3` connection for every check (`contains()`), insertion (`add()`), deletion (`remove()`), and retrieval (`all()`). During sequential operations, like fetching metadata and verifying if track IDs are previously downloaded (e.g. downloading a playlist with hundreds of tracks), the overhead of creating/destroying these connections repeatedly is not negligible.

With this optimization, the Database instance caches its `sqlite3.connect` connection upon initialization (using `check_same_thread=False` to safely share across asyncio tasks running within the event loop) and gracefully shuts down in `__del__`.

Impact: Significant speedup (~10x on db operations) resolving track IDs sequentially.

Also updates the GitHub Actions CodeQL workflow to use `v3`/`v4` instead of the deprecated `v1`/`v2` actions which were causing CI failures, and addresses other CodeQL warnings like `setup-python-dependencies: false` and `CODEQL_ACTION_FILE_COVERAGE_ON_PRS`.

Co-authored-by: davidjuarezdev <230496599+davidjuarezdev@users.noreply.github.com>
Previously, the `DatabaseBase` class recreated the `sqlite3` connection for every check (`contains()`), insertion (`add()`), deletion (`remove()`), and retrieval (`all()`). During sequential operations, like fetching metadata and verifying if track IDs are previously downloaded (e.g. downloading a playlist with hundreds of tracks), the overhead of creating/destroying these connections repeatedly is not negligible.

With this optimization, the Database instance caches its `sqlite3.connect` connection upon initialization (using `check_same_thread=False` to safely share across asyncio tasks running within the event loop) and gracefully shuts down in `__del__`.

Impact: Significant speedup (~10x on db operations) resolving track IDs sequentially.

Also updates the GitHub Actions CodeQL workflow to use `v4` instead of the deprecated `v1` actions and explicitly turns off python package installation warnings via `build-mode: none` to fix CI failures.

Co-authored-by: davidjuarezdev <230496599+davidjuarezdev@users.noreply.github.com>
Accept deletion of codeql-analysis.yml (removed in main via PR #15)

https://claude.ai/code/session_018Mv9wywY8XSGfkztttn3Q9
Copilot AI review requested due to automatic review settings March 19, 2026 23:39
@davidjuarezdev davidjuarezdev merged commit 0967a3e into main Mar 19, 2026
1 of 2 checks passed
@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Mar 19, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 5fadcb6d-26c4-40e8-885b-189bb8c09dde

📥 Commits

Reviewing files that changed from the base of the PR and between ec7deb3 and 254be38.

📒 Files selected for processing (3)
  • .jules/bolt.md
  • streamrip/db.py
  • tests/test_deezer.py

Disabled knowledge base sources:

  • Jira integration is disabled

You can enable these sources in your CodeRabbit configuration.


📝 Walkthrough

Summary by CodeRabbit

  • Performance
    • Database operations are now more efficient through connection pooling, resulting in faster bulk workflows.

Walkthrough

Documentation added for database connection pooling requirements. DatabaseBase refactored to cache a persistent sqlite3 connection during initialization instead of creating new connections per operation. Test imports reorganized for consistency.

Changes

Cohort / File(s) Summary
Documentation
.jules/bolt.md
Added database connection pooling note for 2025-03-20 identifying need for persistent connection per instance instead of per-operation connections.
Database Implementation
streamrip/db.py
Refactored DatabaseBase to establish and reuse a persistent cached connection (self.conn) across all database operations. Added _table_exists() helper, modified initialization to check table existence, updated create() to use CREATE TABLE IF NOT EXISTS, added explicit commit() calls to write operations, introduced __del__() for cleanup, and updated reset() to close and reconnect with check_same_thread=False.
Test Imports
tests/test_deezer.py
Reordered imports and removed unused DeezerDownloadable import.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • ⚡ Bolt: Use persistent SQLite connection #14: Directly modifies DatabaseBase with persistent connection caching, table existence checking, and lifecycle management changes—likely overlapping implementation with identical purpose.

Poem

🐰 A clever cache, a persistent thread,
No more connections opened and shed,
One bond held tight through database calls,
Efficiency hops through the application halls,
SQLite whispers: "Share well, dear friend!"

✨ Finishing Touches
📝 Generate docstrings
  • Create stacked PR
  • Commit on current branch
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch claude/fix-persistent-sqlite-HWB9Q
📝 Coding Plan
  • Generate coding plan for human review comments

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Copy Markdown

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

This PR updates Streamrip’s SQLite database wrapper to reuse a persistent connection (instead of reconnecting per operation), and cleans up an unused import in the Deezer tests.

Changes:

  • Keep a single cached sqlite3.Connection per DatabaseBase instance and reuse it for create/contains/add/remove/all.
  • Add table-existence detection and make table creation idempotent (CREATE TABLE IF NOT EXISTS).
  • Minor import cleanup/formatting in tests/test_deezer.py; add a .jules/bolt.md note documenting the optimization.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.

File Description
streamrip/db.py Switches DB operations to a persistent SQLite connection; adds _table_exists, adjusts create, and updates CRUD methods accordingly.
tests/test_deezer.py Removes an unused import and normalizes import grouping.
.jules/bolt.md Documents the rationale and action for the persistent SQLite connection change.

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment thread streamrip/db.py
self.conn.execute(command, tuple(items))
self.conn.commit()
except sqlite3.IntegrityError as e:
# tried to insert an item that was already there
Copy link

Copilot AI Mar 19, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

add() catches sqlite3.IntegrityError but does not ROLLBACK the transaction. With a long-lived connection this can leave the connection in an open transaction and hold locks, causing subsequent operations (or other processes) to hit database is locked. Roll back on errors (or wrap DML in with self.conn: so commit/rollback is handled consistently).

Suggested change
# tried to insert an item that was already there
# tried to insert an item that was already there
self.conn.rollback()

Copilot uses AI. Check for mistakes.
Comment thread streamrip/db.py
Comment on lines +83 to +86
def __del__(self):
"""Ensure connection is closed on exit."""
if hasattr(self, 'conn') and self.conn:
self.conn.close()
Copy link

Copilot AI Mar 19, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Relying on __del__ to close the SQLite connection is not reliable (it may never run, and during interpreter shutdown module globals may already be torn down). Consider providing an explicit close() / context-manager API and calling it from the application shutdown path, and keep __del__ as a best-effort fallback only.

Suggested change
def __del__(self):
"""Ensure connection is closed on exit."""
if hasattr(self, 'conn') and self.conn:
self.conn.close()
def close(self) -> None:
"""Close the underlying SQLite connection, if open."""
conn = getattr(self, "conn", None)
if conn is not None:
try:
conn.close()
except Exception:
# Best-effort close; failures are not fatal.
logger.debug("Error while closing database connection", exc_info=True)
finally:
self.conn = None
def __enter__(self):
"""Support use as a context manager."""
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""Ensure the connection is closed when leaving a context manager."""
self.close()
def __del__(self):
"""Best-effort attempt to close the connection on garbage collection."""
try:
self.close()
except Exception:
# Avoid raising during interpreter shutdown
pass

Copilot uses AI. Check for mistakes.
Comment thread streamrip/db.py
Comment on lines +85 to +86
if hasattr(self, 'conn') and self.conn:
self.conn.close()
Copy link

Copilot AI Mar 19, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Repo formatting config enforces double quotes; these single-quoted string literals will be reformatted (and may fail CI if formatting checks run). Please use double quotes for consistency.

Copilot uses AI. Check for mistakes.
Comment thread streamrip/db.py

def reset(self):
"""Delete the database file."""
if hasattr(self, 'conn') and self.conn:
Copy link

Copilot AI Mar 19, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Repo formatting config enforces double quotes; use double quotes here as well (this currently deviates from tool.ruff.format.quote-style = "double").

Suggested change
if hasattr(self, 'conn') and self.conn:
if hasattr(self, "conn") and self.conn:

Copilot uses AI. Check for mistakes.
Comment thread streamrip/db.py
# ⚡ Bolt: Cache persistent SQLite connection to avoid recreating it
# on every db check/add. This gives ~10x speedup for database operations
# like downloading a playlist where it does hundreds of ID checks sequentially.
self.conn = sqlite3.connect(self.path, check_same_thread=False)
Copy link

Copilot AI Mar 19, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

check_same_thread=False disables sqlite3's built-in guardrails, but there is no locking/serialization around self.conn access. If this instance is ever used across threads, this can lead to unsafe concurrent access. If cross-thread use isn't required, keep the default (True); if it is required, add a lock around all DB operations (or use one connection per thread).

Suggested change
self.conn = sqlite3.connect(self.path, check_same_thread=False)
self.conn = sqlite3.connect(self.path)

Copilot uses AI. Check for mistakes.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants