Conversation
Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
Walkthrough

Switches real-time poll broadcasts from immediate Channels sends to a Redis "dirty_polls" set plus a Celery Beat task (every 2s) that snapshots dirty polls, queries fresh option data, and broadcasts aggregated updates; README updated (Chapa, Celery Beat, ws:// entry); small view/query/import tweaks.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant API as API Endpoint
    participant DB as Database
    participant Redis as Redis
    participant CeleryBeat as Celery Beat (2s)
    participant Task as broadcast_poll_updates Task
    participant Channels as Django Channels
    participant WS as WebSocket Clients
    User->>API: POST /vote/ (submit vote)
    API->>DB: Atomic increment PollOption.vote_count
    API->>Redis: SADD "dirty_polls" poll_id
    API-->>User: 200 OK
    Note over CeleryBeat: Every 2 seconds
    CeleryBeat->>Task: Trigger broadcast_poll_updates()
    Task->>Redis: RENAME "dirty_polls" -> "dirty_polls_processing"
    Task->>Redis: SMEMBERS "dirty_polls_processing"
    loop for each poll_id
        Task->>DB: Query fresh PollOption data for poll_id
        Task->>Channels: group_send to "poll_{poll_id}"
        Channels->>WS: Push aggregated poll update
    end
    Task->>Redis: DEL "dirty_polls_processing"
```
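The batching flow in the diagram can be modeled in a few lines of plain Python. The sketch below uses a hypothetical `FakeRedis` stand-in (not the project's code) for the Redis operations the task relies on, to show why many votes for the same poll collapse into a single broadcast per tick:

```python
class FakeRedis:
    """In-memory stand-in for the Redis operations the task uses:
    SADD (mark a poll dirty), RENAME (atomic snapshot), SMEMBERS, DEL."""

    def __init__(self):
        self.store = {}

    def sadd(self, key, *members):
        self.store.setdefault(key, set()).update(members)

    def exists(self, key):
        return key in self.store

    def rename(self, src, dst):
        if src not in self.store:
            raise KeyError(src)  # real Redis raises ResponseError here
        self.store[dst] = self.store.pop(src)

    def smembers(self, key):
        return set(self.store.get(key, set()))

    def delete(self, key):
        self.store.pop(key, None)


def broadcast_poll_updates(redis, broadcast):
    """One Beat tick: snapshot the dirty set, broadcast once per poll, clean up."""
    if not redis.exists("dirty_polls"):
        return []
    # Snapshot: new votes keep filling a fresh 'dirty_polls' set meanwhile.
    redis.rename("dirty_polls", "dirty_polls_processing")
    processed = []
    for poll_id in redis.smembers("dirty_polls_processing"):
        broadcast(poll_id)  # one group_send per poll, not per vote
        processed.append(poll_id)
    redis.delete("dirty_polls_processing")
    return processed
```

Because a set deduplicates members, five votes for poll "p1" within one tick still produce exactly one broadcast for "p1".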
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
6-23: Fix the misaligned “Poll Categorization” table row (leading space breaks table formatting).

```diff
- | Poll Categorization | Supports categorization of polls for better organization and retrieval. |
+| Poll Categorization | Supports categorization of polls for better organization and retrieval. |
```
🧹 Nitpick comments (4)
polls/signals.py (1)
19-21: Prefer `instance.option_id` / `instance.poll_id` to avoid FK deref + extra attribute work.

Tiny cleanup, but it also avoids surprises if related objects are deferred.

```diff
-    PollOption.objects.filter(id=instance.option.id).update(
+    PollOption.objects.filter(id=instance.option_id).update(
         vote_count=F("vote_count") + 1
     )
@@
-    con.sadd("dirty_polls", str(instance.poll.poll_id))
+    con.sadd("dirty_polls", str(instance.poll_id))
```

Also applies to: 25-25
core/tasks.py (1)

104-111: Consider stable ordering in `options_data` (and include whatever the frontend expects).

If the client expects options in a consistent order, add an explicit `.order_by(...)` (or include `index`).

```diff
-        options_data = list(
-            PollOption.objects.filter(poll_id=poll_id).values("id", "vote_count")
-        )
+        options_data = list(
+            PollOption.objects.filter(poll_id=poll_id)
+            .order_by("index")
+            .values("id", "index", "vote_count")
+        )
```

core/settings.py (1)
386-389: 2s beat cadence: ensure you won’t overlap tasks / overload workers; consider `timedelta(seconds=2)` for clarity.

If processing sometimes takes >2s, multiple tasks can queue up; make sure this is acceptable operationally (or add locking / drop-if-running).

```diff
     "broadcast-poll-updates": {
         "task": "core.tasks.broadcast_poll_updates",
-        "schedule": 2.0,  # every 2 seconds
+        "schedule": timedelta(seconds=2),
     },
```

polls/views.py (1)
57-61: Nice perf win with `.prefetch_related("options")`; consider applying similar prefetch/select_related in other branches.

Right now anonymous/vote branches return less-optimized querysets than the authenticated non-vote path.

Also applies to: 72-79
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`core/__pycache__/settings.cpython-313.pyc` is excluded by `!**/*.pyc`
📒 Files selected for processing (5)
- README.md (2 hunks)
- core/settings.py (1 hunks)
- core/tasks.py (2 hunks)
- polls/signals.py (1 hunks)
- polls/views.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
polls/signals.py (1)
polls/models.py (2)
Vote (91-107), PollOption (71-88)
core/tasks.py (1)
polls/models.py (1)
PollOption (71-88)
polls/views.py (1)
polls/models.py (1)
Poll (26-68)
🪛 markdownlint-cli2 (0.18.1)
README.md
112-112: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
140-140: Spaces inside code span elements
(MD038, no-space-in-code)
🪛 Ruff (0.14.8)
polls/signals.py
10-10: Unused function argument: sender
(ARG001)
10-10: Unused function argument: kwargs
(ARG001)
26-26: Do not catch blind exception: Exception
(BLE001)
core/tasks.py
89-89: Do not catch blind exception: Exception
(BLE001)
112-112: Do not catch blind exception: Exception
(BLE001)
113-113: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🔇 Additional comments (1)
polls/views.py (1)
81-87: The `creator` field is correctly implemented end-to-end. The Poll model, serializer, views, and permissions all use `creator` consistently. No runtime errors will occur. The `created_by` references found in the codebase belong to the PollCategory model, which is separate from Poll.
Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
Actionable comments posted: 1
♻️ Duplicate comments (3)
README.md (1)
111-112: Add blank line before the API table for proper markdown formatting.

Markdown best practices require blank lines surrounding tables. This ensures consistent rendering across different markdown parsers.
Apply this diff:
```diff
 ## API Documentation
+
 | Endpoint | Method | Description |
```

core/tasks.py (2)
93-116: Wrap processing in try/finally and requeue failed polls to prevent update loss.

Critical issues that were flagged in the previous review remain unaddressed:

- No try/finally: If any exception occurs outside the per-poll handler (lines 99-113), cleanup at line 116 never runs, leaving `dirty_polls_processing` stuck indefinitely
- Failed polls are dropped: When broadcasting fails (lines 112-113), that poll_id is lost when the processing key is deleted, so clients never see updates
- Redundant exception object: `logger.exception()` automatically logs the exception, don't include `{e}` in the message

Apply this diff:

```diff
     # 2. Get all unique Poll IDs that need updates
     dirty_poll_ids = redis_conn.smembers("dirty_polls_processing")
     logger.info(f"Broadcasting updates for {len(dirty_poll_ids)} polls")
+
+    failed_poll_ids = []

-    for poll_id_bytes in dirty_poll_ids:
-        try:
-            poll_id = poll_id_bytes.decode("utf-8")
-            room_group_name = f"poll_{poll_id}"
-
-            # 3. Fetch Fresh Data (Once per batch, not per vote)
-            options_data = list(
-                PollOption.objects.filter(poll_id=poll_id).values("id", "vote_count")
-            )
-
-            # 4. Broadcast to WebSocket Group
-            async_to_sync(channel_layer.group_send)(
-                room_group_name, {"type": "poll_update", "results": options_data}
-            )
-        except Exception as e:
-            logger.exception(f"Error broadcasting poll {poll_id}: {e}")
-
-    # 5. Cleanup the processing key
-    redis_conn.delete("dirty_polls_processing")
+    try:
+        for poll_id_bytes in dirty_poll_ids:
+            try:
+                poll_id = poll_id_bytes.decode("utf-8")
+                room_group_name = f"poll_{poll_id}"
+
+                # 3. Fetch Fresh Data (Once per batch, not per vote)
+                options_data = list(
+                    PollOption.objects.filter(poll_id=poll_id).values("id", "vote_count")
+                )
+
+                # 4. Broadcast to WebSocket Group
+                async_to_sync(channel_layer.group_send)(
+                    room_group_name, {"type": "poll_update", "results": options_data}
+                )
+            except Exception:
+                logger.exception(f"Error broadcasting poll {poll_id}")
+                failed_poll_ids.append(poll_id_bytes)
+    finally:
+        # Requeue failed polls for next tick
+        if failed_poll_ids:
+            redis_conn.sadd("dirty_polls", *failed_poll_ids)
+            logger.warning(f"Requeued {len(failed_poll_ids)} failed polls")
+
+        # 5. Cleanup the processing key
+        redis_conn.delete("dirty_polls_processing")
```

This ensures cleanup always runs and failed polls are retried on the next tick, preventing permanent update loss.
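The requeue-on-failure idea can be isolated into a small pure-Python helper, independent of Redis and Channels (a sketch with generic names, not the project's code): process every item, collect the ones whose handler raised, and hand them back so the caller can re-add them to the dirty set in a finally block.

```python
import logging

logger = logging.getLogger(__name__)


def process_batch(items, handler):
    """Run handler over every item; never let one failure drop the rest.

    Returns the items whose handler raised, so the caller can requeue
    them (e.g. SADD back into 'dirty_polls') before deleting the
    processing key.
    """
    failed = []
    for item in items:
        try:
            handler(item)
        except Exception:
            # logger.exception already records the traceback;
            # no need to interpolate the exception object itself.
            logger.exception("Error processing %s", item)
            failed.append(item)
    return failed
```

The key property is that a failing item neither aborts the batch nor silently disappears.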
77-91: Add channel_layer None-check, narrow exception handling, and set TTL on processing key.

Three issues:

- No None-check for `channel_layer`: misconfiguration will cause crashes later
- Generic `Exception` catch masks different failure modes (connection errors vs. race conditions)
- No TTL on `dirty_polls_processing`: if the task crashes before line 116, the key persists indefinitely, blocking all future broadcasts

Apply this diff:

```diff
     redis_conn = get_redis_connection("default")
     channel_layer = get_channel_layer()
+
+    if channel_layer is None:
+        logger.error("Channel layer not configured")
+        return

     # Check if there are any dirty polls
     if not redis_conn.exists("dirty_polls"):
         return

     # 1. Atomic Snapshot
     # Rename the key so we have a static list to process,
     # while new votes start filling a fresh 'dirty_polls' set.
     try:
         redis_conn.rename("dirty_polls", "dirty_polls_processing")
-    except Exception:
+        # Set TTL to prevent indefinite blocking if task crashes
+        redis_conn.expire("dirty_polls_processing", 300)  # 5 minutes
+    except redis.exceptions.ResponseError:
         # Key might have disappeared or race condition; skip this tick.
+        logger.debug("dirty_polls key not found during rename, skipping")
         return
```

Note: Narrowed exception to `ResponseError` (key doesn't exist, raised from `redis.exceptions`) and added TTL for resilience.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- README.md (2 hunks)
- core/tasks.py (2 hunks)
- polls/signals.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
core/tasks.py (1)
polls/models.py (1)
PollOption (71-88)
polls/signals.py (1)
polls/models.py (1)
PollOption (71-88)
🪛 markdownlint-cli2 (0.18.1)
README.md
112-112: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
🪛 Ruff (0.14.8)
core/tasks.py
89-89: Do not catch blind exception: Exception
(BLE001)
113-113: Redundant exception object included in logging.exception call
(TRY401)
🔇 Additional comments (6)
polls/signals.py (2)
1-11: LGTM! The imports are correctly structured to support Redis-backed dirty poll marking with proper exception handling.
21-26: LGTM! The atomic increment using an `F()` expression correctly prevents race conditions during concurrent vote updates.

README.md (3)
36-42: LGTM! Technology stack accurately reflects the new integrations (Chapa payment gateway and Celery Beat scheduling) and deployment platform.
87-88: LGTM! The Celery Beat scheduler command is correctly configured with the DatabaseScheduler backend.
130-130: LGTM! Explicit WebSocket endpoint documentation improves clarity for API consumers.
core/tasks.py (1)
4-13: LGTM! Imports correctly support the Redis-backed dirty poll batching and WebSocket broadcast functionality.
Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
Actionable comments posted: 0
🧹 Nitpick comments (1)
polls/signals.py (1)
30-30: Consider optimizing poll access in high-throughput scenarios.

Accessing `instance.poll.poll_id` triggers a database query if the `poll` relation isn't prefetched. In typical voting patterns this overhead is negligible, but if you later encounter performance bottlenecks in high-volume scenarios, you could optimize vote creation call sites with:

```python
vote = Vote.objects.select_related('poll').get(...)
# or when creating:
Vote.objects.create(option=option, poll=poll)  # pass poll explicitly if available
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- polls/signals.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
polls/signals.py (1)
polls/models.py (1)
PollOption (71-88)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run Tests & Lint
🔇 Additional comments (3)
polls/signals.py (3)
1-11: LGTM! Clean imports and logger setup.

The imports are well-organized and the logger initialization follows Django best practices. The `RedisError` import from `redis.exceptions` is available in redis==7.0.1 as specified in your dependencies.
14-26: LGTM! Atomic database update is correctly implemented.

The use of `F("vote_count") + 1` ensures the increment happens atomically at the database level, preventing race conditions when multiple votes arrive concurrently. This is the recommended Django pattern for counter fields.
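The atomicity property can be seen at the SQL level that an `F()` update compiles down to: read and write happen in one statement, so concurrent voters cannot clobber each other's count. A small sqlite3 sketch (the table name and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE poll_option (id INTEGER PRIMARY KEY, vote_count INTEGER)")
conn.execute("INSERT INTO poll_option VALUES (1, 0)")

# Roughly what PollOption.objects.filter(id=1).update(
#     vote_count=F("vote_count") + 1) executes: the database, not Python,
# performs the read-modify-write, so no increment is ever lost.
for _ in range(100):
    conn.execute("UPDATE poll_option SET vote_count = vote_count + 1 WHERE id = 1")

count = conn.execute("SELECT vote_count FROM poll_option WHERE id = 1").fetchone()[0]
print(count)  # 100
```

By contrast, reading `vote_count` into Python, incrementing, and saving back would let two concurrent voters both read the same value and write the same incremented result, losing one vote.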
28-36: Previous review concerns successfully addressed.

The exception handling and logging now follow best practices:

- ✓ Catches specific `RedisError` instead of broad `Exception`
- ✓ Uses `logger.exception()` to preserve stack trace
- ✓ Includes structured context via `extra={"poll_id": ...}`

The graceful degradation ensures votes persist in the database even if Redis is unavailable, with full observability of any Redis failures.
Summary by CodeRabbit
New Features
Documentation
Performance
Bug Fixes