
fix: mark poll as dirty. #25

Merged
aniebietafia merged 4 commits into main from documentation
Dec 14, 2025
Conversation


@aniebietafia aniebietafia commented Dec 14, 2025

Summary by CodeRabbit

  • New Features

    • Payment gateway (Chapa) added
    • Background scheduling added for real-time updates
  • Documentation

    • API docs restructured; unified endpoint table and explicit WebSocket URL added
    • Deployment section updated to Heroku only
  • Performance

    • Real-time poll update delivery moved to background processing for efficiency
    • Poll listing optimized for faster option retrieval
  • Bug Fixes

    • Vote persistence preserved even if background enqueueing fails
    • Poll creation now correctly associates the creator


Signed-off-by: aniebietafia <aniebietafia87@gmail.com>

coderabbitai bot commented Dec 14, 2025

Walkthrough

Switches real-time poll broadcasts from immediate Channels sends to a Redis "dirty_polls" set plus a Celery Beat task (every 2s) that snapshots dirty polls, queries fresh option data, and broadcasts aggregated updates; README updated (Chapa, Celery Beat, ws:// entry); small view/query/import tweaks.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation**<br>`README.md` | Removed AWS from deployments; added Payment Gateway (Chapa) and Task Scheduling (Django Celery Beat); restructured API docs and real-time endpoint table; added explicit `ws://` WebSocket entry. |
| **Celery & Background Tasks**<br>`core/settings.py`, `core/tasks.py` | Added Celery Beat schedule `broadcast-poll-updates` (every 2s) and new `broadcast_poll_updates()` task which snapshots Redis `dirty_polls`, fetches `PollOption` data per poll, and broadcasts aggregated updates to Channels groups with per-poll error handling and cleanup. |
| **Real-Time Broadcast Refactor**<br>`polls/signals.py` | Replaced immediate Channels `group_send` with adding `poll_id` to the Redis set `dirty_polls` after the atomic DB increment of `vote_count`; logs Redis errors without failing the vote flow. |
| **Views & Query Optimization**<br>`polls/views.py` | `perform_create()` now calls `serializer.save(creator=self.request.user)` (was `created_by`); `get_queryset()` adds `.prefetch_related("options")`; removed unused imports (`PollOption`, `Vote`, `get_object_or_404`, `get_client_ip`, `get_country_from_ip`, DRF `serializers` module). |
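The walkthrough above can be sketched end to end without Django or Redis. In this minimal, framework-free sketch, a plain Python set stands in for the Redis `dirty_polls` set and a returned list stands in for Channels `group_send` calls; all names are illustrative, not the project's actual identifiers:

```python
# Framework-free sketch of the "dirty set" batching pattern described above.
dirty_polls = set()   # stands in for the Redis "dirty_polls" set (SADD)
vote_counts = {}      # stands in for PollOption.vote_count rows

def record_vote(poll_id, option_id):
    """Persist the vote first, then mark the poll dirty (never the reverse)."""
    key = (poll_id, option_id)
    vote_counts[key] = vote_counts.get(key, 0) + 1  # atomic F() update in Django
    try:
        dirty_polls.add(poll_id)                    # Redis SADD in the real code
    except Exception:
        pass  # a marking failure must not fail the vote

def broadcast_poll_updates():
    """Snapshot the dirty set, broadcast once per dirty poll, then clear it."""
    processing = dirty_polls.copy()  # Redis RENAME snapshot in the real code
    dirty_polls.clear()
    broadcasts = []
    for poll_id in processing:
        results = [
            {"id": opt, "vote_count": n}
            for (p, opt), n in sorted(vote_counts.items())
            if p == poll_id
        ]
        broadcasts.append((f"poll_{poll_id}", results))  # group_send stand-in
    return broadcasts
```

The key property the PR relies on is visible here: many votes for the same poll within one tick collapse into a single broadcast, because the set deduplicates poll IDs.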

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant API as API Endpoint
    participant DB as Database
    participant Redis as Redis
    participant CeleryBeat as Celery Beat (2s)
    participant Task as broadcast_poll_updates Task
    participant Channels as Django Channels
    participant WS as WebSocket Clients

    User->>API: POST /vote/ (submit vote)
    API->>DB: Atomic increment PollOption.vote_count
    API->>Redis: SADD "dirty_polls" poll_id
    API-->>User: 200 OK

    Note over CeleryBeat: Every 2 seconds
    CeleryBeat->>Task: Trigger broadcast_poll_updates()

    Task->>Redis: RENAME "dirty_polls" -> "dirty_polls_processing"
    Task->>Redis: SMEMBERS "dirty_polls_processing"
    loop for each poll_id
        Task->>DB: Query fresh PollOption data for poll_id
        Task->>Channels: group_send to "poll_{poll_id}"
        Channels->>WS: Push aggregated poll update
    end
    Task->>Redis: DEL "dirty_polls_processing"
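The every-2-seconds trigger in the diagram corresponds to a Celery Beat schedule entry. A hedged sketch follows (the diff uses a bare float `2.0`; `timedelta` is the more explicit form suggested later in this review, and the standard `CELERY_BEAT_SCHEDULE` setting name is assumed):

```python
# Sketch of the Celery Beat entry that drives the loop above.
from datetime import timedelta

CELERY_BEAT_SCHEDULE = {
    "broadcast-poll-updates": {
        "task": "core.tasks.broadcast_poll_updates",
        "schedule": timedelta(seconds=2),  # every 2 seconds
    },
}
```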

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Focus areas:
    • core/tasks.py: correctness of Redis RENAME snapshot, Channels usage via async/sync boundaries, error handling and connection lifecycle.
    • polls/signals.py: ensure vote persistence remains atomic and Redis failures are handled appropriately.
    • Integration between group naming and WebSocket consumers; timing/beat configuration validation.

Possibly related PRs

Poem

🐰 I hopped through Redis, left polls marked "dirty" and bright,
Celery tapped the beat, every two seconds a light,
I fetched fresh options, bundled them with care,
Sent updates through channels to clients waiting there,
A tidy little broadcast — swift as a rabbit's flight. 🥕✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'fix: mark poll as dirty' accurately describes the core architectural change: implementing a Redis-backed dirty-poll marking system to replace direct WebSocket broadcasts. This reflects the main change across multiple files (signals, tasks, settings). |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which meets the required 80.00% threshold. |


Signed-off-by: aniebietafia <aniebietafia87@gmail.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
README.md (1)

6-23: Fix the misaligned “Poll Categorization” table row (leading space breaks table formatting).

- | Poll Categorization         | Supports categorization of polls for better organization and retrieval.            |
+| Poll Categorization         | Supports categorization of polls for better organization and retrieval.            |
🧹 Nitpick comments (4)
polls/signals.py (1)

19-21: Prefer instance.option_id / instance.poll_id to avoid FK deref + extra attribute work.
Tiny cleanup, but it also avoids surprises if related objects are deferred.

-        PollOption.objects.filter(id=instance.option.id).update(
+        PollOption.objects.filter(id=instance.option_id).update(
             vote_count=F("vote_count") + 1
         )
@@
-            con.sadd("dirty_polls", str(instance.poll.poll_id))
+            con.sadd("dirty_polls", str(instance.poll_id))

Also applies to: 25-25

core/tasks.py (1)

104-111: Consider stable ordering in options_data (and include whatever the frontend expects).
If the client expects options in a consistent order, add an explicit .order_by(...) (or include index).

-            options_data = list(
-                PollOption.objects.filter(poll_id=poll_id).values("id", "vote_count")
-            )
+            options_data = list(
+                PollOption.objects.filter(poll_id=poll_id)
+                .order_by("index")
+                .values("id", "index", "vote_count")
+            )
core/settings.py (1)

386-389: 2s beat cadence: ensure you won’t overlap tasks / overload workers; consider timedelta(seconds=2) for clarity.
If processing sometimes takes >2s, multiple tasks can queue up; make sure this is acceptable operationally (or add locking / drop-if-running).

     "broadcast-poll-updates": {
         "task": "core.tasks.broadcast_poll_updates",
-        "schedule": 2.0,  # every 2 seconds
+        "schedule": timedelta(seconds=2),
     },
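The overlap concern above is commonly handled with a short-TTL lock (e.g. Redis `SET NX EX`) so a tick is simply dropped while a previous run is still in flight. A minimal sketch of that drop-if-running guard, using an in-memory dict in place of Redis; all names here are hypothetical:

```python
import time

locks = {}  # stands in for Redis; SET NX EX in the real thing

def acquire_lock(name, ttl_seconds, now=None):
    """Return True if the lock was free (or expired) and is now held."""
    now = time.monotonic() if now is None else now
    expires_at = locks.get(name)
    if expires_at is not None and expires_at > now:
        return False  # a previous run still holds the lock: drop this tick
    locks[name] = now + ttl_seconds
    return True
```

A beat task would call `acquire_lock("broadcast-poll-updates", ttl)` at the top and return early on `False`; the TTL guarantees a crashed run cannot block the schedule forever.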
polls/views.py (1)

57-61: Nice perf win with .prefetch_related("options"); consider applying similar prefetch/select_related in other branches.
Right now anonymous/vote branches return less-optimized querysets than the authenticated non-vote path.

Also applies to: 72-79

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3df78e3 and 9bf296d.

⛔ Files ignored due to path filters (1)
  • core/__pycache__/settings.cpython-313.pyc is excluded by !**/*.pyc
📒 Files selected for processing (5)
  • README.md (2 hunks)
  • core/settings.py (1 hunks)
  • core/tasks.py (2 hunks)
  • polls/signals.py (1 hunks)
  • polls/views.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
polls/signals.py (1)
polls/models.py (2)
  • Vote (91-107)
  • PollOption (71-88)
core/tasks.py (1)
polls/models.py (1)
  • PollOption (71-88)
polls/views.py (1)
polls/models.py (1)
  • Poll (26-68)
🪛 markdownlint-cli2 (0.18.1)
README.md

112-112: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)


140-140: Spaces inside code span elements

(MD038, no-space-in-code)

🪛 Ruff (0.14.8)
polls/signals.py

10-10: Unused function argument: sender

(ARG001)


10-10: Unused function argument: kwargs

(ARG001)


26-26: Do not catch blind exception: Exception

(BLE001)

core/tasks.py

89-89: Do not catch blind exception: Exception

(BLE001)


112-112: Do not catch blind exception: Exception

(BLE001)


113-113: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🔇 Additional comments (1)
polls/views.py (1)

81-87: The creator field is correctly implemented end-to-end. The Poll model, serializer, views, and permissions all use creator consistently. No runtime errors will occur. The created_by references found in the codebase belong to the PollCategory model, which is separate from Poll.

Signed-off-by: aniebietafia <aniebietafia87@gmail.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
README.md (1)

111-112: Add blank line before the API table for proper markdown formatting.

Markdown best practices require blank lines surrounding tables. This ensures consistent rendering across different markdown parsers.

Apply this diff:

 ## API Documentation
+
 | Endpoint                                 | Method | Description                                 |
core/tasks.py (2)

93-116: Wrap processing in try/finally and requeue failed polls to prevent update loss.

Critical issues that were flagged in the previous review remain unaddressed:

  1. No try/finally: If any exception occurs outside the per-poll handler (lines 99-113), cleanup at line 116 never runs, leaving dirty_polls_processing stuck indefinitely
  2. Failed polls are dropped: When broadcasting fails (line 112-113), that poll_id is lost when the processing key is deleted, so clients never see updates
  3. Redundant exception object: logger.exception() automatically logs the exception, don't include {e} in the message

Apply this diff:

     # 2. Get all unique Poll IDs that need updates
     dirty_poll_ids = redis_conn.smembers("dirty_polls_processing")
 
     logger.info(f"Broadcasting updates for {len(dirty_poll_ids)} polls")
+    
+    failed_poll_ids = []
 
-    for poll_id_bytes in dirty_poll_ids:
-        try:
-            poll_id = poll_id_bytes.decode("utf-8")
-            room_group_name = f"poll_{poll_id}"
-
-            # 3. Fetch Fresh Data (Once per batch, not per vote)
-            options_data = list(
-                PollOption.objects.filter(poll_id=poll_id).values("id", "vote_count")
-            )
-
-            # 4. Broadcast to WebSocket Group
-            async_to_sync(channel_layer.group_send)(
-                room_group_name, {"type": "poll_update", "results": options_data}
-            )
-        except Exception as e:
-            logger.exception(f"Error broadcasting poll {poll_id}: {e}")
-
-    # 5. Cleanup the processing key
-    redis_conn.delete("dirty_polls_processing")
+    try:
+        for poll_id_bytes in dirty_poll_ids:
+            try:
+                poll_id = poll_id_bytes.decode("utf-8")
+                room_group_name = f"poll_{poll_id}"
+
+                # 3. Fetch Fresh Data (Once per batch, not per vote)
+                options_data = list(
+                    PollOption.objects.filter(poll_id=poll_id).values("id", "vote_count")
+                )
+
+                # 4. Broadcast to WebSocket Group
+                async_to_sync(channel_layer.group_send)(
+                    room_group_name, {"type": "poll_update", "results": options_data}
+                )
+            except Exception:
+                logger.exception(f"Error broadcasting poll {poll_id}")
+                failed_poll_ids.append(poll_id_bytes)
+    finally:
+        # Requeue failed polls for next tick
+        if failed_poll_ids:
+            redis_conn.sadd("dirty_polls", *failed_poll_ids)
+            logger.warning(f"Requeued {len(failed_poll_ids)} failed polls")
+        
+        # 5. Cleanup the processing key
+        redis_conn.delete("dirty_polls_processing")

This ensures cleanup always runs and failed polls are retried on the next tick, preventing permanent update loss.


77-91: Add channel_layer None-check, narrow exception handling, and set TTL on processing key.

Three issues:

  1. No None-check for channel_layer - misconfiguration will cause crashes later
  2. Generic Exception catch masks different failure modes (connection errors vs. race conditions)
  3. No TTL on dirty_polls_processing - if the task crashes before line 116, the key persists indefinitely, blocking all future broadcasts

Apply this diff:

     redis_conn = get_redis_connection("default")
     channel_layer = get_channel_layer()
+    
+    if channel_layer is None:
+        logger.error("Channel layer not configured")
+        return
 
     # Check if there are any dirty polls
     if not redis_conn.exists("dirty_polls"):
         return
 
     # 1. Atomic Snapshot
     # Rename the key so we have a static list to process,
     # while new votes start filling a fresh 'dirty_polls' set.
     try:
         redis_conn.rename("dirty_polls", "dirty_polls_processing")
-    except Exception:
+        # Set TTL to prevent indefinite blocking if task crashes
+        redis_conn.expire("dirty_polls_processing", 300)  # 5 minutes
+    except ResponseError:  # requires: from redis.exceptions import ResponseError
         # Key might have disappeared or race condition; skip this tick.
+        logger.debug("dirty_polls key not found during rename, skipping")
         return

Note: Narrowed exception to ResponseError (key doesn't exist) and added TTL for resilience.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9bf296d and 2d0a04e.

📒 Files selected for processing (3)
  • README.md (2 hunks)
  • core/tasks.py (2 hunks)
  • polls/signals.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
core/tasks.py (1)
polls/models.py (1)
  • PollOption (71-88)
polls/signals.py (1)
polls/models.py (1)
  • PollOption (71-88)
🪛 markdownlint-cli2 (0.18.1)
README.md

112-112: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)

🪛 Ruff (0.14.8)
core/tasks.py

89-89: Do not catch blind exception: Exception

(BLE001)


113-113: Redundant exception object included in logging.exception call

(TRY401)

🔇 Additional comments (6)
polls/signals.py (2)

1-11: LGTM!

The imports are correctly structured to support Redis-backed dirty poll marking with proper exception handling.


21-26: LGTM!

The atomic increment using F() expression correctly prevents race conditions during concurrent vote updates.

README.md (3)

36-42: LGTM!

Technology stack accurately reflects the new integrations (Chapa payment gateway and Celery Beat scheduling) and deployment platform.


87-88: LGTM!

The Celery Beat scheduler command is correctly configured with the DatabaseScheduler backend.


130-130: LGTM!

Explicit WebSocket endpoint documentation improves clarity for API consumers.

core/tasks.py (1)

4-13: LGTM!

Imports correctly support the Redis-backed dirty poll batching and WebSocket broadcast functionality.

Signed-off-by: aniebietafia <aniebietafia87@gmail.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
polls/signals.py (1)

30-30: Consider optimizing poll access in high-throughput scenarios.

Accessing instance.poll.poll_id triggers a database query if the poll relation isn't prefetched. In typical voting patterns this overhead is negligible, but if you later encounter performance bottlenecks in high-volume scenarios, you could optimize vote creation call sites with:

vote = Vote.objects.select_related('poll').get(...)
# or when creating:
Vote.objects.create(option=option, poll=poll)  # pass poll explicitly if available
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2d0a04e and e6f4672.

📒 Files selected for processing (1)
  • polls/signals.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
polls/signals.py (1)
polls/models.py (1)
  • PollOption (71-88)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run Tests & Lint
🔇 Additional comments (3)
polls/signals.py (3)

1-11: LGTM! Clean imports and logger setup.

The imports are well-organized and the logger initialization follows Django best practices. The RedisError import from redis.exceptions is available in redis==7.0.1 as specified in your dependencies.


14-26: LGTM! Atomic database update is correctly implemented.

The use of F("vote_count") + 1 ensures the increment happens atomically at the database level, preventing race conditions when multiple votes arrive concurrently. This is the recommended Django pattern for counter fields.


28-36: Previous review concerns successfully addressed.

The exception handling and logging now follow best practices:

  • ✓ Catches specific RedisError instead of broad Exception
  • ✓ Uses logger.exception() to preserve stack trace
  • ✓ Includes structured context via extra={"poll_id": ...}

The graceful degradation ensures votes persist in the database even if Redis is unavailable, with full observability of any Redis failures.
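The practices listed above can be demonstrated with the standard library alone. A sketch of the best-effort marking helper, with a stand-in `RedisError` class (the real code catches `redis.exceptions.RedisError` on a `django-redis` connection; `mark_poll_dirty` is an illustrative name, not the project's):

```python
import logging

logger = logging.getLogger(__name__)

class RedisError(Exception):
    """Stand-in for redis.exceptions.RedisError."""

def mark_poll_dirty(conn, poll_id):
    """Best-effort marking: log the failure with context, never raise,
    so the already-committed vote is unaffected by a Redis outage."""
    try:
        conn.sadd("dirty_polls", str(poll_id))
        return True
    except RedisError:
        logger.exception("Failed to mark poll dirty", extra={"poll_id": poll_id})
        return False
```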

@aniebietafia aniebietafia merged commit 88fa1d0 into main Dec 14, 2025
2 checks passed
