"BREAKING CHANGE: invoice is generated and sent via email after successful payment."#23
"BREAKING CHANGE: invoice is generated and sent via email after successful payment."#23aniebietafia merged 9 commits intomainfrom
Conversation
…sful payment. Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
Walkthrough

Adds Docker dev orchestration and system deps, payment invoice PDF generation and Cloudinary upload, an async invoice/email Celery task, multiple DB schema migrations and added DB indexes, expanded docs and email templates, a DB-wait management command, CI/system dependency changes, and moves weekly stats logs into a logs/ directory.
Sequence Diagram(s)

sequenceDiagram
autonumber
actor Client
participant PaymentsAPI as Payments View
participant DB as Database
participant Celery as Celery Worker
participant InvoiceSvc as InvoiceService
participant Cloudinary
participant EmailTask as Email Sender
Client->>PaymentsAPI: GET /verify?tx_ref=...
PaymentsAPI->>DB: fetch Transaction by tx_ref
PaymentsAPI->>DB: verify and mark user premium
PaymentsAPI->>Celery: process_successful_payment_actions.delay(txn.id)
PaymentsAPI-->>Client: render payment_status page
Celery->>DB: load Transaction and User
Celery->>InvoiceSvc: generate_pdf(context)
InvoiceSvc->>InvoiceSvc: render template -> convert to PDF bytes
InvoiceSvc-->>Celery: return PDF bytes
Celery->>InvoiceSvc: upload_to_cloudinary(pdf_bytes, filename)
InvoiceSvc->>Cloudinary: upload raw file
Cloudinary-->>InvoiceSvc: secure_url
InvoiceSvc-->>Celery: invoice_url
Celery->>DB: update Transaction.invoice_url
Celery->>EmailTask: send_email_task(..., attachments=[pdf])
EmailTask->>Recipient: deliver email with PDF
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 11
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
users/serializers.py (1)
5-5: Remove the unused `OrganizationSerializer` import from line 5.

The search confirms that `OrganizationSerializer` is imported but never used in the file. The code uses only the locally-defined `UserOrganizationSerializer` class (lines 41 and 46). The import should be removed.
🧹 Nitpick comments (18)
users/migrations/0004_alter_user_is_premium.py (1)
12-17: Consider whether indexing a boolean field provides value.

Boolean fields have very low cardinality (only 2 values). An index on `is_premium` is only beneficial if the distribution is heavily skewed (e.g., <5% of users are premium) and you frequently query for the minority value. Otherwise, the index maintenance overhead may not be worth it.

If the majority of queries filter for `is_premium=True` and premium users are a small fraction, this index is appropriate. Otherwise, consider removing it.

core/management/commands/wait_for_db.py (1)
1-21: DB wait command is solid; consider silencing the unused-arg lint.

The retry loop around `connections["default"].cursor()` with `OperationalError` handling is a straightforward and appropriate readiness check for startup.

Minor polish: `handle(self, *args, **options)` doesn’t use `args` or `options`, which triggers ARG002. You could rename them to `_args`/`_options` or drop them entirely if you don’t plan to support command options.

core/settings.py (1)
21-26: Filter out empty hosts when extending `ALLOWED_HOSTS`.

`os.environ.get("ALLOWED_HOSTS", "").split(",")` will produce `[""]` when the env var is unset or has trailing commas, adding empty strings to `ALLOWED_HOSTS`. Django tolerates this but it’s noisy and can confuse diagnostics.

Consider filtering blanks, and you can adopt iterable unpacking at the same time if you like:

-ALLOWED_HOSTS = [
-    "localhost",
-    "127.0.0.1",
-    ".herokuapp.com",
-    "6826d4eb58e1.ngrok-free.app",
-] + os.environ.get("ALLOWED_HOSTS", "").split(",")
+base_hosts = [
+    "localhost",
+    "127.0.0.1",
+    ".herokuapp.com",
+    "6826d4eb58e1.ngrok-free.app",
+]
+env_hosts = [h for h in os.environ.get("ALLOWED_HOSTS", "").split(",") if h]
+ALLOWED_HOSTS = [*base_hosts, *env_hosts]

docker-compose.yml (1)
1-76: Compose orchestration looks good; consider adding a DB wait for the Celery services.

The multi-service setup (Postgres healthcheck + `wait_for_db` + migrations before Daphne) is well thought out.

The Celery worker and beat currently just `depends_on: web` and connect directly to DB/Redis. For smoother startups (especially on cold boots), you might want to:

- Either add a small wait script similar to `wait_for_db` in their commands, e.g. `command: sh -c "python manage.py wait_for_db && celery -A core worker -l info"`
- Or add explicit `depends_on` entries for `db` and `redis` with health/started conditions, if you’d rather rely on Compose.

Not mandatory for dev, but it reduces transient connection errors on bring-up.
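The retry pattern behind such a wait script can be sketched framework-free. This is a minimal illustration under assumed names: `OperationalError` here is a stand-in for `django.db.utils.OperationalError`, and `flaky_connect` simulates a database that only becomes reachable after a couple of attempts.

```python
# Framework-free sketch of the wait-for-db retry loop: keep calling a
# connection factory until it stops raising, sleeping between attempts.
import time


class OperationalError(Exception):
    """Stand-in for the DB driver's 'not ready yet' error."""


def wait_for_db(connect, retries=30, delay=0.01):
    """Call `connect` until it succeeds; return the attempt count."""
    for attempt in range(1, retries + 1):
        try:
            connect()
            return attempt
        except OperationalError:
            time.sleep(delay)
    raise OperationalError("database never became available")


# Simulated database that accepts connections on the third attempt.
state = {"calls": 0}

def flaky_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise OperationalError("connection refused")

print(wait_for_db(flaky_connect))  # → 3
```

The same loop works whether it is wrapped in a management command or inlined in a container entrypoint; the only Django-specific part is which exception counts as "not ready yet".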
README.md (1)
7-23: Clean up table formatting and inline code spacing to satisfy markdownlint and render consistently.

A few small markdown nits picked up by markdownlint:

- Tables should be surrounded by blank lines (MD058) for reliable rendering.
- Some rows have a leading space before the `|`, which can break table alignment:
  - Line 20 (`| Poll Categorization ...`)
  - Line 125 (`| /api/v1/auth/verify-email/ ...`)
  - Line 135 (`| Payment Integration ...`)
- There are extra spaces inside an inline code span for the payment details endpoint (MD038): the backticked path includes trailing spaces: `/api/v1/payments/{payment_id}/ `.

Suggested adjustments (schematic diff):

-## Key Features
-| Feature | Description |
+## Key Features
+
+| Feature | Description |
@@
-| Payment Invoice Generation | Automatically generates and sends payment invoices upon successful transactions. |
- | Poll Categorization | Supports categorization of polls for better organization and retrieval. |
+| Payment Invoice Generation | Automatically generates and sends payment invoices upon successful transactions. |
+| Poll Categorization | Supports categorization of polls for better organization and retrieval. |
@@
-## Technology Stack
-| Component | Technology |
+## Technology Stack
+
+| Component | Technology |
@@
-## API Documentation
-| Endpoint | Method | Description |
+## API Documentation
+
+| Endpoint | Method | Description |
@@
- | `/api/v1/auth/verify-email/` | GET | Verify user email |
+| `/api/v1/auth/verify-email/` | GET | Verify user email |
@@
- | Payment Integration | | |
+| Payment Integration | | |
@@
-| `/api/v1/payments/{payment_id}/ ` | GET | Retrieve Payment Details |
+| `/api/v1/payments/{payment_id}/` | GET | Retrieve Payment Details |

This should address the lint warnings and ensure the tables render cleanly.
Also applies to: 26-37, 110-139
polls/migrations/0002_alter_poll_poll_id_alter_pollcategory_category_id.py (1)
13-36: UUID PK migration looks consistent; just double-check the existing schema and redundant `db_index`.

This migration makes `poll_id` and `category_id` UUID primary keys with `default=uuid.uuid4`, which matches the pattern used elsewhere in the project.

Two minor points to be aware of:

- For primary keys, most databases automatically create an index, so `db_index=True` is redundant. It’s not harmful, just extra noise in the migration.
- If these fields were not already UUID PKs in `0001_initial`, altering the primary key type on existing tables with data can be a heavy/lock-prone operation. It’s worth confirming that the current production schema already uses UUIDs (and this is mainly adding `db_index=True`), or planning the rollout carefully if a type change is involved.

No changes required if this matches your existing models and deploy plan; just sanity-check before running in a live environment.
templates/email/payment_invoice.html (1)
1-130: Invoice template is clear and usable; consider email-client compatibility and flexibility.

The structure and styling are clean, and the placeholders (`user.*`, `transaction.*`, `company_*`) line up well with an invoice/PDF use case.

Two optional considerations:

- If this template is also sent as an HTML email (not just rendered to PDF), some clients are picky about `<style>` blocks; inlining key styles on elements can improve rendering consistency.
- The hard-coded description "Agora Polls Premium Upgrade" is fine for the current premium-upgrade flow; if you later reuse this for other products, you may want to make the description dynamic (e.g., `{{ line_item_description }}`) or drive it from the transaction.

payments/models.py (2)
13-15: Redundant `db_index=True` on primary key.

Primary keys are automatically indexed by the database. The `db_index=True` argument on `id` is unnecessary and can be removed.

 id = models.UUIDField(
-    primary_key=True, default=uuid.uuid4, editable=False, db_index=True
+    primary_key=True, default=uuid.uuid4, editable=False
 )

29-34: Reconsider indexing `invoice_url`.

This field stores a URL for display purposes and is unlikely to be used as a query filter. Adding an index on a nullable, rarely-queried field adds write overhead without query benefits. Consider removing `db_index=True` unless you have a specific use case for querying by invoice URL.

organizations/migrations/0005_alter_organization_org_id_and_more.py (1)
14-51: Redundant `db_index=True` on primary key fields.

The `db_index=True` on `org_id`, `invite_id`, and `member_id` is redundant since primary keys are automatically indexed by the database. Only the `token` field (line 39) benefits from the explicit index alongside its unique constraint.

This doesn't cause functional issues, but adds unnecessary clutter to the migration. Consider regenerating the migration without `db_index` on primary key fields.

polls/models.py (1)
9-11: Redundant `db_index=True` on primary key fields.

Both `category_id` and `poll_id` are primary keys, which are automatically indexed. The `db_index=True` is unnecessary and can be removed for cleaner model definitions.

Also applies to: 27-29
templates/email/payment_status.html (1)
131-136: Consider using Django's `{% url %}` template tag for navigation links.

Hardcoded URLs (`/` and `/payments/initialize`) are fragile and will break if routes change. Using the `{% url %}` tag with named URL patterns improves maintainability.

-<a href="/" class="button">Go to Dashboard</a>
+<a href="{% url 'dashboard' %}" class="button">Go to Dashboard</a>

-<a href="/payments/initialize" class="button">Try Again</a>
+<a href="{% url 'payments:initialize' %}" class="button">Try Again</a>

Adjust the URL names to match your actual URL configuration.
payments/migrations/0002_transaction_invoice_url_alter_transaction_id_and_more.py (1)
24-34: Redundant `db_index=True` on primary key.

As noted in the model review, `db_index=True` on the `id` primary key field is redundant. The migration correctly reflects the model definition but inherits this unnecessary configuration.

organizations/models.py (2)
8-10: Redundant `db_index=True` on primary key fields.

The `org_id`, `member_id`, and `invite_id` fields are primary keys and are automatically indexed. The explicit `db_index=True` is unnecessary.

Also applies to: 37-39, 66-68

16-21: Redundant `db_index=True` on ForeignKey fields.

Django's `ForeignKey` fields have `db_index=True` by default. The explicit specification on the `owner` and `organization` ForeignKey fields is unnecessary unless you specifically want to document this behavior.

Also applies to: 40-42
notifications/tasks.py (1)
76-78: Use a custom exception for PDF generation failures.

Raising a generic `Exception` makes error handling less precise. Consider creating a specific exception class for invoice generation failures.

+class InvoiceGenerationError(Exception):
+    """Raised when invoice PDF generation fails."""
+    pass
+
 # Generate PDF
 pdf_bytes = InvoiceService.generate_pdf(context)
 if not pdf_bytes:
-    raise Exception("Failed to generate PDF bytes")
+    raise InvoiceGenerationError("Failed to generate PDF bytes")

Then update `autoretry_for` if you don't want to retry on this specific failure.

payments/invoice_service.py (1)
45-63: Improve logging for PDF generation and Cloudinary upload.

The exception logging in `generate_pdf` and `upload_to_cloudinary` uses f-strings with `exc_info=True`. Ruff’s RUF010 hint and general logging best practices favor either `%s` placeholders or `logger.exception`, which also avoids accidentally dropping the traceback. You can simplify and improve the logs like this:

@@
-    except Exception as e:
-        logger.error(f"Failed to generate PDF invoice: {str(e)}", exc_info=True)
-        return None
+    except Exception:
+        logger.exception("Failed to generate PDF invoice")
+        return None
@@
-    except Exception as e:
-        logger.error(f"Cloudinary upload failed: {str(e)}", exc_info=True)
-        return None
+    except Exception:
+        logger.exception("Cloudinary upload failed")
+        return None

This keeps the behavior the same while producing clearer, more standard logs with full tracebacks.
Also applies to: 70-81
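As a standalone illustration of why `logger.exception` fits this pattern: it logs at ERROR level and appends the active traceback automatically, so no f-string interpolation of the exception is needed inside the `except` block. The logger name and failure are assumptions for the demo, not the project's actual configuration.

```python
# Demo: logger.exception captures the traceback of the active exception.
import io
import logging

log_stream = io.StringIO()
logging.basicConfig(stream=log_stream, level=logging.ERROR, force=True)
logger = logging.getLogger("payments.invoice_service")

def generate_pdf_stub():
    # Simulated failure inside PDF generation (hypothetical stand-in).
    try:
        raise ValueError("template rendering failed")
    except Exception:
        logger.exception("Failed to generate PDF invoice")
        return None

generate_pdf_stub()
output = log_stream.getvalue()
print("traceback captured:", "ValueError" in output)  # → traceback captured: True
```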
payments/views.py (1)
290-295: Use `logger.exception` and structured logging for async task failures.

Catching all exceptions around `process_successful_payment_actions.delay(txn.id)` is sensible to avoid breaking the main payment flow due to asynchronous issues, but the current log, `logger.error(f"Failed to trigger invoice task for {tx_ref}: {e}")`, loses some structure and requires string interpolation.

You can keep the behavior and improve logging by using placeholders and `logger.exception`:

-# Trigger Invoice Generation and Email Task
-try:
-    process_successful_payment_actions.delay(txn.id)
-except Exception as e:
-    logger.error(f"Failed to trigger invoice task for {tx_ref}: {e}")
+# Trigger Invoice Generation and Email Task; failures should not affect the main flow.
+try:
+    process_successful_payment_actions.delay(txn.id)
+except Exception:
+    logger.exception(
+        "Failed to trigger invoice task for transaction %s", tx_ref
+    )

This matches Ruff’s logging guidance and preserves the decision to swallow async-task errors in the HTTP path.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`core/__pycache__/settings.cpython-313.pyc` is excluded by `!**/*.pyc`
📒 Files selected for processing (26)
- .gitignore (1 hunks)
- Dockerfile (1 hunks)
- README.md (3 hunks)
- core/management/commands/wait_for_db.py (1 hunks)
- core/settings.py (2 hunks)
- core/tasks.py (2 hunks)
- docker-compose.yml (1 hunks)
- heroku.yml (1 hunks)
- notifications/tasks.py (6 hunks)
- organizations/migrations/0005_alter_organization_org_id_and_more.py (1 hunks)
- organizations/models.py (4 hunks)
- payments/invoice_service.py (1 hunks)
- payments/migrations/0002_transaction_invoice_url_alter_transaction_id_and_more.py (1 hunks)
- payments/models.py (1 hunks)
- payments/views.py (5 hunks)
- polls/migrations/0002_alter_poll_poll_id_alter_pollcategory_category_id.py (1 hunks)
- polls/models.py (3 hunks)
- polls/serializers.py (4 hunks)
- polls/signals.py (1 hunks)
- requirements.txt (5 hunks)
- templates/email/payment_invoice.html (1 hunks)
- templates/email/payment_status.html (1 hunks)
- user_statistics_report_log.txt (0 hunks)
- users/migrations/0004_alter_user_is_premium.py (1 hunks)
- users/models.py (2 hunks)
- users/serializers.py (1 hunks)
💤 Files with no reviewable changes (1)
- user_statistics_report_log.txt
🧰 Additional context used
🧬 Code graph analysis (5)
polls/signals.py (2)
- polls/models.py (1): `PollOption` (71-80)
- users/models.py (1): `id` (113-114)

payments/migrations/0002_transaction_invoice_url_alter_transaction_id_and_more.py (3)
- organizations/migrations/0005_alter_organization_org_id_and_more.py (1): `Migration` (7-52)
- polls/migrations/0002_alter_poll_poll_id_alter_pollcategory_category_id.py (1): `Migration` (7-36)
- users/migrations/0004_alter_user_is_premium.py (1): `Migration` (6-18)

notifications/tasks.py (4)
- notifications/email_service.py (1): `send_email` (21-57)
- organizations/models.py (2): `OrganizationMember` (32-55), `Role` (33-35)
- payments/invoice_service.py (3): `InvoiceService` (14-81), `generate_pdf` (39-63), `upload_to_cloudinary` (66-81)
- payments/models.py (1): `Transaction` (6-40)

polls/migrations/0002_alter_poll_poll_id_alter_pollcategory_category_id.py (3)
- organizations/migrations/0005_alter_organization_org_id_and_more.py (1): `Migration` (7-52)
- payments/migrations/0002_transaction_invoice_url_alter_transaction_id_and_more.py (1): `Migration` (7-40)
- users/migrations/0004_alter_user_is_premium.py (1): `Migration` (6-18)

payments/views.py (4)
- payments/models.py (1): `Transaction` (6-40)
- payments/utils.py (1): `ChapaService` (9-108)
- notifications/tasks.py (1): `process_successful_payment_actions` (57-120)
- core/custom_renderer.py (1): `render` (10-41)
🪛 markdownlint-cli2 (0.18.1)
README.md
7-7: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
26-26: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
110-110: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
138-138: Spaces inside code span elements
(MD038, no-space-in-code)
🪛 Ruff (0.14.6)
core/settings.py
21-26: Consider iterable unpacking instead of concatenation
Replace with iterable unpacking
(RUF005)
organizations/migrations/0005_alter_organization_org_id_and_more.py
9-11: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
13-52: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
payments/migrations/0002_transaction_invoice_url_alter_transaction_id_and_more.py
9-11: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
13-40: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
core/management/commands/wait_for_db.py
10-10: Unused method argument: args
(ARG002)
10-10: Unused method argument: options
(ARG002)
notifications/tasks.py
57-57: Unused function argument: self
(ARG001)
78-78: Abstract raise to an inner function
(TRY301)
78-78: Create your own exception
(TRY002)
78-78: Avoid specifying long messages outside the exception class
(TRY003)
117-117: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
119-119: Use explicit conversion flag
Replace with conversion flag
(RUF010)
users/migrations/0004_alter_user_is_premium.py
8-10: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
12-18: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
polls/migrations/0002_alter_poll_poll_id_alter_pollcategory_category_id.py
9-11: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
13-36: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
payments/views.py
8-8: Redefinition of unused extend_schema from line 7
Remove definition: extend_schema
(F811)
293-293: Do not catch blind exception: Exception
(BLE001)
294-294: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
payments/invoice_service.py
20-20: Unused static method argument: rel
(ARG004)
62-62: Use explicit conversion flag
Replace with conversion flag
(RUF010)
80-80: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🔇 Additional comments (20)
core/tasks.py (3)
40-44: LGTM! Good use of pathlib.

The refactor to use `pathlib` for directory and file operations is idiomatic and more maintainable than the previous approach. The `mkdir(exist_ok=True)` cleanly handles the case where the directory already exists.

58-58: LGTM! More specific exception handling.

Narrowing the exception handling from broad `Exception` to specific `(OSError, IOError)` follows best practices and makes error handling more explicit for file I/O operations.
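The `mkdir(exist_ok=True)` pattern praised above can be shown in isolation; it is idempotent, so the logs directory can be ensured on every task run. The paths here are demo stand-ins, not the project's actual layout.

```python
# Illustration of the pathlib pattern: mkdir(exist_ok=True) never fails
# when the directory already exists, so it is safe to call repeatedly.
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())          # throwaway directory for the demo
log_dir = base / "logs"
log_dir.mkdir(exist_ok=True)             # creates the directory
log_dir.mkdir(exist_ok=True)             # second call is a no-op, no error

log_file = log_dir / "user_statistics_report_log.txt"
log_file.write_text("weekly stats: ok\n")
print(log_file.read_text().strip())      # → weekly stats: ok
```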
62-62: Task result is not currently utilized in the codebase.

The function now returns the `stats` dictionary, which will be stored in the Celery result backend (Redis). However, no code currently retrieves or uses this result. The change is benign and non-breaking; Celery scheduled tasks do not require callers to handle results inline. The return value enables future enhancements such as result monitoring or debugging via the result backend.

Dockerfile (1)
45-48: LGTM!

The comment simplification and formatting are clean. The CMD correctly uses shell form to allow `$PORT` environment variable expansion at runtime.

requirements.txt (1)
116-116: Appropriate choice for HTML-to-PDF conversion.

`xhtml2pdf` is a suitable library for generating PDF invoices from HTML templates, aligning with the PR's invoice generation objective.

users/serializers.py (1)
41-46: LGTM!

The `@extend_schema_field` annotation now correctly references `UserOrganizationSerializer`, matching the actual serializer used in `get_organizations`. This ensures accurate OpenAPI documentation.

users/models.py (2)
65-66: Boolean `is_premium` index is reasonable; just confirm it matches query patterns.

Indexing `is_premium` makes sense if you frequently filter on premium users (especially if they’re a small subset). It does add write overhead though, so it’s worth confirming with your common queries/ORM filters that this index will actually be used and beneficial at your expected scale.
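If the skewed-distribution case applies, one option is a partial index that covers only the minority value. A small sketch with raw SQL on SQLite; in Django this would roughly correspond to `models.Index(fields=[...], condition=Q(is_premium=True))`. Table and index names here are hypothetical, not taken from the PR.

```python
# Partial-index sketch: only rows with is_premium = 1 enter the index,
# keeping write overhead low when premium users are a small minority.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, is_premium BOOLEAN)")
conn.execute(
    "CREATE INDEX premium_users_idx ON users (is_premium) WHERE is_premium = 1"
)
conn.executemany(
    "INSERT INTO users (is_premium) VALUES (?)",
    [(1,), (0,), (0,), (0,), (1,)],
)
premium = conn.execute(
    "SELECT COUNT(*) FROM users WHERE is_premium = 1"
).fetchone()[0]
print(premium)  # → 2
```

The trade-off is the same one the comment describes: writes to non-premium rows skip index maintenance entirely, while queries for the minority value can still use the index.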
124-124: Comment tweak for `verification_type` looks good.

The simplified comment still clearly documents expected values ('email', 'phone') with no behavioral impact.

polls/signals.py (1)
18-40: Comment clarifications around atomic updates and WebSockets are helpful.

The updated comments make the intent of the `F()`-based increment and the real-time broadcast flow clearer without touching behavior.

core/settings.py (1)
278-282: Explicit Cloudinary config is good; ensure env vars are present everywhere.

Switching to `cloud_name`/`api_key`/`api_secret` env vars is a clean setup and aligns with more advanced Cloudinary usage (like invoice uploads). Just make sure these three variables are defined in all environments that import settings (local, tests, CI, prod), otherwise any Cloudinary access will fail at runtime.

If you want a stricter fail-fast, you could add a small validation step around these env reads.
polls/models.py (1)
76-77: LGTM!

The comment clarifies the purpose of the denormalized `vote_count` field for read optimization.

templates/email/payment_status.html (1)
119-140: LGTM on template structure and conditional logic.

The three-state conditional (success, explicit failure, loading) is correctly implemented. Using `success == False` properly distinguishes between `False` and `None`/undefined context variables.

payments/migrations/0002_transaction_invoice_url_alter_transaction_id_and_more.py (1)
14-23: LGTM on new field and reference index.

The `invoice_url` field addition and `reference` field index are correctly defined and align with the model changes.

Also applies to: 35-39
organizations/models.py (1)
71-71: LGTM on token indexing.

The `db_index=True` on the `token` field is appropriate since invite tokens are looked up frequently during the invitation acceptance flow.

polls/serializers.py (1)
90-90: LGTM on documentation improvements.

The added/updated docstrings and comments improve code clarity without affecting functionality.
Also applies to: 132-132, 160-161, 201-201
notifications/tasks.py (3)
116-120: Swallowing `Transaction.DoesNotExist` prevents retries.

When the transaction is not found, the error is logged but not re-raised, so the task completes "successfully" without retrying. If this is a race condition (transaction not yet committed), retrying might resolve it. If it's a permanent error (invalid ID), swallowing is correct, but consider logging at a higher severity.

Confirm this is the intended behavior. If the transaction might be created asynchronously, consider:

 except Transaction.DoesNotExist:
-    logger.error(f"Transaction {transaction_id} not found")
+    logger.exception(f"Transaction {transaction_id} not found")
+    raise  # Allow retry in case of race condition
30-31: LGTM on attachments parameter addition.

The `attachments` parameter is properly typed and forwarded to the `send_email` function. Note: ensure the receiving `send_email` function can handle base64-encoded strings if you apply the serialization fix above.

Also applies to: 41-41
134-137: LGTM on admin email query update.

Using the `OrganizationMember.Role.ADMIN` enum and `user__email` for the lookup is cleaner and more explicit than string-based role filtering.

payments/views.py (2)
99-111: OpenAPI parameter for `tx_ref` is well-defined.

The `extend_schema` definition for `verify` correctly documents `tx_ref` as a required query parameter and clarifies the rendered HTML response. This aligns with the new GET-based, browser-oriented verify flow.
218-219: Returning 200 for unknown webhook transactions is reasonable.

The 200 response for unknown transactions (`return Response(status=status.HTTP_200_OK)  # prevent retries`) is a pragmatic choice: it avoids repeated webhook retries from Chapa for references you don't know about while still logging the anomaly.

Given that business logic for valid references is already guarded above, this looks fine.
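The shape of that decision can be sketched framework-free: acknowledge every delivery so the provider stops retrying, while only processing references you recognize. The in-memory store and handler below are hypothetical stand-ins; the real view returns a DRF `Response`.

```python
# Minimal sketch of the "always acknowledge" webhook pattern.
import logging

logger = logging.getLogger("payments.webhook")
transactions = {"tx_abc": {"status": "pending"}}  # hypothetical store

def handle_webhook(tx_ref: str) -> int:
    txn = transactions.get(tx_ref)
    if txn is None:
        # Unknown reference: log the anomaly but still return 200 so the
        # provider does not keep retrying a reference we cannot resolve.
        logger.warning("Webhook for unknown transaction %s", tx_ref)
        return 200
    txn["status"] = "verified"
    return 200

print(handle_webhook("tx_abc"))      # → 200
print(handle_webhook("tx_missing"))  # → 200  (acknowledged, not processed)
```

The design choice is idempotency from the provider's point of view: a 4xx/5xx would re-enqueue the delivery, whereas a 200 plus a log entry leaves investigation to the receiving side.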
.idea
staticfiles/
user_statistics_report_log.txt
Fix the gitignore pattern to match the actual log file location.
The pattern user_statistics_report_log.txt only ignores files at the repository root, but core/tasks.py writes the log to logs/user_statistics_report_log.txt. This mismatch means the actual log file will not be ignored and could be committed to version control.
Apply this diff to correct the pattern:
-user_statistics_report_log.txt
+logs/user_statistics_report_log.txt

Alternatively, if you want to ignore the entire logs directory (recommended for log files):

-user_statistics_report_log.txt
+logs/
🤖 Prompt for AI Agents
In .gitignore around line 220, the entry "user_statistics_report_log.txt" only
ignores that filename at the repo root but core/tasks.py writes to
logs/user_statistics_report_log.txt; update the .gitignore to either add
"logs/user_statistics_report_log.txt" to match the actual path or (recommended)
add "logs/" (or "logs/*") to ignore the entire logs directory so runtime log
files under logs/ are not tracked.
- python manage.py migrate
- python manage.py collectstatic
Restore --noinput flags for automated release commands.
Removing --noinput from migrate and collectstatic in an automated release pipeline is problematic. These commands may prompt for user confirmation (e.g., destructive migrations, overwriting static files), causing the release to hang or fail since there's no interactive terminal.
Apply this diff to restore non-interactive mode:
 release:
   image: web
   command:
-    - python manage.py migrate
-    - python manage.py collectstatic
+    - python manage.py migrate --noinput
+    - python manage.py collectstatic --noinput
🤖 Prompt for AI Agents
In heroku.yml around lines 9 to 10, the release commands were changed to run
"python manage.py migrate" and "python manage.py collectstatic" without the
--noinput flag; restore non-interactive behavior by adding --noinput to both
commands so they run unattended in automated releases (i.e., change them back to
"python manage.py migrate --noinput" and "python manage.py collectstatic
--noinput").
@shared_task(
    bind=True,
    autoretry_for=(Exception,),
    retry_backoff=30,
    max_retries=3,
    retry_kwargs={"max_retries": 5},
)
Conflicting max_retries configuration.
The decorator specifies max_retries=3 on line 54, but retry_kwargs={"max_retries": 5} on line 55 overrides it. This is confusing and the effective value will be 5. Choose one approach for clarity.
 @shared_task(
     bind=True,
     autoretry_for=(Exception,),
     retry_backoff=30,
-    max_retries=3,
-    retry_kwargs={"max_retries": 5},
+    max_retries=5,
 )
🤖 Prompt for AI Agents
In notifications/tasks.py around lines 50 to 56, the Celery task decorator
currently sets conflicting retry limits (max_retries=3 and
retry_kwargs={"max_retries": 5}); pick one approach and make the values
consistent by removing the duplicate: either remove the top-level max_retries
and keep retry_kwargs with the desired integer (e.g., 5), or remove retry_kwargs
and set max_retries to the chosen value (e.g., 3); update the remaining argument
to the intended retry count and run tests/lint to ensure no other code relies on
the previous value.
send_email_task.delay(
    subject="Payment Successful - Your Invoice",
    recipients=[transaction.email],
    template_name="email/payment_status.html",
    context=email_context,
    attachments=[(filename, pdf_bytes, "application/pdf")],
)
Critical: Bytes cannot be serialized by Celery's default JSON serializer.
Passing pdf_bytes (a bytes object) directly to send_email_task.delay() will fail because Celery's default JSON serializer cannot handle bytes. This will raise a TypeError at runtime.
Options to fix:

- Base64-encode the bytes before passing, then decode in `send_email`
- Send the email synchronously within `process_successful_payment_actions` instead of queuing another task
- Store the PDF temporarily and pass a file path/URL instead
+import base64
+
 # 4. Send Email with Attachment
+pdf_base64 = base64.b64encode(pdf_bytes).decode('utf-8')
+
 send_email_task.delay(
     subject="Payment Successful - Your Invoice",
     recipients=[transaction.email],
     template_name="email/payment_status.html",
     context=email_context,
-    attachments=[(filename, pdf_bytes, "application/pdf")],
+    attachments=[(filename, pdf_base64, "application/pdf")],
 )
)Then in send_email or send_email_task, decode the base64 back to bytes before attaching.
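The suggested base64 round-trip can be demonstrated end to end in isolation. The PDF bytes and filename below are stand-ins, and `send_email_task` itself is not involved; the point is that the encoded payload survives JSON serialization while raw bytes would not.

```python
# Round-trip sketch for the Celery JSON-serializer fix: encode bytes to a
# base64 string before .delay(), decode back to bytes inside the task.
import base64
import json

pdf_bytes = b"%PDF-1.4 fake invoice body"  # stand-in for real PDF bytes
filename = "invoice_TX123.pdf"             # hypothetical filename

# Producer side (the payment task): bytes -> str
pdf_base64 = base64.b64encode(pdf_bytes).decode("utf-8")
payload = {"attachments": [(filename, pdf_base64, "application/pdf")]}
json.dumps(payload)  # JSON-serializable; raw bytes here would raise TypeError

# Consumer side (the email task): str -> bytes before attaching
name, data_b64, mimetype = payload["attachments"][0]
restored = base64.b64decode(data_b64)
print(restored == pdf_bytes)  # → True
```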
🤖 Prompt for AI Agents
In notifications/tasks.py around lines 104 to 110, the code passes pdf_bytes (a
bytes object) into send_email_task.delay which will fail with Celery's default
JSON serializer; instead base64-encode the PDF bytes before calling .delay
(convert bytes -> base64 string) and pass that string as the attachment payload,
then update send_email_task / send_email to base64-decode the string back to
bytes before creating the attachment; ensure you use a safe text encoding
(utf-8) for the base64 string and keep the filename and MIME type unchanged.
```python
send_email_task.delay(
    subject="Payment Successful - Your Invoice",
    recipients=[transaction.email],
    template_name="email/payment_status.html",
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Find all email templates to verify available options
fd -t f -e html . templates/email/ 2>/dev/null | sort
```

Repository: Brints/alx-project-nexus
Length of output: 473
🏁 Script executed:
```shell
#!/bin/bash
# Read the context around line 107 in notifications/tasks.py
wc -l notifications/tasks.py
```

Repository: Brints/alx-project-nexus
Length of output: 94
🏁 Script executed:
```shell
#!/bin/bash
# Get context around line 107 to understand the email usage
sed -n '95,120p' notifications/tasks.py
```

Repository: Brints/alx-project-nexus
Length of output: 983
🏁 Script executed:
```shell
#!/bin/bash
# Read both email templates to compare them
echo "=== payment_status.html ===" && cat templates/email/payment_status.html
echo -e "\n=== payment_invoice.html ===" && cat templates/email/payment_invoice.html
```

Repository: Brints/alx-project-nexus
Length of output: 7878
Change email template from payment_status.html to payment_invoice.html.
The verification confirms the review comment. The payment_status.html template is designed as an interactive web page with gradient backgrounds, animations (CSS spinner), and hover effects—not suitable for email clients. The payment_invoice.html template is available and designed specifically for invoice emails with a clean, email-client-friendly layout displaying invoice details, amount, and payment status.
The code context shows this is an invoice delivery email (subject: "Payment Successful - Your Invoice", includes PDF attachment), making payment_invoice.html the correct choice.
🤖 Prompt for AI Agents
In notifications/tasks.py around line 107, the email template is set to
"email/payment_status.html" which is unsuitable for email clients; update the
template_name to "email/payment_invoice.html" so the invoice delivery email uses
the email-client-friendly invoice template that matches the subject and PDF
attachment.
```python
from drf_spectacular.utils import extend_schema
from drf_spectacular.utils import extend_schema, OpenApiParameter

from .models import Transaction
from .utils import ChapaService
from notifications.tasks import process_successful_payment_actions
from .serializers import (
```
Remove duplicate extend_schema import

`extend_schema` is imported twice (lines 7 and 8), triggering Ruff's F811 redefinition warning and failing lint.

You can consolidate to a single import:

```diff
-from drf_spectacular.utils import extend_schema
-from drf_spectacular.utils import extend_schema, OpenApiParameter
+from drf_spectacular.utils import extend_schema, OpenApiParameter
```

The `process_successful_payment_actions` import on Line 12 looks correct and is used below.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
from drf_spectacular.utils import extend_schema, OpenApiParameter

from .models import Transaction
from .utils import ChapaService
from notifications.tasks import process_successful_payment_actions
from .serializers import (
```
🧰 Tools
🪛 Ruff (0.14.6)
8-8: Redefinition of unused extend_schema from line 7
Remove definition: extend_schema
(F811)
🤖 Prompt for AI Agents
In payments/views.py around lines 7 to 13, there is a duplicate import of
extend_schema (imported on both line 7 and 8) causing a lint F811; remove the
duplicate by consolidating into a single import that brings in both
extend_schema and OpenApiParameter (or delete the redundant line), leaving other
imports (Transaction, ChapaService, process_successful_payment_actions,
serializers) unchanged.
```diff
 def verify(self, request):
     """
-    Client-side verification after redirect from Chapa.
+    Handles Chapa redirect callback and displays payment status.
     """
-    serializer = VerifyPaymentSerializer(data=request.data)
-    serializer.is_valid(raise_exception=True)
-    tx_ref = serializer.validated_data["tx_ref"]
+    tx_ref = request.query_params.get("tx_ref")
-    return self._process_payment_verification(tx_ref, request.user)
+
+    if not tx_ref:
+        from django.shortcuts import render
+
+        context = {
+            "success": False,
+            "message": "Missing transaction reference",
+            "is_premium": False,
+        }
+        return render(request, "email/payment_status.html", context)
+
+    # Fetch transaction to get the user
+    try:
+        txn = Transaction.objects.select_related("user").get(reference=tx_ref)
+        user = txn.user
+    except Transaction.DoesNotExist:
+        from django.shortcuts import render
+
+        context = {
+            "success": False,
+            "message": "Transaction not found",
+            "is_premium": False,
+        }
+        return render(request, "email/payment_status.html", context)
+
+    # Process verification with the user from transaction
+    result = self._process_payment_verification(tx_ref, user)
+
+    # Prepare context for template
+    if isinstance(result, dict):
+        context = {
+            "success": result.get("success", False),
+            "message": result.get("message", "Unknown error"),
+            "is_premium": result.get("is_premium", False),
+            "transaction_ref": tx_ref if result.get("success") else None,
+        }
+    else:
+        context = {
+            "success": False,
+            "message": "Payment verification failed",
+            "is_premium": False,
+        }
+
+    from django.shortcuts import render
+
+    return render(request, "email/payment_status.html", context)
```
I'll help you verify the review comment about mixed return types in the _process_payment_verification method. Let me start by examining the implementation.
<function_calls>
cat -n payments/views.py | head -350 | tail -250
</function_calls>
Looking at the file to understand the actual implementation of _process_payment_verification:
<function_calls>
wc -l payments/views.py
</function_calls>
Now let me read the specific sections mentioned in the review:
<function_calls>
sed -n '232,320p' payments/views.py
</function_calls>
Let me also examine the verify method more closely:
<function_calls>
sed -n '118,175p' payments/views.py
</function_calls>
Let me search for all call sites of _process_payment_verification:
<function_calls>
rg -n "_process_payment_verification" payments/views.py
</function_calls>
Let me get the full file to better understand the context:
<function_calls>
cat -n payments/views.py
</function_calls>
Now let me check the exact return statements in _process_payment_verification:
<function_calls>
sed -n '232,320p' payments/views.py | grep -n -A 2 "return"
</function_calls>
Let me get a cleaner view of the _process_payment_verification method:
<function_calls>
```shell
python3 << 'EOF'
with open('payments/views.py', 'r') as f:
    lines = f.readlines()

# Find _process_payment_verification method
for i, line in enumerate(lines):
    if 'def _process_payment_verification' in line:
        print(f"Found at line {i+1}")
        # Print the method (estimate 100 lines should be enough)
        for j in range(i, min(i+100, len(lines))):
            print(f"{j+1:3d}: {lines[j]}", end='')
        break
EOF
```
</function_calls>
Perfect! Now I can see the issue clearly. Let me verify the exact return types by analyzing the code flow:
<function_calls>
```shell
python3 << 'EOF'
import re

with open('payments/views.py', 'r') as f:
    content = f.read()
lines = content.split('\n')

# Find _process_payment_verification method
start_idx = None
for i, line in enumerate(lines):
    if 'def _process_payment_verification' in line:
        start_idx = i
        break

if start_idx:
    # Find all return statements within this method
    method_lines = []
    indent_level = None
    for i in range(start_idx, len(lines)):
        if indent_level is None and lines[i].strip() and not lines[i].startswith(' ' * 4):
            break
        if i == start_idx:
            indent_level = len(lines[i]) - len(lines[i].lstrip())
            method_lines.append((i+1, lines[i]))
        elif lines[i].strip():
            current_indent = len(lines[i]) - len(lines[i].lstrip())
            if current_indent <= indent_level and lines[i].strip() != '':
                if not lines[i].startswith(' ' * (indent_level + 4)):
                    break
            method_lines.append((i+1, lines[i]))
        else:
            method_lines.append((i+1, lines[i]))

    print("Return statements in _process_payment_verification:")
    for line_num, line in method_lines:
        if 'return' in line:
            print(f"{line_num}: {line}")
EOF
```
</function_calls>
Now let me trace the actual behavior by checking what happens with Response returns:
<function_calls>
```shell
python3 << 'EOF'
with open('payments/views.py', 'r') as f:
    lines = f.readlines()

# Check verify method and how it handles the result
print("=== VERIFY METHOD (lines 118-168) ===")
for i in range(117, 168):
    if i < len(lines):
        print(f"{i+1:3d}: {lines[i]}", end='')

print("\n\n=== Key analysis ===")
print("The verify() method does:")
print("result = self._process_payment_verification(tx_ref, user)")
print("if isinstance(result, dict):")
print("    # process dict")
print("else:")
print("    # fallback to generic error")
print("\nIf Response is returned, it will go to else branch and lose specific message info")
EOF
```
</function_calls>
Now let me verify the specific line numbers mentioned in the review for return statements:
<function_calls>
sed -n '240,280p' payments/views.py | cat -n
</function_calls>
<function_calls>
sed -n '232,310p' payments/views.py | cat -n
</function_calls>
Perfect! Now I can verify the exact issue. Let me create a summary of my findings:
<function_calls>
```shell
python3 << 'EOF'
# Analyze the return type issue
print("=== RETURN TYPE ANALYSIS ===\n")
print("In _process_payment_verification method:")
print("\nNon-webhook (from_webhook=False) return statements:")
print("  Line 243: return Response(...) for Transaction.DoesNotExist")
print("  Line 256-258: return Response(...) for txn.status == 'SUCCESS'")
print("  Line 269-271: return Response(...) for not verification_data")
print("\nIn verify() method (which never passes from_webhook):")
print("  Calls: result = self._process_payment_verification(tx_ref, user)")
print("  Note: verify() doesn't pass from_webhook, so it defaults to False")
print("  Then checks: if isinstance(result, dict)")
print("  Since Response objects are returned, isinstance(result, dict) = False")
print("  Falls into else branch with generic 'Payment verification failed'")
print("\nIMPACT:")
print("  - Users see generic error message instead of specific status")
print("  - 'Payment already verified' shown as generic failure")
print("  - 'Transaction not found' shown as generic failure")
print("  - Gateway errors shown as generic failure")
print("\nSolution: Return dicts for non-webhook cases as suggested in review")
EOF
```
</function_calls>
Based on my verification, the review comment is accurate and identifies a real issue. The _process_payment_verification method does indeed return Response objects for several non-webhook cases, which breaks the verify() method's isinstance check and results in generic error messages being shown to users.
Rewritten Review Comment:
Mixed return types from _process_payment_verification lose specific error messages in verify view
verify() assumes _process_payment_verification returns a dict for browser flows:

```python
result = self._process_payment_verification(tx_ref, user)
if isinstance(result, dict):
    context = {... from result ...}
else:
    context = {"success": False, "message": "Payment verification failed", ...}
```

However, _process_payment_verification returns Response(...) for several non-webhook cases (when from_webhook=False, which is the default for verify calls):

- Line 243: Transaction not found → returns Response({"error": ...}, status=HTTP_404_NOT_FOUND)
- Lines 256–258: Payment already verified → returns Response({"message": "Payment already verified", ...}, status=HTTP_200_OK)
- Lines 269–271: Unable to verify with gateway → returns Response({"error": "Unable to verify payment with gateway"}, status=HTTP_503_SERVICE_UNAVAILABLE)

When verify() receives a Response object, isinstance(result, dict) evaluates to False, causing the code to fall into the else branch and always display the generic "Payment verification failed" message. This masks the actual payment status: users cannot tell whether their payment was already verified, the transaction was not found, or the gateway failed.
To fix this, have _process_payment_verification return dicts for all non-webhook flows (preserving Response returns only for webhook calls):

```diff
 except Transaction.DoesNotExist:
     if from_webhook:
         return Response(status=status.HTTP_200_OK)
-    return Response(
-        {"error": "Transaction not found"}, status=status.HTTP_404_NOT_FOUND
-    )
+    return {
+        "success": False,
+        "message": "Transaction not found",
+        "is_premium": False,
+    }

 if txn.status == "SUCCESS":
     message = "Payment already verified"
     if from_webhook:
         return Response(status=status.HTTP_200_OK)
-    return Response(
-        {"message": message, "is_premium": user.is_premium},
-        status=status.HTTP_200_OK,
-    )
+    return {
+        "success": True,
+        "message": message,
+        "is_premium": user.is_premium,
+    }

 if not verification_data:
     if from_webhook:
         return Response(status=status.HTTP_200_OK)
-    return Response(
-        {"error": "Unable to verify payment with gateway"},
-        status=status.HTTP_503_SERVICE_UNAVAILABLE,
-    )
+    return {
+        "success": False,
+        "message": "Unable to verify payment with gateway",
+        "is_premium": user.is_premium,
+    }
```

This ensures verify() always receives a dict from non-webhook calls and properly displays contextual status messages.
🤖 Prompt for AI Agents
In payments/views.py around lines 118-168, verify() expects
_process_payment_verification to return a dict but that helper currently returns
Django REST Framework Response objects for several non-webhook flows, causing
verify() to always hit the generic error branch; update
_process_payment_verification so that for non-webhook calls (from_webhook=False)
it returns a consistent dict with keys like success, message, is_premium, and
transaction_ref, and reserve returning Response only when from_webhook=True (or
alternatively adapt verify() to handle Response by extracting its data/status),
then adjust any callers/tests accordingly.
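A minimal, self-contained sketch of the dict-only contract for browser flows (the function and keys below mirror the review's suggestion but are illustrative, not the repository's exact code):

```python
def process_payment_verification(tx_ref, found, from_webhook=False):
    # Webhook callers only need an acknowledgement; browser callers
    # always get a plain dict the template context can consume directly.
    if not found:
        if from_webhook:
            return {"ack": 200}
        return {"success": False, "message": "Transaction not found", "is_premium": False}
    return {"success": True, "message": "Payment verified", "is_premium": True}

# Browser flow: the specific failure message survives into the template context
result = process_payment_verification("tx_123", found=False)
context = {
    "success": result.get("success", False),
    "message": result.get("message", "Unknown error"),
}
```

Because every non-webhook branch returns a dict, the isinstance check in verify() never falls through to the generic error branch, and the specific message reaches the template.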
| `/api/v1/payments/initialize/` | POST | Initialize Payment |
| `/api/v1/payments/verify/` | POST | Verify Payment |
| `/api/v1/payments/{payment_id}/ ` | GET | Retrieve Payment Details |
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's read the README.md file to see both sections mentioned in the review
wc -l README.md
```

Repository: Brints/alx-project-nexus
Length of output: 81
🏁 Script executed:
```shell
# Search for payment-related views/endpoints
fd -e py -e js -e ts | grep -E '(view|route|endpoint|payment)' | head -20
```

Repository: Brints/alx-project-nexus
Length of output: 506
🏁 Script executed:
```shell
# Read the API table section
sed -n '130,145p' README.md
```

Repository: Brints/alx-project-nexus
Length of output: 1080
🏁 Script executed:
```shell
# Read the example section
sed -n '210,232p' README.md
```

Repository: Brints/alx-project-nexus
Length of output: 621
🏁 Script executed:
```shell
# Check the payment URLs configuration
cat payments/urls.py
```

Repository: Brints/alx-project-nexus
Length of output: 261
🏁 Script executed:
```shell
# Check the payment views implementation
cat payments/views.py
```

Repository: Brints/alx-project-nexus
Length of output: 11124
API table documents /api/v1/payments/verify/ as POST, but implementation uses GET
The review comment is accurate. The endpoint is implemented as GET (confirmed by @action(detail=False, methods=["get"]) in payments/views.py, line ~60), accepting tx_ref as a query parameter. This is correct for a Chapa redirect callback.
Action required: Update line 137 of README.md from POST to GET for the /api/v1/payments/verify/ endpoint. The example at line ~227 is already correct.
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
138-138: Spaces inside code span elements
(MD038, no-space-in-code)
🤖 Prompt for AI Agents
In README.md around lines 136 to 139, update the API table entry for
`/api/v1/payments/verify/` (currently listed as POST) to use GET instead so it
matches the implementation (payments/views.py @action(detail=False,
methods=["get"])) and the example later in the file; modify line 137 to replace
"POST" with "GET".
```text
pyHanko==0.32.0
pyhanko-certvalidator==0.29.0
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Search for pyHanko imports and usage
echo "=== Searching for pyhanko imports ==="
rg -i "from pyhanko|import pyhanko|pyhanko\.|pyHanko" --type=py -C 2
echo ""
echo "=== Searching for pyhanko-certvalidator imports ==="
rg -i "from pyhanko_certvalidator|import pyhanko_certvalidator|certvalidator" --type=py -C 2
echo ""
echo "=== Checking for any reference to these packages in code ==="
rg -i "pyhanko|certvalidator" --type=py -C 1
```

Repository: Brints/alx-project-nexus
Length of output: 220
🏁 Script executed:
#!/bin/bash
# Check for dynamic imports or string-based references
echo "=== Searching for string references to pyhanko ==="
rg -i "pyhanko|hanko" --type=py
echo ""
echo "=== Checking requirements.txt context ==="
cat -n requirements.txt | sed -n '70,85p'
echo ""
echo "=== Searching for PDF or invoice-related imports ==="
rg -i "pdf|invoice" --type=py -l | head -20
echo ""
echo "=== Checking if these packages appear in any config files ==="
find . -type f \( -name "*.cfg" -o -name "*.ini" -o -name "*.toml" -o -name "*.yaml" -o -name "*.yml" \) -exec grep -l -i "pyhanko\|certvalidator" {} \;Repository: Brints/alx-project-nexus
Length of output: 855
🏁 Script executed:
#!/bin/bash
# Check invoice_service.py
echo "=== payments/invoice_service.py imports ==="
head -50 payments/invoice_service.py | grep -E "^import|^from"
echo ""
echo "=== Full invoice_service.py to understand PDF usage ==="
wc -l payments/invoice_service.py
echo ""
echo "=== Check payments/models.py for invoice-related imports ==="
head -50 payments/models.py | grep -E "^import|^from"
echo ""
echo "=== Search for PDF-related operations in payments directory ==="
rg -i "pdf\|pyhanko\|hanko\|sign.*pdf" payments/ -C 2Repository: Brints/alx-project-nexus
Length of output: 634
🏁 Script executed:
#!/bin/bash
# Check if xhtml2pdf is in requirements
echo "=== Checking for xhtml2pdf in requirements ==="
rg "xhtml2pdf" requirements.txt
echo ""
echo "=== Final comprehensive search for any pyHanko usage ==="
rg -i "pyhanko|certvalidator" . --type=py --type=txt --type=md
echo ""
echo "=== Check if these could be indirect dependencies ==="
pip show pyHanko 2>/dev/null | head -20 || echo "Can check requirements.txt instead"
grep -E "^xhtml2pdf|^reportlab|^pillow" requirements.txt
echo ""
echo "=== View the full invoice_service.py to confirm no signing ==="
cat -n payments/invoice_service.pyRepository: Brints/alx-project-nexus
Length of output: 3753
These PDF signing dependencies are completely unused and should be removed.
Verification confirms that pyHanko and pyhanko-certvalidator are not imported or used anywhere in the codebase. The invoice generation feature in payments/invoice_service.py uses xhtml2pdf (with pisa) for PDF generation and uploads to Cloudinary—there is no PDF digital signing functionality. These heavyweight libraries add unnecessary dependency overhead and should be removed from requirements.txt.
🤖 Prompt for AI Agents
In requirements.txt around lines 78-79, remove unused PDF signing dependencies
pyHanko==0.32.0 and pyhanko-certvalidator==0.29.0 because the codebase does not
import or use them; update the file by deleting those two lines and run
dependency checks (e.g., pip-compile or pip-sync / poetry lock) and the test
suite to ensure no breakages, then update any lockfiles or CI dependency caches
accordingly.
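Before dropping a pinned dependency, a quick static scan can supplement the checks above. The sketch below only catches literal top-level imports (not dynamic or string-based ones) and demonstrates on a throwaway directory:

```python
import pathlib
import re
import tempfile

def find_imports_of(package, root):
    # Match top-level "import pkg" / "from pkg import ..." lines in .py files
    pattern = re.compile(
        rf"^\s*(?:from\s+{re.escape(package)}\b|import\s+{re.escape(package)}\b)",
        re.MULTILINE,
    )
    return sorted(
        str(p) for p in pathlib.Path(root).rglob("*.py")
        if pattern.search(p.read_text(errors="ignore"))
    )

# Demo on a temporary tree: one file imports the package, one does not
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "uses.py").write_text("import pyhanko\n")
    pathlib.Path(d, "clean.py").write_text("import os\n")
    hits = find_imports_of("pyhanko", d)
```

An empty result supports removal, but note that distribution names (pyHanko) and import names (pyhanko) can differ, so scan for the import name.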
…sful payment. Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
…sful payment. Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
…sful payment. Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
README.md (1)
137-138: Fix API table entry for `/api/v1/payments/verify/` and remove extra spaces in code span.

Line 137 documents the endpoint as POST, but it is actually implemented as GET (with `tx_ref` as a query parameter). Line 231 in the examples section correctly shows this as GET. Additionally, line 138 contains extra spaces in the code span.

Apply this diff:

```diff
-| `/api/v1/payments/verify/` | POST | Verify Payment |
-| `/api/v1/payments/{payment_id}/ ` | GET | Retrieve Payment Details |
+| `/api/v1/payments/verify/` | GET | Verify Payment |
+| `/api/v1/payments/{payment_id}/` | GET | Retrieve Payment Details |
```

This aligns the table documentation with the actual implementation and the example at line 231.
🧹 Nitpick comments (3)
README.md (1)
6-6: Add blank lines around tables for proper markdown formatting.

Markdown linting rules require blank lines before and after tables for proper rendering and readability.

Apply this diff:

```diff
 ## Key Features
+
 | Feature | Description |
```

Repeat the same fix for the Technology Stack table (line 25) and API Documentation table (line 109).
Also applies to: 25-25, 109-109
test.txt (1)
1-70: Avoid committing raw coverage output / keep threshold consistent

This looks like a pasted coverage run rather than source code. Versioning generated reports tends to go stale and can confuse contributors, especially since the embedded `fail-under=50` no longer matches the CI threshold (now 40% per workflow changes). Consider either:

- Removing this file from the repo (treat coverage reports as build artifacts), or
- Moving it into docs as an explicit example and updating the fail-under value to match CI.
authentication/tests.py (1)
15-21: Hardcoded password in test may trigger Ruff S106

Using a hardcoded password in test setup is generally fine from a security standpoint, but Ruff's S106 rule is flagging it. If this becomes noisy in your CI, consider either:

- Switching to something clearly non-secret like `"dummy-password"` or similar, or
- Adding `# noqa: S106` on that line to explicitly silence the rule for this test fixture.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `core/__pycache__/settings.cpython-313.pyc` is excluded by `!**/*.pyc`
📒 Files selected for processing (4)
- `.github/workflows/ci.yml` (2 hunks)
- `README.md` (3 hunks)
- `authentication/tests.py` (1 hunks)
- `test.txt` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/ci.yml
🧰 Additional context used
🧬 Code graph analysis (1)
authentication/tests.py (3)
- authentication/utils.py (1)
  - `_verify_email` (16-71)
- users/models.py (2)
  - `UserVerification` (117-147)
  - `create_user` (20-31)
- authentication/serializers.py (1)
  - `create` (40-45)
🪛 GitHub Actions: CI Pipeline
authentication/tests.py
[error] 27-27: Database IntegrityError: null value in column 'expires_at' violates not-null constraint during test setup while executing test_verify_email_success. Ensure expires_at is populated for UserVerification creation.
🪛 markdownlint-cli2 (0.18.1)
README.md
7-7: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
26-26: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
110-110: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
138-138: Spaces inside code span elements
(MD038, no-space-in-code)
🪛 Ruff (0.14.6)
authentication/tests.py
18-18: Possible hardcoded password assigned to argument: "password"
(S106)
authentication/tests.py
Outdated
```python
from django.contrib.auth import get_user_model
from django.test import TestCase
from django.utils.encoding import force_bytes
from django.utils.http import urlsafe_base64_encode
from rest_framework import status
```
Fix IntegrityError by setting expires_at on UserVerification in the test
UserVerification.expires_at is a non-nullable field, and the CI pipeline is failing with an IntegrityError because the test creates a record without expires_at. You also want the record to be non-expired for the “happy path” of _verify_email.
Set an explicit future expires_at and import the needed utilities, e.g.:
```diff
-from django.contrib.auth import get_user_model
-from django.test import TestCase
-from django.utils.encoding import force_bytes
-from django.utils.http import urlsafe_base64_encode
-from rest_framework import status
+from datetime import timedelta
+
+from django.contrib.auth import get_user_model
+from django.test import TestCase
+from django.utils import timezone
+from django.utils.encoding import force_bytes
+from django.utils.http import urlsafe_base64_encode
+from rest_framework import status
@@
-        UserVerification.objects.create(
-            user=self.user,
-            verification_type="email",
-            verification_code=self.token,
-        )
+        UserVerification.objects.create(
+            user=self.user,
+            verification_type="email",
+            verification_code=self.token,
+            expires_at=timezone.now() + timedelta(hours=1),
+        )
```

This should resolve the DB error and exercise the intended non-expired verification path.
Also applies to: 27-31
🤖 Prompt for AI Agents
In authentication/tests.py around lines 1-5 (and also apply to lines 27-31), the
test creates a UserVerification without setting the non-nullable expires_at
causing an IntegrityError; import django.utils.timezone and datetime.timedelta
(or from datetime import timedelta) and set expires_at to a future value (e.g.
timezone.now() + timedelta(days=1)) when creating the UserVerification so the
record is non-null and non-expired for the happy-path _verify_email test.
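The timing logic behind this fix can be checked without Django: the fixture's expires_at just needs to be a non-null, timezone-aware instant in the future (the variable names below mirror the test, but this is a plain-Python sketch, not the project's model code):

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
# Non-null value satisfies the NOT NULL constraint on expires_at
expires_at = now + timedelta(hours=1)

# The happy-path check the verification flow relies on: record not yet expired
is_expired = expires_at <= datetime.now(timezone.utc)
```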
…sful payment. Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
Summary by CodeRabbit
New Features
Documentation
Chores
Tests
✏️ Tip: You can customize this high-level summary in your review settings.