
Implement Feedback submission #558

Draft · wants to merge 53 commits into base: main

Conversation

harsh3dev
Collaborator

@harsh3dev harsh3dev commented Jan 21, 2025

Resolves #309

  • Add feedback form
  • Integrate ReCAPTCHA on the page
  • Create feedback app in the backend
  • Add frontend tests
  • Add backend tests
  • Update the code to use Chakra UI instead of shadcn/ui
Recording.2025-01-26.215215.mp4

Note:

I have not added any buttons linking to the /feedback page, as I couldn't figure out where to put them.
So I would appreciate the reviewers' suggestions on placement.
Thanks!

@harsh3dev harsh3dev marked this pull request as draft January 21, 2025 08:42
@harsh3dev harsh3dev marked this pull request as ready for review January 22, 2025 12:13
Collaborator

@kasya kasya left a comment


@harsh3dev Hey! Could you resolve conflicts on this PR? Thanks! 👍🏼

@harsh3dev harsh3dev requested a review from kasya January 23, 2025 04:42
@harsh3dev
Collaborator Author

@arkid15r @kasya

Recording.2025-01-29.150755.mp4

@harsh3dev harsh3dev requested a review from arkid15r January 29, 2025 10:15
@harsh3dev
Collaborator Author

@arkid15r @kasya I have updated the PR and made the suggested changes, but after every other PR merge this PR ends up with conflicts in the poetry.lock and package-lock.json files. So I think it would be better to review this PR for any other changes first, and I can resolve the conflicts just before merging.

@github-actions github-actions bot added the schema label Feb 3, 2025
@github-actions github-actions bot removed the schema label Feb 4, 2025
@arkid15r arkid15r marked this pull request as draft March 17, 2025 21:34
Contributor

coderabbitai bot commented Mar 27, 2025

Summary by CodeRabbit

  • New Features
    • Launched a comprehensive feedback system accessible through a header modal and a dedicated feedback page.
    • Users can now submit feedback with options for both anonymous and identified entries.
    • The new form includes spam prevention measures and provides immediate submission confirmation.
    • Enhancements in secure processing ensure that user feedback is handled reliably and efficiently.

Walkthrough

This pull request introduces feedback functionality across the backend and frontend. On the backend, a new feedback module is integrated with AWS S3 for storing feedback data, with the necessary configuration, dependencies, and tests. On the frontend, new environment variables, API integration, a dedicated feedback page, and UI components (form, modal, and supporting elements) have been added, along with tests, reCAPTCHA configuration, and routing updates to support anonymous feedback submission.
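The backend flow described above (fetch the TSV from S3 or start a new one, append a row, upload it back) can be sketched roughly as follows. This is a minimal stand-in, not the PR's actual code: the in-memory `bucket` dict replaces real `boto3` `get_object`/`put_object` calls, and `append_feedback` is a hypothetical helper.

```python
import csv
from io import StringIO

HEADER = ["Name", "Email", "Message", "is_anonymous", "is_nestbot", "created_at"]

def append_feedback(bucket: dict, key: str, row: list) -> None:
    """Fetch the TSV from `bucket` (or start a new one), append `row`, store it back."""
    output = StringIO()
    writer = csv.writer(output, delimiter="\t")
    if key in bucket:
        # existing sheet: load it, then move the cursor to the end before appending
        output.write(bucket[key])
        output.seek(0, 2)
    else:
        # first submission: write the header row
        writer.writerow(HEADER)
    writer.writerow(row)
    # stands in for s3_client.put_object(...)
    bucket[key] = output.getvalue()
```

Starting from an empty `bucket`, two calls produce a three-line TSV: the header plus one row per submission.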

Changes

Files | Change Summary
backend/.env.example, backend/apps/feedback/{admin.py, api/.../feedback.py, api/urls.py, apps.py, models/models.py}, backend/pyproject.toml, backend/settings/{base.py, urls.py}, backend/tests/feedback/{admin_test.py, api/feedback_test.py, urls_test.py, apps_test.py, models_test.py} Added new AWS and feedback environment variables; configured AWS S3 settings; introduced a feedback app with a viewset handling feedback submissions (creating/appending to a TSV file on S3) and associated tests; added the boto3 dependency.
frontend/.env.example, frontend/__tests__/unit/{App.test.tsx, pages/{ChapterDetails.test.tsx, Feedback.test.tsx}}, frontend/jest.setup.ts, frontend/package.json, frontend/src/{App.tsx, api/postFeedbackData.ts, components/{FeedbackForm.tsx, FeedbackFormModal.tsx, Header.tsx, ui/{Field.tsx, Switch.tsx}}, pages/{Feedback.tsx, index.ts}, sentry.config.ts, utils/{credentials.ts, helpers/schema.ts}} Introduced reCAPTCHA environment variables and new dependencies; added a feedback page, form, modal, and supporting UI components; implemented an API for posting feedback; updated routing and Sentry configuration; added validation schemas and global polyfills/mocks; included comprehensive unit tests.

Assessment against linked issues

Objective | Addressed | Explanation
Implement user feedback form on Nest frontend (#309)


Quality Gate failed

Failed conditions
10.1% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (29)
backend/apps/feedback/models/models.py (1)

1-2: Models file is empty but has a docstring

This file contains only a docstring but no actual models. Based on the PR description, it seems the feedback data is stored in AWS S3 rather than in a database. Consider one of the following approaches:

  1. Add actual model definitions if there are database entities to be stored
  2. Add a clarifying comment explaining that no models are needed because data is stored in S3
  3. Remove this file if it's not needed for this implementation
backend/apps/feedback/admin.py (1)

1-2: Admin registration file is empty

This file contains only a comment but no model registrations. This is consistent with the empty models.py file, but should be updated if any models are added.

If no models will be used (because data is stored in AWS S3), consider adding a clarifying comment or removing this file to avoid confusion.

backend/tests/feedback/models/models_test.py (1)

1-2: Test file is empty but has a docstring

This file contains only a docstring but no actual tests. This is consistent with the empty models.py file, but should be updated if any models are added.

If no models will be used in this implementation, consider adding a clarifying comment or removing this file to avoid confusion.

frontend/__tests__/unit/pages/ChapterDetails.test.tsx (2)

6-6: Consider limitations of JSON-based cloning for structuredClone polyfill.

While using JSON.parse(JSON.stringify(val)) is a common approach for deep cloning, it has limitations when handling circular references, functions, or special JavaScript types. For test data this may be acceptable, but consider using a more robust solution if complex objects need to be cloned.

-global.structuredClone = (val) => JSON.parse(JSON.stringify(val))
+global.structuredClone = (val) => {
+  // Handle special cases or use a more robust cloning library for complex objects
+  if (val === undefined || val === null) return val;
+  return JSON.parse(JSON.stringify(val));
+}

8-8: Consider implementing a mock for fetchAlgoliaData.

The current mock doesn't provide any implementation, which means it will return undefined by default. Consider providing a specific mock implementation to better control test behavior.

jest.mock('api/fetchAlgoliaData', () => ({
  fetchAlgoliaData: jest.fn().mockResolvedValue({ hits: [] })
}));
backend/tests/feedback/apps_test.py (1)

6-6: Consider removing extra blank line.

There's an unnecessary blank line here. While this is a minor formatting issue, removing it would improve code consistency.

frontend/src/components/FeedbackFormModal.tsx (3)

23-23: Simplify variant prop syntax.

Consider using variant="outline" instead of variant={'outline'} for consistency with other attribute styles.

-            variant={'outline'}
+            variant="outline"

20-35: Add accessibility attributes to the button.

The button is missing accessibility attributes like aria-label. Adding these would improve accessibility for screen readers.

          <Button
            size="sm"
            variant="outline"
            colorScheme="blue"
            className="group flex items-center border"
+            aria-label="Open feedback form"
          >

41-42: Consider adding a close button with visible text.

The current implementation has a DialogCloseTrigger but without any visible UI element. Consider adding a visible close button for better usability.

        <DialogBody>
          <FeedbackForm />
        </DialogBody>
-        <DialogCloseTrigger />
+        <DialogCloseTrigger asChild>
+          <Button size="sm" variant="ghost" position="absolute" top="4" right="4">
+            Close
+          </Button>
+        </DialogCloseTrigger>
frontend/src/utils/helpers/schema.ts (1)

3-16: Consider adding length validation for the message field.

While the current validation ensures the message is not empty, you might want to add a maximum length constraint to prevent excessively long submissions.

-  message: z.string().min(1, 'Message is required'),
+  message: z.string().min(1, 'Message is required').max(1000, 'Message cannot exceed 1000 characters'),
backend/tests/feedback/api/urls_test.py (2)

22-25: Consider simplifying the viewset assertion.

While the current assertion is clear, it could be simplified for better readability.

        viewset = route.callback.cls
-        assert issubclass(
-            viewset, viewset_class
-        ), f"Viewset for '{route.name}' does not match {viewset_class}."
+        assert issubclass(viewset, viewset_class), f"Viewset for '{route.name}' does not match {viewset_class}."

7-12: Consider testing more API endpoints.

The test is currently parameterized but only includes one test case. Consider expanding it to test other endpoints that might exist in the feedback API (e.g., detail, create, update).

@pytest.mark.parametrize(
    ("url_name", "expected_prefix", "viewset_class"),
    [
        ("feedback-list", "feedback", FeedbackViewSet),
+        ("feedback-detail", "feedback", FeedbackViewSet),
    ],
)
frontend/src/api/postFeedbackData.ts (1)

17-19: Consider handling the error response status in the catch block.

Currently, when a non-2xx response is received, an error is thrown but it's not explicitly caught by the function's catch block. Instead, it would propagate to the caller. Consider either:

  1. Moving this error-throwing code outside the try block, or
  2. Returning the response object and letting the caller handle status checks
-    if (!response.ok) {
-      throw new Error(`Failed to submit feedback: ${response.statusText}`)
-    }
+    return response

And then handle the status check in the component that calls this function.

frontend/src/components/ui/Switch.tsx (1)

11-32: Well-implemented Switch component with proper ref forwarding.

The Switch component correctly uses forwardRef to allow forwarding refs to the hidden input element, which is important for accessibility and form integration. The component provides good customization options through the trackLabel and thumbLabel props.

Consider optimizing the conditional rendering of children.

-      {children != null && <ChakraSwitch.Label>{children}</ChakraSwitch.Label>}
+      {children && <ChakraSwitch.Label>{children}</ChakraSwitch.Label>}

This simplifies the check, since null and undefined are falsy. Note that it also skips rendering other falsy children such as 0 or an empty string, which is usually the desired behavior for a label.

frontend/jest.setup.ts (1)

11-15: Duplicate ResizeObserver implementation

There's another identical implementation of ResizeObserver in the beforeAll hook at lines 31-35. Having two implementations of the same mock can lead to confusion and potential issues if they differ in the future.

Remove one of the duplicates:

-global.ResizeObserver = class {
-  observe() {}
-  unobserve() {}
-  disconnect() {}
-}

Since the implementation in the beforeAll hook already exists, I recommend removing this duplicate global declaration.

backend/apps/feedback/api/feedback.py (4)

41-42: Default parameter value for tsv_key

The default parameter for tsv_key is hardcoded as "feedbacks.tsv", but the same value is used again in _upload_tsv_to_s3. This could lead to inconsistency if one is updated but not the other.

Use a class constant or settings variable:

+    FEEDBACK_TSV_KEY = "feedbacks.tsv"
+
-    def _get_or_create_tsv(self, s3_client, tsv_key="feedbacks.tsv"):
+    def _get_or_create_tsv(self, s3_client, tsv_key=None):
        """Get the existing TSV file or creates a new one if it doesn't exist."""
+        if tsv_key is None:
+            tsv_key = self.FEEDBACK_TSV_KEY

And update _upload_tsv_to_s3 to use the same constant:

    def _upload_tsv_to_s3(self, s3_client, output):
        """Upload the updated TSV file back to S3."""
        output.seek(0)
        s3_client.put_object(
            Bucket=settings.AWS_STORAGE_BUCKET_NAME,
-            Key="feedbacks.tsv",
+            Key=self.FEEDBACK_TSV_KEY,
            Body=output.getvalue(),
            ContentType="text/tab-separated-values",
        )

58-58: Clarify the seek operation

This line might be confusing without additional context as noted in a past review.

Add a clarifying comment:

            # move the cursor to the end of the file
-            output.seek(0, 2)
+            output.seek(0, 2)  # offset 0 from SEEK_END (whence=2), i.e. the end of the file
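For readers unfamiliar with the two-argument form, here is a generic snippet (not code from the PR) showing the effect of `seek(0, 2)` on an in-memory stream:

```python
from io import StringIO

buf = StringIO("Name\tEmail\n")
buf.seek(0, 2)  # whence=2 is SEEK_END: position the cursor 0 characters from the end
buf.write("Ada\tada@example.org\n")
# the new row lands after the existing content instead of overwriting it
assert buf.getvalue() == "Name\tEmail\nAda\tada@example.org\n"
```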

89-95: Redundant public wrapper methods

The public methods write_feedback_to_tsv and get_s3_client simply call their private counterparts without adding functionality. This creates unnecessary code and maintenance burden.

If these methods are meant for testing or external use, consider removing them and making the private methods more accessible:

-    def write_feedback_to_tsv(self, writer, feedback_data):
-        """Public method to write feedback data to TSV format."""
-        self._write_feedback_to_tsv(writer, feedback_data)
-
-    def get_s3_client(self):
-        """Public method to get the S3 client."""
-        return self._get_s3_client()

Alternatively, if they're needed for testing, you could use Python's @property decorator or document their purpose more clearly.


59-64: Missing logging for S3 errors

When a NoSuchKey error occurs, it silently creates a new TSV file without logging the event, which could make debugging difficult.

Add logging for better observability:

import csv
from datetime import datetime, timezone
from io import StringIO
+import logging

import boto3
import botocore
from django.conf import settings
from django.core.exceptions import ValidationError
from rest_framework import status, viewsets
from rest_framework.permissions import AllowAny
from rest_framework.response import Response

+logger = logging.getLogger(__name__)
+

class FeedbackViewSet(viewsets.ModelViewSet):
    # ... existing code ...
    
    def _get_or_create_tsv(self, s3_client, tsv_key="feedbacks.tsv"):
        # ... existing code ...
        except botocore.exceptions.ClientError as e:
            if e.response["Error"]["Code"] == "NoSuchKey":
+                logger.info(f"TSV file {tsv_key} not found, creating new file")
                writer.writerow(
                    ["Name", "Email", "Message", "is_anonymous", "is_nestbot", "created_at"]
                )
+            else:
+                logger.error(f"Error accessing S3: {e}")
+                raise
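The log-and-branch pattern suggested above can be exercised without real AWS credentials. A minimal sketch, where `FakeClientError` and `handle_missing_key` are illustrative stand-ins (for `botocore.exceptions.ClientError` and the viewset's error handling, respectively), not part of the PR:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("feedback")

class FakeClientError(Exception):
    """Illustrative stand-in mirroring botocore's ClientError shape."""
    def __init__(self, code: str):
        super().__init__(code)
        self.response = {"Error": {"Code": code}}

def handle_missing_key(error: FakeClientError, tsv_key: str) -> str:
    """Create a new sheet on NoSuchKey; log and re-raise everything else."""
    code = error.response["Error"]["Code"]
    if code == "NoSuchKey":
        logger.info("TSV file %s not found, creating new file", tsv_key)
        return "create"
    logger.error("Error accessing S3: %s", error)
    raise error
```

Only the missing-key case is treated as recoverable; any other S3 error surfaces to the caller with a logged record.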
frontend/__tests__/unit/pages/Feedback.test.tsx (2)

8-14: Use optional chaining for ReCAPTCHA mock

The current implementation uses a conditional check with the && operator, but optional chaining would be cleaner.

jest.mock('react-google-recaptcha', () => ({
  __esModule: true,
  default: jest.fn(({ onChange }) => (
    <button data-testid="recaptcha-button" onClick={() => 
-      onChange && onChange('test-token')
+      onChange?.('test-token')
    }>
      Verify ReCAPTCHA
    </button>
  )),
}))
🧰 Tools
🪛 Biome (1.9.4)

[error] 10-10: Change to an optional chain.

Unsafe fix: Change to an optional chain.

(lint/complexity/useOptionalChain)


27-32: Nested ChakraProvider instances

The test renders the component with nested ChakraProvider instances, which is redundant and could lead to unexpected behavior.

render(
  <ChakraProvider value={defaultSystem}>
-    <ChakraProvider value={defaultSystem}>
      <FeedbackPage />
-    </ChakraProvider>
  </ChakraProvider>
)

Apply this change to all test cases that have the same nested providers.

frontend/src/components/FeedbackForm.tsx (5)

33-65: Consider improving error handling in the API response

The form submission logic has generic error handling that doesn't provide specific feedback to users about what went wrong. Consider leveraging the error details from the API response for more helpful user feedback.

- const responseData = await postFeedback(data)
- if (responseData.ok) {
+ try {
+   const responseData = await postFeedback(data)
+   if (responseData.ok) {
+     toast({
+       title: 'Feedback Submitted',
+       description: 'Thank you for your feedback!',
+     })
+     captchaRef.current.reset()
+     form.reset({
+       name: '',
+       email: '',
+       message: '',
+       is_anonymous: isAnonymous,
+       is_nestbot: false,
+     })
+   } else {
+     toast({
+       variant: 'destructive',
+       title: 'Failed to submit feedback',
+       description: responseData.error || 'Please try again later.',
+     })
+   }
+ } catch (error) {
   toast({
     variant: 'destructive',
     title: 'Failed to submit feedback',
-    description: 'Please try again later.',
+    description: error.message || 'Please try again later.',
   })
-}
+ }

73-102: Add form labeling to improve accessibility

While the form inputs have labels, they could be better associated with their inputs for screen readers.

<Field label="Name">
  <Input
    placeholder="Your name"
    id="name"
+   aria-labelledby="name-label"
    className="rounded-lg border border-border p-4"
    {...form.register('name')}
  />
  {form.formState.errors.name && (
-   <p className="mt-1 text-xs text-red-500">{form.formState.errors.name.message}</p>
+   <p id="name-error" className="mt-1 text-xs text-red-500" aria-live="polite">{form.formState.errors.name.message}</p>
  )}
</Field>

And similar changes for other form fields.


134-134: Add accessible error handling for ReCAPTCHA

The ReCAPTCHA component should have an accessible way for users to understand when validation fails.

-<ReCAPTCHA sitekey={RECAPTCHA_SITE_KEY} ref={captchaRef} />
+<div>
+  <ReCAPTCHA 
+    sitekey={RECAPTCHA_SITE_KEY} 
+    ref={captchaRef} 
+    aria-describedby="recaptcha-instructions"
+  />
+  <span id="recaptcha-instructions" className="sr-only">
+    Please complete the reCAPTCHA to verify you are not a robot.
+  </span>
+</div>

1-16: Add PR issue context to component

Since this component implements the feedback functionality mentioned in the PR objectives (issue #309), consider adding a comment header to document this connection.

+/**
+ * FeedbackForm Component
+ * 
+ * Implements the feedback submission form functionality for issue #309.
+ * Uses Chakra UI components and ReCAPTCHA for spam prevention.
+ */
import { Textarea, Input, Button } from '@chakra-ui/react'

67-146: Address navigation to feedback page mentioned in PR objectives

The PR objectives mentioned uncertainty about where to place navigation to the feedback page. Consider adding a comment about this or implementing a standard navigation pattern.

Would you like me to suggest some common patterns for adding navigation to this page, such as a footer link, navbar dropdown, or help menu item?

backend/tests/feedback/api/feedback_test.py (2)

14-16: Remove parentheses from pytest fixtures

Per Python best practices and static analysis hints, remove empty parentheses from the pytest.fixture decorators.

-@pytest.fixture()
+@pytest.fixture
def feedback_viewset():
    return FeedbackViewSet()

-@pytest.fixture()
+@pytest.fixture
def valid_feedback_data():
    return {

-@pytest.fixture()
+@pytest.fixture
def mock_s3_client():
    with patch("boto3.client") as mock_client:

Also applies to: 19-21, 30-32

🧰 Tools
🪛 Ruff (0.8.2)

14-14: Use @pytest.fixture over @pytest.fixture()

Remove parentheses

(PT001)


77-77: Use realistic test data with timezone-aware datetime

The test pins a hardcoded timestamp down to the microsecond, which obscures the test's intent and can drift out of sync with the code under test. Prefer a timezone-aware current time, or freeze time explicitly.

-current_time = datetime(2025, 1, 22, 10, 45, 34, 567884, tzinfo=timezone.utc)
+current_time = datetime.now(timezone.utc)
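As background for the suggestion above (a generic snippet, not PR code): aware and naive datetimes cannot be compared directly, which is one reason to keep test data timezone-aware throughout.

```python
from datetime import datetime, timezone

aware = datetime.now(timezone.utc)                 # carries tzinfo; safe to compare and serialize
naive = datetime(2025, 1, 22, 10, 45, 34, 567884)  # no tzinfo attached

assert aware.tzinfo == timezone.utc
assert naive.tzinfo is None
try:
    _ = aware < naive  # mixing aware and naive datetimes raises TypeError
except TypeError:
    pass
```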
backend/settings/base.py (1)

126-139: Add documentation for AWS configuration variables

The AWS configuration variables would benefit from documentation explaining their purpose and requirements.

# AWS S3 Configuration
+# Used for storing feedback submissions in a TSV file
AWS_ACCESS_KEY_ID = values.SecretValue(environ_name="AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = values.SecretValue(environ_name="AWS_SECRET_ACCESS_KEY")
AWS_STORAGE_BUCKET_NAME = values.SecretValue(environ_name="AWS_STORAGE_BUCKET_NAME")
AWS_S3_REGION_NAME = values.SecretValue(environ_name="AWS_S3_REGION_NAME")
+# Key for the TSV file storing feedback data in the S3 bucket
FEEDBACK_SHEET_KEY = values.SecretValue(environ_name="FEEDBACK_SHEET_KEY")
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b345898 and 65e60c1.

⛔ Files ignored due to path filters (1)
  • backend/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (32)
  • backend/.env.example (1 hunks)
  • backend/apps/feedback/admin.py (1 hunks)
  • backend/apps/feedback/api/feedback.py (1 hunks)
  • backend/apps/feedback/api/urls.py (1 hunks)
  • backend/apps/feedback/apps.py (1 hunks)
  • backend/apps/feedback/models/models.py (1 hunks)
  • backend/pyproject.toml (1 hunks)
  • backend/settings/base.py (2 hunks)
  • backend/settings/urls.py (1 hunks)
  • backend/tests/feedback/admin_test.py (1 hunks)
  • backend/tests/feedback/api/feedback_test.py (1 hunks)
  • backend/tests/feedback/api/urls_test.py (1 hunks)
  • backend/tests/feedback/apps_test.py (1 hunks)
  • backend/tests/feedback/models/models_test.py (1 hunks)
  • frontend/.env.example (1 hunks)
  • frontend/__tests__/unit/App.test.tsx (1 hunks)
  • frontend/__tests__/unit/pages/ChapterDetails.test.tsx (1 hunks)
  • frontend/__tests__/unit/pages/Feedback.test.tsx (1 hunks)
  • frontend/jest.setup.ts (1 hunks)
  • frontend/package.json (1 hunks)
  • frontend/src/App.tsx (3 hunks)
  • frontend/src/api/postFeedbackData.ts (1 hunks)
  • frontend/src/components/FeedbackForm.tsx (1 hunks)
  • frontend/src/components/FeedbackFormModal.tsx (1 hunks)
  • frontend/src/components/Header.tsx (2 hunks)
  • frontend/src/components/ui/Field.tsx (1 hunks)
  • frontend/src/components/ui/Switch.tsx (1 hunks)
  • frontend/src/pages/Feedback.tsx (1 hunks)
  • frontend/src/pages/index.ts (2 hunks)
  • frontend/src/sentry.config.ts (1 hunks)
  • frontend/src/utils/credentials.ts (1 hunks)
  • frontend/src/utils/helpers/schema.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (5)
backend/apps/feedback/api/urls.py (1)
backend/apps/feedback/api/feedback.py (1)
  • FeedbackViewSet (16-95)
frontend/src/App.tsx (1)
frontend/src/pages/Feedback.tsx (1)
  • FeedbackPage (4-13)
frontend/src/components/Header.tsx (1)
frontend/src/components/FeedbackFormModal.tsx (1)
  • FeedbackFormModal (15-45)
frontend/src/pages/Feedback.tsx (1)
frontend/src/components/FeedbackForm.tsx (1)
  • FeedbackForm (17-146)
frontend/src/api/postFeedbackData.ts (1)
frontend/src/utils/credentials.ts (1)
  • API_URL (1-1)
🔇 Additional comments (28)
frontend/src/utils/credentials.ts (1)

10-10:

✅ Verification successful

LGTM: Added ReCAPTCHA site key

The addition of the RECAPTCHA_SITE_KEY environment variable follows the existing pattern in the file and supports the ReCAPTCHA integration mentioned in the PR description.


🏁 Script executed:

#!/bin/bash
# Verify that the new environment variable is properly defined in .env.example
grep -r "VITE_RECAPTCHA_SITE_KEY" frontend/.env.example

Length of output: 80


LGTM: ReCAPTCHA credentials integration verified.

The RECAPTCHA_SITE_KEY environment variable is correctly implemented following the existing pattern in the file, and the verification confirms its presence in frontend/.env.example (with an empty default). No further changes are necessary.

backend/apps/feedback/apps.py (1)

4-6: LGTM: Django app configuration is correctly implemented

The FeedbackConfig class follows Django's standard app configuration pattern with proper name and default_auto_field settings.

frontend/.env.example (1)

6-7: LGTM: Required reCAPTCHA environment variables added

These environment variables are necessary for the reCAPTCHA integration in the feedback form.

backend/pyproject.toml (1)

47-47: Confirm boto3 dependency requirement

The boto3 library has been added, which is appropriate for AWS S3 integration for storing feedback data.

Note: A previous review asked "Why these new dependencies? I understand we need boto3 only, right?" The current changes show only boto3 being added, which aligns with the feedback functionality's AWS integration needs.

frontend/src/pages/Feedback.tsx (1)

4-12: LGTM: Clean implementation of feedback page

The FeedbackPage component is well-structured and follows React best practices with proper layout and component composition.

As mentioned in the PR description, you're unsure about navigation to this page. Consider adding a feedback link in appropriate locations like the main navigation menu, footer, or user profile section based on the expected user flow.

frontend/src/pages/index.ts (1)

10-10: Appropriate integration of FeedbackPage component

The import and export additions for the FeedbackPage component follow the existing pattern in the file, making the new feedback page accessible throughout the application.

Also applies to: 30-30

frontend/package.json (1)

48-48: New dependencies properly added for feedback functionality

The addition of react-google-recaptcha aligns with implementing ReCAPTCHA on the feedback page as mentioned in the PR objectives. The next-themes package is likely related to the transition from shadcn UI to Chakra UI for theming support.

Also applies to: 51-51

backend/apps/feedback/api/urls.py (1)

1-9: API routing for feedback functionality appropriately configured

The URL routing setup for the feedback API follows Django REST Framework best practices by using SimpleRouter and registering the FeedbackViewSet with an appropriate basename.

backend/.env.example (1)

7-9: AWS S3 configuration added for feedback storage

The new environment variables properly support the AWS S3 integration needed for storing feedback data.

However, I noticed from the provided code snippet that FeedbackViewSet._upload_tsv_to_s3() method uses a hardcoded filename "feedbacks.tsv":

s3_client.put_object(
    Bucket=settings.AWS_STORAGE_BUCKET_NAME,
    Key="feedbacks.tsv",  # Hardcoded value
    Body=output.getvalue(),
    ContentType="text/tab-separated-values",
)

Consider making this configurable using the DJANGO_FEEDBACK_SHEET_KEY environment variable that's being added here.

frontend/src/sentry.config.ts (1)

6-6: Good use of optional chaining for null/undefined safety.

The addition of the optional chaining operator (?.) before calling toLowerCase() prevents potential runtime errors if ENVIRONMENT is null or undefined, making the code more robust.

frontend/__tests__/unit/App.test.tsx (1)

21-21: Mock addition for FeedbackPage component is appropriate.

The mock follows the established pattern for other page components, using a consistent testing approach with a data-testid attribute that will facilitate testing in the future.

backend/settings/urls.py (1)

16-16: Feedback API router integration follows established patterns.

The feedback router is imported and integrated into the main router in the same way as the existing GitHub and OWASP routers, maintaining consistency in the API routing structure.

Also applies to: 24-24

backend/tests/feedback/apps_test.py (1)

1-11: LGTM: Clear and concise test for the FeedbackConfig class.

The test appropriately verifies that the feedback app is configured correctly by checking the app name from both the FeedbackConfig class directly and through Django's app registry.

frontend/src/components/FeedbackFormModal.tsx (1)

15-45: Good implementation of the feedback modal component.

The component is well-structured and uses Chakra UI components effectively. The transition between icons on hover provides good user feedback.

frontend/src/utils/helpers/schema.ts (2)

3-9: Well-designed schema for feedback validation.

The schema properly validates required fields and includes appropriate validation messages. The inclusion of an is_anonymous flag is a good design choice.


11-14: Good approach for handling anonymous feedback.

Extending the main schema to make name and email optional for anonymous submissions is a clean approach.

backend/tests/feedback/api/urls_test.py (1)

7-26: Good parameterized test for router registration.

The test effectively verifies that the feedback routes are correctly registered with the appropriate viewset. Using parameterization allows for easy expansion if more routes are added in the future.

frontend/src/components/Header.tsx (2)

13-13: Good import placement for the FeedbackFormModal.

The import for the FeedbackFormModal component is correctly placed with the other component imports.


95-95: Strategic placement of the feedback button in the header.

The FeedbackFormModal is well-positioned in the header's action buttons area, making it easily accessible from anywhere in the application. This placement follows the requested feedback submission implementation from the PR description.

frontend/src/App.tsx (3)

13-13: Good import organization for the FeedbackPage component.

The FeedbackPage component is correctly imported along with the other page components.


23-23: Import repositioning looks reasonable.

The UsersPage import was moved out of the destructured import block, which is fine as long as the component is still being used correctly in the routes.


49-49: Good implementation of the feedback route.

The feedback route is correctly implemented, matching the PR's goal to add feedback functionality. The implementation includes both a dedicated page route and a modal (as seen in Header.tsx), which addresses previous reviewer suggestions.

frontend/src/components/ui/Switch.tsx (1)

4-9: Well-defined interface with good extensibility.

The SwitchProps interface properly extends the Chakra UI switch props and provides additional custom properties for enhanced functionality. The props are well-named and their types are appropriately defined.

frontend/jest.setup.ts (1)

16-16: Improved structuredClone implementation

The updated implementation correctly handles undefined values, which is an improvement over previous versions.

frontend/src/components/ui/Field.tsx (2)

4-9: Well-structured interface definition

The FieldProps interface properly extends Chakra UI's Field component props while allowing for custom label handling.


11-26: Well-implemented Field component

The component follows React best practices by:

  • Using forwardRef to properly support ref forwarding
  • Destructuring props for clean code
  • Conditionally rendering UI elements based on provided props
  • Properly spreading remaining props to the underlying component
backend/tests/feedback/api/feedback_test.py (1)

59-71:

❓ Verification inconclusive

Verify the expected status code for validation errors

The test expects a 201 Created status code even when there's a validation error from AWS. This seems inconsistent with RESTful API practice, which typically uses 4xx status codes for validation errors.


🏁 Script executed:

#!/bin/bash
# Check the implementation of FeedbackViewSet to verify error handling

# Look for the create method implementation
rg -A 15 "def create\(" --glob "*/feedback/api/feedback.py"

# Also check if there's any error handling that might affect the response status
rg "return Response\(" --glob "*/feedback/api/feedback.py"

Length of output: 420


Below is a further verification step to gather more context on how the create method in the FeedbackViewSet is implemented. We need to confirm whether the 201 status code returned on AWS validation errors is intentional or whether a 4xx status would be more appropriate. Please run the following shell script to locate the FeedbackViewSet definition and its create method implementation:

Once the script's output has been verified, we can confirm whether the behavior is as intended.

Action: Manually verify the error-handling behavior in the FeedbackViewSet's create method.

  • Check that the file(s) containing FeedbackViewSet and its create method (likely under a path similar to backend/feedback/api/feedback.py) implement error handling in a manner consistent with REST practices.
  • Confirm whether a 201 status code is indeed used in cases of AWS validation errors or if this is an oversight that might require reverting to a 4xx code.

Once you have reviewed the output and determined whether the behavior is by design, please update the test or the implementation as necessary.


🏁 Script executed:

#!/bin/bash
echo "Searching for the FeedbackViewSet class definition..."
rg -n "class FeedbackViewSet" .

echo "Searching for the 'create' method implementation..."
rg -n "def create\(" .

Length of output: 664


Action: Confirm AWS Validation Error Response Handling

After verifying the location and basic structure of the FeedbackViewSet in backend/apps/feedback/api/feedback.py, it appears the create method (at line 21) returns a 201 Created response even when a simulated AWS validation error (a ClientError with a "ValidationError" code) occurs. Given that RESTful best practices typically expect a 4xx status code for validation errors, please ensure that this behavior is intentional.

  • If the design is to log the error and still record feedback successfully, leaving the 201 response in place may be acceptable.
  • Otherwise, update the error handling either in the API or the test to reflect the correct 4xx status when AWS validation errors occur.
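If the stricter REST behavior is chosen, the exception-to-status mapping could be factored out roughly as follows. This is a sketch only; the exception classes here are stdlib stand-ins for rest_framework's ValidationError and botocore's ClientError:

```python
from http import HTTPStatus

class ValidationError(Exception):
    """Stand-in for rest_framework.exceptions.ValidationError."""

class ClientError(Exception):
    """Stand-in for botocore.exceptions.ClientError."""

def status_for(exc: Exception) -> int:
    """Map exceptions raised while storing feedback to HTTP statuses:
    client-side validation problems get 4xx, storage failures get 5xx."""
    if isinstance(exc, ValidationError):
        return HTTPStatus.BAD_REQUEST            # 400: reject invalid input
    if isinstance(exc, ClientError):
        return HTTPStatus.INTERNAL_SERVER_ERROR  # 500: S3 call failed
    return HTTPStatus.INTERNAL_SERVER_ERROR      # 500: anything unexpected
```

Note that the simulated AWS error in the test is a botocore ClientError (even though its error code is "ValidationError"), so under this mapping it would yield 500 rather than 201, and the test assertion would change accordingly.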
backend/settings/base.py (1)

44-51: Consider ordering the LOCAL_APPS alphabetically

A previous review comment suggested ordering items alphabetically. Consider reordering the LOCAL_APPS for better maintainability.

LOCAL_APPS = (
    "apps.common",
    "apps.core",
-   "apps.feedback",
    "apps.github",
+   "apps.feedback",
    "apps.owasp",
    "apps.slack",
)

@@ -0,0 +1 @@
# Register your models here.

⚠️ Potential issue

This file appears to be incomplete or misplaced

This file contains only a comment that's typically found in Django admin.py files, not test files. A test file should contain actual test cases to verify functionality. Either implement proper test cases for the admin configuration or consider removing this file if it's not needed.

Comment on lines +6 to +30
export const postFeedback = async (data: FeedbackFormValues) => {
const url = `${API_URL}/feedback/`
try {
const response = await fetch(url, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(data),
})

if (!response.ok) {
throw new Error(`Failed to submit feedback: ${response.statusText}`)
}

return response
} catch (error) {
logger.error('Failed to submit feedback', error)
toast({
variant: 'destructive',
title: 'Failed to submit feedback',
description: 'Please try again later.',
})
}
}

🛠️ Refactor suggestion

Add a return value from the catch block to maintain consistent function behavior.

The function doesn't return anything explicitly in the catch block, which will result in undefined being returned when an error occurs. This could cause issues for calling components expecting a response object.

  } catch (error) {
    logger.error('Failed to submit feedback', error)
    toast({
      variant: 'destructive',
      title: 'Failed to submit feedback',
      description: 'Please try again later.',
    })
+   return null
  }

Comment on lines +21 to +31
def create(self, request):
"""Handle POST request for feedback submission."""
try:
s3_client = self._get_s3_client()
output, writer = self._get_or_create_tsv(s3_client)
self._write_feedback_to_tsv(writer, request.data)
self._upload_tsv_to_s3(s3_client, output)
return Response(status=status.HTTP_201_CREATED)
except ValidationError:
return Response({"error": "Invalid Credentials"}, status=status.HTTP_400_BAD_REQUEST)


⚠️ Potential issue

Error handling could be more specific

The catch block only handles ValidationError, but other exceptions (like network issues or S3 failures) are not caught, which could expose internal errors to users.

Implement more comprehensive error handling:

def create(self, request):
    """Handle POST request for feedback submission."""
    try:
        s3_client = self._get_s3_client()
        output, writer = self._get_or_create_tsv(s3_client)
        self._write_feedback_to_tsv(writer, request.data)
        self._upload_tsv_to_s3(s3_client, output)
        return Response(status=status.HTTP_201_CREATED)
    except ValidationError:
        return Response({"error": "Invalid Credentials"}, status=status.HTTP_400_BAD_REQUEST)
+    except botocore.exceptions.ClientError as e:
+        return Response(
+            {"error": "Failed to process feedback"},
+            status=status.HTTP_500_INTERNAL_SERVER_ERROR
+        )
+    except Exception:
+        return Response(
+            {"error": "An unexpected error occurred"},
+            status=status.HTTP_500_INTERNAL_SERVER_ERROR
+        )

Comment on lines +66 to +77
def _write_feedback_to_tsv(self, writer, feedback_data):
"""Write the new feedback data to the TSV file."""
writer.writerow(
(
feedback_data["name"],
feedback_data["email"],
feedback_data["message"],
feedback_data["is_anonymous"],
feedback_data["is_nestbot"],
datetime.now(timezone.utc).isoformat(),
)
)

🛠️ Refactor suggestion

Consider input sanitization for TSV data

The feedback data is directly written to the TSV without sanitizing inputs. This could cause issues if user inputs contain tab characters, newlines, or other special characters that might break the TSV format.

Implement input sanitization:

def _write_feedback_to_tsv(self, writer, feedback_data):
    """Write the new feedback data to the TSV file."""
+    # Sanitize inputs to prevent TSV injection
+    sanitized_data = {
+        key: str(value).replace('\t', ' ').replace('\n', ' ').replace('\r', ' ')
+        if isinstance(value, str) else value
+        for key, value in feedback_data.items()
+    }
+    
    writer.writerow(
        (
-            feedback_data["name"],
-            feedback_data["email"],
-            feedback_data["message"],
-            feedback_data["is_anonymous"],
-            feedback_data["is_nestbot"],
+            sanitized_data["name"],
+            sanitized_data["email"],
+            sanitized_data["message"],
+            sanitized_data["is_anonymous"],
+            sanitized_data["is_nestbot"],
            datetime.now(timezone.utc).isoformat(),
        )
    )

Comment on lines +167 to +171
await waitFor(() => {
expect(screen.getByPlaceholderText('Your name')).toHaveValue('John Doe')
expect(screen.getByPlaceholderText('[email protected]')).toHaveValue('[email protected]')
expect(screen.getByPlaceholderText('Your feedback here...')).toHaveValue('Great job!')
})

⚠️ Potential issue

Incorrect assertion for form reset

The test asserts that form fields still have values after submission, but a reset test should verify that fields are cleared.

await waitFor(() => {
-  expect(screen.getByPlaceholderText('Your name')).toHaveValue('John Doe')
-  expect(screen.getByPlaceholderText('[email protected]')).toHaveValue('[email protected]')
-  expect(screen.getByPlaceholderText('Your feedback here...')).toHaveValue('Great job!')
+  expect(screen.getByPlaceholderText('Your name')).toHaveValue('')
+  expect(screen.getByPlaceholderText('[email protected]')).toHaveValue('')
+  expect(screen.getByPlaceholderText('Your feedback here...')).toHaveValue('')
})

Comment on lines +62 to +64
await waitFor(() => {
expect(mockPostFeedback).toHaveBeenCalledTimes(0)
})

⚠️ Potential issue

Incorrect expectation in successful submission test

The test expects mockPostFeedback to be called 0 times, which is incorrect for a successful submission test.

await waitFor(() => {
-  expect(mockPostFeedback).toHaveBeenCalledTimes(0)
+  expect(mockPostFeedback).toHaveBeenCalledTimes(1)
})

Comment on lines +94 to +96
await waitFor(() => {
expect(screen.getByText('Anonymous Feedback')).toBeInTheDocument()
})

⚠️ Potential issue

Improper assertion for error state

The test checks for "Anonymous Feedback" text to verify an error state, which doesn't actually confirm that an error message is displayed to the user.

await waitFor(() => {
-  expect(screen.getByText('Anonymous Feedback')).toBeInTheDocument()
+  expect(screen.getByText(/error|failed/i)).toBeInTheDocument()
})

Comment on lines +40 to +99
class TestFeedbackViewSet:
def test_create_success(self, feedback_viewset, valid_feedback_data, mock_s3_client):
"""Test successful feedback submission."""
request = Mock()
request.data = valid_feedback_data

mock_s3_client.get_object.return_value = {"Body": Mock(read=lambda: b"")}

response = feedback_viewset.create(request)

assert response.status_code == status.HTTP_201_CREATED

mock_s3_client.put_object.assert_called_once()
put_call_kwargs = mock_s3_client.put_object.call_args[1]
assert put_call_kwargs["Bucket"] == settings.AWS_STORAGE_BUCKET_NAME
assert put_call_kwargs["Key"] == "feedbacks.tsv"
assert "Body" in put_call_kwargs
assert put_call_kwargs["ContentType"] == "text/tab-separated-values"

def test_create_validation_error(self, feedback_viewset, valid_feedback_data, mock_s3_client):
"""Test feedback submission with validation error."""
request = Mock()
request.data = valid_feedback_data

mock_s3_client.get_object.side_effect = botocore.exceptions.ClientError(
{"Error": {"Code": "ValidationError", "Message": "Invalid credentials"}}, "GetObject"
)

response = feedback_viewset.create(request)

assert response.status_code == status.HTTP_201_CREATED

def test_write_feedback_to_tsv(self, feedback_viewset, valid_feedback_data):
"""Test writing feedback data to TSV format."""
output = StringIO()
writer = csv.writer(output, delimiter="\t")

current_time = datetime(2025, 1, 22, 10, 45, 34, 567884, tzinfo=timezone.utc)
with patch("django.utils.timezone.now", return_value=current_time):
feedback_viewset.write_feedback_to_tsv(writer, valid_feedback_data)

output.seek(0)
written_data = output.getvalue().strip().split("\t")
assert written_data[0] == valid_feedback_data["name"]
assert written_data[1] == valid_feedback_data["email"]
assert written_data[2] == valid_feedback_data["message"]
assert written_data[3] == str(valid_feedback_data["is_anonymous"])
assert written_data[4] == str(valid_feedback_data["is_nestbot"])

@patch("boto3.client")
def test_get_s3_client(self, mock_boto3, feedback_viewset):
"""Test S3 client initialization."""
feedback_viewset.get_s3_client()

mock_boto3.assert_called_once_with(
"s3",
aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
region_name=settings.AWS_S3_REGION_NAME,
)

🛠️ Refactor suggestion

Add test for anonymous feedback submission

The test suite is missing a specific test for anonymous feedback submission, which is a key feature of the functionality.

def test_create_anonymous_feedback(self, feedback_viewset, valid_feedback_data, mock_s3_client):
    """Test anonymous feedback submission."""
    # Modify data to be anonymous
    anon_data = valid_feedback_data.copy()
    anon_data["is_anonymous"] = True
    anon_data["name"] = ""
    anon_data["email"] = ""
    
    request = Mock()
    request.data = anon_data
    
    mock_s3_client.get_object.return_value = {"Body": Mock(read=lambda: b"")}
    
    response = feedback_viewset.create(request)
    
    assert response.status_code == status.HTTP_201_CREATED
    
    # Verify the S3 object contains empty name and email fields
    put_call_kwargs = mock_s3_client.put_object.call_args[1]
    body_content = put_call_kwargs["Body"]
    assert "\t\t" in body_content  # Check for empty name and email columns

Comment on lines +126 to +139
# AWS S3 Configuration
AWS_ACCESS_KEY_ID = values.SecretValue(environ_name="AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = values.SecretValue(environ_name="AWS_SECRET_ACCESS_KEY")
AWS_STORAGE_BUCKET_NAME = values.SecretValue(environ_name="AWS_STORAGE_BUCKET_NAME")
AWS_S3_REGION_NAME = values.SecretValue(environ_name="AWS_S3_REGION_NAME")
FEEDBACK_SHEET_KEY = values.SecretValue(environ_name="FEEDBACK_SHEET_KEY")

AWS = {
"ACCESS_KEY_ID": AWS_ACCESS_KEY_ID,
"SECRET_ACCESS_KEY": AWS_SECRET_ACCESS_KEY,
"STORAGE_BUCKET_NAME": AWS_STORAGE_BUCKET_NAME,
"S3_REGION_NAME": AWS_S3_REGION_NAME,
"FEEDBACK_SHEET_KEY": FEEDBACK_SHEET_KEY,
}

💡 Verification agent

🧩 Analysis chain

Verify environment variables are added to documentation

Ensure that the new AWS environment variables are added to the project documentation and .env.example files.


🏁 Script executed:

#!/bin/bash
# Check if AWS environment variables are documented in .env.example

# Look for AWS variables in .env.example 
if [ -f "backend/.env.example" ]; then
    echo "Checking .env.example for AWS environment variables:"
    grep -E "AWS_|FEEDBACK_" backend/.env.example
else
    echo "backend/.env.example file not found"
fi

# Also check for any documentation on setting up AWS for the feedback feature
fd -t f -e md | xargs grep -l "AWS" | head -5

Length of output: 404


AWS Environment Variables Documentation Consistency

The .env.example file now includes the AWS variables, but they’re prefixed with DJANGO_ (e.g. DJANGO_AWS_ACCESS_KEY_ID), while the code in backend/settings/base.py references them without the prefix (e.g. AWS_ACCESS_KEY_ID). Additionally, a search for AWS setup guidance in the documentation didn’t return any clear instructions for the feedback feature.

  • .env.example Consistency: Verify that the environment variable names match between the code and the .env.example file. If the prefix isn’t intentional, update one side to ensure consistency.
  • Documentation Update: Confirm that the project documentation clearly explains how to configure these AWS environment variables for the feedback feature.


Successfully merging this pull request may close these issues.

Implemnet user feedback form on Nest frontend