Implement Feedback submission #558
base: main
Conversation
@harsh3dev Hey! Could you resolve conflicts on this PR? Thanks! 👍🏼
@arkid15r @kasya I have updated the PR and made the suggested changes, but after every other PR merge this PR ends up with conflicts in the
Summary by CodeRabbit
Walkthrough
This pull request introduces feedback functionality across the backend and frontend. On the backend, a new feedback module is integrated with AWS S3 for storing feedback data and includes the necessary configuration, dependencies, and tests. On the frontend, new environment variables, API integration, a dedicated feedback page, and UI components (form, modal, and supporting elements) have been added, along with tests, reCAPTCHA configuration, and routing updates to support anonymous feedback submission.
Changes
Assessment against linked issues
✨ Finishing Touches
Actionable comments posted: 9
🧹 Nitpick comments (29)
backend/apps/feedback/models/models.py (1)
1-2: Models file is empty but has a docstring
This file contains only a docstring but no actual models. Based on the PR description, it seems the feedback data is stored in AWS S3 rather than in a database. Consider one of the following approaches:
- Add actual model definitions if there are database entities to be stored
- Add a clarifying comment explaining that no models are needed because data is stored in S3
- Remove this file if it's not needed for this implementation
backend/apps/feedback/admin.py (1)
1-2: Admin registration file is empty
This file contains only a comment but no model registrations. This is consistent with the empty models.py file, but should be updated if any models are added.
If no models will be used (because data is stored in AWS S3), consider adding a clarifying comment or removing this file to avoid confusion.
backend/tests/feedback/models/models_test.py (1)
1-2: Test file is empty but has a docstring
This file contains only a docstring but no actual tests. This is consistent with the empty models.py file, but should be updated if any models are added.
If no models will be used in this implementation, consider adding a clarifying comment or removing this file to avoid confusion.
frontend/__tests__/unit/pages/ChapterDetails.test.tsx (2)
6-6: Consider limitations of JSON-based cloning for structuredClone polyfill.
While using `JSON.parse(JSON.stringify(val))` is a common approach for deep cloning, it has limitations when handling circular references, functions, or special JavaScript types. For test data this may be acceptable, but consider using a more robust solution if complex objects need to be cloned.
-global.structuredClone = (val) => JSON.parse(JSON.stringify(val))
+global.structuredClone = (val) => {
+  // Handle special cases or use a more robust cloning library for complex objects
+  if (val === undefined || val === null) return val;
+  return JSON.parse(JSON.stringify(val));
+}
8-8: Consider implementing a mock for fetchAlgoliaData.
The current mock doesn't provide any implementation, which means it will return undefined by default. Consider providing a specific mock implementation to better control test behavior.
jest.mock('api/fetchAlgoliaData', () => ({ fetchAlgoliaData: jest.fn().mockResolvedValue({ hits: [] }) }));
backend/tests/feedback/apps_test.py (1)
6-6: Consider removing extra blank line.
There's an unnecessary blank line here. While this is a minor formatting issue, removing it would improve code consistency.
frontend/src/components/FeedbackFormModal.tsx (3)
23-23: Simplify variant prop syntax.
Consider using `variant="outline"` instead of `variant={'outline'}` for consistency with other attribute styles.
- variant={'outline'}
+ variant="outline"
20-35: Add accessibility attributes to the button.
The button is missing accessibility attributes like `aria-label`. Adding these would improve accessibility for screen readers.
  <Button
    size="sm"
    variant="outline"
    colorScheme="blue"
    className="group flex items-center border"
+   aria-label="Open feedback form"
  >
41-42: Consider adding a close button with visible text.
The current implementation has a `DialogCloseTrigger` but without any visible UI element. Consider adding a visible close button for better usability.
  <DialogBody>
    <FeedbackForm />
  </DialogBody>
- <DialogCloseTrigger />
+ <DialogCloseTrigger asChild>
+   <Button size="sm" variant="ghost" position="absolute" top="4" right="4">
+     Close
+   </Button>
+ </DialogCloseTrigger>
frontend/src/utils/helpers/schema.ts (1)
3-16: Consider adding length validation for the message field.
While the current validation ensures the message is not empty, you might want to add a maximum length constraint to prevent excessively long submissions.
  message: z.string().min(1, 'Message is required'),
+   .max(1000, 'Message cannot exceed 1000 characters'),
backend/tests/feedback/api/urls_test.py (2)
22-25: Consider simplifying the viewset assertion.
While the current assertion is clear, it could be simplified for better readability.
  viewset = route.callback.cls
- assert issubclass(
-     viewset, viewset_class
- ), f"Viewset for '{route.name}' does not match {viewset_class}."
+ assert issubclass(viewset, viewset_class), f"Viewset for '{route.name}' does not match {viewset_class}."
7-12: Consider testing more API endpoints.
The test is currently parameterized but only includes one test case. Consider expanding it to test other endpoints that might exist in the feedback API (e.g., detail, create, update).
  @pytest.mark.parametrize(
      ("url_name", "expected_prefix", "viewset_class"),
      [
          ("feedback-list", "feedback", FeedbackViewSet),
+         ("feedback-detail", "feedback", FeedbackViewSet),
      ],
  )
frontend/src/api/postFeedbackData.ts (1)
17-19: Consider handling the error response status in the catch block.
Currently, when a non-2xx response is received, an error is thrown but it's not explicitly caught by the function's catch block. Instead, it would propagate to the caller. Consider either:
- Moving this error-throwing code outside the try block, or
- Returning the response object and letting the caller handle status checks
- if (!response.ok) {
-   throw new Error(`Failed to submit feedback: ${response.statusText}`)
- }
+ return response
And then handle the status check in the component that calls this function.
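For illustration only, a caller-side check could look roughly like the sketch below. The handler name is hypothetical, and it assumes `postFeedback` resolves to the raw `Response` (or a falsy value when the request itself fails):

```ts
// Hypothetical caller-side handling; not taken from the PR.
// postFeedback is assumed to resolve to the fetch Response, or null/undefined on network failure.
const handleSubmit = async (data: FeedbackFormValues) => {
  const response = await postFeedback(data)
  if (response?.ok) {
    toast({ title: 'Feedback Submitted', description: 'Thank you for your feedback!' })
  } else {
    toast({
      variant: 'destructive',
      title: 'Failed to submit feedback',
      description: 'Please try again later.',
    })
  }
}
```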
frontend/src/components/ui/Switch.tsx (1)
11-32: Well-implemented Switch component with proper ref forwarding.
The Switch component correctly uses forwardRef to allow forwarding refs to the hidden input element, which is important for accessibility and form integration. The component provides good customization options through the trackLabel and thumbLabel props.
Consider optimizing the conditional rendering of children.
- {children != null && <ChakraSwitch.Label>{children}</ChakraSwitch.Label>}
+ {children && <ChakraSwitch.Label>{children}</ChakraSwitch.Label>}
This simplifies the check since JavaScript treats null, undefined, and empty strings as falsy.
frontend/jest.setup.ts (1)
11-15: Duplicate ResizeObserver implementation
There's another identical implementation of `ResizeObserver` in the `beforeAll` hook at lines 31-35. Having two implementations of the same mock can lead to confusion and potential issues if they differ in the future.
Remove one of the duplicates:
-global.ResizeObserver = class {
-  observe() {}
-  unobserve() {}
-  disconnect() {}
-}
Since the implementation in the `beforeAll` hook already exists, I recommend removing this duplicate global declaration.
backend/apps/feedback/api/feedback.py (4)
41-42: Default parameter value for tsv_key
The default parameter for `tsv_key` is hardcoded as "feedbacks.tsv", but the same value is used again in `_upload_tsv_to_s3`. This could lead to inconsistency if one is updated but not the other.
Use a class constant or settings variable:
+ FEEDBACK_TSV_KEY = "feedbacks.tsv"
+
- def _get_or_create_tsv(self, s3_client, tsv_key="feedbacks.tsv"):
+ def _get_or_create_tsv(self, s3_client, tsv_key=None):
      """Get the existing TSV file or creates a new one if it doesn't exist."""
+     if tsv_key is None:
+         tsv_key = self.FEEDBACK_TSV_KEY
And update `_upload_tsv_to_s3` to use the same constant:
  def _upload_tsv_to_s3(self, s3_client, output):
      """Upload the updated TSV file back to S3."""
      output.seek(0)
      s3_client.put_object(
          Bucket=settings.AWS_STORAGE_BUCKET_NAME,
-         Key="feedbacks.tsv",
+         Key=self.FEEDBACK_TSV_KEY,
          Body=output.getvalue(),
          ContentType="text/tab-separated-values",
      )
58-58: Clarify the seek operation
This line might be confusing without additional context as noted in a past review.
Add a clarifying comment:
  # move the cursor to the end of the file
- output.seek(0, 2)
+ output.seek(0, 2)  # offset 0 relative to the end of the file (whence=2)
89-95: Redundant public wrapper methods
The public methods `write_feedback_to_tsv` and `get_s3_client` simply call their private counterparts without adding functionality. This creates unnecessary code and maintenance burden.
If these methods are meant for testing or external use, consider removing them and making the private methods more accessible:
- def write_feedback_to_tsv(self, writer, feedback_data):
-     """Public method to write feedback data to TSV format."""
-     self._write_feedback_to_tsv(writer, feedback_data)
-
- def get_s3_client(self):
-     """Public method to get the S3 client."""
-     return self._get_s3_client()
Alternatively, if they're needed for testing, you could use Python's `@property` decorator or document their purpose more clearly.
59-64: Missing logging for S3 errors
When a `NoSuchKey` error occurs, it silently creates a new TSV file without logging the event, which could make debugging difficult.
Add logging for better observability:
  import csv
  from datetime import datetime, timezone
  from io import StringIO
+ import logging

  import boto3
  import botocore
  from django.conf import settings
  from django.core.exceptions import ValidationError
  from rest_framework import status, viewsets
  from rest_framework.permissions import AllowAny
  from rest_framework.response import Response

+ logger = logging.getLogger(__name__)

  class FeedbackViewSet(viewsets.ModelViewSet):
      # ... existing code ...
      def _get_or_create_tsv(self, s3_client, tsv_key="feedbacks.tsv"):
          # ... existing code ...
          except botocore.exceptions.ClientError as e:
              if e.response["Error"]["Code"] == "NoSuchKey":
+                 logger.info(f"TSV file {tsv_key} not found, creating new file")
                  writer.writerow(
                      ["Name", "Email", "Message", "is_anonymous", "is_nestbot", "created_at"]
                  )
+             else:
+                 logger.error(f"Error accessing S3: {e}")
+                 raise
frontend/__tests__/unit/pages/Feedback.test.tsx (2)
8-14: Use optional chaining for ReCAPTCHA mock
The current implementation uses a conditional check with the `&&` operator, but optional chaining would be cleaner.
jest.mock('react-google-recaptcha', () => ({
  __esModule: true,
  default: jest.fn(({ onChange }) => (
    <button data-testid="recaptcha-button" onClick={() =>
-     onChange && onChange('test-token')
+     onChange?.('test-token')
    }>
      Verify ReCAPTCHA
    </button>
  )),
}))
🧰 Tools
🪛 Biome (1.9.4)
[error] 10-10: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
27-32: Nested ChakraProvider instances
The test renders the component with nested `ChakraProvider` instances, which is redundant and could lead to unexpected behavior.
  render(
    <ChakraProvider value={defaultSystem}>
-     <ChakraProvider value={defaultSystem}>
        <FeedbackPage />
-     </ChakraProvider>
    </ChakraProvider>
  )
Apply this change to all test cases that have the same nested providers.
frontend/src/components/FeedbackForm.tsx (5)
33-65: Consider improving error handling in the API response
The form submission logic has generic error handling that doesn't provide specific feedback to users about what went wrong. Consider leveraging the error details from the API response for more helpful user feedback.
- const responseData = await postFeedback(data)
- if (responseData.ok) {
+ try {
+   const responseData = await postFeedback(data)
+   if (responseData.ok) {
+     toast({
+       title: 'Feedback Submitted',
+       description: 'Thank you for your feedback!',
+     })
+     captchaRef.current.reset()
+     form.reset({
+       name: '',
+       email: '',
+       message: '',
+       is_anonymous: isAnonymous,
+       is_nestbot: false,
+     })
+   } else {
+     toast({
+       variant: 'destructive',
+       title: 'Failed to submit feedback',
+       description: responseData.error || 'Please try again later.',
+     })
+   }
+ } catch (error) {
    toast({
      variant: 'destructive',
      title: 'Failed to submit feedback',
-     description: 'Please try again later.',
+     description: error.message || 'Please try again later.',
    })
- }
+ }
73-102: Add form labeling to improve accessibility
While the form inputs have labels, they could be better associated with their inputs for screen readers.
  <Field label="Name">
    <Input
      placeholder="Your name"
      id="name"
+     aria-labelledby="name-label"
      className="rounded-lg border border-border p-4"
      {...form.register('name')}
    />
    {form.formState.errors.name && (
-     <p className="mt-1 text-xs text-red-500">{form.formState.errors.name.message}</p>
+     <p id="name-error" className="mt-1 text-xs text-red-500" aria-live="polite">{form.formState.errors.name.message}</p>
    )}
  </Field>
And similar changes for other form fields.
134-134: Add accessible error handling for ReCAPTCHA
The ReCAPTCHA component should have an accessible way for users to understand when validation fails.
- <ReCAPTCHA sitekey={RECAPTCHA_SITE_KEY} ref={captchaRef} />
+ <div>
+   <ReCAPTCHA
+     sitekey={RECAPTCHA_SITE_KEY}
+     ref={captchaRef}
+     aria-describedby="recaptcha-instructions"
+   />
+   <span id="recaptcha-instructions" className="sr-only">
+     Please complete the reCAPTCHA to verify you are not a robot.
+   </span>
+ </div>
1-16: Add PR issue context to component
Since this component implements the feedback functionality mentioned in the PR objectives (issue #309), consider adding a comment header to document this connection.
+ /**
+  * FeedbackForm Component
+  *
+  * Implements the feedback submission form functionality for issue #309.
+  * Uses Chakra UI components and ReCAPTCHA for spam prevention.
+  */
  import { Textarea, Input, Button } from '@chakra-ui/react'
67-146: Address navigation to feedback page mentioned in PR objectives
The PR objectives mentioned uncertainty about where to place navigation to the feedback page. Consider adding a comment about this or implementing a standard navigation pattern.
Would you like me to suggest some common patterns for adding navigation to this page, such as a footer link, navbar dropdown, or help menu item?
backend/tests/feedback/api/feedback_test.py (2)
14-16: Remove parentheses from pytest fixtures
Per Python best practices and static analysis hints, remove empty parentheses from the pytest.fixture decorators.
- @pytest.fixture()
+ @pytest.fixture
  def feedback_viewset():
      return FeedbackViewSet()

- @pytest.fixture()
+ @pytest.fixture
  def valid_feedback_data():
      return {

- @pytest.fixture()
+ @pytest.fixture
  def mock_s3_client():
      with patch("boto3.client") as mock_client:
Also applies to: 19-21, 30-32
🧰 Tools
🪛 Ruff (0.8.2)
14-14: Use `@pytest.fixture` over `@pytest.fixture()`. Remove parentheses (PT001)
77-77: Use realistic test data with timezone-aware datetime
The test uses a hardcoded date in 2025, which could cause the test to fail when that date is no longer in the future.
- current_time = datetime(2025, 1, 22, 10, 45, 34, 567884, tzinfo=timezone.utc)
+ current_time = datetime.now(timezone.utc)
126-139
: Add documentation for AWS configuration variablesThe AWS configuration variables would benefit from documentation explaining their purpose and requirements.
# AWS S3 Configuration +# Used for storing feedback submissions in a TSV file AWS_ACCESS_KEY_ID = values.SecretValue(environ_name="AWS_ACCESS_KEY_ID") AWS_SECRET_ACCESS_KEY = values.SecretValue(environ_name="AWS_SECRET_ACCESS_KEY") AWS_STORAGE_BUCKET_NAME = values.SecretValue(environ_name="AWS_STORAGE_BUCKET_NAME") AWS_S3_REGION_NAME = values.SecretValue(environ_name="AWS_S3_REGION_NAME") +# Key for the TSV file storing feedback data in the S3 bucket FEEDBACK_SHEET_KEY = values.SecretValue(environ_name="FEEDBACK_SHEET_KEY")
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
backend/poetry.lock
is excluded by!**/*.lock
📒 Files selected for processing (32)
backend/.env.example (1 hunks)
backend/apps/feedback/admin.py (1 hunks)
backend/apps/feedback/api/feedback.py (1 hunks)
backend/apps/feedback/api/urls.py (1 hunks)
backend/apps/feedback/apps.py (1 hunks)
backend/apps/feedback/models/models.py (1 hunks)
backend/pyproject.toml (1 hunks)
backend/settings/base.py (2 hunks)
backend/settings/urls.py (1 hunks)
backend/tests/feedback/admin_test.py (1 hunks)
backend/tests/feedback/api/feedback_test.py (1 hunks)
backend/tests/feedback/api/urls_test.py (1 hunks)
backend/tests/feedback/apps_test.py (1 hunks)
backend/tests/feedback/models/models_test.py (1 hunks)
frontend/.env.example (1 hunks)
frontend/__tests__/unit/App.test.tsx (1 hunks)
frontend/__tests__/unit/pages/ChapterDetails.test.tsx (1 hunks)
frontend/__tests__/unit/pages/Feedback.test.tsx (1 hunks)
frontend/jest.setup.ts (1 hunks)
frontend/package.json (1 hunks)
frontend/src/App.tsx (3 hunks)
frontend/src/api/postFeedbackData.ts (1 hunks)
frontend/src/components/FeedbackForm.tsx (1 hunks)
frontend/src/components/FeedbackFormModal.tsx (1 hunks)
frontend/src/components/Header.tsx (2 hunks)
frontend/src/components/ui/Field.tsx (1 hunks)
frontend/src/components/ui/Switch.tsx (1 hunks)
frontend/src/pages/Feedback.tsx (1 hunks)
frontend/src/pages/index.ts (2 hunks)
frontend/src/sentry.config.ts (1 hunks)
frontend/src/utils/credentials.ts (1 hunks)
frontend/src/utils/helpers/schema.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (5)
backend/apps/feedback/api/urls.py (1)
backend/apps/feedback/api/feedback.py (1)
FeedbackViewSet
(16-95)
frontend/src/App.tsx (1)
frontend/src/pages/Feedback.tsx (1)
FeedbackPage
(4-13)
frontend/src/components/Header.tsx (1)
frontend/src/components/FeedbackFormModal.tsx (1)
FeedbackFormModal
(15-45)
frontend/src/pages/Feedback.tsx (1)
frontend/src/components/FeedbackForm.tsx (1)
FeedbackForm
(17-146)
frontend/src/api/postFeedbackData.ts (1)
frontend/src/utils/credentials.ts (1)
API_URL
(1-1)
🪛 Ruff (0.8.2)
backend/tests/feedback/api/feedback_test.py
14-14: Use `@pytest.fixture` over `@pytest.fixture()`. Remove parentheses (PT001)
19-19: Use `@pytest.fixture` over `@pytest.fixture()`. Remove parentheses (PT001)
30-30: Use `@pytest.fixture` over `@pytest.fixture()`. Remove parentheses (PT001)
🪛 Biome (1.9.4)
frontend/__tests__/unit/pages/Feedback.test.tsx
[error] 10-10: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
🔇 Additional comments (28)
frontend/src/utils/credentials.ts (1)
10-10
:✅ Verification successful
LGTM: Added ReCAPTCHA site key
The addition of the RECAPTCHA_SITE_KEY environment variable follows the existing pattern in the file and supports the ReCAPTCHA integration mentioned in the PR description.
🏁 Script executed:
#!/bin/bash # Verify that the new environment variable is properly defined in .env.example grep -r "VITE_RECAPTCHA_SITE_KEY" frontend/.env.exampleLength of output: 80
LGTM: ReCAPTCHA credentials integration verified.
The RECAPTCHA_SITE_KEY environment variable is correctly implemented following the existing pattern in the file, and the verification confirms its presence in
frontend/.env.example
(with an empty default). No further changes are necessary.
backend/apps/feedback/apps.py (1)
4-6
: LGTM: Django app configuration is correctly implemented
The FeedbackConfig class follows Django's standard app configuration pattern with proper name and default_auto_field settings.
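For readers unfamiliar with the pattern being praised here, a standard Django app configuration of this shape looks roughly like the following sketch (the actual file in the PR may differ in details):

```python
# Sketch of a typical Django AppConfig, assuming the app lives at apps/feedback
from django.apps import AppConfig


class FeedbackConfig(AppConfig):
    # Default primary key type for models created in this app
    default_auto_field = "django.db.models.BigAutoField"
    # Dotted path Django uses to locate the app
    name = "apps.feedback"
```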
frontend/.env.example (1)
6-7
: LGTM: Required reCAPTCHA environment variables addedThese environment variables are necessary for the reCAPTCHA integration in the feedback form.
backend/pyproject.toml (1)
47-47
: Confirm boto3 dependency requirementThe boto3 library has been added, which is appropriate for AWS S3 integration for storing feedback data.
Note: A previous review asked "Why these new dependencies? I understand we need `boto3` only, right?" The current changes show only boto3 being added, which aligns with the feedback functionality's AWS integration needs.
frontend/src/pages/Feedback.tsx (1)
4-12
: LGTM: Clean implementation of feedback pageThe FeedbackPage component is well-structured and follows React best practices with proper layout and component composition.
As mentioned in the PR description, you're unsure about navigation to this page. Consider adding a feedback link in appropriate locations like the main navigation menu, footer, or user profile section based on the expected user flow.
frontend/src/pages/index.ts (1)
10-10
: Appropriate integration of FeedbackPage componentThe import and export additions for the FeedbackPage component follow the existing pattern in the file, making the new feedback page accessible throughout the application.
Also applies to: 30-30
frontend/package.json (1)
48-48
: New dependencies properly added for feedback functionality
The addition of `react-google-recaptcha` aligns with implementing ReCAPTCHA on the feedback page as mentioned in the PR objectives. The `next-themes` package is likely related to the transition from shadcn UI to Chakra UI for theming support.
Also applies to: 51-51
backend/apps/feedback/api/urls.py (1)
1-9
: API routing for feedback functionality appropriately configured
The URL routing setup for the feedback API follows Django REST Framework best practices by using SimpleRouter and registering the FeedbackViewSet with an appropriate basename.
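A minimal sketch of the routing pattern described above, assuming the viewset import path used elsewhere in this PR:

```python
# Sketch of DRF SimpleRouter registration for the feedback endpoint
from rest_framework.routers import SimpleRouter

from apps.feedback.api.feedback import FeedbackViewSet

router = SimpleRouter()
# basename is typically required when the viewset does not define a queryset
router.register("feedback", FeedbackViewSet, basename="feedback")

urlpatterns = router.urls
```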
backend/.env.example (1)
7-9
: AWS S3 configuration added for feedback storageThe new environment variables properly support the AWS S3 integration needed for storing feedback data.
However, I noticed from the provided code snippet that the `FeedbackViewSet._upload_tsv_to_s3()` method uses a hardcoded filename "feedbacks.tsv":
s3_client.put_object(
    Bucket=settings.AWS_STORAGE_BUCKET_NAME,
    Key="feedbacks.tsv",  # Hardcoded value
    Body=output.getvalue(),
    ContentType="text/tab-separated-values",
)
Consider making this configurable using the `DJANGO_FEEDBACK_SHEET_KEY` environment variable that's being added here.
frontend/src/sentry.config.ts (1)
6-6
: Good use of optional chaining for null/undefined safety.The addition of the optional chaining operator (
?.
) before callingtoLowerCase()
prevents potential runtime errors ifENVIRONMENT
is null or undefined, making the code more robust.frontend/__tests__/unit/App.test.tsx (1)
21-21
: Mock addition for FeedbackPage component is appropriate.The mock follows the established pattern for other page components, using a consistent testing approach with a
data-testid
attribute that will facilitate testing in the future.backend/settings/urls.py (1)
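As an illustration of that mocking pattern, a page mock might look roughly like the sketch below (the module path and component name are assumptions, not copied from the PR):

```tsx
// Hypothetical page mock following the described pattern
jest.mock('pages/Feedback', () => ({
  __esModule: true,
  default: () => <div data-testid="feedback-page">Feedback Page Mock</div>,
}))
```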
16-16
: Feedback API router integration follows established patterns.The feedback router is imported and integrated into the main router in the same way as the existing GitHub and OWASP routers, maintaining consistency in the API routing structure.
Also applies to: 24-24
backend/tests/feedback/apps_test.py (1)
1-11
: LGTM: Clear and concise test for the FeedbackConfig class.The test appropriately verifies that the feedback app is configured correctly by checking the app name from both the
FeedbackConfig
class directly and through Django's app registry.frontend/src/components/FeedbackFormModal.tsx (1)
15-45
: Good implementation of the feedback modal component.The component is well-structured and uses Chakra UI components effectively. The transition between icons on hover provides good user feedback.
frontend/src/utils/helpers/schema.ts (2)
3-9
: Well-designed schema for feedback validation.
The schema properly validates required fields and includes appropriate validation messages. The inclusion of an `is_anonymous` flag is a good design choice.
11-14
: Good approach for handling anonymous feedback.
Extending the main schema to make name and email optional for anonymous submissions is a clean approach.
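For illustration, the extension pattern described here might look roughly like this sketch (field names follow the PR; the exact shape and messages are assumptions):

```ts
import { z } from 'zod'

// Base schema: name and email required for non-anonymous feedback
export const feedbackFormSchema = z.object({
  name: z.string().min(1, 'Name is required'),
  email: z.string().email('Invalid email address'),
  message: z.string().min(1, 'Message is required'),
  is_anonymous: z.boolean().default(false),
  is_nestbot: z.boolean().default(false),
})

// Anonymous variant: relax the identity fields while keeping the rest
export const anonymousFeedbackFormSchema = feedbackFormSchema.extend({
  name: z.string().optional(),
  email: z.string().optional(),
})
```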
backend/tests/feedback/api/urls_test.py (1)
7-26
: Good parameterized test for router registration.The test effectively verifies that the feedback routes are correctly registered with the appropriate viewset. Using parameterization allows for easy expansion if more routes are added in the future.
frontend/src/components/Header.tsx (2)
13-13
: Good import placement for the FeedbackFormModal.The import for the FeedbackFormModal component is correctly placed with the other component imports.
95-95
: Strategic placement of the feedback button in the header.The FeedbackFormModal is well-positioned in the header's action buttons area, making it easily accessible from anywhere in the application. This placement follows the requested feedback submission implementation from the PR description.
frontend/src/App.tsx (3)
13-13
: Good import organization for the FeedbackPage component.The FeedbackPage component is correctly imported along with the other page components.
23-23
: Import repositioning looks reasonable.The UsersPage import was moved out of the destructured import block, which is fine as long as the component is still being used correctly in the routes.
49-49
: Good implementation of the feedback route.The feedback route is correctly implemented, matching the PR's goal to add feedback functionality. The implementation includes both a dedicated page route and a modal (as seen in Header.tsx), which addresses previous reviewer suggestions.
frontend/src/components/ui/Switch.tsx (1)
4-9
: Well-defined interface with good extensibility.The SwitchProps interface properly extends the Chakra UI switch props and provides additional custom properties for enhanced functionality. The props are well-named and their types are appropriately defined.
frontend/jest.setup.ts (1)
16-16
: Improved structuredClone implementationThe updated implementation correctly handles undefined values, which is an improvement over previous versions.
frontend/src/components/ui/Field.tsx (2)
4-9
: Well-structured interface definition
The `FieldProps` interface properly extends Chakra UI's Field component props while allowing for custom label handling.
11-26
: Well-implemented Field component
The component follows React best practices (see the sketch after this list) by:
- Using
forwardRef
to properly support ref forwarding
- Destructuring props for clean code
- Conditionally rendering UI elements based on provided props
- Properly spreading remaining props to the underlying component
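A rough sketch of that combination of practices is shown below. It is not the actual component source; the prop names and the Chakra namespaced components are assumptions based on the description:

```tsx
import * as React from 'react'
import { Field as ChakraField } from '@chakra-ui/react'

interface FieldProps extends React.ComponentProps<typeof ChakraField.Root> {
  label?: React.ReactNode
}

// Forward the ref to the underlying root element and spread the remaining props
export const Field = React.forwardRef<HTMLDivElement, FieldProps>(
  ({ label, children, ...rest }, ref) => (
    <ChakraField.Root ref={ref} {...rest}>
      {label && <ChakraField.Label>{label}</ChakraField.Label>}
      {children}
    </ChakraField.Root>
  )
)
```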
backend/tests/feedback/api/feedback_test.py (1)
59-71
:❓ Verification inconclusive
Verify the expected status code for validation errors
The test expects a 201 Created status code even when there's a validation error from AWS. This seems inconsistent with RESTful API practices which typically use 4xx status codes for validation errors.
🏁 Script executed:
#!/bin/bash # Check the implementation of FeedbackViewSet to verify error handling # Look for the create method implementation rg -A 15 "def create\(" --glob "*/feedback/api/feedback.py" # Also check if there's any error handling that might affect the response status rg "return Response\(" --glob "*/feedback/api/feedback.py"Length of output: 420
Below is a new verification step to help us get more context on how the
create
method in theFeedbackViewSet
is implemented. We need to confirm whether the 201 status code returned on AWS validation errors is intentional or if a 4xx status would be more appropriate. Please run the following shell script to search for theFeedbackViewSet
definition and itscreate
method implementation:After verifying the output of this script, we can definitively confirm if the behavior is as intended.
Action: Manually verify the error-handling behavior in the FeedbackViewSet's create method.
- Check that the file(s) containing
FeedbackViewSet
and itscreate
method (likely under a path similar tobackend/feedback/api/feedback.py
) implement error handling in a manner consistent with REST practices.- Confirm whether a 201 status code is indeed used in cases of AWS validation errors or if this is an oversight that might require reverting to a 4xx code.
Once you have reviewed the output and determined if the behavior is by design, please update the test or implementation as necessary.
🏁 Script executed:
#!/bin/bash echo "Searching for the FeedbackViewSet class definition..." rg -n "class FeedbackViewSet" . echo "Searching for the 'create' method implementation..." rg -n "def create\(" .Length of output: 664
Action: Confirm AWS Validation Error Response Handling
After verifying the location and basic structure of the
FeedbackViewSet
inbackend/apps/feedback/api/feedback.py
, it appears thecreate
method (at line 21) returns a 201 Created response even when a simulated AWS validation error (aClientError
with a "ValidationError" code) occurs. Given that RESTful best practices typically expect a 4xx status code for validation errors, please ensure that this behavior is intentional.
- If the design is to log the error and still record feedback successfully, leaving the 201 response in place may be acceptable.
- Otherwise, update the error handling either in the API or the test to reflect the correct 4xx status when AWS validation errors occur.
backend/settings/base.py (1)
44-51
: Consider ordering the LOCAL_APPS alphabetically
A previous review comment suggested ordering items alphabetically. Consider reordering the LOCAL_APPS for better maintainability.
LOCAL_APPS = ( "apps.common", "apps.core", - "apps.feedback", "apps.github", + "apps.feedback", "apps.owasp", "apps.slack", )
@@ -0,0 +1 @@
# Register your models here.
This file appears to be incomplete or misplaced
This file contains only a comment that's typically found in Django admin.py files, not test files. A test file should contain actual test cases to verify functionality. Either implement proper test cases for the admin configuration or consider removing this file if it's not needed.
export const postFeedback = async (data: FeedbackFormValues) => {
  const url = `${API_URL}/feedback/`
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(data),
    })

    if (!response.ok) {
      throw new Error(`Failed to submit feedback: ${response.statusText}`)
    }

    return response
  } catch (error) {
    logger.error('Failed to submit feedback', error)
    toast({
      variant: 'destructive',
      title: 'Failed to submit feedback',
      description: 'Please try again later.',
    })
  }
}
🛠️ Refactor suggestion
Add a return value from the catch block to maintain consistent function behavior.
The function doesn't return anything explicitly in the catch block, which will result in `undefined` being returned when an error occurs. This could cause issues for calling components expecting a response object.
} catch (error) {
logger.error('Failed to submit feedback', error)
toast({
variant: 'destructive',
title: 'Failed to submit feedback',
description: 'Please try again later.',
})
+ return null
}
📝 Committable suggestion
export const postFeedback = async (data: FeedbackFormValues) => {
  const url = `${API_URL}/feedback/`
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(data),
    })
    if (!response.ok) {
      throw new Error(`Failed to submit feedback: ${response.statusText}`)
    }
    return response
  } catch (error) {
    logger.error('Failed to submit feedback', error)
    toast({
      variant: 'destructive',
      title: 'Failed to submit feedback',
      description: 'Please try again later.',
    })
    return null
  }
}
def create(self, request):
    """Handle POST request for feedback submission."""
    try:
        s3_client = self._get_s3_client()
        output, writer = self._get_or_create_tsv(s3_client)
        self._write_feedback_to_tsv(writer, request.data)
        self._upload_tsv_to_s3(s3_client, output)
        return Response(status=status.HTTP_201_CREATED)
    except ValidationError:
        return Response({"error": "Invalid Credentials"}, status=status.HTTP_400_BAD_REQUEST)
Error handling could be more specific
The catch block only handles `ValidationError`, but other exceptions (like network issues or S3 failures) are not caught, which could expose internal errors to users.
Implement more comprehensive error handling:
def create(self, request):
"""Handle POST request for feedback submission."""
try:
s3_client = self._get_s3_client()
output, writer = self._get_or_create_tsv(s3_client)
self._write_feedback_to_tsv(writer, request.data)
self._upload_tsv_to_s3(s3_client, output)
return Response(status=status.HTTP_201_CREATED)
except ValidationError:
return Response({"error": "Invalid Credentials"}, status=status.HTTP_400_BAD_REQUEST)
+ except botocore.exceptions.ClientError as e:
+ return Response(
+ {"error": "Failed to process feedback"},
+ status=status.HTTP_500_INTERNAL_SERVER_ERROR
+ )
+ except Exception:
+ return Response(
+ {"error": "An unexpected error occurred"},
+ status=status.HTTP_500_INTERNAL_SERVER_ERROR
+ )
📝 Committable suggestion
def create(self, request):
    """Handle POST request for feedback submission."""
    try:
        s3_client = self._get_s3_client()
        output, writer = self._get_or_create_tsv(s3_client)
        self._write_feedback_to_tsv(writer, request.data)
        self._upload_tsv_to_s3(s3_client, output)
        return Response(status=status.HTTP_201_CREATED)
    except ValidationError:
        return Response({"error": "Invalid Credentials"}, status=status.HTTP_400_BAD_REQUEST)
    except botocore.exceptions.ClientError as e:
        return Response(
            {"error": "Failed to process feedback"},
            status=status.HTTP_500_INTERNAL_SERVER_ERROR
        )
    except Exception:
        return Response(
            {"error": "An unexpected error occurred"},
            status=status.HTTP_500_INTERNAL_SERVER_ERROR
        )
def _write_feedback_to_tsv(self, writer, feedback_data):
    """Write the new feedback data to the TSV file."""
    writer.writerow(
        (
            feedback_data["name"],
            feedback_data["email"],
            feedback_data["message"],
            feedback_data["is_anonymous"],
            feedback_data["is_nestbot"],
            datetime.now(timezone.utc).isoformat(),
        )
    )
🛠️ Refactor suggestion
Consider input sanitization for TSV data
The feedback data is directly written to the TSV without sanitizing inputs. This could cause issues if user inputs contain tab characters, newlines, or other special characters that might break the TSV format.
Implement input sanitization:
def _write_feedback_to_tsv(self, writer, feedback_data):
"""Write the new feedback data to the TSV file."""
+ # Sanitize inputs to prevent TSV injection
+ sanitized_data = {
+ key: str(value).replace('\t', ' ').replace('\n', ' ').replace('\r', ' ')
+ if isinstance(value, str) else value
+ for key, value in feedback_data.items()
+ }
+
writer.writerow(
(
- feedback_data["name"],
- feedback_data["email"],
- feedback_data["message"],
- feedback_data["is_anonymous"],
- feedback_data["is_nestbot"],
+ sanitized_data["name"],
+ sanitized_data["email"],
+ sanitized_data["message"],
+ sanitized_data["is_anonymous"],
+ sanitized_data["is_nestbot"],
datetime.now(timezone.utc).isoformat(),
)
)
📝 Committable suggestion
def _write_feedback_to_tsv(self, writer, feedback_data):
    """Write the new feedback data to the TSV file."""
    # Sanitize inputs to prevent TSV injection
    sanitized_data = {
        key: str(value).replace('\t', ' ').replace('\n', ' ').replace('\r', ' ')
        if isinstance(value, str) else value
        for key, value in feedback_data.items()
    }
    writer.writerow(
        (
            sanitized_data["name"],
            sanitized_data["email"],
            sanitized_data["message"],
            sanitized_data["is_anonymous"],
            sanitized_data["is_nestbot"],
            datetime.now(timezone.utc).isoformat(),
        )
    )
await waitFor(() => {
  expect(screen.getByPlaceholderText('Your name')).toHaveValue('John Doe')
  expect(screen.getByPlaceholderText('[email protected]')).toHaveValue('[email protected]')
  expect(screen.getByPlaceholderText('Your feedback here...')).toHaveValue('Great job!')
})
Incorrect assertion for form reset
The test asserts that form fields still have values after submission, but a reset test should verify that fields are cleared.
await waitFor(() => {
- expect(screen.getByPlaceholderText('Your name')).toHaveValue('John Doe')
- expect(screen.getByPlaceholderText('[email protected]')).toHaveValue('[email protected]')
- expect(screen.getByPlaceholderText('Your feedback here...')).toHaveValue('Great job!')
+ expect(screen.getByPlaceholderText('Your name')).toHaveValue('')
+ expect(screen.getByPlaceholderText('[email protected]')).toHaveValue('')
+ expect(screen.getByPlaceholderText('Your feedback here...')).toHaveValue('')
})
📝 Committable suggestion
await waitFor(() => {
  expect(screen.getByPlaceholderText('Your name')).toHaveValue('')
  expect(screen.getByPlaceholderText('[email protected]')).toHaveValue('')
  expect(screen.getByPlaceholderText('Your feedback here...')).toHaveValue('')
})
await waitFor(() => {
  expect(mockPostFeedback).toHaveBeenCalledTimes(0)
})
Incorrect expectation in successful submission test
The test expects `mockPostFeedback` to be called 0 times, which is incorrect for a successful submission test.
await waitFor(() => {
- expect(mockPostFeedback).toHaveBeenCalledTimes(0)
+ expect(mockPostFeedback).toHaveBeenCalledTimes(1)
})
📝 Committable suggestion
await waitFor(() => {
  expect(mockPostFeedback).toHaveBeenCalledTimes(1)
})
await waitFor(() => {
  expect(screen.getByText('Anonymous Feedback')).toBeInTheDocument()
})
Improper assertion for error state
The test checks for "Anonymous Feedback" text to verify an error state, which doesn't actually confirm that an error message is displayed to the user.
await waitFor(() => {
- expect(screen.getByText('Anonymous Feedback')).toBeInTheDocument()
+ expect(screen.getByText(/error|failed/i)).toBeInTheDocument()
})
📝 Committable suggestion
await waitFor(() => {
  expect(screen.getByText(/error|failed/i)).toBeInTheDocument()
})
class TestFeedbackViewSet:
    def test_create_success(self, feedback_viewset, valid_feedback_data, mock_s3_client):
        """Test successful feedback submission."""
        request = Mock()
        request.data = valid_feedback_data

        mock_s3_client.get_object.return_value = {"Body": Mock(read=lambda: b"")}

        response = feedback_viewset.create(request)

        assert response.status_code == status.HTTP_201_CREATED

        mock_s3_client.put_object.assert_called_once()
        put_call_kwargs = mock_s3_client.put_object.call_args[1]
        assert put_call_kwargs["Bucket"] == settings.AWS_STORAGE_BUCKET_NAME
        assert put_call_kwargs["Key"] == "feedbacks.tsv"
        assert "Body" in put_call_kwargs
        assert put_call_kwargs["ContentType"] == "text/tab-separated-values"

    def test_create_validation_error(self, feedback_viewset, valid_feedback_data, mock_s3_client):
        """Test feedback submission with validation error."""
        request = Mock()
        request.data = valid_feedback_data

        mock_s3_client.get_object.side_effect = botocore.exceptions.ClientError(
            {"Error": {"Code": "ValidationError", "Message": "Invalid credentials"}}, "GetObject"
        )

        response = feedback_viewset.create(request)

        assert response.status_code == status.HTTP_201_CREATED

    def test_write_feedback_to_tsv(self, feedback_viewset, valid_feedback_data):
        """Test writing feedback data to TSV format."""
        output = StringIO()
        writer = csv.writer(output, delimiter="\t")

        current_time = datetime(2025, 1, 22, 10, 45, 34, 567884, tzinfo=timezone.utc)
        with patch("django.utils.timezone.now", return_value=current_time):
            feedback_viewset.write_feedback_to_tsv(writer, valid_feedback_data)

        output.seek(0)
        written_data = output.getvalue().strip().split("\t")
        assert written_data[0] == valid_feedback_data["name"]
        assert written_data[1] == valid_feedback_data["email"]
        assert written_data[2] == valid_feedback_data["message"]
        assert written_data[3] == str(valid_feedback_data["is_anonymous"])
        assert written_data[4] == str(valid_feedback_data["is_nestbot"])

    @patch("boto3.client")
    def test_get_s3_client(self, mock_boto3, feedback_viewset):
        """Test S3 client initialization."""
        feedback_viewset.get_s3_client()

        mock_boto3.assert_called_once_with(
            "s3",
            aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
            aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
            region_name=settings.AWS_S3_REGION_NAME,
        )
🛠️ Refactor suggestion
Add test for anonymous feedback submission
The test suite is missing a specific test for anonymous feedback submission, which is a key feature of the functionality.
def test_create_anonymous_feedback(self, feedback_viewset, valid_feedback_data, mock_s3_client):
    """Test anonymous feedback submission."""
    # Modify data to be anonymous
    anon_data = valid_feedback_data.copy()
    anon_data["is_anonymous"] = True
    anon_data["name"] = ""
    anon_data["email"] = ""
    request = Mock()
    request.data = anon_data
    mock_s3_client.get_object.return_value = {"Body": Mock(read=lambda: b"")}
    response = feedback_viewset.create(request)
    assert response.status_code == status.HTTP_201_CREATED
    # Verify the S3 object contains empty name and email fields
    put_call_kwargs = mock_s3_client.put_object.call_args[1]
    body_content = put_call_kwargs["Body"]
    assert "\t\t" in body_content  # Check for empty name and email columns
# AWS S3 Configuration
AWS_ACCESS_KEY_ID = values.SecretValue(environ_name="AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = values.SecretValue(environ_name="AWS_SECRET_ACCESS_KEY")
AWS_STORAGE_BUCKET_NAME = values.SecretValue(environ_name="AWS_STORAGE_BUCKET_NAME")
AWS_S3_REGION_NAME = values.SecretValue(environ_name="AWS_S3_REGION_NAME")
FEEDBACK_SHEET_KEY = values.SecretValue(environ_name="FEEDBACK_SHEET_KEY")

AWS = {
    "ACCESS_KEY_ID": AWS_ACCESS_KEY_ID,
    "SECRET_ACCESS_KEY": AWS_SECRET_ACCESS_KEY,
    "STORAGE_BUCKET_NAME": AWS_STORAGE_BUCKET_NAME,
    "S3_REGION_NAME": AWS_S3_REGION_NAME,
    "FEEDBACK_SHEET_KEY": FEEDBACK_SHEET_KEY,
}
💡 Verification agent
🧩 Analysis chain
Verify environment variables are added to documentation
Ensure that the new AWS environment variables are added to the project documentation and .env.example
files.
🏁 Script executed:
#!/bin/bash
# Check if AWS environment variables are documented in .env.example
# Look for AWS variables in .env.example
if [ -f "backend/.env.example" ]; then
echo "Checking .env.example for AWS environment variables:"
grep -E "AWS_|FEEDBACK_" backend/.env.example
else
echo "backend/.env.example file not found"
fi
# Also check for any documentation on setting up AWS for the feedback feature
fd -t f -e md | xargs grep -l "AWS" | head -5
Length of output: 404
AWS Environment Variables Documentation Consistency
The .env.example file now includes the AWS variables, but they're prefixed with `DJANGO_` (e.g. `DJANGO_AWS_ACCESS_KEY_ID`), while the code in `backend/settings/base.py` references them without the prefix (e.g. `AWS_ACCESS_KEY_ID`). Additionally, a search for AWS setup guidance in the documentation didn't return any clear instructions for the feedback feature.
- .env.example Consistency: Verify that the environment variable names match between the code and the `.env.example` file. If the prefix isn't intentional, update one side to ensure consistency (see the sketch below).
- Documentation Update: Confirm that the project documentation clearly explains how to configure these AWS environment variables for the feedback feature.
Resolves #309
feedback app in the backend
Recording.2025-01-26.215215.mp4
Note: I have not added any buttons to go to the /feedback page as I couldn't figure out where to put it. So I would like the reviewers' suggestions on it.
Thanks