Add feedback to API #1003
Conversation
The API call seems to work correctly, but I had some trouble writing a proper test. The relevant error also occurs when testing locally: https://travis-ci.org/jupyter/nbgrader/jobs/416929082#L1363
Thanks! From a cursory glance this looks great; I'll have a closer look over the next few days.
Looking good!

For the test, I think rather than adding new files `autograded-unchanged.ipynb` and `autograded-changed.ipynb`, you should use the existing `submitted-unchanged.ipynb` and `submitted-changed.ipynb` and run the autograder on them to produce the autograded versions (you will also need to run the `assign` method to populate the database---have a look at `test_autograde` for an example).

I would also prefer not to check verbatim the files produced by the feedback step, as this comparison will fail if anything changes upstream in how nbconvert produces the HTML. So I think it's sufficient to check that the files exist rather than comparing them directly (and so you can remove `feedback-unchanged.html` and `feedback-changed.html` from the PR as well); a rough sketch of this flow follows.

Let me know if you have any more questions or if you need any help!
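A heavily hedged sketch of that test flow; the fixture names (`api`, `course_dir`), the assignment/student IDs, and the exact feedback output paths are hypothetical stand-ins, not the real nbgrader test harness.

```python
import os

# Hypothetical pytest-style test; `api` and `course_dir` stand in for
# whatever fixtures the nbgrader test suite actually provides.
def test_feedback(api, course_dir):
    # Populate the database and produce autograded notebooks from the
    # existing submitted-*.ipynb fixtures (cf. test_autograde).
    api.assign("ps1")
    api.autograde("ps1", student_id="foo")

    # Generate feedback, then only assert that the HTML files exist;
    # comparing them verbatim would break whenever nbconvert's HTML
    # output changes upstream.
    api.feedback("ps1")
    for name in ("submitted-unchanged.html", "submitted-changed.html"):
        assert os.path.exists(
            os.path.join(course_dir, "feedback", "foo", "ps1", name))
```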
```python
with temp_attrs(self.coursedir, assignment_id=assignment_id):
    app = Feedback(coursedir=self.coursedir, parent=self)
    app.force = force
    app.create_assignment = create
```
I think `create_assignment` is only relevant for `nbgrader assign` and won't do anything for `nbgrader feedback`, so you should be able to remove this line (see the sketch below).
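With that line dropped, the excerpt would reduce to something like this sketch; the trailing `capture_log` call is an assumption, mirrored from how the neighbouring `assign` method ends in the diff below.

```python
with temp_attrs(self.coursedir, assignment_id=assignment_id):
    app = Feedback(coursedir=self.coursedir, parent=self)
    app.force = force
    # create_assignment dropped: it only affects `nbgrader assign`
    return capture_log(app)
```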
```diff
@@ -861,6 +861,36 @@ def assign(self, assignment_id, force=True, create=True):
         app.create_assignment = create
         return capture_log(app)

+    def feedback(self, assignment_id, force=True, create=True):
```
I think this function should also take a `student_id` (as with the `autograde` method) and only generate feedback for one student at a time, roughly as sketched below.
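A sketch of the suggested signature with both review comments folded in (the `create` parameter dropped along with `create_assignment`, and a `student_id` added); whether `temp_attrs` accepts a `student_id` keyword here is an assumption based on the parallel with `autograde`.

```python
def feedback(self, assignment_id, student_id, force=True):
    # Generate feedback for a single student's submission of one
    # assignment, mirroring the autograde method's signature.
    with temp_attrs(self.coursedir,
                    assignment_id=assignment_id,
                    student_id=student_id):
        app = Feedback(coursedir=self.coursedir, parent=self)
        app.force = force
        return capture_log(app)
```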
Thank you for the detailed feedback! I will look into making these changes ASAP. FYI @ttimbers
Hi @samhinshaw, just wanted to see if you ever had a chance to look more at this? I think we may be looking at generating feedback from the formgrader during the Edinburgh hackathon over the next few days (#1079), and this would be a prerequisite for that, so it would be great to get this merged. If you haven't had time to work on it further, that's totally fine---we can use what you've done as a starting point and finish it up for you if you'd like 🙂
@jhamrick - @samhinshaw is no longer actively working with me on the project this PR relates to. However, we would still love this functionality; we've just had no one able to work on it. So, if this can be worked on at the Edinburgh hackathon, that would be amazing!
Thanks for the update @ttimbers! It sounds like something a few other people want as well, so hopefully it will get checked off the list 🙂
This is going to be superseded by #1092, so I am going to go ahead and close this. Thanks for the work that you put into this originally @samhinshaw!
Add the `feedback()` command to the nbgrader API, for programmatic feedback generation in Python. A hypothetical usage sketch follows.
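For illustration only; the import path and the shape of the return value (a dict produced by `capture_log`) are assumptions and may differ from the merged implementation.

```python
from nbgrader.apps.api import NbGraderAPI  # assumed import path

api = NbGraderAPI()
# Generate HTML feedback for the graded submissions of "ps1".
result = api.feedback("ps1")
print(result["success"])  # assumed capture_log-style return value
print(result["log"])
```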