✨ 🥜 Toohak 🥜 ✨
[[TOC]]
- 25/09: A few remaining references to 'assumptions' removed from the spec
- 10/10: Updated section 4.9 to try and elaborate in more detail about sessions.
- 11/10: Few minor system updates; removed a 403 error from swagger for some routes because they weren't applicable
- 14/10: Correction at the top of 4.1; unnecessary 400 error condition for "Quiz ID does not refer to a quiz that this user owns" removed from a number of places where this is covered by 403.
- 22/10: `/v1/admin/quiz/trash/empty` has had 400 and 403 error descriptions fixed up; removal of "All sessions for this quiz must be in END state" references.
- 25/10: Clarity that errors are thrown in order 401, 403, 400. This was in the spec and the swagger had this order in source code, but swagger was not rendering in a way to reflect that.
- 01/11:
  - Replaced section 5.9 with a section on deployment (and removed the previous version which required you to store or upload the image file from the URL locally - now you can just store the URL and serve it again)
  - Added to section 5 a clarification about interoperability between iteration 2 and iteration 3 routes.
  - Added missing 401 error to `/v1/admin/quiz/{quizid}/sessions`
  - Clarified interface design can follow a similar pattern to the swagger docs
  - Clarified how to get the 10% bonus marks for typescript compliance
- 09/11:
  - Clarified that question position starts at 1
  - Clarified that the maximum number of sessions not in END state is for a particular quiz (only clarified for the benefit of setting a standard, but that will not be tested this term)
  - Removed "SKIP_COUNTDOWN" from the State enum
- 10/11: Changed the 403 error description for sessions from "Valid token is provided, but user is not authorised to view this session" to "Valid token is provided, but user is not an owner of this quiz"
- 12/11:
  - For `POST /v2/admin/quiz/{quizid}/question` and `PUT /v2/admin/quiz/{quizid}/question/{questionid}` we have removed the requirement to ensure that the file is valid by downloading it and checking its actual file type. Instead we've just replaced it with a string check of the file URL itself (with no need to fetch/request/download it). If you haven't implemented the original one, do not implement it. If you have already implemented it, you can talk to your tutor about putting it in as bonus marks.
- 14/11: Info about where to upload your deployment URL shared (don't stress, you can just email your tutor too)
- Demonstrate effective use of software development tools to build full-stack end-user applications.
- Demonstrate effective use of static testing, dynamic testing, and user testing to validate and verify software systems.
- Understand key characteristics of a functioning team in terms of understanding professional expectations, maintaining healthy relationships, and managing conflict.
- Demonstrate an ability to analyse complex software systems in terms of their data model, state model, and more.
- Understand the software engineering life cycle in the context of modern and iterative software development practices in order to elicit requirements, design systems thoughtfully, and implement software correctly.
- Demonstrate an understanding of how to use version control and continuous integration to sustainably integrate code from multiple parties.
UNSW has been having severe issues with lecture attendance - students just aren't coming to class, and they're citing that class isn't interesting enough for them.
UNSW must resort to giving into the limited attention span of students and gamify lecture and tutorial time as much as possible - by doing interactive and colourful quizzes.
However, instead of licensing well-built and tested software, UNSW is hoping to use the pool of extremely talented and interesting COMP1531 students to create their own version to distribute around campus for free. The chosen game to "take inspiration from" is Kahoot.
The 23T3 cohort of COMP1531 students will build the backend Javascript server for a new quiz game platform, Toohak. We plan to task future COMP6080 students to build the frontend for Toohak, something you won't have to worry about.
Toohak is the questionably-named quiz tool that allows admins to create quiz games, and players to join (without signing up) to participate and compete.
We have already specified a common interface for the frontend and backend to operate on. This allows both courses to go off and do their own development and testing under the assumption that both parties will comply with the common interface. This is the interface you are required to use.
The specific capabilities that need to be built for this project are described in the interface at the bottom. This is clearly a lot of features, but not all of them are to be implemented at once.
(For legal reasons, this is a joke).
We highly recommend creating and playing a Kahoot game to better understand your task:
- To sign up and log in as an admin, go to kahoot.com.
- To join a game created by an admin, go to kahoot.it.
You can watch the iteration 0 introductory video here from a previous term. This video is not required watching (the specification is clear by itself) though many students find it useful as a starting point.
This iteration is designed as a warm-up to help you setup your project, learn Git and project management practises (see Marking Criteria), and understand how your team works together.
In this iteration, you are expected to:
- Write stub code for the basic functionality of Toohak. The basic functionality is defined as the `adminAuth*` and `adminQuiz*` capabilities/functions, as per the interface section below (2.2).
  - A stub is a function declaration and sample return value (see example below). Do NOT write the implementation for the stubbed functions. That is for the next iteration. In this iteration you are just focusing on setting up your function declarations and getting familiar with Git.
  - Each team member must stub AT LEAST 1 function each.
  - Function stub locations should be inside files named with a corresponding prefix, e.g. `adminQuiz*` inside `quiz.js`.
  - Return values should match the interface table below (see example below).
```javascript
// Sample stub for the adminAuthLogin function
// Return stub value matches table below
function adminAuthLogin(email, password) {
  return {
    authUserId: 1,
  };
}
```

- Design a structure to store all the data needed for Toohak, and place this in the code block inside the `data.md` file. Specifically, you must consider how to store information about users and quizzes, and populate ONE example `user` and `quiz` in your data structure (any values are fine - see example below).
  - Use the interface table (2.2) to help you decide what data might need to be stored. This will require making some educated guesses about what would be required to be stored in order to return the types of data you see. Whilst the data structure you describe in data.md might be similar to the interface, it is a different thing to the interface. If you're still confused, think of the interface like a restaurant menu, and `data.md` like where the food is stored in the back. It's all the same food, but the menu is about how it's packaged up and received from the kitchen, and `data.md` is describing the structure of how it's all stored behind the scenes.
  - As functions are called, this structure would be populated with more users and quizzes, so consider this in your solution.
  - Focus on the structure itself (object/list composition), rather than the example contents.
```javascript
// Example values inside of a 'user' object might look like this
// NOTE: this object's data is not exhaustive,
// - you may need more/fewer fields stored as you complete this project.
// We won't be marking you down for missing/adding too much sample data in this iteration.
{
  uId: 1,
  nameFirst: 'Rani',
  nameLast: 'Jiang',
  email: 'ranivorous@gmail.com',
}
```
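The spec also asks you to populate ONE example quiz. A matching `quiz` object might look like the sketch below (the field names here, such as `ownerId`, are illustrative guesses, not requirements):

```typescript
// Example values inside of a 'quiz' object might look like this
// NOTE: field names are illustrative only - design whatever your team needs
{
  quizId: 1,
  name: 'My Quiz',
  description: 'This is my quiz',
  ownerId: 1, // hypothetical link back to the user who created it
}
```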
- Follow best practices for git and teamwork as discussed in lectures.
  - You are expected to have at least 1 meeting with your group, and document the meeting(s) in meeting minutes which should be stored at a timestamped location in your repo (e.g. uploading a word doc/pdf or writing in the GitLab repo Wiki after each meeting).
  - For this iteration each team member will need to make a minimum of 1 merge request per person in your group into the `master` branch.
  - 1 merge request per function must be made (9 in total).
  - Check out the lab on Git from week 1 to get familiar with using Git.
The following are strings: email, password, nameFirst, nameLast, name, description.
The following are integers: authUserId, quizId.
In terms of file structure:
- All functions starting with `adminAuth` or `adminUser` go in `auth.js`
- All functions starting with `adminQuiz` go in `quiz.js`
- `clear` goes in `other.js`
| Name & Description | Data Types |
|---|---|
| `adminAuthRegister`<br><br>Register a user with an email, password, and names, then returns their `authUserId` value. | **Parameters:** `( email, password, nameFirst, nameLast )`<br><br>**Return object:** `{ authUserId: 1 }` |
| `adminAuthLogin`<br><br>Given a registered user's email and password, returns their `authUserId` value. | **Parameters:** `( email, password )`<br><br>**Return object:** `{ authUserId: 1 }` |
| `adminUserDetails`<br><br>Given an admin user's `authUserId`, return details about the user.<br><br>"name" is the first and last name concatenated with a single space between them. | **Parameters:** `( authUserId )`<br><br>**Return object:** `{ user: { userId: 1, name: 'Hayden Smith', email: 'hayden.smith@unsw.edu.au', numSuccessfulLogins: 3, numFailedPasswordsSinceLastLogin: 1 } }` |
| `adminQuizList`<br><br>Provide a list of all quizzes that are owned by the currently logged in user. | **Parameters:** `( authUserId )`<br><br>**Return object:** `{ quizzes: [ { quizId: 1, name: 'My Quiz' } ] }` |
| `adminQuizCreate`<br><br>Given basic details about a new quiz, create one for the logged in user. | **Parameters:** `( authUserId, name, description )`<br><br>**Return object:** `{ quizId: 2 }` |
| `adminQuizRemove`<br><br>Given a particular quiz, permanently remove the quiz. | **Parameters:** `( authUserId, quizId )`<br><br>**Return object:** `{ }` (empty object) |
| `adminQuizInfo`<br><br>Get all of the relevant information about the current quiz. | **Parameters:** `( authUserId, quizId )`<br><br>**Return object:** `{ quizId: 1, name: 'My Quiz', timeCreated: 1683125870, timeLastEdited: 1683125871, description: 'This is my quiz' }` |
| `adminQuizNameUpdate`<br><br>Update the name of the relevant quiz. | **Parameters:** `( authUserId, quizId, name )`<br><br>**Return object:** `{ }` (empty object) |
| `adminQuizDescriptionUpdate`<br><br>Update the description of the relevant quiz. | **Parameters:** `( authUserId, quizId, description )`<br><br>**Return object:** `{ }` (empty object) |
| `clear`<br><br>Reset the state of the application back to the start. | **Parameters:** `( )` (no parameters)<br><br>**Return object:** `{ }` (empty object) |
| Section | Weighting | Criteria |
|---|---|---|
| Automarking (Implementation) | 40% | |
| Documentation | 20% | |
| Git Practices | 30% | |
| Project Management & Teamwork | 10% | |
We have provided a dryrun for iteration 0 consisting of one test for each function. Passing these tests means you have a correct implementation for your stubs, and have earned the marks for the automarking component iteration 0.
To run the dryrun, you should be on a CSE machine (i.e. using VLAB or ssh'ed into CSE), be in the root directory of your project (e.g. /project-backend), and use the command:

```
1531 dryrun 0
```

Please see section 6 for information on due date and on how you will demonstrate this iteration.
You can watch the iteration 1 introductory video here. Please note that this video was recorded in 23T2, and there are changes in 23T3. You should consult this spec for changes. This video is not required watching (the specification is clear by itself), though many students will watch this for the practical demo of how to get started.
In this iteration, you are expected to:
- Write tests for and implement the basic functionality of Toohak. The basic functionality is defined as the `adminAuth*` and `adminQuiz*` capabilities/functions, as per the interface section below.
  - Test files you add should all be in the form `*.test.js`.
  - Do NOT attempt to try and write or start a web server. Don't overthink how these functions are meant to connect to a frontend yet. That is for the next iteration. In this iteration you are just focusing on the basic backend functionality.
- Follow best practices for git, project management, and effective teamwork, as discussed in lectures.
  - The marking will be heavily biased toward how well you follow good practices and work together as a team. Just having a "working" solution at the end is not, on its own, sufficient to even get a passing mark.
  - You need to use the GitLab Issue Boards (or similar) for your task tracking and allocation. Spend some time getting to know how to use the taskboard. If you would like to use another collaborative task tracker e.g. Jira, Trello, Airtable, etc. you must first get approval from your tutor and grant them administrator access to your team board.
  - You are expected to meet regularly with your group and document the meetings via meeting minutes, which should be stored at a timestamped location in your repo (e.g. uploading a word doc/pdf or writing in the GitLab repo Wiki after each meeting).
  - You should have regular standups and be able to demonstrate evidence of this to your tutor.
  - For this iteration, you will need to collectively make a minimum of 12 merge requests into `master`.
- Nearly all of the functions will likely have to reference some "data source" to store information. E.g. if you register two users and create two quizzes, all of that information needs to be "stored" somewhere. The most important thing for iteration 1 is not to overthink this problem.
Firstly, you should not use an SQL database, or something like firebase.
Secondly, you don't need to make anything persist. What that means is that if you run all your tests, and then run them again later, it's OK for the data to be "fresh" each time you run the tests. We will cover persistence in another iteration.
Inside src/dataStore.js we have provided you with an object called data which will contain the information that you will need to access across multiple functions. An explanation of how to get and set the data is in dataStore.js. You will need to determine the internal structure of the object. If you wish, you are allowed to modify this data structure.
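As a rough sketch (the exact contents of the provided file may differ slightly), the pattern in `dataStore.js` looks something like this:

```typescript
// YOU SHOULD MODIFY THIS OBJECT'S INTERNAL STRUCTURE
let data = {
  users: [],
  quizzes: [],
};

// Use getData() to access the data
function getData() {
  return data;
}

// Use setData(newData) to pass in the entire modified data object
function setData(newData) {
  data = newData;
}

export { getData, setData };
```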
For example, you could define a structure in a file that is empty, and as functions are called, the structure populates and fills up like the one below:
```javascript
let data = {
users: [
{
id: 1,
nameFirst: 'user1',
},
{
id: 2,
nameFirst: 'user2',
},
],
quizzes: [
{
id: 1,
name: 'quiz1',
},
{
id: 2,
name: 'quiz2',
},
],
}
```

You should first approach this project by considering its distinct "features". Each feature should add some meaningful functionality to the project, but still be as small as possible. You should aim to size features as the smallest amount of functionality that adds value without making the project more unstable. For each feature you should:
- Create a new branch.
- Write function stub/s for your feature. This may have been completed in iteration 0 for some functions.
- Write tests for that feature and commit them to the branch. These will fail as you have not yet implemented the feature.
- Implement that feature.
- Make any changes to the tests such that they pass with the given implementation. You should not have to do a lot here. If you find that you are, you're not spending enough time on your tests.
- Create a merge request for the branch.
- Get someone in your team who did not work on the feature to review the merge request.
- Fix any issues identified in the review.
- After the merge request is approved by a different team member, merge the merge request into `master`.
For this project, a feature is typically sized somewhere between a single function, and a whole file of functions (e.g. auth.js). It is up to you and your team to decide what each feature is.
There is no requirement that each feature is implemented by only one person. In fact, we encourage you to work together closely on features, especially to help those who may still be coming to grips with Javascript.
Please pay careful attention to the following:
- We want to see evidence that you wrote your tests before writing your implementation. As noted above, the commits containing your initial tests should appear before your implementation for every feature branch. If we don't see this evidence, we will assume you did not write your tests first and your mark will be reduced.
- Merging in merge requests with failing tests is very bad practice. Not only does this interfere with your team's ability to work on different features at the same time, and thus slow down development, it is something you will be penalised for in marking.
- Similarly, merging in branches with untested features is also bad practice. We will assume, and you should too, that any code without tests does not work.
- Pushing directly to `master` is not possible for this repo. The only way to get code into `master` is via a merge request. If you discover you have a bug in `master` that got through testing, create a bugfix branch and merge that in via a merge request.
- As is the case with any system or functionality, there will be some things that you can test extensively, some things that you can test sparsely/fleetingly, and some things that you can't meaningfully test at all. You should aim to test as extensively as you can, and make judgements as to what things fall into what categories.
The tests you write should be as small and independent as possible. This makes it easier to identify why a particular test may be failing. Similarly, try to make it clear what each test is testing for. Meaningful test names and documentation help with this. An example of how to structure tests has been done in:
- `src/echo.js`
- `src/echo.test.js`
The echo functionality is tested, both for correct behaviour and for failing behaviour. As echo is relatively simple functionality, only 2 tests are required. For the larger features, you will need many tests to account for many different behaviours.
Your tests should be black box unit tests:
- Black box means they should not depend on your specific implementation, but rather work with any faithful implementation of the project interface specification. I.e. you should design your tests such that if they were run against another group's backend they would still pass.
- For iteration 1, you should not be importing the `data` object itself or directly accessing it via the `get` or `set` functions from `src/dataStore.js` inside your tests.
- Unit tests mean the tests focus on testing particular functions, rather than the system as a whole. Certain unit tests will depend on other tests succeeding. It's OK to write tests that are only a valid test if other functions are correct (e.g. to test `quiz` functions you can assume that `auth` is implemented correctly).
This will mean you will use code like this to test login, for instance:
```javascript
let result = adminAuthRegister('validemail@gmail.com', '123abc!@#', 'Jake', 'Renzella');
adminAuthLogin('validemail@gmail.com', '123abc!@#'); // Expect to work since we registered
```

You should reset the state of the application (e.g. deleting all users, quizzes, etc.) at the start of every test. That way you know none of them are accidentally dependent on an earlier test. You can use a function for this that is run at the beginning of each test (hint: `clear`).
- If you find yourself needing similar code at the start of a series of tests, consider using Jest's beforeEach to avoid repetition.
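For example, a test file might reset and exercise the state like this (a minimal sketch, assuming a `clear` function is exported from `other.js` as described in the file structure above):

```typescript
import { clear } from './other.js';
import { adminAuthRegister } from './auth.js';

// Reset all users and quizzes before every test so that no test
// depends on state left behind by an earlier one
beforeEach(() => {
  clear();
});

test('registers a user successfully', () => {
  const result = adminAuthRegister('validemail@gmail.com', '123abc!@#', 'Jake', 'Renzella');
  expect(result).toStrictEqual({ authUserId: expect.any(Number) });
});
```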
Sometimes you may ask "What happens if X?". In cases where we don't specify behaviour, we call this undefined behaviour. When something has undefined behaviour, you can have it behave any reasonable way you want - because there is no expectation or assumption of how it should act.
A common question asked throughout the project is usually "How can I test this?" or "Can I test this?". In any situation, most things can be tested thoroughly. However, some things can only be tested sparsely, and on some other rare occasions, some things can't be tested at all. A challenge of this project is for you to use your discretion to figure out what to test, and how much to test. Often, you can use the functions you've already written to test new functions in a black-box manner.
The functions required for iteration 1 are described below.
All error cases should return {error: 'specific error message here'}, where the error message in quotation marks can be anything you like (this will not be marked).
The following are strings: email, password, nameFirst, nameLast, name, description.
The following are integers: authUserId, quizId.
For timestamps, these are unix timestamps in seconds. You can find more information here: https://en.wikipedia.org/wiki/Unix_time
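For instance, the current unix timestamp in seconds can be computed as:

```typescript
// Date.now() is in milliseconds, so divide by 1000 and round down
const now = Math.floor(Date.now() / 1000); // e.g. 1683125870
```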
| Name & Description | Data Types | Error returns |
|---|---|---|
| `adminAuthRegister`<br><br>Register a user with an email, password, and names, then returns their `authUserId` value. | **Parameters:** `( email, password, nameFirst, nameLast )`<br><br>**Return type if no error:** `{ authUserId }` | Return object `{ error: 'specific error message here' }` when any of: |
| `adminAuthLogin`<br><br>Given a registered user's email and password, returns their `authUserId` value. | **Parameters:** `( email, password )`<br><br>**Return type if no error:** `{ authUserId }` | Return object `{ error: 'specific error message here' }` when any of: |
| `adminUserDetails`<br><br>Given an admin user's `authUserId`, return details about the user. | **Parameters:** `( authUserId )`<br><br>**Return type if no error:** `{ user: { userId, name, email, numSuccessfulLogins, numFailedPasswordsSinceLastLogin } }` | Return object `{ error: 'specific error message here' }` when any of: |
| `adminQuizList`<br><br>Provide a list of all quizzes that are owned by the currently logged in user. | **Parameters:** `( authUserId )`<br><br>**Return type if no error:** `{ quizzes: [ { quizId, name } ] }` | Return object `{ error: 'specific error message here' }` when any of: |
| `adminQuizCreate`<br><br>Given basic details about a new quiz, create one for the logged in user. | **Parameters:** `( authUserId, name, description )`<br><br>**Return type if no error:** `{ quizId }` | Return object `{ error: 'specific error message here' }` when any of: |
| `adminQuizRemove`<br><br>Given a particular quiz, permanently remove the quiz. | **Parameters:** `( authUserId, quizId )`<br><br>**Return type if no error:** `{ }` | Return object `{ error: 'specific error message here' }` when any of: |
| `adminQuizInfo`<br><br>Get all of the relevant information about the current quiz. | **Parameters:** `( authUserId, quizId )`<br><br>**Return type if no error:** `{ quizId, name, timeCreated, timeLastEdited, description }` | Return object `{ error: 'specific error message here' }` when any of: |
| `adminQuizNameUpdate`<br><br>Update the name of the relevant quiz. | **Parameters:** `( authUserId, quizId, name )`<br><br>**Return type if no error:** `{ }` | Return object `{ error: 'specific error message here' }` when any of: |
| `adminQuizDescriptionUpdate`<br><br>Update the description of the relevant quiz. | **Parameters:** `( authUserId, quizId, description )`<br><br>**Return type if no error:** `{ }` | Return object `{ error: 'specific error message here' }` when any of: |
| `clear`<br><br>Reset the state of the application back to the start. | **Parameters:** `( )`<br><br>**Return type if no error:** `{ }` | |
Elements of securely storing passwords and other tricky authorisation methods are not required for iteration 1. You can simply store passwords plainly, and use the user ID to identify each user. We will discuss ways to improve the quality and methods of these capabilities in the later iterations.
Note that the authUserId variable is simply the user ID of the user who is making the function call. For example,
- A user registers an account with Toohak and is assigned some integer ID, e.g. `42` as their user ID.
- When they make subsequent calls to functions, their user ID - in this case, `42` - is passed in as the `authUserId` argument.
Since authUserId refers to the user ID of the user calling the functions, you do NOT need to store separate user IDs (e.g. a uId or userId + an authUserId) to identify each user in your data structure - you only need to store one user ID. How you name this user ID property in your data structure is up to you.
This iteration provides challenges for many groups when it comes to working in parallel. Your group's initial reaction will be that you need to complete registration before you can complete quiz creation, and then quiz creation must be done before you update a quiz name, etc.
There are several approaches that you can consider to overcome these challenges:
- Have people working on down-stream tasks (like the quiz implementation) work with stubbed versions of the up-stream tasks. E.g. The register function is stubbed to return a successful dummy response, and therefore two people can start work in parallel.
- Co-ordinate with your team to ensure prerequisite features are completed first (e.g. Giuliana completes `adminAuthRegister` on Monday, meaning Hayden can start `adminQuizCreate` on Tuesday).
- You can pull any other remote branch into your own using the command `git pull origin <branch_name>`.
  - This can be helpful when two people are working on functions on separate branches where one function is a prerequisite of the other, and an implementation is required to keep the pipeline passing.
- You should pull from `master` on a regular basis to ensure your code remains up-to-date.
| Section | Weighting | Criteria |
|---|---|---|
| Automarking (Testing & Implementation) | 40% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution to the following criteria: |
| Test Quality | 15% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution to the following criteria. Develop tests that show a clear demonstration of: |
| General Code Quality | 15% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution to the following criteria: |
| Git Practices, Project Management, Teamwork | 30% | As an individual, in terms of git: |
For this and for all future milestones, you should consider the other expectations as outlined in section 6 below.
The formula used for automarking in this iteration is:
`Mark = t * i` (Mark equals `t` multiplied by `i`)
Where:
- `t` is the mark you receive for your tests running against your code (100% = your implementation passes all of your tests)
- `i` is the mark you receive for our course tests (hidden) running against your code (100% = your implementation passes all of our tests)
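For example, if your implementation passes 80% of your own tests (`t = 0.8`) and 75% of our course tests (`i = 0.75`), then `Mark = 0.8 * 0.75 = 0.6`, i.e. 60%.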
We have provided a very simple dryrun for iteration 1 consisting of a few tests, including your implementation of adminAuthRegister, adminAuthLogin, adminQuizCreate. These only check the format of your return types and simple expected behaviour, so do not rely on these as an indicator of the correctness of your implementation or tests.
To run the dryrun, you should be on a CSE machine (i.e. using VLAB or ssh'ed into CSE) and in the root directory of your project (e.g. /project-backend) and use the command:
```
1531 dryrun 1
```

Tips to ensure the dryrun runs successfully:

- Files used for imports are appended with `.js`, e.g. `import { clearV1 } from './other.js';`
- Files sit within the `/src` directory
Please see section 6 for information on due date and on how you will demonstrate this iteration.
Please see section 7.5 for information on peer assessment.
In this iteration, more features were added to the specification, and the focus has been changed to HTTP endpoints. Most of the theory surrounding iteration 2 is covered in week 4-5 lectures. Note that there will still be some features of the frontend that will not work because the routes will not appear until iteration 3. There is no introductory video for iteration 2.
Iteration 2 both reuses a lot of work from iteration 1, as well as has new work. Most of the work from iteration 1 can be recycled, but the following consideration(s) need to be made from previous work:
- `DELETE /v1/admin/quiz/{quizid}` now requires that upon deletion, items are moved to the trash instead of permanently removed.
If you'd like more support in this iteration, you can see a previous term's video where a lecturer discusses iteration 2 with the students of that term.
In this iteration, you are expected to:
- Make adjustments to your existing code as per any feedback given by your tutor for iteration 1.
- Migrate to TypeScript by changing `.js` file extensions to `.ts`.
- Implement and test the HTTP Express server according to the entire interface provided in the specification.
  - Part of this section may be automarked.
  - Your implementation should build upon your work in iteration 1, and ideally your HTTP layer is just a wrapper for underlying functions you've written that handle the logic; see week 4 content.
  - Your implementation will need to include persistence of data (see section 4.7).
  - Introduce sessions for your login system (see 4.9).
  - You can structure your tests inside a `/tests` folder (or however you choose), as long as they are appended with `.test.ts`. For this iteration and iteration 3 we will only be testing your HTTP layer of tests. You may still wish to use your iteration 1 tests and simply wrap them up - that is a design choice up to you. An example of an HTTP test can be found in section 4.4.
  - You do not have to rewrite all of your iteration 1 tests as HTTP tests - the latter can test the system at a higher level. For example, to test a success case for `POST /v1/admin/quiz/{quizid}/transfer` via HTTP routes you will need to call `POST /v1/admin/auth/register` and `POST /v1/admin/quiz`; this means you do not need the success case for those two functions separately. Your HTTP tests will need to cover all success/error conditions for each endpoint, however.
- Ensure your code is linted to the provided style guide.
  - `eslint` should be added to your repo via `npm` and then added to your `package.json` file to run when the command `npm run lint` is run. The provided `.eslintrc.json` file is very lenient, so there is no reason you should have to disable any additional checks. See section 4.5 below for instructions on adding linting to your pipeline.
  - You are required to edit the `gitlab-ci.yml` file, as per section 4.5, to add linting to the code on `master`. You must do this BEFORE merging anything from iteration 2 into `master`, so that you ensure `master` is always stable.
- Continue demonstrating effective project management and effective git usage.
  - You will be heavily marked for your use of thoughtful project management and use of git effectively. The degree to which your team works effectively will also be assessed.
  - As for iteration 1, all task tracking and management will need to be done via the GitLab Issue Board or another tracking application approved by your tutor.
  - As for iteration 1, regular group meetings must be documented with meeting minutes which should be stored at a timestamped location in your repo (e.g. uploading a word doc/pdf or writing in the GitLab repo wiki after each meeting).
  - As for iteration 1, you must be able to demonstrate evidence of regular standups.
  - You are required to regularly and thoughtfully make merge requests for the smallest reasonable units, and merge them into `master`.
- (Recommended) Remove any type errors in your code.
  - Run `npm run tsc` and incrementally fix all type errors.
  - Either choose to change one file at a time, or change all file extensions and use `// @ts-nocheck` at the beginning of select files to disable checking on that specific file, omitting errors.
  - There are no explicit marks this term for completing this step, however:
    - Groups who ensure their code is type-safe tend to perform much better in the automarker.
    - For iteration 3, if you make your entire code type safe you will receive 10 bonus marks! Starting early makes that easier!
-
A frontend has been built that you can use in this iteration, and use your backend to power it (note: an incomplete backend will mean the frontend cannot work). You can, if you wish, make changes to the frontend code, but it is not required. The source code for the frontend is only provided for your own fun or curiosity.
As part of this iteration it is required that your backend code can correctly power the frontend. You should conduct acceptance tests (run your backend, run the frontend and check that it works) prior to submission.
In this iteration we also expect for you to improve on any feedback left by tutors in iteration 1.
To run the server, you can use the following command from the root directory of your project:

```
npm start
```

This will start the server on the port in the `src/server.ts` file, using `ts-node`.
If you get an error stating that the address is already in use, you can change the port number in `config.json` to any number from 49152 to 65535. It is likely that another student is using your original port number.
Do NOT move the location of either `config.json` or `server.ts`.
You should first approach this project by considering its distinct "features". Each feature should add some meaningful functionality to the project, but still be as small as possible. You should aim to size features as the smallest amount of functionality that adds value without making the project more unstable. For each feature you should:
- Create a new branch.
- Write tests for that feature and commit them to the branch. These will fail as you have not yet implemented the feature.
- Implement that feature.
- Make any changes to the tests such that they pass with the given implementation. You should not have to do a lot here. If you find that you are, you're not spending enough time on your tests.
- Create a merge request for the branch.
- Get someone in your team who did not work on the feature to review the merge request. When reviewing, you should ensure the new feature has tests that pass.
- Fix any issues identified in the review.
- Merge the merge request into `master`.
For this project, a feature is typically sized somewhere between a single function, and a whole file of functions (e.g. auth.ts). It is up to you and your team to decide what each feature is.
There is no requirement that each feature be implemented by only one person. In fact, we encourage you to work together closely on features, especially to help those who may still be coming to grips with Javascript.
Please pay careful attention to the following:
- We want to see evidence that you wrote your tests before writing your implementation. As noted above, the commits containing your initial tests should appear before your implementation for every feature branch. If we don't see this evidence, we will assume you did not write your tests first and your mark will be reduced.
- You should have black-box tests for all tests required (i.e. testing each function/endpoint).
- Merging in merge requests with failing pipelines is very bad practice. Not only does this interfere with your team's ability to work on different features at the same time, and thus slow down development, it is something you will be penalised for in marking.
- Similarly, merging in branches with untested features is also very bad practice. We will assume, and you should too, that any code without tests does not work.
- Pushing directly to `master` is not possible for this repo. The only way to get code into `master` is via a merge request. If you discover you have a bug in `master` that got through testing, create a bugfix branch and merge that in via a merge request.
- As is the case with any system or functionality, there will be some things that you can test extensively, some things that you can test sparsely/fleetingly, and some things that you can't meaningfully test at all. You should aim to test as extensively as you can, and make judgements as to what things fall into what categories.
In this iteration, the layer of abstraction has changed to the HTTP level, meaning that you are only required to write integration tests that check the HTTP endpoints, rather than the style of tests you write in iteration 1 where the behaviour of the Javascript functions themselves was tested.
You will need to check as appropriate for each success/error condition:
- The return value of the endpoint;
- The behaviour (side effects) of the endpoint; and
- The status code of the response.
An example of how you would now test the echo interface is in echo.test.ts.
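A sketch of what such a test can look like, assuming the `sync-request` package, an assumed `SERVER_URL`, and that your `POST /v1/admin/auth/register` route returns a `token` string (adapt to your own setup and helpers):

```typescript
import request from 'sync-request';

const SERVER_URL = 'http://localhost:49152'; // assumed port - use your config.json values

test('POST /v1/admin/auth/register succeeds', () => {
  const res = request('POST', SERVER_URL + '/v1/admin/auth/register', {
    json: {
      email: 'validemail@gmail.com',
      password: '123abc!@#',
      nameFirst: 'Jake',
      nameLast: 'Renzella',
    },
  });
  // Check the status code and the return value; further tests
  // would also check the side effects (e.g. the user can now log in)
  expect(res.statusCode).toBe(200);
  const body = JSON.parse(res.getBody('utf8'));
  expect(body).toStrictEqual({ token: expect.any(String) });
});
```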
Some routes will have timestamps as properties. The tricky thing about timestamps is that the client makes a request at a known time, but there is a delay between when the client sends the request and when the server processes it. E.g. you might send an HTTP request to create a quiz, but the server takes 0.3 seconds until it actually creates the object, which means that the timestamp is 0.3 seconds out of sync with what you'd expect.

To solve this, when checking if timestamps are what you would expect, just check that they are within a 1 second range.

E.g. if I create a quiz at 12:22:21pm, I will then check in my tests if the timestamp is somewhere between 12:22:21pm and 12:22:22pm.
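A sketch of that check in Jest, where `requestQuizCreate` and `requestQuizInfo` are hypothetical test helpers wrapping the corresponding HTTP routes:

```typescript
const expectedTime = Math.floor(Date.now() / 1000);
const { quizId } = requestQuizCreate(token, 'My Quiz', 'This is my quiz');
const quiz = requestQuizInfo(token, quizId);

// Accept any timestamp within a 1 second window of when we sent the request
expect(quiz.timeCreated).toBeGreaterThanOrEqual(expectedTime);
expect(quiz.timeCreated).toBeLessThanOrEqual(expectedTime + 1);
```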
With the introduction of linting to the project with ESlint, you will need to manually edit the gitlab-ci.yml file to lint code within the pipeline. This will require the following:
- Addition of `npm run lint` as a script under a custom `linting` variable, as part of `stage: checks`.
Refer to the lecture slides on continuous integration to find exactly how you should add these.
You are required to store data persistently in this iteration.
Modify your backend such that it is able to persist and reload its data store if the process is stopped and started again. The persistence should happen at regular intervals so that in the event of unexpected program termination (e.g. sudden power outage) a minimal amount of data is lost. You may implement this using whatever method of serialisation you prefer (e.g. JSON).
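A minimal sketch of JSON-based persistence (the file name and the save points are assumptions - you might instead save after every mutating request, or on a timer with `setInterval`):

```typescript
import fs from 'fs';
import { getData, setData } from './dataStore';

const DATABASE_FILE = './database.json'; // hypothetical file name

// Serialise the whole data store to disk
function saveData() {
  fs.writeFileSync(DATABASE_FILE, JSON.stringify(getData(), null, 2));
}

// On server startup, reload whatever was last saved
function loadData() {
  if (fs.existsSync(DATABASE_FILE)) {
    setData(JSON.parse(fs.readFileSync(DATABASE_FILE, 'utf8')));
  }
}
```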
You might notice that some routes are prefixed with v1. Why is this? When you make changes to specifications, it's usually good practice to give the new function/capability/route a different unique name. This way, if people are using older versions of the specification they can't accidentally call the updated function/route with the wrong data input. If we make changes to these routes in iteration 3, we will increment the version to v2.
Hint: Yes, your v1 routes can use the functions you had in iteration 1, regardless of whether you rename the functions or not. The layer of abstraction in iteration 2 has changed from the function interface to the HTTP interface, and therefore your 'functions' from iteration 1 are essentially now just implementation details, and therefore are completely modifiable by you.
In iteration 1, a problem we have with the authUserId is that there is no way to "log-out" a user - because all the user needs to identify themselves is just their user ID.
In iteration 2, we want to issue something that abstracts their user ID into the notion of a session - this way a single user can log in, log out, or maybe log in from multiple places at the same time.
If you're not following the issue with the authUserId, imagine it like trying to board a plane flight, but your boarding pass IS your passport. Your passport is (effectively) a permanent thing - it is just "always you". That wouldn't work, which is why airlines issue boarding passes - to essentially grant you a "session" on a plane. And your boarding pass is linked to your passport. In this same way, a session is associated with an authUserId!
In iteration 2, instead of passing in authUserId into functions, we will instead pass in a session. Then on our server we look up the session information (which we've stored) to:
- Identify if the session is valid
- Identify which user this session belongs to
Then in this way, we can now allow for things like the ability to meaningfully log someone out, as well as to have multiple sessions at the same time for multiple users (e.g. imagine being logged in on two computers but only wanting to log one out).
You may however notice in the specification that the word token is used - not session. This is because, when sending HTTP requests, a common practice is to package up information relating to the user's session into an object called a token. This token could take on a number of different forms, though the simplest form is to just have your session inside a token object:
```json
{
  "sessionId": 23145
}
```

A token is generally stringified for sending over HTTP - since everything over an HTTP request needs to be stringified. This is typically done with JSON. If you pass a JSONified object (as opposed to just a string or a number) as a token, we recommend that you use `encodeURIComponent` and `decodeURIComponent` to encode it to be friendly for transfer over URLs.
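For example (a sketch of the round trip described above):

```typescript
// Wrap the session in a token object and stringify it for transport
const token = { sessionId: 23145 };
const encoded = encodeURIComponent(JSON.stringify(token));
// encoded === '%7B%22sessionId%22%3A23145%7D' - safe to place in a URL

// On the server, reverse the process to recover the session
const decoded = JSON.parse(decodeURIComponent(encoded));
// decoded.sessionId === 23145
```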
How you generate unique identifiers for sessions is up to you.
Implementation details are up to you, though the key things to ensure that you comply with are that:
- Token is an object that contains some information that allows you to derive a user session
- Your system allows multiple sessions to be able to be logged in and logged out at the same time.
Either a 400 (Bad Request), 401 (Unauthorized), or 403 (Forbidden) error is thrown when something goes wrong. A 400 error refers to issues with user input; a 401 error refers to when someone does not authenticate properly; and a 403 error refers to issues with authorisation. Most of the routes in the API interface provided throw these types of errors under various conditions.
To throw one of these errors, simply use the code `res.status(400).send(JSON.stringify({ error: 'specific error message here' }))` or `res.status(400).json({ error: 'specific error message here' })` in your server, where 400 is the error code.
Errors are thrown in the following order: 401, then 403, then 400.
There is a SINGLE repository available for all students at https://nw-syd-gitlab.cseunsw.tech/COMP1531/23T3/project-frontend. You can clone this frontend locally.
Please remember to pull regularly as we continue to work on the frontend
If you run the frontend at the same time as your express server is running on the backend, then you can power the frontend via your backend.
Please note: The frontend may have very slight inconsistencies with expected behaviour outlined in the specification. Our automarkers will be running against your compliance to the specification. The frontend is there for further testing and demonstration.
Please note: This frontend is experimental. It will not be perfect and is always under development.
A working example of the Toohak application can be used at https://cgi.cse.unsw.edu.au/~cs1531/23T3/toohak/a/login. This is not a gospel implementation that dictates the required behaviour for all possible occurrences. Our implementation will make reasonable assumptions just as yours will, and they might be different, and that's fine. However, you may use this implementation as a guide for how your backend should behave in the case of ambiguities in the spec.
The data is reset occasionally, but you can use this link to play around and get a feel for how the application should behave.
Please note: The frontend and backend that power this example are experimental. They will not be perfect and are always under development.
Our recommendation with this iteration is that you start out trying to implement the new functions similarly to how you did in iteration 1.
- Write HTTP tests. These will fail as you have not yet implemented the feature.
‼️ ‼️ HINT: To improve the marks you get and speed at which you get work done, consider trying to avoid re-writing your tests for iteration 2 and instead tweak your iteration 1 tests so that they can be "used" by the HTTP server.
- Implement the feature and write the Express route/endpoint for that feature too.
‼️ ‼️ HINT: make sure GET and DELETE requests utilise query parameters, whereas POST and PUT requests utilise JSONified bodies.
- Run the tests and continue following 4.3. as necessary.
Please note, when you have a single route (e.g. /my/route/name) alongside a wildcard route (e.g. /my/route/{variable}) you need to define the single route before the variable route.
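For example, in Express, using routes from this spec:

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// Define the fixed route first...
app.delete('/v1/admin/quiz/trash/empty', (req, res) => {
  // ... empty the trash
});

// ...then the wildcard route, otherwise 'trash' would be captured as a quizid
app.delete('/v1/admin/quiz/:quizid', (req, res) => {
  // ... remove the quiz
});
```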
| Section | Weighting | Criteria |
|---|---|---|
| Automarking (Testing & Implementation) | 50% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution to the following criteria: |
| Test Quality | 15% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution as to whether you develop tests that show a clear demonstration of: |
| General Code Quality | 15% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution to the following criteria: |
| Git Practices, Project Management, Teamwork | 20% | As an individual, in terms of git: |
For this and for all future milestones, you should consider the other expectations as outlined in section 7 below.
The formula used for automarking in this iteration is:
`Automark = 95*(t * i) + 5*e`

(The automark equals 95% of `t` multiplied by `i`, plus 5% of `e`.) This formula produces a value between 0 and 100.
Where:
- `t` is the mark between 0-1 you receive for your tests running against your code (100% = your implementation passes all of your tests)
- `i` is the mark between 0-1 you receive for our course tests (hidden) running against your code (100% = your implementation passes all of our tests)
- `e` is the score between 0-1 achieved by running `eslint` against your code with the provided configuration
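For example, with `t = 0.9`, `i = 0.8` and `e = 1.0`: `Automark = 95*(0.9*0.8) + 5*1.0 = 68.4 + 5 = 73.4` out of 100.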
The dryrun checks the format of your return types and simple expected behaviour for a few basic routes. Do not rely on these as an indicator for the correctness of your implementation or tests.
To run the dryrun, you should be in the root directory of your project (e.g. /project-backend) and use the command:
```
1531 dryrun 2
```

Please see section 6 for information on due date and on how you will demonstrate this iteration.
Please see section 7.5 for information on peer assessment.
There is no pre-recorded introductory video for this iteration, as we will cover this iteration in regular lectures.
Iteration 3 builds off all of the work you've completed in iteration 1 and 2. If you haven't completed the implementation of iteration 2, you must complete it as part of this iteration. Most of the work from iteration 1 and 2 can be recycled, but the following consideration(s) need to be made from previous work:
- All routes that had `token` in the query or body now have it in the header
- `PUT /v2/admin/quiz/{quizid}/question/{questionid}` has a different body input
- `GET /v2/admin/quiz/{quizid}` has a different return type
- `POST /v2/admin/quiz/{quizid}/question` has a different input type
Iteration 2 routes and Iteration 3 routes do not need to be interoperable. You can assume that for a given usage of your system, once someone is using iteration 3 routes they can be assumed to not be calling any iteration 2 routes. In this way we need iteration 2 routes to still function properly, but in a way that is fine to be isolated from iteration 3 routes.
In this iteration, you are expected to:
- Make adjustments to your existing code and tests as per any feedback given by your tutor for iteration 2. In particular, you should take time to ensure that your code is well-styled and complies with good software writing practices and software and test design principles discussed in lectures.
- Implement and test the HTTP Express server according to the entire interface provided in the specification, including all new routes added in iteration 3.
  - Part of this section will be automarked.
  - It is required that your data is persistent, just like in iteration 2.
  - `eslint` is assessed identically to iteration 2.
  - Good coverage for all files that aren't tests will be assessed: see section 5.4 for details.
  - You can structure your test files however you choose, as long as they are appended with `.test.ts`. You may place them inside a `/tests` folder, if you wish. For this iteration, we will only be testing your HTTP layer of tests.
  - In iteration 2 and 3, we provide a frontend that can be powered by your backend: see section 6.8 for details. Note that the frontend will not work correctly with an incomplete backend. As part of this iteration, it is required that your backend code can correctly power the frontend.
  - You must comply with instructions laid out in 5.3.
  - Ensure that you correctly manage sessions (tokens) and passwords in terms of authentication and authorisation, as per requirements laid out in section 5.8.
- Continue demonstrating effective project management and git usage.
  - You will be heavily marked on your thoughtful approach to project management and effective use of git. The degree to which your team works effectively will also be assessed.
  - As for iteration 1 and 2, all task tracking and management will need to be done via the GitLab Taskboard or other tutor-approved tracking mechanism.
  - As for iteration 1 and 2, regular group meetings must be documented with meeting minutes which should be stored at a timestamped location in your repo (e.g. uploading a word doc/pdf or writing in the GitLab repo wiki after each meeting).
  - As for iteration 1 and 2, you must be able to demonstrate evidence of regular standups.
  - You are required to regularly and thoughtfully make merge requests for the smallest reasonable units, and merge them into `master`.
- Document the planning of new features.
  - You are required to scope out 2-3 problems to solve for future iterations of Toohak. You aren't required to build/code them, but you are required to go through SDLC steps of requirements analysis, conceptual modelling, and design.
  - Full detail of this can be found in 5.5.
-
To run the server, you can run the following command from the root directory of your project (e.g. /project-backend):
```
npm start
```

This will start the server on the port in the `src/server.ts` file, using `ts-node`.
If you get an error stating that the address is already in use, you can change the port number in `config.json` to any number from 1024 to 49151. It is likely that another student is using your original port number.
Please note: For routes involving the playing of a game and waiting for questions to end, you are not required to account for situations where the server process crashes or restarts while waiting. If the server ever restarts while these active "sessions" are ongoing, you can assume they are no longer happening after restart.
Continue working on this project by making distinct "features". Each feature should add some meaningful functionality to the project, but still be as small as possible. You should aim to size features as the smallest amount of functionality that adds value without making the project more unstable. For each feature you should:
- Create a new branch.
- Write tests for that feature and commit them to the branch. These will fail as you have not yet implemented the feature.
- Implement that feature.
- Make any changes to the tests such that they pass with the given implementation. You should not have to do a lot here. If you find that you are, you're not spending enough time on your tests.
- Create a merge request for the branch.
- Get someone in your team who did not work on the feature to review the merge request. When reviewing, not only should you ensure the new feature has tests that pass, but you should also check that the coverage percentage has not been significantly reduced.
- Fix any issues identified in the review.
- Merge the merge request into master.
For this project, a feature is typically sized somewhere between a single function, and a whole file of functions (e.g. auth.ts). It is up to you and your team to decide what each feature is.
There is no requirement that each feature be implemented by only one person. In fact, we encourage you to work together closely on features.
Please pay careful attention to the following:
- We want to see evidence that you wrote your tests before writing the implementation. As noted above, the commits containing your initial tests should appear before your implementation for every feature branch. If we don't see this evidence, we will assume you did not write your tests first and your mark will be reduced.
- You should have black-box tests for all tests required (i.e. testing each function/endpoint). However, you are also welcome to write white-box unit tests in this iteration if you see that as important.
- Merging in merge requests with failing pipelines is very bad practice. Not only does this interfere with your team's ability to work on different features at the same time, and thus slow down development - it is something you will be penalised for in marking.
- Similarly, merging in branches with untested features is also very bad practice. We will assume, and you should too, that any code without tests does not work.
- Pushing directly to `master` is not possible for this repo. The only way to get code into `master` is via a merge request. If you discover you have a bug in `master` that got through testing, create a bugfix branch and merge that in via a merge request.
To get the coverage of your tests locally, you will need to have two terminals open. Run these commands from the root directory of your project (e.g. /project-backend).
In the first terminal, run:

```
npm run ts-node-coverage
```

In the second terminal, run jest as usual:

```
npm run test
```

Back in the first terminal, stop the server with Ctrl+C or Command+C. There should now be a `/coverage` directory available. Open the `index.html` file in your web browser to see its output.
Software development is an iterative process - we're never truly finished. As we complete the development and testing of one feature, we're often then trying to understand the requirements and needs of our users to design the next set of features in our product.
For iteration 3 you are going to produce a short report in planning.pdf and place it in the repository. The contents of this report will be a simplified approach to understanding user problems, developing requirements, and doing some early designs.
N.B. If you don't know how to produce a PDF, you can easily make one in Google docs and then export to PDF.
We have opted not to provide you with a sample structure - because we're not interested in any rigid structure. Structure it however you best see fit, as we will be marking content.
Find 2-3 people to interview as target users. Target users are people who currently use a tool like Toohak, or intend to. Record their name and email address.
Develop a series of questions (at least 4) to ask these target users to understand what problems they might have with quiz tools that are currently unsolved by Toohak. Give these questions to your target users and record their answers.
Once you have done this, think about how you would solve the target users' problem(s) and write down a brief description of a proposed solution.
Once you've elicited this information, it's time to consolidate it.
Take the responses from the elicitation step and express these requirements as user stories (at least 3). Document these user stories. For each user story, add user acceptance criteria as notes so that you have a clear definition of when a story has been completed.
Once the user stories have been documented, generate at least ONE use case that attempts to describe a solution that satisfies some or all of the elicited requirements. You can generate a visual diagram or a more written-recipe style, as per lectures.
With your completed use case work, reach out to the 2-3 people you interviewed originally and inquire as to the extent to which these use cases would adequately describe the problem they're trying to solve. Ask them for a comment on this, and record their comments in the PDF.
Now that we've established our problem (described as requirements), it's time to think about our solution in terms of what capabilities would be necessary. You will specify these capabilities as HTTP endpoints, similar to what is described in the swagger docs. There is no minimum or maximum of what is needed - it will depend on what problem you're solving.
You are also encouraged to update your swagger.yaml file to include the routes associated with your new work.
Now that you have a sense of the problem to solve, and what capabilities you will need to provide to solve it, add at least ONE state diagram to your PDF to show how the state of the application would change based on user actions. The aim of this diagram is to help a developer understand the different states of the application.
Iteration 3 sees the introduction of a quiz session, which describes a particular instance of a quiz being run.
Sessions can be in one of many states:
- LOBBY: Players can join in this state, and nothing has started
- QUESTION_COUNTDOWN: This is the question countdown period. It always exists before a question is open and the frontend makes the request to move to the question being open
- QUESTION_OPEN: This is when players can see the question, and the answers, and submit their answers (as many times as they like)
- QUESTION_CLOSE: This is when players can still see the question, and the answers, but can no longer submit answers
- ANSWER_SHOW: This is when players can see the correct answer, as well as every player's performance in that question, whilst they typically wait to go to the next countdown
- FINAL_RESULTS: This is where the final results are displayed for all players and questions
- END: The game is now over and inactive
There are 5 key actions that an admin can send to move a session between these states:
- NEXT_QUESTION: Move onto the countdown for the next question
- SKIP_COUNTDOWN: This is how to skip the question countdown period immediately.
- GO_TO_ANSWER: Go straight to the next most immediate answers show state
- GO_TO_FINAL_RESULTS: Go straight to the final results state
- END: Go straight to the END state
The constraints on moving between these states can be found in the state diagram here: https://miro.com/app/board/uXjVMNVSA6o=/?share_link_id=275801581370
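As a rough TypeScript sketch, the states and actions above could be modelled as string enums (the transition constraints themselves live in the diagram linked above, not in this sketch):

```typescript
// Sketch only: the session states and admin actions above as string enums.
enum State {
  LOBBY = 'LOBBY',
  QUESTION_COUNTDOWN = 'QUESTION_COUNTDOWN',
  QUESTION_OPEN = 'QUESTION_OPEN',
  QUESTION_CLOSE = 'QUESTION_CLOSE',
  ANSWER_SHOW = 'ANSWER_SHOW',
  FINAL_RESULTS = 'FINAL_RESULTS',
  END = 'END',
}

enum Action {
  NEXT_QUESTION = 'NEXT_QUESTION',
  SKIP_COUNTDOWN = 'SKIP_COUNTDOWN',
  GO_TO_ANSWER = 'GO_TO_ANSWER',
  GO_TO_FINAL_RESULTS = 'GO_TO_FINAL_RESULTS',
  END = 'END',
}
```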
In Iteration 3 we require you, for new /v1/ routes or any /v2/ routes, to use exceptions to throw errors instead of res.send. You can do so as follows:

```typescript
if (true) { // condition here
  throw HTTPError(403, 'description');
}
```

The error descriptions do not affect your marks; they are up to you to spend time on (if at all).
To have these exceptions work effectively, you need to do two things:
(1) Install [middleware-http-errors](https://www.npmjs.com/package/middleware-http-errors). This is a package that is custom-made for COMP1531 students.
(2) Add app.use(errorHandler()) to your server.ts file, where errorHandler is the default export of the library above. This needs to be added AFTER all of the routes you define.
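Putting this together, a minimal `server.ts` sketch might look like the following. The route and error description are hypothetical, and we assume `HTTPError` is the default export of the `http-errors` package (which pairs with `middleware-http-errors`):

```typescript
import express from 'express';
import HTTPError from 'http-errors'; // assumed source of HTTPError
import errorHandler from 'middleware-http-errors';

const app = express();
app.use(express.json());

// Hypothetical route, for illustration only
app.get('/v2/admin/quiz/list', (req, res) => {
  const token = req.header('token');
  if (!token) {
    throw HTTPError(401, 'Token is empty or invalid'); // handled below
  }
  res.json({ quizzes: [] });
});

// Added AFTER all routes: converts thrown HTTPErrors into HTTP responses
app.use(errorHandler());
```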
For iteration 3, we require that passwords must be stored in a hashed form.
Hashes are one-way functions: you can convert raw text (e.g. a password like password123) into a hash (e.g. the sha256 hash ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f), but you cannot feasibly recover the raw text from the hash.
If we store passwords as the hash of the plain text password, as opposed to the plain text password itself, it means that if our data store is compromised that attackers would not know the plain text passwords of our users.
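For instance, Node's built-in `crypto` module can produce a SHA-256 hash. A minimal sketch (the helper name is our own):

```typescript
import crypto from 'crypto';

// Hash a plain-text password with SHA-256 before storing it.
function hashOf(plaintext: string): string {
  return crypto.createHash('sha256').update(plaintext).digest('hex');
}

// hashOf('password123')
// => 'ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f'
```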
We require that you protect your sessions by using obfuscation. You can do this one of two ways:
- Using a randomly generated session ID (rather than incremental session IDs, such as 3492, 485845, 49030); or
- Returning a hash of a sequentially generated session ID (e.g. session IDs are 1, 2, 3, 4, but then you return the hash of it)
You may already be doing the first of these options, depending on your implementation from the previous iteration.
If we don't have some kind of randomness in our session IDs, then it's possible for users to simply change the session ID and trivially use someone else's session.
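Both approaches can be sketched with Node's built-in `crypto` module (names here are illustrative only):

```typescript
import crypto from 'crypto';

// Option 1: a randomly generated session ID
function randomSessionId(): string {
  return crypto.randomBytes(16).toString('hex');
}

// Option 2: a sequential session ID, returned to the client only as a hash
let sessionCounter = 0;
function hashedSessionId(): string {
  sessionCounter += 1;
  return crypto.createHash('sha256').update(String(sessionCounter)).digest('hex');
}
```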
If you'd like to explore more tamper-proof tokens, then we suggest looking into and implementing a JWT-like approach for potential bonus marks.
In this model, you will replace token query and body parameters with a token HTTP header when dealing with requests/routes only. You shouldn't remove token parameters from backend functions, as they must perform the validity checks.
You can access HTTP headers like so:
```typescript
const token = req.header('token');
```

This also means you no longer need to use encodeURIComponent or decodeURIComponent if you were using them in iteration 2.
Any query parameters (those used by GET/DELETE functions) can be read in plaintext by an eavesdropper spying on your HTTP requests. Hence, by passing an authentication token as a query parameter, we're allowing an attacker to intercept our request, steal our token and impersonate other users! On the other hand, HTTP headers are encrypted (as long as you use HTTPS protocol), meaning an eavesdropper won't be able to read token values.
Note: While this safely protects sessions from man-in-the-middle attacks (intercepting our HTTP requests), it doesn't protect against client-side attacks, where an attacker may steal a token after the HTTP header has been decoded and received by the user. You do not need to worry about mitigating client-side attacks, but you can read more about industry-standard session management here.
The following describes one potential way of implementing:
A sample flow logging a user in might be as follows (other flows exist too):
1. Client makes a valid `auth/register` call
2. Server stores the hash of the plain text password that was provided over the request, but does not store the plain text password
3. Server generates an incremental session ID (e.g. 1, 2, 3) and then stores a hash of that session ID to create something obfuscated
4. Server returns that hash of the session ID as a token to the user in the response body
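A sketch of steps 2-4 in code, assuming the in-memory data shapes shown (all names here are hypothetical):

```typescript
import crypto from 'crypto';

const users: { email: string; passwordHash: string }[] = [];
let sessionCounter = 0;

function sha256(text: string): string {
  return crypto.createHash('sha256').update(text).digest('hex');
}

// Steps 2-4 of the sample flow above
function adminAuthRegister(email: string, password: string): { token: string } {
  users.push({ email, passwordHash: sha256(password) }); // step 2: store only the hash
  sessionCounter += 1;                                   // step 3: incremental session ID...
  const token = sha256(String(sessionCounter));          // ...obfuscated by hashing
  return { token };                                      // step 4: return the hash as the token
}
```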
This section previously spoke about image uploading, which is no longer required. For iteration 3 it is fine to store an image URL and then just serve that again to the user.
For this iteration some part of the marks (see marking criteria) will come from your group having deployed a version of your code to a public web server. Instructions about how to deploy can be found in lab09_deploy.
Once you have deployed your server to a URL, share this URL with your tutor by adding it to deploy.md
To determine the score a user receives for a particular question:
- If they do not get the question correct, they receive a score of 0
- If they do get the question correct, the score they received is P*S where P is the points for the question, and S is the scaling factor of the question.
- The scaling factor of the question is 1/N, where N is the order in which they correctly answered the question (N = 1 for the first person to answer correctly, N = 2 for the second, N = 3 for the third, etc.)
- Players answering the question at the exact same time results in undefined behaviour
- For multiple-correct-answer questions, people need to select all the correct answers (no less, no more) to be considered having gotten the question correct.
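As a sketch of the scoring rule above (the function name and signature are our own):

```typescript
// Returns the score for one player on one question.
// points = P (the question's points); correctOrder = N (null if incorrect).
function questionScore(points: number, correctOrder: number | null): number {
  if (correctOrder === null) return 0;      // incorrect => score of 0
  const raw = points * (1 / correctOrder);  // P * S, where S = 1/N
  return Math.round(raw * 10) / 10;         // rounded to 1 decimal place
}

// For a 5-point question:
// questionScore(5, 1) === 5, questionScore(5, 2) === 2.5, questionScore(5, 3) === 1.7
```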
When returned through any of the routes:
- All scores are rounded to the nearest 1 decimal place.
- If there are players with the same score, they share the same rank, e.g. players scoring 5, 3, 3, 2, 2, 1 have ranks 1, 2, 2, 4, 4, 6
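That shared-rank behaviour can be sketched as follows (the helper is hypothetical):

```typescript
// Rank players by score, with equal scores sharing the same rank.
function ranks(scores: number[]): number[] {
  const sorted = [...scores].sort((a, b) => b - a);
  return scores.map((s) => sorted.indexOf(s) + 1);
}

// ranks([5, 3, 3, 2, 2, 1]) => [1, 2, 2, 4, 4, 6]
```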
For the CSV format return, the format should be the following (and include the header line):

```
Player,question1score,question1rank,...
X,Y,Z,...
X,Y,Z,...
X,Y,Z,...
```
An example for a quiz with 3 players and 2 questions might be:

```
Player,question1score,question1rank,question2score,question2rank
Giuliana,1,3,2,2
Hayden,1.5,2,1.3,3
Yuchao,3,1,4,1
```
The CSV is ordered in alphabetical/ascii ascending order of player name.
If a player does not answer a question, their rank is 0 for that question.
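Putting the pieces together, one possible sketch of generating this CSV (the data shape is illustrative, not a required interface):

```typescript
// Sketch: build the CSV results string described above.
interface PlayerResult {
  name: string;
  perQuestion: { score: number; rank: number }[]; // rank 0 where unanswered
}

function resultsCsv(players: PlayerResult[], numQuestions: number): string {
  const header = ['Player'];
  for (let q = 1; q <= numQuestions; q++) {
    header.push(`question${q}score`, `question${q}rank`);
  }
  const rows = [...players]
    .sort((a, b) => (a.name < b.name ? -1 : 1)) // ascending ASCII order of name
    .map((p) => [p.name, ...p.perQuestion.flatMap((r) => [r.score, r.rank])].join(','));
  return [header.join(','), ...rows].join('\n');
}
```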
| Section | Weighting | Criteria |
|---|---|---|
| Automarking (Testing & Implementation) | 60% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution to the following criteria: … |
| Requirements & Design for future work | 15% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution to the following criteria: … |
| General Code Quality | 10% | Whilst we look at your group's work as a whole, if we feel that materially unequal contributions occurred between group members we will assess your individual contribution to the following criteria: … |
| Git Practices, Project Management, Teamwork | 10% | As an individual, in terms of git: … |
| Feature demonstrations | 5% | … |
| (Bonus Marks) Typescript | 10% | … |
The formula used for automarking in this iteration is:
Mark = 95*(t * i * c^3) + 5*e
(Mark equals 95% of t multiplied by i multiplied by c to the power of three, plus 5% of e)
Where:
- `t` is the mark you receive for your tests running against your code (100% = your implementation passes all of your tests).
- `i` is the mark you receive for our course tests (hidden) running against your code (100% = your implementation passes all of our tests).
- `c` is the score achieved by running coverage on your entire codebase.
- `e` is the score between 0 and 1 achieved by running `eslint` against your code with the provided configuration.
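As an illustrative worked example (numbers are made up): if t = 1.0, i = 0.8, c = 0.9 and e = 1.0, then Mark = 95 × (1.0 × 0.8 × 0.9³) + 5 × 1.0 = 95 × 0.5832 + 5 ≈ 60.4. Note that because c is cubed, low coverage is particularly costly.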
The dryrun checks the format of your return types and simple expected behaviour for a few basic routes. Do not rely on these as an indicator for the correctness of your implementation or tests.
To run the dryrun, you should be in the root directory of your project (e.g. /project-backend) and use the command:
```bash
1531 dryrun 3
```

Please see section 6 for information on the due date. There will be no demonstration for iteration 3.
Please see section 7.5 for information on peer assessment.
| Iteration | Due date | Demonstration to tutor(s) | Assessment weighting (%) |
|---|---|---|---|
| 0 | 10pm Friday 22nd Sep (week 2) | No demonstration | 5% of project mark |
| 1 | 10pm Friday 6th Oct (week 4) | In YOUR week 5 laboratory | 30% of project mark |
| 2 | 10pm Friday 27th Oct (week 7) | In YOUR week 8 laboratory | 35% of project mark |
| 3 | 10pm Friday 17th Nov (week 10) | No demonstration | 30% of project mark |
To submit your work, simply have your master branch on the GitLab website contain your group's most recent copy of your code, i.e. "pushing to master" is equivalent to submitting. When marking, we take the most recent submission on your master branch that is prior to the specified deadline for each iteration.
The following late penalties apply depending on the iteration:
- Iteration 0: No late submissions at all
- Iteration 1: No late submissions at all
- Iteration 2: No late submissions at all
- Iteration 3: Can submit up to 72 hours late, with 5% penalty applied every time a 24 hour window passes, starting from the due date
We will not mark commits pushed to master after the final submission time for a given iteration.
If the deadline is approaching and you have features that are either untested or failing their tests, DO NOT MERGE IN THOSE MERGE REQUESTS. In some rare cases, your tutor will look at unmerged branches and may allocate some reduced marks for incomplete functionality, but master should only contain working code.
Minor isolated fixes after the due date are allowed, but carry a penalty to the automark if the automark after re-running the autotests is greater than your initial automark. This penalty can be up to 30% of the automarking component for that iteration, depending on the number and nature of your fixes. Note that if the re-run automark after the penalty is lower than your initial mark, we will keep your initial mark, meaning your automark cannot decrease after a re-run. E.g. imagine your initial automark is 50%, on re-run you get a raw automark of 70%, and your fixes attract a 30% penalty: since the 30% penalty reduces the 70% to 49%, your final automark will remain 50% (i.e. your initial mark).
Groups are limited to making 1 automark re-run request per week.
If you want to have your automarking re-run:
- Create a branch, e.g. `iter[X]-fix`, based off the submission commit
- Make the minimal number of necessary changes (i.e. only fix the trivial bugs that cost you many automarks)
- Push the changes to GitLab on a new branch
- Create a merge request (but do not merge) and share that merge request with your tutor.
The demonstrations in weeks 5 and 8 will take place during your lab sessions. All team members must attend these lab sessions. Team members who do not attend a demonstration may receive a mark of 0 for that iteration. If you are unable to attend a demonstration due to circumstances beyond your control, you must apply for special consideration.
Demonstrations consist of a 15 minute Q&A in front of your tutor and potentially some other students in your tutorial. For online classes, webcams and audio are required to be on during this Q&A (your phone is a good alternative if your laptop/desktop doesn't have a webcam).
The marks for each iteration are given to you individually. We do however use group marks (e.g. automarking) to infer this, and in many cases, you may receive the same mark as your group members, particularly in well functioning groups. Your individual mark is determined by your tutor, with your group mark as a reference point. Your tutor will look at the following items each iteration to determine your mark:
- Project check-in
- Code contribution
- Tutorial contributions
- Peer assessment
In general, all team members will receive the same mark (a sum of the marks for each iteration), but if you as an individual fail to meet these criteria, your final project mark may be scaled down, most likely quite significantly.
During your lab class, you and your team will conduct a short standup in the presence of your tutor. Each member of the team will briefly state what they have done in the past week, what they intend to do over the next week, and what issues they have faced or are currently facing. This is so your tutor, who is acting as a representative of the client, is kept informed of your progress. They will make note of your presence and may ask you to elaborate on the work you've done.
Project check-ins are also excellent opportunities for your tutor to provide you with both technical and non-technical guidance.
Your attendance and participation at project check-ins will contribute to your individual mark component for the project. In addition, your tutor will note down any absences from team-organised standups.
These are easy marks: we assume you will receive them automatically, and they are yours to lose if you neglect them.
The following serves as a baseline for expected progress during project check-ins, in the specified weeks. For groups which do not meet this baseline, teamwork marks and/or individual scaling may be impacted.
| Iteration | Week/Check-in | Expected progress |
|---|---|---|
| 0 | Week 2 | Twice-weekly standup meeting times organised, iteration 0 specification has been discussed in a meeting, at least 1 task per person has been assigned |
| 1 | Week 3 | Iteration 1 specification has been discussed in a meeting, at least 1 task per person has been assigned |
| 1 | Week 4 | 1x function per person complete (tests and implementation in master) |
| 2 | Week 5 | Iteration 2 specification has been discussed in a meeting, at least 1 task per person has been assigned |
| 2 | Week 6 | (Checked by your tutor in week 7) Server routes for all iteration 1 functions complete and in master |
| 2 | Week 7 | 1x iteration 2 route per person complete (HTTP tests and implementation in master) |
| 3 | Week 8 | Iteration 3 specification has been discussed in a meeting, at least 1 task per person has been assigned |
| 3 | Week 9 | Exceptions & tokens in HTTP headers added across the project AND 1x iteration 3 route per person complete (HTTP tests and implementation in master) |
| 3 | Week 10 | 2x iteration 3 routes per person complete (HTTP tests and implementation in master) |
From weeks 2 onward, your individual project mark may be reduced if you do not satisfy the following:
- Attend all tutorials
- Participate in tutorials by asking questions and offering answers
- [online only] Have your web cam on for the duration of the tutorial and lab
We're comfortable with you missing or disengaging with 1 tutorial per term, but for anything more than that please email your tutor. If you cannot meet one of the above criteria, you will likely be directed to special consideration.
These are easy marks: we assume you will receive them automatically, and they are yours to lose if you neglect them.
All team members must contribute code to the project to a generally similar degree. Tutors will assess the degree to which you have contributed by looking at your git history and analysing lines of code, number of commits, timing of commits, etc. If you contribute significantly less code than your team members, your work will be closely examined to determine what scaling needs to be applied.
Note that contributing more code is not a substitute for contributing documentation.
All team members must contribute documentation to the project to a generally similar degree.
In terms of code documentation, your functions are required to contain comments in JSDoc format, including parameters and return values:
```typescript
/**
 * <Brief description of what the function does>
 *
 * @param {data type} name - description of parameter
 * @param {data type} name - description of parameter
 * ...
 *
 * @returns {data type} - description of condition for return
 * @returns {data type} - description of condition for return
 */
```

In each iteration you will be assessed on ensuring that every relevant function in the specification is appropriately documented.
In terms of other documentation (such as reports and other notes in later iterations), we expect that group members will contribute equally.
Note that contributing more documentation is not a substitute for contributing code.
At the end of each iteration, there will be a peer assessment survey where you will rate and leave comments about each team member's contribution to the project up until that point.
Your other team members will not be able to see how you rated them or what comments you left in any peer assessment. If your team members give you a less than satisfactory rating, your contribution will be scrutinised and you may find your final mark scaled down.
| Iteration | Link | Opens | Closes |
|---|---|---|---|
| 1 | Click here | 10pm Friday 6th Oct | 9am Monday 9th Oct |
| 2 | Click here | 10pm Friday 27th Oct | 9am Monday 30th Oct |
| 3 | Click here | 10pm Friday 17th Nov | 9am Monday 20th Nov |
When a group member does not contribute equally, we are aware that it can implicitly impact your own mark by pulling the group mark down (e.g. through a critical feature not being finished).
The first step of any disagreement or issue is always to talk to your team member(s) on the chats in MS Teams. Make sure you have:
- Been clear about the issue you feel exists
- Been clear about what you feel needs to happen and in what time frame to feel the issue is resolved
- Gotten clarity that your team member(s) want to make the change.
If you don't feel that the issue is being resolved quickly, you should escalate the issue by talking to your tutor with your group in a project check-in, or alternatively by emailing your tutor privately outlining your issue.
It's imperative that issues are raised to your tutor ASAP, as we are limited in the mark adjustments we can do when issues are raised too late (e.g. we're limited with what we can do if you email your tutor with iteration 2 issues after iteration 2 is due).
Each iteration consists of an automarking component. The particular formula used to calculate this mark is specific to the iteration (and detailed above).
When running your code or tests as part of the automarking, we place a 90 second timer on the running of your group's tests. This is more than enough time to complete everything unless you're doing something very wrong or silly in your code. As long as your tests take under 90 seconds to run on the pipeline, you don't have to worry about them taking longer when we run automarking.
In the days preceding iterations 1, 2, and 3's due date, we will be running your code against the actual automarkers (the same ones that determine your final mark) and publishing the results of every group on a leaderboard. The leaderboard will be available here once released.
You must have the code you wish to be tested in your master branch by 10pm the night before leaderboard runs.
The leaderboard will be updated on Monday, Wednesday, and Friday morning during the week that the iteration is due.
Your position and mark on the leaderboard will be referenced against an alias for your group (for privacy). This alias will be emailed to your group in week 3. You are welcome to share your alias with others if you choose! (Up to you.)
The leaderboard gives you a chance to sanity check your automark (without knowing the details of what you did right and wrong), and is just a bit of fun.
If the leaderboard isn't updating for you, try hard-refreshing your browser (Ctrl+R or Command+R), clearing your cache, or opening it in a private window. Also note the HTTP (not HTTPS) in the URL, as the site is only accessible via HTTP.
The work you and your group submit must be your own work. Submission of work partially or completely derived from any other person or jointly written with any other person is not permitted. The penalties for such an offence may include negative marks, automatic failure of the course and possibly other academic discipline. Assignment submissions will be examined both automatically and manually for such submissions.
Relevant scholarship authorities will be informed if students holding scholarships are involved in an incident of plagiarism or other misconduct.
Do not provide or show your project work to any other person, except for your group and the teaching staff of COMP1531. If you knowingly provide or show your assignment work to another person for any reason, and work derived from it is submitted, you may be penalised, even if the work was submitted without your knowledge or consent. This may apply even if your work is submitted by a third party unknown to you.
Note: you will not be penalised if your work has the potential to be taken without your consent or knowledge.