Rubrics, Grading and Submission

Rubrics and Grading

This rubric shows how many points the PrairieLearn autograder awards for passing the tests associated with each function you have to fill out. The project is entirely autograded, not manually graded, so the score you see on PL once the autograder finishes running is your final score.

Rubrics
Part 1: The Board (25 points)
- get_board_item(board, x, y): 5
- set_board_item(board, x, y, item): 5
- valid_coordinate(board, coordinate): 5
- get_row(board, y): 5
- check_row_full(board, y): 5

Part 2: The Pytromino (25 points)
- rotate_block_90_cw(pytromino, pos): 5
- filter_blocks_pos(pytromino, fn): 5
- shift_down_fn(pos, steps): 5
- shift_left_fn(pos, steps): 5
- validated_apply_non_rot(self, fn, validator): 5
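As a rough guide to what the Part 1 functions are expected to do, here is a minimal sketch. It assumes the board is represented as a list of rows (a list of lists) indexed as board[y][x], with None marking an empty cell; the actual representation in board.py may differ, and make_board and EMPTY are hypothetical names used only for illustration.

```python
# Sketch of the Part 1 board helpers, ASSUMING a list-of-rows board
# indexed as board[y][x] with None for empty cells. Illustrative only.

EMPTY = None

def make_board(width, height):
    """Create a width x height board of empty cells."""
    return [[EMPTY for _ in range(width)] for _ in range(height)]

def get_board_item(board, x, y):
    """Return the item stored at column x, row y."""
    return board[y][x]

def set_board_item(board, x, y, item):
    """Store item at column x, row y."""
    board[y][x] = item

def valid_coordinate(board, coordinate):
    """Return True if coordinate (x, y) lies inside the board."""
    x, y = coordinate
    return 0 <= y < len(board) and 0 <= x < len(board[0])

def get_row(board, y):
    """Return row y of the board."""
    return board[y]

def check_row_full(board, y):
    """Return True if every cell in row y is occupied."""
    return all(cell is not EMPTY for cell in get_row(board, y))
```

If your implementation passes the local sanity checks but not the PL tests, double-check edge cases like coordinates just outside the board.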
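The geometry behind the Part 2 position helpers can be sketched as below. This assumes block positions are (x, y) tuples with y increasing downward (screen coordinates), under which a 90° clockwise rotation maps a relative offset (dx, dy) to (-dy, dx). The signatures here are simplified stand-ins, not the exact methods from models.py: for example, the real rotate_block_90_cw takes a pytromino, while this sketch takes the rotation center directly.

```python
# Sketch of the Part 2 position math, ASSUMING (x, y) tuples with y
# pointing down. Simplified signatures for illustration only.

def rotate_block_90_cw(pos, center):
    """Rotate pos 90 degrees clockwise about center.

    With y pointing down, rotating clockwise sends the relative
    offset (dx, dy) to (-dy, dx).
    """
    cx, cy = center
    dx, dy = pos[0] - cx, pos[1] - cy
    return (cx - dy, cy + dx)

def shift_down_fn(pos, steps):
    """Return pos moved down by steps rows."""
    x, y = pos
    return (x, y + steps)

def shift_left_fn(pos, steps):
    """Return pos moved left by steps columns."""
    x, y = pos
    return (x - steps, y)

def filter_blocks_pos(positions, fn):
    """Keep only the positions for which fn returns True."""
    return [p for p in positions if fn(p)]
```

For instance, a block one step to the right of the center, (1, 0) relative, rotates clockwise to one step below it, (0, 1) relative, which matches the visual behavior of a falling piece turning clockwise.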

Submitting to PrairieLearn

Before you submit, it’s a good idea to run the local sanity-check test cases again. You can do the whole batch at once using:

For Windows:

py grader.py

For Mac:

python3 grader.py

Keep in mind that the local tests are not representative of all the test cases checked on PrairieLearn. The PrairieLearn autograder will be made available on the assignment on PrairieLearn and announced over Ed; once it is open, you can submit an unlimited number of times to see how your project is doing.

To run the PrairieLearn autograder, you'll need to submit models.py and board.py to the assignment on PrairieLearn. You can either submit these files individually or submit the entire project folder; irrelevant files are filtered out automatically and will not be processed.