Implementing Python (Part 2)
Your next goal for implementing Python is to touch on most of the features that appear in the tests we've given you. We want you to tackle a broad set of features by the end of next week, to confirm that your design is adequate. Note that we've reorganized the due dates for the different pieces of Python: we're now expecting you to get this part done by the end of Thursday, November 15, rather than November 8. Here's what you must get to by then (and we will ask you for a handin):
- You should be able to parse all the tests (in get-structured-python.rkt), so you know all the kinds of expressions you'll be dealing with. We won't actually check this, but we highly recommend it.
- The 3 test files from lists/
- At least 4 test files from each (16 total) of:
- At least 6 test files from each (18 total) of:
(This is a total of 37 test files, putting you nearly halfway to completion.)
If you're going for an A, you should also try to pass some of the tests in range/ and iter/, to convince yourself that you know what's going on there. You don't have to tackle the (single) test in super/ yet, but there's no harm in looking at it.
Designs to Build On
There are several designs that we wanted to highlight and distribute as worth building on. These are available on GitHub in the directories, with descriptions available.
What to Turn In
As with the last assignment, you will need to turn in any Racket files used in your Python implementation, along with a single README file that explains how your code is structured and your design choices, all in a single zip file. We also ask that you submit the README separately, just to make our lives easier if we need quick access to it.
Again, we aren't collecting a real grade report from you, so there's no finalizing happening, and no magical binary. We do want a report from you, so we've added a --progress-report option to python-main.rkt. You should submit the result of running:

    $ racket python-main.rkt --python-path your/python/path --progress-report python-reference/

This standard format helps us automatically get a head count of how folks are doing. We'll distribute a real grader once we get a sense of what implementation strategies people are taking, since that will affect what grading scripts we're able to distribute.
You can turn in all the files at this upload link.