Auto-multiple-choice (AMC) is an application for generating exams with randomized elements. Instructors can give each student a unique exam, with parameters changed, options permuted, and even questions shuffled. Completed exam papers are scanned and scored using optical mark recognition (OMR) software. Free-response questions can be included; the instructor scores them by hand on a bubble form that the OMR software then reads.
Gradescope is a web service that allows digital grading of short-answer and free-response questions. Instructors prepare a PDF template and upload scanned exams. The software presents the grader with student submissions, and the grader can apply a consistent rubric with comments. Gradescope can handle multiple-choice questions, but only if they are the same on each exam.
The author will detail a workflow for using both services on the same exam. It involves processing the same scans through each service, then running scripts to merge the scores on the multiple-choice items (generated by AMC) with the scores and comments on the free-response items (collected by Gradescope).
Combining auto-multiple-choice and Gradescope for paperless grading
1. Combining auto-multiple-choice and Gradescope for paperless grading
Matthew Leingang
New York University
MAA Session on Technology and Apps for Teaching Mathematics and Statistics
San Diego, CA
January 13, 2018
2. About me
• I use a mixture of fixed-response (MCQ) and free-response (FRQ) items
• I like tech, hate paper, but need paper
• I love two applications for grading and I want to marry them
3. auto-multiple-choice (AMC)
• Desktop app (Unix, macOS)
• Open source (Gtk3, Perl, SQLite)
• LaTeX driven
• Optical Mark Recognition (OMR)
• Exports annotated PDF which can be emailed or uploaded to LMS
4. Gradescope (GS)
• Web application
• Free for a limited time
• PDF driven
• AI trained by Instructor
• Student views annotated PDF and comments on the GS website
5. AMC problem types
Fixed-response
• options permuted by student
• programmable
Free-response
• marked by instructor on paper or scanned PDF
• no support for rubric
6.–17. [images: scanned exam pages; the code at the top of each page identifies the sheet and page number]
18. GS problem types
Fixed-response
• AI grouping of common responses
• no permutation, no programmability
• More features “coming soon”
Free-response
• simple PDF annotation tools
• develop rubric on the fly
• add comments
19.–30. [images: Gradescope grading interface]
31. The Marriage Problem
• Write exam in LaTeX for AMC
• Grade MCQ in AMC
• Upload cover page and FRQ to GS
• Grade FRQ in GS
• Then what?
32. Solutions
• Three columns in LMS for each assessment: MC in AMC, FR in GS, and a total
• Or: programmatically incorporate GS scores (and comments?) into AMC and import to LMS from there
33. Progress
• Command-line script to import GS scores file as AMC responses (“captures”)
• Re-scoring in AMC records the corresponding scores
$ AMC-Import-Gradescope --qlist FR-Sch-35-9 FR-Sc
34. [image: console with command line]
35. Complications
• AMC uses an object-relational mapping (good) which is lightly documented (bad)
• No slot for GUI plugins at the capture stage
36. What now?
• Might go back to the three-column solution for spring
• Look at deeper injection into the AMC application architecture
• Welcome collaborators
Hi, thank you to the organizers for accepting my contribution, and to the audience for making the trek all the way to the other side of the SDCC.
In my career I have settled into a chalk-talk-and-worksheet class management style. I love technology but I tend to keep it out of the classroom. I like quizzes and exams with a combination of fixed-response (MC) and free-response questions. But I think paper is the best way to administer those.
Outside of class, though, I hate paper. I don’t want to lug it around, mark it up, and chase students down to return it to them.
In the past few years I’ve fallen in love with two applications for grading paper electronically. So much that I want to marry them. But to each other. This talk is about my efforts to do that.
The first of these apps is auto-multiple-choice or AMC.
It runs on your machine. You create questions in a LaTeX file and print a unique copy for each student. Students mark their responses on paper, you scan the completed exams, and the AMC software reads the scans, looking for marked responses.
You match exams to students by letting them encode their ID numbers, just like on standardized bubble tests.
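For concreteness, the ID grid in the LaTeX source can look roughly like the sketch below; the field name etu and the digit count are placeholders of mine, and the command is \AMCcode in older AMC releases (newer releases rename it \AMCcodeGrid):

    % Sketch: a grid where students bubble in an 8-digit ID number.
    % "etu" is an arbitrary field name; match the digits to your IDs.
    \begin{center}
      Please code your student ID number:\\[2ex]
      \AMCcode{etu}{8}
    \end{center}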
The last bit, about uploading annotated PDFs to the LMS, was the subject of the WebSIGMAA invited address I gave two years ago.
The first five users at any university are free forever; everybody else gets two terms free, and then you're hooked and have to pay: $3/student/term for individuals, $5/student/term for teams. I'm pushing our university to pay for a site license.
“PDF driven” means you upload a template file and outline the questions on that PDF. You have to do this “manually”.
I want to contrast these two applications' features on fixed-response vs. free-response questions.
“Programmable” means parameters can be randomized; Lua scripting is an option.
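Not from the slides, but to give a flavor of the source, here is a minimal sketch of a fixed-response AMC question. The question text and the derivatives group name are placeholders of mine; AMC permutes the choices per student, and \AMCrandomseed makes the shuffling reproducible:

    % Sketch of an AMC fixed-response question; choices are
    % shuffled per student. \AMCrandomseed fixes the RNG seed.
    \AMCrandomseed{1237893}
    \element{derivatives}{
      \begin{question}{power-rule}
        What is the derivative of $x^3$?
        \begin{choices}
          \correctchoice{$3x^2$}
          \wrongchoice{$x^2$}
          \wrongchoice{$3x^3$}
          \wrongchoice{$\frac{1}{4}x^4$}
        \end{choices}
      \end{question}
    }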
With FRQs, the flow is to write the problem, draw a box for the student's work, and a bubble form is generated for the score. So the instructor is the one who does the bubbling. You can comment either on the paper or on the scanned PDF. But there's no support for a rubric; you have to keep track of that yourself.
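Again a sketch of mine rather than the talk's: AMC's \AMCOpen command prints the ruled answer box together with the row of score boxes that the instructor ticks while grading. The labels and point values here are placeholders.

    % Sketch of an AMC free-response question: a ruled work box plus
    % score boxes (0-3 points) that the instructor fills in by hand.
    \begin{question}{integral}
      Compute $\int_0^1 x^2\,dx$.
      \AMCOpen{lines=6}{%
        \wrongchoice[0]{0}\scoring{0}%
        \wrongchoice[1]{1}\scoring{1}%
        \wrongchoice[2]{2}\scoring{2}%
        \correctchoice[3]{3}\scoring{3}}
    \end{question}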
The code at the top of each page identifies the sheet and page number.
By contrast, let me describe the features of Gradescope.
Gradescope lets you flag mistakes on a problem and attach a comment to each flag. You can assign a deduction to the flag, and if you change your mind, you change the deduction once and it updates automatically everywhere it was applied.
With MCQs, the AI can group similar responses, so you can assign a score to all the students who marked “C” on a problem. But then C needs to be the right answer on every student's exam, so you lose the random permutation of options. They're working on a solution to this, but it's not there yet.
So how can I get the best of both worlds here? What I want to do is:
Here are some ways to work around this. The “primitive” way would be to just keep them separate. The advantage of the first method is that it's simple to implement. But my LMS can't automatically add columns, so if I change one column I have to change the total manually (or re-upload it).
The interesting/complicated way:
[image: console with command line]
The command-line options describe the parameters: I need to tell the program which GS problems get mapped to which AMC problems, and how to match student records from one database to the other. Then I import the scores file.
The program maps the scores back to boxes to be checked: it finds the student's paper, then the student's problem, then marks that box as checked (different from assigning the score directly; it's mimicking my checking of a box).
So if I say in Gradescope that a problem gets 4/5, the script will make sure that the box for 4 points is ticked for that problem. Then you re-score in AMC and the proper scores are recorded.
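The core step, heavily simplified and under assumptions of mine about AMC's capture schema (a capture_zone table, zone type 4 for answer boxes, with id_a the question and id_b the box number; verify against your AMC version before trusting any of this), might look like so in Perl:

    #!/usr/bin/env perl
    # Sketch only: tick one score box in AMC's capture database, as if
    # the grader had checked it by hand. Table and column names follow
    # my reading of AMC's schema and may differ between versions.
    use strict;
    use warnings;
    use DBI;

    my ($project, $student, $copy, $question, $box) = @ARGV;

    my $dbh = DBI->connect("dbi:SQLite:dbname=$project/data/capture.sqlite",
                           "", "", { RaiseError => 1 });

    # type=4 marks answer-box zones; manual=1 records a manual "checked"
    # capture, which AMC's re-scoring then turns into points.
    $dbh->do("UPDATE capture_zone SET manual=1
              WHERE student=? AND copy=? AND type=? AND id_a=? AND id_b=?",
             undef, $student, $copy, 4, $question, $box);

    $dbh->disconnect;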
So now the PDF generated by AMC includes those checked boxes, so the student knows their score, but they still need to go to GS to get the comments. I’d like to incorporate those (and maybe even the extra comments and in-GS annotations).
The other thing I would like to do is not run this on the command line. Ideally I could do it within the AMC GUI.
[image: error]
The ORM makes “undoing” complicated. The data layer is transactional, but buggy.
I attempted a plugin at the “reporting” stage, but not enough of the application is exposed there.
To be honest, my commitment to this path hindered my ability to give timely feedback. This defeats the whole purpose.
Thank you very much! If you want these slides, you can find them on slideshare, and if you want to browse my code or contribute, you can find it on github.