MOOC-Raking, Part II: Peer Marking

In my last entry on MOOCs (Massive Open Online Courses), I mentioned that one of the reasons I felt some MOOCs didn’t offer an educational experience as high in quality as traditional, in-person university courses is that qualitatively evaluating the work of thousands of students poses a huge problem for instructors, and qualitative evaluation is necessary for key skills like writing, artistic projects, and other forms of communication.

I also mentioned that I’d discuss some of the ways I’ve seen MOOCs try to get around this limitation. So here are a few.

First and most rarely, instructors can try to make qualitative evaluation unnecessary by relying entirely on quantitative evaluation. This is rare because most people understand that it limits many subjects to their most trivial aspects: math becomes arithmetic, English literature becomes memorizing what colour Scout Finch’s jacket was on page 43, and history becomes a list of dates and names. Most of the MOOCs I’ve taken didn’t try this, or at least didn’t stretch it beyond its usefulness.

For courses in which factual assessment was necessary, multiple-choice questions worked well for that aspect of the material. However, different courses set different multiple-choice policies: some allow students to re-try a quiz an unlimited number of times with no penalty, facing similar or even identical questions; others allow one fewer attempt than there are answer options; and still others allow only a single attempt. Personally, I like the unlimited version best as a student, because it lets me be lazy and try the quiz without finishing the lectures or readings, but pedagogically it definitely leaves something to be desired.

Pedagogically speaking, I prefer the single-attempt policy: it forces me to consider my answers instead of just ticking boxes and seeing what happens when I submit.

There’s also the question of what tools one can use. Personally, I prefer that questions be constructed so as to be un-Googlable (i.e. to require synthesis and judgement), but I like to be able to check my notes. One course I’m taking asks students to treat the exams as closed-book tests, and I’ve kept to that standard. However, I find it frustrating: first, I can’t shake the feeling that others who are willing to cheat by using their notes or online sources get much higher marks than I do with no effort; and second, I don’t care whether I can memorize this information. I just want to learn it. This mismatch between my learning goals and the instructors’ pedagogical goals feels onerous.

Second and most often, instructors ask students to peer evaluate. In my opinion, this is a mixed bag.

It’s mixed because most students are taking the course precisely because they aren’t qualified to evaluate work in the field. Yes, many are very intelligent people, and some do know as much or more than the instructors. However, it’s much more likely that one’s MOOC-mates will be just plain wrong. If or when this happens, there’s no way to address it within the course, because if the instructors don’t have time to mark thousands of papers, they certainly don’t have time to deal with thousands of he-said-that, she-called-my-work-this complaints.

True, peer evaluation can be a useful tool in the traditional classroom as well, and it can certainly edify the marker through exposure to the efforts of others. But the problems that can arise with this teaching tool in brick-and-mortar universities are exacerbated by the anonymity of online interactions.

Students asked to peer-evaluate often worry that others will game the system by giving good marks to people they like and bad marks to people they don’t. In a physical classroom, instructors are advised to keep an eye on students’ actual work and relationships and to discount or otherwise minimize the effect of any suspect marks.

But online, the same sheer scale that makes expert grading impossible also makes it impossible to monitor every single grade or to respond to every complaint of unfair marking.

Moreover, some aspects of the online experience can exacerbate biased marking, and the bias swings both ways. On the one hand, there are few or no consequences for being a troll: you will literally never have to encounter this person again. You will never even meet a group of classmates and wonder whether one of them was the person to whom you were mean.

On the other hand, for the design course I took, in which we had to mark people’s creative design projects, I went to the opposite extreme. All these people were trying, and I certainly had no expertise in design. Sometimes I couldn’t understand their explanations, but people from all around the world were taking this course, many presumably working in a second language. I certainly couldn’t have done better than they did. And, honestly, I knew I didn’t understand enough of the design principles to judge people on them with the same degree of experience and knowledge I bring to, say, writing or history. So I gave them higher marks than I would have given my own non-virtual students under the same circumstances.

And when I got my own marks back, they were way higher than I thought I deserved. To be frank, my work in that course was C work at best. It wasn’t my focus, and I took a lot of shortcuts because, you know, writing and an actual job > a free online course I’m taking for the sake of curiosity. But I got an A that I definitely didn’t deserve.

So far, the only place I’ve seen peer marking work all right is in the coding course I’m currently taking. I don’t yet have my marks back, but the purpose of the peer grading in this course is to assess objective qualities of code that a computer would have trouble evaluating: for instance, has the student included tests for certain scenarios? Have they used the best-practice structure, or strung together inefficient code that merely works? Do their comments and function names follow the guidelines for clarity?
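To give a sense of what those criteria look like in practice, here’s a minimal sketch in Python (I haven’t named the course, and it may well use another language; the function and its test scenarios are my own invented example, not an actual assignment). A peer marker wouldn’t just check that it runs; they’d check that the name says what the function does, that the comments are accurate, and that the tests cover the edge cases.

```python
from datetime import date

def count_overdue_books(due_dates, today):
    """Return how many due dates fall strictly before today.

    due_dates: a list of datetime.date objects
    today: a datetime.date
    """
    # A book due today is not yet overdue, hence the strict comparison.
    return sum(1 for due in due_dates if due < today)

# Scenario tests, the kind a peer marker is asked to look for:
assert count_overdue_books([], date(2013, 1, 15)) == 0  # nothing borrowed
assert count_overdue_books([date(2013, 1, 15)], date(2013, 1, 15)) == 0  # due today
assert count_overdue_books([date(2013, 1, 1), date(2013, 2, 1)],
                           date(2013, 1, 15)) == 1  # exactly one overdue
```

A grading script can run those asserts, but only a human can judge whether `count_overdue_books` is a clear name or whether the comment about the strict comparison actually explains the choice.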

I found this peer evaluation much easier than the one for the design course. The possible points were divided into a series of Yes/No questions (or sometimes Yes with quality X / Yes but without quality X / No), and although some of the questions were subjective (different people will have different ideas of what it means for a function name to be clear, for instance), overall I doubt there was much variance between the grades. In fact, the only person who gave me a different mark was me: I didn’t feel my comments had reached a satisfactory level of clarity, but my peers disagreed.
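Structurally, that kind of marking scheme is little more than a checklist with points attached. Here’s a rough sketch of what I mean (the questions and point values are invented for illustration, not the course’s real rubric):

```python
# Each criterion is a Yes/No question worth a fixed number of points.
RUBRIC = [
    ("Does the submission include tests for the required scenarios?", 2),
    ("Does it follow the structure recommended in lecture?", 2),
    ("Are the function names clear?", 1),
    ("Are the comments clear?", 1),
]

def score_submission(answers):
    """Sum the points for every criterion the marker answered yes to.

    answers: a list of booleans, one per rubric question, in order.
    """
    return sum(points for (_question, points), yes in zip(RUBRIC, answers) if yes)

# A marker who, like my peers, found everything satisfactory:
print(score_submission([True, True, True, True]))   # 6 out of 6
# My own harsher self-assessment, docking the comment-clarity point:
print(score_submission([True, True, True, False]))  # 5 out of 6
```

With criteria this granular, two honest markers will rarely diverge by more than a point or two, which matches my experience.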

So there are some cases in which peer marking works, and they tend to cluster in subjects with objective criteria: programming, mathematics, language mechanics, and the hard sciences. However, I’m sceptical of its use in MOOCs that ask students to grade according to subjective criteria. That’s not to say that MOOCs can never grade humanities work satisfactorily; it’s just that, from what I’ve seen so far, instructors will have to provide much more specific and clear marking systems, or even full rubrics, to make peer evaluation worthwhile there.

Overall, then, MOOC grading depends heavily on the subject matter and the cleverness of the instructors. Because most MOOC instructors try to replicate university-classroom methods of evaluation, courses that traditionally use methods like multiple-choice exams are more satisfying to me as a student. For other types of courses, although the material is equally valuable, students must accept that they’ll have to cultivate their own sense of accomplishment, as evaluation is unlikely to feel rewarding… for now. Maybe soon, instructors will move away from classroom-based evaluation and try something completely new that will blow me out of the water.

