Peer Assessment in MOOCs

Introduction

As mentioned in some previous posts, I’m currently taking a MOOC (Massive Open Online Course) through Coursera. Overall I’m very pleased with the course: I’m enjoying it a lot and learning a lot. It’s my second MOOC, and the first I’ve taken from one of the big three vendors (Coursera, Udacity, and EdX). In the process, I’m gaining a lot of insight into how to design for this exciting new format of university education.

One area where I have been pleasantly surprised is how well peer assessment of assignments works in MOOCs. For those not so familiar with the format: in the MOOC environment there can be thousands of students per course (this is where the “Massive” part comes in). As a result, grading by the instructor or facilitators is not practical or cost effective. Some types of assessments, such as multiple choice, short answer, or matching questions, can be graded by machine. The challenge is grading more open-ended work like essays, projects, and problem sets; these more complex submissions cannot readily be graded by a machine.

As an alternative, therefore, “peer assessment” is used. That is, the grading is carried out by the students themselves.

Peer Assessment in MOOCs: How it Works

Each student is randomly assigned five other students’ submissions (anonymized) to grade using an online grading form. A rubric guides the marker on the areas to assess and the criteria for assigning or subtracting marks.

Here’s an example of a rubric used for a recent assignment on programming a simple virtual stopwatch (with the stopwatch itself on the right):

[Images: the stopwatch assignment rubric and the finished stopwatch]

A web form based on the rubric is used to collect the assessment feedback. Drop-down menus are used to select the grade on each rubric item (e.g., 0/2, 1/2, or 2/2), with criteria given for each possible mark on each item. Text boxes are provided for more detailed feedback, so the grader can explain why marks were taken off or offer some positive feedback on a truly impressive submission.

[Image: the rubric-based grading web form]

The rubrics help to focus the markers and to standardize the grading, improving the all-important Inter-Rater Reliability. For people to take MOOCs seriously, students need to feel that the grading is as uniform and objective as the grading they would get in a normal university course. (I won’t get into the thorny question of how much that exists in current brick and mortar university classrooms 😉 ) Assigning multiple submissions to each student for marking also means that each student is in turn assessed by multiple peers. This adds another level of objectivity on top of the rubric; variance between markers can be washed out by dropping the outlier grades (lowest and highest) and averaging over the remaining grades.
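
To make that aggregation step concrete, here is a minimal sketch in Python. The function name and the exact rule (drop one lowest and one highest score, then average the rest) are my own assumptions for illustration; Coursera hasn’t published the exact algorithm it uses.

    def aggregate_peer_grades(scores):
        # Combine the peer grades for one submission by dropping the single
        # lowest and single highest score and averaging the rest.
        # (A hypothetical rule for illustration, not Coursera's published algorithm.)
        if len(scores) < 3:
            # Too few grades to drop outliers; fall back to a plain average.
            return sum(scores) / len(scores)
        trimmed = sorted(scores)[1:-1]  # drop lowest and highest
        return sum(trimmed) / len(trimmed)

    # Example: five peer grades (out of 18) for the stopwatch assignment.
    print(aggregate_peer_grades([18, 17, 16, 17, 9]))  # ~16.7: the 9 and one 18 are dropped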

Overall this encourages objective, fair, and thorough grading. Extremely lax or superficial grading, and abusive, over-strict grading, tend to get removed when the low and high outliers are dropped. There is also a bit of a “do unto others” / karma effect here: the person you are grading on this assignment could well be grading you on the next one, so you don’t want to abuse the power. On the other hand, you know others are going to be marking down your errors if you made them, so you wouldn’t want to be over-lenient either.

The student’s duty to contribute to marking is reinforced by making it part of the criteria for success: students lose marks on their next assignment if they fail to grade their minimum of five submissions. There is also the option to mark more than the minimum. In the course I am taking this is voluntary and carries no added benefit; I would suggest to the course designers or to Coursera that a few bonus marks, or an offset against errors on other assignments, might be one way to incentivize learners to go beyond the minimum.

Another neat touch is self-assessment. After grading the requisite five peers, each student is asked to evaluate his or her own work, and this too becomes part of the grade. From the perspective of Adult Learning Theory, this gives learners a nice opportunity to reflect on their own work after a few days away from it, in light of the mistakes and highlights they have seen in others’ submissions, helping to further reinforce the learning.

What we can learn from this in general

Overall, this is a system that works nicely and helps to reinforce the idea that MOOCs can work in practice to replicate a university course online.

I think, however, we can take away some lessons from this format for our design of e-Learning in general. Assessment is often a weak point in e-Learning / m-Learning because of the limitations of what can be graded by machine. This is why so much assessment in e-Learning takes the form of simple, closed-answer question types readily supported by authoring tools: True/False, multiple choice, short answer, drag and drop, and matching. Some interesting assessment options can be engineered for soft skills training by combining multiple choice questions with slide branching to create “Choose Your Own Adventure” style interactive scenarios. But open-ended submission types like essays, video responses, or project deliverables are not so readily supported.

Perhaps we can take some inspiration from this technological implementation of peer assessment and self-assessment in MOOCs to enable simple grading of more complex assessments in e-Learning, at least in cases where a large number of students take the course at the same time.

All that would be required from an Instructional Designer is the design of easy-to-use, easy-to-understand grading rubrics, along with web forms where students enter a mark via a drop-down menu for each rubric item and type in open-ended grader comments.
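
To give a rough idea of what such a rubric looks like once it is expressed in a form that a web form generator (or the engine sketched below) could consume, here is a hypothetical definition in Python. The item names, point values, and criteria are invented, loosely modeled on the stopwatch rubric above.

    # A hypothetical rubric definition; items, points, and criteria are invented.
    STOPWATCH_RUBRIC = [
        {
            "item": "Time displayed in minutes:seconds.tenths format",
            "criteria": {0: "Format missing or incorrect",
                         1: "Mostly correct, minor formatting errors",
                         2: "Format fully correct"},
        },
        {
            "item": "Start and Stop buttons work as specified",
            "criteria": {0: "Buttons missing or non-functional",
                         1: "Buttons work, but with bugs",
                         2: "Buttons work correctly"},
        },
    ]

    # One drop-down per rubric item (options taken from its criteria),
    # plus a free-text comment box for detailed feedback.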

To handle all the magic in the background, programmers would need to implement an engine for the following (a rough sketch of the core steps comes after the list):

  • Receiving submissions and enforcing any soft/hard deadlines
  • Randomly assigning submissions to students for grading
  • Receiving and processing the submitted grades and calculating final assignment grades
  • Communicating the results to the learners
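
As a minimal sketch of how the assignment and grade-calculation steps could fit together, here is a simplified Python mock-up. The scheme (shuffle the students, then hand each submission to the next five students in the shuffled order) and all the names are my own assumptions for illustration; a real engine would also have to handle deadlines, missing grades, and the penalties discussed above.

    import random

    def assign_peer_grading(student_ids, graders_per_submission=5):
        # Randomly assign each student's submission to a fixed number of peers.
        # Shuffle the students, then give student i's submission to the next
        # `graders_per_submission` students in the shuffled order (wrapping
        # around), so nobody grades their own work and everyone grades the
        # same number of submissions. A simplified scheme for illustration.
        ids = list(student_ids)
        random.shuffle(ids)
        n = len(ids)
        k = min(graders_per_submission, n - 1)
        assignments = {}  # submission owner -> list of assigned graders
        for i, owner in enumerate(ids):
            assignments[owner] = [ids[(i + offset) % n] for offset in range(1, k + 1)]
        return assignments

    # Example: ten students, each submission graded by five anonymous peers.
    print(assign_peer_grading(range(10)))

    # Final grades per submission could then be computed with
    # aggregate_peer_grades() from the earlier sketch.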

In summary, online peer and self-assessments based on rubrics and supported by the right IT functionality offer refreshing new possibilities for richer forms of assessment in e-Learning, given a large enough pool of students taking the course at the same time.
