The peer review rubric is based on the following four questions:
- were you programming on the relevant assignment? (1 point)
- did you have a pair partner? (9 points)
- did you rotate driver/navigator roles frequently? (roughly evenly, four times an hour) (40 points)
- did you communicate effectively? e.g. regular talking, discussing the task at hand (50 points)
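The four questions sum to 100 points. As a sketch only (the key names below are my own labels, not anything from the course materials), the weighting can be expressed as:

```python
# Point ceilings for each rubric question (labels are illustrative).
RUBRIC_MAX = {
    "relevant_assignment": 1,
    "had_pair_partner": 9,
    "rotation": 40,
    "communication": 50,
}

def total_score(scores):
    """Sum the component scores, capping each at its rubric maximum.

    Missing components count as zero.
    """
    return sum(min(scores.get(k, 0), mx) for k, mx in RUBRIC_MAX.items())
```

For example, a review awarding full marks on the first two questions, 30 for rotation and 40 for communication totals 80 points; note that rotation and communication dominate the grade.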
There’s a conflict here, of course, between trying to prevent people gaming the system (submitting irrelevant or borrowed videos) and encouraging effective pair programming behaviour. The peer review process itself is not without controversy. We’ve softened the wording of the rotation criterion to indicate that roles do not need to be rotated precisely every 15 minutes, and widened the range of rotation points that can be assigned. The rotation rubric now looks like this:
Did the participants rotate driver/navigator roles frequently? Ideally at least once every 15 minutes, i.e. this means that roughly every 15 minutes the person who is typing stops and allows their partner to type for the next 15 minutes, while switching to an advisory “navigator” role. Check particularly the time-indexes submitted.
Don’t be too strict - the point is that the participants rotate roles approximately four times an hour - so, for example, a rotation at 13 minutes, then after 17 minutes, then after 16 minutes, and then after 14 minutes is fine.
- No (0 points) There was no driver navigator rotation, only a single person contributing code during the video
- Not so much (10 points) There was only one driver/navigator rotation per hour of coding
- A couple of times (20 points) There were a couple of driver/navigator rotations per hour of coding
- Several times (30 points) There were 3 or even 4 rotations, but they weren’t spaced out very well over an hour’s coding
- Yes (40 points) There were at least 4 rotations and all the rotations were roughly evenly spaced throughout the pairing, i.e. at least one every 15 minutes
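In principle, the rotation bands above could be approximated mechanically from the submitted time-indexes. The following is a rough heuristic of my own devising (not the actual grading logic) that follows the band descriptions and the "don't be too strict" guidance:

```python
def rotation_band(rotation_minutes, duration_minutes=60):
    """Map rotation time-indexes (minutes into the video) to a rubric band.

    Heuristic sketch only: the thresholds loosely mirror the rubric wording,
    with a little slack beyond 15 minutes per the "don't be too strict" note.
    """
    n = len(rotation_minutes)
    if n == 0:
        return 0                      # No: a single person drove throughout
    per_hour = n * 60 / duration_minutes
    if per_hour < 2:
        return 10                     # Not so much: ~one rotation per hour
    if per_hour < 3:
        return 20                     # A couple of times
    # 3+ rotations: check that the gaps between rotations are roughly even.
    points = sorted(rotation_minutes)
    gaps = [b - a for a, b in zip([0] + points, points + [duration_minutes])]
    if per_hour >= 4 and max(gaps) <= 17:
        return 40                     # Yes: frequent and roughly evenly spaced
    return 30                         # Several times, but unevenly spaced
```

The worked example from the rubric - rotations after 13, 17, 16 and 14 minutes, i.e. time-indexes 13, 30, 46 and 60 - lands in the full-marks band under this heuristic.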
Did the pair continue to communicate effectively through the pairing process? Specifically, did the driver explain what they were doing as, or around, the process of them typing. Did the navigator ask questions or make suggestions or look up relevant documentation to the task at hand?
- No (0 points) There was no communication or no helpful task focused communication between driver and navigator.
- Not so much (10 points) There was almost no communication between driver and navigator, but they did communicate a little. Or the majority of communication was not helpful or task-focused.
- Some (20 points) There were some occasional periods when the pair was communicating effectively, but there were longer periods of total silence and/or most of the communication was unrelated to the task and not helping the pair move forward.
- A Fair Amount (30 points) There was communication, but it was on and off, or some of it was unhelpful, e.g. talking off topic, getting angry, etc.
- Quite a lot (40 points) The communication was pretty good, but there was the occasional period with no communication of any kind, when perhaps it might have been helpful.
- Yes, lots and very effectively (50 points) There was regular communication between the driver and the navigator. Although there might be occasional silences it is clear from the communication that driver and navigator are focused on solving the same task together, and they are using excellent communication skills to complete that task.
The edX peer review system is also a little tricky for some learners, since progress through the review training component is not obvious. That said, there is great support for learners to give feedback on the reviews they receive, and a nice admin interface to allow us to override peer review grading where necessary. I just wish I could hook it all up to a slack bot via APIs ...
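Since that hook-up is only a wish, here is a purely hypothetical sketch of what the Slack end could look like, using Slack's standard incoming-webhook mechanism (a JSON POST with a "text" field). The webhook URL and field names are placeholders; nothing here reflects an actual edX integration.

```python
import json
import urllib.request

# Placeholder webhook URL - a real one would come from Slack's app config.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_payload(learner, rotation_pts, communication_pts):
    """Build the Slack message body summarising a peer review."""
    return {
        "text": (f"Peer review for {learner}: "
                 f"rotation {rotation_pts}/40, "
                 f"communication {communication_pts}/50")
    }

def notify_review(learner, rotation_pts, communication_pts):
    """POST the summary to the (placeholder) incoming webhook."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(build_payload(learner, rotation_pts, communication_pts)).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

The message-building and network-posting steps are split so the summary format can be checked without actually calling Slack.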
The peer review of pair programming videos is an experiment to see if we can effectively and reliably grade pair programming activity. Pair programming has been shown to have great benefits for learners in terms of understanding. The process of explaining technical concepts to others is hugely beneficial in terms of consolidating learning. We’ve encouraged learners in the MOOC to use the AgileVentures remote pairing system to schedule pairing events, and use a shared Gitter chat room for additional coordination.
The most challenging parts of MOOC pair programming appear to be the scheduling and the frustrations arising from pair programming logistics. Anecdotally, a fair amount of time is spent in Google Hangouts waiting for a pair partner to appear, or conversely having one’s pair partner leave early. Some people feel nervous about the process of pair programming with a different stranger each week.
Some of this is specific to a MOOC, where participation drops precipitously as the weeks go by. For the first assignments it’s easy to find a partner, but if you join the course late, or simply move on to the later assignments where fewer learners are available, finding a pair partner in your timezone can be challenging.
Of course, your pair partner doesn’t have to be in the same timezone as you. In principle there are learners in other timezones whose schedules happen to overlap with yours. We don’t require learners to pair with others in the MOOC. They can pair with their room-mates, work colleagues, friends, whoever comes to hand. The only requirement is that a video is submitted. AgileVentures events provide a relatively convenient way to generate Hangouts on Air, which make it fairly straightforward to produce a YouTube screencast of a pairing session. Even the hangout itself can be used as a recording mechanism for local pairing sessions. There are, however, too many points at which one can get stuck.
Have we found the perfect solution? Certainly not. The difficulty with a more rigid pair scheduling system in a MOOC is that one cannot rely on the participation of learners who are only involved on an ad-hoc basis. That said, my experience running a bootcamp is that an imposed pairing schedule is actually preferred by most learners, since it removes the cognitive overhead of negotiating with others about initiating pairing sessions.
Perhaps for the next run of the MOOC, we could have a more structured approach where we get groups of 8 together at specific times, with the assumption that at least 2 of the 8 will show up and will be able to pair … we’ll need to analyse all the pairing data in some detail first …