English language arts teachers have long recognized the critical role that meaningful peer feedback plays in process-writing classrooms. However, one limitation of traditional face-to-face peer response I noticed in my own teaching is that I never knew who was engaged with others and at what level. I didn’t have an efficient way of knowing who was giving (and who was getting) good feedback.
Over the past few years, peer feedback has been integrated into learning management systems like Canvas, Blackboard, and Turnitin. Stand-alone applications like Peerceptiv and Eli Review are online peer review systems that provide data never possible in traditional face-to-face settings. I’ve recently begun incorporating Eli Review into my teaching and am excited about the potential for learning and literacy development.
Once students submit a draft of their writing in Eli, reviewers add comments and rate the draft on the basis of the assignment’s goals. During the next phase, student writers indicate the helpfulness of the feedback they received in two ways—by rating it and by stating whether they will incorporate the reviewers’ suggestions into their revision plan. In one instance, a student writer rated a comment by Student #39 as more helpful (four stars) than one by Student #37 (three stars), so Student #39 will have a higher helpfulness rating for this particular task. However, the writer indicated he will add both suggestions to his revision plan; this will have a positive impact on both students’ helpfulness ratings.
Teachers can also endorse feedback, which raises a student’s helpfulness score. So far in my use of Eli Review, I haven’t endorsed any comments because I want students to take more ownership of the process. Bill Hart-Davidson, cocreator of Eli Review, advises teachers to use the endorsement feature judiciously—for example, by telling students at the beginning of an assignment that they will endorse certain types of feedback that support particular learning goals. In an argumentation unit, a teacher might endorse a reviewer’s comment on a peer’s use of counterarguments.
With each subsequent assignment, helpfulness ratings and other data accumulate; after multiple reviews, a student’s overall helpfulness index is quantified.
As a teacher, I can use data from Eli in a number of ways. For example, engagement data can serve as formative assessment. I can sort the helpfulness score column in descending order and use that information to form groups of equal or mixed ability. The same list also identifies students with the lowest helpfulness scores, which suggests (at least) one of two things about those students: either they are not engaged in the activity, or the kind of feedback they’re giving isn’t considered useful by their peers. I can then discuss with those students ways to provide more valuable feedback.
In the article “Learning by Reviewing,” Kwangsu Cho and Charles MacArthur found that students who read and reviewed peers’ papers outperformed students who read but didn’t review those same papers. Peerceptiv’s Christian Schunn cites a decade’s worth of research describing the numerous cognitive gains students get through the act of online peer reviewing.
Teachers have long known that students become better writers by reading and reviewing peers’ work. Data generated in online peer feedback systems make that learning more visible.
Chris Sloan teaches high school English and media at Judge Memorial Catholic High School in Salt Lake City, UT. He is also a PhD candidate in Educational Psychology and Educational Technology at Michigan State University. His article “The Relationship of High School Student Motivation and Comments in Online Discussion Forums” was published in March 2015 in the Journal of Educational Computing Research.