Better teamwork through peer review


Our vision is that teamwork improves with peer review. Hence, we built a peer review tool that allows students to assess each other in a grade-relevant way, builds on game theory, and reduces incentives to free-ride. We now have evidence that, by implementing peer review, we get less free-riding, better communication, and better group performance. Students enjoy working in teams more, and most students find their grades fairer.

Most graded coursework at university takes one of two forms. On the one hand, there are good old-fashioned exams and essays that are graded by the professor and result in individual grades. On the other hand, there are group projects, also graded by the professor, which typically result in the same grade for all team members. Neither of these matches much of modern professional reality after university, where the majority of value is generated collaboratively but rewards/remuneration typically vary within the team (because of factors such as seniority, relative commitment, bonus schemes, etc.). There is a recent tendency to redesign performance pay and incentive contracts by including peer-review elements. Rather than managers assessing their staff top-down, colleagues get to review each other to indicate bottom-up who deserves extra reward.

Students submit their coursework as teams, and are (individually) assessed based on two ingredients:

First: Regular grading of group-level project quality:
This element is what the teacher and experts think the group project is worth in terms of an average grade. Usually, this step also involves some qualitative feedback and a report on the submitted project, summarized by an average grade (e.g. a 5.5). Importantly, the professor and experts do not “look into the process” of the teamwork to identify who did what; that assessment is left to the peers.

Second: Peer review of individual contributions:
Each team member assesses the other team members’ contributions to the project by allocating percentages to them. The key feature of the reviewing method is that one does not report how much one thinks one did oneself, and that one’s own review of the others does not change how much one gets oneself. Instead, one only reviews the other group members’ performances, expressed as percentages of “the rest” (i.e. what one did not do oneself). As a result, how much you get depends only on what the others say about you, not on what you say. Letting people rate themselves has, by contrast, resulted in biases through manipulation, primarily by relatively low contributors who inflate their own grades.
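The report structure and the resulting impartiality can be sketched as follows. The averaging rule below is a deliberately naive stand-in for illustration only, not the actual mechanism used by the tool, and the names and numbers are made up:

```python
# Peer-review report aggregation, sketched with a naive averaging rule
# (illustrative only; the actual tool uses a different, game-theoretically
# grounded mechanism; names and numbers are made up).

def peer_shares(reports):
    """Each member's share of credit, computed only from OTHERS' reports.

    reports[j] allocates percentages (summing to 100) among the members
    other than j, i.e. j's view of how "the rest" was split.
    """
    members = sorted(reports)
    n = len(members)
    for j in members:
        assert abs(sum(reports[j].values()) - 100) < 1e-9, "must sum to 100"
    # i's raw score averages only what the others said about i, so i's own
    # report cannot move i's own share (the impartiality property).
    raw = {i: sum(reports[j][i] for j in members if j != i) / (n - 1)
           for i in members}
    total = sum(raw.values())  # constant (n*100/(n-1)) for valid reports
    return {i: raw[i] / total for i in members}

reports = {
    "ana":  {"ben": 60, "cleo": 40},
    "ben":  {"ana": 70, "cleo": 30},
    "cleo": {"ana": 50, "ben": 50},
}
shares = peer_shares(reports)  # ana gets the largest share here
```

Because the normalizing total is fixed whenever every report sums to 100, changing one's own report reshuffles only the others' shares, never one's own.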

These two components are aggregated so that the average grade achieved in a given team equals the group grade. Individual grades may vary based on the outcome of the peer review of the other group members’ contributions if there is consensus amongst the peers that some team members deserve more credit than others.
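A minimal sketch of this aggregation step, assuming peer shares have already been computed; the spread factor and the 1–6 grade bounds are hypothetical parameters, not necessarily what the tool uses:

```python
# Turning peer-review shares into individual grades whose team average
# equals the group grade (illustrative sketch; the real formula and the
# spread/clamping parameters of the tool may differ).

def individual_grades(group_grade, shares, spread=2.0, lo=1.0, hi=6.0):
    """Spread the group grade around each member's deviation from 1/n."""
    n = len(shares)
    grades = {}
    for member, s in shares.items():
        # n*s - 1 is 0 for an exactly average contributor, so deviations
        # cancel out and the unclamped team mean equals group_grade.
        g = group_grade + spread * (n * s - 1)
        grades[member] = min(hi, max(lo, g))  # clamp to the grade scale
    return grades

shares = {"ana": 0.40, "ben": 0.35, "cleo": 0.25}  # must sum to 1
grades = individual_grades(5.0, shares)  # team average stays 5.0
```

Note that clamping at the scale boundaries can shift the team average slightly in extreme cases; in the unclamped range the average is preserved exactly.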

The exact mechanism used is the one presented in “Impartial division of a dollar” by Geoffroy De Clippel, Hervé Moulin and Nicolaus Tideman (JET 2008). The mechanism ensures that lying has no first-order effect on what one gets oneself, and the reviews are aggregated in a particular way that guarantees other attractive properties such as consensuality and anonymity. Sven Seuken (special thanks to him!) and I had adapted it for a blockchain start-up company before I worked on it as a grading tool. I was also inspired by Spliddit (see a demo tool), which was developed jointly with some of the authors of the aforementioned article (their website offers some other fun mechanisms too). We tailored the user flow and privacy settings to the teacher-student setting.

I must say that it is a bit strange for me to point fingers at who did not contribute. However, I understand the intention behind this system and I think it's a good idea. It's like the coin you put in the supermarket cart: knowing that you can lose something forces you to cooperate. Not knowing how the others will rate you is a good incentive not to free-ride.
Student (anonymous)

Can everyone use it?
Yes! We have implemented a first version of peer review for group projects online under (“DiViSioN”) – with considerable time investment and with the help of a Critical Thinking project. The site is accessible to everyone. Teachers can set up projects, invite team leaders and team members through unique links generated by the site, and determine exactly how and what they would like to split. Students then peer review each other anonymously, based on which the grades are given.

Does it work in practice?
In the courses we have taught and followed, the introduction of peer review in group projects improved project quality and teamwork. Teams coordinate better and communicate more effectively, leading to better results and higher satisfaction within the team. Students who get better grades than the average feel rewarded for their efforts and appreciate their peers' recognition. Students who get grades below average take this as a fair outcome of not having cared as much about the project (and the grade) as others, or as peer feedback that they should try to do more next time. Indeed, positive and negative peer reviews are novel forms of feedback for students that can help foster positive group dynamics and result in better team synergies. Most importantly, we actually see fewer free-riders and fewer complaints about them from group members who feel they are being exploited.

Status quo and outlook
This module was first used in the Controversies in Game Theory course at ETH Zurich in 2017, a course about the role of game-theoretic mechanisms in fostering cooperation amongst humans. It has been used in this course annually since, as well as in other courses of the Computational Social Science group. Other use cases include projects at ETH Zurich, ZHAW, UZH and Cornell, and more are coming (e.g. Paris Dauphine, Yale and UPenn).
Together with Rafael Kallis, Jochen Krause and Luke Zirngibl, we are collaborating with some of the other DiViSioN users to report our findings from using the mechanism in the field, and we are extending the tool's functionality to include further features such as team management, communication, milestone definition, collusion detection, conflict mediation, etc. Some of our ex-students are helping us conceptualize new mechanisms and code (thanks, Johannes!).

Karin Brown has described, much better than I could, how we split the marks of group projects in my courses.

Course Description

Introduction to Game Theory
This course introduces the foundations of game theory. It treats models of social interaction, conflict and cooperation, the origin of cooperation, and concepts of strategic decision making behavior. Examples, applications, and the contrast between theory and empirical results are particularly emphasized.
Learn the fundamentals, models, and logic of thinking about game theory.
Apply game theory models to strategic interaction situations and critically assess game theory's capabilities through a wide array of experimental results.
Science in Perspective
Block course
Graded course group projects