One of the hallmarks of many group projects is the incorporation of peer assessment into the project grade. This strategy is often used to try to reduce the potential for inequity when individual members of groups share a common grade. However, peer assessment is more than simply having students evaluate how much different group members contributed to a project. It refers to a much broader array of activities that can serve a variety of functions.

According to Keith Topping (2009), who has written extensively on the subject, peer assessment refers to “an arrangement for learners to consider and specify the level, value, or quality of a product or performance of other equal-status learners” (p. 20). If you need a definition, this is a perfectly good one, but beyond merging definitions of the words peer and assessment into a single coherent sentence, it doesn’t actually leave me feeling like I know anything new about what peer assessment is. Perhaps more useful for developing a better understanding of peer assessment is Topping’s breakdown of the typology of peer assessment, which serves to highlight the diverse types of activities that fall under this label by pointing out the ways in which specific peer assessments can differ from one another.

Based on a review of research on the subject, Topping (1998) suggested 17 ways in which peer assessments might vary (see the table below). Notably, this was neither intended to be a comprehensive list of ways in which peer assessments might vary, nor a list of variables that have been shown to impact the results of peer assessment. Rather, this is simply a list of the main ways in which peer assessment activities varied in the research he had reviewed. Since publishing this typology, the list has been revised and reorganized by other researchers. For example, Ineke van den Berg and colleagues (2006) grouped these 17 variables into a four-factor structure (which I’ve included, as I think it makes the list easier to digest). Sarah Gielen and colleagues (2011) further revised this typology by adding, removing, and consolidating some of the variables from Topping’s original list to create an even longer list of 20 variables grouped into five clusters. Topping even revisited the idea over a decade later, creating a list of additional questions related to variables that need further study (Topping 2010).

| Factor | Variable | Range of Variation |
| --- | --- | --- |
| Function of Assessment | Curriculum area/subject | All |
| | Objectives | Of staff and/or students? Time saving or cognitive/affective gains? |
| | Focus* | Quantitative/summative or qualitative/formative or both? |
| | Product/output | Tests/marks/grades or writing or oral presentations or other skilled behaviours? |
| | Relation to staff assessment | Substitutional or supplementary? |
| | Official weight | Contributing to assessee final official grade or not? |
| Mode of Interaction | Directionality | One-way, reciprocal, or mutual? |
| | Privacy | Anonymous or not? |
| | Contact | Distance or face-to-face? |
| Composition of Feedback Group | Year | Same or cross year of study? |
| | Ability | Same or cross ability? |
| | Constellation assessors | Individuals or pairs or groups? |
| | Constellation assessed | Individuals or pairs or groups? |
| External Factors | Place | In or out of class? |
| | Time | Class time, free time, or informally? |
| | Requirement | Compulsory or voluntary for assessors/assessees? |
| | Reward | Course credit or other incentives or reinforcement for participation? |

Table adapted from Topping 1998.

*I would argue that the Focus variable should be separated into two variables: Focus, which is closely related to the Objectives variable and refers to the summative or formative purpose of the feedback, and a new variable called Format, which refers to the quantitative vs. qualitative format of the feedback.

Regardless of the precise list of variables identified, this all paints a picture of peer assessment as a remarkably complex topic of research, and it is this complexity that leads Topping to argue that “sweeping conclusions about peer assessment in general are unlikely to be meaningful, irrespective of issues of implementation quality” (1998, 250). By now, this feels like a running theme when it comes to research on anything related to collaborative learning. So, where does that leave us? Well, if we can’t make sweeping conclusions, maybe we can shoot for specific ones.

Of all the variables on that list, I would argue that the most important one for instructors and researchers is the objective. For instructors, the question, “why am I having students do this?” is really the first question that should be asked with any assignment. For researchers, this question is also of primary importance because the answer dictates the outcome that needs to be measured in a study of the peer assessment activity. Related to this variable is what Topping calls the “focus” of the peer assessment, which he characterizes as a variable with two levels: summative/quantitative or formative/qualitative. I think this is confusing for two reasons. First, whether an assignment is intended to be summative or formative is so intrinsically tied to the objective or purpose of the assignment that I don’t see the two as being separate variables. Second, the type of data collected (i.e., quantitative or qualitative) is not the same thing as the purpose of the data (i.e., summative or formative assessment).

If the objective is the “why” behind assessment, the product is the “what.” It’s that thing which peers are assessing. Knowing both things is the start of building an assignment or activity. All the other variables then become features of the activity which can possibly impact how successful you are in achieving your objective. The type of peer assessment that I mentioned in the first paragraph of this post involves intra-group assessment of contributions to a collaborative project. Since I’m looking into research on collaborative projects, that’s the kind of peer assessment that I’m going to focus on.

This kind of activity typically involves the assessment of individual group members’ general effort on the project or their specific teamwork behaviors, but carrying out this kind of peer assessment can serve several possible objectives. The objective for the instructor may be to increase the validity of a group grade by incorporating the peer assessments of individual group members into their group grade (Baker 2008; Dijkstra et al. 2016) or at the very least to increase students’ sense of the fairness of that final grade (Carvalho 2013).

Conversely, the objective may be more about student development, rather than assessment. For example, the intention may be to use peer assessment as a means of reducing the likelihood of problematic behaviors like social loafing and freeriding (Karau and Williams 1993) or helping group members develop good teamwork skills (Druskat and Wolff 1999). In the next few posts, I will discuss the research on intra-group peer assessment and how it might be used to achieve different summative and formative objectives.


Baker, Diane F. 2008. “Peer Assessment in Small Groups: A Comparison of Methods.” Journal of Management Education 32(2): 183–209.

Carvalho, Ana. 2013. “Students’ Perceptions of Fairness in Peer Assessment: Evidence from a Problem-Based Learning Course.” Teaching in Higher Education 18(5): 491–505.

Dijkstra, Joost, Mieke Latijnhouwers, Adriaan Norbart, and René A. Tio. 2016. “Assessing the I in Group Work Assessment: State of the Art and Recommendations for Practice.” Medical Teacher 38(7): 675–682.

Druskat, Vanessa Urch, and Steven B. Wolff. 1999. “Effects and Timing of Developmental Peer Appraisals in Self-Managing Work Groups.” Journal of Applied Psychology 84(1): 58.

Gielen, Sarah, Filip Dochy, and Patrick Onghena. 2011. “An Inventory of Peer Assessment Diversity.” Assessment & Evaluation in Higher Education 36(2): 137–155.

Karau, Steven J., and Kipling D. Williams. 1993. “Social Loafing: A Meta-analytic Review and Theoretical Integration.” Journal of Personality and Social Psychology 65(4): 681–706.

Topping, Keith. 1998. “Peer Assessment between Students in Colleges and Universities.” Review of Educational Research 68(3): 249–276.

Topping, Keith J. 2009. “Peer Assessment.” Theory into Practice 48(1): 20–27.

Topping, Keith J. 2010. “Methodological Quandaries in Studying Process and Outcomes in Peer Assessment.” Learning and Instruction 20(4): 339–343.

van den Berg, Ineke, Wilfried Admiraal, and Albert Pilot. 2006. “Design Principles and Outcomes of Peer Assessment in Higher Education.” Studies in Higher Education 31(3): 341–356.

David Buck, associate professor of psychology, is the 2020-2022 Center for Engaged Learning Scholar. Dr. Buck’s CEL Scholar project focuses on collaborative projects and assignments as a high-impact practice.

How to Cite this Post

Buck, David. (2021, August 12). A Typology of Peer Assessment [Blog Post]. Retrieved from