The Center thanks Aaron Trocki for contributing the initial content for this resource as part of his CEL Scholar work.

Definition

Assessment and feedback are inherent parts of teaching and learning in higher education. The meaning of these practices may seem straightforward, with assessment judging current competence and feedback offering information to students (Boud and Molloy 2013). In practice, however, achieving quality assessment and feedback practices is complicated. Forsyth (2023) invokes Rittel and Webber’s (1973) notion of a wicked problem to characterize assessment as “unique, poorly defined, has many stakeholders with potentially conflicting values, and has no single correct solution” (15). We’ve recreated Forsyth’s table to capture some of the multiple purposes of assessment.

Multiple purposes of assessment (Forsyth 2023, 13):
- To judge current competence
- To judge current knowledge
- To judge capacity for future learning
- To encourage focus on particular aspects of the curriculum
- To reward the meeting of teacher expectations
- To accredit a minimum level of professional competence
- To differentiate performance among students
- To validate the effectiveness of teaching
- To permit progression to the next level of study
- To permit award of a final qualification
- To demonstrate maintenance of academic standards
- To identify areas for individual future development
- To recognize an ability to follow instructions
- To recognize the ability to perform under pressure
- To confirm that intended learning outcomes have been achieved
- To build student confidence
- To reduce the number of students on the course
- To judge teacher competence in preparing students for assessment

Within this context, educational theorists and researchers have developed, tested, and studied numerous models of assessment and feedback and their implications for teaching and learning in higher education. These models rest on differing assumptions and theories of learning and often include taxonomies of learning outcomes (e.g., Bloom’s taxonomy).

[Image: Faculty engage students in understanding assessment expectations and feedback practices. Photo by Freepik]

Unpacking the many nuanced layers of assessment is a daunting task; those layers include, but are not limited to, formative and summative assessment; assessment tasks and course work; classroom- and program-level outcomes; criterion- and norm-referenced grading; and multiple stakeholders. Furthermore, the intent of assessment can be categorized as assessment of learning, assessment for learning (e.g., Boud et al. 2018), or a combination of both. Feedback is considered an integral part of assessment with its own purposes, such as helping students with future learning and justifying a mark or grade.

Forsyth et al. (2015) offer an assessment lifecycle framework as a professional tool for understanding and unpacking assessment practices. You can view a version of this framework at Manchester Metropolitan University’s website, which draws from Forsyth’s work. In the setting phase, faculty develop and share assignment details that include how marks will be apportioned, the assessment criteria, the submission date and method, and the date and method for returning grades and feedback to students.
During the supporting phase, students are made aware of the many ways they can be assisted in completing an assessment, such as help from a writing center. Marking takes place next: faculty carefully consider what is fair and accurate when assigning grades, coupled with feedback that identifies strengths and performance against learning outcomes. Finally, in the reviewing phase, students reflect on their performance and make plans to improve on future assessments.

Forsyth (2023) defines assessment literacy as “a fluent understanding of the vocabulary, principles, and purposes of assessment and the ability to make informed and confident decisions about the design and management of assessment.” Assessment literacy represents a key component of nuanced professional knowledge; however, it is often deprioritized by higher education faculty (Clark and Talbert 2023).

The lifecycle of assessment exists within a larger framework of teaching. One useful frame is Barbeau and Happel’s Critical Teaching Behaviors (CTB), as it accounts for how assessment overlaps with other aspects and goals of teaching and learning. Effective teaching is not separate from effective assessment and feedback. Assessment as a critical teaching behavior aligns well with Boud et al.’s (2018) conceptualization of assessment-as-practice:

[T]he field of assessment is acknowledgement of the everyday activities of assessment as conducted, without framing them normatively in terms of what assessment should do. It provides an emphasis on assessment-as-practiced and how it operates, thereby attending to the many issues rendered invisible when it is configured as marking students. (Boud et al. 2018, 1109–1110)

This practice perspective goes beyond measuring performance on tasks and accounts for the “before, during, and after,” along with effects on students and faculty. Embracing assessment as a practice encourages faculty to consider models of assessment that move beyond a measurement perspective toward more student-centered approaches. These approaches include, but are not limited to, learning-oriented assessment (Carless 2007), peer and self-assessment, authentic assessment, feedforward strategies, contract grading, and ungrading (Stommel 2020). When applying various models of assessment, Wood (2021) encourages faculty to distinguish between learning goals and learning outcomes, with the latter being specific to students’ actions based on operational verbs (50). Many of these models promote student collaboration and buy-in when developing assessment expectations, along with student choice when completing assessments (O’Neill and Padden 2021). Attention is also given to how technology can be used in assessment and feedback processes.

[Image: Student-centered assessment practices encourage collaboration, choice, and active engagement in learning. Photo by Freepik]

The practice perspective of assessment accounts for effects on students and faculty and opens consideration of equity in assessment. Driscoll (2021) offers some simple beginnings for achieving equity in assessment. She suggests starting with students by answering two questions: who are they, and how do they learn? This practice can assist with building caring classrooms that infuse culturally responsive teaching. Driscoll recommends that faculty look for opportunities to make each process in the assessment cycle more equitable for learners.
Processes include developing learning outcomes; designing assignments and rubrics; collecting student evidence; reviewing and analyzing student evidence; using evidence to change for improvement; and assessing changes for improvement (32). Driscoll invokes Montenegro and Jankowski’s (2017) notion of culturally responsive assessment, which is student-centered and involves students throughout the assessment process.

What makes it a key practice for fostering engaged learning?

The practice perspective of assessment and student-centered models can lead to deeper learning, greater student engagement, and positive impacts on underserved student populations. Feedback is an inherent feature of high-impact practices (HIPs), described by Kuh et al. (2017) as powerful interventions that foster student success. Student success is defined as “academic achievement, engagement in educationally purposeful activities, satisfaction, persistence, attainment of educational objectives, and acquisition of desired learning outcomes that prepare one to live an economically self-sufficient, civically responsible, and rewarding life” (Kuh et al. 2017, 9). Eleven HIPs have been identified, along with eight key features. One feature is frequent, timely, and constructive feedback, for which Kuh et al. (2017) provide the following example:

A student-faculty research project during which students meet with and receive suggestions from the supervising faculty (or staff) member at various points to discuss progress, next steps, and problems encountered and to review the quality of the student’s contributions up to and through the completion of the project.

This key feature of HIPs reflects the purpose of Clark and Talbert’s feedback loop, in which students use feedback in cycles that engage them in learning and in refining their efforts.

Figure 1. Flowchart image adapted from Clark and Talbert (2023, 12)

Clark and Talbert (2023), authors of Grading for Growth, also offer alternative approaches to traditional models of assessment and feedback. They provide four pillars for alternative grading:

- Student work is evaluated using clearly defined and context-appropriate content standards for what constitutes acceptable evidence of learning.
- Students are given helpful, actionable feedback that the student can and should use to improve their learning.
- Student work doesn’t have to receive a mark, but if it does, the mark is a progress indicator toward meeting a standard and not an arbitrary number.
- Students can reassess work without penalty, using the feedback they receive, until the standards are met or exceeded. (28–29)

These student-centered pillars put a priority on growth and learning in response to clear content standards. It is worth considering how these pillars may work to support HIPs.

[Image: Collecting and reviewing evidence of learning is a key part of equitable, culturally responsive assessment practices. Photo by Freepik]

Drawing on these scholarly conversations about feedback, Moore (2023) identifies offering feedback as one of six key practices for fostering engaged learning. She notes that formative and summative feedback help students advance their understanding of key concepts and assess their application of knowledge and strategies as they transfer what they’re learning to new contexts.

Research-Informed Practices

Assessment

While models of assessment and feedback have been in play since the beginning of higher education, only recently has assessment itself become a field of study.
According to Ewell and Cumming (2017), the assessment movement in higher education began in the mid-1980s. Within this movement, many have developed and shared research-informed practices related to assessment and feedback in the extant literature. Selected practices are summarized below.

In their text Critical Teaching Behaviors, Barbeau and Happel distill research-informed assessment practices down to six broad items:

- Align assessments with course learning goals and class activities
- Schedule regular summative assessments to measure student progress toward learning outcomes
- Embed formative assessments and opportunities for self-assessment in instruction and scaffold projects to support learning
- Communicate purpose, task, and criteria for assessments
- Provide timely, constructive, and actionable feedback to students
- Review assessment data to make informed decisions about course content, structure, and activities (Barbeau and Happel 2023, 60)

These research-informed practices give faculty guidance on how to conduct assessment throughout a course and how to use assessment data to make course improvements. Other research has focused on assessment at the task level. Forsyth (2023) recommends that faculty consider the following key questions about any assessment task:

- Validity: Does it let students demonstrate achievement of the learning outcome?
- Manageability: Will it be straightforward to mark, give feedback, and moderate?
- Clarity: Will students understand what to do and see how this task fits into their course overall?
- Satisfaction: Will I look forward to marking it?

Her work is based on extensive experience applying research-informed practices to institutional change efforts.

Other researchers have addressed the cognitive demands represented in assessment tasks. A well-known framework for the cognitive demand of learning objectives is Bloom’s Taxonomy, published in 1956. The levels in the taxonomy are based on the complexity and richness represented in a learning objective. These levels offer faculty and students a way to perceive a hierarchy of learning objectives and their associated assessment tasks. In 2001, a team revised the category names of Bloom’s Taxonomy from nouns to verbs. The revised Bloom’s Taxonomy contains six levels.

Figure 2. Bloom’s Taxonomy (2001 revision). Moving from the bottom to the top of the pyramid represents a shift from lower-order thinking to higher-order thinking. Although this taxonomy has been critiqued for oversimplifying cognitive complexity, it remains a valuable tool for writing and assessing learning objectives. Creative Commons Attribution license, Vanderbilt University Center for Teaching, Bloom’s Taxonomy.

Feedback

Feedback has been studied as a research-informed practice that deepens student learning and leads to significant learning gains. Numerous researchers have studied the modes and characteristics of quality feedback. Nicol and Macfarlane-Dick (2006) offered seven principles for good feedback practice:

- Clarify what good performance is
- Facilitate self-assessment
- Deliver high quality feedback information
- Encourage teacher and peer dialogue
- Encourage positive motivation and self-esteem
- Provide opportunities to close the gap
- Use feedback to improve teaching

These principles focus on what the teacher does to improve feedback practices. Other researchers have addressed what students should do with feedback. In their article “Measuring What Learners Do in Feedback: The Feedback Literacy Behaviour Scale,” Dawson et al. (2023) introduced their feedback literacy behavior scale (FLBS).
This self-reported instrument is intended to measure students’ behaviors related to feedback rather than their perceptions or orientations. The scale is organized along five factors:

- Seek feedback information: eliciting feedback information from a variety of sources, including one’s own notions of quality and examples of good work
- Make sense of feedback information: processing, evaluating, and interpreting feedback information
- Use feedback information: putting feedback information into action to improve the quality of current and/or future work
- Provide feedback information: considering the work of others and making comments about its quality
- Manage affect: persisting in feedback processes despite the emotional challenges they pose

To use the FLBS, students rate how often they exhibit particular behaviors associated with each factor on a six-point scale (1: never, 2: almost never, 3: rarely, 4: sometimes, 5: almost always, 6: always). Information on how the scale was created and how this line of work developed can be found at feedbackliteracy.org. Faculty may use the scale to gauge students’ current levels of feedback literacy and then develop strategies for meeting students where they are regarding feedback literacy learning needs.
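Scoring an instrument like the FLBS is straightforward arithmetic: each behavior receives a 1–6 frequency rating, and ratings can be averaged within each of the five factors. The Python sketch below illustrates only that scoring step; the item wordings and the item-to-factor mapping are hypothetical placeholders for illustration, not the published scale items.

```python
# A minimal scoring sketch for an FLBS-style self-report survey.
# The item texts and the item-to-factor mapping are illustrative
# placeholders, NOT the published instrument; ratings use the 1-6
# frequency scale described above (1 = never ... 6 = always).
from statistics import mean

# Hypothetical mapping of survey items to the five FLBS factors.
FACTOR_ITEMS = {
    "seek feedback information": [
        "I ask others for comments on my drafts",
        "I compare my work against examples of good work",
    ],
    "make sense of feedback information": ["I restate feedback in my own words"],
    "use feedback information": ["I revise my work based on comments I receive"],
    "provide feedback information": ["I give classmates specific comments on their work"],
    "manage affect": ["I keep engaging with feedback even when it is discouraging"],
}

def factor_scores(ratings: dict[str, int]) -> dict[str, float]:
    """Average one student's 1-6 frequency ratings within each factor."""
    scores = {}
    for factor, items in FACTOR_ITEMS.items():
        values = [ratings[item] for item in items]
        if not all(1 <= v <= 6 for v in values):
            raise ValueError("Ratings must be on the 1-6 scale.")
        scores[factor] = mean(values)
    return scores

# Example: one (fabricated) student's responses, averaged by factor.
student = {
    "I ask others for comments on my drafts": 4,
    "I compare my work against examples of good work": 3,
    "I restate feedback in my own words": 2,
    "I revise my work based on comments I receive": 5,
    "I give classmates specific comments on their work": 3,
    "I keep engaging with feedback even when it is discouraging": 4,
}
print(factor_scores(student))  # e.g., {'seek feedback information': 3.5, ...}
```

In a sketch like this, a low average on a factor (here, making sense of feedback information) suggests where an instructor might target feedback literacy support.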
Generative Artificial Intelligence Technology and Assessment

Educational researchers have also investigated technology use in assessment and feedback practices. These studies often focus on how faculty can offer feedback through technology platforms or how students can use technology to augment how they provide evidence of learning. Readily available generative artificial intelligence (GenAI) technology has changed the way faculty and students can interact with technology, access information, and produce new text, images, videos, software code, and other forms of output. Faculty have responded with concern over how students demonstrate evidence of their learning when this technology is accessible to them.

As the availability of AI chatbots has increased, the notion of AI literacy has likewise gained traction. Long and Magerko (2020) define AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (2). In higher education, a tension has emerged between promoting authentic learning and preparing students to use GenAI. Within this context, some efforts have been undertaken to support faculty and students in considering how GenAI technology might be purposefully used in teaching and learning.

Regarding GenAI and assessment, the Artificial Intelligence-Supported Assessment (AI-SA) Framework (Trocki 2025) may prove beneficial. The AI-SA framework was developed to assist faculty in considering how AI technology may be incorporated into the assessment tasks they design.

[Image: Technology, including AI, can support interactive and personalized assessment opportunities. Photo by Freepik]

In this work, an artificial-intelligence-supported assessment (AI-SA) is defined as an assessment that includes student utilization of AI technology, such as a chatbot, and provides the teacher with information about student progress toward achieving learning goals. A key part of the framework is its use of the terms AI-active and AI-inactive. AI-active refers to assessment components or tasks where students are directed to interact with AI technology (e.g., through prompt engineering), whereas AI-inactive refers to assessment components or tasks where students are not allowed to interact with AI technology. The framework is a list of questions that faculty should consider and respond to when developing AI-SAs. The framework’s guiding questions are provided below.

Guiding questions (response scale: 1 = very low; 2 = low; 3 = middle; 4 = high; 5 = very high):

1) To what degree does the artificial intelligence-supported assessment (AI-SA) align with student learning objectives?
2) To what degree does the AI-SA give every student equitable access and opportunity to actively engage in learning?
3) To what degree does the AI-SA align with students’ current levels of AI literacy: the ability to understand, use, monitor, and critically reflect on AI applications (e.g., Long, Blunt, and Magerko 2021)?
4) To what degree does the AI-SA encourage students to achieve the teacher’s purpose and/or learning goals in efficient and powerful ways, which may not be feasible without the use of AI?
5) To what degree does the AI-SA appropriately assign assessment components the designation of AI-active or AI-inactive?
6) To what degree does the AI-SA encourage students to evaluate the accuracy and usability of AI output?
7) To what degree does the AI-SA promote critical thinking about the benefits and drawbacks of using AI?
8) To what degree does the AI-SA prepare students for using AI outside of academia?
9) To what degree does the AI-SA align with the institution’s honor code?
10) To what degree does the AI-SA align with the institution’s position on AI?

Artificial Intelligence-Supported Assessment Framework (Trocki 2025)

To apply the AI-SA framework, faculty rank their responses to each framework question on a Likert scale from one to five for the AI-SA they have developed or are developing. The quality of the AI-SA can then be assessed through individual framework questions and holistically by calculating the average, median, or mode of the framework rankings. It is anticipated that as faculty use the AI-SA framework, they will better account for students’ perspectives, the purpose and/or learning goal(s), the capabilities of AI technology, and the implications of using the AI technology.
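Because that holistic summary is simple arithmetic over ten rankings, it can be scripted in a few lines. The sketch below is one illustrative way to do so, not part of the published framework; the abbreviated question labels paraphrase the framework’s full wording.

```python
# An illustrative summary of ten AI-SA framework rankings (Trocki 2025).
# The abbreviated labels below paraphrase the framework's guiding questions.
from statistics import mean, median, mode

AI_SA_QUESTIONS = [
    "Aligns with student learning objectives",
    "Gives equitable access and opportunity to engage",
    "Aligns with students' current AI literacy",
    "Enables efficient, powerful goal achievement",
    "Assigns AI-active/AI-inactive appropriately",
    "Encourages evaluation of AI output",
    "Promotes critical thinking about AI",
    "Prepares students to use AI beyond academia",
    "Aligns with the institution's honor code",
    "Aligns with the institution's position on AI",
]

def summarize_rankings(rankings: list[int]) -> dict[str, float]:
    """Report the average, median, and mode of the 1-5 Likert rankings."""
    if len(rankings) != len(AI_SA_QUESTIONS):
        raise ValueError("Provide one ranking per guiding question.")
    if not all(1 <= r <= 5 for r in rankings):
        raise ValueError("Rankings must be on the 1-5 scale.")
    return {
        "average": mean(rankings),
        "median": median(rankings),
        "mode": mode(rankings),
    }

# Example: a draft AI-SA ranked by its designer. The low ranking on
# question 2 (equitable access) flags a component to revise.
rankings = [4, 2, 3, 5, 4, 4, 3, 4, 5, 5]
for question, ranking in zip(AI_SA_QUESTIONS, rankings):
    print(f"{ranking}  {question}")
print(summarize_rankings(rankings))  # {'average': 3.9, 'median': 4.0, 'mode': 4}
```

The per-question rankings remain the more actionable view; the summary statistics simply indicate whether a draft AI-SA is, on balance, ready to use.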
Embedded and Emerging Questions for Research, Practice, and Theory

This resource page addresses models of assessment and feedback outside of traditional assumptions and approaches. These alternative grading models often represent recent developments or works in progress, and their efficacy needs to be researched. Three models that have gained traction are standards-based grading, specifications grading, and ungrading. The efficacy of each has been explored and documented to some degree; however, much work remains to be done on their applications in the various disciplines of higher education. Furthermore, many faculty use hybrid models that mix elements of alternative grading with traditional grading, and these need to be studied as well. Some studies of alternative grading and hybrid models can be found at the classroom level, but more robust studies are needed of larger-scale efforts.

[Image: Collaborative conversations around student work allow feedback to guide understanding and support growth. Photo by Freepik]

Practitioners and researchers have recognized the importance of feedback for student learning. A foundational characteristic of quality feedback is using clear and actionable learning objectives and giving students opportunities to act on that feedback. Additional research is needed to unpack what quality feedback entails for different disciplinary content. Feedback is a key feature of HIPs, and further research is needed to refine our knowledge of the attributes of constructive feedback related to each HIP. Feedback literacy (Carless and Boud 2018) is a helpful construct for promoting the understanding and use of feedback with students. Dawson et al.’s (2023) feedback literacy behavior scale (FLBS) can be used to gauge students’ behaviors related to feedback, but this instrument is limited to student self-reported behaviors. Further research is needed to develop valid and reliable measures of feedback literacy that include students’ perceptions and orientations as well.

Additional research is needed about student and faculty perceptions of alternative grading and feedback. A deeper understanding of these perceptions will assist in promoting equity in assessment as described by Driscoll (2021). The practice perspective of assessment (Boud et al. 2018) gives theoretical grounding to alternative grading practices, but it is unknown how familiar faculty may be with this perspective. Instruments that can identify faculty awareness and knowledge of alternative grading tenets and approaches will assist in professionally developing faculty to incorporate these models. Research that identifies faculty perceptions (O’Neill and Padden 2021; Rawlusyk 2018) and needs regarding assessment literacy will help in developing resources and supports that encourage experimentation with and adoption of alternative grading practices.

Key Scholarship

Boud, David. 2000. “Sustainable Assessment: Rethinking Assessment for the Learning Society.” Studies in Continuing Education 22 (2): 151–167. https://doi.org/10.1080/713695728.
About this journal article: Models of assessment that students experience in higher education do not reflect modes of assessment experienced after college in a learning society. Boud recommends sustainable assessment practices that address the learning goals of higher education courses along with preparing students for their own assessment needs in the future. This two-pronged approach to assessment will assist faculty in reconsidering the models traditionally employed. [Annotation contributed by Aaron Trocki]

Boud, David, Phillip Dawson, Margaret Bearman, Sue Bennett, Gordon Joughin, and Elizabeth Molloy. 2018. “Reframing Assessment Research: Through a Practice Perspective.” Studies in Higher Education 43 (7): 1107–1118. https://doi.org/10.1080/03075079.2016.1202913.
About this journal article: In higher education, the measurement perspective of assessment has historically been assumed and used to guide practice. The authors advocate for a new practice perspective of assessment that extends ideas from practice theory. Benefits of this practice perspective include infusing assessment into teaching and learning as opposed to separating assessment from teaching and learning. The recommendation of assessments for learning can work to reposition assessment within teaching and learning practices. [Annotation contributed by Aaron Trocki]

Boud, David, and Elizabeth Molloy. 2013. “Rethinking Models of Feedback for Learning: The Challenge of Design.” Assessment and Evaluation in Higher Education 38 (6): 698–712. https://doi.org/10.1080/02602938.2012.691462.
About this journal article: This article explores the educative potential of and challenges around producing student feedback. Two models of feedback are addressed and contrasted. In the first model, the instructor drives the feedback; in the second, students and instructors are integral to the feedback production process. The authors explain the benefits of the latter model, which include making students reflective assessors of their own learning. This feedback practice may help students sustain reflective assessment practice in subsequent learning. [Annotation contributed by Aaron Trocki]

Carless, David. 2007. “Learning-Oriented Assessment: Conceptual Bases and Practical Implications.” Innovations in Education and Teaching International 44 (1): 57–66.
About this journal article: The learning-oriented assessment project worked to develop and promote assessment practices that promote student learning. In learning-oriented assessment, formative and summative assessment combine to treat assessment tasks as learning tasks, promote student involvement in assessment, and utilize closed feedback loops. Barriers include accountability pressures and distrust, which need to be addressed while promoting learning-oriented assessment. [Annotation contributed by Aaron Trocki]

Clark, David, and Robert Talbert. 2023. Grading for Growth: A Guide to Alternative Grading Practices that Promote Authentic Learning and Student Engagement in Higher Education. Stylus Publishing.
About this book: Grading for Growth presents the need for alternative grading through the authors’ stories and perspectives on the shortcomings of traditional grading practices. After addressing some history of grading practices, they present a framework for alternative grading that includes four pillars. Examples of alternative grading and their implementations are described with an emphasis on how each promotes authentic learning. Examples include standards-based grading and specifications grading. Contexts such as large classes and lab classes are addressed, along with practical guidance on how to adopt alternative grading practices. [Annotation contributed by Aaron Trocki]

Driscoll, Amy, Swarup Wood, Dan Shapiro, and Nelson Graff. 2021. Advancing Assessment for Student Success: Supporting Learning by Creating Connections Across Assessment, Teaching, Curriculum, and Cocurriculum in Collaboration with Our Colleagues and Our Students. Routledge.
About this book: This book examines ways that assessment practices promote student success. It begins with an overview of the field of assessment, which can help faculty understand where we have been, what has worked, and opportunities for refining assessment practices. An invaluable takeaway is Wood’s distinction between learning goals and learning outcomes, with the latter being specific to students’ actions based on operational verbs. The SMART framework for developing learning outcomes is a powerful tool and stands for the following criteria: (1) Specific; (2) Measurable; (3) Action-oriented; (4) Reasonable; and (5) Time-bound. [Annotation contributed by Aaron Trocki]

Forsyth, Rachel. 2023. Confident Assessment in Higher Education. SAGE Publications.
About this book: Although assessment is integral to teaching and learning in higher education, many faculty members are not confident in discussing or changing their assessment practices.
Assessment in higher education is complicated and involves multiple stakeholders. The model for wicked problem management can be used to address these complexities and consider the lifecycle of assessment. Multiple approaches for making assessment work are addressed, with an emphasis on assessment tasks and giving meaningful feedback. [Annotation contributed by Aaron Trocki]

Stommel, Jesse. 2020. “Ungrading: A Bibliography.” Jesse Stommel (blog), March 3, 2020. https://www.jessestommel.com/ungrading-a-bibliography/.
About this blog post: This blog post gives an informative introduction for those interested in ungrading. It uses the intriguing question “What if we didn’t grade?” to explore literature about ungrading and alternative grading approaches. A theme of questioning all the choices educators make emerges in the blog’s annotations. Information in this post rewards those interested in the topic of ungrading. [Annotation contributed by Aaron Trocki]

Winstone, Naomi, and David Carless. 2019. Designing Effective Feedback Processes in Higher Education: A Learning-Focused Approach. Routledge.
About this book: This book highlights the importance and influence feedback has on student achievement. After addressing the many challenges to feedback practices, the authors describe feedback practices and introduce the construct of student feedback literacy. Feedback literacy involves calibrating evaluative judgement to inform future study behavior. Feedback literacy is an essential life skill and is important to one’s professions and relationships. [Annotation contributed by Aaron Trocki]

Related Blog Posts

Allocating Time for Multiple Choice Tests (October 2, 2025, by Jessie L. Moore). 60-Second SoTL – Episode 60. How much time should students have to take online, multiple-choice tests? That’s the focus of the open-access Teaching & Learning Inquiry article featured in this week’s 60-second SoTL: Kennette, Lynne N., and Dawn McGuckin. 2025. “Best…

Creating Feedback Cultures with Neurodivergent Students (September 23, 2025, by Jessie L. Moore). Since publishing Key Practices for Fostering Engaged Learning: A Guide for Faculty and Staff in 2023, I’ve had several opportunities to brainstorm with educators about how the key practices apply in their higher education settings and with their diverse students….

Building Trust through Feedback (September 18, 2025, by Jessie L. Moore). 60-Second SoTL – Episode 60. This episode features an open access article on building trust through feedback with relationship-building conditions and agency-promoting practices: Bayraktar, Breana, Kiruthika Ragupathi, and Katherine A. Troyer. 2025. “Building Trust Through Feedback: A Conceptual Framework for Educators.” Teaching…

Implementing Effective Feedback Practices: Strategy 2 (June 27, 2025, by Aaron Trocki). In part one of this two-part blog, I revisited results from my students who completed the feedback literacy behavior scale (Dawson et al. 2023). I identified two areas in which students demonstrated room for improvement: (1) seeking feedback information; and…

Should AI be Involved in Assessing Student Work? (June 24, 2025, by Amanda Sturgill). I was at a campus workshop this week, and we discussed this recent article about a student requesting a tuition refund after discovering a piece of course content was generated by ChatGPT (Hill 2025). I thought the use of the…
Implementing Effective Feedback Practices: Strategy 1 (May 20, 2025, by Aaron Trocki). Two blog posts ago, I shared my early spring semester plans for giving students effective feedback and promoting their feedback literacy in an early college calculus course. Going into that semester, I realized that to promote student feedback literacy, it…

References

Barbeau, Lauren, and Claudia Cornejo Happel. 2023. Critical Teaching Behaviors: Defining, Documenting, and Discussing Good Teaching. Stylus Publishing.

Boud, David, Phillip Dawson, Margaret Bearman, Sue Bennett, Gordon Joughin, and Elizabeth Molloy. 2018. “Reframing Assessment Research: Through a Practice Perspective.” Studies in Higher Education 43 (7): 1107–1118. https://doi.org/10.1080/03075079.2016.1202913.

Boud, David, and Elizabeth Molloy. 2013. “Rethinking Models of Feedback for Learning: The Challenge of Design.” Assessment and Evaluation in Higher Education 38 (6): 698–712. https://doi.org/10.1080/02602938.2012.691462.

Carless, David. 2007. “Learning-Oriented Assessment: Conceptual Bases and Practical Implications.” Innovations in Education and Teaching International 44 (1): 57–66. https://doi.org/10.1080/14703290601081332.

Carless, David, and David Boud. 2018. “The Development of Student Feedback Literacy: Enabling Uptake of Feedback.” Assessment and Evaluation in Higher Education 43 (8): 1315–1325. https://doi.org/10.1080/02602938.2018.1463354.

Clark, David, and Robert Talbert. 2023. Grading for Growth: A Guide to Alternative Grading Practices that Promote Authentic Learning and Student Engagement in Higher Education. Routledge. https://doi.org/10.4324/9781003445043.

Cumming, Tammie, and M. David Miller, eds. 2017. Enhancing Assessment in Higher Education: Putting Psychometrics to Work. Foreword by Michael J. Kolen. Stylus Publishing.

Dawson, Phillip, Zi Yan, Anastasiya Lipnevich, Joanna Tai, David Boud, and Paige Mahoney. 2023. “Measuring What Learners Do in Feedback: The Feedback Literacy Behaviour Scale.” Assessment & Evaluation in Higher Education 49 (3): 348–362. https://doi.org/10.1080/02602938.2023.2240983.

Driscoll, Amy, Swarup Wood, Dan Shapiro, and Nelson Graff. 2021. Advancing Assessment for Student Success: Supporting Learning by Creating Connections Across Assessment, Teaching, Curriculum, and Cocurriculum in Collaboration with Our Colleagues and Our Students. 1st ed. Routledge. https://doi.org/10.4324/9781003442899.

Ewell, Peter, and Tammie Cumming. 2017. “Introduction: History and Conceptual Basis of Assessment in Higher Education.” In Enhancing Assessment in Higher Education: Putting Psychometrics to Work, edited by Tammie Cumming and M. David Miller. Stylus Publishing. https://academicworks.cuny.edu/ny_pubs/231.

Forsyth, Rachel. 2023. Confident Assessment in Higher Education. SAGE Publications.

Forsyth, Rachel, Rod Cullen, Neil Ringan, and Mark Stubbs. 2015. “Supporting the Development of Assessment Literacy of Staff through Institutional Process Change.” London Review of Education 13 (3): 34–41. https://doi.org/10.18546/LRE.13.3.05.

Kuh, George, Ken O’Donnell, and Carol Geary Schneider. 2017. “HIPs at Ten.” Change: The Magazine of Higher Learning 49 (5): 8–16. https://doi.org/10.1080/00091383.2017.1366805.

Long, Duri, Takeria Blunt, and Brian Magerko. 2021. “Co-designing AI Literacy Exhibits for Informal Learning Spaces.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2): 1–35. https://doi.org/10.1145/3476034.

Long, Duri, and Brian Magerko. 2020. “What is AI Literacy? Competencies and Design Considerations.” In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3313831.3376727.
Montenegro, Erick, and Natasha A. Jankowski. 2017. “Equity and Assessment: Moving Toward Culturally Responsive Assessment.” University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://doi.org/10.1002/au.30117.

Moore, Jessie. 2023. Key Practices for Fostering Engaged Learning: A Guide for Faculty and Staff. Routledge.

Nicol, David J., and Debra Macfarlane-Dick. 2006. “Formative Assessment and Self-regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31 (2): 199–218. https://doi.org/10.1080/03075070600572090.

O’Neill, Geraldine, and Lisa Padden. 2021. “Diversifying Assessment Methods: Barriers, Benefits and Enablers.” Innovations in Education and Teaching International 59 (4): 398–409. https://doi.org/10.1080/14703297.2021.1880462.

Rawlusyk, P. 2018. “Assessment in Higher Education and Student Learning.” Journal of Instructional Pedagogies. https://eric.ed.gov/?id=EJ1194243.

Rittel, Horst W. J., and Melvin M. Webber. 1973. “Dilemmas in a General Theory of Planning.” Policy Sciences 4 (2): 155–169. https://doi.org/10.1007/BF01405730.

Stommel, Jesse. 2020. “Ungrading: A Bibliography.” Jesse Stommel (blog), March 3, 2020. https://www.jessestommel.com/ungrading-a-bibliography/.

Trocki, Aaron. 2025. “Introducing a Framework for Incorporating Artificial Intelligence into Assessments.” In Proceedings of Society for Information Technology & Teacher Education International Conference, 906–909. Association for the Advancement of Computing in Education (AACE). https://academicexperts.org/conf/site/2025/papers/64940/.

How to Cite This Resource

Trocki, Aaron. 2025. Assessment and Feedback. Center for Engaged Learning at Elon University. https://www.centerforengagedlearning.org/resources/assessment-and-feedback/.