Embedding Inclusive Assessment: lessons from large-scale assessment change

In this blog post, we introduce and discuss a recently completed QAA-funded Collaborative Enhancement Project that aimed to explore and understand the relationship(s) between assessment outcomes and inclusive assessment designs for different groups of students during the pandemic-affected academic years 2019-20 and 2020-21. Despite changes in assessment practice attracting significant scrutiny and evaluation throughout the pandemic, few large-scale empirical studies of this kind have been conducted and shared with the sector. The project brought together eight institutions from across the University Alliance mission group and comprised a three-phase approach:

  1. An analysis of assessment outcomes for specific cohorts across each partner institution, capturing the range of design/policy changes alongside those courses/programmes displaying the largest percentage reduction in attainment/awarding gaps (for 2019-20) and improved student continuation rates (for 2020-21).
  2. Interviews with academic staff and focus groups with students from the courses identified by each partner, with the latter facilitated by a cadre of student researchers employed by each institution to garner student feedback on the inclusivity of assessment arrangements.
  3. Thematic analysis of the staff interview and student focus group data to capture key themes and sub-themes at a course/programme level.

This collaborative project work culminated in the production of a series of outputs developed as practical resources, with the aim of supporting leaders, academics, and students in higher education to review, plan for, and evaluate enhancement-led inclusive assessment policies, initiatives, and interventions. Each resource is framed by an overarching position statement we developed for the project, which offers the lens through which we now invite universities and practitioners to critically consider their own assessment policies and practices. We believe inclusive assessment:

‘… is realised through holistic and flexible approaches that recognise, value and reflect student diversity, facilitating choice and enabling every individual to demonstrate their achievement with respect to academic/professional standards and empowering them to take ownership of their learning journey. To achieve this, assessment needs to be strategically designed as an embedded element of the curriculum to proactively consider students’ needs and to remove systemic barriers in institutional policies, processes, and practices.’

A set of inclusive assessment attributes was collectively developed to reflect the insights generated through the research work undertaken. These attributes formed the basis for an associated toolkit and suite of case studies as a way of illustrating the types of approaches that were deployed, alongside their impact on student learning and performance. Together these resources provide a framework to assist universities and practitioners in reflecting upon their current institutional policies and practices.

The project has produced a series of practical, evidence-based insights into the impact of alternative assessment arrangements on student outcomes, highlighting areas of good practice and creative implementation. Project findings and outputs are illustrative of how clear, positive outcomes can develop from adversity, and how agile thinking and responses to change enabled institutions to put creative solutions and inclusive practices in place within a short period of time, with the cumulative effect of positively impacting student outcomes.


Sam Elkington is Professor of Learning and Teaching at Teesside University, a National Teaching Fellow and Principal Fellow of the HEA
Twitter: @sd_elkington

Can we use multiple-choice questions for assessment in any subject?

Can multiple-choice questions be used effectively for assessment in any academic subject? Having worked mainly in arts and humanities, I admit I’d never seriously considered this. But what with one thing and another over the last couple of years (!) I found myself grappling with this exact question about … errr… questions.

Recent moves by universities towards more blended and online learning contexts have necessitated more consideration of online assessment and the trickle-down consequences this might entail. For the managerially oriented, this offers the intoxicating waft of increased efficiency and even automated marking in the virtual air. You can almost sense the technology vendors circling, pitching Martin Weller’s Silicon Valley Narrative on how education is broken and only private sector solutions can fix it. Check out Weller’s ed tech pitch generator to get inside knowledge on the next big ed tech tornado coming our way!

So it was with some chastened surprise that I learned about the nuance, challenge, and even validity (insert shocked emoji if required) that MCQs can offer, according to a range of very credible advocates. I started with Phil Race’s Lecturer’s Toolkit, which lists a host of advantages, many perhaps unsurprising:

  • Reliability
  • Time efficiency
  • Availability of multiple-choice and multiple response options

Other strengths of MCQs were, to me, less obvious. As mentioned above, Race contends that good question items can be meaningful and valid – they test what you want them to – whilst also covering a greater extent of the syllabus (Race, 2015, p. 61). A close correspondence to much real-world decision-making was a further benefit listed. If well-constructed MCQs can be combined with other forms of assessment, it seems even greater validity, syllabus coverage and efficiency might be possible.

At this point, a request for advice on the SEDA mailing list produced the usual very generous and well-informed response. Colleagues in the fields of science, medicine, agriculture and educational development (among others*) have been producing great work in this area for some time. Of course, the community produced the same answer to my question as to virtually any in HE: ‘it depends … [how deep you want to go?]’.

Dr David Smith at Sheffield Hallam devises unGoogleable exam questions and gives tips and resources for teasing out those higher order thinking skills including “the ability to apply, analyse and evaluate information”. Rebecca McCarter and Dr Janette Myers note that single best answer questions (select the ‘most correct’ option) can test application of concepts over simple recall, whilst Peter Grossman uses degree of difficulty estimation (“think scoring of Olympic diving”). His idea of setting in-class tasks for students to construct their own questions is also valuable for a range of educational reasons. Colleagues Linda Sullivan (MTU Cork) and Ruth Brown (SEDA consultant) added further nuance by highlighting how we can ask students to rank or confidence-weight their responses. They also link to psychological research by Butler (2018) outlining 5 best practices in MCQ construction. It might, however, be challenging to balance Butler’s overall recommendation for simple item formats with the more nuanced demands of confidence weightings, rankings and explanations of answers discussed above.  
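To make the confidence-weighting idea a little more concrete, here is a minimal sketch of how such a scheme might be scored. The weights and confidence labels are purely hypothetical illustrations of the general approach – they are not taken from Butler (2018) or from any of the colleagues mentioned above.

```python
# A hypothetical confidence-weighted scoring scheme for MCQ answers.
# The weights below are illustrative only, not drawn from the sources cited in this post.

# Mark awarded for each (confidence level, answer correct?) combination.
WEIGHTS = {
    ("low", True): 1,     ("low", False): 0,
    ("medium", True): 2,  ("medium", False): -1,
    ("high", True): 3,    ("high", False): -2,
}

def score_response(correct: bool, confidence: str) -> int:
    """Return the mark for a single answer given the student's stated confidence."""
    return WEIGHTS[(confidence, correct)]

# Example: a student answers three items with varying confidence.
answers = [(True, "high"), (False, "high"), (True, "low")]
total = sum(score_response(correct, conf) for correct, conf in answers)
print(total)  # 3 - 2 + 1 = 2
```

Negative marks for confidently wrong answers are one common way of discouraging blind guessing, though any penalty scheme of this kind needs to be explained to students clearly and in advance.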

This tension between simplicity and effectiveness gets to the heart of the issue of writing MCQs, and brings me to the conclusion of my brief research into this area. Doing this well will take substantial time, expertise and input from a range of stakeholders. The usual suspects will be required: lecturers, educational developers, learning technologists, student voice, quality and standards, and more I’m sure. Good items take time to produce; banks of items much longer. Quality control and piloting are essential – I was amazed how many ways there are to get this stuff wrong, in ways which aren’t obvious during question construction. Colleagues from Harper Adams highlighted the analysis techniques needed to assess both the difficulty of items and the extent to which each one discriminates between student levels, correlating student scores for each question with the overall assessment mark (a rough sketch of this kind of item analysis is given below). Fascinating, but tricky and time-consuming. So, MCQs – better than I thought as a tool, even more challenging to make than I realised. Reassuringly, Butler (2018, p.323) notes that aside from simply measuring things, MCQs can “cause learning”. Phew. I’ll try that line next time someone asks what I do: “I cause learning”. I wonder if the OfS will accept that as evidence when they next come knocking?
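For anyone curious what that kind of analysis involves, here is a minimal, illustrative sketch (with made-up data) of classical item analysis: a facility index for each question, and a discrimination index based on the point-biserial correlation between each item and the rest of the test. It is a toy example under my own assumptions, not the specific approach used at Harper Adams.

```python
# Illustrative classical item analysis for an MCQ test (made-up data).
import numpy as np

# rows = students, columns = items; 1 = correct, 0 = incorrect
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
])

total_scores = responses.sum(axis=1)  # each student's overall mark

for item in range(responses.shape[1]):
    item_scores = responses[:, item]
    # Facility (difficulty) index: proportion of students answering this item correctly.
    facility = item_scores.mean()
    # Discrimination: point-biserial correlation between the item and the rest
    # of the test (total score with this item removed, so it doesn't correlate with itself).
    rest_scores = total_scores - item_scores
    discrimination = np.corrcoef(item_scores, rest_scores)[0, 1]
    print(f"Item {item + 1}: facility = {facility:.2f}, discrimination = {discrimination:.2f}")
```

Items with very high or very low facility, or with discrimination near zero (or negative), are the ones that typically get flagged for review or removal during piloting.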


Steve White has been lurking in teaching, learning and research-related third spaces in HE for about 20 years. Most relevant for this article, he’s dabbled in online materials and test item writing for Oxford University Press. He worked in intriguingly ill-defined roles while developing online MA courses and MOOCs for the University of Southampton, leading him to complete PhD research on the third space in HE. More recent roles have straddled Learning Development and Educational Development at Arts University Bournemouth and Southampton.

*Many thanks to contributors to the discussion on MCQs: Ruth Brown, Dr David Smith, Peter Grossman, Dr Janette Myers, Clare Davies, and colleagues at Harper Adams. My apologies if I’ve missed anyone out – I’ve recently changed employer so lost access to some emails.

References

Here’s a list of resources and references I received from the SEDA community, including a number of items suggested by Clara Davies:

Burton, R. F. (2005). Multiple-choice and true/false tests: Myths and misapprehensions. Assessment and Evaluation in Higher Education, 30(1), 65–72.

Butler, A. C. (2018). Multiple-Choice Testing in Education: Are the Best Practices for Assessment Also Good for Learning? Journal of Applied Research in Memory and Cognition, 7(3), 323–331.

Case, S. M., & Swanson, D. B. (2002). Constructing Written Test Questions for the Basic and Clinical Sciences.

Gronlund, N. E. (1991). How to Construct Achievement Tests (4th ed.). Allyn & Bacon.

Race, P. (2020). The Lecturer’s Toolkit (5th ed.). Routledge. (Ch. 2)

Race, P. (n.d.). Designing Multiple-Choice Questions. Phil Race: Assessment, learning and teaching in higher education.

Could encouraging low-stakes failure build up our students and prevent them falling later on?

For most students, studying for a degree is a challenge. The experience for each student will be unique, but the challenge of transitioning from the familiar to the unknown is common to all. We expect our students to arrive at university eager to learn and full of enthusiasm for their chosen subject, ready to dedicate themselves to their studies and thrive. However, an increasing number arrive capable of little more than surviving the turbulent transition to university life.

As educators we need to stop projecting our personal experiences of studying at university onto today’s students. Instead of asking why our students are less engaged than previous cohorts, why they report not feeling part of their university community, or why they seem reluctant to form support groups with their peers, we need to start responding. We need to recognise the effect of their disrupted education during the ongoing pandemic and acknowledge that the future will be volatile too. We need to empathise and ask how we can help develop the skills and capabilities our students need to transition successfully to university life and to face future challenges – skills such as coping with complex challenges and future uncertainty, which they will need as they move from learning to becoming. We often focus on protecting our students from failure, when a better preparation for the uncertainties of life is to support them as they experience failure. By helping them to gain resilience, increase confidence and manage their fear of failure, we start to remove barriers to learning and equip our students with the skills needed to thrive in their studies and beyond.

Tips For Building Student Resilience

  • Create opportunities for students to experience low-stakes failure, e.g. campus-wide scavenger hunts during freshers’ week or non-assessed practicals/presentations. Post-task reflection activities can make these activities even more effective for building student resilience and encouraging a growth mindset.
  • Engage students in activities which have only limited instructions, requiring the students to make decisions about how to accomplish the task, e.g. classic team-building tasks of building a specified object (the tallest tower, the widest bridge, etc.) using an array of given items, or a methodology to follow which lacks timings or quantities. Such opportunities encourage decision-making, groupwork skills and autonomy and, if elements that require negotiation and discussion are incorporated, can be effective for building self-efficacy and confidence.
  • Avoid repeating similar tasks, forms of assessment or activities and instead vary the “what” and “how” elements of what the students are asked to do. This encourages development of a variety of skills and experience of dealing with ‘the unknown’.
  • Make use of low-stakes discussion prompts that ask students what they think rather than what they know, e.g.
    “How would a successful student prepare for this?”
    “What do you notice about…?” or
    “What tips do you have to help other students in the year ahead?”
    Holding these discussions in an informal setting where all voices are heard can build self-confidence and a feeling of community amongst the student cohort.

Kelly Edmunds, University of East Anglia
k.edmunds@uea.ac.uk, @kellyedmunds