Academic Development is a straightforward enterprise. The idea is that academic development interventions influence lecturers’ conceptions of teaching and learning, and this in turn brings about changes in practice. If these changes represent a range of pedagogic approaches that foster student-centred active learning, then this can impact positively on student learning (Gibbs, 2010). Job Done! However, despite this simplicity, evaluating it is a complex task, and as a result the literature (and the SEDA mailing list, e.g. Hancock, 2021) is peppered with debates about what and how to evaluate, and what value should be attributed to results. These debates have taken place against a backdrop of shrinking funding for pedagogic projects, including academic development, across the UK sector, which has led to confusion about the purpose of evaluation. Is it to save our skins? Or to evidence how, where, and to what extent our practice impacts on the student learning experience? Luckily, as you are probably aware, these are the same endeavour. So why are we finding it so hard to do, and how can we do it better? These are existential questions beyond the remit of this blog, but I do want to use this space to comment on three issues which, if addressed, could perhaps make evaluating academic development less onerous. These are: raising awareness of existing practices in evaluating academic development; challenging how we measure learning; and suggesting that we use other trends in HE evaluation to further our own agenda.
In terms of directly evaluating our impact on lecturers and triangulating this with institutional metrics, there is some brilliant and very accessible work being done on data use, for example by the QAA with Liz Austin and Stella Jones-Devitt, and work that has looked specifically at how to evaluate academic development (Bamber, 2020; Baume, 2008; Kneale et al., 2016; Spowart et al., 2017; Spowart and Turner, 2021; Winter et al., 2017). Upskilling ourselves in this work as part of routine academic development practice is a solid first step.
Whilst the sector is good at conceptualising how to evaluate learning, it tends to be less good at putting it into practice. A cursory glance at most in-house module evaluation formats tells us that. The emphasis on Student Evaluation of Teaching (SET) instruments over those which capture learning gain, learning transfer, students’ behavioural, emotional, and cognitive engagement, and subsequent engagement in lifelong and lifewide learning, means that we often do not have the right data to answer our own question. Creating awareness of these alternatives to SET is an essential endeavour because measuring anything about students takes place largely outside academic development units, and so we need others to measure learning for us. This should therefore be core business in our PGCerts and in our sphere of influence across the institution. Once others are evaluating learning properly, we will be in a better place to evidence our own contribution.
The expectation that evaluation should underpin evidence-based practice is being laid at the door of HE in many ways. One which I see as offering possibilities for academic development is the OfS Access and Participation Plan mechanism to eliminate inequality in access and participation in UK HE. This has brought about significant changes in how the sector creates, manages, and uses data on and by students. Within universities, data analysis for the OfS is evolving as its own enterprise, as interventions underpinned by theories of change, iterative evaluation strategies, and carefully developed conceptions of value are put into place. These interventions are often modest but linked across different stages of the student/university cycle. This sort of project offers academic developers opportunities to be part of institutional interventions: advising on how learning is and can be embedded, sharing the data produced, and gaining a seat at the (often senior-level) table where these projects are discussed. Evaluation of these projects is often innovative, and these approaches can be adopted within our own evaluation practice, fostering creativity in method and dissemination.
With the financial pressures on the sector looking set to continue, and the imminent reinstatement of institutional TEF, generating positive evidence-based impact ‘stories’ continues to be important. So, let’s ask the right questions, get ourselves sat round the right tables, and then shout our value loud!
Jennie Winter is Professor of Academic Development at Plymouth Marjon University. Her current research interests are teaching sustainability in Chinese higher education and decolonising curricula in non-diverse contexts.
References
Bamber, V. (2020). Our Days Are Numbered: Metrics, Managerialism, and Academic Development. Staff and Educational Development Association.
Baume, D. (2008). A toolkit for evaluating educational development ventures. Educational Developments, 9: 1-6.
Gibbs, G. (2010). Dimensions of quality. York: Higher Education Academy.
Hancock, J. (2021). ‘Evaluation of the impact of learning and teaching development’. SEDA mailing list discussion.
Hughes, J., McKenna, C., Kneale, P., Winter, J., Turner, R., Spowart, L. & Muneer, R. (2016). Evaluating teaching development in higher education: Towards impact assessment (literature review). York: Higher Education Academy.
Kneale, P., Winter, J., Turner, R., Spowart, L. & Muneer, R. (2016). Evaluating teaching development activities in higher education. York: Higher Education Academy.
Spowart, L. & Turner, R. (2021). Institutional Accreditation and the Professionalisation of Teaching in the HE Sector.
Spowart, L., Winter, J., Turner, R., Muneer, R., McKenna, C. & Kneale, P. (2017). Evidencing the impact of teaching-related CPD: beyond the ‘Happy Sheets’, International Journal for Academic Development, 22(4): 360-372.
Winter, J., Turner, R., Spowart, L., Muneer, R. & Kneale, P. (2017). Evaluating academic development in the higher education sector: Academic developers’ reflections on using a toolkit resource. Higher Education Research and Development, 36(7): 1503-1514.