“Lee” had completed the PGCert HE (Postgraduate Certificate in Higher Education) with Fellowship of the HEA, and had been awarded an “A” grade (summative submissions on the Ravensbourne PGCert units are graded A–E; the PGCert at Greenwich is pass/fail only). As the PGCert Course Leader, I, Virna, should have rejoiced. Yet I felt disappointed; I felt I had actually failed her. I knew that she had strategically fulfilled all the assessment elements and fully met the criteria, but I could not avoid concluding that her teaching practice was weak. She had an A-grade PGCert, but her teaching was far from A standard. The discomfort I felt prompted me to turn to my SEDA colleagues. In February 2019, I posted this question on the SEDA Jiscmail group:
‘To what extent does teaching quality of participants affect the final outcome in your PGCert? I have been thinking how to give more weight to the actual teaching quality of the participants when it comes to PGCert assessment. As it stands you can get an A if you are very good at essay writing/reflection, even if the teaching is relatively poor. I would like to change that, as it defeats the purpose, I feel.’
The quality and quantity of replies clearly demonstrated the interest this question sparked. From the outset, the responses indicated that many colleagues had been in a similar predicament and felt this was one of the ‘wicked problems’ of teacher education.
‘Bad news first – this debate has been going on for a long time. If memory serves me right, it was one of the first conversations I had with colleagues when I started work at what is now the Centre for Educational Development at Bradford in 2003’
‘This is something that I have grappled with for a long long while, and have almost resigned myself to the fact that it might be an unsolvable problem’
The online thread also opened up tangential discussions about the wider purposes of the PGCerts (or equivalents, referred to as ‘PGCerts’ throughout for convenience) we all work on, and even caused one colleague to ‘disappear down the rabbit hole’.
In this post we have summarised some of the key themes under the broad questions that arose from the initial post. We then point to some of the potential compromises, solutions and alternative approaches to observation currently used by colleagues. We have quoted from several responses, though we have not named the responder against each quote; the names of all contributors are listed at the end of this post.
The initial question prompted other, related big questions: what is the purpose of the PGCert? And what is the purpose of the observations within PGCert?
‘Your question boils down to what the purpose of the PGCert is. Is the PGCert an academic qualification about tertiary teaching and learning, a teaching credential, or a combination of both?’
Besides the more obvious outcome of enhancing the teaching quality of participants, a few ‘side effects’ of lesson observations on PGCert were identified by SEDA participants.
Firstly, the development of the relationship between observer and observee. Seeing someone ‘in practice’ can be transformational for the PGCert staff:
‘We really get to know our participants and we have the chance to see them applying the learning from the course in their own contexts. My own experience has been that it really transforms the relationship that I have with the participants, I feel like I understand them and the perspective that they bring to the PGCert much more fully as a result.’
Secondly, using lesson observations as a tool to find out whether teaching is the right profession for the participants (!):
‘If someone’s teaching practice is below ideal and there is no improvement then they need to reflect on why. This could then be turned into a pass/fail element of the course and if a fail the student could have a 1-2-1 discussion going through their evidence to show them that teaching is maybe not for them.’
Thirdly, using lesson observations to interrogate the quality of the PGCert itself. When we judge the teaching quality of others, we assume that the staff development course is sound and is equipping participants to reach the required standard. What if this were not the case? We would be unfairly judging teaching as poor without providing the means to improve it:
‘It may be right to raise concerns about teaching quality if a colleague has clearly not improved their teaching as a result of participating in the course. Alternatively, it could be that the course being studied is not equipping that colleague with the skills necessary to teach to whatever the required standard. A huge can of worms!’
Although nearly all PGCert courses require lesson observations (including online PGCerts), there are wide variations in expectations about the number of observations, who conducts them, and how (or whether) they are assessed. On some courses there is no ‘measure’ of teaching quality; on others, participants are required to observe and be observed by colleagues both within and outside their field; on others, lesson observations recognise improvement and capacity for further improvement rather than quality based on a snapshot in time; on yet others, if a participant does not perform well in teaching observations, extra observations are built in so that the process is developmental and supportive; some institutions use peer-reviewed, video-recorded lessons for teacher development.
When thinking about lesson observations as a way of gauging participants’ teaching quality, another issue raised was the time frame of the PGCert course, which is usually around one year. Developing high-quality teaching within such a short time frame, effectively a few terms of practice, is perceived as an unrealistic expectation, particularly for new teachers.
‘… It is often stated that one of the most powerful professional development tools is often just ‘doing the job’ and learning via experience. Hence, for some, it may take several years to develop the skills, confidence and reflective qualities that support quality teaching.’
‘Perhaps we should instigate the 10,000 hour rule – that this is how long it takes to get good at something…(in jest).’
So it seems unfair to expect a very new colleague to attain a given standard of teaching in such a short time, because ‘participants have different developmental trajectories during their time on the course and beyond’. One colleague warned: ‘Credentialing provides us with a level of power that undermines the developmental nature of our work’.
Another related issue is what is observed: usually only ‘lectures’ are assessed. It is not common to observe tutorials or small-group teaching, approaches to feedback, or project and dissertation supervision, which are perhaps more important than lectures in the student learning experience. It is problematic to base an overall judgement of teaching quality on only one type of teacher–student interaction, which might happen not to be the forte of the observee:
‘Someone may not have the most dynamic and charismatic lecturing style but be actively creating an amazing learning environment for the students…’
Should observations on PGCerts be graded?
It became clear as the thread evolved that observational practice is hard to separate from the values and culture that underpin the PGCert that frames it. Within a PGCert, authentication of practice, or lesson observation, is seen as a key element of the professional development the course is supposed to prompt. While studying on a PGCert course, participants should be applying theories of teaching and learning to develop and enhance their own teaching practice. It seems clear that the main benefit of lesson observations is the developmental nature of the process, including the feedback conversation that should accompany each observation. A key question, then, is how PGCert courses could ‘provide the safe space for colleagues to develop practice without fear’.
In this regard, one of the most debated aspects of lesson observations on PGCerts is grading. There were many contributions on this thorny issue in the discussion thread: most contributors clearly stated that grading the observation is controversial and counterproductive, not least because it might hinder the willingness to learn from the observation, shifting the focus to ‘measuring’ quality, which is in itself highly contested. The following sub-questions emerged from the discussion around grading observations.
What counts as grading and how can you go about it?
In an institution that uses fail/pass/good pass on each of the three PGCert observations, one colleague described how success or failure depends on planning, reflection and action planning as well as the actual teaching. Fascinating for us was that the criteria (if we can call them that) are fixed not to competences but to perceptions of development:
‘Everyone has different starting points so I think it’s better to recognise improvement and capacity for further improvement rather than quality based on a snapshot in time.’
Citing Chris Rust’s argument*, and before he disappeared down the rabbit hole, another colleague connected the negative impact of grading on ‘regular’ students to grading of observations. He went on to argue:
‘So how might we operationalise ‘quality’ in a teaching observation and provide a grade? On what basis a ‘quality’ lesson? Leaving aside measuring learning we might observe what a teacher/lecturer (even what we call the person ‘doing’ this thing is problematic!) does in order to promote, facilitate, enable learning (let’s for the sake of a name call this engagement). Imagine that we can scientifically (objectively and without debate) measure student engagement with a class and that this was the direct and undisputed result of someone ‘doing’ teaching. In the class I observe the ‘engagement measurer’ said 78% of students were engaged for 78% of the time (the other 12% just stared out the window) – good enough for a ‘quality’ session?’
The many layers to this effort to provide a single measure are indicative of the complexity involved in fair grading. This may be because:
‘Teaching and Learning is an exercise in critical reflective practice and continuous improvement, and doesn’t lend itself to a simple competence model or even a model of excellence’.
What is the potential impact of adopting a grading system for observations?
While those who are already comfortable and confident in their teaching may crave a grade, there may be a case for seeing grading as an opportunity to provide ipsative staging posts, separate from summative/final observations, as a motivational tool.
As academics who are ‘anti-grading as a matter of principle’, our core reasoning is that grading shifts motivation from the intrinsic to the extrinsic. Aside from increasing anxiety (which may simply correlate with observation per se rather than only with graded observation), other potential issues have been noted based on experiences of observations used for performance management, since the inclusion of a grade may well further align PGCert observations with perceptions of those schemes.
It is worth quoting a few contributions to the discussion on grading as they all add different perspectives to the discussion:
‘Such schemes are perceived negatively by teachers, risking alienation, lack of engagement and sometimes outright opposition…they also tend to prevent teachers from taking necessary risks…that are critical to improving practice’.
‘I fear that graded observation could end up another stick to beat colleagues with’.
‘The ‘problem’ for me in any notion of grading teaching quality is what are we actually grading? Teaching is not the end point, which arguably is student learning. This in turn is problematic; learning what and for what purpose? This, for me, then makes any observation of teaching (and its quality) completely context dependent. ‘Good’ teaching can then only be something that is measured against that context’s criteria… there is a moment in time issue.’
‘Using ‘numbers’ encourages surface learning rather than deep learning.’
‘I feel adding a “fitness to practice” judgement, when they are already practicing, deeply problematic.’
‘It’s really important though that this assessment is conducted in a safe and supportive manner and it is divorced from any use of peer observation as a QA or QE device within their departments.’
I, Martin, wrote about the many reasons why grading observations is counterproductive (my full reply is at the end of this post) and concluded with perhaps a fundamental question:
Are we academic developers or academic assessors?
Towards the end of the discussion thread a number of colleagues contributed ideas for new models of teaching observation. It is worth noting that these revolve around collaborative and longitudinal learning, as opposed to one observer gauging or judging the teaching quality of one observee at one point in time. We concur that it would be more supportive for staff development to create a peer review culture where everyone learns from each other, including participants’ students, PGCert staff and participants’ peers:
‘Teaching and learning doesn’t just take place in traditional classrooms, and will spill over even when it does, so encourage a wider culture of peer review.’
An example of this is the ‘Peer Collaboration Network’ at the University of Windsor, designed to promote ongoing conversations about teaching enhancement, with peer observations at the heart of the design. Another example is the ‘Peer-Supported Review’ model at De Montfort University, where colleagues can receive feedback on any aspect of their learning and teaching practice, not just what happens in class. This provides a more well-rounded view of practice. A similar process, ‘Peer Supported Development’, is in its first year across the University of Greenwich (in addition to PGCert observations), where observer learning is weighted equally with observee learning. At Ravensbourne we have also experimented with ‘peer-reviewed videoed lessons’, which carry equal weighting with other PGCert observations.
These collaborative networks seek to promote continued development of teaching and learning, well after the PGCert course and its observations are over. This matters because professionalism is also about maintaining good standing:
‘What would be more helpful in raising overall academic quality would be a requirement for maintenance of good standing and the inclusion of teaching assessment in all promotion cases where any student contact is included.’
In view of all the above points raised in the thread, how can we answer Virna’s original question? How can we make teaching quality ‘count’ more on PGCerts? There is no simple answer, but there is an increased awareness of the various factors that affect the implementation of teaching observations in PGCerts. Teaching observations, collaborative in nature, are deemed an important aspect of teacher development within PGCerts, but they should remain developmental in nature and not bound to QA and grading systems which shift the focus from professional growth to performance metrics.
The collective wisdom of the SEDA community has helped enrich the debate about teaching quality through this online discussion, and we would like to sincerely thank all contributors, listed below. We conclude by quoting the words of one colleague, which we feel succinctly encapsulate the paradigm shift that all educationalists should be aiming for:
Exchange and growth of teaching effectiveness is a collective responsibility of academic staff … “we are all works in progress”
*Rust, C. (2011). The unscholarly use of numbers in our assessment practices: What will make us change? International Journal for the Scholarship of Teaching and Learning, 5(1), 4.
Blog post authors (contacts):
Martin Compton, University of Greenwich: M.Compton@greenwich.ac.uk
Virna Rossi, Ravensbourne University: email@example.com
Names of contributors (with permission):