Looking at assessment differently
"Assessment is a glorious celebration of student achievement." So said Dr. Jan McArthur in her keynote at the International Assessment in Higher Education (AHE) Conference 2017, and we all tittered and rolled our eyes. She said her colleagues usually responded that way too.
We all have it: the dread of having to get through a pile of marking, the feeling we don’t have time to do it properly, the secret belief – increasingly supported in the literature – that the way feedback is traditionally done doesn’t actually help students. Part of Dr. McArthur’s response to this conundrum was a phrase elegant in its simplicity: it need not be so. Bad assessment, unhelpful assessment, assessment neither we nor our students believe in – it need not be so. Well, yes. We could assess differently. We do, sometimes, at Bradford. We use portfolio.
Portfolio at the University of Bradford is synonymous with PebblePad, which we’ve been using in one way or another since 2007. It doesn’t solve all our assessment problems – what does? – but it has made certain things a whole lot better.
Much of the AHE Conference focused on feedback, with a key understanding taken as a given: summative end-point feedback is not particularly useful for students. They need something earlier, something which forms (and informs) their learning, rather than something to keep in mind for next semester or over summer or after they graduate. The idea is both pedagogically and intuitively sound, but I doubt I’m alone in struggling to move peers and programmes away from end-point feedback. Feedback is like fish, I tell my colleagues, repeating a well-known line: it goes off if it’s not used quickly. In one of the AHE feedback masterclasses, David Boud was much more strident: “stop feedback at the end – it’s just mark justification!” It’s good to hear this kind of clarity on what has often been unexamined feedback practice, unconsciously inherited and thenceforth perpetuated by teachers who legitimately want to do right by their students. I frequently use David Carless’s table of shifts in feedback priorities (Excellence in University Assessment, 2015, p. 240), and the usual reaction is something like ‘yeah, that makes sense’. In the book, the table recommends:
More emphasis on:

- In-class dialogic feedback within module
- Written feedback comments on first assessment task of module
- Feedback for first year students

Less emphasis on:

- Unidirectional comments after completion of module
- Written feedback comments on final task of module
- Feedback for final year students
This is often followed, though, with concern along the lines of ‘but given the constraints of my module/programme/circumstances, how can I make that happen?’ I have found two ways to help shift feedback using PebblePad: a simple hack and a planned intervention.
The simple hack is to exploit the submission functionality of PebblePad; that is, just get work onto the assessment and feedback space (ATLAS) from the start. This is easily done as it’s built into the system: work can be set to auto-submit as soon as it’s first saved, or students can manually submit as soon as they begin to create. Seeing the work develop live – no resubmission – in an online space that teachers, workplace mentors, or other relevant staff can access at any time from any computer is invaluable: suddenly there is recorded work to respond to before end-point summative marking rolls around. It’s a simple concept and yet can’t be done in most learning technologies, including VLEs and Turnitin, which both require resubmission. (The closest approximation would be to get students to create work in an always-online environment using the Google or Microsoft suite of tools, but these better suit actual work projects than learning projects, which need to be non-live at various points for assessment and QAA requirements, and which need integrated feedback and grading functions to work at any kind of scale.) The submission-initial model can sometimes require a bit of theoretical recalibration from students and teachers alike, as it’s often not what we’re used to. The pink text in our student guide to PebblePad highlights the differences:
Once teachers realise they can see student work develop live, it actually becomes feasible to give more feedback during the module and less at the end. Without this technological aid, attempts to shift feedback earlier usually end up adding to our marking and admin time rather than redistributing it.
There’s plenty more to exploit in the work being live, from reporting on a whole cohort’s understanding of topic X or completion of activity Y (if you are using templates or workbooks) to helping students realise their work has an audience and real purpose. It can get overwhelming though, with students potentially thinking that because their work is live, you’ll look at it night and day, or alternatively getting worried that their time-stamped edits will betray their poor time management. Just as the old end-point feedback schedule was clear to all parties, the formative and mid-point feedback schedule must be made clear too. I usually suggest teachers take this simple timeline as a starting point.
The second way I’ve seen programme teams successfully shift away from end-point summative feedback is to build in feedback cycles using either workbooks with ‘assessor only’ fields or structured feedback templates. For example, one programme moved the formative marks and comments for a mid-semester practice presentation into a module workbook. The practice presentation was already part of the module, but the move to PebblePad allowed students to self-mark how they thought they did, and the teacher then gave their mark in an ‘assessor only’ field right underneath, as the below extract shows:
The difference or similarity between the marks, plus the teacher’s formative comments, helped the students write something concrete in their workbook reflections and more effectively plan what they would do for the final presentation. Taken together, the two presentations form a full feedback cycle of task → feedback → student action → repeat task. As the AHE Conference kept hammering home, feedback must be used by the student to do something, otherwise it’s just teacher talk.
The glorious celebration
While thoughtfully designed workbooks can be an effective way to shift feedback to where it belongs, sometimes workbooks can be a little too addictive. Teachers quickly see how useful workbooks could be: ‘You mean I can see if my students have done the work they’re supposed to do before they come to class?’ ‘Can I add a tickbox for them to declare if they’ve done task XYZ?’ ‘Can I make them prove they’ve done it?’ Yes, yes, and yes, and looking at the how and why for these sorts of questions leads to very productive learning design, and is a big part of why I use PebblePad.
But if our students leave us with only the ability to fill in structures we give them, I wonder how they’ll fare in work and in their own development after university. And to bring it back to assessment, a very locked-down workbook design could lead to student work that is not particularly individual, and assessment of such work goes back to where we started, with page after page and student after student blurring into each other as we slump in front of our computers, numbly consuming caffeine. But it need not be so.
We can allow students freedom to evidence how they meet the requirements and learning outcomes we have set by asking them to create portfolios instead of (or in addition to) filling in workbooks. Portfolios are not as standardised as workbooks and can, therefore, require more engagement from the teacher and the student – but this is part of the reason why eportfolios can be such a glorious form of assessment.