We are going to build a world-class Submission and Assessment Management System (S&AMS): a tool that can support the assessment of learning beyond just PebblePad assets. In some respects this is tricky territory, because file submission and assessment is typically handled by the LMS or by tools like Turnitin. So why take ATLAS in this direction? Partly it’s a case of improving the platform to catch up with those customers who are already using it in this way, and partly it’s because we have already done the hard miles. What I mean by this is that, in order to provide our customers with the richest possible palette of tools for assessing portfolio-like items, we already support processes like blind and double-blind marking, moderation, external examination, assessment sets and peer review, and we have some genuinely distinctive features like pausing, look-back-in-time and feedback templates.
There are lots of enhancements we want to deliver to what we might think of as our ‘core audience’ before moving on to offering ATLAS as a standalone S&AMS. We know that ATLAS is already highly regarded as an ‘assessment’ platform - but think how much more useful it will be when it supports annotations, offline marking, and even better reporting and analytics! These are changes which both enhance our current offer and make the platform much more capable as a standalone S&AMS.
As I noted above, some of our customers already use ATLAS as their S&AMS - and some do so at rather surprising scale. Some do so because they can’t achieve their desired workflows in any other platform (particularly synchronous, double-blind marking), some because of reliability issues experienced elsewhere, and some because of the pedagogic advantages. We were privileged to have David Boud as a keynote speaker at our 2014 PebbleBash conference. Followers of David will know that he advocates much greater engagement and responsibility for learners in the assessment of learning. There are some simple things ATLAS does to support this, such as allowing feedback to be released independently, with grades following only once learners have responded to the feedback. To finesse that slightly, some courses have learners complete ‘what I’m going to do with my feedback’ templates, whilst others, taking up David Boud’s appeals, have students actually complete templates upon submission, signalling how well they think they have addressed the relevant learning outcomes and signposting to the tutor the kind of feedback they would particularly appreciate.
Whilst offline marking is always going to be a very difficult trick to pull off for PebblePad assets, it’s quite an easy feat for Word docs, PDFs and the like. On the other hand, we’re pretty sure (I’d like to be more certain, but we’re still at the design stage) that we can introduce a method of annotating ‘stuff’ that works the same way for native PebblePad assets and for files - and, further down the line, maybe even for images and video. We have provided shareable feedback comment banks for ten years, including the ability to save new comments to the bank as you type. Our plan is to link these to the annotation tools and to provide analytics showing where and how often each comment was used - across multiple assessments. Add to this improved rubrics that support everything our current scorecards do (weighting, alternative scores), feedback templates with auto-summing tables, and embedded hints for assessors, and it all starts to look like it will meet our twin aims: improving feedback to learners whilst making life easier for assessors.
A wee bit further down the timeline (which I’m conscious I haven’t discussed yet) will be the re-emergence of Flourish as a learner-centric space for keeping track of all feedback and grades - as well as a place to plan how to learn from it, by inviting in tutors, academic advisors and even peers.