Monday, 23 March 2009

So where are we?

At the end of February I called ‘time’ on Sounds Good: audio feedback after that date would not be taken into account when writing up the project. Since then, I’ve been gathering and analysing information and drafting the reports for JISC. Several months ago I asked on the blog ‘Are we nearly there yet?’ Only now, after a journey of a year, can we answer ‘yes’.

So where are we? Regular readers will know the main aim of Sounds Good was to test the hypothesis that using digital audio for feedback can benefit staff and students by:
  • saving assessors’ time (speaking the feedback rather than writing it)
and
  • providing richer feedback to students (speech is a richer medium than written text).
Initially the project was funded for the period January to July 2008. During this time a team of 16 Leeds Met lecturers experimented with digital audio to give formative and summative feedback on students’ coursework. Later, funding was provided under JISC’s ‘benefits realisation’ initiative for a second stage, ‘Sounds Good 2’, which ran until February 2009. In this phase the design called for six Leeds Met staff from the first stage to mentor 12 colleagues joining the project and for audio feedback to be introduced to three other higher education institutions: Newman University College, Birmingham; University of Northampton; York St John University.

Overlapping with the second phase, two HE Academy subject centres – Engineering and Geography, Environmental and Earth Sciences (GEES) – were funded to introduce audio feedback to their constituencies as part of JISC’s ‘Widening Stakeholder Engagement’ initiative. I’ve been helping the subject centres with this work.

Sounds Good has mainly been a qualitative study. Even so, it has produced a few statistics. Taking the two phases of Sounds Good together, 38 teachers in four institutions have supplied audio feedback to at least 1,201 students at all educational levels from foundation degree and first-year undergraduate to doctoral. The staff were located as follows: Leeds Met 23, Newman University College 8, University of Northampton 4, York St John University 3. In the first phase the numbers on the various modules ranged from six to 151, with at least 463 students receiving one or more items of audio or video feedback. In Sounds Good 2 the numbers on modules ranged from three to 150 and at least 738 students received one or more items of audio feedback.

The project has operated in widely differing circumstances, which has been a mixed blessing. The main advantage of this diversity is that it has enabled a worthwhile preliminary exploration of the potential of digital audio for assessment feedback. On the other hand, the differing circumstances have led to a suite of case studies rather than one large, standardised experiment.

Sounds Good has worked very well overall. In the first phase it ran almost entirely to plan. In the second phase it generally went well in all four institutions, but there were a few minor problems, including:
  • Only four of the six Leeds Met mentors managed to engage with mentees.
  • Only seven, rather than the planned 12, mentees were recruited at Leeds Met.
  • The extended communication channels between me and some team members occasionally left staff unclear about what was expected, and left me less well informed than before about what was happening.
  • The data returned were somewhat less complete and even more varied in nature than in the first phase.
  • I found it difficult to give the project sufficient time in January-February 2009.
The Sounds Good staff team is, on balance, strongly in favour of audio feedback. Even if they didn’t manage to save time, a high proportion of the team have commented that they were able to give more, and higher-quality, feedback using audio, which they felt was worthwhile. Their reservations about audio feedback were mainly about the practical difficulties they encountered. Most of these could be regarded as ‘teething problems’ which might reduce or disappear with further practice and the use of the practice tips which we’ve published. The majority of the team have clearly said they intend to continue using audio feedback, and almost all will probably do so.

No doubt some staff are encouraged by the fact that the great majority of students were positive about receiving audio feedback on their coursework. Students particularly appreciate the personal nature of individual audio feedback, as well as the detail they often received.

As for the central question tackled by Sounds Good:
  • Can digital audio be used to give students quicker, better feedback on their work?
the answer is ‘yes’, in some circumstances. The most favourable conditions seem to be:
  • The assessor is comfortable with the technology.
  • The assessor writes or types slowly but records his/her speech quickly.
  • A substantial amount of feedback is given.
  • A quick and easy method of delivering the audio file to the student is available.
At this stage it is fair to say that most UK academic staff assessing student coursework would probably find it worth giving audio feedback an extended trial with at least some of their assessment work. For many it would be sensible to begin where the conditions are most favourable. For example, this might be with a small cohort, or where the detail or personal quality of audio feedback is particularly important. In contrast, it would probably be inadvisable to start by attempting to give individual audio feedback to a big cohort, because of the problem of accurately providing large numbers of audio files to students. However, with a big cohort an early, efficient step might be a ‘one-to-many’ communication: group audio feedback.

There is much yet to explore in the field of audio feedback. There is plenty of scope for larger trials, attempting to tease out the variables and studying the effectiveness of audio feedback (i.e. whether it enables students to learn more). However, a particularly pressing problem – one which might be solved quickly by a programmer – is to automate the process of sending feedback to students. Audio feedback is already an attractive proposition, yet if assessors could be confident that – regardless of cohort size – it would take them little or no time to let students have their audio feedback, even more would probably find audio feedback worth adopting.
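To give a flavour of how simple such automation could be, here is a minimal sketch in Python. It assumes a hypothetical workflow, not anything built by the project: feedback recordings are saved as MP3 files named after the student ID (e.g. `s001.mp3`), and a roster maps each ID to an email address. The function pairs each file with its student and builds a ready-to-send email with the recording attached; the roster, file-naming scheme, and addresses are all illustrative assumptions.

```python
# Hypothetical sketch of automated audio-feedback delivery.
# Assumption: each recording is saved as <student_id>.mp3 and a roster
# dict maps student IDs to email addresses. Nothing here is taken from
# the Sounds Good project itself.
from email.message import EmailMessage
from pathlib import Path


def build_feedback_messages(audio_dir, roster, sender, subject):
    """Match each <student_id>.mp3 in audio_dir against the roster and
    return one EmailMessage per matched student, recording attached."""
    messages = []
    for path in sorted(Path(audio_dir).glob("*.mp3")):
        student_id = path.stem  # filename without the .mp3 extension
        if student_id not in roster:
            continue  # unmatched file: leave it for manual handling
        msg = EmailMessage()
        msg["From"] = sender
        msg["To"] = roster[student_id]
        msg["Subject"] = subject
        msg.set_content("Your audio feedback is attached.")
        msg.add_attachment(path.read_bytes(), maintype="audio",
                           subtype="mpeg", filename=path.name)
        messages.append(msg)
    return messages
```

The resulting messages could then be dispatched in one loop with the standard library's `smtplib.SMTP.send_message`, so delivery time would barely grow with cohort size.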

Sounds Good has broadly achieved what it set out to achieve. It has done some valuable exploration and produced useful practice guidelines. All in all, it has delivered an excellent return on JISC’s modest investment and, most of the time, it’s been fun.
