Now that the formal part of the project is over and the final report is pretty much ready to go, it's time to acknowledge that it's been a team effort and thank everyone.
Sounds Good has involved many people, staff and students, all of whom helped the project to succeed in one way or another. Thanks are particularly due to JISC, for the funding and also for consistent encouragement and support, most obviously from Lawrie Phipps, Programme Manager for the Users and Innovation Programme. My line manager, Prof Sally Brown, Pro-Vice-Chancellor for Assessment, Learning and Teaching at Leeds Met, deserves an accolade for allowing me to run the project as I wished and to give it more time than budgeted for. Simon Thomson, Sounds Good’s Deputy Project Manager, has been a valuable ally, sounding board and source of advice. I am also grateful to my main contacts at the three partner institutions for Sounds Good 2: Bob Ridge-Stearn at Newman University College, Caroline Stainton and Katie Jackson at the University of Northampton and Simon Sweeney at York St John University. Peter Chatterton, the project’s ‘critical friend’, provided reassurance and an extra forum for discussion as well as provoking productive thought. Isobel Falconer, our external evaluator, negotiated sensitively on how to review the project and then worked with colleagues before producing her helpful insights and perspectives. It has also been fun to get to know Will Stewart, leader of the ASEL project at the University of Bradford, and to share experiences with him. To all these people, and to others too numerous to mention: I much appreciate your contributions.
What a team!
Wednesday, 25 March 2009
Monday, 23 March 2009
So where are we?
At the end of February I called ‘time’ on Sounds Good: audio feedback after that date would not be taken into account when writing up the project. Since then, I’ve been gathering and analysing information and drafting the reports for JISC. Several months ago I asked on the blog ‘Are we nearly there yet?’ Only now, after a journey of a year, can we answer ‘yes’.
So where are we? Regular readers will know the main aim of Sounds Good was to test the hypothesis that using digital audio for feedback can benefit staff and students by:
- saving assessors’ time (speaking the feedback rather than writing it)
- providing richer feedback to students (speech is a richer medium than written text).
Overlapping with the second phase, two HE Academy subject centres – Engineering and Geography, Environmental and Earth Sciences (GEES) – were funded to introduce audio feedback to their constituencies as part of JISC’s ‘Widening Stakeholder Engagement’ initiative. I’ve been helping the subject centres with this work.
Sounds Good has mainly been a qualitative study. Even so, it has produced a few statistics. Taking the two phases of Sounds Good together, 38 teachers in four institutions have supplied audio feedback to at least 1,201 students at all educational levels from foundation degree and first-year undergraduate to doctoral. The staff were located as follows: Leeds Met 23, Newman University College 8, University of Northampton 4, York St John University 3. In the first phase the numbers on the various modules ranged from six to 151, with at least 463 students receiving one or more items of audio or video feedback. In Sounds Good 2 the numbers on modules ranged from three to 150 and at least 738 students received one or more items of audio feedback.
The project has operated in widely differing circumstances, which has been a mixed blessing. The main advantage of this diversity is that it has enabled a worthwhile preliminary exploration of the potential of digital audio for assessment feedback. On the other hand, the differing circumstances have led to a suite of case studies rather than one large, standardised experiment.
Sounds Good has worked very well overall. In the first phase it ran almost entirely to plan. In the second phase it generally went well in all four institutions, but there were a few minor problems, including:
- Only four of the six Leeds Met mentors managed to engage with mentees.
- Only seven, rather than the planned 12, mentees were recruited at Leeds Met.
- The extended communication channels between me and some team members occasionally left staff unclear about what was expected, and left me less well informed than in the first phase about what was happening.
- The data returned were somewhat less complete and even more varied in nature than in the first phase.
- I found it difficult to give the project sufficient time in January-February 2009.
No doubt some staff will be encouraged by the fact that the great majority of students were positive about receiving audio feedback on their coursework. Students particularly appreciated the personal nature of individual audio feedback, as well as the detail they often received.
As for the central question tackled by Sounds Good – can digital audio be used to give students quicker, better feedback on their work? – the answer is a qualified ‘yes’. Audio feedback is most likely to be quicker and better when:
- The assessor is comfortable with the technology.
- The assessor writes or types slowly but records his/her speech quickly.
- A substantial amount of feedback is given.
- A quick and easy method of delivering the audio file to the student is available.
There is much yet to explore in the field of audio feedback. There is plenty of scope for larger trials, attempting to tease out the variables and studying the effectiveness of audio feedback (i.e. whether it enables students to learn more). However, a particularly pressing problem – one which might be solved quickly by a programmer – is to automate the process of sending feedback to students. Audio feedback is already an attractive proposition, yet if assessors could be confident that – regardless of cohort size – it would take them little or no time to let students have their feedback, even more assessors would probably find it worth adopting.
Sounds Good has broadly achieved what it set out to achieve. It has done some valuable exploration and produced useful practice guidelines. All in all, it has delivered an excellent return on JISC’s modest investment and, most of the time, it’s been fun.