Many years ago I read a cautionary article in a qualitative research journal by Steinar Kvale called "The 1,000-Page Question". In the article he was ruminating on what a person does with massive volumes of interview data. Picture the lonely PhD student chain-smoking in a draughty garret above the railway junction, with only the pallor of her complexion and a laptop screen for light. The desk is piled high with worthy and half-thumbed academic texts, the room is stuffy, and the threat and prospect of great thoughts can only follow months of tedium, pauses, ellipses, and endless impenetrable mumblings from respondents. Research can be a lonely business, and there are fine judgements to make about the volume of data you collect.
2,000 pages plus
TESTA has collected an amazing amount of data in the 20 months since its inception. Nicole has punched in some 1,500 Assessment Experience Questionnaire returns and churned out descriptive statistics for programmes far and wide, well beyond the original TESTA four. We've looked at them sideways, portrait, landscape, and through the lens of statistical tests with Greek letters I never knew existed. The TESTA team have audited some 20 programmes, and in other universities, researchers and teachers have begun to use the audit methodology to map programme-wide assessment. Yaz, Laura, Penny and Sabine have helped to write up the TESTA audits. In the four partner universities, we have conducted 46 focus groups with 256 students. Most of our research team have helped with these - Nicole, Laura, Penny, Sabine, Joelle, Yaz and me. Occasionally we have called on researchers beyond TESTA, like Vanessa Harbour and Fiona Handley, to help out. Another unsung hero, Helen Lorraine, has sat tapping her foot pedal into the small hours to transcribe them. With more coming in, I tentatively suggested she might be getting a little bit sick of assessment and may want to move on to something more exciting. She replied – in a flash – that she found them "stunningly interesting".
We're not finished collecting data yet - we have just embarked on the evaluation data, but I'm confident we'll make sense of it because we have so many team players, and the wisdom and experience of our consultant Graham Gibbs to challenge us to go that one step further in our thinking. The particularity of experience for each team has spoken volumes in each context, and there are common threads: the balance of summative to formative is skewed in the direction of measurement; variety needs a 'drive with care' sign if it is not sequenced carefully, and if students do not understand the new formats; feedback which feeds forward to the next assessment is likely to engage students more, and therefore linked multi-stage assessment is a useful tool. And so on.