
0096 Changing the way we see test-items in a computer-based environment: Screen design and question difficulty


13:00 - 14:00 on Wednesday, 8 September 2010 in Pos


Matt Haigh
This poster illustrates research investigating the effect of screen design features on the difficulty of a computer-based test item. The research emerges from an increased drive in the UK to use computer-based testing in the high-stakes examination system (Ofqual, 2005). In addition, Higher Education and Further Education practitioners are making greater use of computer-based assessment in their educational programmes (Conole & Warburton, 2005). Research emphasis has focused on demonstrating the equivalence of computer-based and paper-based forms of an assessment (e.g. Clariana & Wallace, 2002). However, there is little literature on the precise features of computer-based tests that may affect how a student performs. Of particular importance is the potential impact of the screen environment on test-item difficulty. The aim of this study is to investigate that relationship.

With the majority of research in computer-based testing emerging from Higher Education contexts, this study takes the opportunity to provide additional evidence from secondary education: six English secondary schools were recruited, and the research was conducted with students undertaking a GCSE Science course. The findings largely relate to assessment design, so they are relevant to those using computer-based assessment in all educational contexts.

In line with dominant methodologies in educational measurement, the study took an experimental design approach. Two parallel forms of a computer-based science assessment were developed; for specific test-items, the screen layout or student interaction was modified in the second of the parallel forms. The test forms were randomly assigned to the sample of students and measures of test-item difficulty were established. Analysis identified where significant differences in test-item difficulty occurred between the parallel forms.

The poster provides a clear visual display of the parallel forms of each test item. This graphical component is supplemented with the analysis of item difficulties for each version and a commentary on the possible causes of significant differences. The poster also offers suggestions for future best practice in test-item writing for computer-based assessments in educational contexts.
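The abstract does not state which difficulty measure or significance test was used, so the following is only a minimal sketch of the kind of comparison described: it assumes item difficulty is taken as the facility value (proportion of correct responses) and that the two randomly assigned forms are compared with a two-proportion z-test. The form names and response data are hypothetical.

```python
# Sketch only: facility values and a two-proportion z-test are assumptions,
# not the study's documented method.
from math import sqrt, erf

def facility(responses):
    """Proportion of correct responses (1 = correct, 0 = incorrect)."""
    return sum(responses) / len(responses)

def two_proportion_z_test(correct_a, n_a, correct_b, n_b):
    """Return (z, two-sided p) for the difference between two proportions."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    p_pool = (correct_a + correct_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical responses to one item: Form A uses the original screen layout,
# Form B the modified layout/interaction.
form_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
form_b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1]

z, p = two_proportion_z_test(sum(form_a), len(form_a), sum(form_b), len(form_b))
print(f"Facility A = {facility(form_a):.2f}, Facility B = {facility(form_b):.2f}")
print(f"z = {z:.2f}, p = {p:.3f}")
```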