ACM Computing Surveys
31(4), December 1999,
http://www.acm.org/surveys/Formatting.html. Copyright ©
1999 by the Association for Computing Machinery, Inc. See the permissions statement below.
Quantitative Evidence For Differences Between Learners Making Use Of Passive Hypermedia Learning Environments
Faculty of Medicine Computing Centre
Newcastle upon Tyne
United Kingdom, NE2 4HH.
Tel: (0191) 2225017
Fax: (0191) 2225016
1 INTRODUCTION

Several studies investigating the use of hypermedia and hypertext educational learning materials (courseware) have attempted to quantify the effectiveness of learners accessing information in a hypermedia learning environment, in relation to moderator variables such as learner characteristics or learning task. The philosophy of an unstructured hypermedia of units of information connected by many associative links presents a large, passive environment [Nelson 1988], [Nielsen 1990a] in which the context for accessing information is established by the learner (discovery learning), in relation to an overall purpose or task [Malone 1981], [Markle 1992]. Hutchings, et al. [Hutchings 1992] (p 172) noted that "Learners differ, not only in terms of abilities, strategies and styles, but in their goals and contexts" and that learners' "goals and actions are likely to be shaped by the changing display of information" before them. Hypermedia that adapts to the learner or their goal requires either pre-programmed knowledge about the learner, or extensive classification of the information and its suitability, in order to enable it to "intelligently" respond to the actions of a learner [Hekmatpour 1995]. This empowers the tutor by increasing the likelihood that the learner will visit valuable learning materials, but reduces the individualisation of the learning experience for each student by partially overriding the personal goals and the context generated by their prior knowledge and experience.
Evaluating the effectiveness of learning via passive hypermedia may be more difficult than for more traditional educational courseware [Kulik 1998], [Clark 1985], [Kulik 1991], particularly as it has been shown that in passive hypermedia students do not cover the same subject information [Quentin-Baxter 1997]. The quantity and content of the information accessed from courseware is a component of the evidence for learning when considering learning outcomes. Although it is often not possible to determine the level of student engagement with the accessed material, information which is not accessed has no opportunity to be learned.
Continuous tracking using audit trails or video protocols has been widely applied to gathering behavioural data from hypermedia learners, but there are relatively few examples in the literature of attempts to interpret the data beyond "looking" at it (e.g. [Perlman 1989], [Mayes 1988]). Logging or tracking learner interactions and then analysing the audit trails by calculating the amount of time spent (e.g. [Loo 1990]), estimating the relative proportion of time spent on particular activities (common to behavioural studies [Atkins 1989]), or counting the frequency of accesses [Hammond 1989], [Edwards 1989] are examples of measuring what information students have accessed and how they have studied, and provide a basis for identifying "outliers" to the main group. The analysis of standardised audit trails using graphical and statistical techniques provides a fine-grained picture of student use of a hypermedia learning resource, and can be used to inform the causes of cognitive outcome as a result of using the courseware. Most importantly, it may empower the learner to personalise the learning experience and access the relevant materials by providing a basis for formative, reflective learning processes. This paper summarises some of the literature relating to quantitative evaluation of hypermedia courseware, and discusses how the application and statistical analysis of audit trails, knowledge change tests, learner characteristic tests and questionnaires can help educators to understand and adjust for the differences between learners using hypermedia.
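As an illustration of the kind of quantification described above, the sketch below summarises a toy audit trail by counting accesses per unit of information and estimating the time spent on each. The trail data, unit names and timing rule are invented for this example and are not drawn from any of the cited studies.

```python
from collections import Counter

# A toy audit trail for one learner: (timestamp_seconds, unit_id) events.
# The data and unit names are invented for illustration, not taken from
# any of the cited studies.
trail = [
    (0, "intro"), (40, "cells"), (95, "cells"),
    (130, "tissues"), (200, "intro"), (260, "organs"),
]

def summarise(trail):
    """Return access frequency per unit, and time spent on each unit
    (time = gap until the next event; the final event gets no duration)."""
    freq = Counter(unit for _, unit in trail)
    time_spent = Counter()
    for (t0, unit), (t1, _) in zip(trail, trail[1:]):
        time_spent[unit] += t1 - t0
    return freq, dict(time_spent)

freq, time_spent = summarise(trail)
print(freq["cells"], time_spent["intro"])  # 2 100
```

Frequencies identify which units were revisited, while the duration column supports the "proportion of time spent" style of analysis; learners whose totals fall far from the group's are candidate "outliers".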
2 QUANTITATIVE EVALUATION OF HYPERMEDIA

There is some indication that learners fail to appreciate the extent of information when it is presented in a hypermedia format, and that learner characteristics may be correlated with the amount of information accessed. Hammond and Allinson [Hammond 1989] used performance logs to count the number of times different types of navigation tool were chosen when hypertext was used either alone or combined with one or more other navigation techniques (such as a map, index or guided tour) in two different access conditions: exploratory (knowledge tested after hypertext use) and directed (solving questions during hypertext use). They also compared the number of different screens accessed out of a total of 39 for the four navigation options, the total number of screens seen (including repetitions), and the ratio of new to old screens seen (in each of the two conditions). They found that students in the hypertext-only navigation condition demonstrated significantly less exposure to new information overall than students using hypertext with other navigation techniques (F[4,70]=3.87, p<0.05 for total screens and F[4,70]=7.03, p<0.001 for different screens). These students' estimates of what they had achieved also diverged significantly more than the other groups' (F[4,35]=3.02, p<0.05), in the direction of overestimating. However, these students equalled the other groups' learning outcomes in the post-test. The authors quote Baddeley's [Baddeley 1976] total time hypothesis, that the amount of learning is directly proportional to the total time spent in learning, regardless of how the time is distributed, as a possible explanation of this observation.
Quentin-Baxter [Quentin-Baxter 1997], [Quentin-Baxter 1998] also observed that students in her study (n=35) had a poor appreciation of the amount of information that they had covered in a hypermedia learning condition when students were instructed to browse the information available. Audit trails were quantified by categorising and counting the units of information accessed out of a total of 780 units [Quentin-Baxter 1992]. No correlation was found between student estimates of their own achievement when accessing information versus their actual achievement measured from the audit trails (r=0.07 p=0.64), despite the availability of comprehensive maps. A Bland-Altman investigation of this indicated that learners who accessed very little of the information were significantly less accurate when estimating their own achievement than those who accessed more (r=0.78 p<0.001). Quentin-Baxter suggested that learners who over-estimated might prematurely experience a sense of "closure" or completion. Learners were generally inefficient and each accessed very little of the information (<32%), although, altogether, they accessed over 80% of the material available. Learners began by browsing but switched, over time, to interacting with randomly-generated questions which was more task orientated. There was some evidence that students revisited information and used the same strategy when they returned to the program a second time, despite plenty of unaccessed information remaining.
Verheij, et al. [Verheij 1996] considered the use of hypermedia and the composition of audit trails as a possible method for estimating an individual's preferred learning style more objectively than applying one or more questionnaire-based inventories. To investigate this theory (and others) they classified 33 students (out of 142) as either "deep" processors or "surface" processors (according to a questionnaire-based Inventory of Learning Styles developed by Vermunt and Van Rijswijk [Vermunt 1987]), and logged each learner's interactions with a hypertext environment while they completed two tasks: a search task followed by exam preparation. Each audit trail was categorised into either "Map", "Text Relations", "Linear" or "Mixed", according to the type of navigation technique which students preferred. They observed evidence of a difference between the two groups of learners for the search task (χ²(3)=9.13, p<0.05), but not for the exam preparation. Surface processors were consistent in the type of strategy they used for both tasks, but deep processors varied their approach depending on the task, indicating that they were more "versatile" [Pask 1976]. The authors did not state the average quantity of information accessed in the different conditions, but concluded that surface processors might "profit from supporting instruction as to how to define and realize study goals" (p 14).
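The comparison of navigation-category counts between the two groups of processors is a standard Pearson chi-square test of a contingency table, which the sketch below computes by hand. The 2x4 table of counts is invented for illustration; the cited study's own result for the search task was χ²(3)=9.13.

```python
# Hypothetical 2x4 contingency table: rows are "deep" vs "surface"
# processors, columns the preferred navigation category for the search
# task. All counts are invented for illustration.
observed = {
    "deep":    {"Map": 8, "Text Relations": 4, "Linear": 2, "Mixed": 3},
    "surface": {"Map": 2, "Text Relations": 3, "Linear": 9, "Mixed": 2},
}

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a
    two-way table given as nested dicts."""
    rows = list(table)
    cols = list(table[rows[0]])
    row_tot = {r: sum(table[r].values()) for r in rows}
    col_tot = {c: sum(table[r][c] for r in rows) for c in cols}
    grand = sum(row_tot.values())
    stat = sum(
        (table[r][c] - row_tot[r] * col_tot[c] / grand) ** 2
        / (row_tot[r] * col_tot[c] / grand)
        for r in rows for c in cols
    )
    return stat, (len(rows) - 1) * (len(cols) - 1)

stat, dof = chi_square(observed)
print(f"chi2({dof}) = {stat:.2f}")
```

With two rows and four navigation categories the test has (2-1)x(4-1) = 3 degrees of freedom, matching the χ²(3) reported in the study.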
Yildiz and Atkins [Yildiz 1993] and Yildiz [Yildiz 1994] reported an investigation of the within-medium independent variables "gender" and "prior academic achievement in science" (low, medium and high) when learning from a selection of interactive video teaching programs, using a pre- and post-test multivariate experimental design. In this, prior academic achievement was the only independent variable observed to contribute to explaining the variance in post-test over pre-test scores. Low-ability students did not demonstrate a knowledge improvement as a result of using a hypermedia simulation (n=14, p=0.60), while high-ability students improved significantly between the two tests (n=28, p=0.001). The authors suggested that low-ability students were less able to combine new information from the simulation with previously acquired knowledge, and that:
"The differential effectiveness of the learning experience in terms of pupils' prior achievement was particularly interesting in view of claims of multimedia technologists to make learning more concrete, relevant and understandable - claims which would, if true, assist the less able as much as, or more than, the abler student." [Yildiz 1993] (p 139).
3 DISCUSSION

The findings of these studies suggest that some learners may be failing to thrive in passive hypermedia learning environments, and indicate that independent learner characteristics may influence their ability to access and combine information [Entwistle 1981]. It was not possible to relate the studies in order to determine whether, for example, students who overestimated their progress came from a particular category of learning style or prior academic achievement, because no independent variables were common to all of the studies, and the audit trails were collected and analysed in different ways. Each study was highly specific and, even though their sample sizes were large compared with other studies in this area, it is difficult to generalise their findings for similar learning topics, let alone between subject disciplines.
One tentative conclusion is that some students may be systematically disadvantaged by an increasing dependence on passive hypermedia networked learning environments delivered over the world wide web (WWW), unless developments in adaptive hypermedia are able to cater for these students by varying the controlling effect of the environment [Tergan 1997]. The classification of the information in adaptive hypermedia may require significantly more effort on the part of tutors, who may have to rank the value of every piece of information. Fortunately, the WWW also provides educationalists with an alternative, and the first real opportunity for large-scale evaluation of learners' use of hypermedia courseware [Chinien 1994]. It is possible to standardise the collection of audit trails using the WWW through server usage logs, indexing and logging databases, and this data can be dynamically analysed and presented to the learner in the form of charts and simple statistics. Combining the audit trails with information about independent learner characteristics, academic achievement and self-perceived progress has the potential to provide learners with dynamic, personalised "learning profiles" on which to base reflective learning practices. This may act as a hybrid between learner and tutor control by empowering students to identify their progress for themselves and make decisions about future learning needs. Further research is required to inform the development of "learning profiles", especially in order to increase their accuracy across academic disciplines and for different learning conditions, and possibly to identify their role in the development of lifelong learning skills. This may enable learners to adjust their interactive strategies in order to increase their effectiveness when using hypermedia.
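The server-log approach described above can be sketched in a few lines: parse each access-log entry, accumulate the set of pages each learner has seen, and report coverage as a simple "learning profile". Everything here (the log lines, learner identifiers and page total) is invented for illustration; real deployments would key requests to a login or session identifier.

```python
import re

# Toy web-server access-log lines (Common Log Format, abbreviated). In
# practice these would come from the server's usage log, with requests
# keyed to a login or session identifier; everything here is invented.
log = """\
alice - - [10/Oct/1999:13:55:36] "GET /course/intro.html HTTP/1.0" 200 2326
alice - - [10/Oct/1999:13:57:01] "GET /course/cells.html HTTP/1.0" 200 4501
bob - - [10/Oct/1999:14:02:12] "GET /course/intro.html HTTP/1.0" 200 2326
"""

COURSE_PAGES = 10  # total units of information in this (invented) course

entry = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+)')

pages_seen = {}
for line in log.splitlines():
    m = entry.match(line)
    if m:
        learner, page = m.groups()
        pages_seen.setdefault(learner, set()).add(page)

# A minimal "learning profile": percentage of course units each learner saw.
coverage = {who: 100 * len(p) / COURSE_PAGES for who, p in pages_seen.items()}
print(coverage)  # {'alice': 20.0, 'bob': 10.0}
```

Fed back to the learner as a chart alongside their self-estimated progress, a coverage figure like this is the basis of the reflective "learning profile" proposed above.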
ACKNOWLEDGMENT

The author would like to thank Dr G. R. Hammond (Faculty of Medicine Computing Centre, University of Newcastle) for comments on this draft, and the four reviewers for their constructive comments.
References

[Atkins 1989] Madeleine J. Atkins and Gill M. Blissett. "Learning Activities and Interactive Videodisc: an Exploratory Study" in British Journal of Educational Technology, 20(1), 47-56, 1989.
[Baddeley 1976] Alan D. Baddeley. The Psychology of Memory, New York, NY, Basic Books, 1976.
[Chinien 1994] Christian A. Chinien and France Boutin. "A Framework for Evaluating the Effectiveness of Instructional Materials" in Performance and Instruction, 33(3), 15-17, 1994.
[Clark 1985] Richard Clark. "Evidence for Confounding in Computer-Based Instructional Studies: Analyzing the Meta-Analyses" in Educational Communication and Technology Journal, 33(4), 249-262, 1985.
[Edwards 1989] D. M. Edwards and Lynda Hardman. "Lost in Hyperspace: Cognitive Mapping and Navigation in a Hypertext Environment" in Hypertext: Theory into Practice, Ray McAleese (editor), Oxford: Intellect, 105-125, 1989.
[Entwistle 1981] Noel Entwistle. Styles of Learning and Teaching, Chichester: John Wiley & Sons, 1981.
[Hammond 1989] Nick Hammond and Lesley Allinson. "Extending Hypertext for Learning: An Investigation of Access and Guidance Tools" in Proceedings of the British Computer Society HCI '89, 293-304, 1989.
[Hekmatpour 1995] Amir Hekmatpour. "An Adaptive Presentational Model for Hypermedia Information" in Journal of Educational Multimedia and Hypermedia, 4(2-3), 211-238, 1995.
[Hutchings 1992] Gerard Hutchings, Wendy Hall, Jonathan Briggs, Nick Hammond, Marj R. Kibby, Cliff McKnight, D. Riley. "Authoring and Evaluation of Hypermedia for Education" in Computers and Education, 18(1-3), 171-177, 1992.
[Kulik 1991] Chen-Lin C. Kulik and James A. Kulik. "Effectiveness of Computer-based Instruction: An Updated Analysis" in Computers in Human Behaviour, 7, 75-94, 1991.
[Kulik 1998] James A. Kulik, Chen-Lin C. Kulik, and Peter A. Cohen. "Effectiveness of Computer-based College Teaching: A Meta-analysis of Findings" in Review of Educational Research, 50(4), 525-544, 1998.
[Loo 1990] J. Loo and T. Chung. "An Environment for Evaluating Browsing in Hypermedia Systems", Technical Report TR90-51-0, Institute of System Science, National University of Singapore, 18, 1990.
[Malone 1981] Thomas W. Malone. "Towards a theory of intrinsically motivating instruction" in Cognitive Science, 4, 333-369, 1981.
[Markle 1992] S. M. Markle. "Unchaining the slaves: discovery learning is not being told" in British Journal of Educational Technology, 23(2), 222-227, 1992.
[Mayes 1988] Terry Mayes, Mike Kibby, H. Watson. "The development and evaluation of a learning by browsing system on the Macintosh" in Computer Education, 12(1), 221-229, 1988.
[Nelson 1988] Theodor Holm Nelson. "Managing immense storage" in Byte, 13(1), 225-238, January 1988.
[Nielsen 1990a] Jakob Nielsen. "The Art of Navigating through Hypertext" in Communications of the ACM (CACM), 33(3), 297-310, March 1990.
[Pask 1976] Gordon Pask. "Styles and strategies of learning" in British Journal of Educational Psychology, 46, 128-148, 1976.
[Perlman 1989] Gary Perlman. "Asynchronous design/evaluation methods for hypertext technology development" in Proceedings of ACM Hypertext '89, Pittsburgh, PA, 61-81, November 1989.
[Quentin-Baxter 1992] Megan Quentin-Baxter and David G. Dewhurst. "A Method for Evaluating the Efficiency of Presenting Information in a Hypermedia Environment" in Computers & Education, 18, 178-182, 1992.
[Quentin-Baxter 1997] Megan Quentin-Baxter. "Developing and Evaluating a Hypermedia Computer-based Learning Program in Biology", Unpublished Ph.D. Thesis, Leeds Metropolitan University, 168p., 1997.
[Quentin-Baxter 1998] Megan Quentin-Baxter. "Hypermedia Learning Environments Limit Access to Information" in Computer Networks and ISDN Systems, 30, 587-590, 1998.
[Tergan 1997] Sigmar-Olaf Tergan. "Misleading theoretical assumptions in Hypertext/Hypermedia research" in Journal of Educational Multimedia and Hypermedia, 6(3-4), 257-283, 1997.
[Verheij 1996] J. Verheij, E. Stoutjesdijk, and J. Beishuizen. "Search and study strategies in hypertext" in Computers in Human Behaviour, 12(1), 1-15, 1996.
[Vermunt 1987] J. D. H. M. Vermunt and F. A. W. M. Van Rijswijk. Inventaris leerstijlen voor het hoger onderwijs [Inventory of learning styles for higher education], Tilburg, Netherlands: Katholieke Universiteit Brabant, 1987.
[Yildiz 1993] Rauf Yildiz and Madeleine J. Atkins. "Evaluating multimedia applications" in Computers and Education, 21(1-2), 133-139, 1993.
[Yildiz 1994] Rauf Yildiz. The Cognitive and Attitudinal Impact of Multimedia Simulations on Secondary Pupils, Unpublished Ph.D. Thesis University of Newcastle upon Tyne, 1994.
NOTES

This document is a shortened version of a position paper Quentin-Baxter, M. "Evaluating learners in a biological hypermedia learning environment: the use of audit trails and questionnaires for estimating effectiveness and efficiency", prepared for the HTF IV workshop held during the WWW7 Conference, Brisbane, Australia, 1997.
Biography

Megan Quentin-Baxter is the Assistant Director of the Faculty of Medicine Computing Centre at the University of Newcastle. She holds a BSc in Zoology and a PhD in Educational Technology. Her research interests are in the use of technology in learning and teaching, in particular the use of networked learning environments to provide a framework for student support and distance learning.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or firstname.lastname@example.org.