Chapter 12: Justification: Evaluations and Artifacts

Scenario

Partial transcript of the hearing on Educational Technology before the Board of Regents:

Mr. John Murphy (Member of the Board of Regents and CEO of a large manufacturing company): Look, if we are to convince the legislators to fund a multi-million-dollar project to outfit electronic classrooms and to wire campuses to create what you call the "electronic educational environment," we need numbers. How much are we going to save on education over the next 10 years? What's the cost-benefit ratio? How much faster and how much more do students learn in electronic classrooms? How much can we save on personnel costs?

Dr. Allen Kolhberg (Professor of Education): It's not that simple. The research in educational technology is mixed. You can find numbers all over the place. Some studies show as much as a 30% reduction in classroom time. But this may be due merely to a reduction in the time to access materials. Others show that learning time may increase because students have to learn the new system as well as the material itself. To complicate matters even more, there are some studies that show that interactive learning may take longer because the students enjoy spending the time studying more and they learn the material more deeply.

Dr. John Kribs (Dean of the College of Arts and Humanities): I am a bit, to put it mildly, concerned about the implication that a part of the cost savings of electronic classrooms might be a reduction in faculty lines. If electronic educational environments translate into fewer teachers, I am opposed to the concept. Students need contact with faculty, not machines.

Mr. Larry Johnson (Member of the Board of Regents and CEO of a major telecommunications firm): Now wait a minute. I think that we are approaching this in the wrong way. The justification for electronic classrooms is not that we can do the same old thing faster, cheaper, and with fewer teachers. The justification is that there is a new world out there. We need to produce graduates who are relevant. They need to learn in an environment that is electronic because that's where they will work when they graduate. It's the quality of the graduates that we produce, not the efficiency with which we produce them.


The electrification of the world cost billions. We cannot imagine a world without electricity. The computerization of business and industry cost billions. We can hardly imagine what it was like before management information systems and automation. But in education we are still at the point of change. Some can see neither the vision nor the benefit. Others see change as inevitable and necessary for education to keep pace with the world. As change occurs, it is important for us to document it and to evaluate it so that we can learn from it and help to guide it.

In this chapter we will consider the issues of evaluation from two standpoints. The first deals with the justification for spending unprecedented amounts on instructional technology over the next several decades. The second deals with doing it right. Once we have committed ourselves to this course, the question is "How shall we ensure that the investment pays off?" To this end we will look at four factors: learning performance, learning efficiency, subjective assessments, and artifacts. The discussion will be anything but conclusive. Instead, the point of this chapter is to show how complex and intractable the task of evaluation is for education in the electronic media. Nevertheless, we will find that we can glean a number of insights that will help to define successive generations of the switched-on classroom.

Justification for Change

Over the past few years the advocates of instructional technology have been scurrying to collect evidence that the use of multimedia, interactive lectures, and electronic classrooms is in some way good for education. Exaggerated claims about the efficacy of instructional technology abound: "With a multimedia lecture presentation the students can learn 30% more in half the time!" The problem is that even if one can track down the actual study, whether in industry or academia, the results of any one study with all of its particular conditions hardly warrant such sweeping conclusions. Many fear that such statements will be taken at face value to justify a wholesale dumping of bad technology, mediocre multimedia, and poor pedagogy into the system. It is as meaningless to say that multimedia is good for education as it is to say that food is good for nutrition. It all depends on what, how much, and when. The same is true for instructional technology in the switched-on classroom.

Evaluation of instructional technology is complex for two reasons. First, one has to specify what is being evaluated and how. Second, one has to define the experimental conditions being compared. The first question is complex because there is no unitary outcome of education. Although standardized tests are helpful, they themselves are the result of aggregating many factors and specific outcomes together. Furthermore, progressive voices in education would find standardized testing restrictive and would wish to add process outcomes as described in Chapter 10. In later sections we will consider measures of performance as well as subjective assessments. The second question has to do with the methodology used to provide empirical evidence that instructional technology is beneficial. One cannot merely apply instructional technology and assert with confidence that it worked. Such conclusions are subject to bias and are without scientific merit. First, one needs a point of comparison. Generally this is accomplished with a control group. Second, one needs an objective, unbiased assessment. Finally, one needs to use statistical procedures to ensure that the improvement is replicable and not a one-time event.
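To make the logic of such a comparison concrete, the following is a minimal sketch (in Python, using the SciPy library) of how scores from an electronic-classroom section and a traditional control section might be compared with an independent-samples t-test. The scores and group sizes are invented purely for illustration; they are not data from any study described in this book.

```python
# Minimal sketch: compare hypothetical exam scores from an electronic-classroom
# section against a traditional (control) section with an independent-samples,
# two-tailed t-test. All numbers are invented for illustration only.
from scipy import stats

electronic = [82, 75, 91, 68, 88, 79, 85, 73, 90, 77]   # hypothetical treatment group
traditional = [78, 71, 84, 65, 80, 76, 81, 70, 83, 74]  # hypothetical control group

t_stat, p_value = stats.ttest_ind(electronic, traditional)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Even a reliable difference in such a test says nothing about which of the many differences between the two classrooms produced it; that interpretive problem is taken up later in this chapter.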

Beyond the possible benefit to the learning process of the individual student looms the institutional justification to save money, teach more with less, and become even more institutionalized. With the increasing costs of education, many administrators are looking to the computerization of education to help cut costs. Unfortunately, this may translate into either cutting teachers or increasing class size, neither of which is in the best interest of the student or the teacher.

Fortunately, on the other side of the institutional equation is the desire to stay competitive and to increase the quality of education. For educational institutions to stay competitive they must not only deliver top-rate programs and faculty, they must also provide the electronic educational infrastructure to deliver this education in a state-of-the-art form. As the world changes outside the campus walls, students come to expect, if not take for granted, the extensive use of computers. Because of this, more and more schools, from K-12 to college, are beginning to advertise their use of instructional and information technology in education.

Evaluation of Learning Performance

Of course the best evidence for instructional technology in the electronic classroom would be to show systematic increases in learning outcomes: higher test scores, reduced learning time, longer retention of knowledge, etc. Indeed the literature is filled with studies that do show benefits of Computer-Based Instruction (CBI) and Computer-Aided Instruction (CAI). These studies have generally focused on fairly small modules of instruction (e.g., names of countries and their capitals, parts of the nervous system, and arithmetic facts). They have also emphasized individualized instruction (see Chapter 9) over interactive classroom and group learning. These approaches have capitalized on the benefits of instruction "on demand" and on the ability of the student to determine his or her own personalized schedule of instruction. The results are very encouraging but are limited to individualized instruction and do not necessarily generalize to the wider issue of instructional technology and the electronic classroom.

When the evaluation is at a more global or classroom level, a number of problems make it difficult to draw firm conclusions even when one is using objective learning performance as a measure. It is worthwhile noting some of these problems. All but the last problem may serve to create a spurious difference between the electronic and the traditional classroom.

Hawthorne Effect in the Classroom. A famous study on worker productivity was conducted years ago at the Hawthorne Works, an industrial plant in Cicero, Illinois (Roethlisberger & Dickson, 1939). The researchers found that every time they shifted to a new set of experimental conditions productivity increased and then leveled off. Productivity even increased when they shifted back to the old conditions. This effect has been attributed to the subjects' awareness that they were a part of an experiment. It is easy to see how such an effect can take place in the electronic classroom. Students know that they are participating in a novel situation. This knowledge may serve to increase their interest, attention, and motivation in the classroom.

Researcher Expectancy Effect. Rosenthal (1966) found that a researcher's expectancy of what should happen can somehow change his or her behavior toward the subjects. In turn the subjects may respond to these subtle cues and change their behavior. This creates a self-fulfilling prophecy such that the researcher's actions cause the subjects to behave just as they were expected to behave. In the classroom this can be particularly powerful. The instructor expects the students in the electronic classroom to perform better than in the traditional classroom. This expectation can be conveyed to the students either directly or indirectly in what is said in class. The students may adopt more positive attitudes about instructional technology and may even work harder to meet performance expectations.

Personal Relationship Effect. Jourard (1971) demonstrated that the time spent in mutual self-disclosure by the subject and the experimenter could affect the rate of learning. As students and instructors get to know each other on a personal level, it can positively affect the learning performance of the students. If the electronic classroom fosters these interpersonal relationships through tools such as the class roll and chat sessions, it may lead to an increase in student performance. This is obviously not a bad sort of confounding and may be effectively used to increase performance, but it must be remembered that it is not directly due to instructional technology but due to interpersonal relationships.

Self-Selection Biases. For a comparison to be unbiased, groups must be constructed so that one would not expect any differences prior to any treatment being applied. Generally, this is assured by random selection of subjects and assignment to groups. However, it is not easy to achieve this in educational settings. Students register for classes that they want rather than being randomly assigned. They are more likely to register for a class in an electronic classroom if they like computers than if they don't. Furthermore, instructors who are enthusiastic about instructional technology are more likely to decide to teach in electronic classrooms than those who are not. These self-selection biases can lead to differences that do not generalize to the rest of the population.

Self-Adjusting Factors. Students have an uncanny ability and inherent motivation to adjust their effort to receive the desired grade. If the class is hard, they work harder to achieve the grade that they want. If the class is easy, they may do only as much as is necessary to achieve the desired grade. The implication for research is that outcome performance is often confounded by the self-adjustment of effort on the part of the students. This is true both for standardized tests and for instructor-assigned grades. Instructional technology may make learning easier, but that does not necessarily mean that grades will improve. Alternatively, instructional technology may actually confuse and confound students to such a degree that they overcompensate by increased study or use of traditional learning methods. This seems to be the explanation for a 20% increase in scores for students in a virtual learning environment over students in a traditional classroom reported by Jane Black (1997) in Virtual Teaching in Higher Education: The New Intellectual Superhighway or Just Another Traffic Jam?

By way of illustration of these problems, I will relate what might have seemed to be a nice experiment comparing the use of HyperCourseware in the AT&T Teaching Theater and the use of overheads in the traditional classroom. In the Spring Semester of 1994, I was scheduled to teach a behavioral statistics course in the electronic classroom in the morning and in the traditional classroom in the afternoon. The materials were virtually identical and the exams highly similar. In the electronic classroom, however, the materials were hosted in HyperCourseware so that the students could navigate through them on their own and interact with simulations and data analysis. In addition, they were shown short video clips that illustrated statistics in use. As the instructor I tried not to bias my presentation to favor either class. But who am I to judge whether I was successful? The students in the electronic classroom knew that they were in a very special environment and may have been more attentive and motivated. On the other hand, they also experienced the problems of learning how to use the system and suffered frustrating computer crashes. In both classes, there was the desire to get a good grade since the course was required for all psychology majors. In the final analysis there was no difference in the test scores or final grades between the two classes. Whatever factors were at work, positive, negative, confounding, or experimental, they canceled out to result in no difference. Had the difference been positive, it would have been tempting to argue for the benefits of the electronic classroom. Nevertheless, one does not know in these situations because it was not a controlled experiment and any differences could have been due to any number of confounding factors.

It is not impossible to conduct unbiased studies; it is just very difficult. It requires control over all of the possible biases that can easily crop up. But there is another problem: even if one does find a significant difference, what does it mean and how is it to be interpreted? Electronic classrooms and traditional classrooms differ on many dimensions, some theoretically relevant and others irrelevant. They may differ on relevant factors that have to do with interactivity, engagement, enhanced multimedia materials, interfaces, etc.; and they may differ on irrelevant factors such as the style of the room, types of chairs, visibility, distractions, etc. Even if a difference is found, we probably know little about its cause and will not be able to generalize the result to some other situation. The answer is to move from overall comparisons of classrooms and educational systems to focused studies on specific factors that are involved in electronic educational environments. These studies may focus on the addition of simulations, user control of interaction, addition of graphics, etc.

Evaluation of Learning Efficiency

The preceding section is about the direct effects of electronic classrooms on the learning process. In theory, such effects occur within the psychological and cognitive processes of the students. It may be increased motivation, more salient materials, reinforcement of associations, or appropriateness of simulations, metaphors, and other cognitive representations. Educational gains are in terms of the learnability of materials. As noted, these are often very hard to substantiate. However, there are also many gains in the learning process that pertain to efficiency in housekeeping, time management, and automation that facilitate learning by reducing the time that it takes to retrieve information, obtain feedback, organize and schedule events, and process information. These benefits go back to the computer abilities listed in Chapter 3 and have little to do with psychological effects. These gains may be very substantial and are relatively easy to demonstrate.

A good example of this sort of benefit is provided by a study of training on technical manuals for the U.S. Navy. In the recent past, technical manuals used for repair of equipment occupied many bookshelves in the back of instructional classrooms. During training classes students had to learn to locate the relevant section of the manual, to retrieve it from the bookshelf, and to understand what it said and how to apply it. Considerable time was spent searching for the manuals and looking up the codes. When the manuals were converted to CD-ROM and made available on the students' workstations, researchers found a 30% reduction in the time required for training. The gains were brought about not by increasing the learnability of the materials, but by eliminating wasted time physically retrieving the manuals and paging to the appropriate section. These sorts of findings suggest that the classroom management tools discussed in Chapter 11 are more important than previously recognized.

Subjective Assessments

Gains due to the learnability of materials in the electronic classroom are of great interest but tend to be elusive, and gains due to automation are substantial but less than interesting; the greatest enthusiasm and statistical significance for the electronic classroom comes from subjective impressions. While some researchers consider subjective measures of less value than objective performance measures, most educators today realize the importance of student satisfaction with the learning environment. More and more, students are seen as customers, and schools are in the business of marketing education. In student-centered and student-directed learning, the subjective perception of the student takes on central importance. Moreover, the attitudes and affect of the students serve to motivate their own learning behavior and consequently feed back into the learnability of materials. Consequently, subjective assessments are extremely important.

Student assessments of the educational process may be either numeric rating scales or responses to open-ended questions. We will consider both. The data from rating scales are useful for making statistical comparisons between conditions. The qualitative, verbal data are useful for identifying specific strengths and weaknesses of the environment. In general, assessments can be made of a number of different factors to isolate the good and sometimes bad effects of the electronic classroom on learning.

Assessments of the Implementation. We may not be able to determine in general if multimedia is beneficial, but we can assess the quality of the implementation, that is, how well it is being applied. This was of great concern in the Teaching Theaters at the University of Maryland since we were particularly interested in the human/computer interface being designed. We should at least find that students agree that the system was easy to use and that they liked it.

When the first classes were taught in the AT&T Teaching Theater, the Steering Committee was interested in whether the physical layout of the room and the computers was satisfactory. Students in a number of classes were surveyed and asked to rate these aspects on the 9-point bipolar scales shown in Table 12.1.

Table 12.1 Mean ratings of items assessing the physical layout and features of the AT&T Teaching Theater.

Item and Anchors    Mean    Significance
The arrangement of the computer and the desk
(1 = extremely hard to use; 9 = very workable)
7.1 p<.001
The chairs
(1 = very uncomfortable; 9 = comfortable)
7.8 p<.001
Glare on the computer screens
(1 = very distracting; 9 = not noticeable)
7.7 p<.001
Keyboard noise
(1 = very distracting; 9 = not noticeable)
7.3 p<.001
Working in pairs (if you did so)
(1 = very difficult; 9 = very easy)
6.4 p<.001
The room's lighting
(1 = terrible; 9 = excellent)
7.3 p<.001
The clarity (sharpness) of the large projection screens in the front of the room
(1 = terrible; 9 = excellent)
5.4 n.s.
Acoustics (sound) of the room
(1 = terrible; 9 = excellent)
7.7 p<.001

Footnote. Significance is based on two-tailed t-tests against the scale midpoint of 5.0; n.s. = not significant.

These data indicated that all aspects of the room were excellent except for the clarity of the projection screens in the front of the classroom. This appears to be a general problem with the projection technology in electronic classrooms and one of the reasons why it is necessary to display class materials on the workstation screens directly in front of the students.
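The footnotes to the tables in this chapter describe the significance tests as two-tailed t-tests against the scale midpoint of 5.0. As a minimal sketch of that computation (in Python with SciPy, using invented ratings rather than the actual questionnaire data):

```python
# Sketch of the test reported in the table footnotes: the mean of a set of
# 9-point ratings is tested against the scale midpoint of 5.0 with a
# one-sample, two-tailed t-test. The ratings below are invented examples.
from scipy import stats

ratings = [8, 7, 6, 9, 7, 8, 6, 7, 8, 7]  # hypothetical 9-point ratings for one item
MIDPOINT = 5.0

t_stat, p_value = stats.ttest_1samp(ratings, MIDPOINT)
print(f"mean = {sum(ratings) / len(ratings):.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```

A mean reliably above 5.0 is read as a positive assessment of that item; a mean reliably below 5.0, or one that does not differ from 5.0, flags a potential problem.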

To assess the software interface, the Questionnaire for User Interaction Satisfaction (QUIS) developed by the Human/Computer Interaction Laboratory at the University of Maryland was used. An early study evaluated the user interface of HyperCourseware in the AT&T Teaching Theater (Lindwarm & Norman, 1993). The short form of the QUIS uses six overall scales and then 21 scales for specific factors. The QUIS is one of the few instruments that has demonstrated reliability and validity and is routinely administered in usability tests of software interfaces. These scales are listed in Table 12.2.

Table 12.2 Mean ratings of items on the Questionnaire for User Interaction Satisfaction for HyperCourseware in the AT&T Teaching Theater.

Item and Anchors    Mean    Significance
Overall reactions to the system
(1 = terrible; 9 = wonderful)
6.1 p < .01
(1 = frustrating; 9 = satisfying) 6.1 p < .01
(1 = dull; 9 = stimulating) 5.8 p < .05
(1 = difficult; 9 = easy) 6.6 p < .001
(1 = inadequate power; 9 = adequate power) 6.0 p < .01
(1 = rigid; 9 = flexible) 5.8 p < .05
Characters on the screen
(1 = hard to read; 9 = easy to read)
7.9 p < .001
Was the highlighting on the screen helpful?
(1 = not at all; 9 = very much)
7.0 p < .001
Were the screen layouts helpful?
(1 = never; 9 = always)
6.8 p < .001
Sequence of screens
(1 = confusing; 9 = clear)
6.8 p < .001
Use of terms throughout system
(1 = inconsistent; 9 = consistent)
7.1 p < .001
Does the terminology relate to the work you are doing?
(1 = unrelated; 9 = well related)
6.6 p < .001
Messages which appear on screen
(1= inconsistent; 9 = consistent)
6.9 p < .001
Messages which appear on screen
(1 = confusing; 9 = clear)
6.7 p < .001
Does the computer keep you informed about what it is doing?
(1 = never; 9 = always)
5.8 p < .05
Error messages
(1 = unhelpful; 9 = helpful)
4.6 n.s.
Learning to operate the system
(1 = difficult; 9 = easy)
7.1 p < .001
Exploration of features by trial and error
(1 = discouraging; 9 = encouraging)
6.7 p < .001
Remembering names and use of commands
(1 = difficult; 9 = easy)
7.2 p < .001
Can tasks be performed in a straightforward manner?
(1 = never; 9 = always)
6.7 p < .001
Help messages on the screen
(1 = confusing; 9 = clear)
5.8 p < .05
Supplemental reference materials
(1 = confusing; 9 = clear)
5.9 p < .05
System speed
(1 = too slow; 9 = fast enough)
5.2 n.s.
How reliable is the system?
(1 = unreliable; 9 = reliable)
5.5 n.s.
System tends to be
(1 = noisy; 9 = quiet)
8.3 p < .001
Correcting your mistakes
(1 = difficult; 9 = easy)
6.5 p < .001
Are the needs of both experienced and inexperienced users taken into consideration?
(1 = never; 9 = always)
6.3 p < .001

Footnote. Significance is based on two-tailed t-tests against the scale midpoint of 5.0; n.s. = not significant.

Overall, the results were very encouraging and suggested good usability of the HyperCourseware interface. In particular, the system was easy to learn and easy to remember how to use. The major problems were in terms of the system keeping the user informed as to the state of the system and the helpfulness of error messages. These problems were traced to several bugs in the software that generated error messages that were meaningless to the students. Help messages and supplemental reference materials were almost non-existent in the early version of HyperCourseware. Consequently, these features were given low ratings. Finally, system speed and reliability were rated relatively low. The system took several seconds to move from screen to screen due to the slow computer processors. To make matters worse, the operating environment was unstable and crashes were frequent.

In addition to the QUIS, a number of questions were added to assess aspects of the learning environment. These are shown in Table 12.3.

Table 12.3 Mean ratings of environment-specific items in the AT&T Teaching Theater.

Item and Anchors    Mean    Significance
The instructor integrates the media into the lectures
(1 = poorly; 9 = very well)
6.8 p < .001
The media interferes with learning the material
(1 = agree; 9 = disagree)
5.2 n.s.
The media helped in the learning process
(1 = disagree; 9 = agree)
6.0 p < .01
The computer helped in the learning process
(1 = disagree; 9 = agree)
6.2 p < .001
Technologies in the classroom were used to full potential
(1 = disagree; 9 = agree)
6.0 p < .01
Time to learn the computer was
(1 = inadequate; 9 = adequate)
6.3 p < .001
Degree of difficulty staying on the correct screen while attending to lectures
(1 = difficult; 9 = easy)
7.0 p < .001
During this semester my ability to use the computer
(1 = remained the same; 9 = improved)
6.3 p < .001
Accessibility of the system from the WAM Labs
(1 = inaccessible; 9 = accessible)
4.1 p < .01

Footnote. Significance is based on two-tailed t-tests against the scale midpoint of 5.0; n.s. = not significant.

While most of these ratings were high, two major problems emerged. First, there was a problem with interference from the media. In the early use of materials in the electronic classroom, students did not always know where to look and sometimes got lost in the materials. Second, in the early semesters of use, students were not able to access materials outside the classroom from the computer labs. This was a constant point of frustration and sometimes led to distributing paper printouts of the notes.

In another study, 36 undergraduate students in a behavioral statistics course used HyperCourseware for a full semester in the AT&T Teaching Theater. Prior to the first class, students were not aware that the course would meet in an electronic classroom, so that the class would be a representative sample of students. Students were asked to rate a number of aspects of the system and the use of the software during the semester. Items were rated on a 9-point bipolar scale with varying end anchors depending on the item, as shown in Table 12.4.

Table 12.4 Mean ratings of items in the electronic classroom.

Item and Anchors    Mean    Significance
The computer
(1 = hard; 9 = easy)
6.9 p < .001
Frustrated by the computer
(1 = all the time; 9 = none of the time)
5.0 n.s.
How hard is it to make sure you're on the right screen while you are paying attention to the lecture?
(1 = hard; 9 = easy)
6.4 p < .001
How well the instructor integrated the media into the lectures
(1 = breaks in lecture; 9 = seamless presentation)
7.1 p < .001
The usage of the media
(1 = disliked a lot; 9 = liked a lot)
6.6 p < .001
The helpfulness of the computer in the learning process
(1 = not helpful; 9 = very helpful)
5.9 p < .05
Technologies in classroom were used to potential
(1 = far from potential; 9 = very close to potential)
7.1 p < .001

Footnote. Significance is based on two-tailed t-tests against the scale midpoint of 5.0; n.s. = not significant.

Similarly, these results indicate very positive assessments of both the software and the learning environment. The relatively low rating on the frustration item continued to be due to software bugs and crashes.

Comparative Assessments. Finally, it is instructive to ask the students to compare their experiences in the electronic classroom with the traditional classroom. Table 12.5 shows a number of questions that have been asked in various classes.

Table 12.5 Mean ratings of items comparing the electronic classroom with the regular classroom.

Item and Anchors    Mean    Significance
Do you think that you LEARNED more or less here in the Teaching Theater than you would have learned had this class been in a regular classroom?
(1 = less; 9 = more)
6.9 p < .001
Do you think that your class was more or less INTERESTING here in the Teaching Theater than it would have been in a regular classroom?
(1 = less; 9 = more)
7.5 p < .001
Do you think that you were more or less MOTIVATED to come to class here in the Teaching Theater than you would have been if this class were in a regular classroom?
(1 = less; 9 = more)
6.9 p < .001
Do you think that you had more or less opportunity to BE HEARD by your professor here in the Teaching Theater than you would have had if this class were in a regular classroom?
(1 = less; 9 = more)
6.6 p < .001
Do you think that you heard more or less FROM YOUR CLASSMATES here in the Teaching Theater than you would have heard if this class had been in a regular classroom?
(1 = less; 9 = more)
6.6 p < .001
Do you think that your class was more or less DOMINATED by aggressive students here in the Teaching Theater than if this class had been held in a regular classroom?
(1 = less; 9 = more)
5.3 n.s.

Footnote. Significance is based on two-tailed t-tests against the scale midpoint of 5.0; n.s. = not significant.

These results support the electronic classroom over the regular classroom in every area except problems caused by dominating students. This also suggests that greater effort needs to be directed at the development and use of software tools for interpersonal communication. While aggressive students can still dominate using such tools as the Chat Channels discussed in Chapter 8, techniques can be used to attenuate such dominance by limiting the number of submissions and the length per entry.

Open-ended questions have proven to be very diagnostic of the benefits and problems with software, hardware, and educational procedures in the electronic classroom, and such questions have been routinely used in studies at the University of Maryland.

Tracks and Traces

A very telling assessment of a system is whether or not it is actually used. In the use of most software there are mandatory features that have to be used and there are discretionary features which may be used or ignored. In educational environments such as HyperCourseware, a number of components are compulsory. The students must use the syllabus to access the notes; they must follow along in the notes to study the materials; they must submit their assignments; and they must complete exams. However, other components such as the class roll, the seating chart, the feedback and question tools, use of chat channels for discussion with other students, and access to ancillary information are encouraged but not demanded in the system. If the students do not access these modules, it suggests that they were not useful or not well implemented. On the other hand, one has to be careful in looking at frequency of access data in hypermedia systems. Some screens may be frequently accessed only because they are navigational paths (e.g., indexes) to other screens that are more interesting. Some screens may be extremely important but rarely viewed, such as screens for entering or changing one's password.

Frequency of access data is routinely collected in HyperCourseware as the user navigates through the system. Table 12.6 shows some of these data in terms of percentages of times screens are accessed within each of the modules.

Table 12.6 Typical access percentages for screens viewed within modules in HyperCourseware in class and out of class

Module    In Class    Out of Class
Home Screen 16.30 11.57
Syllabus Module 5.79 2.37
Lecture Modules 23.21 9.54
Reading Module 0.52 0.53
Assignments Module 1.45 2.44
Class Roll Module 8.15 6.58
Seating Chart 0.99 0.20
Chat Module 4.06 2.33
Exam Modules 1.03 0.75
Grade Module 0.40 0.98
Projects Modules 0.47 0.34
Totals 62.37 37.63

These and other data indicate that, as one might expect, students tend to visit modules that are specifically a part of course work and that are compulsory. The Home Screen is very frequently accessed because it is a navigational hub for access to other modules. The lecture modules are the most frequently accessed because they contain the bulk of the materials. Overall there is much more frequent access to the materials in the classroom than out of the classroom. The only exception is accessing one's grades. Access is also time dependent. Entering information into the class roll tends to be limited to only the first two weeks of the semester, but viewing screens in the Class Roll is rather frequent throughout the semester. Viewing grades and grade distributions occurs shortly after exams. Inspecting the seating chart occurs during the first few weeks of the semester and then drops off as the novelty wears thin and students sit in the same places as a matter of habit.

Frequency of access data is also highly dependent on the structure of the course and the types of activities required by the instructor. In courses with greater emphasis on discussion, access of lecture modules will be much lower and the use of Chat Modules much higher. The use of the Assignments and Projects modules will also depend on the relative importance of these activities in class. If the system is working properly, the frequency of access of the modules by students should reflect the educational priorities of the instructor. Monitoring access of modules in the educational process should help instructors evaluate how well these priorities are being transferred to the students and reflected in their behavior.
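As a rough sketch of how such access percentages might be tabulated, the following Python fragment tallies a hypothetical access log in which each record names a module and whether the access occurred in or out of class. The log format, module names, and counts are invented for illustration; HyperCourseware's actual logging is not shown here.

```python
# Hypothetical sketch: tally screen accesses per module, split by in-class vs.
# out-of-class use, and express each cell as a percentage of all accesses,
# in the spirit of Table 12.6. The data below are invented.
from collections import Counter

access_log = [
    ("Home Screen", "in"), ("Lecture Modules", "in"), ("Lecture Modules", "in"),
    ("Chat Module", "out"), ("Grade Module", "out"), ("Home Screen", "out"),
    ("Class Roll Module", "in"), ("Lecture Modules", "out"),
]

counts = Counter(access_log)              # (module, context) -> number of accesses
total = sum(counts.values())

for (module, context), n in sorted(counts.items()):
    print(f"{module:20s} {context:3s} {100 * n / total:6.2f}%")
```

Percentages computed this way can then be compared across semesters or course structures to see whether the instructor's priorities are in fact showing up in student behavior.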

Artifacts

The artifacts of education consist of the many objects that are created during the process of education. These objects fill the areas of the diagram shown in Figure 5. Even today one can find megabytes of course materials created by faculty and publishers of electronic media. Furthermore, the course products created by the students are increasing rapidly. These artifacts will become the best testimony to the success of the switched-on classroom. There are three perspectives on these artifacts: those of publishers, faculty, and students.

From the perspective of publishers, the switched-on classroom is becoming a target for marketing new media materials. As publishers develop materials, those materials are evaluated in the marketplace in terms of what sells, what is adopted, and what ultimately survives. This has been the situation with hardcopy textbooks, and it will be the same with electronic materials, whether CD-ROMs, transferable files on the network, or World Wide Web sites. The success of books is clear from their sales and from the size of libraries. The success of electronic materials will be gauged by similar statistics: somewhat by the volume of materials, but more importantly by their use and ultimately by sales revenue.

From the perspective of faculty, the success of the electronic educational environment will be measured by their use of these materials in their courses. Use presupposes that they have either collected materials from published domains or created their own materials. In the same way that some instructors have developed courses, written textbooks, and compiled materials, there will be those in the electronic media who do the same. Others will adopt and use materials generated by others. The success of electronic materials will be revealed as one's teaching portfolio shifts from paper to electronic media, from static notes to dynamic images, and from small sets of examples to large sets of electronic pointers to examples.

Finally and most importantly, from the perspective of the student, the success of the switched-on classroom will be indicated by what students take from the educational process as they proceed on their journey through life. Students move from grade to grade and on to graduation with an accumulation of both internalized knowledge and skills and external documents and records. The latter constitute the student's artifacts of the learning process. In the hardcopy classroom these artifacts were notes in three-ring binders, term papers and written reports, collections of laboratory exercises, and countless handouts. Most of these are eventually discarded as they deteriorate, lose their meaning, become disorganized, and become obsolete. As the electronic educational environment succeeds, these artifacts will become electronic. The advantages of these educational artifacts go back to the advantages listed in Chapter 3. They can be easily stored, copied, searched, updated, and linked to other material. As students progress from grade to grade they retain a rich history of course materials and the products of their work. When they need to call up some information that they learned in the past but cannot recall from memory, they can retrieve it from the external compendium of their course work. Finally, these materials will become a fertile ground of research for educators studying the effects of teaching methods. What do students retain in their external files, how do they organize these materials, and what materials are actually accessed for use as life goes on?

Conclusion

The driving forces for the electronic classroom go well beyond enhanced learning. They also involve new efficiencies due to automation and new expectations on the part of the students. Overall assessments of the benefits of the electronic classroom are hard to conduct, and the results are susceptible to bias. However, assessments of specific aspects of the electronic classroom are not only easier to get but also more useful for evaluation purposes. Frequency of use data is also easy to collect and highly indicative of what works and what does not. These data need to be gathered to provide direction to interface design and implementation of the electronic classroom.

Ultimately, the justification for spending billions of dollars on the new electronic educational environment will come after the fact. When all is said and done, it will be obvious in 20/20 hindsight. Students and teachers will sit around and remark, "I can't imagine what it was like before the switched-on classroom!"

Exercises

1. Use one of the questionnaires discussed in this chapter as a basis for studying an implementation of teaching technology in your school. You will probably want to add demographic questions (e.g., age, sex, computer experience, etc.) and other questions related to the specific implementation.

2. Do a survey of the artifacts of your own education. What notes have you retained? How are they organized? What have you referred back to and found useful?

