Moursund's IT in Education Home Page

Editorials

Volume 12 1984-85 Editorial (with Retrospective Comments)

Reprinted with permission from Learning and Leading with Technology (c) 1984-85, ISTE (the International Society for Technology in Education), 800.336.5191 (U.S. & Canada) or 541.302.3777, cust_svc@iste.org, http://www.iste.org/. Reprint permission does not constitute an endorsement by ISTE of the product, training, or course.

1. Aug.-Sept. 1984 Back to Basics
2. October 1984 NEA and Educational Software
3. November 1984 Preparation to Be a Computer Coordinator
4. Dec./Jan. 1984/85 More Harm Than Good
5. February 1985 Problem Solving: A Computer Educator's Perspective
6. March 1985 The Thrill of Victory—the Agony of Defeat
7. April 1985 The Fifth Generation: It’s for Real
8. May 1985

Back To Basics

Moursund, D.G. (August/September 1984). Back to basics. The Computing Teacher. ICCE.

Reading, writing and arithmetic—the 3 Rs. Some computer educators become so enamored with computer potentials that they forget why the “basics” are so named.

Reading provides access to information. A book is an inexpensive, easily portable vehicle for transmitting large quantities of information over time and distance. Reading provides access to quite a bit of the accumulated knowledge of the human race. Reading is also a form of entertainment.

Writing provides the materials to be read. Equally important, writing is an aid to the human mind as it works to solve a variety of problems. For example, writing provides temporary storage of ideas as I work out the order and details of a workshop or lecture I intend to present.

Arithmetic also serves two major purposes. Numbers can represent quantities or location, distance, time, area, volume, and other measurements. Arithmetic (more generally, mathematics) provides a language to represent, store, and access these types of information. As with reading and writing, quantifiable information can be transmitted over time and distance. The geometric theorems of Euclid are as valid today as they were 2,000 years ago.

Arithmetic is also an aid to problem solving. If a problem can be represented using the notation and ideas of arithmetic, then one may be able to solve the problem using the accumulated knowledge and the tools of this field. The tools include operations such as addition, subtraction, multiplication, and division; other tools include drawing diagrams and graphs.

As an educator, it is important that you understand the 3 R's. As a computer educator, it is important that you understand how computers interface with and possibly affect the 3 R's.

The role of reading and writing as aids in transmitting information over time and distance has been indicated. A number of other aids have been invented. The telegraph and telephone certainly revolutionized communication over long distances. Photographs and movies, radio and television, phonographs and tape recorders, computers and laser discs—all aid communication over time and/or distance. The telephone is particularly interesting. It takes some training to use a telephone. But what is mostly required is a level of speaking and listening skills that people can usually acquire without benefit of formal education. Thus, while formal training in use of telephones is required for some jobs, telephone literacy is not part of the school curriculum.

Right now computerized telecommunication systems, data banks, bulletin boards, and teleconferencing seem rather esoteric to many. The suggestion is that learning to use such aids to communication is difficult and requires extensive formal training, even though using them is mostly a matter of reading and writing (typing). That is mainly true because such facilities are still relatively expensive and not readily available, and because the people-machine interfaces need additional work. Children who grow up with ready access to such facilities will find that they are easy and convenient to use. Reading and writing will remain basics, but they will be supplemented and extended by computerized aids to communication.

I want to make two additional points about some of the modern inventions. First, each new invention such as radio or television broadens the scope of communication. It takes substantial training and experience to be a skilled radio broadcaster or television producer. But generally it takes little formal training to be a user of these new inventions. The knowledge and skill needed to use the inventions is decreased by the development of appropriate people-machine interfaces. One sees this in modern cameras and in television sets.

Second, some inventions actually decrease or substantially change the type of training and experience important to the basics of education. The typewriter has decreased the relative importance of being able to write very neatly and rapidly. It does take training to learn touch-typing. But elementary school children can learn to type, rapidly acquiring useful skills. As a second example, consider learning to use a card catalog and to search library stacks versus learning to use a computerized information retrieval system. The latter will eventually be an easier and a far more reliable means of securing desired information. Notice in both examples that reading and writing are necessary skills and that the usefulness of the skills is expanded by inventions.

Increasing the Power of Basics

We have also indicated that reading and writing are aids to organizing ideas. Consider what you do as you prepare to write a paper or prepare to give a lecture. Consider the nature of the notes you take during a lecture or a staff meeting. To me it seems clear that an easily portable word processor may satisfy some of the same needs. But for me, such a tool will never replace pencil and paper for doodling during an incomprehensible talk or a dull staff meeting. Moreover, pencil and paper remain an excellent tool for prewriting and other organizing processes.

And that brings us to arithmetic. A calculator can aid in addition, subtraction, multiplication, and division. A computer can draw graphs, solve equations, and carry out complicated symbol manipulations. But these things are meaningful and useful only if one has mastered the vocabulary, notation, and methods of representing problems in mathematical form. Electronic technology is a wonderful aid to parts of arithmetic, and its ready availability suggests changes in the nature of mathematics education. There can be less emphasis upon routine manipulation and more emphasis upon higher-level cognitive processes. But the need to learn vocabulary, notation, what types of problems can be solved, and the representation of problems as mathematics remains. And so far, no computerized system approaches pencil and paper as an aid to organizing one's thoughts and trying to figure out how to represent or to solve a math problem.

Long Live Basics!

The point to be made with each of the 3 Rs is the same. Computers do not decrease the value of reading, writing, and arithmetic. But computers are an aid to accomplishing the underlying purpose of each of the basics. Thus, the ready availability of computers actually tends to broaden the scope/nature of each of the basics and thus places an additional burden on our educational system unless we change the system somewhat. I think that gradually computers will be assimilated into the definition of each of the three basics. Eventually the term “writing” will include keyboarding and use of a word processor. The term “reading” will include accessing information from computerized data banks. The term “arithmetic” will include making use of calculators and computers as aids to problem solving. And the basics will stay basic.

NEA and Educational Software

Moursund, D.G. (October 1984). NEA and Educational Software. The Computing Teacher. V12 N2. ICCE.

The National Education Association (NEA) is a very large union, with about 1.7 million members. Its membership includes over half of all teachers in the United States. The NEA represents its members as they bargain for pay and working conditions. It represents its members on issues coming before state and federal legislative bodies. The NEA has a long and successful history.

In 1983 the NEA became interested in the area of computers in education. The specific issue that attracted the NEA's attention was the quality of available educational software. A sequence of decisions was made, with an announcement of the general ideas appearing in the June 1983 NEA Today.

The first decision the NEA made was to establish the NEA Educational Computer Service and to become involved in software evaluation. This was a major decision, made at a time when MicroSIFT from the public sector and EPIE from the private sector were both well established. Many local, regional, state and/or provincial educational organizations were (and still are) quite involved in software evaluation. Many companies publish software reviews, with their publications competing in the commercial marketplace. And many periodicals, both non-profit and for profit, publish software reviews.

The second decision the NEA made was to charge software companies to have their software evaluated. This was a particularly interesting decision, since no one else was doing so. Perhaps the ideal model for product comparison and evaluation is provided by Consumer's Union. Consumer's Union goes so far as to purchase all products it will test, making the purchases through retail stores at various sites throughout the country.

Most software evaluation organizations don't go that far. Even EPIE, which has close ties with Consumer's Union, expects software companies to provide it with the software to be evaluated.

The third decision the NEA made was to work with a for-profit company on this project. NEA and Cordatum, an engineering company, created the NEA Educational Computer Service. NEA ECS publishes a series of catalogs listing NEA “Teacher Certified” software. A subscription is $20 per year.

The fourth, and most interesting, decision made by the NEA was to become involved in the sale of software. NEA “Teacher Certified” software may be purchased through the catalog. It was arranged that all profits would be split equally between NEA's not-for-profit National Foundation for the Improvement of Education and the for-profit engineering firm Cordatum. Evidently, this arrangement has been changed recently. The Spring 1984 issue of the catalog indicates that checks and money orders should be made payable to the NEA Educational Computer Service. The Summer Supplement 1984 indicates that payments should be made to Cordatum, Inc., and gives Cordatum's address for the placement of orders.

In this editorial I will comment on all four decisions.

First is NEA's decision to get involved in software evaluation. Essentially this was a statement that those who were currently involved were not doing the job in a manner acceptable to the NEA, and that the NEA felt it could do better.

The evaluation of software is a difficult business. It is easy enough to look at one piece of software and to write a personal “evaluation” (more technically, this should be called a review). It is quite another thing to handle large quantities of software in a timely and professional manner. There are perhaps 7,000 or more pieces of educational software currently on the market. It is essential that the evaluative work be credible—that is, so well done that it is widely accepted by others. It does no good to publish software evaluations if the potential end users don't trust the results and decide they would rather do it themselves.

MicroSIFT, for example, is part of the Northwest Regional Educational Laboratory, a prestigious federally supported research and development center. MicroSIFT assembled an advisory group that read like a “Who's Who” in computer science education. It made a substantial effort to get school districts involved from throughout the country and to produce very high-quality evaluations.

EPIE has a long history of educational products review. It secured grant funds from private foundations and support through Consumer's Union to enter the software evaluation fray.

The NEA has none of these characteristics. Its involvement in education for the past several years has been as a labor union, as an advocate of its members. The NEA has no record of academic involvement or expertise that lends credibility to this operation. What does NEA “Teacher Certified” mean when compared against a solid, positive evaluation from EPIE, MicroSIFT or a local educational authority?

The decision to charge software companies to have the NEA take a look at their products is questionable. It is evident that software evaluation is a costly process. The amount being charged (between $100 and $1,000 per package) is not enough to pay the expenses involved in a careful evaluation. Both EPIE and MicroSIFT make extensive use of unpaid volunteers. EPIE is dependent upon grants from foundations and is making a determined effort to vend its evaluations in a manner to keep the whole operation financially solvent. MicroSIFT has depended mainly on continued federal funding.

I view the decision to charge an evaluation fee to software companies to be politically poor. But perhaps worse was the decision to use the terminology NEA “Teacher Certified.” The terminology suggests that all software has been divided into two categories—those packages that meet NEA's criteria and those that fail to meet the criteria. A person might be led to conclude that if software is not NEA “Teacher Certified” it is somehow inferior. But that is quite misleading. First, some sort of preliminary selection process occurs. The great majority of the 7,000 or more available software packages are not even selected for possible evaluation. Then two additional criteria must be met. The software company must agree to pay the evaluation fee of $100 to $1,000. The software company must agree to allow NEA (through its Educational Computer Service or Cordatum) to sell the software. Since some software companies are not willing to allow their products to be sold by outside vendors, their software is automatically excluded. Other companies, as a matter of principle, are unwilling to pay the required fees. The overall result is that many fine pieces of software are excluded from the evaluation process.

The third NEA decision, to contract with a for-profit company, is also questionable. The fact that Lawrence Fedewa is director of the NEA Educational Computer Service and is also both a major stockholder and vice president of Cordatum complicates the issue. Remember, the value of a software evaluation depends on the results being so widely accepted that others who might do their own evaluations decide not to. The credibility of the evaluators and their commitment to producing accurate and unbiased evaluations is essential.

But it is the fourth decision, the NEA deciding to become a software vendor, that is hardest to accept. The problem is one of vested interests. There appear to be two major categories of vested interests here. First, we have the NEA involved both as the evaluator of the software and the vendor of the software. If all of the software were deemed inadequate, there would be no software to vend. If all of it were deemed adequate, there would be no value to NEA “Teacher Certified.” This is also tied in closely with the second decision—charging the software companies. Here we seem to have pressure being put on companies to pay the NEA to evaluate their software in the hopes that it will become NEA “Teacher Certified” and then widely sold by the NEA.

An analogy might help make this first type of vested interest clearer. How would you feel if Consumer's Union were in the retail business, and it only sold products that received high ratings in its evaluation process? This would certainly decrease my faith in the value of their ratings.

The second aspect of this vested interest is that many NEA members are in a position where they are purchasing software for use in schools. Will some allow their membership in NEA to enter into their purchasing decisions? Might not other software be better or might not the software be available from a better vendor? “Better” in this case might include cheaper prices, better service, local support, a better guarantee and so on. Certainly this situation opens NEA members to a charge of conflict of interest.

Here we have the very essence of the issue. A union, such as the NEA, is created and exists for certain purposes. Once in existence it has the potential to become involved in activities that only vaguely relate to these initial purposes. I have no confidence whatsoever that the NEA involvement in software evaluation and software sales will help the quality of education in this country. I fail to see how this involvement is important to the mission of the NEA as a labor union. Moreover, this is a step in a direction that really scares me. If we can have NEA “Teacher Certified” software, then we can have NEA “Teacher Certified” books, films and other instructional media. I can imagine a battle between various teacher education unions on the appropriateness of different curricula or teaching methods. In all of this I see the student as the real loser.

The very essence of this issue is its potential impact on the quality of education students obtain. There is nothing that leads me to believe that having the NEA involved in software evaluation and sales will improve our educational system. On the contrary, I see this venture as setting a very bad precedent. Extensions of this precedent have a strong likelihood of decreasing the quality of our educational system.
Perhaps these conclusions are based on insufficient information. Cordatum, NEA or NEA ECS may have additional facts or counterarguments that will shed fresh light on their efforts. An article by staff members of NEA ECS is scheduled for the November TCT.

Retrospective Comment 1/19/05

The software evaluation and sales venture by NEA did not prove to be a success, and NEA withdrew from this business. Over the years it made a variety of other attempts to help teachers deal with the field of computers in education.

In addition, the National Foundation for the Improvement of Education (NFIE, NEA's private foundation, which is housed in the NEA building in Washington, DC) has been involved in a number of different computer-related projects. One of these was funded by Bill Gates. He wrote a book titled The Road Ahead and donated $3 million from the royalties, along with some of his own funds, to a three-year NFIE project called The Road Ahead. Talbot Bielefeldt and I at ISTE wrote a proposal to try to be awarded this $3 million project. We (ISTE) received a $600,000 grant to serve as outside evaluators and consultants to the project. This allowed Talbot and me to work with the NFIE people over a period of three years, getting to know their leadership as well as some of the NEA leadership. We also got to work with a number of outstanding educators from throughout the country. It was a valuable and fun experience.

Over the years, ISTE has done a variety of cooperative ventures and activities that involved NEA. Retrospectively, I am relieved that NEA did not seem to hold a grudge from the editorial!


Editorial # 48. Preparation to Be a Computer Coordinator. V12 N3
(Editor’s Message)
David Moursund
Editorial # 48 (Vol. 12 No. 3) November 1984
The Computing Teacher
As part of the University of Oregon summer program in computer science education, I run a twice-weekly colloquium series. Generally I present a number of these colloquia, but visitors and other faculty make the bulk of the presentations. Students who attend the colloquium series receive one graduate hour of credit for attending regularly and writing brief reports, consisting of four paragraphs: a summary; the most important point; the least important point; and how the presentation relates to the student's previous knowledge or experience. Via these reports I receive substantial feedback on students' reactions to the presentations.
One of my presentations this past summer was on computer coordinators. I will summarize the presentation and focus on its most controversial aspect herein. Approximately 100 computer educators have provided feedback that has contributed to the main ideas given here.
There are two general categories of computer coordinators—those at the school building level and those at a school district level. Often the school building computer coordinator has both teaching and coordinating duties, which is a considerable responsibility. A computer coordinator in a small district may find the breadth of responsibilities and the skills demanded exceed those needed to hold a similar position in a larger district.
At any level, a computer coordinator is apt to have a wide variety of duties. These can be analyzed by looking at a standard model of computers in instruction. A school or district is involved in teaching about computers (computer literacy and computer science), teaching using computers (often called computer-assisted learning) and teaching incorporating computers (use of the computer as a tool). A computer coordinator may be involved in hardware and software selection and acquisition; teacher and administrator education; curriculum planning and development; and long-term planning and budgeting. A coordinator is a resource person. A coordinator works with teachers, administrators, school board members and parents. A coordinator is a leader, doing or helping to facilitate what needs to be done.
Out of such an analysis comes a list of suggested competency areas. My list covers four major areas.
C1. Technical knowledge in the fields of computer science and computer education.
C2. Interpersonal relations skills; written and oral communication skills; administrative skills.
C3. Overall intelligence and perseverance; good ability to learn; a broad general education and dedication to lifelong learning.
C4. Knowledge and support of our educational system; good skills in teaching school children and educators.
My presentation stressed the importance of all of these areas, although more time was spent on the first than any one of the others.
I have received substantial feedback on this presentation. I carefully read the student reports, and many students approached me individually to offer suggestions. Prior to making the colloquium presentation, I wrote up the materials as a chapter in a book I am working on. Based upon the feedback, I rewrote the chapter a couple of times, then had it read by several computer coordinators and revised it again. The chapter is being reprinted in ICCE's SIG Bulletin, Vol. I Number 4. It can be obtained by subscribing to the SIG Bulletin.
The feedback I have received consists of three types.
F1. Why did you stress technical competence (C1) so much? It is not nearly as important as the other three areas.
F2. Why did you stress the non-technical areas (C2-C4) so much? They are not nearly as important as technical competence.
F3. If I had all of those qualifications, I'd leave teaching and get rich!
I had expected quite a few people to provide the third type of feedback. The demands being placed on computer coordinators tend to be overwhelming and unreasonable. A computer coordinator position is a high-stress job, with burnout a distinct possibility. Quite often the pay scale is the same used for teachers, but the required work and responsibilities are substantially larger.
I was surprised at the nearly equal numbers of people who offered either the first or the second feedback. In retrospect, however, it seems clear what is going on. Many people are in the computer field because they have a talent for acquiring technical skills in computer and information science. They enjoy the process of attacking and solving hard computer programming problems and studying the underlying theory. On the average, such people tend to be somewhat less interested and skilled in the non-technical areas C2-C4. Thus, they feel that C2-C4 are less important.
And of course, the opposite situation also occurs. Many educators are quite talented in the areas C2-C4, and at the same time have developed a considerable interest in the computer field. This interest carries them through learning various computer applications such as word processing, and through introductory programming. However, many such people find the “solid” computer science courses (such as the freshman/sophomore sequence for undergraduate computer science majors) quite difficult and not particularly interesting. Their natural tendency is to conclude that technical competence is of less importance.
In my opinion a computer coordinator should be reasonably well qualified in all of C1-C4. However, I know a number of competent and successful computer coordinators, and they vary widely in their range of competencies. The really successful ones tend to share several characteristics: they are dedicated to education and have a strong work ethic; they are quite smart and good learners; they are good listeners and open to learning on the job; and they are all exceptionally strong in one or more of C1-C4.
If you have these characteristics and are reasonably qualified in all of C1-C4, it is likely that you will succeed as a computer coordinator. If you find that your qualifications are unbalanced, put some effort into improving your weaker areas. The position of computer coordinator is terribly challenging, but it can be a tremendous amount of fun.
Several other suggestions can be based on the above type of analysis. If you are thinking about becoming a computer coordinator, look at your strengths and weaknesses. You may decide you would be much happier staying in your current position. If you find you have certain weaknesses and still want to be a computer coordinator, make this a factor in bargaining for a position. Indicate clearly what job responsibilities you are interested in having and which responsibilities you would prefer be assigned to others. After all, the overall job is apt to be bigger than one person can handle.
An essential aspect of being a computer coordinator is developing or finding others who can help you. If you are a computer coordinator with weaknesses in certain areas, try to find help in these areas. Also, consider keeping a careful log of how you spend your time and how much time you spend on the job. Use this log to bargain for increased pay or staff, as well as for deciding how to optimally use your skills and time.



Editorial # 49. More Harm Than Good. V12 N4
(Editor’s Message)
David Moursund
Editorial # 49 (Vol. 12 No. 4) December/January 1984-85
The Computing Teacher
I received a phone call a few days ago from a friend deeply involved in teacher education aspects of computer education. He was quite upset about a recent happening in his state. Evidently a change in school requirements had caused a major decrease in the number of needed physical education teachers. Consequently, a number of physical education teachers were given a two-week computer literacy course and reassigned to the job of junior high school computer literacy teacher. He was upset that such a thing could occur in 1984. Would you like your children's first serious computer course to be provided by “14-day wonders” from a random academic field? How can computer educators allow such a thing to happen? It's a crime! Something should be done.
This example illustrates some of the difficulties of being a school district superintendent. The state or district requires that all students become computer literate via a junior high school course. There is a severe shortage of teachers in this area. Meanwhile, the superintendent is faced with the possibility of having to lay off or reassign a number of teachers who are quite experienced in working with students. The decision to retrain these teachers and make them into computer literacy instructors helps solve two different problems. Of course, it also opens the superintendent to criticism from leaders in computer education.
The example illustrates some difficult questions that most computer education leaders seem unwilling to face. On the one hand we have a strong push being led by parents, computer educators and many educational leaders to have all students become involved with computers.
On the other hand we have the fact that the overall field of computer and information science, as well as the specifics of computers in education, is quite complex. It seems relatively easy to state high-level goals such as “All students should become computer literate” and “Computer assisted learning should be used in all aspects of the curriculum when it is educationally and economically sound.” But translating these goals into measurable behavioral objectives and specific lesson plans, and implementing the lesson plans, is a considerable challenge.
Moreover, there is little national agreement about the specific meaning of computer literacy or about when and how computer assisted learning should be used. There is little agreement about possible scope and sequence or even on the content of specific computer courses. Consequently, each school district or individual school is moving ahead on its own. The schools are trying to respond to the demands being placed by parents, students and educators. Are they doing more harm than good?
The more-harm-than-good question is worth asking even if we cannot provide a full answer. It may help us make some decisions that will improve the overall quality of our children's education.
Probably the easiest area in which to respond to the more-harm-than-good question is in computer-assisted learning. A substantial body of research literature on CAL has been accumulated over the past 30 years, and CAL has been extensively used for many years in a wide variety of educational settings. A number of researchers have surveyed the literature and given CAL favorable reports. It seems obvious to me that if CAL did more harm than good we would have accumulated substantial evidence supporting that assertion. Instead, the major issues seem to be cost effectiveness and availability of adequate hardware and software.
The more-harm-than-good question becomes more difficult as we investigate teaching computer literacy, teaching computer science or integrating the computer as a tool in the overall curriculum. School districts seriously addressing the issue of computers in education often develop a scope and sequence for instructional computing. Such a plan may cover K-12 and be designed to provide substantial computer opportunities to all students.
At the grade school levels the computer plan may have three major components. One is hands-on experience, perhaps using a variety of CAL materials. A second is instruction in using a computer as a tool, perhaps for word processing. The third is specific computer literacy instruction, perhaps including some programming. This plan is often implemented by the regular classroom teacher, who may have had several days of computer workshops—perhaps some more, some less. I doubt that most of these teachers have had the equivalent of two weeks of instruction in computer education. Of course, some elementary schools have hired computer specialists with a substantial amount of computer knowledge and experience.
Now we see why the more-harm-than-good question is so difficult. I can visit a classroom and observe teachers working with kids and computers, and form a personal opinion as to whether I would want my own or others' children to be involved in this setting. Indeed, I have done this. With some (generally quite computer-knowledgeable) teachers, it seems clear that children are receiving a good educational experience. With other teachers it seems evident that the overall result is of questionable or even negative educational value. But where is the published literature on this? Who has done a definitive study that supports or rejects the idea of using regular elementary school classroom teachers to introduce computers into the curriculum?
Teachers need different amounts of training for different types of computer applications. Having students use drill and practice materials to reinforce traditional curriculum materials requires relatively little computer education knowledge on the part of teachers. It is easily justified by the existing research. And what about having students learn keyboarding, word processing or use of an applications package from a teacher who is at best a novice in these areas? My guess is that we will eventually develop solid evidence to support such instruction. User-friendly software backed up by computer assisted instruction to teach use of the software will certainly reduce necessary teacher knowledge. But finally, what about having children learn BASIC or Logo in such an environment? Show me the evidence that this is appropriate. Remember, your evidence must consider alternative uses of student and teacher time, and of the money spent for computer facilities.
Many students now receive their first formal computer course at the middle school or junior high school level. It may be nine weeks or half a year in length. Typically more than half of the course content is an introduction to computer programming, generally in BASIC. Often the course is taught by a computer teacher—a person who teaches such a course several or many times per year. Frequently such computer literacy teachers are self-taught; preparation via a two-week workshop may well be about the average. Of course, many such computer teachers have more extensive preparation.
Such courses have been taught for many years and standard textbooks now are readily available. We all have heard of students who learned to program while in middle school or junior high school and then went on to make a lot of money as programmers. But have you heard of students who were completely turned off by computers because of a poor-quality course or because they were not developmentally ready to handle the course? Such negative examples receive little publicity, but may far outnumber the well-publicized, positive cases. There is surprisingly little research literature on this topic.
At the high school level the district computer scope and sequence plan generally calls for offering a number of computer literacy, computer application, and computer programming courses, perhaps culminating in an advanced placement (Pascal) course. Once again, teacher preparation varies widely. A few teachers have the equivalent of a bachelor's degree in computer science, but most have had little modern, formal preparation.
A significant fraction of students entering college computer science courses have had high school computer courses. Some colleges keep careful track of the performance of these students versus the performance of students who enter with no formal computer background. Once again the published literature is sparse. However, my personal communications with many computer scientists suggest that having had a high school computer course may be a distinct disadvantage to doing well in a college computer science course. Some suggest that such courses teach spaghetti programming and don't emphasize problem solving. Others suggest that BASIC is the problem. Certainly the more-harm-than-good question is open to lively debate for such students.
In this editorial I want to express particular concern about a child's first serious computer course. Suggestions that all students should become computer literate can be traced back at least to the early 1970s. An April 1972 publication of the Conference Board of the Mathematical Sciences recommends a junior high school computer literacy course for all students. A junior high school or middle school computer literacy course of nine to 18 weeks fits my definition of a “serious” computer course. For most students now, this would not be their first exposure to computers, but it would be their first substantial course in this field.
I read many research papers, and I do a fairly good job of keeping up with the computer education literature. I am personally aware of many individual cases in which students learned about computers while they were quite young and then went on to make very good use of this knowledge. I know that many handicapped children can benefit greatly from access to a computer with appropriate peripherals. I have also heard that girls may benefit from relatively early exposure to computers. Surprisingly, however, I have never read a paper presenting strong research evidence that it is important to a student's development to receive computer literacy instruction before completing junior high school. The lack of such evidence makes me wonder why we push so hard for such a course. The initial example of the “14-day wonders” suggests that we are willing to risk poorly taught courses in order to make this introductory course widely available before students reach high school.
Personally, I would rather this first serious introduction to computers be postponed until high school rather than be poorly taught! A delay can have benefits. It allows for increased developmental maturity on the part of students, and it may increase the likelihood that the teacher will have a deeper and broader computer background and more computer experience. High schools have had computers longer than junior high schools, and high schools are frequently larger than junior high or middle schools. Because high schools want to offer a variety of computer courses, they are more apt to be able to hire a full-time computer teacher who has good credentials in this field.
Of course, one can argue about the meaning of “poorly taught.” A well-taught computer literacy course is apt to require adequate computer facilities, a well-qualified teacher, a good course outline and appropriate supportive materials for the students and teacher. A well-qualified teacher can make do with inadequate computer access, books and other supportive materials. But a poorly qualified teacher may not be much helped by a room full of expensive new microcomputers.
The issue, then, is what constitutes a well-qualified teacher. In every standardly taught subject except computer literacy we have an answer. Teacher training institutions have developed courses of study designed to meet state or provincial teacher certification standards. The teacher of seventh-grade mathematics is required to take several years of college mathematics and a math methods course. The music teacher may well have majored in music while in college. The physical education teacher may well have been a college athlete with a major in physical education.
The key question is whether we should tolerate substantially lower qualifications for a computer literacy teacher than we require for teachers of other subjects.
As argued earlier in this editorial, in many cases delaying a student's first serious computer course will help improve the quality and/or effectiveness of that course. Prior to a student's first serious computer course, we can stress learning to use various application packages and having the opportunity to make substantial use of good computer assisted learning materials. Each of these has long-term benefits to the student. Each requires only modest amounts of teacher training. And each helps prepare a student to gain more from the first serious computer course, whenever it comes.
The more-harm-than-good question has no simple answer. Most of the necessary research remains to be done. I suspect that a major part of the answer will be the teacher. There is no substitute for a well-qualified, experienced teacher supported by appropriate technology.


Editorial # 50. Problem Solving: A Computer Educator’s Perspective. V12 N5
(Editor’s Message)
David Moursund
Editorial # 50 (Vol. 12 No. 5) February 1985
The Computing Teacher
I have been thinking about and working on problem solving for many years. I have read a number of books about problem solving and listened to others expound on this topic. I have included units on problem solving in many of my books, and problem solving is often emphasized in the courses I teach. Gradually I have convinced myself that I know quite a bit about this topic. I see problem solving as part of every academic discipline and lying at the very heart of education. I see problem solving as an ordinary, everyday activity of every person who has conscious awareness and some ability to act upon the world.
At a workshop I presented recently, I devoted an hour to problem solving. During the hour I summarized and illustrated ideas that are finally becoming clear to me—ideas that I feel every computer educator should come to understand. As the hour progressed I felt less and less rapport with the workshop participants. By the end it seemed clear to me that “it didn't go over worth a darn.”
Afterward I vacillated between feeling that my presentation was totally inadequate and that the workshop participants were totally inattentive. But there has got to be a better explanation. I believe the explanation is that most people have been brainwashed into believing that problem solving is a difficult topic that is beyond most people's comprehension—and certainly not something to be covered in a one-hour presentation. To them I say, “Hogwash!”
Since I am not one to be easily thwarted, I intend to keep trying to clarify and to present my basic ideas. They are simple enough to be understood by all educators and all students. All computer educators, and eventually all educators, should have a good understanding of the roles of computers in problem solving. This editorial summarizes what I feel are the key ideas.
To begin, we need simple definitions of “problem” and “problem solving.” Remember, here we are giving short and non-technical definitions.
A problem has three parts:
1. How something actually is (initial state).
2. How you would like the thing to be (goal state).
3. What you can do about the situation (allowable types of actions to move from the initial state to the goal state).
For me, problem solving encompasses two interrelated ideas. First, problem solving is a philosophy which includes the above definition. It is a personal philosophy, stressing that I can understand a situation and then contemplate and carry out actions to change the situation, moving from the initial state to the goal state. Second, problem solving is an activity. It is an activity utilizing the knowledge, skills and energy needed to understand initial and goal states, and to formulate and carry out actions to move from initial to goal states.
Notice that there is no mention of computers or mathematics in these two ideas. If my shoe feels loose on my foot and I tie my shoe, I am solving a problem. If I make use of a dictionary to check the spelling or definition of a word when writing, I am problem solving. If I have a fever and take aspirin, I am problem solving. Computers and mathematics can play an important role in solving certain types of problems, but may play no role at all in solving many other problems.
A more detailed example emphasizes the point. I notice that I am shivering and that I feel cold. I identify “I feel cold” as a given initial situation. I have knowledge of what it feels like to be warm (or less cold). I set as a goal to feel warm. Based upon previously-gained knowledge and experience, I am aware of a number of possible actions that can move me from a cold state to a warm state. I can put on a sweater, turn up the heat, close a window, fly to southern Florida, exercise, or use auto-hypnosis to control my body's thermostat. I can mentally contemplate each of these along with other possible actions. I can determine likely outcomes and ramifications before actually taking action. For example, I may reject going to southern Florida, because I lack the necessary money and I have to teach a class one hour from now.
After a careful analysis of the situation (which may take only a few seconds, especially if I have frequently experienced the problem in the past), I decide upon an action and then take that action. Perhaps I put on a sweater. I then observe the results. Most likely I begin to feel warmer and I dismiss the problem from my mind—it has been solved. But perhaps I continue to shiver. I might reexamine the initial situation and conclude that the problem was not correctly defined. Perhaps I am shivering because of an illness-induced chill, and may need the help of a medical doctor.
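For computer educators who like to see the structure spelled out, the shivering example can be written as a tiny state-space search: an initial state, a goal test, and a set of allowable actions. The sketch below is purely illustrative and is not how people actually solve everyday problems; Python is used only for convenience, and the names solve and cold_actions are invented for the sketch.

# A minimal sketch of the three-part definition of a problem:
# an initial state, a goal state (here expressed as a goal test),
# and a set of allowable actions that move from state to state.
from collections import deque

def solve(initial, goal_test, actions):
    """Breadth-first search over the states reachable from `initial`.
    `actions` maps a state to a list of (action name, next state) pairs.
    Returns a list of action names that reaches a goal state, or None."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for name, next_state in actions(state):
            if next_state not in seen:
                seen.add(next_state)
                frontier.append((next_state, plan + [name]))
    return None

# The "I feel cold" problem, reduced to toy states.
def cold_actions(state):
    if state == "cold":
        return [("put on a sweater", "warm"),
                ("turn up the heat", "warm"),
                ("open a window", "colder")]
    return []

print(solve("cold", lambda s: s == "warm", cold_actions))
# Prints ['put on a sweater'], the first one-step plan found.

The point of the sketch is only that the initial state, the goal, and the allowable actions are three separable ingredients; choosing among the actions, and noticing when the problem was defined incorrectly, is where human judgment enters.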
The example illustrates the key ideas of problem solving. To begin, one must have ownership of the problem—a desire to understand and solve the problem. In school, most problems come from books and/or the teacher. We try to train students to simulate ownership, but with many students we don't succeed very well. Artificial classroom problems may be far removed from a student's world outside the classroom. Computers have helped some students, because the students can more easily simulate ownership of computer-related problems; some students are able to create their own computer-related problems.
One must have the knowledge and experience to recognize and adequately define both the given initial situation and the desired goal situation. That is where a broad general education enters the picture. With a broad general education one can understand and more carefully define initial and goal states for a broad range of problems.
Next, one must have a feeling of power—a repertoire of possible actions that might move one from the initial to the goal state. One must have the training and experience to contemplate various approaches and the ramifications of using a particular approach. Here again we see the value of a broad general education. Often it is necessary to use one's general information-acquiring skills to seek out approaches that might be useful in moving from the initial to the goal state. This might require library research or other types of research.
Finally, one must be able to take action and to assess the results of the action to see if the problem has been solved or to determine what has gone wrong. The actual action one takes may involve use of low or high technology. Often it draws upon one's overall education and experience.
In my original example, several of the proposed courses of action involved using high technology. Turning up the heat might mean adjusting a thermostat which controls a furnace which in turn is tied into a large-scale electrical or gas distribution system. A decision to fly to southern Florida involves purchasing a ticket and making use of an airplane owned by a large company, piloted by a professional pilot, and assisted by an air traffic control system.
It is also important to recognize the extent to which problem solving builds upon and makes use of the work of others. I can turn up a thermostat, but I cannot build and maintain a large-scale gas or electrical distribution system. I can earn money to purchase an airline ticket, but it is not likely that I will maintain or pilot a large airplane. I can learn auto-hypnosis, but a teacher and biofeedback mechanism would be helpful.
Now, let's return to computers and their role in problem solving. Computers have spawned a new academic discipline—computer and information science. What does a person need to know about computer and information science in order to understand initial states and desired goal states that involve computer technology? How can one come to understand the types of computerized or computer-assisted actions that do exist or might exist in moving from initial to goal states? How much formal education and experience is required in this area? What part of these actions should be under one's direct control (I could learn to use auto-hypnosis to adjust my own body thermostat) and what part should one merely know how to use (I could learn how to purchase an airplane ticket and go to the airport)? An introductory study of answers to these types of questions is a component of a definition of computer literacy. A more comprehensive study of the answers helps define a college or graduate degree in computer and information science.
Let's look at three examples from computer education. Consider first a Logo programming environment, perhaps in an elementary or middle school. Logo is a very rich environment for improving one's understanding of a philosophy of problem solving. After some initial instruction and practice, I am seated at a Logo system with a blank screen (initial state). I imagine a particular picture on the screen (desired goal state). Now I must contemplate the possible actions I might take. Paper, pencil and reference books may be a big help. It may be critical that I know how to “play turtle,” so my body and mind can visualize the results of instructions to the turtle. Will the actions I am contemplating produce the desired picture? What are the ramifications of my approach? For example, I could imagine a picture and sketch it on paper. I could then ask a friend to draw it using the Logo system. Some results of this approach may be that I fail to learn some details of using Logo and would not develop experience in using the system. I might later fail a Logo programming test.
Suppose I go ahead and draw a picture using Logo. At various stages I might look at what I am producing and decide to redefine the problem. I might change my mind on the picture I want to create. I might decide that the picture can be done better using a non-computer medium such as watercolors. In all of this I am gaining experience in various steps of problem solving. I have the added advantage that I am also learning something about computer and information science—sort of a "two for the price of one" deal.
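For readers who want to try something like this drawing exercise without a Logo system, here is a rough stand-in of mine using Python's standard turtle module; the procedure names square and house, and the house shape itself, are simply invented for the sketch. The blank screen is the initial state, the imagined house is the goal state, and the forward/left/right commands are the allowable actions.

# A stand-in for the Logo drawing activity, using Python's turtle module.
# Initial state: a blank screen. Goal state: a simple house shape.
# Allowable actions: forward, left and right turtle commands.
import turtle

def square(t, side):
    # "Playing turtle" means tracing these four forward-and-turn
    # steps in your head before running them.
    for _ in range(4):
        t.forward(side)
        t.left(90)

def house(t, side):
    square(t, side)    # the walls
    t.left(90)
    t.forward(side)    # climb to the top-left corner
    t.right(30)
    t.forward(side)    # one side of an equilateral roof
    t.right(120)
    t.forward(side)    # down to the top-right corner

t = turtle.Turtle()
house(t, 100)
turtle.done()

As in the Logo setting, the interesting part is not the final picture but the cycle of imagining a goal, contemplating commands, observing the result, and redefining the problem when the picture on the screen is not the one in your head.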
I do not feel that it is enough to merely provide the Logo environment and initial bits of instruction to the student. A philosophy and some general steps of problem solving should be made explicit. The philosophy is one of power, of being in control, of being able to contemplate and take actions to change a given situation, and of taking responsibility for one's actions. Logo can provide an excellent environment to practice this philosophy.
If I want to use Logo in defining and solving problems, however, I must learn the Logo system and develop skill in its use. As I learn more about the Logo system, my capability to use Logo to solve a broader range of problems will increase. Again, this should be made explicit. Not all students understand that their problem-solving skills are increased by study and practice. In addition, not all students understand that the problem solving they learn in one environment can apply in another, seemingly unrelated, environment. For many students, a Logo environment can provide a vivid demonstration of the value of study and practice. With a teacher's help, students can also learn that they are gaining problem-solving skills applicable in non-Logo environments.
Specifically, the Logo environment gives me the opportunity to learn some computer and information science. And here again I feel that the instruction should be explicit. It is not enough to expect students to learn these ideas by osmosis—key ideas should be carefully pointed out and illustrated. This requires a computer science-knowledgeable teacher and appropriate instructional materials.
For a second example, consider a tool such as a graphics package. It can draw bar, circle and line graphs, and perhaps does lettering and includes a collection of drawings done by professional artists. The development of the package may have required many hundreds of hours of work by a skilled programmer who is quite knowledgeable about computer graphics.
A student can learn to use some aspects of a graphics package in a few minutes. Learning to read and interpret graphs is a more difficult task, and learning when and how to make use of graphs and graphics is more complex still. Thus, computer educators must consider the full range of training and experience needed to include computer graphics as part of a student's repertoire in problem solving. Putting a student in front of a graphics system and providing instruction in its use is just a tiny part of computer graphics in education. It does little to increase the problem-solving capability of most students. The same type of analysis applies to other “tool” applications of computers such as data base systems, filers, spreadsheets and word processors.
As a third example, consider artificial intelligence and its recent progress. Consider also the analogy of taking an airplane trip to southern Florida. An airplane helps solve certain types of transportation problems. It requires modest training and experience for most people to learn to use a commercial airline. Similarly, an artificially intelligent computer system helps solve certain types of problems. But the problems may be much more complex, requiring substantial training and experience. Typical examples are diagnoses of medical problems, determination of where to drill for oil or prospect for minerals, and design of very large-scale integrated circuits. Quite a bit of the expertise needed to solve such problems can be incorporated in so-called “expert” computer systems.
It takes only modest training and experience to learn to use such expert systems. But the number of such systems, as well as their capability, is growing rapidly. This rapid proliferation of computers ensures that more and more people will have good access to the efforts of artificial intelligence researchers and developers. More and more problems will be solved by interaction with artificially intelligent computer systems.
The second and third examples are closely related, since a graphics package can be considered as an intelligent-like tool. In both cases we can expect rapid progress during the next decade, which will change what constitutes a high-quality education. There will need to be increased emphasis upon higher-level cognitive processes. Computer-assisted problem solving broadens the scope of problems that a person can consider solving, as well as the tools available to solve problems. However, it is not clear whether our educational system will be able to cope with such change.
Computer tools are changing certain areas of problem solving more rapidly than others, partly because some problems are more amenable to computer-assisted solution than others. Indeed, one can classify problems or problem areas on the basis of the current or potential role computers play in their solution. Such a classification might place poetry writing and psychotherapy near one end of the scale, while accounting and electrical engineering are placed near the other end. Such an analysis can help put computers into a proper perspective. Any particular problem area can be examined to see the relative importance of computer and non-computer approaches. Even in areas where computers are extensively used, it is evident that other factors are equally or more important. Computer-assisted problem solving is very important, but it is still a very small part of problem solving.
That's it! Those were the key ideas that I presented in the workshop. Try them with your students and colleagues. Let me know how they work.


Editorial # 51. The Thrill of Victory—the Agony of Defeat. V12 N6
(Editor’s Message)
David Moursund
Editorial # 51 (Vol. 12 No. 6) March 1985
The Computing Teacher
I learned to play chess when I was in junior high school. I guess I was slightly better than average at the game, but certainly not strongly gifted.
A few years later I purchased a book on chess. I learned three things from the book: some general ideas on center control and mobility; some rules of thumb such as “When ahead in material, exchange pieces but not pawns;” and some specific chess openings.
I didn't play any competitive chess in high school. However, I have a distinct memory of playing several games with a person I considered to be quite a bit smarter than me. It seemed that we had about equal chess experience and interest. But he had not studied a chess book. Using the openings I had studied, I quickly achieved an initial advantage in each game. Using general ideas of center control and mobility, I was able to increase the advantage. And, from time to time, the rules of thumb helped me to determine moves that seemed to maintain my advantage.
In retrospect, this was the first time that I had a clear illustration of the intellectual or problem-solving advantages to be gained by building upon the work of others. Just a few hours of studying a chess book significantly improved my game.
I continued to play a little recreational chess in college and graduate school, eventually learning that many people were considerably more talented than me! Still, I enjoyed the game and certainly was an above average player.
During the 1970s, the Association for Computing Machinery's fall conference became the site for the annual United States National Computer Chess Tournament. Typically a dozen entrants would pit their chess-playing programs against each other in a four-round tournament. It was fun to watch the games proceed and to experience the enthusiasm and camaraderie of the humans who had developed the software.
Occasionally a match would be arranged pitting computer against human. I remember one such match in the late 1970s between the current United States (human) speed chess champion and one of the best computer chess systems. In speed chess each player must average a move every five seconds.
As they sat down to play, the human player appeared nervous. Well he might, because he was pitted against a program that had been many years in development and that was being run on a very fast mainframe computer. Sure enough, in the first game the computer took advantage of its extensive book of openings to gain a slight advantage. The human blundered under the pressure, and the computer increased its advantage. Soon it was clear that the computer would win the first game.
But even as he went down to defeat, the human began to relax, and a hint of a grin appeared on his face. He had overcome his first fear, his confidence had returned, and he had seen some weaknesses in the way the computer played after its initial strong opening. Sure enough, in the second and subsequent games, the human handily defeated the computer!
When chess software became available for a microcomputer, I acquired a program and played with it for a while. Then I lost interest for several years, finding that working with graduate students and ICCE allowed little time for recreational chess.
Recently I became interested in chess again, and I dug out a copy of a 1979 microcomputer chess program from Hayden Software named Sargon I. I played a number of games, winning some and losing some. The program has a number of flaws. Indeed, under certain circumstances it cheats by making an illegal move!
For Christmas I received a copy of the 1983 Sargon III. I was very impressed by the four years of software progress it represents. The newer program is more user friendly, has all the features I could imagine wanting, and plays better quality chess. We sometimes forget that such rapid software progress is occurring. The newer Sargon program is far superior to the old, even though they both run on the same hardware.
One feature of Sargon III is that after it makes a move, the computer continues to "think" about possible next moves while its opponent is deciding upon a move. I found this particularly intimidating. All the time I was trying to analyze the board, I could imagine the computer analyzing it more quickly and more deeply, preparing to counter whatever move I might make.
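For readers curious about how such a feature might work, here is a rough sketch, written in modern Python and not taken from Sargon's actual code: a background search keeps deepening until the human finally moves. The Ponderer class, the sleep-based stand-in for search work, and the timing figures are all invented for illustration.

    import threading
    import time

    class Ponderer:
        """Toy illustration of 'thinking on the opponent's time'."""

        def __init__(self):
            self.stop = threading.Event()
            self.depth_reached = 0

        def search(self):
            # Stand-in for a real game-tree search: each pass through the
            # loop represents completing one more ply of lookahead.
            while not self.stop.is_set():
                time.sleep(0.1)            # pretend this ply took some work
                self.depth_reached += 1

        def ponder_while_waiting(self, seconds_human_thinks):
            worker = threading.Thread(target=self.search)
            worker.start()
            time.sleep(seconds_human_thinks)   # the human is deciding on a move
            self.stop.set()                    # the human has moved; stop pondering
            worker.join()
            return self.depth_reached

    if __name__ == "__main__":
        plies = Ponderer().ponder_while_waiting(seconds_human_thinks=1.0)
        print(f"While the human thought, the program looked about {plies} plies ahead.")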
Another feature of Sargon III is its book of openings. Its memorized collection of openings is far superior to what I can recall from my childhood.
Thus, the computer invariably gains an advantage over me in the first part of any game. Often this advantage is preserved as I go down to defeat.
I find the most interesting aspect of the Moursund versus Sargon conflict to be the human element. Several of my children often come to watch—they offer suggestions and cheer me on. Perhaps they sense my inner turmoil as my brain struggles to compete with a machine. Time after time I find myself blundering and having to resort to the feature that allows one to take back moves. Sometimes I win, and the feeling of elation is real. More often, however, I lose.
I suppose that during my lifetime I have spent about a thousand hours studying and playing chess. I have some pride in my skills. This pride has been carefully preserved over the years by avoiding competition with better quality chess players. But now I find that I cannot even play as well as a computer program that runs on an inexpensive microcomputer. That does considerable damage to my ego!
This chess example is a good illustration of what the future will bring to many of us. The field of artificial intelligence is now beginning to produce commercially viable products. These knowledge-based expert systems perform at a high level in a number of problem areas. Indeed, they are competitive with human experts within certain narrow problem domains. Eventually we will have knowledge-based expert systems that solve or help solve many of the problems that our schools currently teach students to solve without use of computers. Eventually we will have computer systems that can outperform teachers in a number of tasks that they currently consider to lie within their domain of expertise.
Many of us educators pride ourselves on our knowledge, teaching skills and problem-solving skills. How will we deal with a challenge from computers? It is interesting to think about how teachers can meet the challenge of artificial intelligence.


Editorial # 52. The Fifth Generation: It’s for Real. V12 N7
(Editor’s Message)
David Moursund
Editorial # 52 (Vol. 12 No. 7) April 1985
The Computing Teacher
Recently I attended a talk given by Pamela McCorduck. She and Edward Feigenbaum are co-authors of The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World. The revised and updated second edition was published in paperback in 1984 by Signet. I found the book interesting because I have taught artificial intelligence courses and have had a long-term interest in this field.
The Fifth Generation is about a ten-year Japanese project (now into its fourth year) that proposes great progress in both hardware and software, with the ultimate result to be a computer system that exhibits a high level of artificial intelligence. The book describes the project as well as competing work going on in a number of other countries. A major theme of the book is that the United States is losing its computer lead and may well fall behind the Japanese.
Both in reading the book and in listening to Pamela McCorduck talk I was struck by the “hype.” There appears to be a concerted effort to awe us by the potentials of faster machines, better software and artificial intelligence. The message seems to be, “Watch out! The Japanese are coming! We must do something!” The book contains a flavor of global warfare.
What is this “fifth generation,” and what difference might it make to education? Is it mainly hype, or is it for real?
My feeling is that the fifth generation is quite important and will eventually help change the basic nature of education. Beneath the hype is a culmination of computer progress that is important to all of us. The following discussion of computer “generations” supports my thesis.
Many years ago it seemed easy to keep track of the generations of computer hardware. The first generation was characterized by vacuum tubes, the second by transistors, the third by integrated circuits. That hardware classification approach carried us through the 1960s, but then it began to run into trouble. There is an easy distinction between a vacuum tube and a transistor; there is an easy distinction between an individual transistor and an integrated circuit containing a number of transistors and other components. But where does one go from there?
Progress in integrated circuitry continued smoothly, with no gigantic breakthrough. But some hype was needed to publicize the progress and to help sell new hardware. So eventually we had fourth generation computers, employing large scale integrated (LSI) circuitry or very large scale integrated (VLSI) circuitry.
Now people talk about the fifth generation of computer hardware. It is characterized by the use of still larger and faster VLSI circuitry, very large primary and secondary storage, and parallel processing (employing a large number of processing units). But such fifth generation hardware is not spectacularly different from fourth generation hardware. It is only when we also look at software progress that we begin to understand the significance of fifth generation computer systems.
The progress in systems software and computer languages has been steady, if not as spectacular as hardware progress. Early computers had essentially no operating systems. One user would have complete control of the machine, doing a “cold” start. The bootstrap process of first keying in or in some other way loading a program that would load one's main program was representative of first generation systems software. The early programs were written in machine or assembly language.
Soon we got more sophisticated assemblers, higher level languages with their compilers, and an operating system able to process a stream (batch) of jobs. Input and output were handled by card-to-tape and tape-to-printer systems that operated simultaneously with the central batch processing system. That represented the second generation of systems software.
Progress continued, and we got quite sophisticated disk operating systems that handled batch processing, multi-tasking, and the early efforts at timeshared computing. Application libraries grew rapidly and user interfaces became more friendly. These ideas characterize a third generation of software.
The fourth generation of software is represented by where we are now, with better user interfaces, easier access to data bases, networking, and more powerful programming languages. As with the hardware generations, there is no clear line of demarcation between third and fourth generations.
But the next generation of software does represent a significant jump. In simple terms, it has two major parts. First is an operating system and programming languages that can take advantage of parallel processing. It is difficult to appreciate how hard it is to take advantage of having thousands or perhaps hundreds of thousands of processing units all working on a single problem. But significant progress in this endeavor could well produce computers that are many thousands of times as fast as current machines.
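A toy example may convey the idea, if not the difficulty. The sketch below, written in modern Python purely for illustration, splits one very simple problem, summing a long run of integers, across a few worker processes and combines the partial answers; the chunking scheme and figures are invented. Coordinating thousands of processors on genuinely hard problems is another matter entirely, and that is exactly the fifth generation challenge.

    from multiprocessing import Pool

    def partial_sum(bounds):
        """Sum the integers in the half-open range [lo, hi)."""
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        n, workers = 1_000_000, 4
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        chunks[-1] = (chunks[-1][0], n)      # cover the remainder if n is not divisible
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total == n * (n - 1) // 2)     # True: same answer as the closed-form sum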
The second major part is artificial intelligence (AI). AI researchers work to computerize some of the knowledge of an expert or a group of experts in a particular problem-solving domain. Progress in AI has been steady, but is not characterized by distinct generations or spectacular breakthroughs. Perhaps the most obvious sign of this progress is found on the front covers of many leading magazines in the past two years. Artificial intelligence has become commercially viable. Many companies believe that it is now profitable to solve or help solve a number of problems using AI techniques.
The problems that AI is attacking are very difficult. It is only now, about 40 years after the first electronic digital computers, that the necessary hardware, software and computer science progress are combining to produce significant results. The term “fifth generation” is a shorthand way of representing this progress and the goals for the next decade. A reasonable level of success is guaranteed, in that rapid evolutionary progress will continue in hardware, software and computer science. No spectacular breakthrough is necessary to produce computer systems that are increasingly capable of solving more and more problems that once were only in the province of very highly qualified human experts.
Judging from computer history, progress represented by the fifth generation will gradually filter down into the computer systems that educators, students and others can access on a daily basis. Eventually fifth generation hardware, software and ideas will become commonplace.
The educational implications are profound. A very simple example is provided by the potentials for voice input. If voice input becomes readily available, should we teach typing, cursive handwriting, or printing? Or consider problem solving in the sciences and mathematics. If a computer can solve a particular category of problem, should students be required to learn to solve the same type of problem by hand?
These questions suggest that education must change to reflect people having easy, everyday access to very powerful machines. Moreover, they point to the equity-of-access problem. The analogy with access to books is instructive. In some sense public libraries and the fact that books are relatively inexpensive have kept the equity of book access under control. But computers are much more expensive than books, and we don't have anything like a public library system for free access to computers. It seems evident that some people will have the financial resources to take advantage of newer computer systems, and others won't.
Even these questions seem easy when compared to questions that arise as one begins to consider the creation of very large-scale data banks of knowledge that can be accessed and processed by artificially intelligent computer systems. The creation and maintenance of such systems may initially be quite dependent upon federal funding.
Who will have access to the information in the data banks? Who will have the training to retrieve such information? Who will control what “facts” go into the data banks? Who will control the type or nature of the “reasoning” that will be programmed into the computer systems? Consider, for example, questions related to a social system. There are considerable differences of opinion in the U.S. between Democrats and Republicans as to the correct answer or lines of action for many questions. The military-industrial complex may have still another view that it feels is correct.
Such questions place still additional burdens on educated people and their educational systems. The issue of fifth generation computers is not “Watch out! The Japanese are coming!” The issue is really “Watch out! Fifth generation computers are coming!”


Editorial # 53. Modem. V12 N8
(Editor’s Message)
David Moursund
Editorial # 53 (Vol. 12 No. 8) May 1985
The Computing Teacher
Recently I purchased a modem for one of my home computers. I am not sure why I resisted so long, but I suspect that there is a deep psychological reason. I also discovered several other reasons as soon as the modem was installed. The modem precipitated a family crisis. A shouting match between two of my children (one wanted to talk via voice to friends; one wanted to talk via computer to bulletin boards) led to both losing phone privileges. Also, my wife threatened bodily harm to the computer when she saw us tying up the phone for an hour or two a day.
The obvious solution was a second phone line, and that has now been installed. I feel fortunate to be able to afford such a luxury: $15 a month to purchase family harmony. Still, that isn't the end of the story. I am finding that spending an extra hour a day on a computer has changed my social interaction with my family. Moreover, it adds to the contention for access to one of our three home computers. It seems clear that I have not yet worked my way through these issues.
But these personal issues seem small as I think about larger issues of communication via and/or with a computer.
Twenty years ago I was teaching at Michigan State University in East Lansing. A computer terminal, telephone and modem were installed in a room in the computing center, and I was told that the system could be used to call a timeshared computer system in Chicago. That was my first exposure to BASIC and to use of a modem.
I recall the psychological barrier. The terminal, timeshared computing and telecommunications were all new to me. The documentation was poor. I knew that long distance phone calls were expensive. Computing via commercial long distance phone lines was still relatively new.
However, after much trial and error I learned to use the system, gaining an initial appreciation for timeshared BASIC, for use of a modem, and for the difficulties associated with using a remotely located computer via the commercial telephone system. During the past 20 years I have learned quite a bit about using remotely located computers.
In this message I want to comment on three general types of usage of a computer system via a modem. First, such a system can be used to access a computer in order to write computer programs. Twenty years ago that was really something—being able to do interactive computing on a timeshared system. Our university's large computer system could process FORTRAN, COBOL and assembly language in a batch mode with punched card input. Now, of course, we are all familiar with interactive computing. If the quality of the system is high, it tends to make little difference whether the CPU is located just a few inches from the keyboard or is located many miles away. What difference does it make how long the interconnecting wires are? The key idea is interactive computing, and that can be done using a variety of equipment.
A second major use of a computer with modem is to access data bases. Many of the data bases one would like to access are quite large and require considerable expense to keep updated. Thus, the most common way to access these data bases is via a modem-equipped computer system.
But such access is now commonplace and in many business settings is a necessary cost of doing business. Imagine running an airline ticket agency without access to the computerized reservation system.
Nowadays the issue is not the hardware. Rather, the issue is the software and the training needed to adequately use the software. Consider problems associated with accessing the type of information found in a (print material) library. All of us have been trained to use a card catalog, to physically locate materials in a library, and to browse. Access via computer changes the situation.
I don't think the main difference is learning to use the computerized equivalent of a card catalog. The amount of training needed to use a card catalog would suffice to teach a person to search a computerized data bank. The task of physically locating materials changes, but the change may make it easier. What could be easier than having the material immediately displayed on one's computer screen? (Of course, retrieving books and journals via interlibrary loan can be a pain, but that difficulty exists independently of computerized data banks.)
The main change is browsing. How often have you gone to a library to find a particular book, and then ended up checking out several books located on nearby shelves? One can browse a data bank, but two things are different. First, the concept of "nearby shelves" is missing. Thus, it takes specific training to learn to use a computer system to browse related materials. But perhaps much more difficult is the psychological barrier of cost. When I am in a library, the cost of browsing is the cost of my time; moreover, browsing is fun. When I am on a computer system there are computer charges, communications costs, and charges to access particular data banks. All in all I find these to be quite intimidating.
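To suggest what might replace "nearby shelves," here is a toy sketch in modern Python with an invented four-item catalog: two titles count as "nearby" when they share subject terms, so browsing becomes a matter of following shared terms rather than walking along a shelf. The catalog entries and the browse function are made up for this illustration.

    # Invented catalog: title -> set of subject terms.
    catalog = {
        "Introduction to BASIC":        {"programming", "basic", "education"},
        "Teaching with Microcomputers": {"education", "computers", "teaching"},
        "Structured FORTRAN":           {"programming", "fortran"},
        "Classroom Computing":          {"education", "computers", "classroom"},
    }

    def browse(title, min_shared=1):
        """Return other titles sharing at least min_shared subject terms."""
        terms = catalog[title]
        return [other for other, other_terms in catalog.items()
                if other != title and len(terms & other_terms) >= min_shared]

    print(browse("Teaching with Microcomputers"))
    # ['Introduction to BASIC', 'Classroom Computing']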
A third area of modem use—and the main motivation for this message—is for electronic bulletin boards and electronic mail. A computerized bulletin board is somewhat like a “hard copy” bulletin board on which one can post messages and comments. The messages and comments may be arranged by topic and/or who is allowed to access them. Electronic mail is roughly like regular mail, but the actual transmission of a letter is done electronically.
One can argue that bulletin boards and electronic mail are merely slight variations on traditional written communication. Certainly it makes little difference whether a carefully composed business letter is transmitted electronically or by the postal service. And jotting a quick memo to an acquaintance is similar to keying in a short message to an electronic mail system.
But those examples are not representative of what happens in an electronic bulletin board environment. A number of people may get involved in concurrent discussions of several topics. Many of the people may never have met face to face. The pace of the communication may be rapid, with messages being entered and read once a day or even more often.
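For concreteness, the bulletin board environment just described can be reduced to a very small data structure: messages grouped by topic, with anyone who connects able to post or read. The sketch below, in modern Python, is purely illustrative; the topic, authors, and messages are invented, and no actual bulletin board system is being quoted.

    from collections import defaultdict

    board = defaultdict(list)      # topic -> list of (author, message) pairs

    def post(topic, author, message):
        board[topic].append((author, message))

    def read(topic):
        return board[topic]

    post("testing in schools", "dgm",   "Tests measure only part of what matters.")
    post("testing in schools", "guest", "Testing of any sort does more harm than good.")
    for author, message in read("testing in schools"):
        print(f"{author}: {message}")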
For me, electronic bulletin boards represent a unique new mode of communication. The closest thing in my personal experience to the bulletin board was when I was a ham radio operator. A bunch of people who didn't know each other would chat about miscellaneous topics. Frequently the topic was their radio equipment; the fun was in establishing and maintaining the communication link.
Bulletin board communications tend to focus on issues rather than on the computer systems being used. They can involve a number of people who have no knowledge of each other's backgrounds and viewpoints. Under such conditions effective communication is challenging, if not downright difficult.
One example will help illustrate what I mean. In one bulletin board that I use regularly, the topic of testing in schools arose. Various people expressed opinions on this topic. One person absolutely blasted the idea of testing of any sort, supporting his position with strong, emotion-laden arguments. I responded to this person with some "logical" arguments to suggest that he was wrong. He responded with more arguments along the line of his original approach.
It seemed clear that even though we were exchanging written messages, we were not communicating. Eventually it became clear why. I come from a traditional educational background, and I was quite successful in our traditional educational system. But the person I was trying to communicate with has dyslexia. He did very poorly on tests while in grade school. As a consequence he was labeled as mentally retarded and did not receive an early education appropriate to his needs. It is no wonder that his arguments about testing are emotion-laden and that we were not able to communicate effectively.
But that is a key point. When humans communicate face to face, a significant percentage of the communication is nonverbal. (I have heard estimates that range up to two-thirds or more.) Much of the communication is in the affective domain. Electronic bulletin board communication is quite restricted in the affective domain. This is especially true when one is composing "live" at the keyboard, realizing that costs are mounting.
Communication is difficult enough when one is allowed to use all of one's senses and abilities. The restrictions imposed by an electronic bulletin board severely impede communication. As with any new technology, electronic bulletin boards have both good and bad features—both good and bad potential. Likely we will once again look to our educational system for help. I can imagine that “Bulletin Board Communication” will eventually enter the scope and sequence of our school curriculum.


Editorial # 54. Next Year. V12 N9
(Editor’s Message)
David Moursund
Editorial # 54 (Vol. 12 No. 9) June 1985
The Computing Teacher
Around this time of year the ICCE staff begins planning next year's issues of The Computing Teacher and other ICCE publications. We start by gazing into a crystal ball, looking for what the future holds for computer education.
My usual approach to this type of crystal ball gazing is to divide the field of computer education into three main parts: learning/teaching about computers; learning/teaching using computers; and learning/teaching integrating computers. I also look at hardware and software trends and how they relate to these three general areas of computer use.
Hardware and software trends are easy to forecast. One can predict with considerable confidence that hardware will get better and become more readily available. For example, right now public schools in Alaska have approximately one microcomputer per 20 students. Alaska leads the nation in such computer availability. But one computer per 20 students is still very modest compared to the one computer per student (plus equal facilities at home) that will eventually become commonplace. Along with continued rapid growth in computer availability, it seems likely that increased networking and increased computer power will also come. Disk storage capabilities continue to increase, and personal hard-disk systems are growing in popularity.
The quality of educational software is also improving quite rapidly, and the value of the basic application packages is now well understood. If a school must choose between providing students with a word processor and providing a couple of high-quality simulations, it will choose the word processor. The trend toward integrated software application packages is clear. School systems will likely decide to place increased emphasis upon such integrated packages, since the cost is relatively small given the broad range of individual pieces of instructional software that such packages provide.
The crystal ball seems quite clear in the area of learning/teaching about computers. Computer science (which includes computer programming) is well entrenched at the secondary school level. The Advanced Placement course and exam help define the upper end to which a secondary school might aspire. Eventually almost every secondary school will offer one or more computer programming or computer science courses. This means that eventually in the United States perhaps 50,000 teachers will have as a teaching assignment the teaching of computer programming and/or computer science. For those in larger schools, this will become their primary teaching assignment.
Teaching/learning using computers (CAL) now constitutes a very minor force in precollege education. If all computers in precollege education were used exclusively for this purpose, the average student would experience less than five minutes a day of CAL. Thus, it is safe to say that the current average CAL use is well under that figure. Of course it is not evenly distributed, and many teachers have yet to incorporate its use.
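The arithmetic behind that claim is easy to check, as the short sketch below shows (modern Python, purely illustrative). It computes an upper bound on daily computer minutes per student from a students-per-computer ratio and an assumed six-hour school day; the 75-to-1 figure in the second call is a hypothetical national ratio, chosen only to show how quickly the bound falls below five minutes.

    def minutes_per_student(students_per_computer, school_day_hours=6):
        """Upper bound on daily CAL minutes per student if every machine
        ran CAL for the entire (assumed) school day."""
        return school_day_hours * 60 / students_per_computer

    print(minutes_per_student(20))   # 18.0 minutes: Alaska's 1-per-20 best case
    print(minutes_per_student(75))   # 4.8 minutes: a hypothetical national ratio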
It seems reasonable to guess that CAL will grow about as fast as the availability of hardware. Over the years we can expect all students to experience substantial use of CAL and almost all teachers to incorporate it into their repertoire. Once a teacher has made the initial plunge, increased use of CAL may only require learning about a few new pieces of software a year.
Teaching/learning integrating computers includes the use of word processors, data base systems, graphics packages, communications software and other applications software. Here the crystal ball is cloudy. Sometimes when the mist clears one can see a scenario in which an integrated package (incorporating all of the above applications, and more) is used beginning in the very early grades. All students learn to keyboard and to write in a word processing environment. All students learn process-oriented writing in an environment that includes a spelling checker and a grammar checker. All students learn to use and to create data bases. All students learn mathematics from the first grade on in an environment that includes calculators and computers.
Sometimes the crystal ball becomes even clearer as one looks at increased computer availability and computer-integrated instruction. Then one sees every student having an individual education plan (IEP) and a strong trend toward interdisciplinary studies. Integrated software is a tool that runs across all disciplines, blurring the lines between them. One can foresee great change in the curriculum as students master the computer-as-tool, and as the curriculum adjusts to such computer usage.
More and more schools and school districts will have a computer coordinator. This will be a person skilled in working with teachers, broadly knowledgeable in computer applications to the curriculum, and devoted to learning more about computer education. The profession of the computer coordinator will gradually become distinct from that of the computer science teacher. The number of computer coordinators will probably exceed the number of computer science teachers, and they will exercise a major role in curriculum changes.
Next year's issues of The Computing Teacher will reflect the crystal ball gazing just described. More specifically, here are some things to expect:
1. Articles appearing in The Computing Teacher will tend to be shorter and will emphasize classroom applications.
2. The Computing Teacher will give special attention to the interdisciplinary aspects of computer-as-tool and how this is modifying traditional discipline-oriented barriers. The editors will solicit articles that describe interdisciplinary applications. We will also seek out articles describing major potential curriculum changes based on appropriate use of computer technology. For example, the National Council of Teachers of Mathematics has recently supported integration of calculators and computers into the full K-12 mathematics curriculum. Such changes will affect almost all teachers.
3. There will be an increased emphasis on computer-assisted learning and applications software. This means more software reviews and more information about new educational software.
4. One section of The Computing Teacher will be devoted to teachers of computer programming and computer science. It will contain articles about teaching computer science, including the Advanced Placement course.
These are by no means all of the changes you can expect next year; some things will be a surprise. Remember, much of the content of The Computing Teacher depends on articles that are submitted for publication. We need your help. What would you like to see, and what do you think other readers would like to see? Write to us, sending your ideas and/or an article. The field of computer education is in its infancy. The best is yet to come!

The Fifth Generation: It’s for Real


Retrospective Comments 12/27/04

My interest in AI in education has continued over the years. In 2004 I wrote a book on AI in education. Like all of my current work, it is available free on the Web. See:

Moursund, D.G. (2004). Brief Introduction to Educational Implications of Artificial Intelligence. Access at http://darkwing.uoregon.edu/~moursund/AIBook/.

It has been more than 19 years since I wrote the editorial about the Fifth Generation. The Fifth Generation turned out to be more hype than solid reality. Moore's Law has continued to be a good predictor of increasing computer power over the years. Microcomputers of today rival the supercomputers of the time I wrote the editorial. Progress in AI has continued. We now have voice input and voice output computer systems. We now have a wide range of Expert Systems that contain/embody some of the knowledge of experts in narrow fields. Way back in 1997 a computer system beat the world's chess champion in a chess match.

By and large, however, progress in AI has been relatively slow. It is true that computer systems (including robots) are getting better and smarter. It is also true, however, that the intelligence of such systems is very modest indeed. Still, within narrowly defined problem areas, such systems outperform human experts.

It will be interesting to see what the next two decades bring in the capabilities of AI systems.