Chapter 7: A Computer System

Earlier chapters in this book discuss how physical and cognitive artifacts can greatly enhance a person's ability to perform. This chapter focuses on a problem-solving environment that consists of hardware, software, and a computer user.

Figure 7.1 Interaction between computer user, hardware, and software.

In Figure 7.1, the computer user may be a person, but may also be a factory machine, a robot, or computerized instrumentation designed to gather data. However, the emphasis in this book is on a human interacting with computer hardware and software to solve problems and accomplish tasks.

The diagram in Figure 7.1 suggests an important concept. This human-machine problem-solving system can get better--become a more powerful resource--by improving the hardware, by improving the software, and/or by improving the human using the machine.

[[Consider the somewhat parallel situation of improving our educational system. Many school reform movements are based on helping teachers learn to do more and to do it better; these approaches consider staff development of teachers to be the key to improving our educational system. Others take a similar approach but focus on school administrators, school boards, or parents. Such approaches can be contrasted with providing better buildings, better facilities in those buildings, computer-assisted instruction, and other changes that do not depend on improving and extending the capabilities of teachers and the other adults involved in the educational process. Of course, both approaches need to be appropriately combined to improve our educational system. However, it is often convenient to talk about the two approaches separately.]]

This chapter discusses the hardware and software components of Figure 7.1. The goal is to increase understanding of the computer as a resource in making effective use of your intelligence.

Computer Hardware

Work on the development of electronic digital computers went on in England, Germany, and the United States before and during World War II. In England, a special-purpose computer proved useful in cracking German secret military codes. In Germany, development efforts were hampered by the war and by the low priority placed on the project by the military regime. In the United States, the world's first general-purpose electronic digital computer, the ENIAC, became operational in December 1945.

By 1950, approximately 20 computers had been constructed on a worldwide basis. Each was a one-of-a-kind, hand-built machine. Then, the era of mass production of computers began.

Improvements in computer hardware came rapidly. By the mid-1950s, there were a number of different brands and models of computers on the market, most costing many hundreds of thousands of dollars. The IBM model 650 was a typical mid-priced machine. It was a vacuum tube machine, and its user's manual discussed its "blinding speed" of about 5,000 arithmetic operations a second. Certainly that was fast compared to calculators that took several seconds to perform a multiplication or a division. The IBM model 650 could calculate the square root of a number in about one-sixth of a second. (That is only slightly faster than the square root key on today's $4 handheld, solar-powered calculator!)

Although the transistor was invented in 1947, it took more than 10 years before transistors began to be used in place of vacuum tubes in computers. A transistor is more reliable than a vacuum tube. It uses less power and generates less heat.

The very rapid progress in transistor technology has been the driving force in computer hardware ever since. The individual transistor gave way to the integrated circuit (often called a chip), the large-scale integrated circuit, and the very large-scale integrated circuit. Individual wire-wrapped magnetic cores that could store one bit of information gave way to transistorized memory. Now, a single memory chip, perhaps the size of your fingernail, can store more than 64 million bits of information. It is common to use an 8-bit code for characters. Thus, this chip can store more than 8 million individual characters. This is roughly the equivalent of 8 thick books of information.
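
The arithmetic behind that estimate is easy to check. Here is a minimal sketch in Python, using the figures from the paragraph above (the one-million-characters-per-thick-book figure is a rough assumption):

    # Rough capacity arithmetic for a 64-megabit memory chip.
    bits_on_chip = 64000000         # capacity claimed above
    bits_per_character = 8          # a common 8-bit character code
    characters_per_book = 1000000   # rough figure for one thick book

    characters = bits_on_chip // bits_per_character
    print(characters)                          # 8000000 characters
    print(characters / characters_per_book)    # about 8.0 books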

Progress in chip technology led to the microcomputer. A medium-priced modern microcomputer may be 10,000 times as fast as the IBM model 650. It might also have 10,000 times the internal memory as well as a disk storage system. Magnetic-disk storage was not available on the early mainframe computers. In the early 1960s, a 5-megabyte disk storage system was considered nearly state of the art. Now, many laptop computers have 160-megabyte or still larger capacity disk drives.

[[The past few years have seen a tremendous rate of improvement in the price-to-performance ratio of disk storage devices. During 2001, new microcomputers typically had disk drives of 10 to 40 or more gigabytes. An external 80-gigabyte disk cost less than $500.]]

Computer hardware is fast and reliable. Continued rapid progress is occurring in improving the cost-effectiveness of computers. Some of today's laptop computers, costing perhaps $1,000 to $3,000, are more powerful than the million-dollar mainframe computers of 20 years ago.

The key idea here is that as the cost-effectiveness of computer hardware grows, it significantly changes the types of problems that can be solved and the strategies that can be employed effectively. A graphical interface and extensive use of colorful computer graphics are now common on microcomputers. Nowadays, even young children routinely play with computer graphics facilities that are far better than those on the multimillion-dollar mainframe computers of 30 years ago.

By late 1995, approximately 38% of households in the United States had a general-purpose microcomputer; the worldwide installed base of microcomputers exceeded 150 million machines. Worldwide sales of general-purpose microcomputers in 1995 were approximately 50 million machines. Microcomputers are of growing importance in business, manufacturing, government, and education throughout the world.

[[By 2001, more than 50% of US households had a microcomputer. Worldwide sales were running at well over 100 million machines a year.]]

Computer Software

A computer program is a detailed step-by-step set of directions that a computer can carry out. Computer software is divided into three main categories.

  1. Systems software. This is the operating system, the software that interfaces directly with the hardware. Many millions of microcomputers make use of DOS (Disk Operating System), developed by Microsoft Corporation. This is an operating system specifically designed for IBM and IBM-compatible microcomputers with disk drives. A new Microsoft operating system named Windows 95 came into mass distribution in the fall of 1995.

    The operating system for the Macintosh goes by the name "system" followed by a version number, as in System 6 and System 7.

    [[Both Apple and Microsoft have continued to update their operating systems. Linux has made some inroads as a competitor to Microsoft's operating systems.]]

  2. Programming languages. Hundreds of these have been developed. Examples include FORTRAN, COBOL, BASIC, Logo, Pascal, C, and Ada. For each such programming language (source language) there are computer programs that translate from the source language into the machine language of some type of computer. For example, there are computer programs that translate from C into the machine language of IBM compatibles, Macintoshes, and so on.

    [[The Language List [Online]. Accessed 11/7/01: http://cui.unige.ch/langlist contains information on more than 2,350 programming languages.]]

  3. Applications software. These are computer programs specifically designed to help solve some particular category of problems such as word-processing, database, spreadsheet, and graphics programs. Perhaps you are familiar with programs that can solve a variety of math problems. There are thousands of different pieces of applications software that are commercially available.
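
To make the definition at the start of this section concrete, here is a minimal sketch of a small application-style program, written in Python. It is a step-by-step set of directions that a computer can carry out to solve one narrow kind of math problem (the choice of problem is illustrative only):

    # Solve the quadratic equation ax^2 + bx + c = 0, step by step.
    import math

    def solve_quadratic(a, b, c):
        discriminant = b * b - 4 * a * c
        if discriminant < 0:
            return None                    # no real roots
        root = math.sqrt(discriminant)
        return ((-b + root) / (2 * a), (-b - root) / (2 * a))

    print(solve_quadratic(1, -3, 2))       # prints (2.0, 1.0)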

Historically, these three categories of software were distinct. Now, however, the dividing lines are blurred.

Figure 7.2 The blurring dividing lines between categories of software.

One example of this blurring is that, increasingly, computer programming languages are being built into applications programs. This has been true of database software for quite a while. Hypermedia programs, such as HyperCard and HyperStudio, include built-in programming languages. Microsoft Corporation uses Visual BASIC as the programming language in a number of its applications. We will discuss the problem-solving implications of this trend further in Chapter 9, which focuses on computer programming.

Hardware and Software--A Computer System

As indicated in the previous section, there are three general categories of computer software: operating systems, programming languages, and applications. Figure 7.3 presents a simplified view of how these types of software interact with each other, with hardware, and with a computer user. An application program is written in a programming language. The programming language and operating system are designed so that they can interact in an appropriate manner. The hardware actually executes (carries out) the instructions given in the application program.

Figure 7.3 Computer hardware, software, and user.

Of course, it is possible for an ordinary person to make modifications to hardware, such as adding a new hard drive, modem, scanner, or printer. A person can make modifications and/or additions to an operating system. And, even young students can learn to write computer programs. Thus, a more accurate picture appears in Figure 7.4.

Figure 7.4 The computer user working directly with both hardware and software.

Figure 7.4 suggests that the computer-and-human system is complex. Indeed it is! Thus, considerable efforts are underway to simplify this situation.

Human-Machine Interface

The human-machine interface for a computer system is a combination of hardware and software. In the earliest history of computers, this interface consisted of toggle switches and plug-in wires that could be used to connect various components of a computer. In essence, computer programming consisted of rewiring the machine. Needless to say, it was not easy to learn to write these types of computer programs. Mistakes were frequent. Often the mistakes were difficult to detect and to correct.

Gradual progress occurred in improving the human-machine interface. One major breakthrough was the realization that a computer program could be stored in the computer's memory, rather than the computer having to be rewired to carry out a particular sequence of steps. Other improvements in hardware included punched paper tape, magnetic tape, and punch cards. Programs and data were stored as patterns of holes in paper tape or cards.

Improvements in software included programming languages such as FORTRAN and COBOL. Initially, FORTRAN was designed to be used by people with graduate degrees in engineering, math, and physics. Gradually, it became apparent that college freshmen--and even high-school students--could learn to use this programming language. By the late 1950s, computer use had begun to creep into a few elementary and secondary schools.

Eventually, time-shared computing was developed. A time-shared computer system is designed to serve a number of users--each with their own computer terminal--at the same time. Users could be located many miles from the computer, but could still interact with it just as easily as people in the building that housed it. This was a major improvement in the human-machine interface, and it was the beginning of the computer networks that are so commonplace today.

Another major improvement was the development of the personal computer during the 1970s. People could have complete control of their own personal computers. They could customize the hardware and software to fit their own particular needs. Rapid improvements in the cost-effectiveness of microcomputer hardware have made possible the implementation of a variety of improvements in the human-microcomputer interface.

Quite a bit of the processing power in a modern microcomputer is used in the human-machine interface. The idea is to make computers easy to learn and easy to use. For example, a mouse is relatively easy to learn to use, and it is easy to learn to recognize and remember the meaning of the various icons (pictures) on a computer screen. The graphical user interface (GUI, pronounced "gooey") that makes use of icons and a mouse is far easier both to learn and to use than the keyboard-based interface of all the early microcomputers.

Other components of a modern human-microcomputer interface make use of sound and color. Although voice and other sound output have been available for quite a while, voice input is just now coming into widespread use. It seems clear that, eventually, voice input will replace keyboarding as the most common form of input that people use with computers.

One of the key ideas in designing a human-machine interface is to facilitate transfer of learning. For example, young children have considerable skill in recognizing pictures, in pointing, grasping, and so on. These skills form the rudiments of the GUI. Thus, even young children can easily learn to interact with a GUI--because of transfer of learning.

Much of the early commercial success of the Macintosh computer, developed by Apple Computer, was due to a combination of its GUI and the consistency of the human-user interface across many different pieces of applications software. The knowledge and skills gained in learning one piece of Macintosh software tend to transfer to learning other pieces of Macintosh software.

The design and implementation of good human-machine interfaces for a computer system add a new dimension to the transfer of learning. If the interface is well designed, it is easy to learn to use. Moreover, a good interface is designed to accommodate the strengths and weaknesses of humans--it is fault tolerant. Humans often make mental slips--for example, saying one word when they mean another, or making a keyboarding error. In a poorly designed human-machine interface, such a tiny slip might destroy many hours of work. A superior interface is tolerant of such slips and makes it very difficult for the user to cause a disaster.

Artificial Intelligence

[[Here is a Website that may interest readers of this section of the book.

AI Topics [Online]. Accessed 11/7/01: http://www.aaai.org/AITopics/aitopics.html]]

The early human-machine interfaces for computers relied almost completely on the intelligent performance of the human user. One way to think about improvements in the human-machine interface is in terms of the machine becoming "smarter" or more intelligent.

For example, you can think of a computer that can read and interpret instructions represented as holes in a punch card as being more intelligent than a machine that must be rewired to change what it can do. Even such a small step-up in machine intelligence constitutes a major improvement in the human-machine interface.

In terms of human-machine interface, computers are a lot more intelligent now than they were a few decades ago. Moreover, a whole domain of study and research called artificial intelligence (AI) has developed. In some parts of the world, this domain is called machine intelligence. The terms artificial intelligence and machine intelligence suggest that it is a different kind of intelligence--not a human type.

Many different research teams throughout the world are working on various aspects of AI. These researchers have experienced considerable success. For example, you probably make use of a spell checker when you write using a word processor. Current spell checkers are good at detecting errors and at suggesting corrections. Some are now able to learn the patterns of keyboarding and spelling errors that a particular user makes and, thus, improve their spell-checking performance. Spell checkers provide an example of some of the progress that has occurred in AI.

The field of AI has had a long and checkered history. Early researchers in the field were confident that "soon"--typically, within about 10 years of the time they were presenting their findings--there would be computers more intelligent than humans. Such predictions have been made by a variety of leading researchers many times over the past 30 years! For a summary of research in various aspects of AI, refer to Communications of the ACM (March and July, 1994).

However, one thing these researchers have concluded is that humans easily and routinely accomplish creatively intelligent problem-solving tasks that are very difficult for even the best of the artificially intelligent computer systems. To put it bluntly, humans are far more creatively intelligent than today's best AI systems.

To get some insight into this, consider the following paragraph:

I want you too go two the store four me. Their, I want you to by to pounds of coffe. I need them rite away. So, as you come hone form the store, please hury.

If you run this paragraph through a spell checker, it will catch the misspelling of coffee and hurry. This leaves the paragraph:

I want you too go two the store four me. Their, I want you to by to pounds of coffee. I need them rite away. So, as you come hone form the store, please hurry.
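
The reason the remaining errors survive is easy to see in code. A basic spell checker asks only whether each string of letters is a word, not whether it is the right word. Here is a minimal sketch in Python (the tiny word list stands in for a real dictionary):

    # A minimal dictionary-based spell checker. It flags strings that
    # are not words; real-word (homonym) errors pass through unnoticed.
    dictionary = {"i", "want", "you", "too", "go", "two", "the", "store",
                  "four", "me", "need", "them", "rite", "away", "so",
                  "please"}

    def misspellings(text):
        words = text.lower().replace(".", "").replace(",", "").split()
        return [w for w in words if w not in dictionary]

    print(misspellings("I want you too go two the store four me."))
    # Prints: [] -- every word is in the dictionary, so nothing is flagged
    print(misspellings("I need them rite away, so please hury."))
    # Prints: ['hury'] -- only the non-word is caught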

The grammar checker built into the word processor used by the author of this book produced a suggestion for each sentence.

Figure 7.5 Analysis of first sentence.

Figure 7.6 Analysis of second sentence.

Figure 7.7 Analysis of third sentence.

As you can see, the grammar checker is not making suggestions based on a human-like understanding of the paragraph! Implementation of the suggested changes produces the following paragraph:

I want you to go two the store four me. There, I want you to by to pounds of coffee. I need them rite away. So, as you come hone form the stores, please hurry.

It takes a high level of understanding of the English language to detect and correct the errors in this paragraph. It is difficult to develop computer programs that actually have the necessary level of human-type understanding.

This grammar-checking example illustrates some of the difficulties of developing voice input systems for a computer. Voice input is a problem that researchers in AI have been working on for many years. It has proven to be a very difficult problem. We now have commercially available voice input systems in which the computer can recognize spoken words and translate them into written text. However, in such voice input systems the computer has little or no understanding of the meaning of the words. Errors are frequent. The human user must detect and correct the errors if a high level of accuracy is desired.

The problem of natural-language translation is also very complex. Significant progress has occurred in computer translation from one natural language to another. However, the quality of such translations still leaves much to be desired because the computer programs have no innate understanding of the meaning of what is being translated.

Grammar-checking, voice input, and language translation are all closely related. They all illustrate problems that humans can learn to solve but that today's computers don't solve very well. Current artificially intelligent computer systems are not good at tasks that require human understanding. The best of current AI software has little understanding of what it means to be a human being.

This is a very important idea. While computers can do many things far better than humans, the opposite is also true--humans can do many things far better than computers. If great speed, accuracy, and attention to the smallest details are needed, computers are apt to be the answer. If understanding of natural language and of human beings is needed, computers have a very long way to go.

Expert Systems

One component of AI that has shown significant progress is expert systems. The basic idea is to develop a computer program, called an expert system, that has a significant level of expertise within a particular domain. For example, consider the domain of interpreting electrocardiograms (graphs of electrical signals from the heart). Software that can do this as well as human experts was developed well over a decade ago, and it has come into widespread use.

Expert systems are rule-based systems. They are developed by a very careful search for the rules that a human expert (or a number of human experts) uses in solving the problems in a narrow domain. The rules are stated in an if-then format. For example, in a medical diagnosis: test for Condition A; if it exists, apply Treatment Q; if it is unclear whether Condition A exists, test for Condition B; and so on.
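
A minimal sketch of that if-then structure in Python (the conditions and treatments are invented for illustration; a real expert system would have hundreds of rules and a more elaborate inference engine):

    # A toy rule-based system: each rule pairs an if-condition with an
    # action, and the first rule whose condition holds determines the
    # advice. Conditions and treatments are invented for illustration.
    rules = [
        (lambda facts: facts.get("condition_a"), "apply Treatment Q"),
        (lambda facts: facts.get("condition_b"), "apply Treatment R"),
        (lambda facts: True,                     "refer to a human expert"),
    ]

    def advise(facts):
        for condition, action in rules:
            if condition(facts):
                return action

    print(advise({"condition_a": False, "condition_b": True}))
    # Prints: apply Treatment R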

One of the unforeseen results of research on expert systems is that, in many narrow domains, a high level of performance requires only a few hundred or perhaps a thousand rules. It is hard to believe that human expertise in some narrow areas of specialization can be captured by such a small set of rules.

Once an initial working version of an expert system has been developed for a particular problem domain, the expert system can be improved. This can be done by analyzing its performance. When it performs poorly--for example, by making an outright error--the rules leading to this error can be examined. The rules can be modified, or additional rules can be added. By this approach, the performance of an expert system can be substantially improved over a period of time. This type of improvement process can be thought of as humans "teaching" the computer to be more of an expert.

There are now thousands of expert systems in everyday use, each with a high level of performance in a very narrow domain. Uses vary from medical diagnosis to prospecting for minerals to scientific research to processing loan applications at a bank. Such expert systems are useless when presented with problems outside of their narrow domains. They may (seemingly) produce answers to a problem, but the answers are totally unrelated to the problem. For example, you might interact with a medical diagnostic expert system, describing certain characteristics of your car to it. Your description might include talking about rust-colored spots and a feeling of sluggishness. The medical system might conclude that your car has measles and is constipated!

This is an important idea. When you view yourself as a problem solver, you know that you have strengths and weaknesses. You can judge the usefulness of your ideas in a particular problem domain, and you are apt to be able to tell when you are making major mistakes. None of these statements holds true for computer-based expert systems. Today's expert systems are not self-aware in the way a human is. They have no sense of their capabilities and limitations, and they are not able to detect when they are making major errors.

It seems clear that expert systems bring a new dimension to problem solving. One way to think about an expert system is as a BBR. An expert system BBR, by itself, does not know when it should be used, and it does not know when it makes a major mistake. But a human can learn both when to apply a BBR and how to interpret the results.

Expert systems have some strengths, and they have some weaknesses. With appropriate education and training, a human can learn to make use of the strengths of an expert system and to overcome some of its weaknesses. The combination of human and machine working together can be far more powerful than either one working alone.

Agents

In recent years, a great deal of research in artificial intelligence has focused on developing agents--think of an agent as a computer-based, intelligent personal secretary--that can help enhance the human-machine interface.

An agent might be quite specialized. For example, you might have an agent that screens your incoming telephone calls. It would need to have the intelligence to decide how important a call is. The most important calls might be passed on directly to you, even when you are in an important meeting. Some calls would be rejected outright.

As another example, suppose that you are particularly interested in European art of the 18th century. You might have an agent that "reads" all periodicals that are being published online. Its goal is to find every article that might relate to European art of the 18th century. You might want this agent to produce brief abstracts of the articles and to prioritize the articles in terms of likely relevance to your interests.
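
A first approximation to such an agent can be quite simple. Here is a minimal sketch in Python that ranks article titles by keyword relevance (the keywords, titles, and scoring scheme are invented for illustration; a genuinely useful agent would need far more sophistication):

    # A toy "reading agent": score each article title by the number of
    # keywords of interest it contains, then rank the articles.
    keywords = {"european", "art", "18th", "century", "painting", "rococo"}

    def relevance(title):
        return sum(1 for word in title.lower().split() if word in keywords)

    articles = [
        "Advances in semiconductor manufacturing",
        "Rococo painting in 18th century France",
        "A survey of European art periodicals",
    ]

    for title in sorted(articles, key=relevance, reverse=True):
        print(relevance(title), title)
    # The two art titles rank above the semiconductor title.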

Such agents are being created by researchers in artificial intelligence. Eventually, you will make routine use of many of these agents.

Intelligent Digital Connectivity

Previous sections in this book have discussed networking groups of people, for example, by use of telecommunications systems and groupware software. Artificial intelligence may be incorporated into the computer network and the groupware. The overall concept of such a system is sometimes referred to as intelligent digital connectivity.

Intelligent digital connectivity is several things. It is digitized information--text, graphics, color, sound, and video. It is an Information Superhighway that links people and machines. It is computer systems that aid in storing, processing, and retrieving digitized information. It is intellectual technology that is being produced by researchers throughout the world. It is the progress that is occurring in the computer and cognitive sciences. It is artificial intelligence--for example, expert systems and intelligent agents--that is emerging as an important part of the human-machine interface.

Intelligent digital connectivity provides the foundation for a new type of intelligence that is called distributed intelligence. Distributed intelligence is the combined intelligence of people, machines, and the networks connecting them.

It is interesting to think of the overall capabilities and intelligence of such a distributed intelligence system. Each person in the system brings in various types and levels of human intelligence. The individual computers being used have varying levels of machine intelligence. Also, each person may well have personalized his/her computer by modifications to the hardware and software. Next, there is the intelligence built into the networking software and the groupware. Finally, the whole system may provide access to large amounts of computing power and to large numbers of databases.

The "intelligence" of a distributed intelligence system varies with its components. This brings a new dimension to problem solving. Suppose that a problem to be solved or a task to be accomplished is clearly beyond the capabilities of one person. A distributed intelligence system is needed. But, each of the components has a cost, and there is a wide range of components (including people) to select from. How does one build a cost-effective distributed intelligence system to fit the needs of a particular problem?

The flexibility involved in creating and disbanding a distributed intelligence system can be contrasted with the flexibility in industrial manufacturing. A manufacturing facility tends to have limited flexibility. It is not easily remodeled (retooled) and/or moved to a different site. Thus, it is clear that the idea of a distributed intelligence system brings new challenges to business and industry management. Many of today's business leaders lack the knowledge, skills, and experience to effectively deal with the flexibility that distributed intelligence brings to solving problems and accomplishing tasks.

The Future of Hardware, Software, and Connectivity

The personal microcomputer is now about 20 years old. During this time the personal microcomputer has progressed from being a toy to becoming an indispensable tool for tens of millions of people. Microcomputers have made this progress because of sustained rapid improvement in both hardware and software. Hardware improvements include major increases in processor speed, high-speed memory, and disk memory. Software improvements include major improvements in the human-machine interface and the development of a wide range of personal and group productivity tools.

The pace of change shows no signs of lessening. During the next decade, the pace of change will be similar to what it was in the past decade. In terms of hardware, this has been an exponential rate of change. Thus, over a span of about four or five years, the best of the new microcomputers may have four times the speed and four times the memory of the machines that they are replacing.
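
The arithmetic of that exponential growth is easy to make explicit. A short sketch, assuming capability doubles roughly every two and a half years (a doubling time consistent with the fourfold-in-four-to-five-years figure above):

    # Exponential growth: capability doubles every `doubling_time` years,
    # so it quadruples after two doubling periods.
    doubling_time = 2.5   # years; an assumption consistent with the text
    for years in (2.5, 5, 10):
        factor = 2 ** (years / doubling_time)
        print(years, "years ->", factor, "times the capability")
    # 2.5 years -> 2x, 5 years -> 4x, 10 years -> 16x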

As previously mentioned, quite a bit of the computing power of a microcomputer is used in the human-machine interface. Voice input is a natural, easy to learn, and easy to use type of interface. However, it requires both very sophisticated software and an immense amount of computing power. Because of the continuing progress in hardware and software, voice input will likely be commonplace in 10 years.

Connectivity is growing at a fast pace. You are familiar with cellular telephones, earth satellites that broadcast television directly to people's homes, and other types of wireless communication systems. You are also familiar with the fiber optics that allow interactive television. The combination of wireless and hard-wired forms of connectivity is tying the world together into a worldwide computer network. Eventually, a computer workstation will be a communication center, allowing quick access to major sources of information and to people throughout the world.

Activities and Self-Assessment

  1. Discuss why it is so difficult to write a computer program that can detect homonym errors in writing and automatically correct them. Give some examples that illustrate the difficulties.
  2. Discuss why the language translation problem is so difficult. If you know two languages, give examples based on translating from one to the other. Otherwise, give examples of ambiguous meaning in the language that you speak.
  3. Select a narrow problem-solving domain where you have a relatively high level of expertise. Give some examples of if-then rules that capture some of your expertise.
  4. A handheld calculator can be thought of as a machine that has a high level of intelligence in a very narrow domain--a domain of doing four different types of calculations. A calculator does not know when to multiply or divide. Discuss the human-calculator system. What are the strengths and weaknesses of each component? How can the overall system be improved?
  5. Go back to the definition of intelligence given in Chapter 3. Compare and contrast this definition of intelligence with current progress in AI (machine intelligence). Use this analysis as a starting point for creating a definition of intelligence for distributed intelligence systems.
  6. Select a problem-solving domain of interest to you. Analyze it from the point of view of distributed intelligence systems. How could your unique interests, education, and experience fill an effective, complementary role in a distributed intelligence system designed to work in that domain?
