We need to increase the effectiveness of education by understanding more profoundly how humans such as we are can better adapt to the volatile world we are creating through the information revolution. I hope to contribute to this end by addressing these questions:
1. A powerful idea in early Logo research was giving children the experience of being a mathematician, as opposed to teaching them some mathematical ideas. How can this approach be extended across the broader spectrum of the technical disciplines? Can we analyze knowledge and human learning so that the outcome will suggest directions for the development of computer-based learning environments?
2. An objective for popular technical education is that people should have in their minds "thinkable models" -- representations of things and processes simple enough that they can be used in thought experiments. The organization of cognitive structures for technical knowledge could be imagined to reflect a network of appropriately connected thinkable models. AI, as the science of representations, has focused in the main on language-like representations. How can we enlarge our vision of representations to include the greater variety of ways of thinking that are useful to people?
3. The hope is to broaden access to scientific ideas. Can thinkable models help us reach out to students now left behind by current instructional methods? How can learning environments be designed to communicate a broad range of introductory technical knowledge?
What People Do Well
Feynman's Story: If our objective is to get usable knowledge in people's minds, we should ask what they are good at. What do people do really well, at their best?
When I was a young man, it was once my privilege to spend an evening with a man, Richard Feynman, known more for his work in physics than in psychology. Nonetheless, what Feynman said about thinking and learning deserves consideration. When asked how he got to be so good at solving problems, Feynman offered a description of his practice as an undergraduate student which is fundamental to the view developed in this paper. He recalled that whenever he actually solved a new problem -- by whatever method he could manage -- his exploitation of that small victory had only begun. He would then step back from the problem and try to see what other ways of looking at it were possible and to ask in what other formalism one might describe the problem. He would then work through the "same" problem to its solution in those secondary formalisms using the primary solution for guidance. 
Feynman's reflection upon these different schemes of representation, his developing understanding of the relation of one to another and the details of their intertranslatability led to his mastery of selection among and application of varieties of descriptions and formalisms.
A new division of labor: In Feynman's story, we can see a way of looking at the balance between algorithm execution and problem recognition. Depth is needed to push through analysis with rigor. Breadth pays. The exploration of alternative representations, and the prosecution of problem solving in their terms, is the activity which leads to mastery of individual representations and to understanding which is the best fit among those possible. This suggests the possibility of a new division of labor. We people need all the help we can get. Whatever help machine intelligence can give us should be exploited for the exhaustive exploration of fecund problems, so that the human learner can improve the ability to recognize problems and select the best representational framework for addressing any new problem encountered. Excessive dependency on mechanized knowledge can be avoided by following a proposal of Feurzeig (in Artificial Intelligence and Education, Lawler and Yazdani, Eds., 1987) to design intelligent microworlds: learning environments which permit the user to decide whether the computer is to execute some function in its repertoire (whether understood by the user or not); to demonstrate its means of solving a particular problem or class of problems; or even to provide coaching and challenging problems when the user wants that guidance and testing. If we are more and more willing to relegate to machines, even conditionally, computationally burdensome algorithmic knowledge, what is left for people to know? What can be their contribution to solving problems?
People are best at recognizing problems and classifying situations (the kind of logical process that C.S. Peirce called "abduction", 1878). Some might call it speculation; theory building is a fancier name. The process is one of making hypotheses to answer some question which will not go away. How can we support what is naturally strongest in human capability? It would serve people well to own a collection of valid thinkable models; thinkable models are descriptions of things and relations simple enough for use as tools for thought and as the basis of thought experiments. The following simple taxonomy attempts to relate such models to existing knowledge. (What the internal counterparts of these public models might be is a knotty question which I have attempted to address in part elsewhere: Lawler, 1981, 1985, 1986, 1987. Here we will assume merely that there is something in the head reflecting the public models, without specifying what it is.)
Perspectives
The primary characteristic of a perspective is that it defines "what's what". Since such an assertion of the applicability of a description to a thing frequently involves questions of purposes, values are often implicated in a perspective. Consider, for example, the "argument from design" for the existence of a deity. Following the heyday of classical mechanics, theologians argued that since the universe had been shown to be a perfect clockwork mechanism, the existence of such a clock implied the existence of a clockmaker. Characteristically, the essential power for thought in a perspective is that from knowing "what's what", "what follows" is "intuitively obvious". Different perspectives lead to different conclusions on the same issue. Looking back from the Apollo spacecraft, we have seen the earth with changed eyes. The earth, no longer the center of the universe, is now a physical system, essentially a container for the biosphere (Heppenheimer, 1977). As engineers, we cannot help remarking on the strange design that holds the contents on the outside of the container.
Minimal Models
Minimal models go beyond perspectives to a focus on processes as well as things. Such models bring together language-oriented and object-oriented descriptions of the world. They provide a repertoire of relations by which we can judge that some circumstance is of a recognizable type. Minimal models provide guidance in areas of action that are important in human concerns but beyond detailed comprehension in any thorough sense. Essentially, they provide a more or less credible cover story which asserts that specific kinds of things exist and that they interact according to some common sense scenario. P. R. Sarkar's law of social cycles is presented as such a minimal model in the recently popular book The Great Depression of 1990 (Batra, 1985). His social universe is divided into four kinds of people. Soldiers solve problems with force; intellectuals make cunning arguments; bankers accumulate wealth; and the laborers work for everyone else. The law of social cycles is as follows. Whenever the situation is a mess, soldiers seize control. Intellectuals provide justification for the soldiers' rule but eventually take over by cunning. In turn, both are then subjugated by the bankers, who eventually control wealth so thoroughly that the soldiers and intellectuals are forced into the laboring class -- which leads to revolutions and the seizing of power by those of soldier mentality.
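Part of what makes such a cover story thinkable is how little machinery it needs. As an illustration (the encoding and names here are mine, not Sarkar's or Batra's), the whole law fits in a three-entry transition table:

```python
# Sarkar's law of social cycles, as summarized above, rendered as a
# minimal state machine: each ruling era is followed by exactly one other.
NEXT_ERA = {
    "soldier": "intellectual",   # intellectuals justify, then supplant, military rule
    "intellectual": "banker",    # bankers accumulate wealth and take over
    "banker": "soldier",         # laborers revolt; soldier mentality seizes power
}

def social_cycle(start, eras):
    """Return the sequence of ruling eras, beginning from `start`."""
    sequence = [start]
    for _ in range(eras - 1):
        sequence.append(NEXT_ERA[sequence[-1]])
    return sequence
```

That the model can be stated this compactly is exactly the point of the taxonomy: a minimal model is credible as a cover story and simple enough to run in the head.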
Perspectives and minimal models are useful but not coercive. Such are the kinds of theories we develop when we can do no better. We use them when we must to make sense of things too important to ignore and too difficult to determine in some way that we can really count on. Both must be judged primarily by their everyday usefulness, for once the explanation or cover story is separated from the rest of the theory, there is little or nothing left.
Technical and Explanatory Models
Our world is filled with mysteries of an everyday sort. Does the average person know how his television works, or what keeps airplanes from falling out of the sky? Technical models are capable of answering such questions. Typically, the model postulates some decomposition of the domain, then in explicit fashion indicates how the behavior of the component parts interacts in such ways as to generate the observed behavior of the aggregate. Technically trained people know the Bernoulli effect permits flight and may know enough of electromagnetics, circuit theory, and component design to understand how fluctuations in fields become video images. Even so, the phenomena remain mysteries to the common man, because he lacks a simple cover story that can be related to the phenomena of everyday experience. Some theories do attain a crisp formulation that answers such objections. Such, then, are explanatory models. The kinetic theory of gases is a straightforward example. It is based on five assumptions which characterize the elemental level of description of the perfect gas:
1. Gases consist of molecules so small and far apart that their actual volume is negligible compared to the space between them.
2. There are no attractive forces between the molecules.
3. They are in rapid, random, straight-line motion, colliding with each other and with the walls of their container.
4. In each collision, there may be a transfer but no net loss of kinetic energy.
5. Although different molecules have different speeds, the average kinetic energy of all the molecules is proportional to the absolute temperature.

This atomistic description of a perfect gas permits subsumption of earlier descriptions, captured in Boyle's Law, Charles' Law, and Dalton's Law, through their integration via a rigorous derivation of the equation of state for perfect gases, such as one might find in an elementary physics text. In short form, one imagines that the gas is contained in a box. Even though the volume of the gas is mostly empty space, the gas occupies the entire space, and the amount of that space affects the other properties of the gas. Pressure, defined as force per unit area, is exerted by gases because the molecules collide with the walls of the container. Temperature is a quantitative measure of the average motion of the molecules. The derivation of the equation of state is based on arguments about the frequency and force of the collisions with which the molecules strike the sides of the container. When one encounters such a theory, an emphasis is usually placed on the rigor of the argument through which the elemental level of description is connected with observable features of the matter in question. More important ultimately for one's ability to think about a theory is the availability and clear applicability of a connection with the everyday cover story. Boyle's Law (volume and pressure are inversely proportional) becomes an intuitively obvious conclusion: when a box is halved in volume, the molecules will collide twice as often with the walls. Similarly, Gay-Lussac's Law (pressure is proportional to absolute temperature at constant volume) is obvious given that when a gas is hotter the molecules move faster and bang into the walls more frequently. Arguing for the linearity of the relationship requires, of course, more precision and use of the underlying technical model.
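The rigorous derivation alluded to above compresses to a few lines; the following is the standard textbook sketch (N molecules of mass m, mean squared speed, Boltzmann's constant k), not a quotation from any particular text:

```latex
% Pressure from molecular collisions, then the equation of state:
\[
  pV \;=\; \tfrac{1}{3}\, N m \,\overline{v^{2}},
  \qquad
  \tfrac{1}{2}\, m \,\overline{v^{2}} \;=\; \tfrac{3}{2}\, kT
  \quad\Longrightarrow\quad
  pV \;=\; NkT .
\]
% At fixed $T$: $p \propto 1/V$ (Boyle's Law).
% At fixed $V$: $p \propto T$ (the pressure--temperature relation).
```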
Such explanatory models are not limited to the physical sciences, as can be seen in the theory of evolutionarily stable strategies, developed by J. Maynard Smith and advocated by R. Dawkins in The Selfish Gene (1976). The theory explains why it is not always the case that the powerful and aggressive individuals dominate populations. The answer, worked out through a rigorous application of game theory, is that in any situation of conflict the best strategy, as judged by the survival of any one individual's genes, will depend on the strategies that others in the population follow. The elements of Maynard Smith's universe are conflicting individuals, a cost-benefit function evaluating the outcomes of fighting (e.g. death for loss; control of a harem of brooding females for victory), and the strategies that individuals might follow (these strategies would generally be instinctual, even though they are discussed in terms of human stereotypes). This cover story serves to make the entities and the theory easier to think about.
Although thinkable models may or may not provide essential content, they do provide ideas and sometimes values useful in organizing later experiences and the knowledge constructed therefrom. Learning environment design can be seen as an effort to produce a medium of representational and functional elements in terms of which learners can develop dependable, thinkable models relevant to some specific domain. (When implemented on microcomputers, they are frequently called microworlds.) An outstanding example of a learning environment is the one developed by Papert (Mindstorms, 1980) and colleagues, the turtle geometry component of the Logo programming language. The turtle is a computer-controlled robot which can move and draw following commands such as pendown, forward 100 (steps), and right 90 (degrees). A central virtue claimed for turtle geometry, as implemented in Logo, was the potential for incremental learning made possible through the learner's ability to write progressively more complex procedures for controlling the robot turtle. Specifically, what learning environments add to explanatory models is an environment in which learning can develop in a natural way, an environment in which self-construction is more natural than instruction. It would be helpful if there were a systematic approach one could follow in exploring possible domains as candidates for the development of learning environments.
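The turtle is itself a thinkable model: its entire state is a position and a heading. The following sketch is an illustrative reconstruction of the command semantics in Python, not the Logo implementation:

```python
import math

# A minimal model of the turtle's state: a position and a heading.
class Turtle:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees; 0 points straight up, as on screen

    def forward(self, steps):
        rad = math.radians(self.heading)
        self.x += steps * math.sin(rad)
        self.y += steps * math.cos(rad)

    def right(self, degrees):
        self.heading = (self.heading + degrees) % 360

# Incremental learning in miniature: a square procedure built from the
# primitive commands quoted above (forward 100, right 90).
def square(turtle, side=100):
    for _ in range(4):
        turtle.forward(side)
        turtle.right(90)

t = Turtle()
square(t)
# The turtle ends where it began, having turned through 360 degrees
# in all -- the simplest instance of the total turtle trip theorem.
```

The learner who writes square can go on to write a procedure that repeats square while turning, and so on; this is the incremental path the paragraph above describes.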
What situations should be the most fruitful in selecting a domain for learning environment design? Either technical or explanatory models would be suitable as their durable foundation. Most useful would be the successful construction of learning environments for technical theories that have no explanatory models. Consider, for example, the molecular shell model of quantum electrodynamics. (I know of nothing in everyday life that would provide an accessible model for this theory. Anyone attempting to create a microworld for the domain would find it worthwhile to consult Drescher, 1988.) Similarly, developing a learning environment for explanatory models, such as the Maynard Smith theory of evolutionarily stable strategies, would render the ideas more generally accessible. In that specific case, an expert system shell with an interface permitting the learner to define congeries of strategies for entry in a database controlling the behavior of the simulation would be a feasible implementation today. The span of human knowledge is extensive. Field-specific opportunities exist, as do many more based on interactions of ideas from several fields. Ecologically oriented learning environments, such as the Moro simulation of Dörner et al. (1986), focus on just such interactions.
We may begin our inquiry by asking of AI what are the different ways in which we can describe any given problem or situation. The Handbook of Artificial Intelligence offers us a list of the seven major categories of representation schemes, which we may choose to consider the formal front-line.
1. Logic
2. Procedural representations
3. Semantic networks
4. Production systems
5. Semantic primitives
6. Direct (analogical) representations
7. Frames and scripts
That six of these seven schemes are language-like is, I believe, representative of the balance of past work and current development in knowledge representation. Theories of mental model development and the development of qualitative reasoning systems are recent trends in AI, Cognitive Science, and Education which are generally congenial to the point of view advocated here. (See Lawler and Yazdani, 1987, and Davis, 1984, for some examples and discussion.) To the extent that language is important to people and is also the primary channel through which we communicate with computers, AI's focus on language-like representations is neither surprising nor inappropriate. On the other hand, people make extensive use of ways of thinking that are different from language-like thought. From the human viewpoint, one might want the perspective to be inverted -- we might ask, for example, how AI representations fit with those that are natural for people, and how they sort out against the variety we witness in human differences and even within individuals.
Ways in Which People Differ
An anthropologist will point to culture as a force shaping the focus and choices of the individual. A physiological psychologist would describe people in terms of their senses. A social psychologist might describe people in terms of their personality characteristics. A cognitive psychologist would focus on the spectrum of a person's knowledge, its range, depth, and its interconnectedness. What I offer here is a combination of the last with a perceptual orientation, influenced by seeing in the Galton phenomenon a key to human representations and the spiral of learning (Lawler, 1987).
The Galton Phenomenon
Some people report that their thoughts are very rich in imagery (visual thinkers), others that thinking proceeds as internal dialogue (audile thinkers), while for others internal kinesthetics is paramount (motile thinkers). These three types reflect the dominance of one or another of the major sensory-motor subsystems (vision, sound, movement) and are generally mixed across the population. (The three types of thinkers are rarely seen in a pure form, except perhaps in the case of extreme physiological handicaps.) One can accept such reports as data without accepting them at face value. There are no images inside the head, nor any homunculus to view them. However, if the different sensory-motor systems encode their memories and respond to external stimuli in terms of different primitives, it would not be strange if a human's internal representations also reflected the origins of the specific experiences through which the knowledge was acquired. I propose that representations for use by people must relate to the possibilities of the human system; and further, that these possibilities are limited by the human senses and the structures derived within those sensory systems as shaped by experience. It is easy to believe that there is a significant difference, psychologically, between arguments in these different types of representational schemes:
type                     argument example
language oriented        the syllogism Barbara; algebraic equations
visually oriented        Venn diagram inclusion
locomotion oriented      the Logo total turtle trip theorem
manipulation oriented    constructive geometric proofs
The Human Senses
Folk wisdom speaks of five senses (sight, sound, touch, taste, and smell) and provides slogans which characterize their role in the life of the mind, e.g.: " I hear; I forget. I see; I remember. I do; I understand." Introductory psychology courses tell us of two additional interior senses, those of equilibrium and bodily movement (Kagan and Havemann, 1968). A view more committed to physiological organization (as in Morgan, 1965, for example) subdivides the spectrum of sensation into different categories: the chemical senses, the visual, auditory, and the somatic senses. One can go deep in studying perception and making inferences about human representations and how experience might be encoded, in specific detail, by the particular mode of perception and action through which an experience was mediated. Such an effort is made in chapter 5 of Lawler (1985); Langer (1953) makes a related effort in her attempt to describe the primary illusion created by the various arts. There is room for complicating our understanding of the schemes of representation that are useful and meaningful to people. The organization of the senses is sufficiently well understood that we may be able to use it as a primary characterization to relate what and how people think to the modalities of mind in detail.
For current purposes, however, it is appropriate to ask how a focus on modes of mental perception and action could illuminate problems of psychological, educational, and social interest. The basic principle of the following analysis is that different people find one or another kind of model more accessible, depending on their individual balance of commitments to the various sensory systems. Genetic (Piagetian) psychology leads us to emphasize action and motor behavior as well as perception. The primary categories of human variation I advance as useful for analysis are the three primary sensory systems (sight, sound, movement) and their coordinations (represented by manipulations involving hand-eye coordination). These primary categories can be detailed further at a finer grain. Movement, for example, subdivides into categories of movement in place (e.g. twisting the body), locomotion, and manipulation (chapter 5, Lawler, 1985). One may ask how various arguments reflect engagement with the different modes of human experience. Let us apply this perspective to some classic examples of problem solving, both to illuminate the differences through example and to probe what light may be cast on educational issues. Let us examine various proofs of the Pythagorean Theorem.
 The arguments of this paper appear as a part of the first chapter of Artificial Intelligence and Education, Vol. 2, Ablex.
 Among the many themes of that evening in 1959, Feynman also described his experiments on the multiple modalities of human thought. Notes on this theme are now published as chapter 4, "It's as Simple as One, Two, Three..." (Feynman, 1988). Apparently Feynman never did take the time to write up his notions as a psychological paper (see Ralph Leighton's comments in the introduction to that book). His early observations on this theme inspired, in part, my own later attempt to develop a similar view (chapter 5, "Cognitive Organization", in Lawler, 1985).
 It would be a poor joke to say that the earth is merely an ill-designed jar. Everything we value is near the surface, and depth in this case is less important than relations among things of the surface. From a super-terrestrial perspective, we can appreciate how important it is for an environment to permit incremental development -- as in evolution -- and to be relatively well protected against the energy flux of the sun and the flotsam of star dust.