Within a function-oriented structuralist view of human learning, a central challenge is explaining the transition from naivete to mastery. This is likewise a major issue for machine learning. We report here progress on that theme with programming experiments taking guidance from a human case study [3]. The domain is tictactoe (or noughts and crosses). The human case serves as the developmental prototype; it answers the question "why this way?" The machine case serves as an experimental laboratory for asking "how hard or simple might the development be?" The overall strategy has been to start with quite limited programs, reflecting specific important characteristics of immature thought, and have them get smarter by escaping from their original limitations. The performance objectives are to develop programs that will achieve primitive forms of abstraction, create internal reflections of external objects and processes, and learn without instruction.
From Anterior Structures to Mature Performance
Piaget's "conservation" experiments are strong evidence that knowledge in the naive mind leads to reasoning surprisingly different from that in expert minds [4]. Such studies lead us to focus on the issues of what are the precursors of and the processes leading to mature performances. I have argued that in the human case mature skills can arise from small but significant changes in the organization of pre-existing, fragmentary bodies of common-sense knowledge [5] which represent the things of everyday experience and operations on them. If only one could specify the character and function of antecedent structures, he could explain large scale behavior changes as saltations emergent from minimal internal organizational changes.
The Neophyte: particularity and egocentricity
Children's early cognition is usually described as "concrete", a term which has two significant dimensions of meaning. The broader meaning is that the child's knowledge is based upon personal experience. It is in this sense that concrete knowledge is very particular, that is, depending on the specific details of the learner's interaction with people and things. Lawler's subject was observed beginning to play tictactoe strategically by imitating a three-move fork-establishing plan performed by another child. The characteristics of her knowledge at that time were particularity and egocentricity. Particularity: when her sole plan was blocked, she was unable to develop any alternative [6]. Egocentricity: she did not attend to the moves of her opponent unless they directly interfered with her single plan. She was committed to her own objectives and unconcerned to the point of indifference about the plans of her antagonist [7]. In the setting of a competitive game, this was bound to change. But how, if a mind constructs itself from such beginnings, is it possible to escape the particularity and egocentricity characteristic of early experiences? The journey from neophyte to master is a long one. One hope of the human study was tracing the path of such development. One objective of the machine study is constructing such a path.
Representation of Knowledge
The representation used to model Lawler's subject's naive knowledge, presented in detail in "Learning Strategies through Interaction" by Lawler and Selfridge, 1985, has the parts necessary for adaptive functioning. Learning what to do is essential: GOALS are explicitly represented. Knowing how to achieve a goal is essential: ACTION PLANS are explicitly represented. Knowing when a planned action will work and when it won't is essential: CONSTRAINTS limiting application of actions are represented explicitly. The structure composed of this triad, a GAC (Goal, Action, Constraints), is our representation of a strategy for achieving a fork in tictactoe. Goals are considered as a three element set of the learner's marks which take part in a fork. This is the first element of a strategy. Plans of three step length, which add the order of achieving goal steps, are represented as lists. Constraints on plans are two element sublists, the first element being the step of the plan to which the constraint attaches and the second being the set of cell numbers of the opponent's moves which defeated the plan in a previous game. In our simulations, REO (a relatively expert opponent) can win, block, and apply various rules of cell choice -- though ignorant of any strategies of the sort IT is learning. Within the execution of our simulation, the structure of GAC 1 below will lead to the three games shown depending on the opponent's moves (letters are for IT's moves, numbers for REO's):
GAC 1: GOAL {1 3 9}   ACTION [1 9 3]   CONSTRAINT <[3 {2 5 8}]>

    win by plan    plan defeat (constrained)    draw        cell numbers
    A | 3 | C         A |   | C              A | C | 3       1 | 2 | 3
      | 1 | D         2 | 1 | 3              4 | 1 | E       4 | 5 | 6
    2 |   | B           |   | B              D | 2 | B       7 | 8 | 9
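The GAC triad lends itself to a compact sketch in code. The following is illustrative only; the names and types here are invented for exposition and are not those of the original simulation:

```python
# A minimal sketch of the GAC (Goal, Action, Constraints) triad.
# The goal is the set of three cells forming a fork; the action is the
# ordered plan for reaching it; each constraint pairs a plan step with
# the set of opponent cells that defeated that step in a past game.
from dataclasses import dataclass, field

@dataclass
class GAC:
    goal: frozenset                # e.g. {1, 3, 9}
    action: tuple                  # e.g. (1, 9, 3)
    constraints: list = field(default_factory=list)

gac1 = GAC(goal=frozenset({1, 3, 9}),
           action=(1, 9, 3),
           constraints=[(3, {2, 5, 8})])   # step 3 fails vs. cells 2, 5, 8

def plan_applicable(gac, opponent_cells):
    """A plan is ruled out when the opponent already occupies a cell
    recorded as defeating one of its steps."""
    return not any(opponent_cells & defeat for _, defeat in gac.constraints)
```

Such a structure stays faithful to the text's commitment to cell-specificity: nothing in it generalizes beyond the literal cell numbers of past games.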
The representations and learning mechanisms are committed to cell-specificity; they are also self-centered, focussing on the learner's own plans and knowledge (as they must since, by principle, IT begins not knowing what the opponent will do; IT does not have the ability to model or predict an opponent's moves in any abstract way) [8]. The result of learning simulations is a descent network which specifies all the goals and plans learned as modifications of the generating precursors of each. The structure of IT, and how it fits into its virtual universe, is sketched below in Figure 1.
NOTES: Solid lines represent invocation; dashes show control return. GEN represents the possibility of various experiences, and is thus part of the world rather than an experimental tool. REO is a "reasonably expert opponent." THINGS are external tokens perceptible by both REO and IT; NOTIONS are things of the internal world available to IT for both playing and learning.

Escaping from Particularity
"A mathematician who tries to carry out a proof thinks of a well-defined mathematical object, which he is studying just at this moment. If he now believes that he has found a proof, he notices then, as he carefully examines all the sequences of inference, that only very few of the special properties in the object at issue have really played any significant role in the proof. It is consequently possible to carry out the same proof also for other objects possessing only those properties which had to be used. Here lies the simple idea of the axiomatic method: instead of explaining which objects should be examined, one has to specify only the properties of the objects which are to be used. These properties are placed as axioms at the start. It is no longer necessary to explain what the objects that should be studied really are...."Robust data argue that well articulated, reflexive forms of thought are less accessible to children than adults. The possibility that mature, reflexive abstraction is unavailable to naive minds raises this theoretical question: what process of functional abstraction precedes such fully articulated reflexive abstraction; could such a precursor be the kernel from which such a mature form of functional abstraction may grow?
N. Bourbaki, in Fang, p. 69.
The Multi-modal Mind
Let us discriminate among the major components of the sensori-motor system and their cognitive descendents, even while assuming the preeminence of that system as the basis of mind. Imagine the entire sensori-motor system of the body as made up of a few large, related, but distinct sub-systems, each characterized by the special states and motions of the major body parts, thus:
| Body Parts | S-M Subsystem | Major Operations |
|---|---|---|
| Trunk | Somatic | Being here |
| Legs | Locomotive | Moving from here to there |
| Head-eyes | Capital/visual | Looking at that there |
| Arms-hands | Manipulative | Changing that there |
| Tongue/ears | Linguistic | Saying/hearing whatever |
Redescriptive Abstraction
I propose that the multi-modal structure of the human mind permits development of a significant precursor to reflexive abstraction. The interaction of different modes of the mind in processes of explaining unanticipated outcomes of behavior can alter the operational interpretation and solution of a problem. Eventually, a change of balance can effectively substitute an alternative representation for the original; this could occur if the alternative representation is the more effective in formulating and coping with the encountered problem. In terms of the domain of our explorations and our representations, there is no escape from the particularity of the GAC representation unless some other description is engaged. A description of the same circumstance, rooted in a different mode of experience, would surely have both enough commonality and difference to provide an alternative, applicable description. I identify the GAC absolute grid as one capturing important characteristics of the visual mode [11]; other descriptions based on the somatic or locomotive subsystems of mind could provide alternative descriptions which would by their very nature permit escape from the particularity of the former.
Why should explanation be involved? Peirce argues that "doubt is the motor of thought" and that mental activity ceases when no unanswered questions remain [12]. Circumstances requiring explanation typically involve surprises; the immediate implication is that the result was neither intuitively obvious nor were there adequate processes of inference available beforehand to predict the outcome (at least none such were invoked).
We propose that a different set of functional descriptions, in another modal system, can provide explanation for a set of structures controlling ongoing activity. The initial purpose served by alternative representations is explanation. Symmetry, however, is a salient characteristic of body centered descriptions; this is the basis of their explanatory power when applied where other descriptions are inadequate. Going beyond explanation, when such an alternative description is applied to circumvent frustrations encountered in play, one will have the alternate structure applied with an emergent purpose. Through such a sequence of events, the interaction of multiple representations permits a concrete form of abstraction to develop, an abstraction emergent from the application of alternative descriptions. In the following scenario, I will trace the interaction of different modes of mind as an example of how this early form of functional abstraction, a possible precursor to any consciously articulated reflexive abstraction because it involves "external interpretation" more than reflexive analysis, permits breaking out of the original description's concreteness with its limitations of particularity. To do so, I need to establish the basic kinds of alternative descriptions to be involved.
Alternative Descriptions in Tictactoe
I begin with the assumptions that the GAC formulation is primarily visual in character and that one should seek familiar schemes for representing things, relations, and actions that are from a different mode of experience. Descriptions based on activity lead to the somatic and locomotive body-part systems as the two obvious, primary candidates. I offer two suggestions for concretizing this search: let's consider first an "imaginary body-projection" onto the tictactoe grid as the somatic candidate description; and second, an "imaginary walk" through the tictactoe grid as the locomotive candidate description [13]. How would this work in practice?
Somatic Symmetries
Let's consider two essentially different types of symmetry for the tictactoe grid. Flipping symmetry will name the relation between a pair of forks (or more complex structures) when they are congruent after the grid is rotated around some axis lying in the plane of the grid. Examples of symmetrical forks might be {139} and {179} [14]. An example of an explanation for this fork symmetry based upon an alternative, somatic description would be the following:
If I sat in the center of the grid and lay down with my head in cell 1 and my feet in cell 9, then cell 3 would be at my left hand. The forks {139} and {179} are the same in the way that my right and left hands are the same, for cell 7 would be at my right hand.

Such an explanation focusses on symmetry with respect to the body axis. A similar argument can be made for plan symmetry in the common fork {137} achieved by two different plans [1 3 7] and [1 7 3].
If I sit in the center of the grid and lie down with my head in cell 1, then cell 3 is at my left hand and cell 7 at my right. If the plan is to move first at the head, next at the left hand, then at the right [1 3 7], then the other plan is the same to the same extent that it doesn't matter if I lie there with my face up or my face down.

It is harder to argue that such flipping forms of description are as natural for symmetries such as those of forks {139} and {137} because the axis of symmetry lies where no ego-owned markers are placed (along the cells {258}) and because other body parts have to be invoked as placeholders, as in the following:
If I sat in the center of the grid, with my head going up between cells 1 and 3, my shoulders would be there at 1 and 3 and the other parts of the forks would be the same as are my right and left hands.

As this elaboration departs from the explanatory simplicity of the former, one should consider contrasting another model, and thus turn to explanations based on walking around.
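In code, the flipping symmetries amount to reflections of the grid. A minimal sketch, assuming only the usual row-by-row numbering of cells 1 through 9; the dictionary below encodes the reflection across the 1-5-9 "body axis" of the explanations above:

```python
# Reflection of the 3x3 grid across the 1-5-9 diagonal (cells numbered
# 1..9 row by row).  Cells on the axis (1, 5, 9) stay fixed; the hands
# at cells 3 and 7 trade places.
FLIP_159 = {1: 1, 2: 4, 3: 7, 4: 2, 5: 5, 6: 8, 7: 3, 8: 6, 9: 9}

def flip(cells):
    """Mirror a set of cells across the body axis."""
    return frozenset(FLIP_159[c] for c in cells)

# The fork {1,3,9} is the mirror image of {1,7,9}, as in the explanation:
assert flip({1, 3, 9}) == frozenset({1, 7, 9})
# The two plans for fork {1,3,7} mirror one another step by step:
assert [FLIP_159[c] for c in [1, 3, 7]] == [1, 7, 3]
```

The map is its own inverse, which matches the face-up/face-down intuition of the body-axis explanation.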
Locomotive Symmetry
In contrast with the last explanation which placed a body axis along a line of empty cells, the locomotive symmetries involve moving from one ego-occupied cell to another. Consider now the type of locomotive description that could be used to explain the equivalence of these same forks {139} and {137} [15].
Suppose I start at cell 1, walk to cell 9, then turn and walk to cell 3. Facing center in place leaves me with occupied cells at my right and left hands. For the fork {137}, if I stood at cell 1, I would also have other occupied cells at my right and left hand. The forks are the same if nothing is changed by my jumping from one corner to the next and swinging around to the center.

This Jump-and-Swing model of symmetry does more than explain a surprising win; the outcome is creative, as can be seen in the following scenario where it enables breaking out of the particularity of the GAC representation.
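The jump-and-swing relation corresponds to a quarter turn of the grid. A sketch under the same row-by-row cell-numbering assumption (the dictionary is illustrative, not simulation code):

```python
# A quarter turn of the grid, counterclockwise: each corner "jumps" to
# the next corner, each side cell to the next side; the center is fixed.
ROT = {1: 7, 2: 4, 3: 1, 4: 8, 5: 5, 6: 2, 7: 9, 8: 6, 9: 3}

def rotate(cells):
    """Carry a set of cells one quarter turn around the center."""
    return frozenset(ROT[c] for c in cells)

# One quarter turn carries the fork {1,3,9} onto {1,3,7}:
assert rotate({1, 3, 9}) == frozenset({1, 3, 7})
```

Unlike the reflection, four applications of this map are needed to return to the start, which is just the walker making a full circuit of the corners.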
SCENARIO 1: From one corner to another:
After describing different types of symmetries, and justifying their activation to explain surprising serendipitous victories in play, we now ask whether they can have more than explanatory value. The conclusion is that the "flipping symmetries" do not generate novelties through interactions in this model even though they are natural explanations of surprises. The rotational or jump-and-swing symmetries can do so, however, through the kind of tortuous but feasible path presented in the following scenario.
Generating a Second Descent Network
Let's suppose that IT plays with minimal look ahead. Remember also that IT knows nothing of opening advantage. IT has played successfully to victories even when the second and third steps of its known plans were foiled, but never so when the first step was blocked. Suppose now that REO begins a game with a move to cell one. All of the existing plans in IT's repertoire are useless. But IT knows that the GOAL {137} is the same as {139} by rotational symmetry; therefore it can try to generate the alternative plan for that symmetrical goal. The attempt to create and use the plan, based on "jumping" from the pivot of cell 3 to a new pivot at cell 1, will fail on a later move, but IT doesn't know that [16].
That game establishes the plan [7 3 1] in IT's repertoire. When IT once again has the first move, should it choose to begin a game in cell 7, it has a decent chance of winning either the game [7 5 3 1 9 ...] or [7 5 3 9 1 ...]. Such a victory will establish a new prototypical game, comparable in status to [1 9 3] from which a second descent network can be derived. This does NOT argue that such a second descent network will actually be developed in all its fullness (though it may). What it DOES show is one plausible scenario for how the incredibly particular descriptions of GACs can break away from one element of their fixity -- commitment to opening in cell 1. The alternative description has served as a bridge to permit developing a second set of equally particular goals and plans.
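The derivation of the new plan can be checked mechanically. Assuming the standard row-by-row cell numbering, the dictionary below encodes a quarter turn of the grid (illustrative names, not the simulation's):

```python
# A counterclockwise quarter turn of the grid, applied step by step to
# a plan.  The known plan [1 9 3] for goal {1,3,9} maps onto a plan for
# the rotationally symmetric goal {1,3,7}.
ROT = {1: 7, 2: 4, 3: 1, 4: 8, 5: 5, 6: 2, 7: 9, 8: 6, 9: 3}

def rotate_plan(plan):
    """Carry each step of an ordered plan one quarter turn."""
    return [ROT[c] for c in plan]

assert rotate_plan([1, 9, 3]) == [7, 3, 1]   # the plan the scenario derives
```

The transformed plan preserves order, so the particularity of the original description (specific cells, specific sequence) is carried over intact, which is exactly why the result is a second, equally particular plan rather than a genuine generalization.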
Redescriptive Abstraction and Analogy
One might say that emergent abstraction via redescription is "merely analogy". I propose an antithetical view: emergent abstraction explains why analogy is so natural and so important in human cognition. Redescriptive abstraction is a primary operation of the multi-modal mind; it is the way we must think to explain surprises to ourselves. We judge analogy and metaphor important because redescriptive abstraction is subsumed under those names.
Further, I speculate it is THE essential general developmental mechanism. This process can be the bootstrap for ego-centric cognitive development because it is accomplished without reference to moves or actions of the other agent of play.
"...The internalization of socially rooted and historically developed activities is the distinguishing feature of human psychology, the basis of the qualitative leap from animal to human psychology. As yet, the barest outline of this process is known...."If the higher psychological processes to which Vygotsky refers are characteristic of productive intelligence in all forms, the issues of the progressive development of self-control and the internalization of exterior agents and context are profound transformations which need to be understood in both natural and artificial intelligence. The general objective of this section is to describe how it is possible for an egocentric system to transcend its limited focus. The central idea is that the system will adapt to an environmental change because of an insistent purpose; it will do so by interpreting the actions of its antagonist in terms of its own possibilities of play. Two essential milestones on the path of intelligent behavior in interactive circumstances are first, simulation of the activity of an opponent, and second, the internalization of some control elements from the context of play.
L. S. Vygotsky
In the human case, learning sometimes goes forward by homely instruction: people or things teach what this or that means or how it works. Another kind of learning, which I call "lonely discovery," is the consequence of commitment to continuation of an interaction despite the loss of the external partner. Such a desire, which can definitionally permit only vicarious satisfaction, is the motor of that internalization of "the world and the other" which is the quintessence of higher psychological processes [18]. We use the case study experiences in respect of these issues to guide the development of two examples/scenarios of how a machine can confront such challenges. We will consider how a system can develop through interaction in such a way that when the environment becomes impoverished, the system can begin to function more richly, and therefore become generally more capable. The particular problems through which I will approach these issues are the inception of multi-role play (one player as both protagonist and antagonist) and the inception of guarded (or mental) play. I do not want to impute to IT the motive of understanding the play of an opponent to whom it initially pays little attention. Therefore, we grant the system an initial purpose of continuing play even under such limitations as to amount to a crippling of the environment. From this initial purpose emerges another, that of the proper understanding of an antagonist's game. A major side effect of the solution I propose to this problem is creativity, in the specific sense of enabling the discovery of strategies of play not known beforehand nor learned by another's instruction. The ultimate achievement of such developmental mechanisms as I propose here is to learn new strategies through analysis of games played by others, i.e. learning by observation.
SCENARIO 2: The Beginning of Multi-role play:
The Human Case
After many sessions of her playing tictactoe with me, in one experiment I asked the subject to play against her brother so that I might better observe her play with another person. She surprised her brother by her significant progress at play (she beat him honestly and knew she would do so in specific games). When I was called away to answer a knock at the door, I asked the children not to play any more games together until I returned. Coming back, I found the game below on the chalk board. When I asked if she had let herself win, she explained that she had been 'making smart moves for me and the other guy.'
A | 3 | C
| 1 | D
2 | | B
The Form of the Solution for Machines
If the deprivation of interaction in the social milieu is one motor of human cognitive development, within the world of machine intelligence the corresponding circumstance would be the crippling of some function of other programmed modules of the system. The desired consequence of this crippling should be one where continuing in the well worn path is an easily detectable, losing maneuver, thus necessitating changes in the functions of existing structures. Further, there should exist some alternative which is the marginally different application of an already existing structure capable of providing a functional solution to the problem which the "social" vagary creates. This paper offers two examples of such challenges and possible outcomes in the reorganization of this system of game simulation functions.
The deprivation of interaction leads to the introjection of "the other" within the "self" through the assignment of one of the alternative functions (strategic play) to the "ego" (IT) and another (tactical play) to the "alter-ego" (let's call this agent REO-sim). What forces this reassignment is crippling the environment so that a decision needs to be taken on an issue which was immanent in but transactionally insignificant in the interactive context [19]. What makes this introjection possible is the successful application of established structures for a new function. Obviously, not every attempt to apply an old structure for a new function would be successful [20]; consequently, the character of structures which permits such successful re-application, their functional lability, needs to be established through some sort of experience, either of actual or imaginative interaction. In a system within which such imaginative experience is not yet possible, actual interchanges are needed.
The question raised by simulations was how extensive the changes would need to be to permit the system of programs to mimic the kind of behavior Lawler's subject showed in this incident. For IT, the situation equivalent to having no opponent is: whenever IT returns its latest move, IT receives control again with no move made by REO. There are three possible responses to this situation:
For the transition from one mode of response to another I offer no general theoretical justification, though there are reasons. Very little change was required to the original code because of the modular separation of strategic and tactical play. This is an important observation if and only if the modularity of the code for tactical and strategic play is justified by psychological data or epistemological argument.
The assumption of the modularity of cognitive structures and IT's pervasive use of modularity is based on the empirical witness of Lawler's case study. If the human mind is organized as that study suggests, then it should be easy for the kinds of developments described here to occur. Further, if the transition is representable by no more than the insertion of a control element, choosing between formerly competing or serialized subfunctions; and if the transition is driven by events in the environment upsetting ongoing processes which "want" to continue, the only "theory" possible is one about the characteristics of structure which permit this adaptivity. My structural assertion in this context is that the coadaptation of disparate cognitive structures is the key element of mind enabling the "internalization" of external agents and objects [22].
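A minimal sketch of the multi-role transition, under simplifying assumptions: here one simple win-else-block-else-any tactical rule serves both roles, whereas the text assigns strategic play to the ego and tactical play to REO-sim; all names are illustrative, not the simulation's code:

```python
# When no opponent move comes back, IT keeps the game going by making
# "smart moves for me and the other guy": the same tactical module is
# reinvoked with the alter-ego's mark.
LINES = [(1,2,3),(4,5,6),(7,8,9),(1,4,7),(2,5,8),(3,6,9),(1,5,9),(3,5,7)]

def winner(board):
    for line in LINES:
        marks = {board.get(c) for c in line}
        if len(marks) == 1 and None not in marks:
            return marks.pop()
    return None

def tactical_move(board, mark):
    """Win if possible, else block the other mark, else take any free cell."""
    other = 'X' if mark == 'O' else 'O'
    for who in (mark, other):              # first try to win, then to block
        for line in LINES:
            vals = [board.get(c) for c in line]
            if vals.count(who) == 2 and vals.count(None) == 1:
                return line[vals.index(None)]
    return next(c for c in range(1, 10) if c not in board)

def play_alone():
    """Solitary play: IT fills both roles until the game is decided."""
    board, mark = {}, 'X'
    while len(board) < 9 and winner(board) is None:
        board[tactical_move(board, mark)] = mark
        mark = 'O' if mark == 'X' else 'X'
    return board, winner(board)
```

The point of the sketch is the control flow, not the quality of play: the change from interactive to solitary play is the reinvocation of an existing module under a new mark, not the construction of any new structure.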
SCENARIO 3: The Beginning of Guarded play:
The Human Case
When she was already quite adept at playing Tictactoe against an internalized opponent, Lawler's subject was confronted with a new challenge: given the first two moves of a game, to tell whether she could certainly win, might possibly win, or would certainly lose. When she was refused her request for materials on which to represent possible games graphically, she proceeded to play out mentally sequences of moves which led to determinate games. This is the quintessence of mental play.
In this example, as in the former, constraints upon interaction with the external world -- in a framework committed to continuing the activity -- led to the application of existing structures to the satisfaction of new ends [23]; the ends are new in the specific sense that knowledge and know-how developed for playing games against an opponent, worked out with graphical tokens, were applied to answer speculative questions about the possible outcomes of games worked out in the mind. This functional lability of structure is the key to adaptive behavior and thus to learning.
The Machine Case
In the inception of multi-role play, the prohibition of the antagonist role was the stimulus for the reorganization of functioning knowledge. In the machine case, this was achieved through a "crippling" of the output function of the opponent, REO. The next extension asks what function should be crippled to impel the development of guarded play.
Tree generation within the module GEN is the primary function which creates all the possibilities of play; thus it is the candidate program from whose internalization mental play might emerge. GEN contains a mixture of interrelated LOOP macros and recursive invocations. Note, however, that these programs were created as experimental tools, as mechanisms to explore the learning of IT through experiencing particular games. Consequently, the mechanisms have no grounded epistemic status; their functions need be replicated but their mechanisms may be replaced freely by some alternative if that seems more natural.
Because IT does not contain any such tree-generation modules, rebuilding the GEN module structures within IT would require creating such structure from nothing. Because subfunction invocation with arguments is the primary mechanism within IT for transferring control, an invocation-oriented solution is the preferred one: it applies a mechanism already given within the module.
The essential insight IT needs for an invocation solution is that if it can be called with an argument by GEN, it can call itself successively with a series of arguments drawn from a list [24].
The remaining issue is how the outcomes of these generated executions of games are handled; that is, the record keeping function is affected as well as the tree generation function. Two alternatives appear to be first, the (unjustified) rebuilding within IT of the list-manipulation aspects of record keeping, or second, the acceptance of an imperfect result in the following specific sense. If the aim of the game is to win, the desired outcome of play is a specific string of cell numbers which comprise a valid win for the first player. If such a single game is the result of the recursive internalization of the GEN module's tree-generation function, the result is an impoverished one (as compared to a list of all possible outcomes) but nonetheless one that will serve an everyday function of winning a game [25].
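One way to picture the invocation solution is as a function that calls itself with successive arguments drawn from the list of free cells, and that returns a single winning string of cell numbers rather than the full tree, the impoverished but serviceable result described above. This sketch is illustrative only; the actual modules are the LOOP macros and recursive invocations of GEN:

```python
# Depth-first self-invocation over the list of free cells.  The search
# stops at the FIRST sequence ending in a first-player win, returning
# one valid game string instead of all possible outcomes.
LINES = [(1,2,3),(4,5,6),(7,8,9),(1,4,7),(2,5,8),(3,6,9),(1,5,9),(3,5,7)]

def won(cells):
    return any(set(line) <= cells for line in LINES)

def first_win(mine, theirs, to_move):
    """Return one list of cells ending in a first-player win, or None."""
    if won(mine):
        return []                      # success: nothing more to play
    if won(theirs) or len(mine) + len(theirs) == 9:
        return None                    # dead branch
    free = [c for c in range(1, 10) if c not in mine | theirs]
    for c in free:                     # call self with each list element
        rest = (first_win(mine | {c}, theirs, False) if to_move
                else first_win(mine, theirs | {c}, True))
        if rest is not None:
            return [c] + rest
    return None
```

Note that the result is a possible winning game, not a guaranteed strategy; that is precisely the sense in which the internalized function is poorer than GEN's full tree yet still serves the everyday function of winning a game.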
Conclusions
The immediate cause for internalizing some exterior function is a constriction of the surrounding context. Given the objective of continuing activity despite this constriction, a person or a programming module can proceed by simulating the crippled functions of the environment with components of its own function. The functional lability of existing structures in response to a changed external circumstance is the key to internalization of exterior agents and context elements. In the very simple cases presented here, a machine learning system can internalize portions of the outer world as people do. There is no guarantee that any structure will work when applied in some non-intended function. On the other hand, setting up systems of programs to employ this technique in coping with an uncomprehended environment is surely worth considering for any mechanized learning system.
The test of the value of such a capability is creativity. If learning from one's own experience is a criterion of intelligence, is it not smarter to learn from another's experience? Such a capability is an emergent, with a few simple programming changes, of the facilities for multi-role and guarded play.
Learning Without Instruction
With the developments sketched so far, all the capabilities needed for learning from another by observation are in place. The most dramatic evidence for the accuracy of this claim in the human case comes from Lawler's subject's invention of a new strategy of play based upon her later analysis of a game played against herself at an earlier time [26]. In summary, reviewing a game played only to the point where she believed a draw would follow, Lawler's subject recognized that she had abandoned the game while a single further move would have led to her winning. She then worked through the moves she had made, both as protagonist and antagonist, and convinced herself that she had created a new strategy with which to win on condition that her opponent made any one of four responses to her opening corner selection. The kinds of abilities employed in her analysis were those of multi-role play, guarded play, and specific knowledge of three sorts: of the particular game, about her own habits (starting in cell one), and procedures of play (she knew SHE would have made forced moves at need) [27].
SCENARIO 4: Analysis through synthesis
The Machine Case
What then need be added for IT to perform a similar feat of creative analysis? When presented with an externally generated game, nothing would be easier for IT to analyze IF the order of moves were preserved. HERE the challenge is different: the set of moves to be made is prescribed, but the order is to be determined. Lawler's subject's game is below; the tree of possible games follows after. When a string is forced into a forbidden move (one not part of the presented pattern), the branch is pruned [28].
X | O | O
X | X |
X | | O
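A hypothetical reconstruction of this ordering analysis, assuming a simple rule of forced play (a mover must take an immediate win if one exists, else block the opponent's): any ordering that forces a move outside the presented pattern, or that would have ended the game before the final move, is pruned.

```python
# Analysis through synthesis: recover the alternating move orders
# consistent with the final board shown above (X first, X wins 1-4-7).
from itertools import permutations

LINES = [(1,2,3),(4,5,6),(7,8,9),(1,4,7),(2,5,8),(3,6,9),(1,5,9),(3,5,7)]
X_CELLS, O_CELLS = {1, 4, 5, 7}, {2, 3, 9}     # the presented pattern

def won(cells):
    return any(set(line) <= cells for line in LINES)

def winning_cells(own, opp):
    """Cells that would complete a line for `own` on this move."""
    out = set()
    for line in LINES:
        free = [c for c in line if c not in own and c not in opp]
        if len(free) == 1 and len(set(line) & own) == 2:
            out.add(free[0])
    return out

def orderings():
    """All forced-play-consistent move orders for the pattern."""
    found, total = [], len(X_CELLS) + len(O_CELLS)
    for xs in permutations(sorted(X_CELLS)):
        for os in permutations(sorted(O_CELLS)):
            x_done, o_done, ok = set(), set(), True
            for i in range(total):
                own, opp = (x_done, o_done) if i % 2 == 0 else (o_done, x_done)
                played = xs[i // 2] if i % 2 == 0 else os[i // 2]
                forced = winning_cells(own, opp) or winning_cells(opp, own)
                if forced and played not in forced:   # forced off-pattern: prune
                    ok = False
                    break
                own.add(played)
                if won(own) and i < total - 1:        # game would have ended
                    ok = False
                    break
            if ok:
                found.append([xs[j // 2] if j % 2 == 0 else os[j // 2]
                              for j in range(total)])
    return found
```

The forced-play rule here is an assumption standing in for the subject's knowledge that SHE would have made forced moves at need; surviving orderings necessarily end with X completing the 1-4-7 column.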
As a player becomes more adventurous with guarded play -- willing to start in the center and various corner cells, willing to move to side cells as well -- the number of winnable games possible becomes quite large. This explosion of possible won games, the fact that there is too much to remember and all the games are superficially similar, introduces the need to impose a more abstract order on the experience. Answering that need demands feature based abstraction and conceptualization, the focus of work still ongoing.
Going beyond earlier conclusions in the human study, the discovery remarked here is the role of the multi-modal mind in creating the potential for abstraction emerging from redescription. This is an example of the functionality of coadaptation in cognitive development. The conjecture is advanced that the multi-modal structure is central to understanding the possibility of human cognitive development. Further, emerging abstraction through redescription can be appreciated as a primitive form of functional abstraction, of which reflexive abstraction is a more mature form. Redescriptive abstraction helps explain the importance of analogy and metaphor in human thinking and learning.
In this research, we have focussed only on the interaction between visual and kinesthetic systems. The other modes of mind, related to the linguistic system and the touch-salient manipulative system, add significant further dimensions of possible complexity to this non-uniformitarian model of mind. Such models, although basically simple, are complex enough to permit interesting development through plausible, internal interactions; that is, they permit the possibility of learning through thinking -- a desirable outcome for any view of human minds, and one that may prove of some value with machines as well.
References
Caple, Balda, and Willis. Work reported in "How did Vertebrates take to the air?" by Roger Lewin, Science, July 1, 1983. See also American Naturalist, 1983.
Fang, J. Towards a Philosophy of Modern Mathematics. Hauppauge, New York: Paideia series in modern mathematics, vol.1, 1970.
Fann, K. T. Peirce's Theory of Abduction. The Hague: Martinus Nijhoff.
Jacob, F. "Evolution and Tinkering" in Science, June 10, 1977, and The Possible and the Actual. New York, Pantheon Books, 1982.
Lawler, R. Computer Experience and Cognitive Development. Chichester, England, and New York: Ellis Horwood, Ltd. and John Wiley Inc., 1985.
Lawler, R. and Selfridge, O. "Learning Concrete Strategies through Interaction." Proceedings of the Cognitive Science Society Annual Conference, 1985.
Piaget, J. The Child's Conception of Number. New York: Norton and Co., 1952.
Piaget, J. Biology and Knowledge. Chicago: University of Chicago Press, 1971.
Piaget, J. The Language and Thought of the Child. New York: New American Library.
Peirce, C.S. "The Fixation of Belief" in Chance, Love and Logic. M. Cohen, ed. New York: George Braziller, Inc., 1956.
Peirce, C.S. "Deduction, Induction, and Hypothesis" in Chance, Love and Logic.
Satinoff, N. "Neural Organization and the Evolution of Thermal Regulation in Mammals", Science, July 7, 1978.
Selfridge, M.G.R. and Selfridge, O.G. "How Children Learn to Count: a computer model", 1985.
Vygotsky, L.S. Mind in Society. Eds. Michael Cole, Vera John-Steiner, Sylvia Scribner, and Ellen Souberman. Cambridge, Mass: Harvard University, 1978.
Acknowledgements
This paper began in a collaboration with Oliver Selfridge to extend work in "How Children Learn to Count" (Selfridge and Selfridge) with ideas of CECD. With Oliver's genial prodding, I have carried forward that effort to confront the issue of abstraction from highly particular descriptions. Special thanks are due to Sheldon White, who first pointed out the similarity of my conclusions to those of Vygotsky. He has repeatedly emphasized the importance of ideas about the internalization of external processes and urged me to develop them.