OLPC Memo 5:  Education and Psychology

Marvin Minsky

22 January 2009


What goals do we want our schools to achieve? Most parents agree that their children should learn about History, Language, Science and Math, and get some instruction in Health, Sports, and Art. Most parents also want their children taught to behave in ways they regard as civilized. And surely, most parents would also agree that schools should help children learn good ways to think. However, while schools have good ways to teach facts about subjects, many pupils still fail to build adequate skills for applying that knowledge. [[1]]


But if “good thinking” is one of our principal goals, then why don’t schools try to explicitly teach about how human Learning and Reasoning work?  Instead we tacitly assume that if we simply provide enough knowledge, then each child’s brain will ‘self-organize’ appropriate ways to apply those facts.  Then would it make sense for us to include a subject called “Human Psychology” as part of the grade-school curriculum?  I don’t think that we can do this yet, because few present-day teachers would agree about which “Theories of Thinking” to teach.


So instead, we’ll propose a different approach: to provide our children with ideas they could use to invent their own theories about themselves!  The rest of this essay will suggest some benefits that could come from this, and some practical ways to accomplish it—by engaging children in various kinds of constructive, computer-related projects.


§1. Why we can’t yet include “Psychology” in the Primary School Curriculum.


Today’s most popular “theories of learning” are based on a century of experiments in which we place a pigeon or rat in a certain situation and reward it with food if it responds with a certain action. Then later, that animal will more often behave in that same way.


“… Of several responses made to the same situation, those which are… closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when that situation recurs, those responses will be more likely to recur…while responses followed by discomfort will become less likely to occur.” —Edward L. Thorndike 1911


Those experiments also showed that a quicker reward has a larger effect, so educators often use problem-sets in which each new task is almost the same as the last. This causes most answers to be correct, which therefore results in more frequent rewards—and this helps to make classrooms more pleasant. And indeed, those carefully graded assignments work well to prepare for short-answer tests—but they don’t help prepare us for real-world life, where problems don’t come in neat sequences. [[2]]


However, this type of learning only works well when the animal can already recognize each such situation or “stimulus,” and the required reaction or “response” is already in that animal’s repertoire.  To ensure these conditions, most classic experiments constrained each animal simply to choose between only two different buttons to press. Then that reward-based theory of learning worked remarkably well to predict how sub-human animals react to those simplified situations. But that theory only described those animals’ external behaviors—their motor responses to sensory inputs—and never progressed to shed more light on how people learn to react, in their minds, to more complex problems and situations. [[3]]
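A tiny simulation can make that two-button setup concrete. The sketch below (in Python; the numbers and the update rule are my own illustrative choices, not taken from any particular experiment) strengthens whichever response is followed by “satisfaction,” and the rewarded response soon dominates:

```python
import random

# A Thorndike-style "law of effect" learner with two possible responses.
# Reward strengthens the connection for whichever response was just made;
# non-reward weakens it slightly.  (All numbers are illustrative only.)

def train(trials=2000, reward_for="left", seed=0):
    rng = random.Random(seed)
    strength = {"left": 1.0, "right": 1.0}   # situation-response "connections"
    for _ in range(trials):
        # Choose a response with probability proportional to its strength.
        total = strength["left"] + strength["right"]
        response = "left" if rng.random() < strength["left"] / total else "right"
        if response == reward_for:
            strength[response] += 0.1        # "satisfaction" strengthens it
        else:
            strength[response] *= 0.99       # "discomfort" weakens it
    return strength
```

After training, the rewarded response’s connection strength dominates, so the animal makes that response almost every time—which is just what the Thorndike quotation above predicts, and about all that this kind of theory can predict.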


In any case, children are different from pigeons and rats, and those classic experiments don’t help much to explain how people learn to represent knowledge in their brains and, later, retrieve the information that might be most relevant—so that they can reason, plan, and construct new ideas.  So when we try to extend those old theories from pigeons to people, we find that they tell us little about the higher-levels of minds that distinguish us from those animals.


More advanced ideas about psychology began to emerge in the 1940s from the new field called Cybernetics—which then evolved into Cognitive Science and Artificial Intelligence.  These new sciences are constantly producing new ideas about minds—but those concepts are still changing so fast that they aren’t yet stable enough to teach.  So by default, the ideas from those early years of research on the ‘external behavior’ of animals still dominate the context in which most present-day teachers are taught to teach. 

§2. Some deficiencies of behavior-based theories.


Humans learn Negative Expertise. We tend to think of knowledge in positive terms—and of “experts” as people who know what to do—but much of an expert’s ability comes from knowing about the most common mistakes—and thus, knowing which things one should not do.  Thus, much of a person’s competence is based on learning to detect—and suppress—unproductive ways to think. An adequate theory of learning should also cover the “reflective skills” that people use to recognize exceptions to generalizations, to eliminate tactics that waste too much time, and more generally, to make longer-range plans and form broader perspectives.  In any case, the early behaviorists rarely recognized the importance of “Negative Expertise,” because they could not directly observe how it affects our ways to behave. In particular, when people deal with difficult problems, we often learn more from our failures than from our successes. [[4]]


Human Minds make high-level Credit-Assignments.  New situations are seldom the same as previous ones, so after you solve a difficult problem, it won’t help much to just remember the actions that solved it—because you probably had to make many attempts before you found a successful one—so it wouldn’t make sense to “reward” all those prior attempts.  It sometimes helps to reward your most recent reactions—but generally, it’s more important to recognize which earlier strategic decisions should get credit for one’s latest success.  We briefly discussed this in Memo 2, but also see Section 8-5 of The Emotion Machine.
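Here is one way to picture that kind of high-level credit-assignment. In this hedged sketch, credit flows back only along the chain of decisions that actually led to the success—with a discount near 1, so that earlier strategic choices still receive substantial credit—while all the failed attempts made along the way get none. (The decision names and numbers are invented for illustration.)

```python
# After a success, credit flows back only along the chain of decisions that
# produced it; a discount close to 1 still gives the earlier *strategic*
# choices substantial credit.  (Decision names and numbers are invented.)

def assign_credit(decision_chain, reward=1.0, discount=0.9):
    """Return each decision's share of credit for the final success."""
    credit = {}
    for steps_back, decision in enumerate(reversed(decision_chain)):
        credit[decision] = reward * discount ** steps_back
    return credit

# Only the chain that actually led to the solution is rewarded; the failed
# attempts made along the way (not listed here) receive nothing.
chain = ["choose divide-and-conquer", "split the problem", "solve the subcase"]
credits = assign_credit(chain)
```

The point of the sketch is the contrast with simple reward-based learning, which would have “reinforced” every prior attempt indiscriminately.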


Human Minds think about what they’re thinking about.  After you’ve ‘wasted’ a lot of time by using a process that finally failed, you usually try to figure out some cause or reason for why it failed.  However, those older theories never tried to describe the methods that people use for thinking about what we’ve been thinking about.  But I’m convinced that these “self-reflective” processes are the principal ones that people use for developing new ways to think.  I doubt that simply “rewarding success” helps much to promote one’s mental development.


Psychology Student: But isn’t it an established fact that every child is born with a certain IQ—that is, an unchangeable quantity of innate mental ability?


There is a popular belief that each person’s “amount of intelligence” is fixed, because those IQ numbers don’t usually change very much after early childhood.  However, the evidence for this may be biased, because it ignores other important causes:


“… Biographical information on a sample of twenty men of genius suggests that the typical developmental pattern includes as important aspects: (1) a high degree of attention focused upon the child by parents and other adults, expressed in intensive educational measures and, usually, abundant love; (2) isolation from other children, especially outside the family; and (3) a rich efflorescence of fantasy as a reaction to the preceding conditions. … [If so, then] the mass education of our public school system is, in its way, a vast experiment on the effect of reducing all three factors to a minimum: accordingly, it should tend to suppress the occurrence of genius.” —Harold McCurdy, 1960 [[5]]


This also could be related to the process we mentioned in Memo 2:


A new class of 6-year-old children will soon begin to share similar ways to think and behave.  Then, next year, when they are 7 years old, most of those pupils will still remain in that group—and thus will tend to perpetuate those same patterns of activity. The next year, they will be 8-year-olds, but will continue to share many attitudes, values, and cognitive strategies.  So as those children proceed through their K-12 grades, large portions of their ways to think will remain much like those of 6-year-olds!


Human Thinking involves Prediction, Comparing, and Planning.  The standard reward-based learning schemes envision a pigeon or rat as simply containing a database of two-part rules like “If the situation is S, Do action R.” [[6]] But while such rules can describe much of our external behavior, this ignores the fact that when you consider (in your mind) two actions that you might possibly do, then (before you perform any actions at all) you can try to predict the result of each—and then compare those imagined results!  This means that to extend our theory from pigeons to people, we’ll need to include some ways to predict the effects of possible actions.


However, If-Do rules don’t predict the results of their actions—so we’ll also need 3-part If-Do-Then rules like, “If the situation is S, and you do R, Then situation T will result.”   Chapter 5 of The Emotion Machine explains how to link sets of these 3-part rules into networks of knowledge that we can use to simulate ‘virtual worlds’ inside our minds, so that we can imagine the sequences of possible actions that we call our “Future Plans.”  But before the dawn of computers and programs, it was hard to envision such simulations—so those early “Behaviorist” psychologists were forced to ignore the fact that people make longer-range plans and strategies. [[7]]
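To see how linking 3-part rules can produce a “Future Plan,” consider this minimal sketch: a handful of invented If-Do-Then rules form a network, and a simple breadth-first search through that network yields a sequence of actions—a plan—leading from the current situation to a goal. (The tiny world here is my own example, not one from The Emotion Machine.)

```python
from collections import deque

# A handful of invented If-Do-Then rules, linked into a network and then
# searched (breadth-first) to find a "Future Plan": a sequence of actions
# leading from the current situation to a goal situation.

RULES = [
    # (If-situation,  Do-action,  Then-situation)
    ("at-home",     "walk",     "at-bus-stop"),
    ("at-home",     "drive",    "at-work"),
    ("at-bus-stop", "ride-bus", "at-work"),
    ("at-work",     "walk",     "at-cafe"),
]

def plan(start, goal):
    """Search the If-Do-Then network for a shortest action sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        situation, actions = frontier.popleft()
        if situation == goal:
            return actions
        for if_s, do, then_s in RULES:
            if if_s == situation and then_s not in seen:
                seen.add(then_s)
                frontier.append((then_s, actions + [do]))
    return None   # no chain of rules reaches the goal
```

Here plan("at-home", "at-cafe") returns ["drive", "walk"]: the search has, in effect, imagined the results of actions before performing any of them—exactly what plain If-Do rules cannot do.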


Human Thinking is largely directed by Goals. You don’t open every door you pass, or pick up every object you see.  You don’t often use an If-Do rule unless it serves some current motive or goal. However, before the era of Cybernetics, few psychologists tried to make theories about what goals are and how they work.  Instead, they simply assumed that each animal brain contains a separate set of rules for each of that animal’s goals, intentions, motives, or drives—one set of rules to use when hungry and another set to use when angry, etc.  But finally, in 1957, Allen Newell and Herbert Simon proposed a more constructive theory: a purposeful or goal-based action is one that results from using a special kind of process (called GPS) that operates on two different descriptions:


S is a description of the situation that the machine is currently in.

G is a description of a future state that the machine “wants” to be in.

GPS is a process that repeatedly looks for a difference between S and G, and then performs some action on S that is likely to lessen that difference.


Such a process will result in behaviors that appear to do just what we mean by “pursuing a goal,” because it will persist at removing the differences it perceives between ‘what it has’ and ‘what it wants’—that is, unless some other process intervenes to change or remove that G. [[8]]
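The GPS idea itself is easy to sketch. In the toy version below, S and G are sets of facts, each “operator” can remove one difference, and an operator whose preconditions aren’t yet met sets up a subgoal—much as Newell and Simon’s program did, though their system was far more elaborate. (The facts and operators are invented for illustration.)

```python
# A toy GPS: S and G are sets of facts; each operator adds facts but may
# "need" others first.  The loop finds a difference between S and G, then
# either applies an operator that reduces it, or adopts that operator's
# unmet needs as a subgoal.  (Facts and operators are invented.)

OPERATORS = {
    "paint-door": {"adds": {"door-painted"}, "needs": set()},
    "fix-hinge":  {"adds": {"hinge-fixed"},  "needs": set()},
    "hang-door":  {"adds": {"door-hung"},    "needs": {"hinge-fixed"}},
}

def gps(S, G, max_steps=10):
    S, G = set(S), set(G)
    trace = []
    for _ in range(max_steps):
        differences = G - S
        if not differences:
            return trace                  # 'what it has' matches 'what it wants'
        for name, op in OPERATORS.items():
            if op["adds"] & differences:  # this operator reduces a difference
                if op["needs"] <= S:
                    S |= op["adds"]
                    trace.append(name)
                else:
                    G |= op["needs"]      # first, set up a subgoal
                break
        else:
            return None                   # stuck: nothing reduces any difference
    return None
```

Asked to get from {door-on-floor} to {door-hung, door-painted}, this sketch paints the door, notices that hanging it first needs a fixed hinge, fixes the hinge, and then hangs the door—persisting, just as the text says, until no difference remains.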


However, human learning doesn’t always depend only on getting pleasant rewards. We can adapt to small changes in familiar conditions by making small changes to skills that we already know.  But when thrust into strange environments, we may need to abandon our older techniques—and this can lead to unpleasant feelings like pain and grief.  Chapter 9 of The Emotion Machine discusses some ways to prevent such discomforts from keeping us from learning new things—whereas the old reward-based theories ignored what one could call the “Pleasure of Exploration”—a pleasure that can outweigh its pains.


 “Pleasure pursues objects that are beautiful, melodious, fragrant, savory, soft. But curiosity, seeking new experiences, will even seek out the contrary of these, not to experience the discomfort that may come with them, but from a passion for experimenting and knowledge.” —Augustine, Confessions, Book 10.


“Why do children enjoy the rides in amusement parks, knowing that they will be scared, even sick?  Why do explorers endure discomfort and pain—knowing that their very purpose will disperse once they arrive?  And what makes people work for years at jobs they hate, so that someday they will be able to—they seem to have forgotten what! …  It is the same for solving difficult problems, or climbing freezing mountain peaks, or playing pipe organs with one's feet: some parts of the mind find these horrible, while other parts enjoy forcing those first parts to work for them.” —The Society of Mind, Chap. 9.04.


In other words, adventurousness can overcome unpleasantness: when "pleasant" or “positive” rewards fail to help us learn more difficult subjects, we can make ourselves enjoy discomforts and pains—by acquiring what Augustine called “a passion for experimenting and knowledge.” 


Citizen: How can you speak of “enjoying” discomfort? Isn’t that a self-contradiction?


This only seems paradoxical when you think of yourself as an entity that can feel only one feeling at a time. But if you imagine your mind as a system that runs a host of concurrent processes, then you can see how some parts of your mind might ‘enjoy’ making other parts ‘suffer’—as when ‘it feels good’ to have just discovered a new kind of blunder or mistake.  In fact, that’s exactly what sports coaches teach!  Of course, those athletes still feel physical pains, just as artists and scientists feel mental pains. But somehow, they manage to train themselves to keep those pains from spiraling into the awful cascades we call suffering. [[9]]


Scientist: Few things bring more pleasure to me than replacing my old ideas—and then showing that my new theories are better than those of my competitors.

Artist: It hurts to discard a hard-earned technique, but nothing surpasses the thrill of conceiving new ways to represent and to think about things.


One needs to learn not only what works, but also what to do when failure looms. I don’t like that tale of “The Little Engine that Could,” with its helpless injunction to simply repeat “I think I can, I think I can.” A better motto would be to think, “Perhaps it’s time to try something else”—because every setback can offer a chance for a new phase of mental development.


The traditional theories do not explain Pleasure.  From before the dawn of psychology, everyone has always agreed that pleasant rewards help us to learn, but I’ve never seen any plausible explanations for this. Here is what may be a new way to explain how ‘pleasure’ works:


“Pleasure” is a word we use to describe a process that’s used to keep a mind from “changing the subject” of the person’s most recent concern.  One might need such a function to preserve the current contents of memory for long enough for Credit-Assignment to be accomplished!


So, while we usually see Pleasure as positive, we also should see it as negative—because, during the time in which Credit Assignment proceeds, it suppresses competing activities.  Similarly, we often need to suppress other interests to keep ourselves working on difficult problems.


But if pleasure has negative aspects, then why does it “feel good” to us?  People often answer this by saying that feelings are so basic, simple, and direct that there’s nothing much to say about them! However, in Psychology, it’s often the case that the things that “seem” simplest turn out to be the ones that are the most complex!  In this case, “I feel good” might mean, “Right now, I’ve suppressed all my other concerns, so all of my problems seem to be gone.”


§3. Teaching Cybernetics instead of Psychology


All this suggests that our ideas about psychology are still developing so rapidly that it wouldn’t make sense for us to select any current “theory of thinking” to teach.  So instead, we’ll propose a different approach: to provide our children with ideas they could use to invent their own theories about themselves!  The rest of this essay will suggest that such ideas could come from engaging children in projects that involve making machines that have ‘lifelike’ behaviors. Such projects would engage and integrate many concepts that we separately treat today, in Physics, Biology, and Mathematics—and in Social Studies, Psychology, and Economics—along with other important principles that don’t fit into any of those traditional subjects.


A flood of new concepts about what machines could do began to emerge in the 1940s, from research in the field called Cybernetics—which soon led to other fields called Control Theory, Computer Science, Artificial Intelligence, and Cognitive Psychology.  Each of those new sciences brought hosts of new ideas about how to build systems that actually do some of the things we use ‘thinking’ to do. So now, in the spirit of Seymour Papert’s “constructionism,” we can enable our children to experiment with networks composed of collections of parts that support many sorts of knowledge-based processes. [[10]] This is important because that’s what we are.


Consequently, it makes sense to expect children to use such ideas to make better ways to think about themselves.  We could start by encouraging them to build simplified models of the sensor and motor systems of animals—and put these creatures into environments to experiment, first with each one’s separate behavior, and then with the social relations in groups of them.


Do we have enough evidence that such experiments will improve those children’s self-images?  Our answer must be ambiguous: in the past we’ve seen many cases in which such experiments appeared to work well, but those LOGO-based projects were never extended to large enough scales. But such projects are far more feasible today, because equipment is now so much cheaper.


Some critics might complain that subjects like Cybernetics and Computer Science do not deserve a special place in the elementary school curriculum, because they are too specialized.  However, Computer Science is not only about computers themselves; more generally, it provides us with a whole new world of ways to understand complex processes—including the ones that go on in one’s own mind.  For until those new techniques arrived (such as programming languages for describing processes, and data structures for representing knowledge), we had no expressions that people could use to articulate—and then communicate—good new ideas about such things. [[11]]


Understanding Systems with Feedback Loops.  While rule-based systems work well to explain much of how other species behave, human beings do much more than directly react to “Situations” or “Stimuli.” For example (as in the GPS systems we mention above), we often make mental comparisons between (1) the real-world situation that we are actually “in” and (2) some “goal-situation” we’d prefer to be in.  Then we can use our descriptions of those differences to decide which of the actions that we could take would be most likely to help us to achieve that goal.


Now, this might seem so obvious as to make you wonder why I’m repeating it.  But myself, I’ve often wondered why this idea wasn’t clear to the early Behaviorists!  For if we adopt this process-based concept of what goals are, this changes our view of our own behaviors: we react less to Situations themselves than to their differences from our goals!


In other words, if we think in terms of goal-based processes, we can see our actions as mainly resulting from “feedback loops” that work to change differences rather than things. Of course it is perfectly obvious that some such loops increase differences, while others tend to reduce them.  It also is not very hard to see that when feedback increases differences, this can lead to exponential growth (which can never be sustained for long, because it will soon exhaust some resource)—and it’s only a little harder to see why, when feedback reduces differences, this can make a process more stable and reliable.  However, it turns out that there is also an astonishing range of phenomena that come from systems that lie between those extremes—and the study of such processes was what led to the new science called Cybernetics.  For example, when feedback-loops have time-delays, this can lead to repetitive oscillations, or even to the complex phenomena that mathematicians describe as “chaotic.”


In any case, virtually every real-world system includes some of those kinds of feedback loops, so it seems to me that everyone should know these ideas—but unless we teach people names for them, they’ll find it hard to think about such things, or even to learn to recognize them. An easy way to learn about feedback loops is to make (or better, to simulate) a turtle-like robot that follows a line on the floor, by detecting when the line is to the right or left, and using that information for steering. [[12]]
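Even without a physical turtle, a child can simulate this loop in a few lines. The sketch below reduces the geometry to a single lateral offset: the robot senses only which side of the line it is on and steers a fixed amount toward it. Such “bang-bang” feedback keeps it near the line but never exactly on it—and adding a sensing delay makes the oscillation visibly worse. (A toy model; the step sizes are arbitrary.)

```python
# A one-dimensional line-follower: x is the robot's lateral offset from the
# line, and the only sensor reading is which side of the line it is on.
# Steering a fixed amount toward the line keeps the robot nearby but never
# exactly on it; a sensing delay makes the overshoot grow.  (Toy numbers.)

def follow_line(steps=400, turn=0.1, delay=0):
    x, heading = 1.0, 0.0
    queue = [1] * delay          # pre-filled queue of delayed sensor readings
    worst = 0.0
    for i in range(steps):
        queue.append(1 if x > 0 else -1)   # sense which side the line is on
        sensed = queue.pop(0)              # ...but act on an old reading
        heading -= turn * sensed           # steer toward the line
        x += heading * 0.1                 # move forward one small step
        if i >= steps // 2:
            worst = max(worst, abs(x))     # record late-run overshoot
    return worst
```

With no delay, the robot oscillates gently around the line; with a delay of several steps, each correction comes too late and the swings grow—exactly the oscillation-from-time-delay phenomenon described above.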


That basic idea is simple enough, but soon the child will find that the system is likely to get off the track (because of excessive overshoot, or by encountering an obstacle).  Then the child can invent various ways to prevent or recover from this (say, by discovering ways to search for it).  Also, in the case of objects that move, one can invent many different ways to try to track and predict their future paths. But this opens the door to another whole world: our child might now want to go further, to make this animal try to predict and pursue another one that also moves—or even one that is trying to escape!  Then, wow, we’ve suddenly entered the social realm!
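The pursuit-and-prediction idea can also be simulated directly. In the sketch below (speeds, distances, and the straight-line target path are all invented), one creature chases another either by aiming at where the target is (“pure pursuit”) or at where it will be by the time the chaser could get there—and leading the target catches it sooner:

```python
import math

# Two moving creatures: a target moving in a straight line, and a faster
# chaser that aims either at the target's current position ("pure pursuit")
# or at the spot where the target will be by the time the chaser arrives.
# All speeds and positions are invented for this sketch.

def chase(lead=True, steps=400):
    tx, ty, tvx, tvy = 0.0, 50.0, 1.0, 0.0    # target: above, moving right
    cx, cy = 0.0, 0.0                         # chaser: speed 1.5 per step
    for step in range(steps):
        tx, ty = tx + tvx, ty + tvy
        dist = math.hypot(tx - cx, ty - cy)
        if dist < 2.0:
            return step                        # close enough: caught
        ahead = dist / 1.5 if lead else 0.0    # estimated time to intercept
        aim_x, aim_y = tx + tvx * ahead, ty + tvy * ahead
        dx, dy = aim_x - cx, aim_y - cy
        d = math.hypot(dx, dy)
        cx, cy = cx + 1.5 * dx / d, cy + 1.5 * dy / d
    return None                                # never caught
```

Both strategies eventually catch this non-maneuvering target, but predicting its future position catches it in noticeably fewer steps—and letting the target maneuver, or try to escape, opens exactly the social realm mentioned above.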


Other Suggestions for Cybernetics Projects


It would take a whole book to begin to discuss the range of new concepts that emerged in the 1950s from combining ideas from Psychology, Cybernetics, and other sciences.  I don’t see any tidy way to organize this huge body of knowledge, so the rest of this paper will only list a number of different approaches for this.  First, I’ll mention a few good sources, each of which suggests many such projects, and then I’ll discuss a few specific ones.


In an elegant book called “Vehicles,” Valentino Braitenberg explained many important “Cybernetic” ideas about machines that use various feedback schemes.  Unfortunately, that book is now hard to find, but many computer programs make it easy to experiment with those ideas: See

http://people.cs.uchicago.edu/~wiseman/vehicles/, or http://instruct.westvalley.edu/lafave/Vehicles_online.html, or http://www.cogs.susx.ac.uk/users/christ/popbugs/intro.html

There is a large and wonderful collection of intriguing projects on Ken Perlin’s homepage at http://mrl.nyu.edu/~perlin/.

Many more projects are described in LogoWorks: Challenging Programs in Logo (Cynthia Solomon, Margaret Minsky, and Brian Harvey, Eds., McGraw-Hill, 1986) and in Computer Environments for Children: A Reflection on Theories of Learning and Education (Cynthia Solomon, MIT Press, 1988).

An old but still relevant introduction to early research in Artificial Intelligence (AI) is presented in http://web.media.mit.edu/~minsky/papers/PR1971.html.

Quite a few more stimulating projects are described by Mitchel Resnick at http://web.media.mit.edu/~mres/, and http://web.media.mit.edu/~mres/papers/Clubhouse/Clubhouse.htm, and http://llk.media.mit.edu/

Also see Idit Harel’s site at http://www.mamamedia.com/

Many other projects can be found by starting at http://wiki.laptop.org/go/Activities.


Robotic Projects. Most present-day robotic projects use wheeled vehicles for mobility, because these are so easy to build.  However, I’d like to see more projects that try to design robots that walk, because they would be more versatile.  It’s not very difficult to make 4-leg walking machines, but 2-legged ones are much harder to stabilize—but once you can keep them from falling down, you can go on to make them jump and run!  And of course, there’s a whole new world of ideas that will come from building micro-worlds in which several such creatures interact; you could either control them all from a central place, or just give them some ways to communicate. For discussions about making robots cooperate, see examples at http://www.cs.cmu.edu/~robosoccer/main/, http://en.wikipedia.org/wiki/Flocking_(behavior), and http://www.lalena.com/AI/Flock/


Simulated vs. Physical Robots.  It is remarkable how engaged people become when they first see mechanical, physical robots.  But eventually, our students learn more by working with computer programs that simulate robots, because then one can better design and control one’s experiments.  One trouble with physical robots is that too much of the students’ time is consumed with trivial bugs that are hard to fix; also, it is hard to control enough of the robot’s environment to make the results reproducible.  However, to make good use of our young people’s time, we’ll need to provide them with ‘virtual’ physical worlds that work quickly and realistically.


However, we also should note that in recent years, increasingly fewer children have developed skills for making and fixing mechanical things, because so few products today are repairable.  So building physical robots can be an excellent way to remedy this growing helplessness—and one can find many robot construction kits with a Google search for “robot kits.”  However, there are advantages to learning to build machines using general-purpose construction sets, such as Erector, TinkerToy, and Fischertechnik—but these are becoming hard to get.  LEGO is more widely available, but is generally less versatile.  The next few years will bring new sorts of fabrication techniques, e.g., 3-D printers, which today are still too expensive for schools. But still, there's nothing wrong with learning some old-fashioned carpentry!


Balancing and manipulation.  Build a moving hand that can balance a stick—first in 1 and then in 2 dimensions.  Try to discover the limitations of such processes. Try to build a juggling machine. I don’t know of any projects that have made machines that can tie a knot or button a shirt.
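For the stick-balancing project, even a linearized toy model shows why feedback is the key. Near upright, a stick’s tilt grows roughly as theta'' = (g/L)·theta + u, where u is the hand’s corrective acceleration; a feedback law that senses both the tilt and its rate can cancel the instability. (The gains below were chosen by hand for this sketch, not derived from any real device.)

```python
# A linearized stick-balancer: near upright, the tilt obeys (roughly)
#     theta'' = g_over_L * theta + u
# where u is the hand's corrective acceleration.  Feedback on the tilt and
# its rate cancels the instability.  (Gains are hand-picked for the sketch.)

def balance(theta=0.3, omega=0.0, g_over_L=9.8, dt=0.01, steps=2000):
    for _ in range(steps):
        u = -(g_over_L + 4.0) * theta - 4.0 * omega   # sense tilt, push back
        omega += (g_over_L * theta + u) * dt
        theta += omega * dt
    return theta          # final tilt: near 0 if the controller succeeded
```

Starting 0.3 radians off vertical, the tilt decays smoothly toward zero; set u = 0 and the same loop shows the tilt growing without bound—one of the “limitations of such processes” that a child can discover by experiment.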


Optimization and problem solving. To recognize a particular object from a collection of several things, one will need some way to determine which of them best fits a certain description.  A powerful method for solving such problems is to describe things in terms of coordinates, parameters, or variables—and then to find “the path of steepest ascent.” Such “hill-climbing” programs can often provide us with numerical ways to “optimize” things when our more clever symbolic methods fail (as they most often will, in everyday life).
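A bare-bones version of hill-climbing fits in a dozen lines: adjust each parameter by a small step, keep any change that improves the score, and stop when no single step helps. (The two-parameter “hill” here is invented, with its peak at (2, −1); real problems may have many local peaks where this method gets stuck—which is itself a useful lesson.)

```python
# Bare-bones hill-climbing: nudge each parameter by a small step, keep any
# change that raises the score, and stop when no single step helps.
# (The smooth two-parameter "hill" below is invented; its peak is (2, -1).)

def hill_climb(score, start, step=0.1, max_iters=1000):
    point = list(start)
    best = score(point)
    for _ in range(max_iters):
        improved = False
        for i in range(len(point)):
            for delta in (step, -step):
                trial = list(point)
                trial[i] += delta
                if score(trial) > best:
                    point, best, improved = trial, score(trial), True
        if not improved:
            break              # a local peak: no single step improves
    return point, best

peak = lambda p: -((p[0] - 2) ** 2 + (p[1] + 1) ** 2)
found, value = hill_climb(peak, [0.0, 0.0])
```

Starting from (0, 0), the climber walks step by step up the hill and stops within one step-length of the peak—a purely numerical way to “optimize” when no cleverer symbolic method is available.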


Visual recognition projects. Recognizing printed and handwritten characters and words.  Why is it easier to recognize an entire word than to recognize the letters that make it up?   Can you make a program that will distinguish pictures of cats from pictures of dogs?   (I don’t know of any such project that has succeeded.) There do exist programs that do well at recognizing faces, but I’ve not yet heard of any program that can look at a picture of a typical room and identify the most common types of objects in view.  One problem is that the appearance of an object will depend on one’s point of view, so the programmer will need to invent some concepts to deal with this.  So these kinds of projects can demonstrate the importance of making appropriate kinds of abstractions.


Processing Voices and other Sounds.  One can also learn a lot about perceptual processes by making programs that classify features of everyday sounds.  Can you design a program to distinguish the tones of a violin, trumpet, and clarinet?  How far can you get toward recognizing such everyday sounds as footsteps, voices, the clinks of cups and plates, coughs, crumpling paper, etc?  Could children invent better tricks for this than the ones that most engineers would use?  Experiment with grammars and parsing procedures. Then take any other project we’ve mentioned here and add an interface to it so that you can control it by using verbal commands.


Performance systems. Some children might like to make a program that can read a Music score—and then actually perform the music. Or try to experiment with phonetics to make a program that reads a story aloud; this would lead to all sorts of ideas about pronunciation, accent, and stress—and up to the threshold of rhetoric.


Puzzles and Game-Playing programs: Write a program to do your math homework (and see if your teacher will accept the results) or write programs to solve such spatial puzzles as Tangrams, Pentominos, Magic Squares, or Crossword-like puzzles, etc.   Many children become so addicted to playing particular games that this can become a serious problem.  However, we might be able to exploit this by encouraging children to try to write “front end” programs to play existing games—or to design and program their own new games.


Cognitive projects:  How difficult would it be to make a system that does some humanlike commonsense reasoning?  Answer: this would be very hard, indeed: in fact, it is at the forefront of current research in Artificial Intelligence.  Nevertheless, it would be interesting for children to see how much they can do using Logic alone, and then to add other forms of reasoning—for example, by using analogies.  And every such project could be improved by enabling it to improve itself, by adding abilities to learn.  For example, each part of a system might be improved by adding a reinforcement-based neural network to it.


Experimenting with “Rule-Based Systems.”  


Theories about Finite-State Machines.  Modern Computer Science includes a great collection of useful ideas about different kinds of processes—many of which can be built into systems that even young children could make and then “play” with.  These could include simple logical reasoning systems, or machines that do arithmetic, or ones that play some popular games.  One productive way to begin would be to experiment with networks of logical ANDs, ORs, and NOTs.  Indeed, a good way to learn about computers might be to begin with such “logical networks” instead of conventional programming languages—because this yields different insights into what computers are and how they work. [[13]] In any case, if our goal is to attract children to technical concepts, this ‘finite state’ approach might turn out to be more productive (and more enjoyable) than making them learn Arithmetic. [[14]]
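As one concrete starting point, here is a one-bit “full adder” wired entirely from ANDs, ORs, and NOTs—the kind of logical network a child could build from gates (or simulate) before ever meeting a programming language. (XOR is composed here from the three basic gates.)

```python
# A one-bit full adder wired only from AND, OR, and NOT gates; XOR is
# composed from those three.  Inputs and outputs are the bits 0 and 1.

AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

def XOR(a, b):   # (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out   # the sum bit and the carry bit
```

Chaining such adders bit by bit yields a machine that does arithmetic with no conventional “program” at all—a different insight into what computers are and how they work.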


Learning “Critical Thinking”.  This may seem somewhat out of place, but I want to recommend encouraging every child to learn a number of magic tricks—because (1) this is a good introduction to “the psychology of perception” and (2) it also leads to important insights about how what we perceive is affected by context, expectation and deception. For children who can’t get lessons from local magicians, it would be good to have some online magic-teaching programs.

§4. How it can help to think of oneself as a Machine


Many people are firmly convinced that to have a mechanical image of oneself must lead to a depressing sense of helplessness—because it means that you’re doomed to remain what you are, and there’s nothing that you can do about this.  However, I’ll argue exactly the opposite: seeing yourself as a kind of machine can be a liberating idea—because whatever you might dislike about yourself, that might be caused by a bug that you could fix! For example, contrast these pairs of self-images:


I’m not good at math. — There are some bugs in my symbolic processes.

I’m just not very smart. — Some of my programs need improvements.

I don’t like this subject. — My current goals need better priorities.

I am confused. — Some of my processes may conflict with others.



If you think of yourself in terms of “I”, then you’ll see yourself as a single thing that has no parts to change or rearrange.  But using “My” can help you to envision yourself as composed of parts, which could enable you to imagine specific changes that might improve your ways to think. In other words, if you can represent your mind as made of potentially repairable machinery, then you can think about remedies. For example, you might be able to diagnose some bugs or deficiencies in the apparatus that you use for everyday functions like these:


Time-management. — Organizing searches. — Splitting problems into parts.

Selecting good ways to represent things. — Making appropriate cognitive maps.

Allocating short-term memory. — Making appropriate Credit Assignments.


It seems clear that some children are better than others at doing this kind of “self-reflection.” Could this be a skill that we could teach?  Perhaps, but this might not yet be practical, because we don’t yet know enough about our human mental machinery.  However, the types of projects this essay recommends could help us to promote that goal, by giving our children more tools to use for constructing better views of themselves!

[1] Thanks to Cynthia Solomon and Gloria Rudisch for many ideas in this essay.

[2] Later researchers discovered that those effects last much longer when rewards come less predictably.  It also turns out that punishment helps to suppress “wrong” responses—but those behaviors can reappear after punishment stops, sometimes even with greater strength—so simply withdrawing reward tends to be more effective. For more details, see http://en.wikipedia.org/wiki/Reinforcement and http://findarticles.com/p/articles/mi_6884/is_/ai_n28321088.

[3] This essay uses the term  "reward-based” instead of the more common word “reinforcement,” which in the infancy of Psychology was meant to suggest that learning strengthens direct “connections” between stimuli and responses.  But this is not an adequate image of what happens in human brains, where different processes work at multiple levels to change different structures and representations, in ways that rarely can be observed from watching a person’s external behavior. See Chapter §5 of The Emotion Machine.

[4] See http://web.media.mit.edu/~minsky/papers/NegExp.mss.txt

[5] Harold G. McCurdy, The Childhood Pattern of Genius. Horizon Magazine, May 1960, pp. 32-38

[6] Some of those rules would be built-in from birth (like "If the light is too bright, then close my eyes") whereas new rules are learned at later times.

[7] See the review by Kenneth MacCorquodale reprinted at http://www.behavior.org/computer-modeling/

[8] See Section §6.3 of The Emotion Machine for more details about the Newell-Simon idea.  Conjecture: it seems likely that, in the brain, such information is represented, at least in part, by the so-called “Mirror Neurons” that V.S. Ramachandran has identified.

[9] Chapter 3 of The Emotion Machine shows why we need to distinguish between having pain, hurting, and suffering.

[10] See http://en.wikipedia.org/wiki/Constructionist_learning

[11] On the surface, Computer Science may seem to resemble Mathematics, but one can see it as complementary: Mathematics provides us with complex ways to look more deeply into seemingly simple things—whereas Computer Science tends to give us simpler ways to think about more complicated things.

[12] See more discussions of feedback systems in http://en.wikipedia.org/wiki/Feedback, http://en.wikipedia.org/wiki/Control_theory, and http://www.well.com/~abs/curriculum.html. Note that the Newell-Simon theory of goals can be seen as a negative feedback process.

[13]  Also, building so-called “Expert Systems” provides another interesting alternative to conventional programming.  See http://en.wikipedia.org/wiki/Expert_system.

[14] In this approach to Mathematics, one doesn’t need to memorize those arithmetic tables that some children take hundreds of hours to learn.  Few adults recollect how long a single hour can seem to a six-year-old child—and this could be one more reason why so many adults loathe mathematics.  See  http://www.triviumpursuit.com/articles/research_on_teaching_math.php