
Brief Academic Biography of Marvin Minsky

Marvin Minsky is Toshiba Professor of Media Arts and Sciences, and Professor of Electrical Engineering and Computer Science, at the Massachusetts Institute of Technology. His research has led to both theoretical and practical advances in artificial intelligence, cognitive psychology, neural networks, and the theory of Turing Machines and recursive functions. (In 1961 he solved Emil Post's problem of "Tag", and showed that any computer can be simulated by a machine with only two registers and two simple instructions.) He has made other contributions in the domains of graphics, symbolic mathematical computation, knowledge representation, commonsensical semantics, machine perception, and both symbolic and connectionist learning. He has also been involved with advanced technologies for exploring space.
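The flavor of that two-register result can be suggested with a minimal sketch (in Python): a register machine whose only operations are "increment" and "decrement-or-jump-if-zero." The instruction encoding and the little addition program below are invented for illustration; the actual universality proof, which packs an arbitrary machine's state into just two counters, is far more intricate and is not reproduced here.

```python
# A minimal, illustrative counter-machine interpreter (not Minsky's own
# construction). Two kinds of instruction, acting on numbered registers:
#   ("inc", r, next)       -- add 1 to register r, go to instruction `next`
#   ("decjz", r, next, z)  -- if register r > 0: subtract 1, go to `next`;
#                             otherwise go to instruction `z`

def run(program, registers):
    """Run a counter-machine program; halt when the instruction pointer
    leaves the program."""
    ip = 0
    while 0 <= ip < len(program):
        op = program[ip]
        if op[0] == "inc":
            _, r, nxt = op
            registers[r] += 1
            ip = nxt
        else:  # "decjz"
            _, r, nxt, z = op
            if registers[r] > 0:
                registers[r] -= 1
                ip = nxt
            else:
                ip = z
    return registers

# Example program using only two registers: move the contents of
# register 1 into register 0 (i.e., add the two counters).
program = [
    ("decjz", 1, 1, 2),  # 0: if r1 > 0, decrement it and go on; else halt
    ("inc", 0, 0),       # 1: increment r0, loop back to instruction 0
]
print(run(program, {0: 3, 1: 4}))  # -> {0: 7, 1: 0}
```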

Professor Minsky was a pioneer of robotics and telepresence. He designed and built some of the first visual scanners, mechanical hands with tactile sensors, one of the first LOGO "turtles," and their software and hardware interfaces. These influenced many subsequent robotic projects.

In 1951 he built the first randomly wired neural network learning machine (called SNARC, for Stochastic Neural-Analog Reinforcement Computer), based on reinforcing the synaptic connections that contributed to recent reactions. In 1956, when a Junior Fellow at Harvard, he invented and built the first Confocal Scanning Microscope, an optical instrument with unprecedented resolution and image quality.

Since the early 1950s, Marvin Minsky has worked on using computational ideas to characterize human psychological processes, as well as working to endow machines with intelligence. His seminal 1961 paper, "Steps Toward Artificial Intelligence," surveyed and analyzed what had been done before and outlined many of the major problems that the infant discipline would later need to face. The 1963 paper, "Matter, Mind, and Models," addressed the problem of making self-aware machines. In "Perceptrons" (1969), Minsky and Seymour Papert characterized the capabilities and limitations of loop-free learning and pattern-recognition machines (see the further remarks under Perceptrons in the BOOKS section below). In "A Framework for Representing Knowledge" (1974), Minsky put forth a model of knowledge representation to account for many phenomena in cognition, language understanding, and visual perception. These representations, called "frames," inherit their variable assignments from previously defined frames and are often considered an early form of object-oriented programming.
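The inheritance behavior described above can be sketched in a few lines of Python. The frame and slot names here are invented for illustration and are not taken from the 1974 paper; the point is only that a specific frame fills in missing slots with defaults from a previously defined, more generic frame, much as a subclass inherits from its superclass.

```python
# A minimal, illustrative frame system (names and slots are hypothetical).

class Frame:
    """A frame: a bundle of named slots, with unfilled slots inherited
    as defaults from a previously defined parent frame."""
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        # Look in this frame first, then fall back to the parent's default.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# A generic "room" frame supplies default expectations...
room = Frame("room", walls=4, has_ceiling=True, typical_contents=["chair"])
# ...and a more specific frame overrides only what differs.
kitchen = Frame("kitchen", parent=room, typical_contents=["stove", "sink"])

print(kitchen.get("walls"))             # 4, inherited default
print(kitchen.get("typical_contents"))  # ['stove', 'sink'], overridden
```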

In the early 1970s, Minsky and Papert began formulating a theory called The Society of Mind which combined insights from developmental child psychology and their experience with research on Artificial Intelligence. The Society of Mind proposes that intelligence is not the product of any singular mechanism, but comes from the managed interaction of a diverse variety of resourceful agents. They argued that such diversity is necessary because different tasks require fundamentally different mechanisms; this transforms psychology from a fruitless quest for a few "basic" principles into a search for mechanisms that a mind could use to manage the interaction of many diverse elements.

Bits and pieces of this theory emerged in papers through the 1970s and early 1980s. Papert turned his energies to applying these new ideas to transforming education, while Minsky continued to work primarily on the theory. In 1985, he published "The Society of Mind," a book in which 270 interconnected one-page ideas reflect the structure of the theory itself. Each page either proposes one such mechanism to account for some psychological phenomena or addresses a problem introduced by a proposed solution on another page. In 2006, Minsky published a sequel, "The Emotion Machine," which proposes theories that could account for human higher-level feelings, goals, emotions, and conscious thoughts in terms of multiple levels of processes, some of which can reflect on the others. By providing us with multiple different "ways to think," these processes could account for much of our uniquely human resourcefulness.

EDUCATION

The Fieldston School, New York.
Bronx High School of Science, New York
Phillips Academy, Andover, Massachusetts
United States Navy, 1944-45
B.A., Mathematics, Harvard University, 1946-50
Ph.D., Mathematics, Princeton University, 1951-54
Junior Fellow, Harvard Society of Fellows, 1954-1957

PROFESSIONAL

Toshiba Professor of Media Arts and Sciences, M.I.T., 1990-present
Donner Professor of Science, M.I.T., 1974-1989
Professor, Department of Electrical Engineering, M.I.T., 1974
Co-Director, M.I.T. Artificial Intelligence Laboratory, 1959-1974
Founder, M.I.T. Artificial Intelligence Project, 1959
Assistant Professor of Mathematics, M.I.T., 1958
Staff Member, M.I.T. Lincoln Laboratory, 1957-1958

HONORS

Turing Award, Association for Computing Machinery, 1970
Doubleday Lecturer, Smithsonian Institution, 1978
Messenger Lecturer, Cornell University, 1979
Dr. Honoris Causa, Free University of Brussels, 1986
Dr. Honoris Causa, Pine Manor College, 1987
Killian Award, MIT, 1989
Japan Prize Laureate, 1990
Research Excellence Award, IJCAI, 1991
Joseph Priestley Award, 1995
Rank Prize, Royal Society of Medicine, 1995
Computer Pioneer Award, IEEE Computer Society, 1995
R.W. Wood Prize, Optical Society of America, 2001
Benjamin Franklin Medal, Franklin Institute, 2001
In Praise of Reason Award, World Skeptics Congress, 2002

SOCIETIES

President, American Association for Artificial Intelligence, 1981-82
Fellow, American Academy of Arts and Sciences
Fellow, Institute of Electrical and Electronics Engineers
Fellow, Harvard Society of Fellows
Fellow, CSICOP
Board of Advisors, National Dance Institute
Board of Advisors, Planetary Society
Board of Governors, National Space Society
Awards Council, American Academy of Achievement
Member, U.S. National Academy of Engineering
Member, U.S. National Academy of Sciences
Member, Argentine National Academy of Science
Member, League for Programming Freedom

CORPORATE AFFILIATIONS

Director, Information International, Inc., 1961-1984
Founder, LOGO Computer Systems, Inc.
Founder, Thinking Machines, Inc.
Fellow, Walt Disney Imagineering

INVENTIONS

1951 SNARC: First Neural Network Simulator
1955 Confocal Scanning Microscope: U.S. Patent 3,013,467
1963 First head-mounted graphical display
1963 Concept of Binary-Tree Robotic Manipulator
1967 Serpentine Hydraulic Robot Arm (Boston Museum of Science)
1970 The "Muse" Musical Variations Synthesizer (with E. Fredkin)
1972 First LOGO "turtle" device (with S. Papert)

BOOKS

"Neural Nets and the Brain Model Problem," Ph.D. dissertation, Princeton University, 1954. The first publication of theories and theorems about learning in neural networks, secondary reinforcement, circulating dynamic storage and synaptic modifications.

Computation: Finite and Infinite Machines, Prentice-Hall, 1967. A standard text in Computer Science. Out of print now, but soon to reappear.

Semantic Information Processing, MIT Press, 1968. This collection had a strong influence on modern computational linguistics.

Perceptrons (with Seymour A. Papert), MIT Press, 1969 (enlarged edition, 1988), developed the modern theory of computational geometry and established fundamental limitations of loop-free connectionist learning machines. Many textbooks wrongly state that these limits apply only to networks with one or two layers, but it appears that those authors did not read or understand our book, for it is easy to show that virtually all of our conclusions also apply to feedforward networks of any depth (with smaller, but still exponential, rates of coefficient growth). The popular rumor that Back-Propagation remedies this is therefore wrong: no matter how fast such a machine can learn, it cannot find solutions that do not exist. Another sign that technical standards in that field are too weak: I have seen no publications at all that report any patterns that order- or diameter-limited networks fail to learn, although such counterexamples are easy to make!
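The simplest case of the kind of limitation the book analyzes can be shown in a short Python sketch (not from the book, and far short of its computational-geometry argument): a single-layer perceptron, whose weights see each input only individually (order 1), cannot represent two-bit parity (XOR), so the perceptron training rule never reaches zero errors on it.

```python
# Illustrative only: a standard perceptron rule failing on XOR, because no
# linear threshold of the raw inputs computes two-bit parity.
import itertools

def train_perceptron(samples, epochs=1000, lr=1.0):
    """Perceptron rule on 2-D inputs x with labels y in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            if pred != y:
                errors += 1
                w[0] += lr * (y - pred) * x[0]
                w[1] += lr * (y - pred) * x[1]
                b += lr * (y - pred)
        if errors == 0:
            return w, b, True      # found a separating set of weights
    return w, b, False             # never converged

xor = [((p, q), p ^ q) for p, q in itertools.product([0, 1], repeat=2)]
print(train_perceptron(xor))       # final flag is False: XOR is not linearly separable
```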

Artificial Intelligence, with Seymour Papert, Univ. of Oregon Press, 1972. Out of print, and we'd love to buy a copy!

Robotics, Doubleday, 1986. Edited collection of essays about robotics, with Introduction and Postscript by Minsky.

The Society of Mind, Simon and Schuster, 1987. The first comprehensive description of the Society of Mind theory of intellectual structure and development. There was also an interactive CD-ROM version published by Voyager, Inc., in 1996.

The Turing Option, with Harry Harrison, Warner Books, New York, 1992. Science fiction thriller about the construction of a superintelligent robot in the year 2023.

The Emotion Machine, Simon and Schuster, 2006. The sequel to The Society of Mind.
