Let's #TalkConcurrency with Carl Hewitt
by Erlang Solutions
The last in our trio of concurrency one-on-one interviews is with Carl Hewitt, designer of the logic programming language Planner. Also known for his work on the Actor model of computation, Carl does a fantastic job of describing how he and others came up with Planner, and where he feels concurrency will go from here.
About Carl Hewitt
Carl has been highly influential in the development of logic, functional and concurrent programming, with his most recognisable work being Planner, a logic programming language, and the Actor model of computation.
During his educational years, Carl was at MIT (Massachusetts Institute of Technology), which is famed for its work in science, computing and engineering. There, he gained his PhD in applied mathematics in 1971, and he continued to work in the Department of Electrical Engineering and Computer Science until 2000, when he became emeritus.
Below is an edited transcript of Carl’s #TalkConcurrency video…
Carl: My name is Carl Hewitt. I originally got started at the MIT Artificial Intelligence Laboratory with professors Marvin Minsky, Seymour Papert, and Mike Patterson, graduate students Terry Winograd, Gerry Sussman, and Eugene Charniak, and all the hackers. We were very excited about the field, and I needed a thesis topic. I went to talk to Marvin about my bad ideas and he would say, “Well, I’ve always been interested in plans, assertions, and goals”. Then I went to my office to work some more. When I went back to talk more about my bad ideas, he said, “I’m kind of interested in plans, and assertions, and goals”. Then it occurred to me that, by golly, I could make a programming language based on plans, assertions and goals which would be much more high level than LISP. So I designed Planner, which was used by Terry Winograd in his famous blocks world SHRDLU demo. We thought that we had really made some progress until we discovered all the things that Planner couldn’t do because the PDP-10 was such a tiny sequential machine.
A number of ideas were floating around for new computational models. For example, there were Petri nets and capability systems. I knew that Planner was inadequate and was working on extensions to add concurrency. In November 1972, Alan Kay gave a seminar at MIT about his new Smalltalk 71 programming language, which made use of message passing. To somehow unify all these disparate models of computation, we had a eureka moment: “We can unify all of these things into one concept, namely, that of an Actor. Actors can do all of digital computation. There’s just one fundamental abstraction, one fundamental primitive idea.” If Actors could be done right, it was going to be enormously important, because it would enable large numbers of machines to communicate with each other as well as providing enormous concurrency on a single machine.
It has taken an enormous amount of time to figure out precisely what an Actor is and how to make Actors performant. Now that we can create the hardware to make these gazillions of Actors perform, we can create Scalable Intelligent Systems for the first time in history. Creating such systems was the dream that attracted us to the MIT AI Lab. It was the dream of Marvin Minsky, Allen Newell, Herbert Simon, and John McCarthy. Unfortunately, they didn’t live to see it.
The funny thing is, they thought the task was to make an artificial human. Turing proposed this in his Turing test. We can’t do that for quite a while, but we can make Scalable Intelligent Systems for doing things like pain management, and a whole bunch of other things, that are scalable in the sense that they’re not bounded in any important dimensions. There are no hard barriers to improvement.
For the first time in history, using massive concurrency and things like Actors, we can create Scalable Intelligent Systems for important human endeavours. It’s been a long journey of developing these kinds of systems. Developing technology for massive, inconsistency-robust ontologies was necessary because Intelligent Systems will have vast amounts of inconsistent information. A tiny example from history is that a photon is both a wave and not a wave. It’s both a particle and not a particle. That’s an inherent inconsistency that the physicists have learned to live with through quantum mechanics.
Scalable Intelligent System ontologies are going to be chock full of inconsistencies. Analogously to the physicists with their quantum mechanics, we have invented technology for dealing with pervasive inconsistencies. The only way to make Scalable Intelligent Systems performant is through massive concurrency using many cores on a chip. Modularity comes from each Actor keeping its own local state and processing messages concurrently along with the zillions of other Actors on the same chip.
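The core idea Carl describes, an Actor as private state plus a mailbox of messages processed one at a time, can be sketched in a few lines. This is an illustrative toy in Python, not Hewitt's formal model and not how production actor runtimes (such as Erlang's BEAM) schedule millions of lightweight processes; the `Actor` and `Counter` classes here are hypothetical names for the sketch.

```python
import queue
import threading

class Actor:
    """Toy actor: private state, a mailbox, and sequential message
    processing on a dedicated thread. Real actor runtimes multiplex
    many actors over a few cores instead of one thread per actor."""

    def __init__(self):
        self._mailbox = queue.Queue()          # asynchronous message buffer
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        # Non-blocking send: the ONLY way to interact with an actor.
        self._mailbox.put(msg)

    def _run(self):
        # Messages are handled one at a time, so state needs no locks.
        while True:
            msg = self._mailbox.get()
            if msg is None:                    # sentinel: shut down
                break
            self.receive(msg)

    def receive(self, msg):
        raise NotImplementedError

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

class Counter(Actor):
    """Example actor whose local state no other code can touch directly."""
    def __init__(self):
        self.count = 0                         # private state
        super().__init__()

    def receive(self, msg):
        if msg == "inc":
            self.count += 1

counter = Counter()
for _ in range(1000):
    counter.send("inc")
counter.stop()
print(counter.count)  # 1000: every message was processed exactly once
```

Because each actor serialises its own message handling, many such actors can run concurrently without shared-memory locking, which is the modularity property Carl points to.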
Tens of thousands of Actor cores on one chip could be possible using carbon nanotubes and/or nanoscale vacuum-channel transistors. China’s Minister of Science has projected that there’s going to be a revolution in Intelligent Systems by 2025. The challenge is: who’s going to do it? I think that China is fully capable of doing it: they have the commitment from the top, engineering talent, and a technology supply base. I don’t see any reason they can’t do it. Who else is going to do it? We wonder. Thank you.
If you want to learn more about Carl’s EE380 colloquium talk, “Scalable Intelligent Systems: Build and Deploy by 2025”, check out more of his work at the Stanford University website.