6 Comments
Rob Nelson

Since you bring up Heyck's excellent biography of Simon, let me hijack your comments section to present Simon's diagnosis of this problem:

The word "think" itself is even more troublesome. In the common culture it denotes an unanalyzed, partly intuitive, partly subconscious and unconscious, sometimes creative set of mental processes that sometimes allows humans to solve problems, make decisions, or design something. What do these mental processes have in common with the processes computers follow when they execute their programs? The common culture finds almost nothing in common between them. One reason is that human thinking has never been described, only labeled. Certain contemporary psychological research, however, has been producing computer programs that duplicate the human information processing called thinking in considerable detail. When a psychologist who has been steeped in this new scientific culture says "Machines think," he has in mind the behavior of computers governed by such programs. He means something quite definite and precise that has no satisfactory translation into the language of the common culture. If you wish to converse with him (which you well may not!) you will have to follow him into the scientific culture.

That's from a talk he gave at Johns Hopkins in 1971, available here: https://gwern.net/doc/design/1971-simon.pdf

Rob Nelson

I disagree with Simon's statement that human thinking has never been described. Principles of Psychology does this in ways that translate brilliantly between the two cultures. The problem is that this sort of thing won't get you tenure, which James saw coming and warned about in The PhD Octopus.

Shreeharsh Kelkar

Thanks for the comment! Hijack away! And thanks for linking to that talk -- it's fascinating! And I will definitely take your bait!

I will say that I am not too concerned about whether the idea that "thinking" is information processing is *correct*; it is, ultimately, a lens through which to view the world, a generative metaphor that allows the people who wield it to study the world in interesting ways (by simulations, by writing programs, by doing experiments, etc.). Should they be using that lens? I think that is a battle among psychologists (or education researchers or whoever) that I, as an observer, don't have much to say about. So I wouldn't disagree with you when you say that Simon is wrong to think that "thinking" has never been described; that it has indeed been described in a way that is both scientific and humanistic by, say, William James; but to me, that is a factional battle between experts who purport to study the mind, a battle I take some scholastic interest in, but one that I am fundamentally agnostic on.

What I can do, however, is try to channel Simon in defending what he says in that speech. Heyck's biography does a good job of explaining why the "what if the mind is a program?" metaphor was so attractive to social scientists in the post-WW2 era. Simon and many of his colleagues had witnessed an organizational revolution never seen before. The United States had just won a massive world war partly because its government had successfully carried out a massive managerial exercise in logistics: the US military had been kept supplied with matériel and new technologies (radar) that had made it the most powerful war machine in existence. For Simon and his colleagues, the possibilities of interdisciplinary problem-driven research were immense, and part of it involved thinking about ways of managing large organizations.

Simon says as much in the talk you posted:

"Having explained what I mean by an information-rich world, I am now ready to tackle the main question. How can we design organizations, business firms, and government agencies to operate effectively in such a world? How can we arrange to conserve and effectively allocate their scarce attention?" (and then he speaks about the post office and others; later about the office of the president)

This is where Simon thinks his new scientific definitions of words like "think," "information," and "organization" will eventually lead; this is why the computer program is so important: both conceptually and practically, it helps him think about the problem of managing large organizations, which he takes to be THE defining problem of the age.

This is where he's coming from when he says that thinking has never been described *scientifically*: he wants a scientific language for describing thinking that can help him solve the problem of managing large organizations.

Now, you might argue that this is a dreadfully impoverished way of thinking about thinking. Good organization *should not* be the goal of science. That's all fair enough. But that's where Simon is coming from.

Rob Nelson

This helps fill in the picture of Simon for me as I work my way through Sciences of the Artificial. I think one of the problems we face in making sense of generative AI is that the language of human cognition pulls us toward anthropomorphizing large AI models. Simon is far more aware of this problem than most of the early AI dudes I've read. His distinction between common vs. scientific language is a good start. If words like "attention" and "reason," when used in the context of large AI models, came with asterisks and precise definitions, there would be a lot less confusion.

I can see why he is a favorite among the Stafford Beer revivalists. What else should a newbie to Simon read?

Shreeharsh Kelkar

Well, I haven't read the new cybernetics books yet but should I? Any recommendations?

On Simon, though, the biography by Heyck is very good (but you can also read the two shorter pieces I have linked to, because they make the argument pretty well). I also recommend Phil Agre on Simon's essay "The architecture of complexity": https://pages.gseis.ucla.edu/faculty/agre/simon.html

As to AI and anthropomorphizing, this piece by Phil Agre is very good: https://pages.gseis.ucla.edu/faculty/agre/critical.html (I drew on his book for this post, but this piece is quite good on its own and makes the larger point that anthropomorphizing has always been built into the DNA of the AI enterprise).

Rob Nelson

Thanks, Heyck is on order and Agre is in the queue. Dan Davies's The Unaccountability Machine is the place to start on cybernetics. It is the best book I've read in the past 12 months.

I've started Andrew Pickering's The Cybernetic Brain. So far, I like it quite a bit. It expands the focus beyond Ashby and Beer, and has the kind of absolutely killer quotes that can only come from deep research. For example, William S. Burroughs from Naked Lunch on "thinking machines": "the first stirrings of hideous insect life."
