In the early days, back in the '50s, people like von Neumann and Turing didn't believe in symbolic AI; they were far more inspired by the brain. Unfortunately, they both died much too young, and their voices weren't heard.
And in the early days of AI, people were completely convinced that the representations you need for intelligence were symbolic expressions of some kind. Not quite logic, but something like logic, and that the essence of intelligence was reasoning.
What's happened now is, there's a completely different view, which is that a thought is just a great big vector of neural activity. Contrast that with a thought being a symbolic expression. And I think the people who thought that thoughts were symbolic expressions just made a huge mistake.
What comes in is a string of words, and what comes out is a string of words. And because of that, strings of words are the obvious way to represent things, so they thought what must be in between was a string of words, or something like a string of words. I think what's in between is nothing like a string of words. The idea that thoughts must be in some kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels: pixels come in and pixels come out, but what's in between isn't pixels.
And so I think thoughts are just these great big vectors, and those big vectors have causal powers. They cause other big vectors, and that's utterly unlike the standard AI view that thoughts are symbolic expressions.
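To make that picture a little more concrete, here is a minimal sketch of one big vector causing another: a vector of neural activities pushed through a weight matrix and a nonlinearity, as in a single layer of a neural network. Everything in it, the vector size, the random weights, the variable names, is an illustrative assumption of mine, not anything from the talk itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "thought" as a big vector of neural activities (the size is arbitrary).
thought = rng.standard_normal(1024)

# Connection weights; random here purely for illustration. In a trained
# network these would be learned.
W = rng.standard_normal((1024, 1024)) / np.sqrt(1024)

# One vector "causing" the next: a linear map followed by a nonlinearity
# (ReLU), the basic step of a neural network layer.
next_thought = np.maximum(0.0, W @ thought)

print(next_thought.shape)  # (1024,) -- one big vector has produced another
```

Note the contrast the sketch is meant to highlight: nothing in between is a symbol or a rule, just one pattern of activity giving rise to the next.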