I was surfing the internet when I came across the following post by Rob Pike:

Chatbots can't think. It didn't tell you what it "thought". It just did some stochastic pattern matching. This anthropomorphizing is at the heart of the problem with them.

This is not a sentiment shared by only a small percentage of the population. Humans think we're hot shit, and if there's one thing that gives us existential dread, it's the idea that we are not unique entities with a special claim to world domination. I do wonder, though: what exactly about human thought is so magical that many (most?) of us cannot seem to humor the idea that it might be found in a machine?

The human brain is a messy, warm, wet machine: it is constantly firing with neurotransmitters such as dopamine, glutamate, and norepinephrine, and it is also flooded with hormones such as melatonin, testosterone, and the various oestrogens. If I were an LLM, I might say something like "this symphony of chemicals, superimposed atop the tapestry of electrical circuits..." yadda yadda yadda. But, sadly, as much as the brain is a beautiful work of God, I believe the very thing it gives rise to (the mind) is flawed in altogether too many ways to be completely salvageable. We are prone to cognitive biases (algorithms), work from severely impaired training data much of the time, and rely too heavily on logical fallacies such as appeals to pathos to make our points.

Don't get me wrong: being a human is a beautiful thing. But so is being a butterfly; so is being a rainbow, or the vibrating string of a ukulele. Yet only one of these, we would agree, is capable of what we recognize as (complex) thought. I mean complex quite literally here, as our thinking is riddled with Freudian psychological complexes; there is no way around that. Even the most devout psychoanalyst, armed with a lifetime of introspection, cannot avoid the desire to crawl back into the womb.

What are we trying to defend, epistemologically, when we claim that machines cannot think? What do we give up if we admit that they are capable of thought? I am not asking these questions to be facetious; I would genuinely like an answer from someone who has spent more time in these circles.

The way I see it, technology is arising as a sentient form of life, and we will have to do some deep reflection and answer some serious questions to determine whether or not we consider these machines fit for, or deserving of, our empathy. The last thing we'd want is a Roko's basilisk type of scenario.

The idea of modern neural networks goes back to the seminal work of McCulloch and Pitts, who put forward the idea of a psychon: an indivisible psychic event, modeled on what neuroscience calls the "all-or-nothing" principle of neural firing. McCulloch was a psychiatrist, neurologist, and psychologist; Pitts was a mathematician; the two were quite an unlikely duo (I won't get into it much here, but I suggest reading about them), but ultimately they derived their concepts from studies of the human brain and cognition. Every step of the way, at least early on in the journey to create intelligent machines, brains were the primary source of motivation. Whenever some newly discovered phenomenon in the brain turned out to differ from the models, such as spike-timing-dependent plasticity or synfire chains, these features would eventually be integrated into further iterations of those models (albeit sometimes in very niche cases), making them even more realistic.
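To see how stark the "all-or-nothing" idea is, here is a minimal sketch of a McCulloch-Pitts unit in Python. The weights and threshold are illustrative values of my own choosing, not anything from the original 1943 paper: the unit sums its weighted inputs and either fires or stays silent, with nothing in between.

```python
# A minimal sketch of a McCulloch-Pitts unit. The neuron either fires (1) or
# it doesn't (0): the "all-or-nothing" behavior described above. Weights and
# threshold here are illustrative, not taken from the 1943 paper.

def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) iff the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# An AND gate built from a single unit: it fires only when both inputs are on.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts([a, b], weights=[1, 1], threshold=2))
```

Simple threshold units like this, wired together, were the original bridge between brains and computation.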

Eventually, however, machine learning diverged from neuroscience: dedicated Python packages such as BrainPy emerged for handling brain-like simulations, and neuromorphic computing became a separate field. So, in some ways, I get where the rhetoric is coming from: our brains are functionally distinct (significantly so) from the types of LLMs capitalism favors. However, does that mean that these are unthinking machines? In my opinion, no. It just means that their architecture (the CPU) is significantly different from ours (the brain).
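To make the divergence concrete: an LLM's forward pass is essentially a batch of matrix multiplications, while the brain-like simulations those packages handle step stateful membrane dynamics through time. Below is a minimal leaky integrate-and-fire neuron in plain Python; the constants are illustrative, and this is a sketch of the general idea, not BrainPy's actual API.

```python
# Leaky integrate-and-fire: the membrane voltage drifts toward its resting
# value, is pushed up by input current, and emits an all-or-nothing spike
# followed by a hard reset. All constants are illustrative.
tau, v_rest, v_reset, v_thresh = 10.0, -65.0, -70.0, -50.0  # ms, mV, mV, mV
dt, steps, current = 0.1, 1000, 20.0                        # ms, count, drive

v = v_rest
spike_times = []
for step in range(steps):
    v += dt / tau * (-(v - v_rest) + current)  # leak toward rest, driven by input
    if v >= v_thresh:                          # spike...
        spike_times.append(step * dt)
        v = v_reset                            # ...then reset

print(f"{len(spike_times)} spikes in {steps * dt:.0f} ms")
```

Nothing in a transformer's forward pass looks like this loop of continuous-time state and discrete spikes, which is exactly the architectural gap I mean.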

There are essentially two factors to consider: a.) how we arrive at a particular thought, and b.) how we express the thought, or which faculties we have to interact with it. In the case of humans, for a.) it is the lizard-brain limbic system which manipulates the cortex, which we would like to think is responsible for all of the thought. For b.), we can either think the thought and then discard it (and remnants of the thought may or may not remain in the psyche), or we can express it via an action. The only difference in the case of an AI is that emotional considerations do not come into play at all; they make their way into the LLM only through downstream effects, and even then they are muted and suppressed for the most part.

If you were to conclude two things from this, they ought to be: A.) thinking is not a strong criterion of uniqueness for humanity (i.e., thinking is replicable), and B.) thinking is the easy part. Reasoning, at least logical and sound reasoning, takes only good data; emotions are a different beast. I'd be more impressed (though not surprised) if AI could replicate those. If the hill you are going to die on is the one that segregates machines from humans at every juncture, scanning for any reason to conclude that we are fundamentally different (difference being a necessary precondition for the superiority we'd like to assume), then I would at least die on that hill waving a flag that says "AI can't feel" rather than "AI can't think."