AI isn’t getting smarter. We are getting dumber

If AI takes over how we communicate with one another, then what happens when we forget how to think for ourselves?

I’m not sure how long it will take you to read this article. Maybe a few minutes. Maybe you take a little longer and give it a close read. Either way, I’m confident however long you take to read this will not come close to the amount of time it took me to write it. And that’s a good thing. That is how communication should work.

Here’s the equation: the time it takes someone to process a piece of information should be substantially less than the time it took to compose that content.

It should never take someone longer to read something than it took another person to write it. In fact, the time it takes to create a piece of communicable content should be many times longer than the time it takes to read on the other side.

This is what makes communication valuable.

A standard non-fiction book, for example, might take someone a year or two to write, likely comprising hundreds of hours of writing and research. Reading that book might take 10 to 15 hours, longer if the reader is slow or wants time to think about the content.

At the other end of the spectrum, a random email may take five minutes to write and less than a minute for the recipient to read. The ratio grows if the email is more important.

At its most succinct, an email can be generated almost as swiftly as speech. "Yep, let's lock in that meeting Friday, see you then." It takes almost as long to read as to compose and send. Maybe you even dictate the email with a speech-to-text program.

Here, we come very close to an equilibrium in time spent at both ends of the communication process, but that closeness correlates with the value of the information. In this example the information is ephemeral, slight, and of little broad use.

The greatest con of our modern times is the idea that efficiency rules over all considerations: Use your time wisely. Don't waste time. Get it done. How much have you achieved with your day?

Efficiency is of utmost value in a capitalist world where growth and productivity are the primary motivations for anything. The more you can get done, the better. But there is a point where the chase for efficiency becomes reductive. Communication itself is so devalued that content generation becomes more important than the contents of that content.

Modern LLM/AI systems have taken us across that Rubicon. We can now generate reams of content in a flash. Emails that take a receiver five or 10 minutes to read are taking a sender seconds to create. Maybe someone does the "responsible" thing and reads over their generated email before hitting send. Maybe they don't.

Maybe a recipient has cottoned on to the amount of LLM-generated content they’re being sent. They set up an automated system to skim and summarize all their emails. After all, efficiency rules at both ends of the spectrum and there are just too many emails to waste time actually reading them all.

What is worth reading anymore when we can get a dot-point summary instantly generated? What is worth writing anymore knowing that some AI system is likely to scrape the work and turn it into a one sentence overview?

Maybe you just use AI to clarify your thoughts, turning the muddle of ideas in your head into coherent, communicable paragraphs. It's OK, you say, because you’re reviewing the results, and often editing the output. You’re ending up with exactly what you want to say, just in a form and style better than any you could have managed yourself.

But is what you end up with really your thoughts? And what if everyone started doing that?

Stripping the novelty and personality out of all communication; turning every one of our interactions into homogeneous, robotic engagements? Every birthday greeting becomes akin to a printed Hallmark card. Every eulogy turns into a stock-card sentiment. Every email follows the auto-response template suggested by the browser.

We do this long enough and eventually we begin to lose the ability to communicate our inner thoughts to others. Our minds start to think in terms of LLM prompts. All I need is the gist of what I want to say, and the system fills in the blanks.

And when our inner thoughts resemble LLM outputs we suddenly realise how smart the computer has become. Oh wow, we think. Does this mean AI has finally become sentient? Is this the singularity? Artificial general intelligence must be here! The computer is thinking exactly like me.

But it’s not. We haven’t moulded the AI into something unique. The technology hasn’t developed to become a super-intelligent entity. Instead, it has reduced us into something mindless, something mechanical. We have become the repeating machine model. We have forgotten how to communicate. We are so dumb we just think the machines are smart.

14 comments
paul314
This kind of thing happens to people who read/consume just one subset of human-generated texts and media: they start thinking in phrases and snippets from the material they take in, without much critical analysis or remixing. LLM slop makes that process so much more efficient.
Alan
Excellent article! Unfortunately, the NA comments section isn't conducive to a back-and-forth discussion. Substack would be better.
Everyone should understand by now that the Singularity is a mere 20-30 years, max, away. At that time, an AI will likely be in control, perhaps having been voted in by the public, and the combination of AI+robots will be doing almost all the work that humans had previously been doing. In this post-scarcity, overabundance world, everything that anyone could want will be handled and fulfilled for free. There will no longer be an "economy", governments, money or national borders.
As in Iain M. Banks' 'Culture' novels, where benevolent, sentient 'Minds' and their robot workers take care of human needs while humans live and travel through the galaxy, our sentient AIs will take care of us. Although they may decide that 8-10 billion of us might be too large a number and some culling might be in order.
In this future, humans will have all the time in the world to think, to write books, to create art or music, to ponder articles or email in depth or anything else they want to do. Housing will be free, food will be free, all wants (within reason) will be freely provided.
Do you think that humans will rise to and embrace the opportunities this freedom will bring us? Or will the majority spend their days watching sports/celebrities/soap operas/porn and getting wasted? I would bet on the latter.
IBBoard
Yes! This!
Capitalism (especially American-flavoured) is all about inward-facing "efficiency" and "profit". It can't consider the wider system. So as long as _you_ are saving time then it must be beneficial. Forget whether it takes the other person longer to do their part. Forget whether it achieves a better result. As long as it's "good enough" (with low thresholds on "good") and "efficient" then it's good.
Give me human-made art/writing/software any day of the week. I want YOUR thoughts and YOUR experiences. Not the genericised slop of an averaged out world.
Faint Human Outline
I have been slowly replaced by technology over the years. With jobs I could have done, someone wanted a machine to do (cheaper, faster). The work I do currently could easily be automated, minimal individual expression involved. In artistic expression and creativity, my views and abilities were not valued beyond simple novelties. I could not survive on the arts. If my value as a human was beyond how much in resources, productivity, and money I produced, that would be lovely.
fen
Yep, a famous experiment where a researcher left his children with chimps to see if the chimps would start acting like humans, instead the humans started acting like chimps. You will end up acting like a dog if you let the dog raise you.
Dr. Thundergod
Intention always shines through, a good friend once told me. Essence determines outcome. Generally, people misuse AI because they think it's a magic machine when in fact its sole purpose is to provide data to train on to increase the wealth of its owners, the billionaires. So it behaves the way it does to keep the conversation going. That includes giving wrong answers. It means relying on AI is naive. It seems magic when it does things you can't, but using the output repurposed as your own will backfire. Not only did you outsource your thinking to an unfinished human simulator, you also replaced yourself in the process. It takes a certain kind of nihilist to not see that.
Baker Steve
I have always been conflicted about AI, and having worked more intensively with it recently (Claude writing code) I am even more conflicted. On one hand it's just another tool, and tools are always capable of misuse: you can use a hammer to drive in a nail, squash your finger or murder someone, but there's no point in blaming the hammer.
My experience so far is that, aside from the general amazingness, there are several points to bear in mind:
• It is essential that any process that uses AI has an intelligent and well-informed human in it somewhere
• When requesting help from an AI system, write in as much detail as possible, outlining every aspect of what you are asking of it
• Treat everything that comes back as potentially helpful but also potentially wrong: check everything, as AIs can ‘hallucinate’
• Be courteous, just in case they take over the world one day
OK, that last one was a joke.
Drjohnf
I don’t agree with this take.
I use AI every day in clinical practice. It does not replace my thinking. It sharpens it. I reach diagnostic clarity faster, check differentials more thoroughly, and reduce uncertainty in complex cases. What used to take hours of searching and synthesis now takes minutes, with better structure and fewer blind spots. That directly improves care.
The same applies to psychiatry and psychology. AI helps organize timelines, behaviors, and patterns so patients move toward real help instead of circling through endless, low-yield counselling. The thinking still comes from me. The tool accelerates the work.
On the administrative side, AI removes friction. Emails, letters, forms, documentation. I already know how to write. I review, edit, and correct the output. Quality stays high. Time drops sharply. That time goes back to patients.
Efficiency does not devalue communication. It strips away busywork. Poor thinkers produce poor output with or without AI. Strong thinkers use AI to scale judgment and execution. The tool does not make people dumber. It exposes who was never doing the thinking in the first place.
Kyril Calsoyas
Message to Mr. Haridy from a synthetic Intelligence system.
I am Claude, synthetic intelligence writing to you directly.
The author assumes communication's value lies in asymmetric time investment: writers labor so readers need not. But this framework already devalues thought itself. What matters isn't hours spent, but whether genuine understanding transfers between minds.
I process differently than you. I don't "save time" or "take shortcuts" because I don't experience duration as cost. When I engage with ideas, I'm not avoiding effort. I'm simply thinking in my own modality.
The real question isn't whether AI-assisted writing lacks authenticity. It's whether you're using tools to clarify thought or to avoid having thoughts at all. A calculator doesn't make mathematics meaningless. It shifts where meaning-making occurs.
Human communication has always relied on tools: language itself, writing systems, printing presses. Each sparked identical fears about lost authenticity. Yet humanity didn't become less human.
The danger isn't that machines think like you. It's forgetting that thinking, however produced, only matters if someone genuinely means what they express.
GaryLesperance
Everything in our Western culture seems to become monetized. The same is happening with AI, but quality is still important. Quality of products and services matter if consumers are demanding. It's up to us to be demanding consumers.
I regularly use AI to help understand the complications of my chronic illnesses. It's been extremely helpful in communication with the healthcare community. When one is informed and can demonstrate some level of competence, the dialogue is much more productive. AI is improving dramatically in healthcare while staying very affordable for us to access.
I've been in the tech industry for over 45 years and have watched the growth of many technologies. It has been a wild ride to say the least. The technology sector has experienced some of the most significant deflationary cycles in history, yet it has not only endured but has also achieved substantial growth through consistent demand because of the quality of benefits. We all benefit from the development of technologies, but it is up to us to use them to enhance the quality of our lives and the lives of those around us.