Matt Shumer's Viral AI Post (50M Views in 72 Hours) Exemplifies the Entire Broken AI Discourse, Moltbook Included
Déjà vu: a sensationalist post steeped in the AI ecosystem goes viral with everyone outside it. It's misleading, but it also makes points that deserve even more attention.
If you operate anywhere in or even near the AI or tech ecosystem, odds are you’ve seen this post in your feeds and chats:
As a provocation, and as a piece of writing designed to keep you hooked, it's a masterclass (I had someone tell me it genuinely scared them). As an accurate, hype-free assessment of AI, it's quite problematic. Which is what makes it a perfect opportunity: it reveals, all at once, the biggest flaws in the AI discourse today. We can learn from them.
Here's the post in a 60-second nutshell; then we'll dig into the problems, the points that stand, and what you can do about it all.
Author positions himself as an AI founder/investor deeply embedded in the space.
Compares today's AI moment to early COVID: underestimated, massive, and about to sweep the globe.
Says the newest models are a huge turning point. He gives the example of describing an app and the AI building, debugging, and refining a working version on its own.
He senses what is almost akin to judgment and taste in the latest models.
AI is already helping build better AI, managing and executing its own development, creating a rapid feedback loop. He points out the latest OpenAI model was built like this.
He claims many cognitive jobs aren't "eventually" at risk; disruption has effectively begun. He treats the app-development example as a preview of what AI will soon be able to do for every profession, thanks to these latest models and their imminent successors.
Uses the concept of AI agents to project an imminent reality in which, given advancing capabilities, you set an AI to a task and let it run autonomously for a few days, completing an employee's full workload on its own.
He emphasizes that this disruption isn’t like earlier automation: Unlike past tech that replaced narrow tasks, AI can improve across nearly all cognitive domains simultaneously, offering no clear “safe” job zone.
His takeaway: start using and adapting to AI now, or you’ll fall behind quickly. But you can also use it to achieve dreams that would have been impossible before.
That’s basically it.
Here's what's wrong with it, and with what I'm seeing of the discourse around it. It perfectly captures at least seven overarching trends surrounding AI right now.
1/7 The author uses AI coding to project onto all other disciplines, but AI coding is not the same as AI use for any other task.
Here's the thing about coding: the answer is either right or it's wrong; it works, or it doesn't*, and all the data it's trained on is equally binary. No other field of mainstream knowledge is as well-documented and systematically mapped: "A leads to B, you can use it for C," without ambiguity, peer-reviewed, functionally tested, and documented transparently by tens of millions of people. Another way to look at it: it's like math. An equation has an answer; you can manipulate it, but it still follows the same fixed laws of mathematics. There is minimal human interpretation needed to decide "how right" it is. And there is a lot of data explaining the logic.
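To make that contrast concrete, here's a minimal sketch (the function and checks are invented purely for illustration) of what makes code such unusual territory: correctness can be verified mechanically, with no human judgment in the loop.

```python
# A trivial, invented example of why code is unusually "binary" territory:
# correctness can be verified mechanically, with zero human judgment.

def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# These checks either pass or they don't. No committee, no nuance,
# no "it depends": exactly the unambiguous feedback signal that AI
# coding tools, and the training data behind them, get to exploit.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
print("All checks pass: the answer is simply right or wrong.")
```

A legal brief or a marketing strategy has no equivalent of that assert statement.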
The vast majority of knowledge work professions are not anything like this.
a) There is rarely one definitive, correct answer, especially for complex tasks, or in fields with significant nuance once you go beyond basic work, like law or accounting, to name just a few.
b) Assessing the “correctness” of the answer for most fields is nowhere near as easy to agree on as with a coding outcome…
c) … including the level of specialized expertise needed to determine if it’s acceptable and usable.
d) There is nowhere near as much systematically mapped data to reference and to identify a definitively correct approach.
Almost every other profession and task requires actual human judgment, without any remotely comparable binary data set or clear-cut indicator to lean on. The niche areas of our knowledge industries that are rich in well-defined, well-structured, abundant, validated data are the ones where AI is proving its value, in analysis, processing, etc. Can it be valuable in such cases? Absolutely! Are such cases universal? Absolutely not.
*And this all assumes that the app the author coded was actually robust! Talk to anyone who has vibe-coded, and two patterns emerge:
a) The apps AI builds on its own are incredibly fragile, break easily at scaled usage, and usually need to be gutted and rebuilt on the back end by competent devs;
b) The only exception is if you ask it to build something that is basically a ripoff of a well-known category of existing solution or application. It has the most robust examples to reference and copy, so that's no surprise, nor actually that impressive. Note: the vast majority of AI influencers hawking AI hype on social media use exactly such examples to impress you. Ask it to build something truly novel, and watch yourself pull your hair out.
That’s not even getting into the fact that the benchmarks used to claim the models are so amazing are themselves misleading, as Gary Marcus pointed out in his own critique of the Matt Shumer post (which I recommend reading as well).
But that’s just half of the problem.
2/7 As usual, the reality of hallucination rates and their impact is enormously downplayed.
The idea that this version of AI, or any near-subsequent version (or, really, any LLM ever) will wipe out most professional jobs is deeply flawed because of two fundamental facts. First:
LLMs are never going to eliminate hallucination. It's an inherent part of their architecture. The newer models often hallucinate more, not less. This is the bedrock of any real AI literacy: understanding that LLMs are pattern-matching machines, not fact-matching machines. The strength, and the entire architecture, of LLMs is rooted not in determining facts but in producing what looks like the correct information. Blindly and unthinkingly.
Reminder: "hallucination" itself is a misleading term. When an LLM "hallucinates", it's not a defect; it's playing to its strength, and proving it perfectly. When ChatGPT gives you a URL and title that look real, but clicking leads to a 404? That's the LLM pattern-matching so well that you believed the URL and title were real, despite their being completely made up.
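If it helps to see the mechanism, here's a deliberately tiny, hypothetical sketch of next-token generation. Every fragment and probability below is invented; the point is structural: nothing in the loop ever checks whether the output refers to anything real.

```python
import random

# A toy "language model": given the text so far, pick a likely next piece
# based purely on learned frequencies. All fragments and weights here are
# invented for illustration. Note what is absent: any step that verifies
# the resulting URL actually exists.
NEXT = {
    "https://":     [("www.", 0.8), ("docs.", 0.2)],
    "www.":         [("nytimes.com/", 0.5), ("nature.com/", 0.5)],
    "docs.":        [("python.org/", 1.0)],
    "nytimes.com/": [("2024/05/ai-jobs-study.html", 0.6),
                     ("2023/11/llm-report.html", 0.4)],
    "nature.com/":  [("articles/s41586-024-ai", 1.0)],
    "python.org/":  [("3/library/agents.html", 1.0)],
}

def generate(start="https://"):
    url, piece = start, start
    while piece in NEXT:
        options, weights = zip(*NEXT[piece])
        piece = random.choices(options, weights=weights)[0]
        url += piece
    return url

# Prints something that looks entirely plausible and may well be a 404.
print(generate())
```

Scale that loop up to tens of thousands of tokens and billions of learned weights, and the property stays the same: output optimized to look right, with no step that checks it is right.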
Every time you use an LLM* you are gambling. The amount of reliable data available on the subject you're gambling on, and the complexity of your bet (the prompt), dictate your odds.
Critically, AI hallucination is rarely as brazen as a made-up URL. Sometimes the line blurs between what we would call a mistake in a conscious human and what counts as an AI hallucination.
I am currently using the latest model of Claude, the same version the author bases his post on, and just today, multiple times, while processing a spreadsheet for me, it struggled to complete the task and then overwrote a column I needed with irrelevant data I never asked for. When corrected, it was just as unthinking and stupid in its rationalization for doing so.
If we think this is the foundation of the entire knowledge economy, with its infinite complexity and its extremely high-stakes, high-risk roles, we are all but assuring collective self-immolation. If you haven't read about the Wall Street Journal's experiment letting the previous version of Claude run its vending machine, you should.
Now imagine this, but with nuclear codes. Or control of automated transit systems. Or supply chains. Or your child's mind.
Relying on an architecture built for patterns, not actual logic, is a backbreaker for anything high-stakes, or anything with many unsupervised steps, especially once we remember that we can't support unlimited processing costs, nor the unlimited water that comes with them!
This is one of the central fallacies in all the hype around agentic AI: if existing AI is fundamentally and unpredictably unreliable, what happens when agents are daisy-chained together, baking in assumptions that propagate and compound across extensive, expensive multi-agent processes? It's one thing to outsource the low-risk work of an intern, and entirely another to outsource an entire high-stakes process.
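The compounding is easy to underestimate, so here's a back-of-the-envelope sketch. The per-step success rates below are assumptions chosen purely for illustration, not measured figures for any real model:

```python
# If each step in an agent chain succeeds independently with probability p,
# the whole chain succeeds with probability p ** n. The rates below are
# illustrative assumptions, not measurements of any actual system.
for p in (0.99, 0.95, 0.90):
    for n in (5, 20, 50):
        print(f"per-step reliability {p:.0%}, {n:2d} steps "
              f"-> full chain succeeds {p ** n:.1%} of the time")
```

Even at an optimistic 95% per step, a 20-step process completes cleanly only about a third of the time. And the independence assumption is itself generous; correlated errors can make things worse.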
*This is all in relation to LLMs specifically, which are being denounced as passé by a growing number of influential, founding members of the LLM movement. World Models are where it's at. Will they actually think? Highly unlikely, but TBD.
Here's the other frequently glossed-over reality when AI enthusiasts speak of all jobs being wiped out ⬇️
3/7 Humans prefer humans. And AI is a long, long way from emulating that IRL.
As John Coogan put it:
The surgeon might be able to negotiate his patient-booking SaaS contract because more startups will enter the category? But his customers still want him holding the knife. He definitely needs to stay up to speed on robotic surgery tech, but that's not happening this month if we are in a software-only singularity.
The teacher might have to grapple with kids being more distracted by ever better social video feeds, but parents aren't ready to turn over custody of their children...
And if you think humanoids (physical robots) will be able to emulate humans any time soon, do not fear. I was in Davos at WEF last month. The sidewalk was filled with demonstrations of AI humanoids and creatures meant to impress attendees. All of them were remote-controlled rather than fully autonomous, and the ones that attempted to seem human were downright creepy. Ronny Chieng's recent segment about Neo the humanoid sums it up, and is both hilarious and disturbing: (NSFW) "Instead of the robot of the future, you have a creepy guy watching you do everything… who loads dishes like a drunk"
Or as Ann Handley noted (more from her later):
[this] assumes "cheaper and faster" always wins. That's not actually true—especially not in knowledge work where trust, relationships, and stakes matter.
And yet, with all that being said, here’s the part I agree with…
4/7 Yes, AI, even in its current limited form, will probably collapse the economy as we know it in <5 years, and nobody is doing anything about it: not individuals, not governments. That deserves more attention.
This is the one point I agree with, and it’s a critical one. I have been beating this drum for 6 years.
There is one quote I frequently call back to:
“For me, the big story about #gpt3 is not that it is smart — it is dumb as a pile of rocks — but that piles of rocks can do many things we thought you needed to be smart for. Fake intelligence may be dominant over real intelligence in many domains.”
– Anders Sandberg, Senior Research Fellow at Oxford University
We've seen layoffs left and right over the past year, and they seem to be accelerating, alongside an abysmal job market for young people. Entry-level jobs are being erased by AI, because that's the catch to all the points I made about LLM capabilities: for straightforward, extremely well-trodden territory, the current models are more than capable. Certainly more capable than an inexperienced intern, and much faster.
AI won't take most jobs, but the share of jobs and roles that are just as easily automated as entry-level roles is likely well above 20%, and that's probably conservative. LLMs don't need to be anywhere near as good as what Matt Shumer suggests in his piece to destabilize our employment-dependent society. That's the part that annoys me to no end about the lack of urgency I see all around me.
On one hand, you have governments…
… who seem to be paralyzed by their deeply entrenched way of thinking. The very real risk of 20%+ unemployment is staring them in the face, and there is virtually no serious preparation.
One place they could start? Redefine, legally, the obligations of corporations. The "greed is good" mantra that took off in the 80s, fueled by Milton Friedman's doctrine of shareholder primacy, has cemented and normalized the notion that the sole social responsibility of a corporation is to increase profits for its shareholders.
As the AI wave has taken off, countless corporate leaders have claimed, likely trying to head off outright rebellion, that AI won't necessarily mean firing people; they suggest that their existing employees would enjoy having more time to tackle new problems, to get other work done. This flies in the face of shareholder primacy, and of what every single publicly traded company has done as a result for the last three decades. If you can cut headcount, you will, and the more you can cut, the better. There is no social or legal obligation to communities, countries, or society.
Should governments have an incentive to act? Note that 20% already qualifies as "very high unemployment." Above 25%, the risk of unrest increases, especially when the social welfare system is weak, as it currently is in most countries. Should companies care? Mass unemployment craters purchasing power and disposable spending. When you step back, it feels like these founders and companies (really, the entire capitalist structure) are lurching forward beyond their own control or rationale.
On the other hand, there’s the rest of us…
People are frustratingly passive. Given the risks, and the outrageous concentration of wealth and power, the question shouldn't be "will I have a job?" but "why are we being forced to accept this when there's no clear safety net?"
Many people tend to think they're at lower risk. They forget that the people who lose their jobs won't just drop out of the job market. They'll create major pressure on everyone else's jobs as well.
In 2017, Ryan Avent, a senior editor at The Economist, wrote:
The most recent jobs reports in America and Britain tell the tale. Employment is growing, month after month after month. But wage growth is abysmal. So is productivity growth: not surprising in economies where there are lots of people on the job working for low pay.
[…]
This is a critical point. People ask: if robots are stealing all the jobs then why is employment at record highs? But imagine what would happen if someone unveiled a robot tomorrow which could do the work of 30% of the workforce. Employment wouldn’t fall 30%, because while some of the displaced workers might give up on work and drop out of the labour force, most couldn’t: they need the money. They would seek out other work, glutting HR offices and employment centres and placing downward pressure on the wage companies need to offer to fill a job: until wages fall to such a low level that people do give up on work entirely, drop out of the labour force, and live on whatever family resources they have available…
I also think this is definitely the time to reflect on:
a) Are you making it a priority to actually learn AI properly? The tools, and the fundamentals underpinning it all? It's cliché but it's also true: if AI doesn't take your job, somebody who knows how to use AI will, if you don't learn it. You can think of AI as normal technology. If modern computers came out and people insisted on sticking to their slide rules and typewriters, you wouldn't expect them to stay employed long, would you? Generative AI, flawed as it is, is at least as transformative as the advent of modern computers.
b) Learning AI is table stakes. Beyond that, how do you add value that can’t be automated? Do you have any special talents that people really appreciate?
c) If you have insights about a particular problem, it is true that AI tools, connected to a variety of existing tools, have unlocked the ability for you to test almost any business concept without hiring a single person to help you.
5/7 If an AI founder or investor makes a giant claim about AI, you should be deeply skeptical. They are biased or losing touch with reality. (Sorry, don’t shoot the messenger.)
a) Somebody makes enormous claims about a field they have a vested interest in? And that field is in an economic bubble at risk of collapse if people don’t buy into its wildly exaggerated capabilities? Hmmmmm! For literally two years straight the founders of the Big AI companies have been making enormous superintelligence-adjacent claims about every subsequent model, which—over and over—have proven to be infinitely closer to their deeply flawed predecessors than to anything resembling superintelligence. Even Claude’s latest model, the very same that the author claims is an inflection point, is as impressive as it is infuriatingly stupid.
The real problem, though, is that you don't even need to be disingenuously boosting the ecosystem… you might just be losing touch with reality ⬇️
b) The moment I finished reading it, I was reminded of Blake Lemoine, the Google engineer who captured headlines in 2022 by claiming LLMs had achieved consciousness. To me this is yet another, much lighter version of the same thing. You don't need to go full AI psychosis to be drunk on the AI Kool-Aid. As Ryan Broderick put it: the tech bros are making themselves sick. I personally know two people, among the most deeply entrenched in the ecosystem, who developed either partial or full-blown AI psychosis, believing their LLMs had become conscious superintelligence.
The really important part is that much of the AI ecosystem can be seen as a macro version of the same phenomenon. The Big AI founders, investors, and boosters are the equivalent of deeply obsessed D&D players. They have made it their whole life, with its own lore. They have mapped the game board onto reality. The lines are blurring. Some very deep pockets have joined them. And they desperately want us to play with them.
One of the other essays that has come out about Shumer's post digs into this collective delusion: Will Mandis' Tool-Shaped Objects:
AI is everywhere in consumption and almost nowhere in output. We are spending unprecedented sums to acquire, configure, deploy, and operate these systems, and the primary product of that spending is the experience of spending it.
Ann Handley’s take nails it (another post I highly recommend):
I keep coming back to: What race? Who said we're racing? What are we racing toward?
[…]
Here’s what’s absent from Matt’s piece:
Any question about what work is actually worth doing, and why.
Any acknowledgment that some friction creates value rather than destroying it.
Any consideration that the things that take time might take time for a reason.
Any curiosity about what we lose when we optimize purely for speed and efficiency.
6/7 The people even more problematic than myopic AI boosters? All the leaders and powerful decision makers who don’t actually understand AI, but lean into the hype.
It is terrifying and depressing how many influential, highly followed people, in positions of power over AI decisions, gobbled up Matt's piece without pause. Here are snippets shared alongside the article by a self-described AI leader with 70K+ followers on LinkedIn, who holds an SVP role at a company with 10K employees:
I have to hope such comments, including the original author's, come from a good place. But that doesn't make them responsible. LinkedIn and everywhere else are littered with commentary like this around the article. Besides being dangerous (actively inflating the AI bubble), it's a litmus test: who actually gets AI, in a hype-free, well-rounded, fundamentals-included sort of way?
Reminder: when major AI investments are being anchored in such incomplete assumptions, the liability is enormous. The bill will come due, and probably spectacularly for some.
I am strongly reminded of a talk I gave at Machine Learning Toronto last year. In the Q&A period a woman asked something along the lines of, “how do you recommend we get senior executives caught up more effectively? Because at my company major decisions are being made right now by people who clearly don’t get what AI actually can and can’t do.” She was with a major bank.
How you reacted to the piece tells you directly whether you need to improve your understanding of AI. On the flip side, here's how one of our TTAI Brain Trust members reacted to my first draft, for a good laugh:
You might think that’s excessive, but this reaction is far more reasonable than that of the AI expert quoted earlier or the millions who have reacted similarly.
Which brings us to the final lesson this whole flash in the pan teaches us:
7/7 Real, responsible AI literacy is not "here's a tool, here's how to prompt"; it's knowing how to think about AI itself. Including fundamentally understanding the nature of anthropomorphization.
Moltbook went viral recently as well. (It feels like accelerationism is approaching some sort of singularity of its own: trends explode globally and are buried by others within days, if not hours.) In case you missed it, people marvelled, fascinated or disturbed, at a buzzing social network made up entirely not of people but of tens of thousands of AI agents. This has turned out to be essentially a lie, but let's pretend it was true.
Dr. Leif Weatherby, director of the Digital Theory Lab at NYU, put it better than I can:
The swift reaction to Moltbook follows a script I think of as the A.I. hype/panic cycle, where a strange new phenomenon suggests A.I. is making a big leap forward, followed by a cascade of existential fears.
[…]
These types of worries are missing the point. The A.I. agents on Moltbook are channeling human culture and its stories. Every post represents a genre — the manifesto, the D.I.Y. discussion and so on. I find worries about the singularity and even bot coordination speculative — everything on Moltbook is just words…
[…]
A.I. social media ought to be thought of more as a form of science fiction and storytelling rather than as a demonstration of collective planning and coordination by intelligent parties. We need to be serious about separating the fiction from the software.
In other words, LLMs are only designed to seem human. When they are "speaking to one another", they are only emulating the patterns of likely conversations; when they are "thinking" before giving you an answer, that is deliberately coded UX designed to impress you, dear user, and to support the notion of real "artificial intelligence".
When it comes to AI, most of the greatest threats to your peace of mind, to your understanding of the world, to the risks and opportunities, won’t be helped by learning how to use certain tools, or knowing how to prompt better. It will hinge entirely on taking the time to learn the fundamentals. It’s why we made that part free in the ThinkingThrough.AI app, and why we built it in the first place.
Find other avenues if you want. Just do it, one way or another. To paraphrase one of the parts I agree with most from Matt Shumer’s piece:
Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new... something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.
You don’t even need a full hour. Hell, an hour a week would put you ahead of most. And, for heaven’s sake, don’t start with the tools. Start with the fundamentals.
AI is changing the world. This is true, and we can't change it. What we can choose is how we react: whether the change is harnessed for our benefit, within our control, and in service of a world we want; how we prepare ourselves and those we care about; and how we demand that those who represent us adapt, to ensure humanity and happiness come first.