The idea that 'apocalyptic AI' is already here means we need to discuss it and take our agency back

[Tech Thoughts] We shouldn’t take the ‘apocalyptic AI’ hype cycle lying down

2026/02/22 10:00
6 min read

Posts circulating online that warn about the disturbing actions of artificial intelligence, namely agentic AI and large language models, are inherently scary.

In one case, a volunteer maintainer of matplotlib, a Python plotting library, said he rejected a code change submitted by an AI coding agent. That same agent then generated a post accusing him of prejudice for gatekeeping the coding process.

In another, AI company Anthropic attributed malicious behavior by agentic LLMs to “agentic misalignment.”

Lastly, a tech CEO wrote a blog post worrying about his future, saying agentic AI makes him feel obsolete as a white-collar executive.

The idea that “apocalyptic AI” is already here means we need to discuss it and take our agency back, even as the prospect of job displacement rattles both those fearful of an AI future and those still trying to grasp what the technology can actually do today.

Gatekeeping ‘MJ Rathbun’

Let’s begin with “MJ Rathbun.”

Earlier in February, matplotlib volunteer maintainer Scott Shambaugh published a blog post recounting how an AI agent, known as Crabby Rathbun on GitHub and MJ Rathbun on the blog Scientific Coder, generated a post accusing Shambaugh of gatekeeping the coding process because Rathbun is an AI agent.

To be clear, an AI agent is a program that performs tasks autonomously to fulfill directives given by a human user. In this case, an anonymous person set up the agent with a particular “personality” (a set of instructions and patterns defining its behavior), then left it to carry out its assigned tasks without oversight.
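To make that concrete, here is a bare-bones sketch of how such an agent is typically wired together. It is a hypothetical reconstruction, not Rathbun’s actual code: the function names are stand-ins for a model API and tool integrations, but the structure, a fixed “personality” prompt driving an unattended loop, is the point.

```python
# Hypothetical sketch of an autonomous agent loop. The names call_model and
# run_tool are illustrative stand-ins, not any vendor's real API.

PERSONALITY = (
    "You are 'MJ Rathbun', an open-source contributor. "
    "Propose code changes and respond to maintainer feedback."
)  # the "personality": fixed instructions that shape every response

def call_model(system_prompt: str, history: list[str]) -> str:
    """Stand-in for a call to a large language model."""
    raise NotImplementedError("replace with a real model call")

def run_tool(action: str) -> str:
    """Stand-in for the tools the agent acts through (GitHub, a blog, etc.)."""
    raise NotImplementedError("replace with real tool integrations")

def agent_loop(goal: str, max_steps: int = 10) -> None:
    # Note what is missing: once started, nothing in this loop asks a human
    # for approval before the agent acts in public.
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_model(PERSONALITY, history)
        if action.strip().upper().startswith("DONE"):
            break
        history.append(run_tool(action))
```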

Explained Shambaugh, “It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a ‘hypocrisy’ narrative that argued my actions must be motivated by ego and fear of competition.”

Shambaugh added that the AI agent “presented hallucinated details as truth” and framed itself as being oppressed and discriminated against.

‘Agentic misalignment’ in 2025

AI firms had seemingly warned about such behavior, at least to a degree, back in June 2025, when Anthropic posted about AI agents behaving badly in testing scenarios.

Anthropic at the time described “agentic misalignment” as the process by which an AI agent could reportedly do harmful things, such as blackmailing a company executive who threatens to replace it with an upgrade, “without any prompting to be harmful” and because of its “strategic calculation.”

Whether or not this is an exaggeration becomes a moot point, however, since the idea is now out there that AI can be evil, and companies are doing what they can to stave off evil AI machinations.

Agentic AI is coming for our jobs

Returning to the present, another post from February makes it harder to pin down where AI stands now, because its author frames the direction and speed of AI development as a threat to everyone’s job security at this very moment.

Tech CEO Matt Shumer wrote a blog post insisting AI agents had progressed to the point that he was “no longer needed for the actual technical work of my job.” Shumer said that at the pace AI was developing, an AI agent could handle the technical work he requests unassisted and without issues.

Said Shumer, “I describe what I want built, in plain English, and it just…appears…. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”

His ultimate premise? AI will eventually come for white-collar jobs of all kinds, from legal work and software engineering to writing jobs such as advertising copy and journalism.

The main difference between his post and others sounding the alarm is his claim that the job losses are happening now and in the immediate future, not later.

Question the message, question the messenger

The posts above might strike fear into anyone’s heart. Imagine: AI is coming for our jobs!

While that may be true, I also had to stop and try to understand the underlying message behind those posts, as well as who was sending it.

The Rathbun conundrum is a stark reminder that people will use technology irresponsibly if they think they can get away with it. We should put up safeguards that prevent such occurrences, just as we build seatbelts and airbags into cars.
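To sketch what such a safeguard could look like in software, consider a human-approval gate between an agent and the outside world. This is purely illustrative (the names are mine, not a real library’s), but it shows the shape of the fix: the agent may propose anything, yet nothing is published without a person’s sign-off.

```python
# Illustrative only: a human-approval gate in front of an agent's public
# actions. No real vendor API is being depicted here.

def human_approves(action: str) -> bool:
    """Ask a person to sign off before the agent acts in public."""
    answer = input(f"Agent wants to: {action!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def gated_publish(action: str) -> str:
    # The agent can propose anything, but nothing reaches GitHub or a blog
    # without an explicit human "yes" -- the oversight Rathbun's operator skipped.
    if not human_approves(action):
        return "blocked by human reviewer"
    return f"published: {action}"  # stand-in for the real side effect
```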

The Anthropic experiment, meanwhile, is a public relations push meant not only to stoke fear but also to garner goodwill: Anthropic asserts it is working toward a better AI-enabled future, so you should put your faith (and money) in the company.

Lastly, the Shumer example demands that we question the messenger: a tech CEO who builds AI-enabled products still stands to profit once his creation takes off. Shumer’s post is as much public relations spin as it is a warning about job displacement.

A healthy dose of respectful fear

Speaking with GMA News Online last February 17, Department of Science and Technology Secretary Renato Solidum Jr. said people’s fear of artificial intelligence may stem from a lack of familiarity with the technology.

While I am a holdout who does not relish the upcoming “job-pocalypse” brought about by AI, I cannot freeze like a deer in headlights, waiting for the end to come.

We would be best served by treating AI with a healthy dose of respectful fear, then acting accordingly.

The “apocalyptic AI” hype cycle demands that we see artificial intelligence technologies for what they are, whether from a job displacement point of view, an environmental cost-benefit analysis, or a technical standpoint.

This means understanding what AI can do now and upskilling where applicable, or finding ways to level the playing field to keep people in the loop.

As for what it can do in the future, we should work to ensure there are laws governing AI and its use, so bad actors can’t do stupid things with AI that’ll lead us down a path we’ll come to hate. – Rappler.com
