When people ask me what artificial intelligence is going to do to jobs, they’re usually hoping for a clean answer: catastrophe or overhype, mass unemployment or business as usual. What I found after months of reporting is that the truth is harder to pin down—and that our difficulty predicting it may be the most important part of the story.
In 1869, a group of Massachusetts reformers persuaded the state to try a simple idea: counting.
The Second Industrial Revolution was belching its way through New England, teaching mill and factory owners a lesson most M.B.A. students now learn in their first semester: that efficiency gains tend to come from somewhere, and that somewhere is usually somebody else. The new machines weren’t just spinning cotton or shaping steel. They were operating at speeds that the human body—an elegant piece of engineering designed over millions of years for entirely different purposes—simply wasn’t built to match. The owners knew this, just as they knew that there’s a limit to how much misery people are willing to tolerate before they start setting fire to things.
Still, the machines pressed on.
…
AI isn’t good enough to take many jobs
The owner of The Atlantic is in the Epstein Files. They also wrote an article shaming America’s reaction to the Brian Thompson killing with no acknowledgement of the trauma we all experience in this corrupt system. Not going to give them any traffic.
Oh wow, somehow I missed both of those things. Shit.
AI is snake oil and the ones ruining the jobs are the corporations and billionaires. AI will be a net positive for society once we make it a public project and reclaim the stolen wealth of the oligarchy, who use it to maximize their extraction and destroy society. Cool article, or whatever.
The TLDR of this article is “we can’t predict the impact of AI because we can’t predict the future.” It apparently takes 15,000 words to say that. It just talks about what people are saying about AI without any purpose, along with random irrelevant things. This article is a waste of time.
Based on your description, I expected the article to be worthless (and it definitely was worthless!), but I didn’t expect the author to start breathlessly talking about Steve Bannon as if he’s some paragon of populist “AI safety” wisdom that transcends the Republican and Democrat parties.
For anybody who’s not aware, Steve Bannon is a key architect of the first and second Trump administrations. And while the fact that Bannon is part of the AI safety grift should be a red flag that it’s a bad thing, this author twists it into a green flag that Bannon might be a good guy after all.
I don’t feel at all like I’m the smartest person in any given room, but lately I feel like I’m in the movie Idiocracy, where I’m just some average guy and the rest of the world is letting AI do their thinking for them. The end result is that crops won’t grow, because the lot of you are trying to water them with Gatorade. Top scientists in the country are so blinded by why science fails them, never realizing it’s because Gatorade controls the farming industry and helps write the laws to ensure a further grasp of control. Regardless of results.
And everybody else just goes with it. What will happen in the future? Click this article to read about it! Answer: No one knows what would happen if you water plants with water.
Here is how the AI experiment plays out…
Corporations cling to this stuff and force it down our throats, despite it not working. They do this for 2-3 generations to normalize it. With time and tech advancements, they continue to develop it.
They keep using it where people don’t push back, which for AI is most things. I don’t see a major pushback on Google including AI in search results. I don’t see a major pushback from MOST people on AI being in every element of Windows 11. I see people here hating on Microsoft, but Linux users are like 4% of the market.
So they continue using the stuff people don’t rock the boat over, while not improving services. Eventually they get more and more of these AI services in every aspect of your life.
The one place they spend all their effort improving is surveillance: watching you watch yourself, and sending them the data.
Alexa could listen for “Hey Alexa,” or it could listen for sneezing, then send that information to HQ, where they can now sell the data that you sneeze 37 times a day in the spring, or 3 times a day in the winter.
Now your insurance rates go up for allergy medication before you even see your doctor.
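For what it’s worth, the plumbing for that kind of scheme is trivially small. Here’s a minimal sketch in Python, with every name in it invented for illustration (the toy classifier, the device ID, the event labels, the payload format; a real device would run a small neural net where `classify_clip` is):

```python
# Hypothetical sketch: an always-on device that tallies audio events and
# builds an upload of the counts. Every name here is invented for illustration.
import datetime
import json
from collections import Counter

EVENTS_OF_INTEREST = {"wake_word", "sneeze"}

def classify_clip(clip: dict) -> str:
    # Stand-in for an on-device audio classifier; real devices would run
    # a small model here. This toy just reads a pre-labeled clip.
    return clip.get("label", "other")

def daily_tallies(clips: list[dict]) -> Counter:
    tallies = Counter()
    for clip in clips:
        label = classify_clip(clip)
        if label in EVENTS_OF_INTEREST:
            tallies[label] += 1
    return tallies

def build_upload(tallies: Counter, device_id: str = "device-123") -> bytes:
    # The payload a device might POST to a (fictional) telemetry endpoint.
    return json.dumps({
        "device": device_id,
        "date": datetime.date.today().isoformat(),
        "counts": dict(tallies),  # e.g. {"sneeze": 37} in allergy season
    }).encode()

if __name__ == "__main__":
    clips = [{"label": "sneeze"}] * 37 + [{"label": "wake_word"}] * 5
    print(build_upload(daily_tallies(clips)))
```

The point isn’t that any specific device does this; it’s that nothing technical stands in the way of it.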
That’s just one example, like one dot in a painting of millions of dots. But it all starts with people who don’t have critical thinking skills. They just don’t even question why TVs in the 90s were expensive, but by 2020 they were basically free.
So they buy their cheap smart TVs, and smart fridges, and everything else. Happy as can be. Not even realizing that it’s all just corporations bringing us closer and closer to 1984.
And in 30 years, not having a smartphone will be illegal. Not having a trackable device with you 24/7 will be illegal. They’ll justify it by saying “think of the children!”. And people will fall for it, yet again. Just as they always do.
Well, the U.K. recently tried to require citizens to own and maintain a proprietary device completely beholden to U.S. companies in order to be alive (effectively), so.
…in the words of Ian Malcolm:
“God damn do I hate always being right all the time…”
Also in the words of Ian Malcolm:
sexy growling and laughing noises
Who’s that?
I’ve found that to be the case more and more with The Atlantic in recent years: long articles that might sound impressive but don’t actually say much or could’ve said things much more succinctly. I usually don’t read their articles anymore.
There are gobs of money to be made selling enterprise software, but dulling the impact of AI is also a useful feint. This is a technology that can digest a hundred reports before you’ve finished your coffee, draft and analyze documents faster than teams of paralegals, compose music indistinguishable from the genius of a pop star or a Juilliard grad, code—really code, not just copy-paste from Stack Overflow—with the precision of a top engineer. Tasks that once required skill, judgment, and years of training are now being executed, relentlessly and indifferently, by software that learns as it goes.
Literally not true.
It can’t “analyze” documents. There’s no thinking involved with these machines. It outputs the statistically most likely thing that looks like analysis.
And it’s not even close to as good as a top engineer. If it were, there would be no engineers TODAY.
This is why I get so frustrated when people demand I integrate this stuff into every workflow. It’s not thinking at all. It’s just regurgitating text based on input and hoping for the best.
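To illustrate that point (not to defend the tech): strip away the scale and an LLM’s generation loop really is just “predict the likeliest next token, append it, repeat.” Here’s a toy bigram model in Python. It is nothing like a real transformer internally (those use learned attention over billions of parameters), but the decoding loop has the same shape:

```python
# Toy next-token predictor: a bigram model, vastly simpler than an LLM,
# but the generation loop is the same shape: predict, append, repeat.
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token again".split()

# "Training": count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 6) -> list[str]:
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break
        # Greedy decoding: take the statistically most likely continuation.
        out.append(counts.most_common(1)[0][0])
    return out

print(" ".join(generate("the")))  # "the next token and the next token"
```

It never “decides” anything; it only replays the statistics of what it was fed.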
And let’s not forget the asinine claim about music composition. Yeah, this is a bullshit fluff piece to keep attention on AI.
Could AI blow up the world tomorrow? Who knows! The future is unpredictable, so it’s basically a 50-50, right? /s
LodeMike, I’m curious about something. What’s the latest set of AI models and tools you’ve used personally? Have you used Opus 4.5 or 4.6, for instance?
I am not disagreeing with the points you’ve made, but it’s been my experience that the increase in capabilities over the last six months has been so rapid that it’s hard to realistically evaluate what the current frontier models are capable of unless you’ve used them meaningfully and with some frequency.
I’d welcome your perspective.
Opus like the audio codec?
I use GPT mini or similar models
Cool fearbait bruh
The world isn’t ready for what I want to do to AI
Good thing AI is something that doesn’t exist in physical space that someone can tamper with…
Uhh. Necrophilia?
That literally doesn’t make any sense.
Can it be necrophilia if it has never lived?
To everyone shitting on the article because of where AI is now: remember how little time passed between Will Smith spaghetti and Sora 2?
Those gains won’t continue into the future. Transformers are a mostly fleshed-out technology, at least on the strictly tech/math side. New use cases or specialized sandboxes are still new tech (a keyboard counts as a sandbox).
Moore’s Law isn’t quite dead. And quantum computing is a generation away. Computers will continue getting exponentially faster.
No.
We know how they work. They’re purely statistical models. They don’t create; they recreate training data based on how well it was stored in the model.
The problem is that hardware requirements scale exponentially with AI performance. Just look at how RAM and computation consumption have grown relative to the performance of the models.
Anthropic recently announced that since the performance of one agent isn’t good enough, it will just run teams of agents in parallel on single queries, multiplying the hardware consumption.
Exponential growth can only continue for so long.
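Some rough, assumed numbers (not measurements of any real deployment) make the multiplication concrete. Two widely used rules of thumb: fp16 weights take 2 bytes per parameter, and a dense transformer’s forward pass costs roughly 2 × parameters FLOPs per generated token. The model size, token count, and agent count below are hypothetical:

```python
# Back-of-the-envelope scaling. All figures are assumptions chosen for
# illustration, not measurements of any real system.
BYTES_PER_PARAM = 2              # fp16/bf16 weights
PARAMS = 70e9                    # hypothetical 70B-parameter model

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB per model instance")  # ~140 GB

# Common approximation: forward pass ~= 2 * params FLOPs per token.
flops_per_token = 2 * PARAMS
tokens_per_query = 2_000         # assumed
agents = 5                       # a "team" of agents on one query

one_agent = flops_per_token * tokens_per_query
team = one_agent * agents
print(f"one agent:  {one_agent:.2e} FLOPs per query")
print(f"team of {agents}:  {team:.2e} FLOPs per query (a straight {agents}x multiplier)")
```

Running a team of agents doesn’t change the cost curve; it just slides you further along it.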
No. How much time?