The article title is clickbait; here is the full article:
Wondering what your career looks like in our increasingly uncertain, AI-powered future? According to Palantir CEO Alex Karp, it’s going to involve less of the comfortable office work to which most people aspire, and more old-fashioned grunt work with your hands.
Speaking at the World Economic Forum yesterday, Karp insisted that the future of work is vocational — not just for those already in manufacturing and the skilled trades, but for the majority of humanity.
In the age of AI, Karp told attendees, a strong formal education in any of the humanities will soon spell certain doom.
“You went to an elite school, and you studied philosophy; hopefully you have some other skill,” he warned, adding that AI “will destroy humanities jobs.”
Karp, who himself holds humanities degrees from the elite liberal arts institutions of Haverford College and Stanford Law, will presumably be all right. With a net worth of $15.5 billion — well within the top 0.1 percent of global wealth owners — the Palantir CEO has enough money and power to live like a feudal lord (and that’s before AI even takes over).
The rest of us, he suggested, will be stuck on the assembly line, building whatever the tech companies require.
“If you’re a vocational technician, or like, we’re building batteries for a battery company… now you’re very valuable, if not irreplaceable,” Karp insisted. “I mean, y’know, not to divert to my usual political screeds, but there will be more than enough jobs for the citizens of your nation, especially those with vocational training.”
Now, there’s nothing wrong with vocational work or manufacturing. The global economy runs on these jobs. But in a theoretical world so fundamentally transformed by AI that intellectual labor essentially ceases to exist, it’s telling that tech billionaires like Karp see the rest of humanity as their worker bees.
Somehow the AI revolution never seems to threaten those who stand to profit the most from it — just the 99.9 percent of us building their batteries.



I feel like so much LLM-generated code is bound to deteriorate code quality and blow up the context size to such an extent that the LLM is eventually gonna become paralyzed.
I do agree, LLM-generated code is inaccurate, which is why they have to have the “throw it back in” stage and a human eye looking at it.
They told me their main concern is that they aren’t sure they’ll understand the code the AI is spitting out well enough to properly audit it (which is fair). Then, of course, any issue with the code will fall on them, since it’s their job to give the final say of “yes, this is good.”
At that point they’re just the responsibility circuit breaker, put there to get the blame if things go wrong.