An actually interesting use of artificial intelligence, able to accomplish something when put in the hands of expert mathematicians. It definitely took a lot of coaxing it back to doing the task correctly, but it is pretty cool that it can solve problems (even if they are math nerd ones) in a way that is independently verifiable.
In the hands of experts these are definitely useful. I’ve always felt that.
AI should be used to augment humans, not replace them.
Unfortunately, we have idiots making decisions based on what the sycophantic BS machine tells them, without knowing what the job actually does.
Hot take: AI is not the enemy; capitalism is.
Exactly, when you dig into all the complaints people have about this tech, they’re ultimately just symptoms of the underlying capitalist relations.
Yes.
I’d feel a lot less annoyed at my code being used to train the AI (without my consent) if the AI’s benefits weren’t funnelled into private pockets.
I’d feel a lot less annoyed at AI if it wasn’t constantly used to replace jobs and then failing at it. Actually, AI isn’t replacing jobs; it’s being used as an excuse for layoffs while pretending your company is being innovative, so as not to scare off investors.
Without a profit motive there wouldn’t be ChatGPT Health, which is just faking medical skills while being wrong as often as a coin toss, in exchange for money. If I did that I’d be sued for negligence and/or fraud.
You can read Marx’s chapters on technology in Capital, Volume 1. What he describes from his own time, about how tech is developed, for whose benefit, and specifically how it has to exploit workers in order to be useful to capital, matches so closely the development of AI that we are seeing today.
Exactly. It should all be treated as another tool in the toolbelt. To me, it’s reminiscent of when GUI editors came along in IDEs like Visual Studio. It honestly feels the same. Tech CEOs immediately clamor to say that tech jobs are dead, and the market for engineers dips. Some engineers freak out and refuse to learn the technology while others learn what it is. Those who learn it and use it as a tool elevate themselves and move faster. There is a non-trivial group of people who refuse to use the GUI tools on principle. Eventually the CEOs realize they made a mistake, and then more work comes in faster than ever before. Over the years and decades, everyone starts using the tech as a tool.
It’s the same with AI. It’s following the exact same pattern to a T. CEOs are starting to realize that it’s just a tool that can be used, but it needs people at the helm who know how to use it. Devs are split: for some it’s accelerating their work because they know what it’s doing; others see a useless boondoggle and refuse to use it, but are probably only hurting themselves, because every interview is asking “are you using AI?” I’d say we’re finally starting to normalize its usage as a tool.
Totally agree with your overall point.
That said, I have to come to the defense of my terminal UI (TUI) comrades with some anecdotal experience.
I’ve got all the same tools in Neovim as my VSCode/Cursor colleagues, with a deeper understanding of how it all works under the hood.
They have no idea what an LSP is. They just know the marketing buzzword “IntelliSense.” As we build out our AI toolchains, it doesn’t even occur to them that an agent can talk to an LSP to improve code generation because all they know are VSCode extensions. I had to pick and evaluate my MCP servers from day one as opposed to just accepting the defaults, and the quality of my results shows it. The same can be done in GUI editors, but since you’re never forced to configure these things yourself, the exposure is just lower. I’ve had to run numerous trainings explaining that MCPs are traditionally meant to be run locally, because folks haven’t built the mental model that comes with wiring it all up yourself.
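For anyone curious what “an agent can talk to an LSP” actually means on the wire: the base protocol is just JSON-RPC framed with a Content-Length header, which any toolchain can speak over a local server’s stdin/stdout. A minimal sketch in Python (the file URI and hover position here are made-up illustration values, not tied to any particular server):

```python
import json


def encode_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC payload with the LSP base-protocol header."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body


def decode_lsp_message(data: bytes) -> dict:
    """Parse one framed LSP message back into a JSON-RPC payload."""
    header, _, rest = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])  # bytes after the header
    return json.loads(rest[:length])


# Example: a hover request an agent might send to a language server
# running locally, to pull type/doc info into its context.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/hover",
    "params": {
        "textDocument": {"uri": "file:///tmp/example.py"},
        "position": {"line": 0, "character": 4},
    },
}
framed = encode_lsp_message(request)
```

In practice you’d write `framed` to the language server process’s stdin and read the framed response back, but the point is that there’s no magic behind “IntelliSense”: it’s a plain local protocol you can wire an agent into yourself.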
Again, totally agree with your overall point. This is more of a PSA for any aspiring engineers: TUIs are still alive and well.
Remember when they built a $50 billion server farm in my back yard to run a GUI IDE?
I think your attitude toward AI in the abstract is pretty good, and it matches my experience in tech, but there’s also something much larger going on here.
That’s fair. A huge difference is how much money is behind the crazy hype machine, and how desperate they are to keep the hype going. Most actual tech people I know, work with, and am connected with in the field have normalized tech usage, knowing when to use it and when not to. It’s only the tech bros at the top who are still like “Yeah bro it’s totally going to get rid of labor bro we’re all gonna have androids who do all the work bro just trust me just 200 billion more dollars bro I promise”
The problem is always techbros. Large Language Models, Deep Learning, these kinds of things are potentially valuable when put to work in the right arena.
A techbro will never put them in the right arena. It’s always a false promise built on flimsy reputational credit.
Techbros are the result of the capitalist mode of production. You have to get rid of capitalism to get rid of techbros.
🙏 Inshallah