How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t manage this consistently with CRUD apps and people think that this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?
…
I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.
This. Many of these tools are good at incredibly basic boilerplate that’s just a hint beyond what, say, a wizard would generate. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.
There’s a reality to these tools: they’re helpful at times, but hardly transformative at the level the grifters go on about.
I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would; I only care that they get a working answer and that they can explain the code to me.
The problem was fairly basic: something like randomly generate two points and find the distance between them, and we had given them the details (e.g. the distance is a straight line). They used AI, which went well until it generated the Manhattan distance instead of applying the Pythagorean theorem. They didn’t correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). They then refactored the code, used AI again, got the same mistake, didn’t catch it, and we ended up pointing it out again.
Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they’d need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they’d be ready to ship it.
They didn’t pass the interview.
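For the curious, the whole exercise fits in a few lines. Here’s a rough sketch in Python (my own names and structure, not the candidate’s actual code), including the kind of sanity check that would have caught the Manhattan mix-up:

```python
import math
import random

def random_point(lo=-100.0, hi=100.0):
    """Return a random 2D point as an (x, y) tuple."""
    return (random.uniform(lo, hi), random.uniform(lo, hi))

def distance(p1, p2):
    """Straight-line (Euclidean) distance between two points."""
    return math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)

# A check like this catches the Manhattan version immediately:
# Manhattan distance for these points would be 3 + 4 = 7, not 5.
assert distance((0, 0), (3, 4)) == 5.0
```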
And that’s my general opinion of AI: it’s probably making you stupider.
I’ve seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. “if the technology knows it, I don’t need to know it.”
And I feel like, for the kind of people whose grasp of technology, knowledge, and education is so juvenile that they would believe such a thing, AI isn’t making them dumber. They were already dumb. What the AI does is make code they don’t understand more accessible, which is to say, it’s just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.
Wait wait wait so… this person forgot the Pythagorean theorem?
Like that is the most basic task. It’s `d = sqrt((x1 - x2)^2 + (y1 - y2)^2)`, right? That was off the top of my head. This person didn’t understand that? Do I get a job now?
I have seen a lot of programmers talk about how much time it saves them. It’s entirely possible it makes them very fast at making garbage code. One thing I’ve known for a long time is that understanding code is much harder than writing it, and so asking an LLM to generate your code sounds like it’s just creating harder work for you, unless you don’t care about getting it right.
Yup, you’re hired as whatever position you want. :)
Our instructions were basically what I described above: randomly generate two points and find the straight-line distance between them. The actual phrasing was a bit different (we framed it as a top-down game, but same gist). The AI generated the Manhattan distance (`abs(x2 - x1) + abs(y2 - y1)`), probably due to other clues in the text, but the instructions were clear. The candidate didn’t notice what it was doing, we pointed it out, and then they asked for the algorithm, which we provided.
Our better candidates remember the equation like you did. But we don’t require it, since not all applicants finished college (this one did). We’re more concerned about code structure, asking the right questions, and the software design process, but math knowledge is cool too (we do a bit of that).
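To give a sense of how far off the Manhattan version is (illustrative numbers, nothing from the actual interview):

```python
from math import hypot

p1, p2 = (0, 0), (1, 1)

euclidean = hypot(p2[0] - p1[0], p2[1] - p1[1])      # ~1.414
manhattan = abs(p2[0] - p1[0]) + abs(p2[1] - p1[1])  # 2

# The Manhattan version overestimates every diagonal,
# by up to a factor of sqrt(2) for a pure diagonal move.
```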
College? The Pythagorean theorem is mid-level high school math.
I did once talk to a high school math teacher about a graphics program I was hacking away on at the time, and she was surprised that I actually use the stuff she teaches. Which is to say that I wouldn’t expect most programmers to know it exactly off the top of their head, but I would expect they’ve been exposed to it and can look it up if needed. I happen to have it pretty well ingrained in my brain.
I use them like Wikipedia: it’s a good starting point and that’s it (and this comparison is a disservice to Wikipedia).
Yes, and then you take the time to dig a little deeper and use something agent-based like aider or crewai or autogen. It’s amazing how many people are stuck in the mindset of “if the simplest tools from over a year ago weren’t very good, then there’s no way there are any good tools now.”
It’s like seeing the original Planet of the Apes and then arguing against how realistic the Apes are in the new movies without ever seeing them. Sure, you can convince people who really want unrealistic Apes to be the reality, and people who only saw the original, but you’ll do nothing for anyone who actually saw the new movies.
Also, a lot of people who are using AI have become quiet about it of late exactly because of reactions like this article’s. Okay, you’ll “piledrive” me if I mention AI? So I won’t mention AI. I’ll just carry on using it to make whatever I’m making without telling you.
There’s some great stuff out there, but of course people aren’t going to hear about it broadly if every time it gets mentioned it gets “piledriven.”
Pretty much me. I’m using it everywhere, but I’m usually not interested in mentioning it to internet trolls.
You can check my profile if you want, or not. Seven months ago I baked my first loaf of bread; I got the recipe from ChatGPT. Over those seven months I’ve been going back and forth with it on recipes and techniques, and as of this month I have a part-time gig making artisan breads for a restaurant.
There is no way I could have progressed this fast without that tool. Keep in mind I have a family and a career in engineering, not exactly an abundance of time to take classes.
I mentioned this once on Lemmy and some boomer shit started screaming about how learning to bake with the help of an AI didn’t count and I needed to buy baking books.
And if you need examples of people being piledriven, you can browse my history a bit. :) Since I’m not doing anything with AI that would suffer “professionally” from backlash (such as might happen to an artist who becomes the target of anti-AI witch-hunters), I’ve not been shy about talking about the good things AI can do and how I use it, or about calling out biased or inaccurate arguments against various AI applications. As a result I get a lot of downvotes.
Fundamentally, I think it’s just that people are afraid. They’re seeing a big risk from this new technology of losing their jobs, their lifestyles, and control over their lives. And that’s a real concern that should be treated seriously, IMO. But fear is not a good cultivator of rational thought or honest discourse. It’s not helping people work towards solving those real concerns.
Yeah, this is exactly what I think it is. I’m a bit concerned about how hard it’s going to hit a large number of people when they realize that their echo chamber of “LLMs are garbage and have no benefits” was so completely wrong. I agree that there are scary aspects of all this, but pretending they don’t exist will just make them harder to deal with. It’s like denying that the smoke alarm is going off until your arm is on fire.