The claims that AI will surpass humans in programming are pretty ridiculous. But let's be honest - most programming is rather mundane.
Never have I had to implement any kind of ridiculous algorithm to pass tests with huge amounts of data in the least amount of memory, as the competitive websites show.
It has been mostly about:
Finding the correct library for a job and understanding it well, to prevent footguns and blocking future features
Design patterns for better build times
Making sane UI options and deciding resource alloc/dealloc points that would match user interaction expectations
cmake
But then again, I haven’t worked in FinTech or Big Data companies, neither have I made an SQL server.
Because actually writing code is the least important part of programming.
I mean, not the least important - it is an important part. But far less so than most people think.
Pretty sure that autocomplete would be terrible at these tasks too.
There are some times when I wish I were better at regexp and scripting.
Times when I am writing a similar kind of thing again and again - just different enough each time (and repeated few enough times) that writing a script doesn't seem worthwhile.
At those times, I tend to think - maybe Cursor would have done this part well - but I have no real idea, since I have never used it.
On the other hand, if I had a scripting endpoint from clang [1], I would have used it to build a batch processor even for as few as 10 repetitions, and wouldn't have thought once about AI.
which would have taggified parts of the code (in the same sense as "parts of speech") - function declaration, return type, function name, type qualifier, etc. ↩︎
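As a rough illustration of that idea, even a few lines of Python regex can "taggify" simple C-style declarations. This is a toy sketch of my own, not anything clang provides - a real tool should use an actual parser (e.g. libclang), since regex will misfire on anything beyond trivial one-line declarations:

```python
import re

# Toy "part-of-speech" tagger for simple C-style function declarations.
# Pattern and names are illustrative assumptions, not a real clang API.
DECL = re.compile(
    r"^(?P<qualifier>(?:static|inline|const)\s+)?"   # optional type qualifier
    r"(?P<return_type>[A-Za-z_][\w\s\*]*?)\s+"       # return type (lazy match)
    r"(?P<name>[A-Za-z_]\w*)\s*\("                   # function name before '('
)

def taggify(line: str) -> dict:
    """Tag the recognizable 'parts of speech' in one declaration line."""
    m = DECL.match(line.strip())
    if not m:
        return {}
    return {k: v.strip() for k, v in m.groupdict().items() if v}

tags = taggify("static int parse_header(const char *buf, size_t len);")
# → {'qualifier': 'static', 'return_type': 'int', 'name': 'parse_header'}
```

Run over a directory of sources, something like this is already enough for a crude batch processor - which is the point: ten repetitions don't need an AI.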
Well, this kind of AI won't ever be useful as a programmer. It doesn't think. It doesn't reason. It doesn't make decisions; it just uses a ton of computational power and an enormous deep neural network to shit out a series of words that seem like they should follow your prompt. An LLM is just a really, really good next-word guesser.
So when you ask it to solve the Tower of Hanoi problem - great, it can do that, because it has seen someone else's answer. But ask it to solve it for a tower that is 20 disks high and it flounders, because no one ever talks about going that far. It's not actually reasoning to solve the problem - it's regurgitating answers it has ingested from stolen internet conversations. It's not even attempting to solve the general case, because it's not trying to solve the problem; it's responding to your prompt.
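For contrast, the general case alluded to above is a three-line recursion that works for any height - 20 disks just means 2**20 - 1 moves:

```python
def hanoi(n, src="A", dst="C", via="B", moves=None):
    """Classic recursive Tower of Hanoi: move n disks from src to dst."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, via, dst, moves)   # park n-1 disks on the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, via, dst, src, moves)   # stack the n-1 disks back on top
    return moves

print(len(hanoi(20)))  # → 1048575, i.e. 2**20 - 1
```

Whether a given LLM can faithfully execute this recursion for n = 20 is a separate question from whether the program can - the program trivially does.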
That said - an LLM is also great as an interface to allow natural language and code as prompts for other tools. This is where the actually productive advancements will be made. Those tools are garbage today but they’ll certainly improve.
It already is.
You mean useful to a programmer, or as useful as a programmer?
Ah - yeah I read that wrong. It’s useful to a programmer.
I explicitly meant “as”. It’s great as autocomplete. Not as an agent to complete programming tasks.
I love the weird need to downplay just how good AIs are by calling them “autocomplete”.
Did you even read my earlier comment?
My productivity has at least tripled since I started using Cursor. People are actually underestimating the effects that AI will have in the industry
It means the AI is very helpful to you. It also means you are only about a third as good as an AI at coding…
Which is not great news for you, mate.
Ah, knock it off. Jesus, you sound like people in the '90s mocking "IntelliSense" in the IDE as somehow making programmers "less real programmers".
It’s all needless gatekeeping and purity test BS. Use tools that are useful. Don’t worry if it makes you less of a man.
It's not gatekeeping, it's the truth. I know devs who say AI tools are useful, but all the ones who say it makes them multiples more productive are actually doing negative work, because I have to deal with their terrible code that they don't even understand.
The devs I know use it as a tool and check their work and fully understand the code they’ve produced.
So your experience vs. mine. I suspect you just work with shitty developers who would be producing shitty work whether they were using AI or not.
True. I use a local model by JetBrains that only completes a single line, and that's my sweet spot: it usually guesses the line well and saves me some time without forcing me to read multiple lines of code I didn't write.
Tripled is an understatement for me. Cursor and Claude Code are a godsend for OE for me.