

Well, if I’m not, then neither is an LLM.
But for most projects built with modern tooling, the documentation is fine, and they mostly have simple CLIs for scaffolding a new application.


Yeah, I have never spent “days” setting anything up. Anyone who can’t do it without spending “days” struggling with it is not reading the documentation.


Sadly, there are some who don’t even know it, because they’re buying services from someone who buys them from someone else who buys them from Amazon. So they’re currently wondering what the fuck is even going on, since they thought they weren’t using AWS.


I’m a software developer and my company is piloting the use of LLMs via Copilot right now. All of them suck to varying degrees, but the consensus is that GPT-5 is the worst of the lot. (To be fair, no one has tested Grok, but that’s because no one in the company wants to.)


On top of that, there’s so much AI slop all over the internet now that the training data for their models is going to get worse, not better.


They’ll ask their parents, or look up cooking instructions on actual websites.


Venture capital drying up.
Here’s the thing… No LLM provider is making a profit on LLMs. None of them. Not OpenAI. Not Anthropic. Not even Google (profitable in other areas, obviously, just not this one). OpenAI optimistically believes it might start turning a profit in 2029.
What’s keeping them afloat? Venture capital. And what happens when those investors decide to stop throwing good money after bad?
BOOM.


I… write code. It does stuff. Usually the wrong stuff, until I’ve iterated over it a few times and gotten it to do the right stuff. I don’t “click around in a GUI.” If a tutorial is making you do that, it’s a bad tutorial.


My pleasure! And if you’re being the GM, remember to keep track of the trouble aspect for each character. It’s basically a built-in way to make everything personal for the characters, as well as a mechanic to offer them extra fate points in return for compelling the trouble.
My favorite example is this: Imagine you’ve got Indiana Jones as a player character in your game. His trouble would be, “Snakes… Why’d it have to be snakes?” He gets a fate point when you compel it (if he accepts), but in return, it guarantees that he’s falling into a pit of snakes. Instant drama!


There are tricks to getting better output from it, especially if you’re using Copilot in VS Code and your employer is paying for access to models, but it’s still asking for trouble if you’re not extremely careful, extremely detailed, and extremely precise with your prompts.
And even then it absolutely will fuck up. If it actually succeeds at building something that technically works, you’ll spend considerable time afterwards going through its output and removing unnecessary crap it added, fixing duplications, securing insecure garbage, removing mocks (God… So many fucking mocks), and so on.
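One of those tricks, for what it’s worth: Copilot in VS Code will read repo-level custom instructions from a .github/copilot-instructions.md file, so you can at least pin down the conventions it keeps violating. Something like this (illustrative, not an actual project’s file, and no guarantee it listens):

```markdown
<!-- .github/copilot-instructions.md (illustrative example) -->
- Never generate mock data. If real data is unavailable, stop and ask.
- Do not add new dependencies without calling them out explicitly.
- Reuse existing helpers before writing new ones.
- Keep changes minimal: no drive-by refactors, no speculative abstractions.
```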
I think a lot about what my employer is spending on it. It can’t possibly be worth it.


Yeah, code bloat with LLMs is fucking monstrous. If you use them, get used to immediately scouring your code for duplications.
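If you want a cheap first pass at that, a throwaway script along these lines does the trick (a rough sketch of my own, nothing official; hashing AST dumps only catches near-verbatim copies, so treat it as a tripwire, not a real linter):

```python
# dupe_scan.py - quick tripwire for copy-pasted functions (illustrative sketch).
# Hashes the AST dump of each function body, so formatting differences don't
# matter, but renamed variables do - it only flags near-verbatim duplicates.
import ast
import hashlib
import sys
from collections import defaultdict
from pathlib import Path

def function_hashes(path: Path):
    """Yield (digest, location) for every function defined in a Python file."""
    tree = ast.parse(path.read_text(encoding="utf-8"))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            body = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            digest = hashlib.sha256(body.encode()).hexdigest()[:12]
            yield digest, f"{path}:{node.lineno}:{node.name}"

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    seen = defaultdict(list)
    for path in root.rglob("*.py"):
        try:
            for digest, location in function_hashes(path):
                seen[digest].append(location)
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that don't parse cleanly
    for digest, locations in seen.items():
        if len(locations) > 1:
            print(f"possible duplicate ({digest}):")
            for location in locations:
                print(f"  {location}")
```

Eyeball anything it flags by hand; trivial one-line bodies (a bare pass, say) will show up as false positives.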


It always is with these guys.


Dude, time is stopped. You can keep going until you want time to start again. And I’ve got a list for you:
I’m sure I could come up with more.


Forty dollars? That’s all? I’m salaried and can work any hours I want, so I guess I’m logging in at work and writing a few lines of code or answering some emails.
Twenty minutes will do it. Probably round it up to half an hour just to be safe. And honestly, if I get into the zone, an hour will fly by, easy.


After working on a team that uses LLMs in agentic mode for almost a year, I’d say this is probably accurate.
Most of the work at this point, for a big chunk of the team, is trying to figure out prompts that will make it do what they want, without producing any user-facing results at all. The rest of us use it to generate small bits of code, such as one-off scripts to accomplish a specific task - the only area where it’s actually useful.
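To be concrete about the “one-off script” level it can handle, I mean things like this (illustrative only; the CSV and its column names are made up):

```python
# Sum one CSV column grouped by another - the kind of throwaway,
# single-purpose script an LLM can usually get right on the first
# or second try. The "service" and "cost" columns are hypothetical.
import csv
import sys
from collections import defaultdict

totals: dict[str, float] = defaultdict(float)
with open(sys.argv[1], newline="") as f:
    for row in csv.DictReader(f):
        totals[row["service"]] += float(row["cost"])

# Print biggest spenders first.
for service, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{service}\t{cost:.2f}")
```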
The shine wears off quickly after the fourth or fifth time it “finishes” a feature by mocking the data, because so many of the public repos it trained on contain mock data that it treats mocking as the goal.


In our case, there are enough upper management folks who are opposed to it that I doubt it will last or ever be enforced. For people like me, it really doesn’t make any sense to enforce in the first place, because all of my teammates are in other states and countries.
Making me go to the office just means you can’t schedule early meetings with me, because I’ll be commuting during that time.


My office just did the same thing. And the backlash is enormous. No one wants it. No one likes it.


The funny thing is that I’m actually an Arch user. I’m just not a dick about it.


Yeah, this sucks. Use the distro you like, people.


The thing is, it really won’t. The context window isn’t large enough, especially for a decently sized application, and that seems to be a fundamental limitation. Make the context window too large, and the LLM gets massively off track very easily, because there’s too much in it to distract it. For scale: code runs on the order of ten tokens per line, so even a 100,000-line codebase is roughly a million tokens, at or beyond the ceiling of most models’ context windows.
And LLMs don’t remember anything. The next time you interact with it and put the whole codebase into its context window again, it won’t know what it did before, even if the last session was ten minutes ago. That’s why they so frequently create bloat.