

Maybe she sincerely means ‘million dollar company’, a company too dirt poor to pay to have adequate coverage…


Must be owned by Dr. Evil…


I’ve been using Claude with mediocre results, so this time I used Gemini 3, because everyone in my company is screaming “this time it works, trust us bro”. Claude hasn’t been working so great for me at my day job either.
So what meaning did Nyan Cat have? Or rickrolling, or badger badger badger, or most anything you would have seen on YTMND…


It’s certainly a use case that an LLM has a decent shot at.
Of course, having said that, I gave it a spin with Gemini 3 and it just hallucinated a bunch of crap that doesn’t exist instead of properly identifying capable libraries or front-ending media tools…
But in principle, and on occasion, it can take care of little convenience utilities/functions like that. I continue to have no idea, though, why some people claim to be able to ‘vibe code’ up anything of significance; even when I thought I was giving it an easy hit, it completely screwed it up…


So if it can be vibe coded, it’s pretty much certainly already a “thing”, but with some awkwardness.
Maybe what you need is a combination of two utilities, maybe the interface is very awkward for your use case, maybe you have to make a tiny compromise because it doesn’t quite match.
Maybe you want a little utility to do stuff with media. You could navigate your way through ffmpeg and mkvextract, which together handle what you want, with some scripting to keep you from having to remember the specific way to do things amid the myriad of features those utilities have. An LLM could probably knock that script out for you quickly without your having to delve too deeply into the documentation for the projects.
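As a sketch of that kind of convenience wrapper (the function name and flags here are just illustrative; it assumes ffmpeg is installed and on PATH):

```python
import subprocess

def build_extract_audio_cmd(src: str, dst: str) -> list[str]:
    # -vn drops the video stream; "-c:a copy" keeps the audio as-is
    # instead of re-encoding it.
    return ["ffmpeg", "-i", src, "-vn", "-c:a", "copy", dst]

def extract_audio(src: str, dst: str) -> None:
    # Command construction is kept separate so the flags are easy to
    # inspect without actually invoking ffmpeg.
    subprocess.run(build_extract_audio_cmd(src, dst), check=True)
```

The whole point of such a script is exactly this: you stop having to remember which of ffmpeg’s hundreds of options mean “just copy the audio out”.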


Yeah, but mispredicting that would hurt. The market can stay irrational longer than I can stay solvent, as they say.


Yeah, this one is going to hurt. I’m pretty sure my rather long career will be toast, as my company and most of my network of opportunities are companies that have bought so hard into the AI hype that I don’t know whether they’ll be able to survive it going away.
True, but I wouldn’t expect an engagement ring to come out of a shared account…


At work there are a lot of rituals where processes demand that people write long internal documents that no one will read. Management will at least open them up, scroll through, and be happy to see such long documents with credible-looking diagrams, maybe glancing at a sentence or two they don’t understand and nodding sagely.
An LLM can generate such documents just fine.
Incidentally, an email went out to salespeople. It told them they didn’t need to know how to code or even have technical skills; they could just use Gemini 3 to code up whatever a client wants and then sell it to them. I can’t imagine the mind that thinks that would be a viable business strategy, even if it worked that well.


An LLM can generate code like an intern getting out over their skis. If you let it generate enough code, it will do some gnarly stuff.
Another facet is the nature of mistakes it makes. After years of reviewing human code, I have this tendency to take some things for granted, certain sorts of things a human would just obviously get right and I tend not to think about it. AI mistakes are frequently in areas my brain has learned to gloss over and take on faith that the developer probably didn’t screw that part up.
AI generally generates the same sort of code I hate to encounter when humans write it, and debugging it is a slog. Lots of repeated code, not well factored. You would assume that if the same exact thing is needed in many places, you’d have a common function with common behavior, but no: the AI repeated itself and didn’t always get consistent behavior out of identical requirements.
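A tiny illustration of that failure mode (hypothetical names, not the actual code being reviewed): the same requirement inlined twice, quietly disagreeing, where a reviewer would expect one shared helper.

```python
# The drift described above: identical requirements, two inlined copies
# that quietly disagree (one treats whitespace-only names as valid).
def create_user(name):
    if name is None or name == "":
        raise ValueError("name required")
    return {"name": name}

def rename_user(user, name):
    if name is None or name.strip() == "":
        raise ValueError("name required")
    user["name"] = name.strip()
    return user

# What consistent code would look like: one helper, one behavior,
# called from both places.
def require_name(name) -> str:
    name = (name or "").strip()
    if not name:
        raise ValueError("name required")
    return name
```

With the inlined copies, `create_user(" ")` happily stores a whitespace-only name while `rename_user` rejects it, which is exactly the kind of inconsistency that makes the debugging a slog.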
His statement is perhaps an oversimplification, but I get it. Fixing code like that is sometimes more trouble than just doing it yourself from the onset.
Now I can see the value in generating code in digestible pieces, discarding the output when the LLM gets oddly verbose for a simple function, when it gets it wrong, or when you can tell by looking that you’d hate to debug that code. But the code generation can just be a huge mess, and if you did a large project exclusively through prompting, I could see the end result being hopeless. I’m frankly surprised he could even declare an initial “success”, but it was probably “tutorial ware”, which would be ripe fodder for the code generators.


So I don’t get it; I have mine up with a domain without Tailscale… The clients are quite happy wherever they are. I don’t even see that much “crawling” traffic going to the domain; most of it just hits the server by IP and gets a static 401 page that the “default” site is hard-coded to give out.
Mine had to go uphill
Hardware RAID limits your flexibility: if any part fails, you probably have to closely match the part in replacement.
Performance-wise, there’s not much to recommend them. Once upon a time the XOR calculations weighed on the CPU enough to matter, but CPUs far outpaced storage throughput and now it’s a rounding error. They retained some performance edge through battery-backed RAM, but now you can have NVMe as a cache. For random access, hardware RAID can actually be a liability, as it collapses all the drive command queues into one.
The biggest advantage is simplifying booting from such storage, but that can be handled in other ways, so I wouldn’t care about it.
While SAS is faster, the difference is moot if you have even a modest NVMe cache.
I don’t know that it’s that much more reliable either; in particular, I would take new SATA over second-hand SAS any day.
Hardware RAID means everything is locked together: lose a controller, and you have to find a compatible controller; lose a disk, and you have to match the previous disk pretty closely. JBOD would be my strong recommendation for home usage, where you need flexibility in the event of failure.
The RAM that has been sold will not be viable for desktop systems, but especially with the manufacturing capacity build-out, you’d have memory vendors a bit more desperate to find a target market for new product. Datacenter clients will still exist, but they could actually subsist on the hypothetical leftovers of a failed build-out, so the consumer space may be the vendors’ best bet.
Unfortunately not even then. Nowadays the GPUs are a pretty alien form factor, usually not pcie cards. SXM and now HGX.
Datacenter gear has resembled consumer systems less and less after a period of getting closer in the 90s and 2000s.
The type of problem is, in my experience, the biggest source of differing results.
Ask for something consistent with very well-trodden territory, and it has a good shot. But if you go off the beaten path, where it really can’t credibly generate code, it generates anyway: making up function names, file paths, REST URLs and attributes, and whatever else sounds good and consistent with the prompt, but with no connection to anything real.
It’s usually not that it does the wrong thing because it “misunderstood”; it’s usually that it produces very appropriate-looking code, consistent with the request, that has no link to reality, with no recognition of when it invented something that doesn’t exist.
If it’s a fairly milquetoast web UI manipulating a SQL backend, it tends to chew through that more reasonably (though in various results I’ve tried, it screwed up a fundamental security principle; once I saw it suggest a weird custom certificate validation that disabled default validation while transmitting sensitive data, before ever meaningfully executing the custom validation).
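For concreteness, that anti-pattern looks something like the following, sketched here with Python’s `ssl` module (a reconstruction of the general shape, not the actual generated code):

```python
import ssl

# Anti-pattern: built-in verification is switched off up front, with a
# "custom validation" step supposedly coming later. Any data sent before
# (or without) that custom check travels over an unauthenticated channel.
ctx = ssl.create_default_context()
ctx.check_hostname = False       # hostname checks disabled...
ctx.verify_mode = ssl.CERT_NONE  # ...and chain validation never happens
```

Once `verify_mode` is `CERT_NONE`, the handshake accepts any certificate, so a custom check bolted on afterward can’t undo the exposure of anything already transmitted.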