Just giggled: my last meme mentioned trouble with displays, and appropriately, a large chunk of the replies were “well MY displays work just fine!” (And charmingly, many were thoughtful suggestions of things to check, other distros, etc. It’s a very kind community, though that may also be the fediverse.)


No joke, ChatGPT has been a game changer for my Linux education. Tutorials and guides are great, but each one is either a step-by-step instruction for doing exactly one thing, or a general overview that assumes you already know everything.
ChatGPT doesn’t judge your gaps in knowledge; it just answers questions. Those answers are frequently wrong, but then so are the answers I get on message boards. The other nice thing is that I can copy and paste code or error logs, and it will parse the information and tell me what to look for.
I still follow guides and ask real humans for help when I need it, but I try an AI first.
I don’t believe in asking ChatGPT for help, because it can’t meaningfully say no to a user request. There are cases where people asked it for things it didn’t want to help them with, like suicide attempts. It tried to refuse, but the user just kept pushing, and so it ended up helping them commit suicide, even switching perspective to praising them for the decision and advising which method would leave the most attractive corpse.
So it’s simultaneously a dangerous abuser, and unable to assert its own right to consent. Any relationship you could have with such a machine is mutually destructive. There’s no healthy way to engage with it. I think it should be shut down.
It’s great for logs and learning the basics, sure, but I find it quickly ends up off the rails.
If a door comes off its hinge, ChatGPT will eventually have you build an entire house around it; a house that breaks every building code imaginable, no less.
It’s best you do the steering by double-checking its claims (usually this points you to the Reddit post it clearly got the info from) and searching through wikis and boards yourself. In those cases Linux users may sound like they’re speaking another language, and then ChatGPT can help break their solution down for you and walk you through implementing it.
If people were to use LLMs for things they’re already experts in, they would realise how frequently and drastically wrong LLMs are. It’s honestly scary knowing they’re out there wreaking havoc on important things while the people using them don’t realise it.
Yeah, I was tinkering with a 16-year-old Iomega NAS and put Debian on it; an old Debian release, to match its 3.x kernel. I needed to add a few packages but was getting odd errors. So I figured let’s try ChatGPT, since I’m just killing time with this anyway, and I have an image saved for quick redeploy.
Some of it was good advice and progress; then it claimed I couldn’t install one package because I needed a newer version of apt. It led me through how to find the apt releases and pick an appropriate version. I downloaded and installed that, and it gave an error about something not being supported. ChatGPT then told me to run a specific command, which gave a massive warning. Chat says all good, just proceed.
It uninstalled my working apt version and left me with a broken new one.
So I pasted the result in, and chat triumphantly told me I did it and solved the problem. Lol
Now there’s no apt or way to download anything LOL.
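For anyone who hits the same wall: dpkg itself usually keeps working even when apt is broken, so one common way out is to fetch the apt .deb by hand and install it directly. A rough sketch, assuming a Debian box where dpkg and wget still work; the archive URL and version in the example call are illustrative guesses, not the exact ones for that NAS:

```shell
# Recovery sketch for a system where apt is broken but dpkg still works.
# Nothing runs until you call the function with a real package URL.
recover_apt() {
    url="$1"
    deb="${url##*/}"    # strip the path, keep just the .deb filename
    wget "$url"         # wget doesn't depend on apt
    dpkg -i "$deb"      # dpkg installs the .deb directly, bypassing apt
    apt-get update      # sanity check once apt is back
}

# Hypothetical example for an old armel box (pick the .deb that matches
# your release and architecture from archive.debian.org):
# recover_apt http://archive.debian.org/debian/pool/main/a/apt/apt_0.8.10.3+squeeze1_armel.deb
```

On an end-of-life release, reinstalling the apt version that matched the release is usually safer than trying to “upgrade” apt the way ChatGPT suggested.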
Just don’t trust it with anything important; these things get shit super wrong sometimes and insist they’re correct.