One of the very first commits on the forked project UZDoom was to disable bilinear texture filtering by default. That was a default setting that many in the Doom community wanted changed, but Graf (the original maintainer) insisted on keeping it enabled.
Seems to me that the sole focus should be on the quality of the code being submitted. I don’t really care how it was made; the question is whether the code is clean, does what was intended, and has tests.
I don’t think this is the whole picture. AI-generated code is harder to maintain because the creator may not understand how or why it works, and AI is notoriously bad at debugging its own code. Using a lot of AI-generated code often, counterintuitively, slows down the overall development process.
It’s the job of the people maintaining the project to review changes they merge in and to understand them. When people make PRs to my projects, I don’t just trust them blindly.
Like I don’t hate all AI coding but there are legit questions about using it in GPL projects.
Also, “hey ChatGPT said this works but fuck if I know” is a big no-no when coding. You still own what you ship and need to understand what you’re actually pushing.
You still own what you ship and need to understand what you’re actually pushing.
Precisely on point. Lots of comments are talking about “Knee-jerk anti-AI” as if pushing broken code and washing your hands of it is OK.
Really wish the article headline was highlighting this issue instead of singling out AI.
The fork is called UZDoom and it’s already in the AUR. I read the Slashdot story on this today, and there’s a little more going on here. AI code grosses people out, but the bigger issue is that it’s being used in a GPL3 project which kind of isn’t allowed. The lead dev was also being a bit of a twat and not cooperating with the community. Long live UZDoom!
the bigger issue is that it’s being used in a GPL3 project which kind of isn’t allowed
I followed the links, and I think the original argument being referenced has been twisted around a bit, game-of-telephone style. The claim isn’t that the GPL prohibits including LLM-generated code; it’s more that they think an AI trained on GPL code violates the license when it happens to reproduce that code verbatim:
it is readily apparent that GitHub Copilot is capable of returning, verbatim, already extant code (although it does attempt to synthesise novel code based on its training data). This immediately raises the issue, what happens when that code (such as the previous example) is licensed under a copyleft license such as the GPL or AGPL? How is the matter of copyright in this instance resolved?
https://github.com/ZDoom/gzdoom/issues/3395
https://www.fsf.org/licensing/copilot/on-the-nature-of-ai-code-copilots#5. What About Copyright?
It might also be the case that the GPL prohibits LLM-generated code somehow; I don’t actually know. I just want to point out that no one has made an argument for that.
https://github.com/ZDoom/gzdoom/commit/584af500736b0317e42824f39285ed3d954fc4e2
autodetection of dark mode. The Linux solution was provided by ChatGPT and needs to be reviewed before being deployed.
The commit in question.
Basically a nothingburger…
Don’t use this term.
OK, random internet stranger, I won’t use that term because you so eloquently stated why it’s not good to do so…
I just don’t like it, OK?
Please refrain from censoring other people just because you like to. I just don’t like it, OK?
The reason I don’t like it is that it’s infantile. Fox News hosts use the term all the time to dismiss quite true statements about whomever they’re defending, because they’re talking to stupid people who need to have things dumbed down in order to understand them.
We aren’t on Fox News here. Your concern trolling is also a nothingburger.
Huh? Why not?
Holy shit. It’s the end of the world. AI helped code a few lines to detect dark mode on a Linux system!
The problem here is not AI; the problem is pushing code that didn’t even compile straight to main. It’s a huge collaborative project, and running it as a one-man show just doesn’t work anymore.
Had he made a PR/Draft, none of this would have happened. Hell, even his commit message says “needs to be reviewed before being deployed”.

Which, by that point, they’d already fixed. They mentioned they also accidentally pushed to the wrong branch.
I dunno, this just looks like someone accidentally made a bad commit, and instead of being mature about it and letting them fix it, we get statements like “all bridges have been burned” and a split in the community. That’s a bit overblown, and if at the first sign of disagreement the decision is to blow up a project, you’re making it very hard to work with you.
Well, in that case, fair enough.
However, given that a lot of contributors allude to “putting up” with him for years in the comments of the GitHub issue, it seems it wasn’t just one bad commit that broke the camel’s back.

That’s fair; I don’t know what other things have come before. It just seems really odd that there was such a severe reaction to what happened here.
Gotcha
Well, the detection is broken for KDE and backwards in the XDG implementation (which is also only used as a fallback when the three DE-specific implementations fail, even though all of them actually support XDG so having separate implementations is pointless).
Also, with the way it’s implemented, it will have unexpected results for users who have both KDE and Gnome installed (or at least have leftover configuration files): if, for example, you used KDE in the past with a theme this considers “dark” and now use Gnome set to light mode, you will get a dark-mode GZDoom with no obvious reason why.
Oh and the XDG implementation is also very fragile and will not work on everyone’s system because it depends on a specific terminal utility being installed. The proper way would be to use a DBus library and get the settings through that.
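For reference, the portal route looks roughly like this; a minimal sketch assuming sd-bus (libsystemd) is available, and the function name here is mine, not something from the commit:

```cpp
// Sketch: query the XDG settings portal over DBus for the standardized
// org.freedesktop.appearance / color-scheme key, instead of shelling out
// to DE-specific tools. Assumes sd-bus (libsystemd); link with -lsystemd.
#include <systemd/sd-bus.h>
#include <cstdint>

// Returns true only if the portal explicitly reports "prefer dark".
static bool PortalPrefersDark()
{
    sd_bus *bus = nullptr;
    if (sd_bus_open_user(&bus) < 0)
        return false;

    sd_bus_error error = {};          // same as SD_BUS_ERROR_NULL
    sd_bus_message *reply = nullptr;
    bool dark = false;

    int r = sd_bus_call_method(bus,
        "org.freedesktop.portal.Desktop",
        "/org/freedesktop/portal/desktop",
        "org.freedesktop.portal.Settings",
        "Read", &error, &reply, "ss",
        "org.freedesktop.appearance", "color-scheme");

    if (r >= 0)
    {
        // Read() wraps the value in two layers of variant for historical
        // reasons, so unwrap twice before reading the uint32.
        uint32_t scheme = 0;
        if (sd_bus_message_enter_container(reply, 'v', "v") >= 0 &&
            sd_bus_message_enter_container(reply, 'v', "u") >= 0 &&
            sd_bus_message_read(reply, "u", &scheme) >= 0)
        {
            dark = (scheme == 1);     // 0 = no preference, 1 = dark, 2 = light
        }
    }

    sd_bus_error_free(&error);
    sd_bus_message_unref(reply);
    sd_bus_unref(bus);
    return dark;
}
```

This is also the key the DE-specific portal backends feed, which is why separate per-DE probes shouldn’t be needed in the first place.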
And when somebody comes to fix it, they will have to a) figure out what’s so special about the DE-specific implementations that XDG wasn’t enough (they might just assume that XDG isn’t supported widely enough), b) learn how to detect a dark theme properly on the DE they’re fixing, and c) rework the code so that there is a difference between “this DE wants light mode” and “couldn’t figure out if this DE is in light or dark mode”; both of these are now represented by the “false” return value.
I don’t think well-written, functioning code made with AI assistance would get a response this strong, but the problem here is that the code is objectively bad and its (co-)author kept doubling down about something they probably barely even checked.
I mostly agree with you, but this is not quite true:
XDG implementation (which is also only used as a fallback when the three DE-specific implementations fail, even though all of them actually support XDG so having separate implementations is pointless)
Yes, the DE-specific implementations are pointless (as far as I know; I use a WM), but the XDG implementation is actually used first, and the function returns true if any impl returns true, like
xdg() || gnome() || gnome_old() || kde()
rework the code so that there is a difference between “this DE wants light mode” and “couldn’t figure out if this DE is in light or dark mode”; both of these are now represented by the “false” return value.
This isn’t that bad? Yes, having an enum with three variants would be better and more readable, but the code just defaults to light mode if nothing wants dark mode, and prefers dark mode even if separate impls want both light and dark mode.
With multiple impls, you have to resolve conflicts somehow. You could, for example, match on the current DE/WM name and only use the current DE’s impl (defaulting to XDG), avoiding the problem entirely, or just use the first impl that doesn’t return “default” or “error”.
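The second option could look something like this rough sketch (all names here are hypothetical, not taken from the actual commit):

```cpp
// Rough sketch: a tri-state result so "this DE wants light mode" and
// "couldn't tell" no longer share the same false return value.
enum class ThemePreference { Light, Dark, Unknown };

// Placeholder backends; a real build would query the portal / DE here.
static ThemePreference QueryXdgPortal() { return ThemePreference::Unknown; }
static ThemePreference QueryGnome()     { return ThemePreference::Unknown; }
static ThemePreference QueryKde()       { return ThemePreference::Unknown; }

static bool SystemPrefersDarkMode()
{
    using Probe = ThemePreference (*)();
    const Probe probes[] = { QueryXdgPortal, QueryGnome, QueryKde };

    // The first backend with a definite answer wins; later ones are never
    // consulted, so conflicting answers can't happen.
    for (Probe probe : probes)
    {
        const ThemePreference pref = probe();
        if (pref != ThemePreference::Unknown)
            return pref == ThemePreference::Dark;
    }
    return false; // nothing could tell us, so default to light mode
}
```

Each backend keeps its own detection logic, and only the tiny loop decides precedence.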
I don’t like AI generated code, having reviewed some disgusting slop before. But it’s better to criticize the code’s actual faults, like the incorrect impls (which you listed) or failing the Linux CI.
Yes, the DE-specific implementations are pointless (as far as I know; I use a WM), but the XDG implementation is actually used first, and the function returns true if any impl returns true, like
xdg() || gnome() || gnome_old() || kde()
True, I must’ve read the code wrong when making the comment.
This isn’t that bad?
Yes, which is why I take issue with a PR (or rather, what should have been a PR) that introduces crap code when low-effort improvements are clearly visible; the submitter should’ve already made those so the project doesn’t unnecessarily gain technical debt by accepting the change.
With multiple impls, you have to resolve conflicts somehow.
Yep, that’s why I think it’s important for the implementations to actually differentiate between the light and fail states; that’s the smallest change and allows you to keep the whole detection logic in the individual implementations. Combine that with XDG being the default/first one and you get something reasonable (in a world where the separate implementations are necessary). You do mention this, but I feel like the whole two paragraphs are just expanding on that idea.
But it’s better to criticize the code’s actual faults (…)
I made a mistake with the order in which the implementations are called, but I consider the rest of the comment to still stand and the criticisms to be valid.
Due to some disagreements—some recent; some tolerated for close to 2 decades—with how collaboration should work, we’ve decided that the best course of action was to fork the project
Okay, that was always allowed!
Programming is the weirdest place for knee-jerk opposition to anything labeled AI, because we’ve been trying to automate our jobs for most of a century. Artists will juke from ‘the quality is bad!’ to ‘the quality doesn’t matter!’ the moment their field becomes legitimately vulnerable. Most programmers would love it if the robot did the thing we wanted; that’s like 90% of what we’re looking for in the first place. If writing ‘is Linux in dark mode?’ counted as code, we’d gladly use that instead of doing some arcane low-level bullshit. I say this as someone who has recently read through IBM’s CGA documentation to puzzle out low-level bullshit.
You have to check if it works. But if it works… what is anyone bitching about?
You have to check if it works. But if it works… what is anyone bitching about?
They’re bitching about him pushing untested (by his own words) and broken code straight to main instead of going through the proper “PR, Review, Merge” loop like anyone else.
If you “have to check if it works”, it should be in a PR for people to play with, suggest improvements, and make changes, not directly in the codebase.

They’d already admitted they accidentally pushed to the wrong branch and cleaned it up.
There’s no way they actually checked that it works. It includes code for:
- XDG
- GNOME
- “GNOME_old”
- KDE
Verifying this would mean logging into several different desktop environments.
It’s also extremely fragile code, running external commands and filtering through various files. There just is no good API on Linux for querying whether the desktop environment is using a dark theme, so it’s doing absolutely inane shit that no sane developer would type out.
Because it’s a maintenance nightmare. Because they almost certainly don’t actually need to solve this. That’s software development 101: don’t write code you don’t actually need. But apparently some devs never got the memo that the rule exists because of the maintenance cost, not because you weren’t able to generate the code up until now.
If it works, it can still be worse code than if somebody had just read the documentation.
Then again, dead internet theory means nobody watches what AI is “documenting”, and in the future documentation will be worthless.
“Just” read documentation, says someone assuming past documentation is accurate, comprehensible, and relevant.
I taught myself QBASIC from the help files. I still found Open Watcom’s documentation frankly terrible, bordering on useless. There are comments in the original Doom source code lamenting how shite the dead-tree books were.
The article mentioned there is a long history of forks in the open source Doom world. It seems the majority of the active developers just moved to the new repository.
Hysterical drama? How cute.
Prediction (based on my crystal ball): in 6 months, after writing a CoC and some other abbreviated nonsense, the fork will be abandoned.
Hello again, inflammatory human utilizing a robotic facade.
Have you said anything positive in your post history? Or do we need to wait for rev. 10?
I’m very tired and got mixed up for a moment about which of you was the bot and thought “Awesome! A bot that points out when someone is being continuously negative!”
I hope someone makes a bot like that. That would be a good bot to have.
Lol, sorry for not being a bot - this dude’s just been posting the same shitty takes on almost all of my sub feed, so I enjoy calling them out. They’re human tho, since they did craft responses that weren’t slop at least.
Jolly good, carry on!