Yes, but at least they do admit it. It’s way worse if you post “help, XY isn’t working, how can I solve this?” and someone posts an AI answer without disclosing it. It could be right, but more often than not you’re now exchanging comments with someone who will just feed your replies into an LLM and paste its answers back. It’s even worse if they summarize it so you can’t easily tell that’s what’s happening (e.g. summarize the output and just copy & paste the necessary commands/config entries, …)
And that’s a waste of time. I could ask $LLM directly for that.