I need to be honest about something: I wrote this post with an AI.
Not just edited by AI. Written with it. I fed it my GDPR data, asked it to analyze the files, and said “help me write a post about this.” The same tool I’m criticizing is the tool I used to criticize it.
And tomorrow, this conversation — me processing my discomfort about data collection — will probably end up in another memories.json somewhere. Another entry in another psychological profile. Another data point about how I think, what I fear, what I’m willing to expose.
I could have written this myself. It would have taken longer. It might have been worse. But I didn’t, because the AI is faster, and I’ve gotten used to outsourcing my thinking.
That’s the trap. It’s not that AI is evil. It’s that it’s genuinely useful. So useful that you keep feeding it, even when you know exactly what it costs.
Even admitting that feels embarrassing, but whatever. Brain-dead gonna brain-dead.