

Not great performance at all.
That’s better than I was expecting to be perfectly honest.
I’m pretty impressed with the technology, but clearly it’s not ready for field use.
Great article! For a few years, I was always deterred from projects because they had already been done, and done better, so there was no reason to do them. Now, though, I just enjoy implementing things in my own janky way and learning a bit along the way.
It’s also not all-or-none. Someone who is otherwise really interested in learning the material may just skate through using AI in a class that is uninteresting to them but required. Or life might come up while they have a particularly strict instructor who doesn’t accept late work, and using AI is just a means of not falling behind.
The ones who are running everything through an LLM are stupid and ultimately shooting themselves in the foot. The others may just be taking a shortcut through some busy work or ensuring a life event doesn’t tank their grade.
I see both points. You’re totally right that for a company, it’s just the result that matters. However, to Bradley’s point, since he’s specifically talking about art direction, the journey is important insofar as it’s how you get to a passable result. I’ve only dabbled with 2D and 3D art, but converting to 3D requires an understanding of the geometry of things and how they look from different angles. Some things look cool from one angle and really bad from another. Doing the real work lets you figure that out and abandon a design before too much work is put in, or modify it so it works better.
When it comes to software, though, I’m kinda on the fence. I like to use AI for small bits of code and knocking out boilerplate so that I can focus on making the “real” part of the code good. I hope the real, creative, and hard parts of a project aren’t being LLM’d away, but I wouldn’t be surprised if that’s a mandate from some MBA.
Yeah, it’s not technically impossible to stop web scrapers, but it’s difficult to have a lasting, effective solution. One easy way is to block their user-agent, assuming the scraper advertises an identifiable one, but that can be easily circumvented. Another easy and somewhat more effective way is to block the scrapers’ and caching services’ IP addresses, but that turns into a game of whack-a-mole. You could also put content behind a paywall or login and refuse to approve a certain org, but that will only work for certain use cases, and it’s also easy to circumvent. If stopping a single org’s scraping is the hill to die on, good luck.
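To give a sense of how little the user-agent approach actually involves, here’s a rough Python sketch (plain WSGI, with a made-up blocklist; the bot names are just examples, not a real policy):

```python
# Rough sketch of the user-agent approach, assuming a WSGI app and a
# made-up blocklist. Anything that lies about its user-agent sails
# right through, which is why it's so easy to circumvent.
BLOCKED_AGENTS = ("GPTBot", "CCBot", "Bytespider")  # example bot names

def block_scrapers(app):
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if any(bot.lower() in ua for bot in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware
```

All the scraper has to do is send a different user-agent string and it’s back in.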
That said, I’m all for fighting ICE, even if it’s futile. Just slowing them down and frustrating them is useful.
It’s easier and less painful to just pay another company to do that and not have to worry about server security, spam, the endless SSH login attempts for ‘admin’, etc., etc.
Definitely. I do it for fun though. I’m kind of a masochist 😂
Rolling your own email is a pain. That said, I use a VPS and host my own server, with a domain name and site, for $5/month. Setting it up was a pain, but once you get all the records right so you’re not flagged as spam, it works really well. That said, I haven’t done anything with webmail; I strictly use IMAP and SMTP.
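If anyone else goes down this road, the “getting the records right” part is mostly SPF, DKIM, and DMARC. Here’s a rough Python sketch of the kind of sanity check I mean, assuming the dnspython package and a placeholder domain (DKIM lives at a selector-specific name, so it isn’t checked here):

```python
# Rough sanity check for the DNS records that keep self-hosted mail
# out of spam folders. Assumes dnspython (pip install dnspython) and
# a placeholder domain.
import dns.resolver

DOMAIN = "example.com"  # placeholder; swap in your own domain

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

spf = [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{DOMAIN}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "missing")
print("DMARC:", dmarc or "missing")
```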
Damn, don’t go giving them ideas!
Ah, gotcha. I didn’t go too deep into the code, just did a cursory look. I think it’s still an interesting concept.
Agreed! In my little online ecosystem, there’s been a number of stories of federal workers resisting, cities teaching people their rights to thwart ICE, etc. Since that may or may not make it out of certain bubbles, we should be boosting stories like that and sending them around so others don’t feel alone and do feel empowered to resist.
I don’t know why this is getting downvoted. It seems like an interesting concept for certain use cases, and it looks like it’s just a tiny team.
It’s not the most practical thing in the universe, but I have a small VPS that I host my email on for myself and a couple others (5 addresses in total). It’s a bit of a pain to set up, but once it’s working, it is really nice to have that kind of control.
This is why I am dreading the day my 2017 dumb TV dies. It’s telling that dumb TVs, which should be cheaper to produce and sell, are either not available or very expensive (as in commercial displays). It really proves the point that the consumer is the product.
The thing I’m heartened by is that there is a fundamental misunderstanding of LLMs among the MBA/“leadership” group. They actually think these models are intelligent. I’ve heard people say, “Well, just ask the AI,” meaning ask ChatGPT. Anyone who actually does that and thinks they have a leg up is kidding themselves. If they outsource their thinking and coding to an LLM, they might start getting ahead quickly, but they will then fall behind just as quickly because the quality will be middling at best. They don’t understand how to best use the technology, and they will end up hanging themselves with it.
At the end of the day, all AI is just stupid number tricks. They’re very fancy, impressive number tricks, but they’re still number tricks that happen to be useful. Solely relying on AI will lead to the downfall of an organization.
Congratulations! Enjoy the journey! You’ll look back in a few years and wonder how you ever managed with a Windows setup while you slip into the comfiness of your customized system.
Yes! “AI” defined as only LLMs and the party trick applications is a bubble. AI in general has been around for decades and will only continue to grow.
Yeah, they’ll probably have to check everything. Though I wonder if even just verifying that everything is good to go would save time compared to manually rewriting it all. While it may not be a smashing success, it could still prove useful.
I dunno, I’m interested to see how this plays out.
I think this is an interesting idea. If they’re able to pull it off, I think it will cement the usefulness of LLMs. I have my doubts, but it’s worth trying. I’d imagine that the LLM is specially tuned to be more adept at this task. Your bog-standard GPT-4 or Claude will probably be unreliable.
I can see the allure for places wanting to keep certain trouble-makers out as a precaution, but this gets so close to a privatized social credit score that it’s beyond uncomfortable.
That’s why I’m so impressed with how well it’s actually working. When they get off that really weird self-imposed restriction, it could be an interesting technology.