• 6 Posts
  • 401 Comments
Joined 3 years ago
Cake day: June 18th, 2023

  • I’ve had a similar experience at my job, where we’ve gotten unlimited access to a few models.

    There’s one huge problem I very quickly ran into - skill attrition. You very quickly get lazy, and stop being able to think critically about problems. Hell, I’ve only had access to it for two weeks, and I’m already starting to see the effects. “Can you add this button?” is a very simple change that I could probably make immediately, but AI can make it a little bit faster, and without me putting in the effort. Or it can at least show me the correct script to put it in, without me having to go scouring the code for it. It’s addicting, and quite scary. YMMV, you might have stronger willpower and be able to switch between lazy and locked-in mode, but I very quickly found out I can’t.

    But is it useful? That very much depends on what you want out of your job, and both cases have major (and mostly similar) problems.

    If you don’t really care about the quality of your work, and are there just to work your 8/5 and get money, hoping to balance effort vs. quality so they won’t fire you, then it might help. Especially at this point, where management isn’t really used to it that much, you can get away with a lot. But eventually you will very probably need to look for a new job, and good luck getting through an interview when you haven’t really thought about code without the help of an AI for the past two years. The fact that you started coding before AI is the only advantage you now have over literally EVERYONE who can do the same job with AI. And every day you don’t write a piece of code from scratch, you are losing that advantage.

    I have a job I don’t particularly care about, but I still use it as a learning opportunity. It might be vastly different in other projects, but my job is mostly just support and bugfixing on a game from a large developer that has been out for years at this point, so nothing really involved, and I can usually afford to use my time to research things I wasn’t familiar with, look into things we could do better thanks to new tech or updates that have been released, and figure out how to refactor or rewrite our code accordingly. Or make tools that would make our testing easier. I could just not do that, easily get my paycheck, and be glad I have a somewhat stable position, but that would not help me much. In this case, AI is actively harmful for what I’m trying to get out of my job, even if it works pretty well. It only erodes the skills I have, which are not very practiced even without AI, since bug fixing isn’t really much of a development exercise. Adding AI to the mix would just throw away my years of college and the dozens of projects I’ve learned on. And I wouldn’t learn anything new.

    Obviously, if you care about your output and want to do your job well, you don’t want to erode your skills, and you don’t want AI output in your code. AI by definition produces mediocre, average work, riddled with hard-to-spot bugs, and you should not be OK with mediocre if you really care about the work you do and leave behind.

    Especially the point about the pretty large probability of eventually having to seek a new job is IMO the most important thing worth considering before you go all in on AI. Programming is a skill that a lot of people spend years (and, in less developed countries, thousands of dollars) learning, and throwing it away in favor of a service that will very soon need to massively ramp up its prices to get out of the red and earn back the billions invested into it is not worth it.

    Currently, AI is cheap. It also actively harms your ability to do the job without it. They have also invested billions of dollars that they need to eventually make up, and you will eventually need to pass a job interview. Keep that in mind when deciding to offload your thinking to AI.



  • In a hypothetical situation where a law gets passed in your country making it mandatory to perform age verification on all social media apps, it’s simple.

    No verification? Jail time. Will they go after you? They could, if someone pointed them towards your server. (I think they even have to - at least in our country, the government has to prosecute a crime it’s made aware of, if I remember my college law courses right.)

    In some states, if I understand it right (based on a quick googling, might be wrong), failing to do age verification for porn can be considered a felony. It’s a slightly different example (porn vs. social networks), but if the laws are written in the same way, there’s not really much you can do about it.

    Completely anonymous hosting that’s in no way tied to you (through IP, credit card, location, domain, logs, etc.) is difficult. While you’d probably still be fine with a private-use server, you’d still give anyone who doesn’t like you and knows about it a pretty easy way to make your life a lot more difficult. This of course heavily depends on how the laws would (will) be written in your country, but given the track record of lawmakers understanding tech, there is a chance that even small self-hosted stuff would catch flak. If the law isn’t limited by, e.g., user count, then there’s not much you can do.

    A lawyer would probably be able to get you out of it, but you’d still be charged, and it would suck (and be expensive) to deal with.

    So, yeah. “How could the government force me to enable it” boils down to “jail time”. I mean, it’s basically the same question as “how could the government stop me from using Telegram or VPNs”, and IIRC there are already some examples of that.

    EDIT: Not having public sign-up enabled could be a way around it, since random people can’t make an account there, so you’re basically doing age verification by veto. However, if someone underage got into your server, they’d then have leverage over you, since they’re there illegally (in the hypothetical scenario).





  • As far as I know, Cloudflare is the only registrar that offers you the wholesale price, as in the price asked by the TLD owners. So a registrar can’t go lower, because that’s what they pay for it.

    But a lot of registrars will give you the first year at a heavy discount (so, at a loss), just so they can ramp the price up to wholesale plus a lot extra. I got my domain for like $5, and they then asked $40 for renewal, while wholesale is around $25.

    So, I just transferred to Cloudflare for the renewal. Tbh I don’t remember if it was the first or second year, or what the transfer rules are, but I think it should be possible to buy the first year at a heavy discount with e.g. Namecheap or something, and immediately transfer to Cloudflare for the first renewal at wholesale price.
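    Back-of-the-envelope, using the prices from my own case above (assumed numbers, they vary per TLD and registrar; I’m also folding the transfer in as a wholesale-priced renewal year, which is roughly how transfers work):

    ```python
    # Hypothetical cumulative cost of one domain over N years,
    # using the example prices from my own case above.
    INTRO = 5        # discounted first year at the original registrar
    RENEWAL = 40     # that registrar's renewal price
    WHOLESALE = 25   # approximate wholesale price via Cloudflare

    def cost_stay(years: int) -> int:
        """Keep renewing at the discount registrar."""
        return INTRO + RENEWAL * (years - 1)

    def cost_transfer(years: int) -> int:
        """Intro year, then transfer and renew at wholesale."""
        return INTRO + WHOLESALE * (years - 1)

    for years in (1, 3, 5):
        print(years, cost_stay(years), cost_transfer(years))
    # year 5: $165 staying vs. $105 after transferring
    ```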




  • Ah, damn. Bitwarden has an Agents.md. That doesn’t really fill me with confidence, and it’s the most critical software I use.

    I need to update my threat model. I’ve trusted them quite a lot, to the point of using Bitwarden for MFA for less-important services (so it’s not really MFA, since both my password and MFA token are in Bitwarden, but it’s super convenient), and only had a Yubikey for my Bitwarden account itself. So as long as the app isn’t compromised I should be good (and Bitwarden has a pretty good track record as far as I know), but if they are going to start vibe-coding their tools, then it’s probably time to move to a proper MFA.
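    For anyone wondering why TOTP-in-the-vault isn’t really a second factor: the whole “factor” is just a shared secret, and anyone holding the vault can derive the codes. A minimal RFC 6238 sketch (stdlib only, made-up placeholder secret):

    ```python
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Derive the current TOTP code from the shared secret alone."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, not a real one
    ```

    Since the vault stores that secret right next to the password, compromising the vault yields both “factors” at once.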


  • Oh, damn. You’re right.

    When I first saw this, I read through the readme, and it sounded pretty cool. Needless to say, I know nothing about physics.

    I didn’t suspect AI in the slightest, until I saw this comment thread.

    Now I’m pretty taken aback. Looking at it again, it should have been pretty obvious. I wonder what it was about the way it was presented that made me believe it and not suspect AI in the slightest, because that’s a mistake I don’t want to make again.

    Probably a combination of a passionate presentation, a topic I know nothing about combined with a topic I love (game engines), and my whole interaction being “this is pretty cool” and moving on. I did try looking for some actual sources on Tesla’s mythical “standard model” and found none, plus got suspicious that the definition of “standard model” didn’t seem to match what the text was talking about, but I just moved on. The conclusion I reached was “I wonder what will come out of it”, instead of “probably an LLM hallucination”, as it should’ve been.

    Oh well, I guess it’s time to properly lock in on actual textbook knowledge in the fields I’m interested in, because recognizing stuff like this in tutorials/posts and eventually books will only get harder, and it won’t really be feasible to rely on “I’ll research it on the internet when I need it”.



  • I don’t really do courses anymore, but one thing that kind of matches the questions was playing through Turing Complete.

    It’s a game where you start with NAND gates and slowly build up from there - other gates, then a counter, an adder, single-bit memory, etc., where every puzzle uses the components you’ve built before. Eventually you work your way up to an ALU and RAM, add instructions, and connect it all into a working CPU.

    It’s super fun, and even though hardware isn’t really something I usually look into, it has taught me a lot, way more than my college courses on CPU architecture. Plus, seeing (and, in later levels, actually programming) a CPU of your own design, using your own opcodes, is actually pretty cool.
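    To give a flavor of the early levels (my own sketch, not the game’s notation), everything really is derivable from NAND alone:

    ```python
    # Every gate below is built only from NAND, mirroring the game's premise.
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    def not_(a: bool) -> bool:
        return nand(a, a)

    def and_(a: bool, b: bool) -> bool:
        return not_(nand(a, b))

    def or_(a: bool, b: bool) -> bool:
        return nand(not_(a), not_(b))

    def xor(a: bool, b: bool) -> bool:
        n = nand(a, b)  # the classic 4-NAND XOR construction
        return nand(nand(a, n), nand(b, n))

    def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
        """Returns (sum, carry), the first step toward the game's adder."""
        return xor(a, b), and_(a, b)

    assert half_adder(True, True) == (False, True)  # 1 + 1 = 0b10
    ```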





  • This is a really good point.

    This post is a great example of what skipping research and just trusting the first solution you find leads to.

    When you research the thing yourself, you usually don’t find the solution immediately. And if you immediately have something that seems to work, you’re even less likely to give up on that idea.

    However, even taking this into account (because the same thing can probably happen even if you do research it yourself - jumping at the first solution), I don’t understand how the post doesn’t make a single mention of any remote desktop protocol. I’m struggling to figure out how you would have to phrase your questions/prompts/research so that VNC/RDP - you know, the tools made for exactly the problem they are trying to solve - do not come up even once during your development.

    Like, every single search I’ve tried about this problem has immediately led me to RDP/VNC. The only way I can explain the ignorance displayed in the post is that they ignored it on purpose - lacking any real knowledge about the problem they were trying to solve, they simply jumped to “we’ll have a 60 FPS HD stream!”, and their problem statement never was “how to do low-bandwidth remote desktop/screen sharing”, but “how to stream a 60 FPS low-latency desktop”.
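    For scale, here’s a rough estimate of why “just stream the desktop” is the wrong framing (the raw math is exact; the compressed figures are my ballpark assumptions):

    ```python
    # Back-of-the-envelope bandwidth for a 1080p60 desktop "stream".
    width, height, fps, bytes_per_pixel = 1920, 1080, 60, 3  # 24-bit RGB

    raw_bps = width * height * bytes_per_pixel * fps * 8
    print(f"raw, uncompressed: {raw_bps / 1e9:.1f} Gbit/s")  # ~3.0 Gbit/s

    # Ballpark comparisons (vary wildly with content and settings):
    h264_stream_bps = 10_000_000  # typical game-streaming bitrate, ~10 Mbit/s
    rdp_idle_bps = 100_000        # RDP/VNC send mostly changed regions,
                                  # so a static desktop costs almost nothing
    ```

    That’s the whole point of RDP/VNC: they don’t ship every frame, they ship what changed.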

    It’s mind-boggling. I’d love to see the thought and development process that was behind this abomination.


  • Uh, I’m pretty damn sure I have seen an office with hundreds of people, all connected remotely to workstations on an enterprise network, without any of the problems they are talking about. I’ve worked remotely from coffee shop WiFi without any lag or issues. What the hell are they going on about? Have they never heard of VNC or RDP?

    But our WebSocket streaming layer sits on top of the Moonlight protocol

    Oh. I mean, I’m sitting on my own WiFi, with one wall between my laptop (it is 10 years old, though) and the computer running the Sunshine/Moonlight stream, and I run into issues pretty often even on a 30 FPS stream. It’s made for super low-latency game streaming; that’s expected. It’s the completely wrong tool for this job.

    We’re building Helix, an AI platform where autonomous coding agents…

    Oh. So that’s why.

    Lol.