Little bit of everything!

Avid Swiftie (come join us at [email protected])

Gaming (Mass Effect, Witcher, and too much Satisfactory)

Sci-fi

I live for 90s TV sitcoms

  • 74 Posts
  • 3.76K Comments
Joined 2 years ago
Cake day: June 2nd, 2023



  • There are many ways to do this. Saving the disk state is one; I believe that’s what the other person suggested. It essentially stores the disk as an image, which you then use as your jumping-off point for future VMs. This is also roughly how workstations are deployed at companies (“roughly” being the key word). Cloud providers have different names for this too; in AWS it’s called an AMI (Amazon Machine Image).

    Another option is Ansible, which handles configuring a machine by running your playbooks for you. I haven’t played with it much, and I’m not sure how well it works with VirtualBox, but it’s something you may want to look into; it would definitely level up your skills.

    The third option depends on what you actually use your VM for. You haven’t given your use cases, but this is one of the reasons containerization became such a big thing: when running an app, we mostly don’t care about the underlying system. It may be worth learning about Docker.
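The first option above (a “golden image” you clone for each new VM) can be sketched with VirtualBox’s own CLI, driven from a script. This is a minimal sketch, assuming VirtualBox is installed and a base VM exists; the VM names here are hypothetical:

```python
# Sketch: cloning a "golden image" VM with VirtualBox's VBoxManage CLI.
# Assumption: VirtualBox is installed and a base VM named "base-vm" exists
# (both VM names are hypothetical placeholders).
import subprocess


def build_clone_command(base_vm: str, new_vm: str) -> list[str]:
    """Build the VBoxManage invocation that clones a base VM into a fresh one."""
    return [
        "VBoxManage", "clonevm", base_vm,
        "--name", new_vm,  # name for the newly cloned VM
        "--register",      # register the clone with VirtualBox immediately
    ]


cmd = build_clone_command("base-vm", "dev-box-01")
print(" ".join(cmd))
# Uncomment to actually run it (requires VirtualBox on the host):
# subprocess.run(cmd, check=True)
```

The point is that the base image is built once and cloning it is cheap and repeatable, which is exactly what an AMI gives you in AWS.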







  • That’s the nuance of AI that anyone who has done any actual work with ML has known for decades now. ML is amazing. It’s not perfect. It’s actually pretty far from perfect. So you should never ever use it as a solo check, but it can be great for a double check.

    Such as with cancer. AI can be a wonderful tool for detecting a melanoma, if used correctly. For example:

    • A doctor has already cleared a mole, but you want to know whether it warrants a second opinion from another doctor. You could have the model report a confidence, say 80%, that the first doctor is correct and it is fine.

    • If you do not have immediate access to a doctor, it can be a reasonable first check, again only to a certain confidence. Say you are worried about a mole but cannot see a doctor easily. A patient could snap a photo, and a very high confidence rating would say it is probably fine, with a disclaimer that this is just an AI model, and that if it changes, or you are still worried, you should get it checked.
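The “only to a certain confidence” idea in the bullets above can be sketched as a simple threshold check. The 0.80 cutoff is an illustrative assumption, not a clinically validated number:

```python
# Sketch: treating a model's output as a probability with a threshold,
# not as a yes/no verdict. The 0.80 cutoff is an illustrative assumption.
def needs_second_opinion(confidence_benign: float, threshold: float = 0.80) -> bool:
    """Return True if the model is NOT confident enough that the mole is benign,
    i.e. the case warrants escalation to another doctor."""
    return confidence_benign < threshold


print(needs_second_opinion(0.95))  # high confidence it's fine -> no escalation
print(needs_second_opinion(0.60))  # low confidence -> escalate to a doctor
```

The model never says “it is fine”; it only says how sure it is, and the threshold decides when a human has to look.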

    Unfortunately, all of that nuance, that it is all just probabilities, is completely lost on the creators of these AI tools, and the risks are not actually communicated to the users, so blind trust becomes the number one problem.

    We see it here with police too. “It said it’s them”. No, it only said to a specific confidence that it might be them. That’s a very different thing. You should never use it to find someone, only to verify someone.

    I actually really like how airport security implemented it, because it uses it well. Here’s an ID with a photo of a person; compare it to the photo taken there in person, and the system should verify, to a very high confidence, that they are the same person. If in doubt, there’s a human there to verify it too. That’s good ML usage.