I advocate for logical and consistent viewpoints on controversial topics. If you’re looking at my profile, I’ve probably made you mad by doing so.

  • 1 Post
  • 149 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Ace T'Ken@lemmy.ca to Lemmy Shitpost@lemmy.world · Lemmy be like · 1 month ago

    DB0 has a rather famous record of banning users who do not agree with AI. See [email protected] or others for many threads complaining about it.

You have no way of knowing what the scale would be as it’s all a thought experiment, however, so let’s play at that. If you see AI as a nearly universal good and want to encourage people to use it, why not incorporate it into things? Why not foist it on the state OS or whatever?

    Buuuuut… keep in mind that in previous Communist regimes (even if you disagree that they were “real” Communists), what the state says will apply. If the state is actively pro-AI, then by default, you are using it. Are you too good to use what your brothers and sisters have said is good and will definitely 100% save labour? Are you wasteful, Comrade? Why do you hate your country?


  • Ace T'Ken@lemmy.ca to Lemmy Shitpost@lemmy.world · Lemmy be like · 1 month ago

I’ll answer. Because some people see these systems as “good” regardless of political affiliation, want them furthered, and see any cost as worth it. If an anarchist / communist sees these systems in a positive light, then they will absolutely try to use them at scale. These people absolutely exist, and you could find many examples of them on Lemmy. Try DB0.


  • Ace T'Ken@lemmy.ca to Lemmy Shitpost@lemmy.world · Lemmy be like · edited · 1 month ago

Hi. I’m in charge of an IT firm that has been contracted, somewhat unwillingly, to build one of these data centers in our city. We are currently in the groundbreaking phase, but I am looking at the papers and power requirements. You are absolutely wrong on the power requirements unless you mean per query under a light load on an easy plan, but these facilities will be handling millions if not billions of queries per day. Keep in mind that a single user query can also fan out into dozens, hundreds, or thousands of separate queries; generating a single image takes dramatically more power than you are stating.

Edit: I don’t think your statement addresses the amount of water required, either. There are serious concerns that the massive water reservoir and lake near where I live will not be anywhere close to enough.

Edit 2: Also, we were told to spec for at least 10x growth within the next 5 years. Unless there are massive gains in efficiency, I don’t think anywhere on the planet is capable of meeting those needs, even if the models themselves become substantially more efficient.
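To show how fast per-query figures compound at this scale, here is a back-of-envelope sketch. Every number in it is an illustrative assumption I picked for the example, not a figure from this project:

```python
# Back-of-envelope data-center load estimate. All constants below are
# assumptions for illustration only, not real project numbers.
WH_PER_QUERY = 3.0        # assumed average energy per internal query, in Wh
QUERIES_PER_DAY = 100e6   # assumed 100 million user-facing queries per day
FANOUT = 20               # assumed internal sub-queries spawned per user query

# Total daily energy in kWh; note it scales linearly with every assumption,
# so a 10x growth target multiplies the result by 10.
daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY * FANOUT / 1000
print(f"{daily_kwh:,.0f} kWh/day")  # 6,000,000 kWh/day under these assumptions
```

The point isn’t the specific total; it’s that fan-out and query volume multiply whatever per-query number you start from.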

  • Ace T'Ken@lemmy.ca to Lemmy.ca's Main Community@lemmy.ca · Upgraded to 0.19.11 · 5 months ago

Sure! So, for example, our current weekly topic and this new topic by a user have been downvoted by a user named @[email protected]. Looking around, there are a few others they’ve downvoted as well, with no upvotes anywhere. Checking the modlog shows they’ve been banned from other communities for vote manipulation (among other things). They don’t need to be able to do this.

Checking older posts, I see that someone named @[email protected] had gone in and downvoted dozens of things with no upvotes. Nearly every topic we had at the time, in fact (or at least as many as I looked at). This is not only against community rules, but it’s also a pretty shit thing to do.

We’ve also got old blank accounts like @[email protected] and @[email protected], with zero posts or activity of any kind, that are downvoting. They’re not contributing anything anywhere, so they don’t need to be there at all.

    That’s just a few examples, but there’s more.


  • The part that doesn’t make sense is how a guess by a QC on a binary outcome is any better than a scientist just guessing the outcome. Yeah, it can do it a lot, but if you can’t test the outcome to verify whether it’s correct, how is it better than any other way of guessing outcomes?

Statistically, it absolutely isn’t. Even if it continually narrows things down via guesses, it’s still no more valuable than any other guess, because in all the whitepapers I’ve seen, it’s not calculating anything; it can’t. It’s simply assuming that one option is correct.

    In the real world, it’s not a calculation and it doesn’t assist in… anything really. It’s no better than a random number generator assigning those numbers to a result. I don’t get the utility other than potentially breaking numerical cryptography.
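To put numbers on that point (my own toy framing, not anything from a whitepaper): if you can never verify a black box’s answers, its hit rate on unknown binary questions is statistically indistinguishable from flipping a coin.

```python
import random

random.seed(0)
N = 10_000
secret = [random.randint(0, 1) for _ in range(N)]  # the unknown true answers
device = [random.randint(0, 1) for _ in range(N)]  # the unverifiable "guesser"
coin = [random.randint(0, 1) for _ in range(N)]    # plain coin flips

def hit_rate(guesses):
    """Fraction of guesses that match the hidden answers."""
    return sum(g == s for g, s in zip(guesses, secret)) / N

# Both hover near 0.5: with no verifier, the device's output carries
# no more usable information than random noise.
print(hit_rate(device), hit_rate(coin))
```

This is exactly the random-number-generator comparison above: without a way to check results, "guesser" and "RNG" are the same thing.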


  • So that’s the part that gets me stuck. There is no clear answer, and it has no way to check the result, as QCs aren’t capable of doing so (otherwise they wouldn’t be using QCs, since they can only be based on binary inputs and binary guesses of true / false outcomes at massive scale). How can it decide that it is “correct” and that the task is completed?

    Computations based on guesses of true / false can only be so accurate with no way to check the result in the moment.


  • I appreciate the reply!

I made the attempt, but couldn’t parse that first link. I gathered that it was about error correction, due to the absolutely massive number of errors that crop up in QC, but I admit I can’t get much further with it as the industry language in that paper is thick. Error reduction is good, but it still isn’t on any viable data, and it’s still a massive number of errors even post-correction. It’s more of a small refinement to an existing questionable system, which is okay, but doesn’t really do much unless I’m misunderstanding.

    The Willow (and others) examples I’m skeptical on. We already have different types of chips for different kinds of operations, such as CPUs, GPUs, NPUs, etc. This is just one more kind of chip that will be found in computers of the future. Of course, these can sometimes be combined into a single chip too, but you get the idea.

    The factorization of integers is one operation that is simple on a quantum computer. Since that is an essential part of public / private key cryptography, those encryption schemes have been recently upgraded with algorithms that a quantum computer cannot so easily unravel.
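As a toy illustration of why cheap factoring breaks this (my own example, using textbook-sized numbers nothing like a real key): recovering p and q from the public modulus n = p·q lets you reconstruct the private exponent.

```python
from math import isqrt

# Tiny textbook RSA modulus and public exponent; real keys are thousands
# of bits, which is what makes classical factoring infeasible.
n, e = 3233, 17

# Trial division stands in here for the quantum factoring step.
p = next(f for f in range(2, isqrt(n) + 1) if n % f == 0)
q = n // p
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent via modular inverse (Python 3.8+)

msg = 65
assert pow(pow(msg, e, n), d, n) == msg  # encrypt-then-decrypt round-trips
print(p, q, d)  # 53 61 2753
```

Anyone who can factor n this cheaply at real key sizes owns the private key, which is why the "post-quantum" algorithm upgrades were needed.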

    With quantum computing, a system of qubits can be set up in such a way that it’s like a machine that physically simulates the problem. It runs this experiment over and over again and measures the outcome, until one answer is the clear winner. For the right type of problem, and with enough qubits, this is unbelievably fast.
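That repeat-and-measure loop can be caricatured classically like this (a stand-in sketch, not a quantum simulation; the 40% bias toward the true answer is made up to mimic amplitude amplification):

```python
import random
from collections import Counter

random.seed(1)

def noisy_sample():
    """One 'measurement': the true answer appears with elevated probability,
    everything else is uniform noise over 4-bit strings."""
    if random.random() < 0.4:  # assumed amplified probability of the answer
        return "1101"
    return format(random.getrandbits(4), "04b")

# Run the "experiment" many times and take the most frequent outcome.
counts = Counter(noisy_sample() for _ in range(2000))
winner, _ = counts.most_common(1)[0]
print(winner)  # the amplified answer dominates the samples
```

The real quantum machinery is in arranging the qubits so the right answer gets that elevated probability in the first place; the sampling step itself is the easy part.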

Problem is, this only works for systems that have a known answer (like cryptography) with a verifiable result; otherwise the system never knows when the equation is “complete”. It’s also of note that none of these organizations are publishing their benchmarking algorithms, so when they talk about speed, they aren’t exactly being forthright. I can write code that runs faster on an Apple IIe than on a modern x64 processor; that doesn’t mean the Apple IIe is faster. Then factor in how fast quantum systems degrade, and it’s not really useful, in power expenditure or financially, for much beyond a large corporation or government breaking encryption.


  • Well, I love being wrong! Are you able to show a documented quantum experiment that was carried out on a quantum computer (and not an emulator using a traditional architecture)?

    How about a use case that isn’t simply for breaking encryption, benchmarking, or something deeply theoretical that they have no way to know how to actually program for or use in the real world?

    I’m not requesting these proofs to be snarky, but simply because I’ve never seen anything else beyond what I listed.

When I see all the large corporations touting the processing power of these things, they’re simply mentioning how many times they can get an emulated qubit to flip, and then claiming grandiose things for investors. That’s pretty much it. To me, that’s fraudulent (or borderline) corporate BS.