  • People’s laziness?

    Well yes, that is a huge one. I know people who, when faced with Google’s credible password suggestion, say “hell no, I could never remember that”, then proceed to use leet-speak thinking computers can’t guess it, because of years of ‘use a special character to make your password secure’. People at work give their password to someone else to take care of something because everything else is a pain and the stakes are low to them. People get told their bank is using a new authentication provider and dutifully log into the cited ‘auth provider’, because this is the sort of thing that companies (though generally not banks) do to people.

    to an extent

    Exactly, it mitigates, but there’s still a gap. If they phish for your bank credential, you give them your real bank password. It’s unique, great, but the only thing the attacker wanted was the bank password anyway. If they phish a TOTP, they have to make sure they use it within a minute or so, but it can still be used.

    actually destroys any additional security added by 2fa

    From the perspective of a user who knows they are using machine-generated passwords, yes, that setup is redundant. However, from the perspective of a service provider that has no way of enforcing good password hygiene, it at least gives the provider control over generating the secret. Sure, ‘we pick the password for the user’ would get to the same end, but no one accepts that.

    But this proves that if you are fanatical about MFA, TOTP doesn’t guarantee it anyway, since the secret can be stuffed into a password manager. The passkey ecosystem tries more affirmatively to enforce those MFA principles, even if it ultimately remains, generally, within the power of the user to overcome them (you can restrict to certain vendor keys, but that’s not practical for most scenarios).

    My perspective is that MFA is overblown and mostly fixes some specific weaknesses:

    • “Thing you know” largely sucks as a factor: if a human can know it, a machine can guess it, and on the service provider side there’s real risk that such a factor can be guessed at a faster rate than you want, despite mitigations. Especially since you generally let a human select the factor in the first place. In physical security, a paired code helps mitigate the risk of a lost or stolen badge on a door, but that’s a context where the building operator can reasonably audit attempts at the secret, which is generally not the case for online services. So broadly speaking, the additional factor is just trying to mitigate the crappy nature of “thing you know”.
    • “Thing you have” used to be easier to lose track of or get cloned. A magstripe badge gets run through a skimmer, and that gets replicated. A single-purpose security card gets lost and you don’t think about it because you don’t need it for anything else. The “thing you have” nowadays is likely to lock itself and require local unlocking, essentially being the ‘second factor’ enforced client side. Passkey implementations generally require just that: a locally managed ‘second factor’.

    So broadly, ‘2FA is important’ is mostly ‘passwords are bad’, and to the extent it is important, passkeys are more likely to enforce it than other approaches anyway.


  • Ok, I’ll concede that Chrome makes Google a relatively more popular password manager than I considered, and it tries to steer users toward generated passwords that are credible. Further, by being browser integrated, it mitigates some phishing by declining to autofill when the DNS or TLS situation is inconsistent. However, I definitely see people discard the suggestions and pick a word, thinking ‘leet-speak’ makes it hard (“I could never remember that, I need to pick something I remember”). Using it for passwords still means the weak point is human behavior (in selecting the password, in choosing whether to reuse it, and in divulging it to a phishing attempt).

    If you subscribe to Google password manager being a good solution, it also handles passkeys. That removes the ‘human can divulge the fundamental, reusable secret’ problem while taking full advantage of the password manager’s convenience.


  • Password managers are a workaround, and broadly speaking the overall system is still weak, because password managers have relatively low adoption and plenty of people are walking around with poorly managed credentials. They also do nothing to mitigate a phishing attack: should the user get fooled, they will leak a password they care about.

    2FA is broad, but I’m wagering you specifically mean TOTP: numbers that change based on a shared secret. Problems there are:

    • Transcribing the code is a pain.
    • Password managers mitigate that, but the most common ‘default’ password managers (e.g. built into the browser) do nothing for them.
    • Still susceptible to phishing, albeit on a shorter time scale.
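
    To make the ‘shared secret’ part concrete: a TOTP code is just an HMAC over the current 30-second counter, computed from a secret both the authenticator and the service hold. A minimal RFC 6238 sketch in Python (the base32 string is a made-up demo secret, not anything real):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step              # which 30-second window we're in
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up demo secret; anyone holding the same secret can mint valid codes.
print(totp("JBSWY3DPEHPK3PXP"))
```

    Whoever holds that secret can generate valid codes indefinitely, which is why stuffing it into a password manager quietly collapses it back toward one factor, and a phished code remains usable until the window rolls over.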

    Pub/priv key based tech is the right approach, but passkeys do wrap it up with some obnoxious stuff.
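
    For contrast, the challenge/response that passkeys (and ssh keys, and client certificates) are built on never sends a reusable secret at all. A rough sketch of just the core idea, using Ed25519 via the Python cryptography package, with all the WebAuthn ceremony (origin binding, attestation, etc.) left out:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the key pair is generated client-side; the private key never
# leaves the device, and the service only ever stores the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Login: the service sends a fresh random challenge...
challenge = os.urandom(32)

# ...the client signs it with the private key...
signature = private_key.sign(challenge)

# ...and the service checks the signature against the stored public key.
public_key.verify(signature, challenge)  # raises InvalidSignature on mismatch
print("authenticated without any reusable secret crossing the wire")
```

    A phisher who intercepts that exchange gets a signature over one stale challenge, which is useless for logging in later; none of the password-plus-TOTP setups above can offer that property.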


  • Passkeys are a technology that were surpassed 10 years before their introduction

    Question is: by what? I could see an argument that they are an overcomplication of some ill-defined application of x509 certificates or ssh user keys, but they are all roughly comparable fundamental technologies.

    My biggest gripe is that they are too fussy about when they are allowed and how they are stored, rather than leaving it up to the user. You want to use a passkey with a site that you manually trusted? Tough, not allowed. You want to use one against an IP address, even if that IP address has a valid certificate? Tough, not allowed.


  • Yeah, but can they handle the collapse of going back to being the company they were before the AI boom? They’ve increased in market cap about 5000% and attracted a lot of stakeholders who never would have bothered with nVidia if not for the LLM boom. If the LLM boom pops, will nVidia survive with a new set of stakeholders who didn’t sign up for a ‘mere graphics company’?

    They’ve reshaped their entire product strategy to be LLM focused. Who knows what the demand for their current products is without the LLM bump. Discrete GPUs were becoming increasingly niche, since ‘good enough’ integrated GPUs were kind of denting their market.

    They could survive a pop, but they may not have the right backers to do so anymore…


  • Nah, they already converted all their business clients to recurring revenue and are, relatively speaking, not very exposed to the LLM thing. Sure, they will have overspent a bit on datacenters and nVidia gear, but most of the world’s businesses continue to solidly, continuously pay them to keep Office and Azure going.

    In terms of longer-term tech companies that could be under existential threat, I’d put Supermicro in there. They are a long-term fixture in the market that was generally pretty modest and got a bit of a boost from the hyperscalers as ‘cloud’ took off, but frankly a lot of industry folks were not sure exactly how Supermicro was getting the business results they reported while doing the things they were doing. Then the AI bubble pulled them up hard, which was a double-edged sword: the extra scrutiny seemingly revealed the answer was dubious accounting all along. The findings would have been enough to destroy the company outright, except they were ‘in’ on AI enough to be buoyed above the catastrophe.

    A longer stretch, but nVidia might have some struggles. The AI boom has driven their market cap up by about 5000%. They’ve largely redefined most of the company to be LLM centric, with other use cases left to make the most of whatever they do for LLM. How will their stakeholders react to a huge drop from the most important company on earth to a respectable but modest vendor of graphics hardware? How strong is the appetite for GPUs when the visual results aren’t really that much more striking than they were three hardware generations back?


  • Broadly speaking, I’d say simulation theory is more akin to religion than to science, since it’s not really testable. We can draw analogies based on what we see in our own works, but ultimately it’s not really evidence based, just ‘hey, it’s funny that things look like simulation artifacts…’

    There are a couple of ways one may consider it distinct from a typical theology:

    • Generally theology fixates on a “divine” being or beings as superior entities that we may appeal to or somehow guess what they want of us and be rewarded for guessing correctly. Simulation theory would have the higher order beings likely being less elevated in status.
    • One could consider the possibility as shaping our behavior to the extent we come anywhere close to making a lower order universe. Theology doesn’t generally present the possibility that we could serve that role relative to another.

  • But that sounds like disproving a scenario no one claimed to be the case: that everything we perceive is as substantial as we think it is and can be simulated at full scale in real time by our own universe.

    Part of the whole reason people think simulation theory is worth contemplating is that they find quantum physics and relativity unsatisfyingly “weird”. They like to think of how things break down at relativistic velocities and quantum scales as the sorts of ways a simulation would be limited if we tried it, so they like to imagine a higher order universe that doesn’t have those pesky “weird” behaviors, and that we are only stuck with them due to simulation limits within this hypothetical higher order universe.

    Nothing about it is practical, but a lot of these science themed “why” exercises aren’t themselves practical or sciency.




  • If a service were pushing passkeys for the sake of law enforcement or the like, it would be so much easier for them to just comply by bypassing auth and accessing the user data altogether. Passkey implementations originally only supported very credible offline mechanisms, and only relaxed those requirements when it became clear the vast majority of people couldn’t handle replacing a device that held their passkeys.

    For screen lock, for the common person it was either that or nothing at all. So demanding a PIN only worked because most of the time the user didn’t have to deal with it, owing to fingerprint or face unlock.

    People hate passwords and mitigate that aggravation by giving a random Internet forum the same password as their bank account. I wouldn’t want to take user passwords, because I know a compromise on my end would have a much higher risk of somehow leading to compromise of actually important accounts elsewhere.


  • So the first thing is to accept that this is more a philosophy/religion sort of discussion than science, because it’s not falsifiable.

    One thing is that we don’t need to presume infinite recursion, just accept that there can be some recursion. Just like how a SNES game could run on a SNES emulator running inside qemu running on a computer of a different architecture. Each step limits the next and maybe you couldn’t have anything credible at the end of some chain, but the chain can nonetheless exist.

    If U0 existed, U1 would have no way of knowing the nature of U0. U1 has no way of knowing ‘absolute complexity’, how long a time is actually ‘long’, or how quickly time passes in U0 compared to U1. We see it already in our own simulations: a hypothetical self-aware game engine would have some interesting concepts about reality (and had better hope it isn’t in a Bethesda game). Presuming they could measure their world accurately, its inhabitants could conclude that the observed triangles were the smallest particles. They would have no way of knowing that everything they can’t currently perceive isn’t actually there, since whatever they go to observe is made on demand. They’d have a set of physics based on the game engine, which superficially looks like ours, but which we know is a pile of simplifications with side effects. If you clip a chair just right in a corner of the room, it can jump out through the seemingly solid walls. For us that would be mostly ridiculous (quantum stuff gets weird…), but they’d just accept it as a weird quirk of physics (like we accept quantum behavior and time getting all weird based on relative velocity).

    We don’t know that all this history took place, or even that our own memories are real. Almost all games have participants act based on some history and claimed memories, even though you know the scenario has only been playing out in any modeled way for minutes. The environment and all participants had lore and memories pre-loaded.

    Similarly, we don’t know all this fancy physics is substantial or merely superficial “special effects”. Some sci-fi game in-universe might marvel at the impossibly complicated physics of their interstellar travel but we would know it’s just hand waving around some pretty special effects.

    This is why it’s kind of pointless to treat this concept as ‘hard science’, and disproving it is a fruitless exercise, since you can always undermine such an argument by saying the results are just whatever the simulation made them to be.




  • Except how bad was it for Microsoft?

    They didn’t lose share. The people who rightfully saw Metro as a painfully dumb direction for the Windows design language just stuck with Windows 7. Microsoft didn’t get the upside they wanted, but they didn’t suffer the downside either.

    They tried to pump life into their mobile platform by throwing their desktop platform under the bus. Because they have zero competitive pressure, they could attempt that with essentially zero downside. Just like now they can make their OS little more than an advertising platform for the Microsoft Store and Microsoft services without real repercussions.


  • With many bureaucracies, there’s plenty of practically valueless work going on.

    Because some executive wants to brag about having over a hundred people under them. Because some process requires a sort of document that hasn’t been used in decades, and no one has the time to validate what does or does not matter anymore. Because of a lot of little nonsense reasons where the path of least resistance is to keep plugging away. Because if you are 99 percent sure something is a waste of time and you optimize it away, there’s a 1% chance you’ll catch hell for a mistake and almost no chance you’ll get real recognition for the efficiency boost if it pans out.


  • Guess it’s a matter of degree; that was the sort of stuff I was alluding to in the first part: you have all this convoluted instrumentation you can dig into, and, as you say, it’s perhaps even more maddening because at times it needlessly overcomplicates something simple, and then at just the wrong time it tries to simplify something and ends up sealing off exactly the flexibility you might need.


  • The thing is, you really can’t be that good with Windows.

    Sure, you can get good with the registry and group policy and other mechanisms that are needlessly complicated ways of doing relatively simple things. You can know your way around WMI and .NET and PowerShell…

    But at some point, the software actively hides the specifics of what is wrong. You can’t crack open something to see why it’s showing some ambiguous hexadecimal code or a plain screen. You can’t add tracing to step through their code and see what unexpected condition they hit that they didn’t prepare to handle. On Linux you are likely to be able to plainly see a stack trace, download the source code, maybe trace it, even modify the source code.

    Windows is like welding the hood shut and wondering why mechanics have a hard time with the car.