• 1 Post
  • 21 Comments
Joined 5 months ago
Cake day: January 25th, 2024

  • You would go for a Raspberry Pi when you need something it was invented for.

    Putting a computer on your motorcycle or robot or solar powered RV. Super small space or low-low power availability things, or direct GPIO control.

    A MiniMicro will run laps around a Pi for general compute, but you can't run it off a cell phone battery pack. People only relate Pis to general compute because of the push to sell them as affordable school computers; not because they were awesome at it, but because they were cheap and just barely enough.


  • PassingThrough@lemmy.world to Memes@lemmy.ml · Get rich quick · 18 days ago

    Forgive me, I'm no AI expert who can precisely relate tokens-per-second figures to the average query Siri might handle, but I will say this:

    Even in your article, only the largest model ran at 8 tokens per second; the others ran much faster, and none of them were optimized for a task, just benchmarked.

    Would it be impossible for Apple to be running an optimized model specific to expected mobile tasks, and leverage their own hardware more efficiently than we can, to meet their needs?

    I imagine they cut out most worldly knowledge and use a lightweight model, which is why there is still a need to hand some requests off to ChatGPT or Apple's servers. Would this let them trim Siri down to perform well enough on phones for most requests? They also advertised launching AI on M1 and M2 chip devices, which are not M3 Max either…
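    To put those numbers in perspective, here's my own back-of-envelope (the 40-token reply length and the faster rates are assumptions, not measurements):

```python
# Rough latency estimate for a short assistant-style reply at a few token
# rates. Only the 8 tps figure comes from the article; the rest is illustrative.
def reply_seconds(reply_tokens: int, tokens_per_sec: float) -> float:
    return reply_tokens / tokens_per_sec

for tps in (8, 30, 60):
    print(f"{tps:>2} tps -> {reply_seconds(40, tps):.1f}s for a 40-token reply")
```

    Even the worst case is a few seconds, and a task-tuned small model should land well under that.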


  • PassingThrough@lemmy.world to Memes@lemmy.ml · Get rich quick · edited · 18 days ago

    Onboard AI chips will allow this to be local.

    Phones do not have the power to ~~~

    Perhaps this is why these features will only be available on iPhone 15 Pro/Max and newer? Gotta have those latest and greatest chips.

    It will be fun to see how it all shakes out. If the AI can’t run most queries on the phone with all this advertising of local processing…there’ll be one hell of a lawsuit coming up.

    EDIT: Finished looking for what I thought I remembered…

    Additionally, Siri has been locally processed since iOS 15.

    https://www.macrumors.com/how-to/use-on-device-siri-iphone-ipad/


  • PassingThrough@lemmy.world to Memes@lemmy.ml · Get rich quick · 19 days ago

    I think there’s a larger picture at play here that is being missed.

    Getting the weather has been a standard feature for years now. Nothing AI about it.

    What is "AI" is: "Hey Siri, what is the weather at my daughter's recital coming up?"

    The AI processing, calculated on-device if what they claim is true, is:

    1. Determine who your daughter is.
    2. Work out what a recital is. An event? Are there any upcoming calendar events that match this concept?
    3. Check whether the "daughter" is associated with this event by description or invitation. Yes? OK, what's the address?
    4. Submit the zip code of the recital calendar event involving the kid to the weather API, and churn out a reply that includes all this information…

    Well {Your phone contact name}, it looks like it will {remote weather response} during your {calendar event from phone} with {daughter from contacts} on {event date}.

    That is the split between on-device and cloud processing. The phone already has your contacts and calendar, so it does that work offline rather than educating an online server about your family, events, and location, and it requests the bare minimum from the internet: in this case, nothing more than if you opened the weather app yourself and put in a zip code.
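    A toy sketch of that flow (every name, structure, and function here is made up for illustration; this is not Apple's implementation):

```python
# Toy sketch of the on-device flow described above. Steps 1-3 use only
# local data; step 4 sends nothing but a zip code. All data is hypothetical.
CONTACTS = {"daughter": {"name": "Emma"}}
CALENDAR = [{"title": "piano recital", "with": "Emma",
             "zip": "90210", "date": "June 30"}]

def fetch_weather(zip_code: str) -> str:
    # Stand-in for the one remote request: only the zip code leaves the device.
    return "be sunny"

def answer_recital_weather(owner: str) -> str:
    kid = CONTACTS["daughter"]["name"]                   # step 1: who is "daughter"?
    event = next(e for e in CALENDAR if e["with"] == kid)  # steps 2-3: match an event
    weather = fetch_weather(event["zip"])                # step 4: minimal remote call
    return (f"Well {owner}, it looks like it will {weather} during your "
            f"{event['title']} with {kid} on {event['date']}.")

print(answer_recital_weather("Alex"))
```

    Everything except `fetch_weather` runs against data the phone already holds.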


  • Plug it into a monitor or TV and keep an eye on the console.

    I have an older NUC that will not cooperate with certain brands of NVMe drive under PVE…the issue sounds like yours: it would work for an arbitrary amount of time before crashing the file system, attempting to remount read-only, and rendering the system inert and unable to handle changes like plugging a monitor in later, yet it would still be "on".
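    If you can pull a saved kernel log off the box afterward, a quick filter for the telltale lines might look like this (the patterns are my guesses at typical messages, not a PVE-specific tool):

```python
# Hypothetical helper: scan saved kernel log text for the NVMe error /
# read-only remount pattern described above.
import re

FAILURE_PATTERN = re.compile(
    r"nvme.*(timeout|reset|I/O error)|Remounting filesystem read-only",
    re.IGNORECASE,
)

def scan_kern_log(text: str) -> list[str]:
    """Return the log lines that look like the NVMe/filesystem failure."""
    return [line for line in text.splitlines() if FAILURE_PATTERN.search(line)]
```

    Live, the equivalent is watching `dmesg -w` on the attached monitor for NVMe resets or a read-only remount.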


  • My understanding is that this is a rage-baitey misunderstanding.

    Yes, they are renaming the base game (to improve search results, it is speculated), but otherwise this is more of a soft reboot: a free DLC (for owners of the current DLC) with some core mechanic overhauls.

    It's not even going to stop being an MMORPG; the marketing team was just allergic to the acronym for some reason.

    In fact, it was confirmed that for base game players without DLC, this is all just a big nothing. We'll still keep our progress and data, just not get any new DLC content (obviously), though the rebalancing will still trickle down.

    New World isn't shutting down; it's getting a new DLC and a less generic name. The marketing guys just tried to oversell it like a new game. Guess they earned their bonus, because everyone is talking about it now…


  • Genuine curiosity…what are some proposed solutions we think Valve can implement to solve this crisis?

    I ask because the line about VAC being a joke gave me a thought…VAC is such a joke because it is so simple and non-invasive. Do we really want VAC "upgraded" to the level of more effective anti-cheats, where it cuts down the bots but is now a monitoring kernel service? Just a few weeks ago people were in an uproar about the new Vanguard anti-cheat…do we want that for Valve? Or do we think they can do it a better way?

    As an aside, honestly, in my mind community servers with a cooperative ban-list plugin might be the most effective solution of all…it would still be a game of whack-a-mole, since they can always churn out new accounts, but that's what gives me pause about other solutions: the only real ways to slow cheaters start to sound like charging for the game (to make account creation costly) or implementing a bulletproof system of hardware bans, which means invasive solutions that can be certain they aren't dealing with virtual machines or such.
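    As a sketch of what that cooperative ban-list plugin might boil down to (all IDs and the list format are made up for illustration):

```python
# Hypothetical community-server plugin logic: check connecting players
# against a ban list synced between participating servers.
SHARED_BANS = {"STEAM_0:1:111111", "STEAM_0:0:222222"}  # pulled from peer servers

def on_player_connect(steam_id: str) -> bool:
    """Return True if the player may join, False if they're on the shared list."""
    if steam_id in SHARED_BANS:
        print(f"rejecting {steam_id}: on shared ban list")
        return False
    return True
```

    The whack-a-mole problem remains: a fresh account gets a fresh ID, so the list only slows cheaters down rather than stopping them.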





  • Second this. I don’t believe the chef would care.

    Whether all at once, over hours, for one table or six, all you are to the chef is plates to be filled. Except for timing a table's dishes to send out at once, they wouldn't even care what table an order goes to, much less whether the same customer is making repeat orders or a quick table turnaround is seating multiple customers. The chef gets paid all the same either way.

    No, I think this lies solely with the server. Your choices annoyed her, and if tips were involved, even more so. The quicker you are in and out, the quicker you leave your tip and she gets another tipping customer seated, which depending on your location could be very important to her livelihood.




  • I’ll take a compromise where “3.1” is etched in each head end, and I can trust that “3.1” means something, and start with that.

    The real crux of the issue is that there is no way to identify the ability of a port or cable without trying it, and even if labeled there is/was too much freedom to simply deviate and escape spec.

    I grabbed a cable from my box to use with my docking station. Short length, hefty girth, firm head ends, certainly felt like a featured video/data/dock cable…it did not work. It did work with my numpad/USB-A port hub though, so it had some data ability (I did not test whether it was 2.0 or 3.0). The cable that DID work with my docking station was actually a much thinner, weaker-feeling one from a portable monitor I also had. So you can't even judge by wiring density.

    And now we have companies using the port to deviate from spec completely, like the Raspberry Pi 5 technically using USB-C, but at a power level unsupported by spec. Or my video glasses that use USB-C connections all over, with a proprietary design that ensures only their products work together.

    Universal appearance, non-universal function, universal confusion.

    I hate it. At least with HDMI, RCA, 3.5mm, Micro-USB…I could readily identify what a port and plug was good for, and 99 times out of 100 the random wires of unknown origin I had in a box worked just fine.


  • Actually, that leads me to another point:

    Once upon a time, the concept behind a universal USB-C connector was that we could do exactly that.

    Laptop? Phone? Camera? America? Germany? Japan? Power? Connect to the TV? Internet?

    Wouldn’t matter anymore. USB-C to cover it all. Voltage high for the laptop, low for the camera, all available just the same in every country, universal. So yes, fill the airports and hotels with them. Use them for power and to play videos on the TV. Because we weren’t supposed to have to question the voltage or abilities of the ports and cables in use.

    Did/will that future materialize?


    I feel the only place for a €1 cable is those USB-A to C cables that you get with things for 5V charging. That's it. And the A plug end makes it very obvious what the limits on those are.

    Anything that wants to be USB-C on both ends should be fully compatible with a marked spec, not indistinguishable from a 5V junk wire or freely cherry picking what they feel like paying for.

    Simply marking on the cable itself what generation it's good for would be a nice start, but the real issue is the cherry-picking. The generation numbers don't cover a wire that does maximum everything except video. Or a proprietary setup that does power transfer in excess of spec (Dell, Raspberry Pi 5). But they all have the same ends and lack of demarcation, leading to the confusion.




    I wouldn't say it's only Critical; LTSC still gets regular security fixes. "They don't get Feature updates, but they still get Security updates" is how it's normally put. And it's not as bad as it sounds. Even as a gamer, stability is a good thing, and there is plenty of third-party software for any desirable "features" that get delayed or skipped. If LTSC gets any fewer security updates, it's because it has less built-in crap to need updating.

    I’ve never needed funny graphics in my taskbar search bar or Bing in my start menu or the Edge bar or whatever it was that now clutters my friend’s task bars as of the last Feature update. But I still get my security fixes and Defender definitions every Patch Tuesday.

    But the trick is getting a copy, true.


    I won't claim to know for sure, but I'll place my bet on it still being motivated by profit and growth. Windows 10 was supposedly going to be the last Windows ever, moving to an eternal patching process, but I guess that didn't stick. So obviously just keeping you on Windows isn't enough; they found a need to create a refresh.

    I did notice that the refresh has new hardware requirements, like TPM modules and such. Deals with the OEMs to get people to buy/build new PCs?

    There’s talk of advertisements and sponsored links in the very Start Menu, so partnerships with advertisers to get closer to your daily activities?

    I won't say I know for sure, because I only use Windows for video games. So I, too, will be running Windows 10 until the games don't work anymore. Might I recommend, if you can get a copy, Windows 10 LTSC? It is a bare-bones version of Windows made (by Microsoft) for enterprises and governments who would never buy into consumer features like advertising and analytics, so it's very clean, fast, and not full of spying junk or ads like the Home versions. And it hasn't bugged me once about upgrading. All my games run fine after some one-time minor command prompt foolery to get the Store and Xbox Game Pass apps back.

    EDIT: Also, LTSC is the Long-Term Servicing Channel, so it will additionally be supported longer than the regular editions, and stay safe longer. Unless they change their minds this time around of course, but I doubt it. You don't rush the government through a PC upgrade if you want them to fund you.