I had some similar and obscure corruption issues that wound up being a symptom of failing ram in a main server node. After that, only issues have been conflicts. So I’d suggest checking hardware health in addition to the ideas about backups vs sync.
Was about to post a Hugging Face link until I finished reading. For what it’s worth, once you have Ollama installed it’s a single command to download a model and drop straight into a chat with it, whether from Ollama’s library, Hugging Face, or anywhere else. On Arch the entire process to get it working with GPU acceleration was installing two packages and starting ollama.
Orthokeratology lenses reshape your cornea overnight. Been using them for years, heartily recommend.
Key detail: they’re not dropping it because they’re giving up. The judge dismissed it without prejudice, which means that in 4 years they can pick the case back up. Under a Trump DoJ the case would likely have been dismissed with prejudice, closing it permanently.
I haven’t gone through all their work, but some of the delisted maintainers were working on driver support for Baikal, a Russia-based electronics company whose work includes semiconductors and ARM processors. Given the sanctions against Russia, especially for dual-use items like domestic semiconductors, I would expect that Linus and the other maintainers were told, or concluded, that by signing off on and merging that code they’d personally be violating sanctions.
I recently removed in-editor AI because I noticed I was building muscle memory instead of thinking: typing just enough of a snippet to get the LLM to autocomplete it, without working through the rest myself. I’m still using LLMs, particularly for languages and libraries I’m not familiar with, but through the artifact editors in ChatGPT and Claude instead.
Given how easy end-to-end encryption is to implement now, it’s a reasonable assumption that anything not E2EE is being data-mined. E2EE has extensive security benefits: even if your data is dumped in a breach, the contents are still useless to the attacker. So there has to be a compelling reason not to use it.
People haven’t really changed. As always, power corrupts. When the rewards are great enough, it seems people are often enough willing to compromise their integrity.
Key detail in the actual memo is that they’re not using just an LLM. “Wallach anticipates proposals that include novel combinations of software analysis, such as static and dynamic analysis, and large language models.”
They’re also clearly aware of scope limitations. They explicitly call out some software, like entire kernels or pointer-arithmetic-heavy code, as being out of scope, and they don’t seem to anticipate 100% automation.
So with context, they seem open to any solution to “how can we convert legacy C to Rust.” LLMs and machine learning are obviously attractive avenues of investigation: current models are demonstrably able to write some valid Rust and transliterate some code. I use them, and they work more often than not for simpler tasks.
TL;DR: they want to accelerate converting C to Rust. LLMs and machine learning are some techniques they’re investigating as components.
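One plausible way to sanity-check a machine transliteration is differential testing: run the original and the translated version on the same random inputs and require agreement. A toy sketch in Python, where both functions and their names are hypothetical stand-ins (a reference mimicking C’s strlen, and a “translated” candidate):

```python
import random

def c_style_strlen(buf: bytes) -> int:
    """Reference behavior: count bytes up to the first NUL, like C's strlen."""
    n = 0
    while n < len(buf) and buf[n] != 0:
        n += 1
    return n

def translated_strlen(buf: bytes) -> int:
    """Stand-in for a machine-translated version of the same routine."""
    return len(buf.split(b"\x00", 1)[0])

# Differential test: both versions must agree on thousands of random inputs.
rng = random.Random(42)
for _ in range(10_000):
    buf = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
    assert c_style_strlen(buf) == translated_strlen(buf)
```

This doesn’t prove equivalence, but it cheaply catches most behavioral drift, which is presumably why the memo pairs LLMs with static and dynamic analysis rather than trusting the model alone.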
I have the LTS and zen kernels installed in addition to the default Arch one; that should prevent this, yes?
What do you mean by “this stuff?” Machine learning models are a fundamental part of spam prevention, have been for years. The concept is just flipping it around for use by the individual, not the platform.
If by “reliably” you mean 99% certainty about one particular review, yeah, I wouldn’t believe it either. But a 95% confidence interval on what proportion of a given page’s reviews are bots? That’s plausible. If a human can tell whether a review was botted, you can certainly train a model to do so as well.
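To make the page-level estimate concrete: even with an imperfect classifier, you can put a confidence interval on the bot share of a page. A minimal sketch using the standard Wilson score interval (the flagged/total counts below are made-up example numbers):

```python
import math

def wilson_ci(flagged: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion: here, the share of
    a page's reviews that a classifier flags as bots. z=1.96 gives ~95%."""
    if total == 0:
        return (0.0, 1.0)
    p = flagged / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (max(0.0, center - half), min(1.0, center + half))

# e.g. a classifier flags 40 of 200 reviews on a page as likely bots
lo, hi = wilson_ci(40, 200)
print(f"estimated bot share ~20%, 95% CI ({lo:.1%}, {hi:.1%})")
```

The point is that uncertainty about individual reviews shrinks when you aggregate: the interval tightens as the sample grows, even though no single verdict is certain.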
Cool it with the universal AI hate. There are many kinds of AI, detecting fake reviews is a totally reasonable and useful case.
KDE Connect and Syncthing do the trick for most stuff. For everything else, all hail the USB-C M.2 NVMe enclosure.
Title worried me for a moment that they were dropping Steam Input; happy to see they seem intent on the opposite.
Well, this is a tremendous step in the wrong direction. The economic problem is the ad-supported model itself, no matter how it’s run. This is the same thing Google does: keep the user data to themselves and sell the ad placement. So now Mozilla has the same economic incentives as Google. Unfathomably bad move.
If you read carefully, this is actually very similar to the Steam news. I doubt Valve or GOG care, but generally the games are “sold” by the publisher as non-transferable licenses for you to play them, so the part that matters isn’t up to them.
Wonder how we should interpret the country code “XD” being on the list. As far as I can tell it’s never been used for any real country.
Funny how the DOS equivalent of ls is dir, dating from before the GUI folder metaphor.
Most if not all leading models use synthetic data extensively to do exactly this. However, the synthetic data needs to be well defined and essentially programmed by the data scientists. If you don’t define it very carefully, ideally as math or programs you can verify as correct automatically, it’s worse than useless. The scope is usually very narrow; no Hitchhiker’s Guide to the Galaxy rewrites.
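The “verify as correct automatically” part is the crux. A minimal illustration of what that looks like in practice, with made-up arithmetic training pairs where the answer is correct by construction and independently re-checked before anything enters the dataset:

```python
import random

def make_arithmetic_example(rng: random.Random) -> dict:
    """Generate one synthetic prompt/target pair whose answer is
    correct by construction."""
    a, b = rng.randint(2, 99), rng.randint(2, 99)
    op = rng.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return {"prompt": f"What is {a} {op} {b}?", "target": str(answer)}

def verify(example: dict) -> bool:
    """Re-derive the answer independently; reject anything that fails."""
    a, op, b = example["prompt"].removeprefix("What is ").rstrip("?").split()
    recomputed = {"+": int(a) + int(b), "-": int(a) - int(b), "*": int(a) * int(b)}[op]
    return str(recomputed) == example["target"]

rng = random.Random(0)
dataset = [ex for ex in (make_arithmetic_example(rng) for _ in range(1000)) if verify(ex)]
print(len(dataset), "machine-checked examples")
```

Prose quality, by contrast, has no cheap verifier, which is exactly why the narrow, checkable domains are where synthetic data shines.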
But in any case he’s probably just parroting whatever his engineers pitched him to look smart and in charge.