I somehow didn’t think a regular JIT solution might be applicable here, but it is. Thank you! There seem to be a number of projects doing JIT for C++; I’ll look at them.
So far I’ve been following recommendations from this person: https://old.reddit.com/r/NewMaxx/comments/16xhbi5/ssd_guides_resources_ssd_help_post_your_questions/
Try dmraid; it was designed to take over the various on-disk formats used by hardware RAID cards.
The kernel is not a monolithic application, and you cannot develop it like one. There are tons of actors: independent developers, small support companies (like Collabora), corporations, all with different priorities. There are a large number of independent forks (e.g. for obscure devices) that will never be merged but still need to pull in, for example, security patches from the mainline. A single project-management tool won’t do, certainly not your typical business-grade tracking-and-reporting tool.
CI is already there. Not a central one; it is, again, distributed across different organizations. Different organizations have different needs for CI, e.g. supporting the weird architectures they need to develop against.
There is a reason Torvalds created git—existing tools just wouldn’t work. There might be a place for a similar revolution regarding a bugtracker…
This plea for help is specifically for non-coding, but still deeply technical work.
The thread is an attempt to merge a new file system, bcachefs. This is a large change requiring a lot of review from experienced developers, and getting anyone to do that work has turned out to be difficult. Darrick here started talking about how, in general, all file system development in Linux is hampered by a lack of manpower.
I guess the best start would be to have someone organize the volunteers.
Another idea that just occurred to me: use position: absolute; on both the real content and the gibberish content, with the same top, left, width, and height, so that they overlap and occupy the same spot on the page. Make sure both elements have no background, so the page background stays visible. Put the gibberish content in the DOM before the real content. (I think that will ensure the gibberish renders behind the real content even without setting a z-index.) Then have JS set the text color of the gibberish element to the same color as the background, so humans can’t see it.
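A minimal sketch of what that could look like, driving everything from script. The element IDs and the fixed dimensions are assumptions for illustration, not anything from an actual page:

```typescript
// Hypothetical markup: <div id="gibberish">…</div> placed in the DOM
// *before* <div id="real-content">…</div>, so with no z-index set the
// decoy paints behind the real content.
const real = document.getElementById("real-content") as HTMLElement;
const decoy = document.getElementById("gibberish") as HTMLElement;

// Stack both elements in the same spot, with no background of their own,
// so the page background stays visible through them.
for (const el of [real, decoy]) {
  el.style.position = "absolute";
  el.style.top = "0";
  el.style.left = "0";
  el.style.width = "40rem";
  el.style.height = "30rem";
  el.style.background = "none";
}

// Match the decoy’s text color to the page background so humans can’t
// see it; scrapers reading the DOM still pick up its text.
decoy.style.color = getComputedStyle(document.body).backgroundColor;
```

Doing the color match from JS rather than in the stylesheet also means a scraper that only parses the HTML and CSS gets no static hint that the gibberish is invisible.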
Be aware that these techniques can affect accessibility for people using screen readers.
lemmy.ml is hosted in the EU, and lemmynsfw.com uses Cloudflare, which operates in the EU. Worst case, issue a GDPR request to both.
Yep, thank you, that’s pretty close to what I imagined!
I do not have notes from that time anymore, sorry. I do recall, though, that after following a chain of citations I ended up at the paper at the center of this controversy. Nobody sane would cite it now except to point out its flaws, but if a modern paper cites a 10-year-old paper that cites a 30-year-old paper that cites it, people usually won’t notice.
From my experience, despite all the citogenesis described in other comments here, Wikipedia citations are still better vetted than those in many, many scientific papers, let alone regular journalism :/ I recall spending days following citation links in already well-cited papers basically to debunk basic statements in the field.
A lack of planning on your part doesn’t constitute an emergency on mine.
Though I kind of think Japanese grammar cannot express this thought, and the closest you can get is Ganbatte!
Good question! I quickly found this table, though it only has yearly statistics: https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=3510019201
Yep, it’s in the EU. File transfer shouldn’t be bad if your files are large, though it’s best to test it first; it might depend on your ISP’s peering and your preferred transfer protocols/tooling. As for whether it’s reputable for your purpose, you’ll probably have to do your own research. Also, remember that the offer I mentioned would only be equivalent in durability to a single-box RAID5, so for your purposes not exactly equivalent to Google’s.
There’s Jottacloud with unlimited storage for 10 EUR/month, but they gradually slow down after the first 5 TB, so 30 TB might be a bit too much. There’s Hetzner with their dedicated 4×10 TB machines for ~52 EUR/month; you could do RAID5 and get a somewhat redundant 30 TB (one of the four disks’ worth of capacity goes to parity), at the cost of self-managing a dedicated machine. There are several providers doing regular S3 (which you can take advantage of with tools like rclone) with decent redundancy for 4-5 USD/TB + egress; at that rate, 30 TB alone runs 120-150 USD/month. For high-value data you should probably be spending more than 100 USD/month for 30 TB in the cloud, or invest in actual hardware. Do you need hot access to this dataset, or is a cold storage archive enough?
Will they keep the dense email list view as an option? Seeing more than the 14 messages visible in the screenshot in the post is useful when sorting out large folders.
I’m surprised federation isn’t based on asymmetric cryptography. Let public/private key pairs identify instances, as opposed to domains, which risk being blocked by governments or bought by malicious third parties if the instance owner forgets to renew them. With that, implementing a change of domain name would be simple; something like the sketch below.
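A minimal sketch of what key-based identity could look like, assuming Node.js and Ed25519. The announcement format and the domain name are made up for illustration; this is not how any existing federation protocol works:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The keypair, not the domain, is the instance’s permanent identity.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The instance publishes a signed announcement binding its current domain
// to its key. Moving to a new domain is just a fresh announcement signed
// with the same key.
const announcement = Buffer.from(
  JSON.stringify({ domain: "new-home.example", issuedAt: Date.now() }),
);
const signature = sign(null, announcement, privateKey);

// Peers that pinned the instance’s public key can verify the move even
// though the domain itself changed hands.
console.log(verify(null, announcement, publicKey, signature)); // true
```

Peers would then treat a domain as nothing more than the key’s current mailing address, so a blocked or lost domain wouldn’t destroy the instance’s identity or its federation relationships.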
At my last job, we started mixing bits of Kotlin into an otherwise mostly-Java monolithic Spring-based service. Good experience.
I’d probably be fine with hundreds or thousands of these hanging around in memory. I suspect the generated code for a single query would be in the hundreds of kilobytes, maybe a megabyte. But yeah, this is one of those technical details I’d worry about.
Not sure how an HTTP server would solve the CPU bottleneck of scanning terabytes of data per query?