Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 29 Posts
  • 4.24K Comments
Joined 2 years ago
Cake day: October 4, 2023


  • tal@lemmy.today to Ask Lemmy@lemmy.world · *Permanently Deleted* · 21 hours ago

    They’re generally interoperable, so it’s not a huge deal to use one or the other, though there’s presently more native client support for Lemmy. I use Interstellar on Android when I want a native client for PieFed.

    I kind of think that it’s not a terrible idea to have an account on two different home instances anyway, just so that if one goes down for a while for some reason, you can still use the other to post with. With Reddit, if the thing was down, that was it (though to be fair, Reddit reliability has been much better in recent years than it was in the early years, when there were some pretty bad stretches).


  • Bonus: people should stop being lazy and learn to set up a server infrastructure instead of using “the cloud”. Your data are safer, you save money, and you give less power to gargantuan cloud companies.

    If change happens here, I’m pretty sure that it’s going to be in the form of some sort of zero-administration standardized server module that a company sells that has no phone-home capability and that you stick on your local network.

    Society probably isn’t going to make everyone a system and network administrator, in much the same way that it’s not going to make everyone a medical doctor or an arborist. Would be expensive to provide everyone with that skillset.


  • Like you, I tend to feel that in general, people need to stop trying to force people to live the way they think is best. Unless there is a very real, very serious impact on others (“I enjoy driving through town while firing a machine gun randomly out my car windows”), people should be permitted to choose how to live as far as possible. Flip side is that they gotta accept potential negative consequences of doing so.

    Obviously, there’s gonna be some line to draw on what constitutes “seriously affecting others”, and there are going to be different people who have different positions on where that line should be. Does maybe spreading disease because you’re not wearing a facemask during a pandemic count? What about others breathing sidestream smoke from a cigarette smoker in a restaurant? But I tend towards a position that society should generally be less-restrictive on what people do as long as the harm is to themselves.

    However.

    I would also point out that in some areas, this comes up because someone is receiving some form of aid. Take food stamps. Those are designed to make it easy to obtain food, but hard to obtain alcohol. In that case, the aid is being provided by someone else. I think that it’s reasonable for those other people to say “I am willing to buy you food, but I don’t want to fund your alcohol habit. I should have the ability to make that decision.” That is, they chose to provide food aid because food is a necessity, but alcohol isn’t.

    I think that there’s a qualitative difference between saying “I don’t want to pay to buy someone else alcohol” and “I want to pass a law prohibiting someone from consuming alcohol that they’ve bought themselves.”


  • I am very much on Team PC for video games, but the fact that consoles are a closed, locked-down system — something I typically think of as a drawback — can be a real strength for some game applications.

    If you want to play a competitive multiplayer video game on a level footing, you don’t want people modifying the software on their system to give them an advantage. There are all sorts of companies with intrusive anticheat software on the PC doing a half-assed job of trying to make an open system work like a closed one. The console guys have more-or-less solved this.

    And then there’s the hardware aspect. There is an entire industry on the PC selling “gamer” hardware that aims to give a player some degree of an edge. Higher resolution monitors with faster refresh rates driven by rendering hardware that can render more frames. Mice that report their position more-frequently. Hardware with extra buttons to invoke macros. A lot of that industry is built around figuring out ways to inject pay-to-win into competitive multiplayer video games.

    I’m pretty sure that the great majority of video game players do not really want pay-to-win in the competitive multiplayer video games that they play. Consoles simply do a much better job there.

    Now, if you take competitive multiplayer out of the mix, then suddenly the open hardware and software situation on the PC becomes an advantage. You can mod games to add features and content and provide a more-immersive experience. It means that I can play all sorts of older games and have an experience that improves over time when doing so.

    But a lot of people do want to play competitive multiplayer games, and unless something major changes, that’s a major gaming area where consoles are simply better-suited than PCs.

    Two ways that it might change:

    • If single player gaming displaces competitive multiplayer. My guess is that single player games with sophisticated video game AI will tend to increasingly encroach on that, though not overnight. Multiplayer saw one huge boost in the past two decades or so, which was widespread, high-bandwidth low-latency network access. But I think that that’s probably a one-off. I can’t think of any huge future multiplayer-specific improvements like that that will come along. And I can imagine a lot of future improvements to video game AI.

    • If PCs get some sort of locked-down trusted computing environment, probably with its own memory and processor, that runs alongside the open environment. Basically, part of a console in a PC.

    But absent one of those, I think that there are going to be gaming areas where the console excels that the PC does not.


  • A major part of that is, I think, that desktop OSes are, by default, insecure against local software. Like, you install a program on the system, and it immediately has access to all of your data.

    That wasn’t an unreasonable model in the era when computers weren’t all persistently connected to a network, but now, all it takes is someone getting one piece of malware on the computer, and it’s trivial to exfiltrate all your data. Yes, there are technologies that let you stick software in a sandbox on desktop OSes, but it’s hard and requires technical knowledge. It’s not a general solution for everyone.
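    To make that concrete, here’s a minimal sketch of what desktop sandboxing looks like today, using Python to drive bubblewrap (the bwrap tool that Flatpak builds on). The specific paths and the scratch home directory are illustrative assumptions, not a recommended setup:

    ```python
    # Sketch: run a program inside a bubblewrap sandbox with a throwaway home
    # directory and no network access. Assumes the "bubblewrap" package is
    # installed; paths below are illustrative.
    import subprocess

    def run_sandboxed(command: list[str], scratch_home: str) -> int:
        """Run `command` with read-only system dirs, a private home, and no network."""
        bwrap_args = [
            "bwrap",
            "--ro-bind", "/usr", "/usr",           # system binaries/libs, read-only
            "--symlink", "usr/bin", "/bin",
            "--symlink", "usr/lib", "/lib",
            "--proc", "/proc",                     # fresh /proc for the sandbox
            "--dev", "/dev",                       # minimal /dev
            "--bind", scratch_home, "/home/user",  # writable but isolated home
            "--unshare-net",                       # no network access at all
            "--setenv", "HOME", "/home/user",
            "--",
        ]
        return subprocess.run(bwrap_args + command).returncode

    # e.g. run_sandboxed(["bash"], "/tmp/sandbox-home") gives you a shell that
    # can't read your real $HOME or reach the network.
    ```

    Every one of those flags is a decision someone has to understand and get right, which is exactly why this isn’t a general solution for everyone.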

    Mobile OSes are better about this in that they have a concept of limiting access that an app has to only some data, but it’s still got a lot of problems; I think that a lot of software shouldn’t have network access at all, some information shouldn’t be readily available, and there should be defense-in-depth, so that a single failure doesn’t compromise everything. I really don’t think that we’ve “solved” this yet, even on mobile OSes.


  • I can believe that LLMs might wind up being a technical dead end (or not; I could also imagine them being a component of a larger system). My own guess is that language, while important to thinking, won’t be the base unit of how thought is processed the way it is in current LLMs.

    Ditto for diffusion models used to generate images today.

    I can also believe that there might be surges and declines in funding. We’ve seen that in the past.

    But I am very confident that AI is not, over the long term, going to go away. I will confidently state that we will see systems that will use machine learning to increasingly perform human-like tasks over time.

    And I’ll say with lower, though still pretty high confidence, that the computation done by future AI will very probably be done on hardware oriented towards parallel processing. It might not look like the parallel hardware today. Maybe we find that we can deal with a lot more sparseness and dedicated subsystems that individually require less storage. Yes, neural nets approximate something that happens in the human brain, and our current systems use neural nets. But the human brain runs at something like a 90 Hz clock and definitely has specialized subsystems, so it’s a substantially-different system from something like Nvidia’s parallel compute hardware today (1,590,000,000 Hz and homogeneous hardware).
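    Just to put a number on that gap, using the two clock figures above (a loose comparison, since the brain isn’t literally clocked):

    ```python
    # Rough scale comparison of the two clock rates cited above.
    gpu_clock_hz = 1_590_000_000  # ~1.59 GHz GPU clock
    brain_hz = 90                 # rough ~90 Hz figure for the brain
    print(f"{gpu_clock_hz / brain_hz:,.0f}x")  # ≈ 17,666,667x
    ```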

    I think that the only real scenario where we have something that puts the kibosh on AI is if we reach a consensus that superintelligent AI is an unsolvable existential threat (and I think that we’re likely to still go as far as we can on limited forms of AI while still trying to maintain enough of a buffer to not fall into the abyss).

    EDIT: That being said, it may very well be that future AI won’t be called AI, and that we think of it differently, not as some kind of special category based around a set of specific technologies. For example, OCR (optical character recognition) software and speech recognition software both typically make use of machine learning — those are established, general-use product categories that get used every day — but we typically don’t call them “AI” in popular use in 2025. When I call my credit card company, say, and navigate a menu system that uses speech recognition, I don’t say that I’m “using AI”.

    Same sort of way that we don’t call semi trucks or sports cars “horseless carriages” in 2025, though they derive from devices that were once called that. We don’t use the term “labor-saving device” any more — I think of a dishwasher or a vacuum cleaner as distinct devices, not as members of one category. But back when they were being invented, the idea of household machines that could automate human work using electricity did fall into a sort of bin like that.


  • Women’s clothes tend to be more prone to vanity sizing than men’s.

    Vanity sizing, or size inflation, is the phenomenon of ready-to-wear clothing of the same nominal size becoming bigger in physical size over time.

    Vanity sizing is a common fashion industry practice that involves labeling clothes with smaller nominal sizes than their actual measurements would indicate. Experts believe that this practice targets consumers’ preferences and perceptions.


  • The malware continuously monitors its access to GitHub (for exfiltration) and npm (for propagation). If an infected system loses access to both channels simultaneously, it triggers immediate data destruction on the compromised machine. On Windows, it attempts to delete all user files and overwrite disk sectors. On Unix systems, it uses shred to overwrite files before deletion, making recovery nearly impossible.

    shred is intended to overwrite the actual on-disk contents by overwriting data in the file prior to unlinking the files. However, shred isn’t as effective on journalled filesystems, because writing in this fashion doesn’t overwrite the contents on-disk like this. Normally, ext3, ext4, and btrfs are journalled. Most people are not running ext2 in 2025, save maybe on their /boot partition, if they have that as a separate partition.