Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 36 Posts
  • 4.57K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • tal@lemmy.today to Ask Lemmy@lemmy.world · *Permanently Deleted*

    Depends on your definition of “early days”.

    If you go to the 1990s, when it really started to enter public awareness:

    • IPv6 wasn’t a thing.

    • HTTP wasn’t as dominant a protocol as it is today. Use of FTP, telnet, gopher, NNTP, IRC, and so forth was more common relative to the Web than it is now.

    • A lot of protocols weren’t encrypted.

    • If you were accessing the Internet via a dial-up modem (which was probably what you were doing in the 1990s if you were coming from home), you could download maybe 7 kB per second. You had maybe 100 milliseconds of latency — quite substantial compared to most modern network connections — on the first hop. This had a real impact on, say, real-time multiplayer video games.

    • Email spam was an increasing problem.

    • Personal computers were considerably more costly in real terms than they are today. Additionally, computer speed doubled about every 18 months, which meant that computers became obsolete very quickly. It tended to be wealthier people using the Internet, relative to today.

    • A higher proportion of users were technical or academic people, due to universities and technical companies being the ones connected.

    • Internationalization wasn’t great. Today, one can just generally use Unicode and write whatever language one wants wherever. Seeing Web pages displayed using the wrong text encoding wasn’t that uncommon. No emojis, either.

    • On the Web, there were lots of small, independent sites. If you want to look at some of them, the Wayback Machine at Archive.org is handy. Animated GIFs and patterned backgrounds weren’t uncommon.

    • Universities were more prominent as places to obtain free software or the like.

    • In the late '90s, for the Web, it hadn’t quite been worked out how people would actually use the thing. One school of thought was that people would adopt “portal sites” that they’d always go to when opening their Web browser. In practice, this didn’t really turn out to be what happened, but at the time, companies put a lot of effort into trying to win “portal marketshare”.

    • Many '90s computers couldn’t display 24-bit color. A computer displaying 8-bit color chose a “palette” of 256 colors, and could only show the colors in that palette at any one time. If you had an image on a Web page that contained a color that wasn’t in that palette, a “close” color was used. Eventually, the world converged on 216 “safe” colors that one could expect a computer to display, so many images didn’t contain all that many colors. Photographic images were often dithered.

    • Due in part to bandwidth limitations as well as computational limitations, video over the network was more of a novelty than a practical thing. No YouTube or equivalent. RealPlayer, a browser plugin, was one of the more-prominent ways to stream video.

    • Major personal computer OSes — MacOS and Windows 9X — were quite unstable compared to where personal computer OSes were in maybe the mid-2000s on. Web browsers were also quite unstable. Crashes were a thing.

    • Much higher expectations for data privacy. I remember when it was considered outright scandalous for software to “phone home” to just indicate, say, a version number. Today, vast amounts of software are harvesting all kinds of data, and there is software whose entire business model is based on doing so.

    • Search engines were a lot worse. Google today uses some kind of heuristics to rapidly index things like major news sites. Getting outdated links or limited coverage of the Web was a lot more common (though we didn’t have to worry about the current glut of AI-generated spam Websites).

    • Many more top-level domains have come into use since then; one saw far fewer in the 1990s. I’d say mostly .com, .net, and .org, plus the country codes.

    • Consumer broadband routers with built-in, enabled firewalls weren’t really much of a thing. It was far more common to be able to talk to arbitrary machines. A lot more stuff is firewalled off today.

    • Probably not something that the typical person would have noticed, but lots of institutions ran public SNMP on routers and made it accessible to the Internet at large. I remember mapping out entire networks for many different organizations. You could sit there, watch the traffic flow, see the size of all the network links, etc. Places started to tamp down on that as they came to see exposing that information as a security risk.

    • In the US, some users accessed the Internet via gatewayed access from commercial dial-up services that were essentially giant BBSes, places like CompuServe or America Online. These had originally been aimed more at being stand-alone services and essentially died out as people just became interested in Internet access.

    • Some large technology-oriented companies and institutions controlled huge amounts of the IPv4 address space. Apple still has a Class A network (about 1/256th of the IPv4 Internet’s addresses). Ford still does as well. But MIT and Stanford used to have their own as well.

    • Websites where one went to interact with other users, like forums, were around, but early, and far fewer people were using them.

    • Java was originally intended to be used in Web browsers in applets, something along the lines of what JavaScript is today. It didn’t succeed.

    • It wasn’t yet clear in the late 1990s that Microsoft wouldn’t “take over” the Web by providing a dominant Web browser and managing to institutionalize use of proprietary Microsoft technologies like ActiveX.



  • Notably, this and dotfiles are popular among devs using Mac, since MacOS has nearly all settings available either via config files or the defaults system from the command line. In comparison, Windows is total ass about configuring via the command line, and even Cinnamon gives me some headache by either not reloading or straight up overwriting my settings.

    The application-level format isn’t really designed for end-user consumption, but WINE uses a text representation of the Windows registry. I imagine that one could probably put that in a git repository and that there’s some way to apply it to a Windows registry. Or maybe a collection of .reg files, which are also text.
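
    Just as a sketch of that idea on the Windows side: reg.exe, which ships with Windows, can round-trip keys through the text .reg format, so a handful of exported keys could live in a git repository much like dotfiles. The key below is only an example; which keys are actually worth tracking is up to the user.

    rem Dump a key to a text .reg file that can live in a git repository
    reg export "HKCU\Control Panel\Desktop" desktop.reg /y

    rem ...and apply it again on another machine or after a reinstall
    reg import desktop.reg

    On the WINE side, the prefix already keeps the registry as plain text files (user.reg and friends), so that’s even more directly versionable.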



  • Oh, yeah, it’s not that ollama itself is opening holes (other than adding something listening on a local port), or telling people to do that. I’m not saying that the ollama team is explicitly promoting bad practices. I’m just saying that I’d guess that there are a number of people who are doing things like fully exposing or port-forwarding to ollama or whatever because they want to be using the parallel compute hardware on their computer remotely. The easiest way to do that is to just expose ollama without setting up some kind of authentication mechanism, so…it’s gonna happen.

    I remember someone on here who had their phone and desktop set up so that they couldn’t reach each other by default. They were fine with that, but they really wanted their phone to be able to access the LLM on their computer, and I was helping walk them through it. It was hard and confusing for them — they didn’t really have a background in the stuff, but badly wanted the functionality. In their case, they just wanted local access, while the phone was on their home WiFi network. But…I can say pretty confidently that there are people who want to be able to reach the thing remotely, all the time.
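
    For the remote-access case, the simplest thing I know of that doesn’t involve exposing the port to the world is an SSH local forward. A sketch, assuming ollama is on its default port (11434), with user@home-desktop standing in for the real account and machine:

    # Forward local port 11434 over SSH to the ollama instance on the home machine,
    # rather than exposing that port to the Internet at large
    $ ssh -N -L 11434:127.0.0.1:11434 user@home-desktop

    # ...then point the client on this end at http://127.0.0.1:11434

    Some mobile SSH clients can set up the same sort of forward, though walking someone unfamiliar with the stuff through it is still a chore.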



  • The incident began in June 2025. Multiple independent security researchers have assessed that the threat actor is likely a Chinese state-sponsored group, which would explain the highly selective targeting observed during the campaign.

    I do kind of wonder about the emacs package management infrastructure. Like, whether attacking things that text editors pull in online is an actively-used vector.


  • I mean, the article is talking about providing public inbound access, rather than having the software go outbound.

    I suspect that in some cases, people just aren’t aware that they are providing access to the world, and it’s unintentional. Or maybe they just don’t know how to set up a VPN or SSH tunnel or some kind of authenticated reverse proxy or something like that, and want to provide public access for remote use from, say, a phone or laptop or something, which is a legit use case.
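
    On Linux, a quick way to check which of those situations someone is in is to look at what address the thing is bound to. This assumes ollama’s default port of 11434; if I recall correctly, the OLLAMA_HOST environment variable is what controls the listen address:

    # A 127.0.0.1:11434 listener is reachable only from the machine itself;
    # 0.0.0.0 or [::] means it’s listening on every interface
    $ ss -tlnp | grep 11434

    # Explicitly bind to loopback only (assuming I have the variable right)
    $ OLLAMA_HOST=127.0.0.1:11434 ollama serve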

    ollama targets being easy to set up. I do kinda think that there’s an argument that maybe it should try to facilitate configuration for that setup, even though it expands the scope of what they’re doing, since I figure that there are probably a lot of people setting these up who don’t have much, say, networking familiarity and just want to play with local LLMs.

    EDIT: I do kind of think that there’s a good argument that the consumer router situation plus personal firewall situation is kind of not good today. Like, “I want to have a computer at my house that I want to access remotely via some secure, authenticated mechanism without dicking it up via misconfiguration” is something that people understandably want to do and should be more straightforward.

    I mean, we did it with Bluetooth: we came up with a consumer-friendly way to establish secure communication over insecure airwaves. We don’t really have that for accessing hardware remotely via the Internet.




  • (10^100) + 1 − (10^100) is 1, not 0.

    A “computer algebra system” would have accomplished a similar goal, but been much slower and much more complicated.

    $ maxima -q
    
    (%i1) (10^100)+1-(10^100);
    
    (%o1)                                  1
    (%i2) 
    

    There’s no perceptible delay on my laptop here, and I use maxima on my phone and my computers. And a CAS gives you a lot more power to do other things.
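
    For comparison, the way one gets 0 out of this is ordinary floating point: in double precision, 10^100 + 1 rounds to exactly 10^100, so the difference vanishes. Anything doing exact integer arithmetic gets it right; python3 here is just a handy way to show the two behaviors side by side:

    $ python3 -c 'print((10**100) + 1 - (10**100))'
    1
    $ python3 -c 'print((1e100 + 1) - 1e100)'
    0.0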





  • First, the Linux kernel doesn’t support resource forks at all. They aren’t part of POSIX nor do they really fit the unix file philosophy.

    The resource fork isn’t gonna be meaningful to essentially any Linux software, but there have been ways to access filesystems that do have resource forks. IIRC, there was some client to mount some Apple file server protocol that exposed the resource fork as a file with a different name and the data fork as just a regular file.

    https://www.kernel.org/doc/html/latest/filesystems/hfsplus.html

    Linux does support HFS+, which has resource forks, via the hfsplus driver, so I imagine that it provides access to them one way or another.

    searches

    https://superuser.com/questions/363602/how-to-access-resource-fork-of-hfs-filesystem-on-linux

    Add /..namedfork/rsrc to the end of the file name to access the resource fork.
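
    I haven’t tried that against a Linux hfsplus mount myself, but assuming the naming convention in that answer applies, peeking at a resource fork would look something like this (the filename is just an example):

    # Dump the first few lines of the resource fork of "Some File"
    $ xxd 'Some File/..namedfork/rsrc' | head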

    Also, pretty esoteric, but NTFS, the current Windows filesystem, also has an equivalent of resource forks, though it’s not typically used.

    searches

    Ah, the WP article that OP, @[email protected], linked to describes it.

    The Windows NT NTFS can support forks (and so can be a file server for Mac files), the native feature providing that support is called an alternate data stream. Windows operating system features (such as the standard Summary tab in the Properties page for non-Office files) and Windows applications use them and Microsoft was developing a next-generation file system that has this sort of feature as basis.
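
    To make the “alternate data stream” bit concrete: on an NTFS volume, a stream is addressed as filename:streamname, so from cmd.exe something like the following works. The file and stream names here are made up for the example.

    rem Write a second stream attached to report.txt
    echo some hidden text > report.txt:notes

    rem Read it back (type doesn't accept stream syntax, but input redirection does)
    more < report.txt:notes

    rem List the streams attached to files in the current directory
    dir /r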