• Toribor@corndog.social

      I’d argue with this, but it seems like image and video file extensions have become a lawless zone with no rules so I don’t even think they count.

        • fibojoly@sh.itjust.works

          Back in the day, when bandwidth was precious and porn sites would parcel a video into 10 second extracts, one per page, you could zip a bunch of these mpeg files together into an uncompressed zip, then rename it .mpeg and read it in VLC as a single video. Amazing stuff.

        • Natanael@slrpnk.net

          Tar.gz is pretty much like zip. Technically tar mimics a file system more closely but like who makes use of that?

          • AVincentInSpace@pawb.social

            Tar mimics a filesystem more closely? Tf???

            TAR stands for Tape ARchive. It’s called that because it’s designed to be written to (and read from) non-seekable magnetic tape, meaning it’s written linearly. The metadata for each file (name, mtime etc.) immediately precedes its contents. There is no global table of contents like you’d find on an actual filesystem. In fact, most implementations of tar don’t even put the separate files on gzip boundaries meaning you can’t decompress any given file without decompressing all of the files before it. With a tape backup system, you don’t care, but with a filesystem you absolutely do.

            PKZIP mimics the traditional filesystem structure much more closely. The table of contents is at the end instead of the beginning, which is a bit strange as filesystems go, but it is a table of contents consisting of a list of filenames and offsets into the file where they can be found. Each file in a zip archive is compressed separately, meaning you can pull out any given file from a ZIP archive without any prior state, and you can even use different compression algorithms on a per-file basis (few programs make use of this). For obvious reasons, the ZIP format prioritizes storage space over modification speed (the table of contents is a single centralized list and files must be contiguous), meaning if you tried to use it as a filesystem it would utterly suck – but you can very readily find software that will let you read, edit, and delete files in-place as though it were a folder without rewriting the entire archive. That’s not really possible with a .tar file.

            You could make the argument that tar is able to more closely mimic a POSIX filesystem since it captures the UNIX permission bits and ZIP doesn’t (ustar was designed for UNIX and pkzip was designed for DOS) but that’s not a great metric.
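            A quick stdlib-only sketch of the difference: tar in streaming mode only lets you walk members in order (as a tape drive would), while zip's table of contents gives direct access to any member:

```python
import io
import tarfile
import zipfile

files = {"a.txt": b"alpha", "b.txt": b"bravo", "c.txt": b"charlie"}

# tar: members are written linearly, header-then-data, like a tape.
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tf:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

# Streaming mode ("r|") forbids seeking: members can only be visited
# in the order they were written.
tar_buf.seek(0)
with tarfile.open(fileobj=tar_buf, mode="r|") as tf:
    names_in_order = [m.name for m in tf]

# zip: the central directory at the end maps names to offsets, so any
# member can be pulled out directly without touching the others.
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for name, data in files.items():
        zf.writestr(name, data)

with zipfile.ZipFile(zip_buf) as zf:
    charlie = zf.read("c.txt")  # random access via the table of contents
```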

      • AVincentInSpace@pawb.social

        See, ZIP files are strange because unlike most other archive formats, they put the “header” and table of contents at the end, and all of the members (files within the zip file) are listed in that table of contents as offsets relative to the start of the file. There’s nothing that says that the first member has to begin at the start of the file, or that they have to be contiguous. This means you can concatenate an arbitrary amount of data at the beginning of a ZIP file (such as an exe that opens its argv[0] as a zip file and extracts it) and it will still be valid. (Fun fact! You can also concatenate up to 64KiB at the end and it will still be valid, after you do some finagling. This means that when a program opens a ZIP file it has to search through the last 64KiB to find the “header” with the table of contents. This is why writing a ZIP parser is really annoying.)

        As long as whatever’s parsing the .exe doesn’t look past the end of its data, and whatever’s parsing the .zip doesn’t look past the beginning of its data, both can go about their business blissfully unaware of the other’s existence. Of course, there’s no real reason to concatenate an executable with a zip file that wouldn’t access the zip file, but you get the idea.

        A common way to package software is to make a self-extracting zip archive in this manner. This is absolutely NOT to say that all .exe files are self extracting .zip archives.
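        A minimal sketch of the prepending trick with Python's zipfile (the "executable stub" here is just placeholder bytes, not a real .exe):

```python
import io
import zipfile

# Build a small zip in memory.
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    zf.writestr("readme.txt", "hello from inside the zip")

# Glue an arbitrary stand-in "executable stub" in front of it.
combined = b"MZ\x90\x00 pretend this is an .exe " + zip_buf.getvalue()
with open("selfextract.bin", "wb") as f:
    f.write(combined)

# Readers locate the end-of-central-directory record by scanning back from
# the end of the file, so the prepended data is simply ignored.
with zipfile.ZipFile("selfextract.bin") as zf:
    text = zf.read("readme.txt").decode()
```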

      • Appoxo@lemmy.dbzer0.com

        No. But the Microsoft Office formats are.
        You can rename a .docx to .zip and extract it.
        Don’t know how it is with ppt/x and xls/x
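        The rename trick is easy to sketch with Python's zipfile module; the .docx built here is a minimal stand-in, but a real one saved from Word reads the same way:

```python
import zipfile

# Build a minimal stand-in .docx (a real one from Word has the same shape:
# a zip containing [Content_Types].xml, word/document.xml, etc.).
with zipfile.ZipFile("demo.docx", "w") as zf:
    zf.writestr("[Content_Types].xml", "<Types/>")
    zf.writestr("word/document.xml", "<w:document/>")

# No rename actually needed: zipfile goes by content, not extension.
with zipfile.ZipFile("demo.docx") as zf:
    names = zf.namelist()
```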

        • MonkderZweite@feddit.ch

          xls & co. (the older ones) are a custom binary format. Only after standardization as OOXML (a shitshow btw, there’s a lengthy wiki article about it) did they become zip-based.

          • Appoxo@lemmy.dbzer0.com

            The whole Word and Libre/OO-Writer world is a shit show.
            So complex, and everyone decides to interpret it a bit differently.
            Not even Libre and OO are interoperable on the same file and feature.

      • unalivejoy@lemm.ee

        Just because you can open it with 7-zip doesn’t mean it’s a zip file. Some exes are also zip files.

  • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one

    Ah, good ol’ Microsoft Office. I’ve taken advantage of their documents being renamed .zip files to send forbidden attachments to myself via email lol

    On the flip side, there’s stuff like the Audacity app, that saves each audio project as an SQLite database 😳

    • Hexagon@feddit.it

      Also .jar files. And good ol’ winamp skins. And CBZ comics. And EPUB books. And Mozilla extensions. And APK apps. And…

          • neo (he/him)@lemmy.comfysnug.space

            A lot of the time I handle it manually, since I try to pack things in “volumes” that most closely mimic physical releases, and writing the code to get that information would be slower than just looking it up manually.

            So, for example, the first volume of Bleach has 7 chapters, so I’d pack those 7 chapters together into one cbz, the second volume in another cbz, etc.

    • beeb@lemm.ee

      an SQLite database

      Genius! Why bother importing and exporting

          • xigoi@lemmy.sdf.org

            I prefer games that embrace the difference from Minecraft instead of trying to emulate it. My favorite is MeseCraft.

        • AVincentInSpace@pawb.social

          Scrap Mechanic (a sandbox game that’s basically Space Engineers on the ground – or, more loosely, Minecraft but with physics and you can build cars) also uses SQLite to save worlds. It also uses uncompressed JSON files to store user creations.

    • mogoh@lemmy.ml

      that saves each audio project as an SQLite database 😳

      Is this a problem? I thought this would be a normal use case for SQLite.

      • fiah@discuss.tchncs.de

        doesn’t sqlite explicitly encourage this? I recall claims about storing blobs in a sqlite db having better performance than trying to do your own file operations

        • MNByChoice@midwest.social

          Thanks for the hint. I had to look that up. (The linked page is worth a read and has lots of details and caveats.)

          The scope is narrow, and well documented. Be very wary of over generalizing.

          The measurements in this article were made during the week of 2017-06-05 using a version of SQLite in between 3.19.2 and 3.20.0. You may expect future versions of SQLite to perform even better.

          https://www.sqlite.org/fasterthanfs.html

          SQLite reads and writes small blobs (for example, thumbnail images) 35% faster¹ than the same blobs can be read from or written to individual files on disk using fread() or fwrite().

          Furthermore, a single SQLite database holding 10-kilobyte blobs uses about 20% less disk space than storing the blobs in individual files.
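          The pattern those numbers describe is easy to sketch with Python's built-in sqlite3 module (the table name and blob bytes here are made up):

```python
import sqlite3

# Store small blobs (e.g. thumbnails) in one database file instead of
# thousands of tiny files on disk.
con = sqlite3.connect("thumbs.db")
con.execute("CREATE TABLE IF NOT EXISTS thumbs (name TEXT PRIMARY KEY, data BLOB)")

thumb = b"\x89PNG..."  # hypothetical image bytes
con.execute("INSERT OR REPLACE INTO thumbs VALUES (?, ?)", ("cat.png", thumb))
con.commit()

# Read a single blob back without touching any of the others.
(restored,) = con.execute(
    "SELECT data FROM thumbs WHERE name = ?", ("cat.png",)
).fetchone()
con.close()
```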

          Edit 5: consolidated my edits.

    • dan@upvote.au

      SQLite explicitly encourages using it as an on-disk binary format. The format is well-documented and well-supported, backwards compatible (there’s been no major version changes since 2004), and the developers have promised to support it at least until the year 2050. It has quick seek times if your data is properly indexed, the SQLite library is distributed as a single C file that you can embed directly into your app, and it’s probably the most tested library in the world, with something like 500x more test code than library code.

      Unless you’re a developer that really understands the intricacies of designing a binary data storage format, it’s usually far better to just use SQLite.
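      As a sketch of what “just use SQLite” can look like for a save format (the schema here is hypothetical, loosely inspired by the Audacity example upthread):

```python
import sqlite3

# Hypothetical project-file schema: one key/value metadata table plus
# sample data split into indexed chunks, all inside a single .proj file.
con = sqlite3.connect("song.proj")
con.executescript("""
    CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT);
    CREATE TABLE IF NOT EXISTS chunks (
        track INTEGER, seq INTEGER, samples BLOB,
        PRIMARY KEY (track, seq)
    );
""")
con.execute("INSERT OR REPLACE INTO meta VALUES ('sample_rate', '44100')")
for seq in range(3):
    con.execute("INSERT OR REPLACE INTO chunks VALUES (0, ?, ?)",
                (seq, bytes([seq]) * 4))
con.commit()

# Indexed seek: fetch one chunk without reading the rest of the file.
(chunk,) = con.execute(
    "SELECT samples FROM chunks WHERE track = 0 AND seq = 1"
).fetchone()
con.close()
```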

      • H4mi@lemm.ee

        I don’t even use a file system on my storage drives. I just write the file contents raw and try to memorize where.

          • 257m@lemmy.ml

            Sounds inefficient. You can only store 8 gigs, and it goes away when you shut off your computer? I just put it on punch cards and feed them into my machine.

      • dan@upvote.au

        Linux mostly doesn’t use file extensions… It relies on “magic bytes” in the file.

        Same with the web in general - it relies purely on MIME type (e.g. text/html for HTML files) and doesn’t care about extensions at all.
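        A toy sniffer illustrates the idea (the signature table is a small hand-picked subset; real detection uses libmagic's much larger database):

```python
# A few well-known signatures ("magic bytes") at the start of a file.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"PK\x03\x04": "application/zip",  # also .docx, .jar, .apk, ...
    b"\x1f\x8b": "application/gzip",
}

def sniff(data: bytes) -> str:
    """Guess a MIME type from leading bytes, ignoring any file extension."""
    for magic, mime in MAGIC.items():
        if data.startswith(magic):
            return mime
    return "application/octet-stream"

print(sniff(b"PK\x03\x04rest of a zip..."))  # → application/zip
```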

        • fibojoly@sh.itjust.works

          “Magic bytes”? We just called them headers, back in my day (even if sometimes they are at the end of the file)

          • dan@upvote.au

            The library that handles it is literally called “libmagic”. I’d guess the phrase “magic bytes” comes from the programming concept of a magic number?

            • fibojoly@sh.itjust.works

              I did not know about that one! It makes sense though, because a lot of headers would start with, well yeah, “magic numbers”.

  • observantTrapezium@lemmy.ca

    Nothing wrong with that… Most people don’t need to reinvent the wheel, and choosing a filename extension meaningful to the particular use case is better than leaving it as .zip or .db or whatever.

    • CoderKat@lemm.ee

      Totally depends on what the use case is. The biggest problem is that you basically always have to compress and uncompress the file when transferring it. It makes for a good storage format, but a bad format for passing around in ways that need to be constantly read and written.

      Plus often we’re talking plain text files being zipped and those plain text formats need to be parsed as well. I’ve written code for systems where we had to do annoying migrations because the serialized format is just so inefficient that it adds up eventually.

      • dan@upvote.au

        They both have their use cases. Zstandard is for compression of a stream of data (or a single file), while 7-Zip is actually two parts: A directory structure (like tar) plus a compression algorithm (like LZMA which it uses by default) in a single app.

        7-Zip is actually adding zstd support: https://sourceforge.net/p/sevenzip/feature-requests/1580/

        • TechieDamien@lemmy.ml

          Well when using zstd, you tar first, something like tar -I zstd -cf my_tar.tar.zst my_files/*. You almost never call zstd directly and always use some kind of wrapper.

          • dan@upvote.au

            Sure, you can tar first. That has various issues though, for example if you just want to extract one file in the middle of the archive, it still needs to decompress everything up to that point. Something like 7-Zip is more sophisticated in terms of how it indexes files in the archive, so I’m looking forward to them adding zstd support.

            FWIW most of my uses of zstd don’t involve tar, but it’s in things like Borgbackup, database systems, etc.

        • Gamma@programming.dev

          Thought I’d check on the Linux source tree tar. zstd -19 vs lzma -9:

          ❯ ls -lh
          total 1,6G
          -rw-r--r-- 1 pmo pmo 1,4G Sep 13 22:16 linux-6.6-rc1.tar
          -rw-r--r-- 1 pmo pmo 128M Sep 13 22:16 linux-6.6-rc1.tar.lzma
          -rw-r--r-- 1 pmo pmo 138M Sep 13 22:16 linux-6.6-rc1.tar.zst
          

          About +8% compared to lzma. Decompression time though:

          zstd -d -k -T0 *.zst  0,68s user 0,46s system 162% cpu 0,700 total
          lzma -d -k -T0 *.lzma  4,75s user 0,51s system 99% cpu 5,274 total
          

          Yeah, I’m going with zstd all the way.

        • gamer@lemm.ee

          As always, you gotta know both so that you can pick the right tool for the job.