Over time, Lemmy instances are going to keep acquiring more and more data. Even in the best case, where they are not caching remote content and are only storing what gets posted to communities local to the server, storage requirements will still grow virtually without limit. Eventually it may no longer be economically feasible to keep expanding a server’s storage. What happens at that point? Will servers begin to periodically purge old content? I worry that there will be a permanent horizon beyond which old, and still very useful, data ceases to exist, and that as Lemmy becomes more popular, the rate of growth in storage requirements will increase, bringing that horizon ever closer. Is there any plan to archive this old data?

  • ubergeek77

    Pictrs 0.4 recently added support for object storage. This is fantastic, because object storage is dirt cheap compared to traditional block storage (like a VM filesystem). This helps a lot for image storage, which is a large part of the problem, but it’s not the whole problem.
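
    For anyone wiring this up, here’s a minimal sketch of what the object storage settings look like, using pict-rs’s environment-variable style of configuration. The variable names below are my recollection of the 0.4 docs, so treat them as assumptions and verify against the release you’re actually running:

    ```sh
    # Sketch of pict-rs 0.4 object storage settings (variable names assumed
    # from the 0.4 docs; verify against your pict-rs version before use).
    export PICTRS__STORE__TYPE=object_storage
    export PICTRS__STORE__ENDPOINT=https://s3.us-east-1.amazonaws.com  # or a MinIO/B2/Wasabi endpoint
    export PICTRS__STORE__BUCKET_NAME=pict-rs
    export PICTRS__STORE__REGION=us-east-1
    export PICTRS__STORE__ACCESS_KEY=YOUR_ACCESS_KEY
    export PICTRS__STORE__SECRET_KEY=YOUR_SECRET_KEY
    ```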

    I know Lemmy uses Postgres for everything else, but they should really invest time into moving towards something more sustainable for long term/permanent hosting. Paid Postgres services are obscenely upcharged and prohibitively expensive, so that’s not an option.

    I’m armchair architecting here, so I’m not sure what that would look like for Lemmy (Cloudflare KV? Redis?).

    Still, even my own private instance has been growing at a rate of about 700MB per day, and I don’t even subscribe to that many things. I can’t imagine what the major instances are dealing with. This isn’t sustainable unless we want to start purging old data, which will kill Lemmy long term.


    EDIT: Turns out ~90% of my Lemmy data is just for debugging and not needed:

    https://github.com/LemmyNet/lemmy/issues/3103#issuecomment-1631643416

    • krogoth@lemmy.sdf.org

      I’m not really sure that a K/V service is a more scalable option than Postgres for storing text posts and the like. If you’re not performing complex queries or requiring microsecond latencies, Postgres doesn’t require that much compute or memory.

      People can get unnecessarily scared of relational databases if they’ve had bad experiences with databases that were used poorly, but forcing relational data into a K/V store can leave the application layer doing a less efficient job of the same kinds of queries the database would normally handle. Maybe there’ll be some future need to offload post and comment bodies into object storage or something, but that seems incredibly premature.
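
      To make that concrete, a front-page listing is naturally a join plus an aggregate, along the lines of the sketch below (a hypothetical, simplified schema, not Lemmy’s actual tables). Postgres resolves this in one indexed pass; a K/V store would leave the application fetching posts, authors, and votes key by key and merging them itself:

      ```sql
      -- Hypothetical, simplified schema for illustration only.
      SELECT p.id, p.title, u.name AS author, COUNT(v.post_id) AS score
      FROM post AS p
      JOIN person AS u ON u.id = p.creator_id
      LEFT JOIN post_vote AS v ON v.post_id = p.id
      GROUP BY p.id, u.name
      ORDER BY p.published DESC
      LIMIT 20;
      ```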

      Object storage for pictrs is definitely a fantastic addition, though.

    • teolan@lemmy.world

      Is the 700MB just the Postgres data, or everything including the images?

      I’m under the impression that text should be very cheap to store in Postgres.

      • cestvrai@lemm.ee

        Keep in mind that you are also storing metadata for each post (e.g. creation time), relations (e.g. which user posted it), and an index.

        It might not be much now, but these things really add up over the years.

        • teolan@lemmy.world

          Yes, but those are generally a couple of bytes at most. The average comment will be less than 1KB, and the metadata that goes with it will be barely more.

          On the other hand, most images will be around 1MB, roughly a thousand times larger. Sure, it depends on the type of instance, but text should be a long way from filling a hard drive. From what I’ve seen on GitHub, the database size is actually mostly debugging information, which might explain the weirdness.

      • ubergeek77

        On average, 500MB is Postgres, 200MB is Pictrs thumbnails. Postgres is growing faster than Pictrs is.

      • Zoe Codez@lemmy.digital-alchemy.app

        My local instance, which I run just for myself, is about a week old. It has 2.5G in pictrs and 609M in Postgres. It’s one of those things that’ll vary for every setup.

    • Lodion 🇦🇺@aussie.zone

      The largest table holds data that is only needed by Lemmy briefly. There is a scheduled job to clear it… every 6 months. There are active discussions on how best to handle this.

      On my instance I’ve set up a cronjob that runs every hour and deletes everything but the most recent 100k rows of that table.
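
      For anyone wanting to do the same, here’s a sketch of that kind of cronjob. It assumes the big table is `activity` (as in the issue linked upthread) with a monotonically increasing integer `id`; adjust the table, database, and user for your own setup:

      ```sh
      # /etc/cron.d entry: hourly, delete all but the newest 100k rows.
      # Assumes the large table is `activity` with an increasing `id`; if the
      # table holds fewer than 100k rows, the subquery returns no row and
      # nothing is deleted.
      0 * * * * postgres psql -d lemmy -c "DELETE FROM activity WHERE id < (SELECT id FROM activity ORDER BY id DESC LIMIT 1 OFFSET 99999);"
      ```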

    • Qazwsxedcrfv000@lemmy.unknownsys.com

      It would be even better if it could also leverage IPFS. Then we would have a unique identifier per media object, and hence deduplication across a P2P network, which in my opinion is a better fit for the federated model. I have been thinking of building such an alternative media backend for a while.
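
      The deduplication would come for free from content addressing: an IPFS CID is derived from the file’s bytes, so two instances adding the same image end up with the same identifier. A quick illustration with the stock `ipfs` CLI:

      ```sh
      # Identical bytes always hash to the same CID, no matter who adds them
      # or when; that is exactly the deduplication property described above.
      ipfs add meme.jpg      # prints: added <CID> meme.jpg
      cp meme.jpg copy.jpg
      ipfs add copy.jpg      # prints the very same <CID>
      ```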

      • ubergeek77

        Hey, that’s a Vultr guide! I use Vultr, thanks!

        By the way, how are your costs on EC2? My understanding is that hosting on EC2 would be cost prohibitive from the data transfer costs alone, not to mention their monthly rates for instances are pretty much always above the cost of a comparable VPS.

        Now if only someone could do this for the Postgres data. I wonder if S3FS would be able to handle the load of a running database; that would be a nice way to save costs.

        • Toribor@corndog.uk

          Currently I’m just running a single-user instance on a t2.micro. I’ve locked it up at least twice after subscribing to a big batch of external communities, so it’s definitely undersized if I were to open it up to more users. I only have one other small service running on that instance, though, so Lemmy is using the bulk of that capacity, at least when it’s got work to do.

          Costs are about $11.25 a month for the instance and about $2.50 for block storage (which is oversized now that pict-rs is on S3). I’m guessing the pict-rs S3 costs will be just a few pennies a day unless I start posting a lot on my own instance, probably less than a dollar a month.

          Data transfer costs for me are zero, though. I’m not using a load balancer or moving things between regions, so I don’t expect that to change.

          • dudeami0@lemmy.dudeami.win

            As for the data transfer costs, any network data originating from AWS that hits an external network (an end user or another region) typically incurs a charge. To quote their blog post:

            A general rule of thumb is that all traffic originating from the internet into AWS enters for free, but traffic exiting AWS is chargeable outside of the free tier—typically in the $0.08–$0.12 range per GB, though some response traffic egress can be free. The free tier provides 100GB of free data transfer out per month as of December 1, 2021.

            So you won’t be charged for incoming federated content, but serving content to end users counts as traffic exiting AWS. I’m not sure of your exact setup (AWS pricing is complex), but typically this is charged. It’s probably negligible for a single-user instance, but I would be careful serving images from your instance to popular instances, as that could incur unexpected costs.
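
            To put rough numbers on it: at those quoted rates, an instance pushing 500GB of images out per month would pay for the 400GB beyond the free tier, i.e. about 400GB × $0.09/GB ≈ $36/month.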

          • ubergeek77

            Just FYI, you could save about $5 a month and get 2x the performance by moving that to a VPS outside AWS. Paying $11 a month for a t2.micro, especially one that’s locking up, is basically you being scammed, if I’m being honest 😅

            AWS isn’t really designed for long-running workloads like this unless you get a long-term commitment discount. It’s intended for enterprises and priced accordingly. For a hobbyist like you, I’d definitely recommend Vultr or something similar.

            Also, be careful about those bandwidth costs; serving data out like that is almost never free. You may not be using a load balancer, but double-check those bandwidth charges. I remember paying for bandwidth I didn’t expect.

            Definitely consider moving to a $5 or $10 instance on Vultr; they have block storage too. You could either save money or spend the same for 3-4x the performance.

            • Toribor@corndog.uk

              Yeah, it’s likely that I’ll move this eventually. This instance was only set up so I’d have a test environment to learn AWS.

    • Legarth@lemmy.fmhy.ml

      Isn’t it mostly pictures and videos taking up space? Posts and comments are just text, which doesn’t take up much.

      I would be fine with text being kept forever while pictures and videos are deleted after some time.

      • WhatASave@lemmy.world

        Just think of all those old, helpful forum posts from years past with dead TinyPic and Photobucket links. I agree memes can probably die out after a while, but anything informative would be bad to lose, imo.

      • freeskier@centennialstate.social

        For large instances, pictures are probably the bigger consumer of space, but for small instances the database size is the bigger issue because of federation. Also, mass storage for media is cheap; fast storage for databases is not. With my host I can get 1TB of object storage for $5 a month, while attached NVMe storage is $1 per month per 10GB, which works out to about $100 a month for the same terabyte.

        For my small instance the database is almost 4x as large as pictrs, and growing fast.

    • sugar_in_your_tea@sh.itjust.works

      AWS Postgres instances aren’t that expensive, and they handle upgrades and backups for you.

      That said, I’m interested in distributed storage, and maybe this fall/winter when I get some time off I’ll try making a Lemmy fork that’s based on a distributed hash table. There are going to be a ton of issues (e.g. data will be immutable), but I have a few ideas on how to mitigate most of the ones I know about.