Over time, Lemmy instances are going to keep acquiring more and more data. Even in the best case, where an instance caches nothing and only stores the data posted to its own local communities, storage requirements will still grow without bound. Eventually it may no longer be economically feasible to keep expanding the server's storage. What happens at that point? Will servers begin to periodically purge old content? I'm concerned that there will be a permanent horizon (and as Lemmy becomes more popular, the rate of storage growth will also increase, pulling that horizon closer) beyond which old, and still very useful, data will cease to exist. Is there any plan to archive this old data?

  • teolan@lemmy.world · 1 year ago

    Is the 700 MB just the postgres data, or everything including the images?

    I’m under the impression that text should be very cheap to store inside postgres.

    • cestvrai@lemm.ee · 1 year ago

      Keep in mind that you are also storing metadata for each post (e.g. creation time), relations (e.g. which user posted it), and indexes.

      Might not be much now but these things really add up over the years.

      • teolan@lemmy.world · 1 year ago

        Yes, but those are generally a few bytes each at most. The average comment will be less than 1 KB, and the metadata that goes with it will add barely any more.

        On the other hand, most images will be around 1 MB, roughly a thousand times larger. Sure, it depends on the type of instance, but text should be a long way from filling a hard drive. From what I've seen on GitHub, the database size is actually mostly debugging information, which might explain the weirdness.
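
        For anyone who wants to check what actually dominates their own database, here is a minimal sketch that lists the ten largest tables. It assumes psycopg2 and placeholder connection settings (database name, user, host), so adjust the DSN to your instance:

        ```python
        # Rough sketch: list the largest tables in a Lemmy postgres database
        # to see whether comments/posts or other tables dominate the size.
        import psycopg2

        DSN = "dbname=lemmy user=lemmy host=localhost"  # assumed connection settings

        QUERY = """
            SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
            FROM pg_catalog.pg_statio_user_tables
            ORDER BY pg_total_relation_size(relid) DESC
            LIMIT 10;
        """

        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.execute(QUERY)
            for table, size in cur.fetchall():
                print(f"{table}: {size}")
        ```

        If the top entries turn out to be activity or logging tables rather than comment and post tables, that would line up with the impression above.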

    • ubergeek77A · 1 year ago

      On average, 500 MB is Postgres and 200 MB is Pictrs thumbnails. Postgres is growing faster than Pictrs is.
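
      If you want to track which of the two keeps growing faster on your own instance, a small sketch along these lines prints the current size of each and could be run periodically, e.g. from cron. The pictrs path, database name, and connection settings are all assumptions; adjust them to your setup:

      ```python
      # Rough sketch comparing postgres vs. pictrs disk usage.
      import os
      import psycopg2

      PICTRS_DIR = "/var/lib/lemmy/pictrs"            # assumed pictrs volume path
      DSN = "dbname=lemmy user=lemmy host=localhost"  # assumed connection settings

      def dir_size(path: str) -> int:
          """Sum the size of every file under `path`, in bytes."""
          return sum(
              os.path.getsize(os.path.join(root, name))
              for root, _dirs, files in os.walk(path)
              for name in files
          )

      with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
          cur.execute("SELECT pg_database_size(current_database());")
          (pg_bytes,) = cur.fetchone()

      print(f"postgres: {pg_bytes / 1024**2:.0f} MiB")
      print(f"pictrs:   {dir_size(PICTRS_DIR) / 1024**2:.0f} MiB")
      ```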

    • Zoe Codez@lemmy.digital-alchemy.app · 1 year ago

      My local instance, which I run just for myself, is about a week old. It has 2.5 GB in pictrs and 609 MB in postgres. It's one of those things that will vary for every setup.