Has anyone tried to deploy the Docker image to Azure? I saw there is some official documentation for AWS, but I have a $150 Azure credit per month, and if I can figure out how to deploy the Docker image to Azure Container Apps I can get another server going.
Can someone explain to me what the point is of having a lot of small instances of something like Lemmy?
I’m very familiar with Azure, and looking at the docker-compose file and the AWS setup, it’s very straightforward to set up a simple instance on Azure Container Apps. How much it costs you will depend heavily on what you want to do with it and how you expect it to be used.
Like, how much traffic are you expecting?
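If it helps, the rough shape of it with the az CLI looks something like the sketch below. The resource names are made up, it only covers the backend container, and the image and port (dessalines/lemmy, 8536) are assumptions taken from the stock docker-compose file:

```sh
# Resource group and Container Apps environment (names are placeholders)
az group create --name lemmy-rg --location westeurope
az containerapp env create --name lemmy-env --resource-group lemmy-rg --location westeurope

# Lemmy backend as a container app with external HTTP ingress
# (assumes the backend listens on 8536, as in the default docker-compose)
az containerapp create \
  --name lemmy-backend \
  --resource-group lemmy-rg \
  --environment lemmy-env \
  --image dessalines/lemmy:latest \
  --target-port 8536 \
  --ingress external
```

Postgres, pictrs and lemmy-ui would each need their own container app (or a managed Postgres), plus something playing the nginx role from the compose file, and that's where most of the cost and fiddliness will come from.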
The idea of having several instances is an architectural choice. Instead of having all the users and content served by a single monolith, the users and the content are spread across several instances that then talk to one another, in a process called federation, to serve that content to users on other instances.
This spread-out architecture allows for lower hosting costs per instance, and if one instance goes down it doesn’t take the entire service down with it. You may experience technical difficulties on one instance while the rest are completely fine, just unable to pull content from the affected one. Additionally, it allows for easier moderation, since moderators (admins?) are instance-specific. You don’t have to moderate the whole of Lemmy; keeping your own house clean is enough. This means that with more instances you are likely to have a higher ratio of mods to users, which leads to better-quality moderation. And if some other instance is misbehaving, you can simply stop federating with it.
And then some. I hope this at least explains some of the design choices behind the fediverse?
Thanks for the reply. If you don’t mind, I still have a few questions.
I understand the value of a distributed architecture and federation. What I’m not sure about is the value of tons (thousands? hundreds of thousands? millions?) of small instances vs. a few hundred or a few thousand large ones.
> This spread-out architecture allows for lower hosting costs per instance, and if one instance goes down it doesn’t take the entire service down with it.
It seems that federation would put more pressure on the popular instances, no? The more popular an instance, the more likely others are to want to federate with it, the more work it has to do to push data, the more calls it has to make, etc. I understand that relays could spread out the load, but that just pushes the problem up one level. I already see wildly different numbers of comments on the same thread between instances, depending on home vs. federated, even at low usage (we’re talking <100 comments). It seems to take a long time for things to sync, and some comments don’t seem to sync at all.
And sure, your own personal instance of Lemmy might be up and fine, but if the popular instances you federate with are down, you’re still essentially cut off, right?
> Additionally, it allows for easier moderation, since moderators (admins?) are instance-specific. You don’t have to moderate the whole of Lemmy; keeping your own house clean is enough.
You still have to moderate any instance you allow to federate with yours, right? Either you lock down which instances you federate with (an allowlist) or you block abusive instances (a denylist); either way it’s still a lot of management. More flexible, for sure, but not exactly a walk in the park, right?
I’ve been trying to set up a working instance using Container Apps, Web Apps for Containers, and ACI, but it remains finicky. Do you know of a Bicep template or deployment script that does this properly?
Did you get anywhere with this? I’ve got it all set up in Container Apps (deployed using Terraform Cloud). I’m having some issues getting it working though - I think it’s something to do with the reverse proxy setup. I’m happy to share the code once I get it into a bit of a better state.
I’m actually considering doing an instance on GCP Cloud Run, just because my main work is in GCP and it’s a pretty decent way to run containers.
I’m thinking I might build the image per the docs via GitHub Actions and push it to Artifact Registry (the GCP service)…
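Roughly the commands the workflow would end up running; PROJECT_ID, REGION, and the repository name are placeholders, and the image/port assumptions are the same as in the Azure discussion above:

```sh
# Let Docker push to Artifact Registry in the target region (placeholder REGION)
gcloud auth configure-docker REGION-docker.pkg.dev

# Build the Lemmy image per the docs and push it to an Artifact Registry repo
docker build -t REGION-docker.pkg.dev/PROJECT_ID/lemmy/lemmy:latest .
docker push REGION-docker.pkg.dev/PROJECT_ID/lemmy/lemmy:latest

# Deploy the pushed image to Cloud Run (assuming the backend listens on 8536)
gcloud run deploy lemmy-backend \
  --image REGION-docker.pkg.dev/PROJECT_ID/lemmy/lemmy:latest \
  --region REGION \
  --port 8536 \
  --allow-unauthenticated
```

Postgres and pictrs would still need to live somewhere persistent (Cloud SQL, a small VM, etc.), since Cloud Run containers are stateless.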