Hello everyone,
I finally managed to get my hands on a Beelink EQ 14 to upgrade from the RPi running DietPi that I have been using for many years to host my services.
I have always been interested in using Proxmox, and today is the day. The only problem is I'm not sure where to start. For example, do you guys spin up a VM for every service you intend to run? Do you set it up with ext4, btrfs, or ZFS? Do you attach external HDDs/SSDs to expand your storage (beyond the two M.2 slots in the Beelink, in this example)?
I've only started reading up on Proxmox today, so I am by no means knowledgeable on the topic.
I hope to hear how you guys set up yours and how you use it in terms of hosting all your services (Nextcloud, Vaultwarden, cgit, Pi-hole, Unbound, etc…) and your "Dos and Don'ts".
Thank you 😊
Install Proxmox with ZFS
Next, configure the no-subscription repo or buy a subscription.
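For reference, here's a sketch of what that looks like, assuming PVE 8 on Debian 12 "bookworm" (the suite name changes with each release, so check the docs for your version):

```shell
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

# then comment out the 'deb ...' line in
# /etc/apt/sources.list.d/pve-enterprise.list and run: apt update
```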
I use one VM per service. WAN facing services, of which I only have a couple, are on a separate DMZ subnet and are firewalled off from the LAN.
It's probably a little overkill for a self-hosted setup, but I have enough server resources, experience, and paranoia to support it.
I prefer running true VMs too, but it is resource-intensive.
Playing with LXCs and Docker could allow one to run more services on a little Beelink.

Yeah, with something that size you're pretty much limited to containers.
Edit: Which is totally fine, OP. Self hosting is an opportunity to learn and your setup can be easily changed as your needs change over time.
I have a couple of publicly accessible services (vaultwarden, git, and searxng). Do you place them on a separate subnet via proxmox or through the router?
My understanding of networking is just deep enough to properly set up OpenWrt with inbound and outbound VPN tunnels along with policy-based routing, and that's where my networking knowledge ends.
Unless you wanna expose services to others, my recommendation is always to hide your services behind a VPN connection.
I travel internationally, and some of the countries I've been to have blocked my WireGuard tunnel back home, preventing me from accessing my vault. I tried setting it up with Shadowsocks and broke my entire setup, so I ended up resetting it.
Any suggestions that are not Tailscale?
I find setting up an openvpn server with self-signed certificates + username and password login works well. You can even run it on tcp/443 instead of tcp/1194 if you want to make it less likely to be blocked.
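As a rough sketch, the server-side config for that setup would contain something like the following (file paths and the PAM plugin location are distro-dependent assumptions):

```shell
# /etc/openvpn/server.conf (excerpt)
port 443
proto tcp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
# require a username/password on top of the certificates
plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so login
```

One caveat: if you want to share tcp/443 with a web server on the same IP, you'd need something like sslh or OpenVPN's port-share directive in front.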
I recommend you use containers instead of VMs when possible, as VMs have a huge overhead by comparison, but yes. Each service gets its own container, unless two services need to share data. My music container, for example, hosts Gonic, slskd, and Samba.
I wouldn’t do that as it complicates things unnecessarily. I would just run a container runtime inside LXC or VM.
Containers as in LXC?
Correct.
Side note: people will tell you not to put Docker in an LXC, but fuck em. I don't want to pollute my hypervisor with Docker's bullshit, and the performance impact is negligible.
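For anyone trying this: on Proxmox, nesting has to be enabled per container first, e.g. (the CT ID 105 is an example; keyctl is only needed for unprivileged containers):

```shell
pct set 105 --features nesting=1,keyctl=1
# restart the container for the change to take effect
pct reboot 105
```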
I wouldn’t recommend running docker/podman in LXC, but that’s just because it seems to run better as a full VM in my experience.
No sense running it in the hypervisor, agreed.
LXC is great for everything else.
There are dozens of us!
There is barely any overhead with a Linux VM; a Debian minimal install only uses about 30MB of RAM! As an end user I find performance to be very similar with either setup.
> as VMs have a huge overhead by comparison.
Not at all. The benefits outweigh the slightly increased RAM usage by a huge margin.
I have UrBackup running in a DietPi VM. I have it set to 256MB of RAM, and that includes the OS and the UrBackup service. It works perfectly fine.
I have an alpine VM that runs 32 docker containers using about 3.5GB of RAM. I wouldn’t call that bloat by any means.
A fresh Debian container uses 22 MiB of RAM. A fresh Debian VM uses 200+ MiB of RAM.
A VM has to translate every single hardware interaction; a container doesn't. I don't want to split hairs with you over the definition of 'huge', but that's kind of a huge difference.
Translate? You know that a CPU sits idle most of the time right?
What kind of potato are you running? And how many hundred services do you run on it anyway, complaining about 200MB? You'd be better off running Docker on bare metal if you're that worried.
Do you know how much RAM Windows 11 uses on idle?
WTF
i have very few services and tend to lean into virtual machines instead of containers out of habit. i have proxmox running on an old mini-pc that needs to be replaced at some point. 16GB of RAM in it, 4 cores on the CPU (it’s an i3 at 2ghz), and a 100GB SSD.
VMs and services are as follows:
- ubuntu vm
- runs my omada controller in docker
- used to run all of my containers in docker but i migrated them to podman
- fedora vm
- runs several containers via podman
- alexandrite, where i’m composing this now!
- uptime kuma
- redlib for browsing reddit
- kanboard for organizing my contracting work
- dietpi in a vm to run pi-hole (migrated here when my pi zero-w cooked itself)
- this also handles internal dns for each server so i don’t have to type out IP addresses
- home assistant HAOS vm
home assistant backs itself up to my craptastic nas and the rest of the stuff doesn’t really have any backups. i wouldn’t be upset if they died, except for my kanboard instance. i can rebuild that from scratch if needed.
i’ll be investing in a new mini-pc and some more disks soon, though.
For inspiration, here’s my list of services:
| Name | ID No. | Primary Use |
|---|---|---|
| heart (Node) | | ProxMox |
| guard (CT) | 202 | AdGuard Home |
| management (CT) | 203 | NginX Proxy Manager |
| smarthome (VM) | 804 | Home Assistant |
| HEIMDALLR (CT) | 205 | Samba/Nextcloud |
| authentication (VM) | 806 | BitWarden |
| mail (VM) | 807 | Mailcow |
| notes (CT) | 208 | CouchDB |
| messaging (CT) | 209 | Prosody |
| media (CT) | 211 | Emby |
| music (CT) | 212 | Navidrome |
| books (CT) | 213 | AudioBookShelf |
| security (CT) | 214 | AgentDVR |
| realms (CT) | 216 | Minecraft Server |
| blog (CT) | 217 | Ghost |
| ourtube (CT) | 218 | ytdl-sub YouTube Archive |
| cloud (CT) | 219 | NextCloud |
| remote (CT) | 221 | Rustdesk Server |

Here is the overhead for everything. CPU is an i3 6100 and RAM is 2133MHz:
Quick note about my setup, some things threw a permissions hissy fit when in separate containers, so Media actually has Emby, Sonarr, Radarr, Prowlarr and two instances of qBittorrent. A few of my containers do have supplementary programs.
I have a single container for docker that runs 95% of services, and a few other containers and VMs for things that aren’t docker, or are windows/osx.
ext4 is the simple easy option, I tend to pick that on systems with lower amounts of RAM since ZFS does need some RAM for itself.
I do have an external USB HDD for backups to be stored on.
@modeh I'd love to meet others who are just starting out with Proxmox and do some casual video calls/chats in European timezones to learn together / try stuff out.
Replace cgit with Forgejo. I really like Jason's software, but Forgejo is a huge step up.
Only reason I am thinking cgit is because I want a simple interface to show repos and commit history, not interested in doing pull requests, opening issues, etc…
I feel Forgejo would be “killing an ant with a sledgehammer” kinda situation for my needs.
Nonetheless, thank you for your suggestion.
I would start with one VM running Portainer, and once that is up and running I would recommend learning how to back up and restore the VM. If you have enough disks, I would look into ZFS RAID 1 for redundancy.
https://pve.proxmox.com/wiki/ZFS_on_Linux
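If you'd rather add the mirror after installing on a single disk, it can also be created from the Proxmox shell (device and pool names below are placeholders; verify with `lsblk` first, since this wipes both disks):

```shell
zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1
zpool status tank
# register the pool as VM/CT storage in Proxmox
pvesm add zfspool local-tank -pool tank
```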
Learning the redundancy and backup systems before having too many services active allows you to screw up and redo.

The Beelink comes with two M.2 slots, so I have two internal drives for now. Is it acceptable to attach external HDDs and set them up in a RAID configuration with the internal ones? I do plan on the Beelink being a NAS too (limited budget, can't afford a separate dedicated NAS at the moment).
I wouldn’t use RAID on USB.
If you've only got 2x M.2 slots then I would probably prioritize disk space over RAID1 and make sure you've got backups up and running. There are M.2-to-SATA adapters, but your Beelink doesn't have a suitable PSU for that.
portainer is cool. dockge is 😎
I remember trying both back when my server was new but missing something in dockge, can’t remember what right now.
As with most things homelab related, there is no real “right” or “wrong” way, because its about learning and playing around with cool new stuff! If you want to learn about different file systems, architectures, and software, do some reading, spin up a test VM (or LXC, my preference), and go nuts!
That being said, my architecture is built up of general purpose LXCs (one for my Arr stack, one for my game servers, one for my web stuff, etc). Each LXC runs the related services in docker, which all connect to a central Portainer instance for management.
Some things are exceptions though, such as Open Media Vault and HomeAssistant, which seem to work better as standalone VMs.
The services I run are usually something that is useful for me, and that I want to keep off public clouds. Vaultwarden for passwords and passkeys, DoneTick for my todo-list, etc. If I have a gap in my digital toolkit, I always look for something that I can host myself to fill that gap. But also a lot of stuff I want to learn about, such as the Grafana stack for observability at the moment.
Thank you.
I guess I have more reading to do on Portainer and LXC. Using an RPi with DietPi, I didn't have the need to learn any of this. Now is as good a time as ever.
But generally speaking, how is a Linux container different (or worse) than a VM?
An LXC is isolated, system-wise, by default (unprivileged) and has very low resource requirements.
- Storage also expands when needed, i.e. you can say it can have 40GB but it’ll only use as much as needed and nothing bad will happen if your allocated storage is higher than your actual storage… Until the total usage approaches 100%. So there’s some flexibility. With a VM the storage is definite.
- Usually a Debian 12 container image takes up ~1.5GB.
- LXCs are perfectly good for most use cases. VMs, for me, only come in when necessary, when the desired program has more needs like root privileges, in which case a VM is much safer than giving an LXC access to the Proxmox system. Or when the program is a full OS, in the case of Home Assistant.
Separating each service ensures that if something breaks, there are zero collateral casualties.
A VM is properly isolated and has its own OS and kernel. This improves security at the cost of overhead.
If you're starved for hardware resources, then running LXCs instead of VMs could give you more bang for the buck.
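If it helps to see the difference in practice, spinning up a throwaway unprivileged LXC on Proxmox is only a couple of commands (the CT ID, storage names, and template filename are examples; `pveam available` lists the current templates):

```shell
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
pct create 201 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname testct --unprivileged 1 --cores 1 --memory 512 \
  --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 201
pct enter 201
```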
You have that new machine to play with. So do it.
Install it and play around. If you do nothing that should “last forever” in these first days, you can tear it down and do it again in different ways.
I have recently played in the same way with the Proxmox unattended install feature, and it was a lot of fun. One text file and a bootable image on a stick.
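For anyone curious, that one text file is an answer file; a minimal sketch for a ZFS RAID1 install might look like this (all values here are placeholders, and the key names should be checked against the docs for the version you're installing):

```toml
[global]
keyboard = "en-us"
country = "us"
fqdn = "pve.home.lan"
mailto = "admin@home.lan"
timezone = "UTC"
root-password = "change-me"

[network]
source = "from-dhcp"

[disk-setup]
filesystem = "zfs"
zfs.raid = "raid1"
disk-list = ["nvme0n1", "nvme1n1"]
```

You bake it into the installer ISO with `proxmox-auto-install-assistant` and boot from the stick.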
Oh yeah, absolutely will do. Was simply hoping to get an idea of how self-hosters who’ve been using it for a while now set it up to get a rough picture of where I want to be once I am done screwing around with it.
I’ve been doing it for a couple of years. I don’t think I’ll ever be done screwing around with it.
Embrace the flux :)
@modeh Certainly no expert, but would starting by setting up some cloud-init image templates be somewhere in there?
Not even sure what that is, so most likely a no for me.
Templates for setting up your new VMs: after setting up your first template, it's a few clicks and a deploy for each new VM.
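The rough shape of it on Proxmox, assuming a downloaded Debian cloud image (the VM IDs, storage name, and filenames are examples):

```shell
# build the template once
qm create 9000 --name debian12-tmpl --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 debian-12-genericcloud-amd64.qcow2 local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0
qm set 9000 --serial0 socket --vga serial0
qm template 9000

# every new VM afterwards is a clone plus a couple of settings
qm clone 9000 101 --name newvm --full
qm set 101 --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp
qm start 101
```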