you can trust the nix repositories aren’t going to change
That, I do not. And storing the source and such for every dependency would be bigger than, and result in essentially the same thing as an image.
I think you’re trying to achieve something different than what docker is for. Docker is like installing onto an empty computer then shipping the entire machine to the end user. You pretty much guarantee things will work. (Yes, this is oversimplified.)
And storing the source and such for every dependency would be bigger than, and result in the same thing as an image.
Let’s flip that around.
The insanity that would be downloading and storing everything you need, wrapping it all up into a massive tarball and then shipping it to anyone who wants to use the end product, and also by the way assuming that everything you need in order to rebuild it will always be available from every upstream source if you want to make any changes, is precisely what Docker does. And yes, it’s silly to trust that everything it’s referencing will always be available from whoever’s providing it.
(Also, security)
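To be concrete about the difference: Nix pins every upstream source by content hash, so the bytes can come from the original URL, any mirror, or a binary cache, and still be verified. A minimal sketch (the URL and hash here are placeholders, not a real package):

```nix
# Hypothetical example: pinning an upstream tarball by content hash.
{ pkgs ? import <nixpkgs> {} }:

pkgs.fetchurl {
  url = "https://example.org/releases/mytool-1.0.tar.gz";
  # Fixed-output hash: the fetch succeeds only if the bytes match,
  # regardless of which mirror or cache actually serves them.
  sha256 = pkgs.lib.fakeSha256; # placeholder; replace with the real hash
}
```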
Docker is like installing onto an empty computer then shipping the entire machine to the end user.
Correct. Because it’s not capable enough to make actually-reproducible builds.
My point is, you can do that imaging (in a couple of different ways) with Nix, if you really wanted to. No one does, because it would be insane when you have other more effective tools that can accomplish the exact same goal without needing to ship the entire machine to the end user. There are good use cases for Docker; making it easy to scale services up, as was the original intent, is a really good one. The way people commonly use it today, as a way to make reproducible environments for ease of one-off deployment, is not one. In my opinion.
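For the curious, “that imaging” can be done with nixpkgs’ dockerTools. A minimal sketch, with pkgs.hello standing in for a real application:

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildImage {
  name = "hello-image";
  tag = "latest";
  # Only the app's closure goes into the image; no distro base layer.
  copyToRoot = pkgs.buildEnv {
    name = "image-root";
    paths = [ pkgs.hello ];
    pathsToLink = [ "/bin" ];
  };
  config.Cmd = [ "/bin/hello" ];
}
```

nix-build that and `docker load < result` gives you a loadable image whose contents are exactly the app’s dependency closure, nothing more.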
I’ve been tempted into a “my favorite technology is better” pissing match, I guess. Anyway, Nix is better.
I might just start bundling my apps inside an environment set up with Nix inside docker. A lot of them are similar or even identical, so those docker images actually share a lot of layers under the hood.
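Worth noting that nixpkgs can generate those layer-sharing images directly: dockerTools.buildLayeredImage puts each Nix store path into its own layer, so images with common dependencies share them byte-for-byte. A sketch, again with pkgs.hello as the stand-in app:

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildLayeredImage {
  name = "hello-layered";
  tag = "latest";
  # Each store path in the closure becomes its own layer,
  # so two images using the same dependency reuse the same layer.
  contents = [ pkgs.hello ];
  config.Cmd = [ "/bin/hello" ];
}
```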
My apps after compiling and packaging are usually around 50mb. That’s 48mb of debian base, which is entirely shared between all the images that I build, plus about 2mb of app-specific layers. So the eventual size of my deployed applications isn’t nearly as big as it seems from the size of the tarball being sent around: for 10 apps, that’s not 500mb, it’s 48mb + 10 × 2mb = 68mb.
If anything, the docker hub and registry are a bit of a mess.