Hi guys! I have several Docker containers running on my home server, each with its own separate compose file, including a Pi-hole one that serves as my home DNS. I created a network for it and gave it a fixed IP, but I couldn’t find a way to set fixed IPs for the other containers that use the Pi-hole network. Everything works, but every now and then I have problems because Pi-hole doesn’t start first and grab its fixed IP: some other container takes that IP instead, and then nothing works, since everything depends on Pi-hole. My Pi-hole compose is like this:

networks:
  casa:
    driver: bridge
    ipam:
      config:
        - subnet: "172.10.0.0/20"

and, under the pihole service:

networks:
  casa:
    ipv4_address: 172.10.0.2

My Jellyfin container as an example is like this:

networks:
  - pihole_casa
dns:
  - 172.10.0.2

and, at the top level:

networks:
  pihole_casa:
    external: true

I read the documentation about setting fixed IPs, but all I found was doing it in one single compose file, and with 12 containers that seems like a messy solution… I couldn’t figure out how to set fixed IPs across different compose files. Do you guys have any suggestions?
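For reference, this is roughly the shape I expected to work for pinning an IP from a separate compose file (the 172.10.0.3 address is just a placeholder inside the casa subnet), but I couldn’t confirm from the docs whether it’s supported:

services:
  jellyfin:
    dns:
      - 172.10.0.2
    networks:
      pihole_casa:
        ipv4_address: 172.10.0.3   # placeholder; would have to be a free address in 172.10.0.0/20

networks:
  pihole_casa:
    external: true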

Thanks!

TLDR: I want to set fixed IPs for containers in different compose files so that all of them use Pi-hole as DNS and don’t steal Pi-hole’s IP at startup.

  • 𝙚𝙧𝙧𝙚@feddit.win · 1 year ago

    I feel like I’m missing something here. I’ve never needed to worry about docker container IPs.

    I wonder why you want them to have fixed IPs. I guess I want to understand the problem it would solve.

    If all you want to do is have the other containers use the pihole as DNS… they’d already be doing so if the server is using it as the DNS server.

    At worst you’d need to add a

    dns:
      - <server IP where pihole is>
    

    property to the services.
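    For example, something like this, where 192.168.1.10 is just a stand-in for your server’s LAN IP:

    services:
      jellyfin:
        image: jellyfin/jellyfin
        dns:
          - 192.168.1.10   # example LAN IP of the host running pihole (assumes pihole publishes port 53 on the host)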

    • fernandu00@lemmy.ml (OP) · 1 year ago

      Maybe I’m the one missing something. Are you saying I just have to use the LAN IP address in the dns line of the compose file? I used the Docker IP address because I had some issues with containers not communicating with each other, since every time I create a new container Docker creates a new network for it. I’m gonna give it a try once I get home. Thanks!

    • fernandu00@lemmy.ml (OP) · 1 year ago

      It worked! Thanks mate! I just removed the IP address and changed to hostnames and everything worked beautifully!

  • HAMSHAMA@lemmy.sdf.org · 1 year ago

    Hello, I think you would prefer to connect via internal DNS names rather than static IPs. I have a slightly similar setup, with docker compose creating a network for my apps.

    Docker compose that creates the network: https://github.com/HugoKlepsch/reverse-proxy-network/blob/master/docker-compose.yaml

    Here I run my app and set the hostname. Note this app is on two docker networks, but for access from nginx it is on the “reverse_proxy” network: https://github.com/HugoKlepsch/buzz/blob/b160aab1649819acacb80964d7b19d7beb1712f6/docker-compose.yaml#L21

    This compose is running nginx as a reverse proxy for my local apps. It uses the reverse proxy network: https://github.com/HugoKlepsch/reverse-proxy/blob/675e11d3d8a98e4692c6e6087de4428ac8d1ece8/docker-compose.yaml#L4

    Here is a configuration file where I tell nginx where to find one of my apps on the reverse proxy docker network. Note the name references the name of the app and the docker network: https://github.com/HugoKlepsch/reverse-proxy/blob/675e11d3d8a98e4692c6e6087de4428ac8d1ece8/config/conf.d/sites-available/buzz.conf#L2
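    The gist, as a minimal sketch (the image and names here are placeholders rather than my exact files):

    services:
      buzz:
        image: myapp:latest        # placeholder image
        networks:
          - reverse_proxy          # shared network that nginx is also attached to
          - default

    networks:
      reverse_proxy:
        external: true

    nginx can then reach the app at http://buzz:<app port>, because Docker’s embedded DNS resolves service names on a shared user-defined network.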

    • fernandu00@lemmy.ml (OP) · 1 year ago

      Wow, what a reply! Thank you so much! I was able to set up all the containers to use the Pi-hole as DNS via hostnames… your post helped me a lot!

  • tvcvt@lemmy.ml · 1 year ago

    I think what you’re describing can be accomplished with docker-compose’s depends_on option. I’m not certain how it works across compose files, but that would be the first place I’d look.
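    Within one compose file it would look something like this (just a sketch, the images are examples):

    services:
      pihole:
        image: pihole/pihole
      jellyfin:
        image: jellyfin/jellyfin
        depends_on:
          - pihole    # compose starts pihole before jellyfin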

    • fernandu00@lemmy.ml (OP) · 1 year ago

      Yeah… it seems like depends_on works between services in one single compose file, and putting 10 services in the same compose file is messy.

      • narc0tic_bird@lemm.ee · 1 year ago

        You can split services into multiple YAML files (e.g. docker-compose.database.yml and docker-compose.backend.yml or whatever) and then bring them up together with docker compose -f docker-compose.database.yml -f docker-compose.backend.yml up -d. Compose treats this like it would one file internally (i.e. the services are connected to each other via an internal network by default).
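        A minimal sketch of what that could look like (file and service names are just examples):

        # docker-compose.database.yml
        services:
          database:
            image: postgres:16           # placeholder image

        # docker-compose.backend.yml
        services:
          backend:
            image: mybackend:latest      # placeholder image
            depends_on:
              - database                 # works once the files are merged into one project

        # then: docker compose -f docker-compose.database.yml -f docker-compose.backend.yml up -d
        # both services end up in one project and share its default network.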

  • poVoq@slrpnk.net · 1 year ago

    Maybe not exactly what you are looking for, but with Podman you can run any Docker container and control it via normal systemd service files, including all the fine-grained control options that come with that.