I followed the Docker installation instructions and added the certificate successfully, but I get this status:
400 Bad Request | nginx
Host nginx error logs:

```
2023/06/11 12:12:45 [debug] 10161#10161: *16 http upstream process header
2023/06/11 12:12:45 [error] 10161#10161: *16 connect() failed (111: Connection refused) while connecting to upstream, client: 198.199.109.53, server: mydomain.tld, request: "GET /version HTTP/1.1", upstream: "http://127.0.0.1:82/version", host: "xxx.xxx.xx.xxx"
2023/06/11 12:12:45 [debug] 10161#10161: *16 http next upstream, 2
2023/06/11 12:12:45 [debug] 10161#10161: *16 free rr peer 2 4
2023/06/11 12:12:45 [warn] 10161#10161: *16 upstream server temporarily disabled while connecting to upstream, client: 198.199.109.53, server: mydomain.tld, request: "GET /version HTTP/1.1", upstream: "http://127.0.0.1:82/version", host: "xxx.xxx.xx.xxx"
```
(I replaced my host IP and domain for privacy.)

Please see my comments below for more info. I tried putting all the text here in the body, but it wouldn't let me post.
Other info

- OS: CentOS 7
- DNS: Cloudflare
- I can curl localhost:82 on my VPS

I'm not familiar with nginx, but I think I set up the ports this way: the Docker proxy container exposes its internal port 80 on localhost:82 on the host, and the host nginx picks up localhost:82 and serves it as mydomain.tld:443.

Is this a Cloudflare DNS issue, or is my setup wrong?
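To narrow down which hop in the chain fails, each layer can be tested separately. A sketch with my ports and domain; `203.0.113.10` is a placeholder for the real VPS IP:

```shell
# 1. Hit the docker proxy directly on the host (bypasses host nginx).
#    A 200/301 here means the container side is fine.
curl -i http://127.0.0.1:82/

# 2. Hit host nginx over HTTPS while skipping Cloudflare, by pinning
#    the domain to the VPS IP (replace 203.0.113.10 with the host IP).
curl -i --resolve mydomain.tld:443:203.0.113.10 https://mydomain.tld/

# 3. Hit the public name normally (goes through Cloudflare).
curl -i https://mydomain.tld/
```

If step 1 works but step 2 fails, the problem is in the host nginx config; if steps 1 and 2 work but step 3 fails, it points at Cloudflare/DNS.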
Host nginx.conf
spoiler
```nginx
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
```
Comment out `proxy_set_header Host $host;` and `sudo systemctl reload nginx`.
Host Lemmy site conf
spoiler
```nginx
#worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

server {
    listen 443 ssl; # managed by Certbot
    server_name mydomain.tld www.mydomain.tld;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    #location / {
    #    root html;
    #    index index.html index.htm;
    #}

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    ssl_certificate /etc/letsencrypt/live/mydomain.tld/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.tld/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass http://localhost:82;
        proxy_set_header Host $host;
        include proxy_params;
    }
}

server {
    listen 80;
    server_name mydomain.tld www.mydomain.tld;

    if ($host = www.mydomain.tld) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mydomain.tld) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    return 404; # managed by Certbot
}
```
Comment out `proxy_set_header Host $host;` and `sudo systemctl reload nginx`.
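For clarity, the suggested change amounts to something like this in the `location /` block of the host Lemmy site conf (a sketch, not a guaranteed fix):

```nginx
location / {
    proxy_pass http://localhost:82;
    # proxy_set_header Host $host;   # commented out per the suggestion above
    include proxy_params;
}
```

Note that `include proxy_params;` may itself set headers, depending on the distribution's `proxy_params` file.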
Nginx container conf
spoiler
```nginx
worker_processes 1;
error_log /var/log/nginx/error.log debug;

events {
    worker_connections 1024;
}

http {
    upstream lemmy {
        # this needs to map to the lemmy (server) docker service hostname
        server "lemmy:8536";
    }
    upstream lemmy-ui {
        # this needs to map to the lemmy-ui docker service hostname
        server "lemmy-ui:1234";
    }

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    server {
        # this is the port inside docker, not the public one yet
        listen 80;
        # listen 8536;
        # change if needed, this is facing the public web
        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        # Upload limit, relevant for pictrs
        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        # frontend general requests
        location / {
            # distinguish between ui requests and backend
            # don't change lemmy-ui or lemmy here, they refer to the upstream definitions on top
            set $proxpass "http://lemmy-ui";

            if ($http_accept = "application/activity+json") {
                set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
                set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy";
            }
            proxy_pass $proxpass;

            rewrite ^(.+)/+$ $1 permanent;

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            # proxy common stuff
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```
docker-compose
spoiler
```yaml
version: "3.3"

networks:
  # communication to web and clients
  lemmyexternalproxy:
  # communication between lemmy services
  lemmyinternal:
    driver: bridge
    internal: true

services:
  proxy:
    image: nginx:1-alpine
    networks:
      - lemmyinternal
      - lemmyexternalproxy
    ports:
      # only ports facing any connection from outside
      - "127.0.0.1:82:80"
      - "127.0.0.1:444:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      # setup your certbot and letsencrypt config
      - ./certbot:/var/www/certbot
      - ./letsencrypt:/etc/letsencrypt/live
      - ./nginx/logs:/var/log/nginx
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui

  lemmy:
    image: dessalines/lemmy:0.17.3
    hostname: lemmy
    networks:
      - lemmyinternal
    restart: always
    environment:
      - RUST_LOG="warn,lemmy_server=warn,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    image: dessalines/lemmy-ui:0.17.3
    networks:
      - lemmyinternal
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=localhost:1236
      - LEMMY_HTTPS=true
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs:0.3.1
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # we can set options to pictrs like this, here we set max. image size
    # and forced format for conversion
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    networks:
      - lemmyinternal
    environment:
      - PICTRS__API_KEY=my_key
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt
    restart: always

  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    hostname: postgres
    networks:
      - lemmyinternal
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
    restart: always
```
The lemmy service needs access to the external network, too. It’s not in the docs, but there’s a bug on GitHub about it (on mobile, can’t find it).
I created a third network called lemmybridge and added it to my lemmy service definition.
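In case it helps, the change amounts to something like this (the network name `lemmybridge` is just what I picked; everything else stays as in the stock compose file):

```yaml
networks:
  lemmyexternalproxy:
  # new non-internal bridge network so the lemmy backend can make
  # outbound requests (needed for federation)
  lemmybridge:
  lemmyinternal:
    driver: bridge
    internal: true

services:
  lemmy:
    image: dessalines/lemmy:0.17.3
    networks:
      - lemmyinternal
      - lemmybridge
    # ...rest of the service definition unchanged
```

The key point is that `lemmyinternal` is `internal: true`, so a container attached only to it has no route to the outside world.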
Thank you for the tip. I've been fighting this damn problem for hours, trying to figure out why my instance wasn't able to make external requests.

I popped a third network into the docker-compose file and now things seem to be working.
lemmy.hjson
spoiler
```hjson
{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  # only few config options are covered in this example config

  setup: {
    # username for the admin user
    admin_username: "a_username"
    # password for the admin user
    admin_password: "some_password"
    # name of the site (can be changed later)
    site_name: "mydomain.tld"
  }

  # the domain name of your instance (eg "lemmy.ml")
  hostname: "mydomain.tld"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  tls_enabled: true

  # pictrs host
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "my_key"
  }

  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "lemmy"
    # username to connect to postgres
    user: "lemmy"
    # password to connect to postgres
    password: "my db pass"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }

  federation: {
    enabled: true
  }

  slur_filter: '''
    (*removed*(g|got|tard)?\b|cock\s?sucker(s|ing)?|ni((g{2,}|q)+|[gq]{2,})[e3r]+(s|z)?|*removed*?s?|*removed*?|\bspi(c|k)s?\b|\bchinks?|*removed*?|*removed*(es|ing|y)?|whor(es?|ing)|\btr(a|@)nn?(y|ies?)|\b(b|re|r)tard(ed)?s?)
    '''
}
```