How to set up an nginx reverse proxy with SSL termination in FreeNAS

Recently I decided to make a number of my services externally available, and so the need arose to put a reverse proxy in place to correctly direct queries to the appropriate server. This guide will present the way I configured this, and attempt to explain some of the design choices along the way. It’s aimed at beginners, so no prior knowledge of these services will be assumed and every effort will be made to explain what each configuration option does.

Design

So, I guess the first place to start is what a reverse proxy is, and why you’d need one. In simplest terms, a reverse proxy is a type of proxy server that retrieves a resource on behalf of a client from one or more services. To illustrate this with a practical example, let’s assume that I host two services on my network, and I want both to be externally available at the domains cloud.example.com and bitwarden.example.com. Unless I want to specify a port at the end of one of these domains, e.g. bitwarden.example.com:4343, both will need to be available on ports 80 and 443. Two services can’t listen on the same ports directly, and this is where the reverse proxy comes in. The reverse proxy is hosted on ports 80 and 443, and it inspects the Host header in each request to determine which service to forward the request on to. This configuration looks like this:

As you can see, a request to the domain name is made from the internet and is forwarded by the router to the reverse proxy server, which determines which server the request should go to. Additionally, this is a good opportunity to introduce SSL termination. This means that the reverse proxy handles all of the certificates for the servers it proxies to, instead of each service managing its own certificate. I’ve found this immensely useful, as it reduces the management load of configuring SSL for every service that I set up. Instead, I obtain a wildcard certificate (*.example.com) and configure it on the proxy server. This way, all hosts with a subdomain of example.com are covered by the certificate and the SSL configuration can be managed in one place.
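
To make this concrete, below is a minimal, hypothetical sketch of what Host-based routing looks like in nginx: two server blocks listening on the same port, each forwarding to a different internal host (certificate directives are omitted here, and the IPs are placeholders). The full working configuration is built up step by step later in this guide.

server {
        listen 443 ssl;
        server_name cloud.example.com;

        location / {
                # Nextcloud jail
                proxy_pass http://192.168.0.10;
        }
}

server {
        listen 443 ssl;
        server_name bitwarden.example.com;

        location / {
                # Bitwarden host
                proxy_pass http://192.168.0.11;
        }
}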

Now that we know the problem a reverse proxy solves, let’s set one up.

Jail Configuration

We’re going to run the reverse proxy in its own jail so that it can be managed easily in isolation from other services. To do this, SSH into your FreeNAS host. If you’re not sure how to do this, you can follow this guide to set it up. Assuming your FreeNAS host is on IP 192.168.0.8:

ssh root@192.168.0.8

If you’re using Windows, you’ll need to use PuTTY, WSL, or some other SSH client. Refer to the above guide for more detail.

Create the jail

Once you’ve established an SSH connection, you can create the jail as follows:

iocage create -n reverse-proxy -r 11.2-RELEASE ip4_addr="vnet0|192.168.0.9/24" defaultrouter="192.168.0.1" vnet="on" allow_raw_sockets="1" boot="on"

To break this down into its constituent components:

  • iocage create: calls on the iocage command to create a new iocage jail
  • -n reverse-proxy: gives the jail the name ‘reverse-proxy’
  • -r 11.2-RELEASE: specifies the release of FreeBSD to be installed in the jail.
  • ip4_addr="vnet0|192.168.0.9/24": provides the networking specification: an IP address/mask for the jail, and the interface to use, vnet0. The address itself is largely arbitrary, but pick something unused on the subnet you want the jail on; if you’re new to this, the simplest choice is an address on the same subnet as your router.
  • defaultrouter="192.168.0.1": specifies the default gateway for your network; change this as relevant for you
  • vnet="on": enables the vnet interface
  • allow_raw_sockets="1": enables raw sockets, which enables the use of programs such as traceroute and ping within the jail, as well as interactions with various network subsystems
  • boot="on": enables the jail to be auto-started at boot time.
    More detail on the parameters that can be used to configure a jail on creation can be found in the man page for iocage
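
If you need to change any of these properties later (say, to move the jail to a different IP address), they can be updated on the existing jail with iocage set rather than recreating it. A hypothetical example:

iocage set ip4_addr="vnet0|192.168.0.20/24" reverse-proxy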

Now to see the status of the newly created jail, execute the following:

iocage list

This will present a printout similar to the following:

+-----+---------------+-------+--------------+----------------+
| JID |     NAME      | STATE |   RELEASE    |      IP4       |
+=====+===============+=======+==============+================+
| 1   | reverse-proxy | up    | 11.2-RELEASE | 192.168.0.9    |
+-----+---------------+-------+--------------+----------------+

Enter the jail by taking note of the JID value and executing the following:

jexec <JID> <SHELL>

For example,

jexec 1 tcsh

Install nginx

Begin the installation process by updating the package manager, and installing nginx (the web server we’re going to use for the reverse proxy) along with the nano text editor and python:

pkg update
pkg install nginx nano python

Enable nginx so that the service starts when the jail is started:

sysrc nginx_enable=yes

SSL/TLS Termination

Since the rest of this procedure involves making some decisions about whether or not to use SSL/TLS termination, we’ll discuss it here.

This guide is going to assume that the reverse proxy will be responsible for maintaining the certificates for all of the servers that it proxies to. This does not have to be the case, however. An equally valid configuration would be to have each of the servers handle their own certificates and encryption, or some combination of both. I won’t address these alternatives in this guide, however with a small amount of research the instructions here shouldn’t be too difficult to adapt to your use case.

Additionally, this configuration will use a wildcard certificate. That is, a certificate for the domain *.example.com, which is valid for all subdomains of example.com. This will simplify the process, as only one certificate needs to be obtained and renewed. However, one requirement of obtaining a wildcard certificate from LetsEncrypt is that a DNS-01 challenge is used to verify ownership of the domain; HTTP-01 challenges cannot be used with this method. This means you must be using a DNS service that gives you control over your DNS records, or one with an API plugin that allows certbot to perform the DNS challenge for you. Certbot have published a list of supported DNS plugins that will enable you to perform a DNS challenge directly. If you’re using one of these providers, I recommend that route. Alternatively, if your DNS provider does not have a plugin, but you have access to edit the DNS records, you can manually configure a TXT record, as described in the certbot documentation. If neither of these alternatives is sufficient for you, acme.sh is a script with perhaps wider compatibility across a range of DNS providers; specific compatibility is detailed in this community-maintained list.
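
For reference, the manual TXT record route mentioned above looks something like the following: certbot prints a value that you then add as a TXT record named _acme-challenge.example.com at your DNS provider before letting it continue (substitute your own domain).

certbot certonly --manual --preferred-challenges dns -d '*.example.com'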

Optionally, you could obtain a certificate for each subdomain that you wish to host and use HTTP-01 challenge validation. This does not require a plugin, and there are a range of ways to do this as described in the LetsEncrypt documentation. There are some basic instructions in this certbot guide, however more research may be required.

To reiterate, this guide will deal only with obtaining a wildcard certificate using a DNS-01 challenge. The DNS provider I use is AWS Route 53, and so this is the plugin I will use.

Certbot installation

Now, let’s install certbot. Certbot is a free, open-source tool for obtaining and maintaining LetsEncrypt certificates. Install it as follows:

pkg install py37-certbot openssl

Additionally, you’ll need to install the appropriate plugin for DNS validation. To show a list of available plugins, execute:

pkg search certbot

At the time of writing, the (relevant) list of results looks like follows:

py37-certbot-1.0.0,1           Let's Encrypt client
py37-certbot-apache-1.0.0      Apache plugin for Certbot
py37-certbot-dns-cloudflare-1.0.0 Cloudflare DNS plugin for Certbot
py37-certbot-dns-cloudxns-1.0.0 CloudXNS DNS Authenticator plugin for Certbot
py37-certbot-dns-digitalocean-1.0.0 DigitalOcean DNS Authenticator plugin for Certbot
py37-certbot-dns-dnsimple-1.0.0 DNSimple DNS Authenticator plugin for Certbot
py37-certbot-dns-dnsmadeeasy-1.0.0 DNS Made Easy DNS Authenticator plugin for Certbot
py37-certbot-dns-gehirn-1.0.0  Gehirn Infrastructure Service DNS Authenticator plugin for Certbot
py37-certbot-dns-google-1.0.0  Google Cloud DNS Authenticator plugin for Certbot
py37-certbot-dns-linode-1.0.0  Linode DNS Authenticator plugin for Certbot
py37-certbot-dns-luadns-1.0.0  LuaDNS Authenticator plugin for Certbot
py37-certbot-dns-nsone-1.0.0   NS1 DNS Authenticator plugin for Certbot
py37-certbot-dns-ovh-1.0.0     OVH DNS Authenticator plugin for Certbot
py37-certbot-dns-rfc2136-1.0.0 RFC 2136 DNS Authenticator plugin for Certbot
py37-certbot-dns-route53-1.0.0 Route53 DNS Authenticator plugin for Certbot
py37-certbot-dns-sakuracloud-1.0.0 Sakura Cloud DNS Authenticator plugin for Certbot
py37-certbot-nginx-1.0.0       NGINX plugin for Certbot

Install the plugin relevant to you. For me, as mentioned, this is Route 53:

pkg install py37-certbot-dns-route53

Configure DNS plugin

To use the DNS plugin, you’re likely going to have to configure it. Consult the documentation for your relevant plugin. The py37-certbot-dns-route53 documentation lists the available methods to configure the Route 53 plugin, however Amazon have conveniently provided us with a CLI tool that will do it for us:

pkg install awscli

Before configuring it, you’ll need to create an access key pair to provide, and limit, access to your AWS account. Bear in mind that if this server is compromised, an attacker will have access to this key pair, so limiting the access it grants is advisable. The plugin documentation indicates that the following permissions are required (a sketch of a matching IAM policy is shown after the list):

  • route53:ListHostedZones
  • route53:GetChange
  • route53:ChangeResourceRecordSets
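
A minimal IAM policy granting only these permissions might look something like the sketch below; the hosted zone ID is a placeholder, and the py37-certbot-dns-route53 documentation has the canonical version.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones",
                "route53:GetChange"
            ],
            "Resource": ["*"]
        },
        {
            "Effect": "Allow",
            "Action": ["route53:ChangeResourceRecordSets"],
            "Resource": ["arn:aws:route53:::hostedzone/YOURHOSTEDZONEID"]
        }
    ]
}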

Now, initiate the configuration process:

aws configure

This will prompt you for four pieces of information:

  • AWS Access Key ID: From the key pair
  • AWS Secret Access Key: From the key pair
  • Default Region Name: The region closest to you, e.g. us-west-2. This should be available in your AWS dashboard
  • Default output format: text

Now, your configuration should be present in ~/.aws/config, and your credentials should be present in ~/.aws/credentials.
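
If you’d rather check or edit these files by hand, they’re plain text and should look something like this (the values here are placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey

# ~/.aws/config
[default]
region = us-west-2
output = text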

Request a wildcard certificate

To obtain a certificate, simply execute the following command:

certbot certonly --dns-route53 -d '*.example.com'

This will undertake a DNS-01 challenge to verify access to the domain you substitute for example.com using the credentials in the plugin that you set up previously.

Configure certificate auto-renewal

LetsEncrypt certificates are only valid for 90 days. To prevent them from expiring, and to avoid having to manually renew them each time, we can automate the renewal process. To do this, we’re going to add a cron job, which is essentially a command that runs at a specified interval. Set your default editor to nano and open up the crontab, where cron jobs are registered:

setenv EDITOR nano
crontab -e

Add the following line:

0 0,12 * * * /usr/local/bin/python -c 'import random; import time; time.sleep(random.random() * 3600)' && /usr/local/bin/certbot renew --quiet --deploy-hook "/usr/sbin/service nginx reload"

Save and Exit (Ctrl + X), and the cron job should be configured. This command will attempt to renew the certificate at midnight and noon every day.

One problem that I’ve had is that the certificate renews successfully, but the site still serves an expiring certificate because the web server configuration hasn’t been reloaded. The --deploy-hook flag solves this issue for us by reloading the web server when the certificate has been successfully updated.
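
You can also confirm that renewal will work, without waiting for the schedule or consuming any rate limits, by doing a dry run against the LetsEncrypt staging environment:

certbot renew --dry-run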

Now that we have our certificate to enable HTTPS, let’s move on to configuring nginx.

Configure nginx

Before getting into specific configurations, it might be useful to outline the approach here. Because a number of directives would otherwise be duplicated across the configuration files, some common snippets will be broken out into their own files to ease configuration management. The final list of configuration files we’ll end up with is:

/usr/local/etc/nginx/nginx.conf
/usr/local/etc/nginx/vdomains/subdomain1.example.com.conf
/usr/local/etc/nginx/vdomains/subdomain2.example.com.conf
/usr/local/etc/nginx/snippets/example.com.cert.conf
/usr/local/etc/nginx/snippets/ssl-params.conf
/usr/local/etc/nginx/snippets/proxy-params.conf
/usr/local/etc/nginx/snippets/internal-access-rules.conf

Certificate configuration

To begin, we’ll start with the snippets:

cd /usr/local/etc/nginx
mkdir snippets
nano snippets/example.com.cert.conf

This file details the SSL/TLS certificate directives identifying the location of your certificates. Paste the following:

# certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
ssl_certificate /usr/local/etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /usr/local/etc/letsencrypt/live/example.com/privkey.pem;

# verify chain of trust of OCSP response using Root CA and Intermediate certs
ssl_trusted_certificate /usr/local/etc/letsencrypt/live/example.com/chain.pem;

Remember to replace example.com with your domain, as requested when obtaining a wildcard certificate earlier. Save and Exit (Ctrl + X).

SSL configuration

Use the configuration generator at https://ssl-config.mozilla.org/ to generate an SSL configuration. I’d recommend using only the Intermediate or Modern profiles; the Modern profile is considerably more secure than the Old one, for example. I’ve used Intermediate here because at the time of writing I had issues establishing a TLSv1.3 connection, whereas TLSv1.2 was consistently successful, however this compatibility comes at the expense of some security.

nano snippets/ssl-params.conf

This is the contents of my file:

ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
ssl_session_tickets off;

# curl https://ssl-config.mozilla.org/ffdhe2048.txt > /usr/local/etc/ssl/dhparam.pem
ssl_dhparam /usr/local/etc/ssl/dhparam.pem;

# intermediate configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;

# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;

# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;

# replace with the IP address of your resolver
resolver 192.168.0.1;

Replace the IP address of your resolver as directed, and then Save and Exit (Ctrl + X). If required by your desired configuration, you may also need to download the dhparam.pem certificate:

curl https://ssl-config.mozilla.org/ffdhe2048.txt > /usr/local/etc/ssl/dhparam.pem

Note that at the time of writing, the Modern configuration did not require this, but the Intermediate configuration did.

Proxy header configuration

nano snippets/proxy-params.conf

Paste the following:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;

Save and Exit (Ctrl + X)

Access policy configuration

This is the policy that we’ll apply to services that you don’t want to be externally available, but still want to access over HTTPS on your LAN.

nano snippets/internal-access-rules.conf

Populate it with the following:

allow 192.168.0.0/24;
deny all;

Replace the network with the subnet relevant to you, and Save and Exit (Ctrl + X).

Virtual domain configuration

Create a new directory for virtual domains:

mkdir vdomains

This directory will contain the configurations for each of the subdomains you wish to proxy to. You need to create one configuration file for each subdomain.

Externally available subdomain

As an example, let’s assume you have a Nextcloud server that you want to proxy to so that it’s available outside your network. Create a configuration file for it:

nano vdomains/cloud.example.com.conf

Populate it as follows:

server {
        listen 443 ssl http2;

        server_name cloud.example.com;
        access_log /var/log/nginx/cloud.access.log;
        error_log /var/log/nginx/cloud.error.log;

        include snippets/example.com.cert.conf;
        include snippets/ssl-params.conf;

        location / {
                include snippets/proxy-params.conf;
                proxy_pass http://192.168.0.10;
        }
}

Then Save and Exit (Ctrl + X). Let’s break this down so you understand what’s happening here. Each proxied service is handled within a server block. nginx iterates over the server blocks within its configuration in order until it finds one that matches the conditions of a request; if no block matches, the server block marked as default_server is used.

The first statements:

listen 443 ssl http2;

server_name cloud.example.com;
access_log /var/log/nginx/cloud.access.log;
error_log /var/log/nginx/cloud.error.log;

This means that this server block listens on port 443 for HTTPS connections and enables HTTP/2. If an HTTPS request is made on port 443, and the Host header in the request matches the server_name directive, then this server block is matched and its directives are executed.

The access_log and error_log directives specify the location of these logs specifically for this server.

include snippets/example.com.cert.conf;
include snippets/ssl-params.conf;

These statements import the directives contained in the files we created earlier, specifically the certificate locations and the SSL parameters.

location / {
        include snippets/proxy-params.conf;
        proxy_pass http://192.168.0.10;
}

The location block is specific to the requested URI. In this case, the URI in question is /, the root. This means that when the URL https://cloud.example.com is requested, this location block is what’s executed. The include statement does the same thing as the snippets above: it imports the directives contained in /usr/local/etc/nginx/snippets/proxy-params.conf that we created earlier. The proxy_pass statement is what forwards the request to the proxied server. In this case, this is where the IP of the Nextcloud jail goes.
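
Note that the proxied service doesn’t have to be listening on port 80; if it serves HTTP on another port, just include that port in the proxy_pass target. A hypothetical example with a service on port 8080:

location / {
        include snippets/proxy-params.conf;
        proxy_pass http://192.168.0.10:8080;
}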

Internally available subdomain

If you don’t want this subdomain to be accessible outside of your local network, then you simply need to include the snippets/internal-access-rules.conf file we created earlier. Assuming you have a Heimdall server for example, your configuration file may be created as follows:

nano vdomains/heimdall.example.com.conf

And, assuming that the server is located at http://192.168.0.12, populate it as follows:

server {
        listen 443 ssl http2;

        server_name heimdall.example.com;
        access_log /var/log/nginx/heimdall.access.log;
        error_log /var/log/nginx/heimdall.error.log;

        include snippets/example.com.cert.conf;
        include snippets/ssl-params.conf;

        location / {
                include snippets/proxy-params.conf;
                include snippets/internal-access-rules.conf;
                proxy_pass http://192.168.0.12;
        }
}
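
Optionally, you can also add a catch-all server block so that HTTPS requests for hostnames you haven’t configured are dropped, rather than falling through to whichever block nginx treats as the default. A sketch, saved as, say, vdomains/catch-all.conf (a hypothetical filename), reusing the certificate snippet created earlier:

server {
        listen 443 ssl http2 default_server;
        server_name _;

        include snippets/example.com.cert.conf;
        include snippets/ssl-params.conf;

        # 444 is an nginx-specific code that closes the connection without sending a response
        return 444;
}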

nginx.conf

Now, nginx only reads /usr/local/etc/nginx/nginx.conf when loading its configuration, so we need to tie everything we’ve just done together in there. The first thing you’ll need to do is move the default configuration out of the way by renaming it to nginx.conf.bak as follows:

mv nginx.conf nginx.conf.bak

Then create a new nginx.conf file for our new configuration:

nano nginx.conf

And populate it as follows:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    # Redirect all HTTP traffic to HTTPS
    server {
        listen 80 default_server;
        listen [::]:80 default_server;

        return 301 https://$host$request_uri;
    }

    # Import server blocks for all subdomains
    include "vdomains/*.conf";
}

Save and Exit (Ctrl + X). The important parts of this are the server block listening on port 80, and the include statement. The server block redirects all HTTP traffic to HTTPS to ensure that the SSL/TLS configuration we set up is being used, and the include statement imports the server blocks from all of the virtual domain configuration files. Now we need to start the service:

service nginx start

If it has already started, just reload it. This is the step you’ll have to take after all configuration changes:

service nginx reload
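
At any point you can also check the configuration for syntax errors before applying it; nginx will name the offending file and line:

nginx -t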

Router configuration

Set up a NAT Port Forward to redirect all traffic received on port 80 at the WAN address to port 80 on the reverse proxy jail, and likewise for port 443. In pfSense (Firewall -> NAT), this looks like the following:

This will ensure that all requests to these addresses will pass through the reverse proxy.
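
Once the forwards are in place, you can verify them from outside your LAN (or before the internal DNS entries described below exist) by connecting to your public IP directly and supplying the hostname via SNI; the certificate presented should be your wildcard certificate. Here 203.0.113.10 stands in for your public IP:

openssl s_client -connect 203.0.113.10:443 -servername cloud.example.com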

DNS Configuration

In order to make these subdomains accessible both internally and externally, you’ll need to add entries to a DNS resolver. To do this internally, you’ll need to add a Host Override entry, or whatever your router’s equivalent is. In pfSense, navigate to Services -> DNS Resolver -> Host Overrides. Assuming the subdomains proxy.example.com, cloud.example.com and heimdall.example.com, this would look like the following:

As can be seen, all subdomains resolve to the reverse proxy jail IP address of 192.168.0.9. For access to these services outside your network, you need to have a valid A record with your DNS provider. As an example, a valid A record would have the name cloud.example.com and the value would be your public IP address.
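
To confirm an override is being served to clients on your network, you can query your resolver directly; drill ships with FreeBSD (substitute your own hostname and resolver IP), and the answer should be the reverse proxy’s address:

drill cloud.example.com @192.168.0.1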

Certificate Authority Authorization (CAA) Records

If you have a DNS provider that supports it, it might be a good idea to add a CAA record. A CAA record essentially allows you to declare which certificate authorities you actually use, and forbids other certificate authorities from issuing certificates for your domain. You can read more about these at SSLMate. SSLMate also provide a configuration tool to help you auto-generate your CAA record configuration.
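
For example, a CAA record limiting issuance for example.com to LetsEncrypt would look something like this in standard zone-file notation (your DNS provider’s interface may split it into separate flag, tag and value fields):

example.com.    IN    CAA    0 issue "letsencrypt.org"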

And that’s it! You should be good to go. If you have any questions or need any clarification, leave a comment down below and I’ll try to help where I can. Also, if you notice any errors, please let me know so I can update the guide.


121 thoughts on “How to set up an nginx reverse proxy with SSL termination in FreeNAS”

  1. Hey excellent writeup on this topic.

    Just a quick question. What’s the difference between using nginx as the reverse proxy vs using HA proxy? I know HA proxy is a load balancer, however just wondering if you could use the HA proxy module within FreeNAS to achieve the same ends as an alternative to setting up a freenas jail.

    Since SSL terminates at the reverse proxy, with any webservers running behind the proxy I assume you just configure them to run on port 80?

    1. Hey Kev, I’ve never used HAproxy so I’m not sure I can provide any good commentary on the differences. I used nginx primarily because it’s touted as pretty high performance for reverse proxying, and because it’s so ubiquitous as a web server it was a good excuse for me to learn about its configuration. I also found the configuration of nginx itself relatively straight forward; the complicated part to me seemed to be obtaining a certificate using certbot, especially with the DNS challenge. With that said, load balancing and reverse proxying are different things. From some quick research it looks like HAproxy is capable of reverse proxying, so it could be a viable alternative.

      Re: your second question, correct. You would just configure the proxied services to serve HTTP (port 80 not necessarily required, just specify the port in the proxy_pass directive, i.e. proxy_pass http://192.168.0.10:4567), and the reverse proxy will upgrade the connection to HTTPS.

  2. Hello Samuel,
    again an excellent guide. Thank you very much.
    However, because of your nextcloud guide I’m currently a little bit ahead on the nextcloud behind nginx reverse proxy jail configuration.

    I suggest to add “proxy_hide_header” lines before adding individual add_header lines,
    e.g. like
    proxy_hide_header Strict-Transport-Security
    add_header Strict-Transport-Security "max-age=63072000" always
    Otherwise, for a certain security header option, both Nextcloud and nginx values are provided, which raises comments in the SSL Labs test.
    Hope this helps. And now I will try to mimic your “snippets” in order to have a better overview of my config.
    Best regards, Markus

    1. Thanks for the suggestion Markus! I’ll definitely have a closer look at putting that in the guide. I got the same result with SSL Labs re: invalid HSTS configuration; I assumed it was because my Nextcloud instance is still looking after its own certificates and SSL policy. I plan on removing this when I upgrade to the latest version and update the nextcloud guide shortly, so I guess I’ll see for sure then, though I suspect this will solve the issue 🙂 proxy_hide_header might be a good intermediate solution. Cheers!

  3. Hello Samuel! Another great guide. I’m working my way through it.
    I created the new nginx.conf file and pasted in the new contents, and I got the following error:

    Performing sanity check on nginx configuration:
    nginx: [emerg] "server" directive is not allowed here in /usr/local/etc/nginx/snippets/ssl-params.conf:2
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed
    Starting nginx.
    nginx: [emerg] "server" directive is not allowed here in /usr/local/etc/nginx/snippets/ssl-params.conf:2
    /usr/local/etc/rc.d/nginx: WARNING: failed to start nginx

    What did I miss?

    1. Hi Phil, looks like the error is saying you have a server directive in snippets/ssl-params.conf – remove it; you just want the bare statements. This file is included in each of the vdomain conf files, so if you also have a server directive in ssl params we end up with something like the following:

      server {
          ...
          server {
              ...
          }
      }
      

      Which is not what we want. Make sure that your file is exactly as shown in the guide and reload nginx to see if it works 🙂 Let me know if you have any more issues.

      Cheers.

      1. Yes! That did it. Success! Thank you Samuel. Since I now have the wildcard certs in place with the reverse proxy, how do i remove the cert I originally created using your nextcloud guide?

        1. I actually haven’t found the time to go through this process myself yet. You could try going through the SSL instructions in reverse and undoing each command? IIRC there’s a revocation command for certbot that allows you to revoke a certificate and remove any lasting traces of it; have a look at the certbot documentation and see what it says. You would also need to revert your vhost settings in apache to provide content over HTTP instead of redirecting to HTTPS – so removing the <VirtualHost *:443> directive and the redirect in the <VirtualHost *:80> directive. Hope this helps 🙂 Cheers

  4. Have you ever thought about putting the proxy and web server in a different VLAN? I’m not sure if this is applicable to your host however its just another form of isolation from your other network.

    1. I’ve thought about it, but haven’t found the time to work out how appropriate it is. I actually bought a managed switch a while ago to play around with VLANs, but haven’t got around to it yet. Is this how you’ve set your network up?

      1. Yes I recently upgraded my switch hardware (using mostly Unifi switches however I do have a few DLink Managed switches as well). I couple these devices with pfsense similar to yours. I’m aware many of your servers are running on AWS which I haven’t yet made a migration to except for a few on Digital Ocean. Any public facing servers I’m putting in their own separate VLAN(s) along with IoT devices for home. Adding VLANs does complicate a few things however, particularly with certificate management and distribution.

        1. Ah that’s cool 🙂 VLANS could definitely be a good way to go; I’m looking forward to researching them more

  5. Any details on how you set up your bitwarden server? I’m interested in doing the same exact thing with the method you discussed above with nginx reverse proxy in front of the bitwarden server.

    Also in general I have a questions about the reverse proxies and termination. What part about your configuration makes nginx the termination point for SSL? I ask this because you have the following in your setup:

    location / {
    include snippets/proxy-params.conf;
    proxy_pass https://192.168.0.10;
    }
    So you are receiving SSL encrypted traffic into the proxy, and then ??re-encrypting it to the internal server at 192.168.0.10? Is the proxy acting as a MIM in this case?

    1. Hi Kev, thanks for pointing this out, you’re right it should be a proxy_pass to HTTP rather than HTTPS. It was something I had in my configuration for my cloud domain (as it still manages its own SSL until I find time to reconfigure it), but slipped through the cracks for getting updated in the guide. I’ve fixed it now. FWIW if you’re reading this and wondering how to continue letting a service behind the reverse proxy continue to manage its own certificate; this is how. proxy_pass to the HTTPS address, and add the proxy_hide_header directive to the relevant vdomain conf file to use the headers as passed from the endpoint, and not the reverse proxy.

      RE: Bitwarden, I was thinking about doing a guide on this but honestly the official instructions are pretty good. I just spun up a debian vm with bhyve and used docker to install it, then followed the prompts for installation. The official distribution is a bit heavy because it uses Microsoft SQL Server underneath, but I didn’t particularly feel like hacking around with any of the alternatives, which have their pro’s and cons (i.e. smaller/faster distribution, but IIRC they only reimplement the API, so don’t ship with the web vault, though I know there are instructions out there on how to get this – just seemed more trouble than it was worth). The examples I’m referring to are rubywarden and bitwarden_rs if you want to go and check them out. There’s a (rough) installation guide for Rubywarden here

  6. Hi Samuel. So I’m hung up on the DNS Configuration section. I don’t have a pfsense box yet. I’m planning on putting one together soon. Right now I have an edgerouter 4. Do you or anyone else have any experience getting this set up with this box?

    I do have the hairpin option set up and I can access my nextcloud internally and download files, but I get an unknown error when I try to upload a file.
    Do you think the issues are related?

      1. Phil, glad you got the upload issue sorted. With regard to DNS configuration, I’ve never used an edge router so I have no idea how to do it. Having said that, some quick research indicates that it might be possible by customising your DNS Forwarding Options. Specifically, it looks like the following command line setting may be roughly equivalent to pfSense’s Host Override (I’m assuming this is what you’re having trouble with and not the port forwarding?):

        set service dns forwarding options address=/cloud.domain.com/192.168.0.10
        

        Where cloud.domain.com is the address you want to redirect and 192.168.0.10 is the address of your reverse proxy jail. Hope this helps!

        Edit: I found this video that looks like it goes through how to set your edge router up to assign static DNS host names using the web interface on the edge router 4

  7. Wow, thank you, this was very useful!
    I wish I had found such a comprehensive tutorial a long time ago!
    My setup is almost identical to yours, except that:
    – I find it more convenient to keep all nginx settings in one file instead of using includes.
    – To access proxied hosts from the LAN by entering https://proxiedhost.mydomain.com, I set up NAT Reflection on pfSense (System > Advanced > Firewall & NAT) instead of Host Overrides. See https://docs.netgate.com/pfsense/en/latest/book/nat/nat-reflection.html for more info. I don’t know enough about networking to imagine all possible consequences of one setup vs. the other, but it’s been working flawlessly for me for a few years now and doesn’t require me to enter additional host overrides as I add web proxied hosts.
    – pfSense also takes care of renewing the Let’s Encrypt wildcard certificates and copying them to FreeNAS via scp, provided you’ve set up passwordless key-based SSH access to FreeNAS.

    Thank you for mentioning nginx access rules. I suspected they existed but never really took the time to look into them.

    1. Ahhh, thanks for mentioning the NAT Reflection! I suspected that there was probably a better way to do it than just host overrides, but I didn’t come across anything. I’m going to look into this to see if it’s more appropriate for my use case 🙂

  8. Hey Samuel. On your advice I went and checked out bitwarden_rs which is a fork written in rust (which you probably know). This guide came in very useful, since I was able to spin up two linux VMs (on FreeNAS) — one for the reverse proxy and the other for the bw_rs implementation. This guide was really helpful in that I only expose the bw server to the internal LAN and the instructions from your reverse proxy were very very helpful in this step. I had used a docker image via docker-compose before, however that actually was relatively easy to setup. The hardest part was setting up postfix as a relay server — with my postfix installation located on the reverse proxy. I had to configure postfix as an encrypted SMTP relay for the LAN machines so that bitwarden — when sending mail — would send to postfix (located on reverse proxy), which then would forward to a gmail account. I wish I could bypass gmail, however I’m not really interested in wading into the world of setting up my own mail server and dealing with all the overhead of management. One thing I really like about bw_rs is that it gives you all the premium features out of the box. It also works really well with the browser extension and mobile apps. Thanks for all your help.

  9. Hello, I have the reverse proxy installed and it is working great! Thank You! However, I would like to implement the DDNS updates for my Route 53; I have followed that part of your guide on installing Nextcloud and have tried to use the DDNS updates for Route 53 on the reverse proxy, but I haven't been able to get it to work. I did have to install bash. When I run "/usr/local/bin/bash /scripts/update-route53/update-route53.sh" I am getting an aws: error:

    aws: error: the following arguments are required: --hosted-zone-id, --change-batch
    /scripts/update-route53/update-route53.sh: line 92: --hosted-zone-id: command not found
    /scripts/update-route53/update-route53.sh: line 93: --change-batch: command not found

    I thought that maybe it was due to the fact I didn't have pip installed, so I installed pip; however, I am now lost on what to look for next.
    Thank You very much for your guides and help as I know that I have learned so much!

    1. Fixed it! See below:

      nano /usr/local/etc/nginx/vdomains/subdomain1.example.com.conf

      server {
              listen 443 ssl http2;

              server_name subdomain1.example.com;
              access_log /var/log/nginx/cloud.access.log;
              error_log /var/log/nginx/cloud.error.log;

              include snippets/example.com.cert.conf;
              include snippets/ssl-params.conf;

              location / {
                      include snippets/proxy-params.conf;
                      proxy_pass http://192.168.0.0;
              }
              location = /.well-known/carddav {
                      return 301 https://subdomain1.example.com/remote.php/dav;
              }
              location = /.well-known/caldav {
                      return 301 https://subdomain1.example.com/remote.php/dav;
              }
      }

    2. Interesting. I hadn’t seen that. I also can’t really speak to it; it hasn’t been an issue for me. I have CardDAV set up to sync my contacts on my phone, and I’m not having any issues with my current configuration. Aside from that though, I guess you might need to add a location block to your server directive that takes the target URL and redirects it appropriately? Might be worth seeing if the current configuration works or not though, I don’t see any reason why it wouldn’t. Hope this helps.

  10. Also I got a “Request Entity Too Large” from my android Nextcloud client when trying to upload a ~3MB file, so I added this to the server {} line in nano /usr/local/etc/nginx/nginx.conf:

    client_max_body_size 1999M;

    1. You have to add this line in the nginx.conf:

      "client_max_body_size 10240M;"
      below
      "keepalive_timeout 65;"

  11. I am trying to add a redirect for a generic TCP service using a stream { } argument, but I get an error while starting nginx:

    nginx: [emerg] unknown directive "stream" in /usr/local/etc/nginx/vdomains/...

    nginx -V shows "--with-stream=dynamic", and my google-fu searching makes me think that has to be set to static. Any ideas?

  12. Great guide. Everything was going smoothly until I got to the part where I start up nginx. I get the following error:

    root@reverse-proxy:/usr/local/etc/nginx # service nginx start
    Performing sanity check on nginx configuration:
    nginx: [emerg] BIO_new_file("/usr/local/etc/ssl/dhparam.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/usr/local/etc/ssl/dhparam.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
    nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed
    Starting nginx.
    nginx: [emerg] BIO_new_file("/usr/local/etc/ssl/dhparam.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/usr/local/etc/ssl/dhparam.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
    /usr/local/etc/rc.d/nginx: WARNING: failed to start nginx

  13. Thank you for all your guides, I already used your nextcloud guide to understand what it is I’m doing as opposed to just running a script. Great amount of detail and explanation, much appreciated.

    I am having trouble setting up the reverse proxy, however. Apart from nextcloud I have a simple html repair manual on a different jail and want to run Onlyoffice. When I access everything locally, it all works (but isn’t going through the reverse proxy), but when I go through the proxy only nextcloud is available. Neither is the repair manual accessible, nor does Onlyoffice work. So there is a problem with how I set up my reverse proxy, but I fail to understand where. When I look at the error logs of the repair manual I keep seeing some references to /remote/webdav-folders that nextcloud utilizes, but don’t get where that comes from; I’m trying again from scratch now.

    The reason for setting up the reverse proxy is that I don’t want to expose all the different hosts directly and having to manage all the different certificates this entails. So in theory, is it not enough to have one certificate running on the reverse proxy and everything behind that is just running as http?

    1. Hi Jens, this is exactly why I set mine up this way. With regards to your problem, it’s possible that it has something to do with the way DAV works. Another user reported similar issues, and resolved it by redirecting the DAV endpoints specifically. From their comment:

      nano /usr/local/etc/nginx/vdomains/subdomain1.example.com.conf
      
      server {
              listen 443 ssl http2;

              server_name subdomain1.example.com;
              access_log /var/log/nginx/cloud.access.log;
              error_log /var/log/nginx/cloud.error.log;

              include snippets/example.com.cert.conf;
              include snippets/ssl-params.conf;

              location / {
                      include snippets/proxy-params.conf;
                      proxy_pass http://192.168.0.0;
              }
              location = /.well-known/carddav {
                      return 301 https://subdomain1.example.com/remote.php/dav;
              }
              location = /.well-known/caldav {
                      return 301 https://subdomain1.example.com/remote.php/dav;
              }
      }
      

      The difference here is that it redirects /.well-known/caldav and /.well-known/carddav to /remote.php/dav. You could try this and see how it goes, otherwise without posting any configuration or error messages there’s not much I can do to help, as I don’t use Onlyoffice. I’m also not sure what you mean when you say the repair manual isn’t available. How are you hosting it? You’ve said it’s in a jail but it’s not clear to me why/how it should be available. Have you created a vdomain entry for it? Hope this helps.

      1. I will have a look at the “dav’s”, that’s an interesting point.

        The repair manual is hosted via nginx in a separate jail, it’s just a bunch of htmls and images that were created way back in the days of dial-up… it is available locally as “http://e24” or via its IP directly, and in the reverse proxy I’m pointing to it by “location /e24”, but that doesn’t work. It’s not finding a csrf token file that obviously in this stone-age website doesn’t exist, says the error log. This mainly served as a testbed for me to see if the “location /” setup works, before taking a deep dive at Onlyoffice and why that only works when served locally. I know there might be a few obstacles in Onlyoffice config to make it work behind a reverse proxy and think I have that figured out, but the fact that the “location /” is not working is throwing me off right now. I will have another look, but it’s been costing me much more time than I planned already, so I might just end up not using a reverse proxy and exposing all the services that are running locally that need exposure separately and managing their certs… turns out the reverse proxy isn’t quite as easy as I thought it would be… Your help, however, is much appreciated, either way!

        1. Jens, I think you might have misunderstood how to configure a vdomain. If you want it to be available locally at https://e24, you’ll need to set the server_name directive to e24 and the location to /, i.e. something like the following in /usr/local/etc/nginx/vdomains/e24.conf:

          server {
                  listen 443 ssl http2;
          
                  server_name e24;
                  access_log /var/log/nginx/e24.access.log;
                  error_log /var/log/nginx/e24.error.log;
          
                  proxy_hide_header Strict-Transport-Security;
                  include snippets/e24.cert.conf;
                  include snippets/ssl-params-intermediate.conf;
          
                  location / {
                          include snippets/proxy-params.conf;
                          proxy_pass http://<IP TO e24 JAIL>;
                  }
          }
          

          You’d then have a DNS entry to resolve https://e24 to your reverse proxy IP. Hope this helps.

          Edit: Important to note that you won’t be able to get a LetsEncrypt certificate for the domain e24; the reason I subdomained all of my jails was to utilise the wildcard certificate that I could obtain for *.example.com. It might be better to host your jail at e24.yourdomain.com, and then get a wildcard for *.yourdomain.com, which would encrypt all of your sites.

          1. Well, that makes sense now. The problem is I can only use CNAME in my (sub)domains to forward to the dyndns service built-in with my router (already a subdomain, as all Dyndns solutions I know of are), which in turn is going to generate a certificate error which I don’t want, so I guess I will revert to a different solution.

            Anyways, thanks a lot Samuel. People like you make the Internet worth keeping 😉

  14. Samuel — would you be open for a “guest” post submission? I don’t have my own wordpress website (at least right now but plans in the works). My topic would be setting up authelia which is a frontend to protect various domains or websites either via two factor authentication, duo push notification of YubiKey – https://github.com/authelia/authelia. This topic integrates nicely with your reverse proxy writeup and incorporates topics you’ve previously touched on (nginx, Let’s Encrypt Certs, smtp forwarding (gmail)) which also incorporating new topics such as docker, docker-compose that deal with container setup and administration.

    1. I would love a write-up on this, but you don’t need Samuel’s blog, you can create a how-to on github, a lot of guides are uploaded there!

      For example:

      https://github.com/seth586/guides/blob/master/FreeNAS/README.md

      Feel free to rip off the template and make a few pages! If you don’t want to make a github page, send me the writeup, and I can upload. I’ve been meaning to update my guide into a complete home server how-to page.

  15. Hey man!
    Thanks for the guide! I gave up doing this a few years back, but this writeup really helped me understand it all better! – just one evening made it happen!

    I got it working using the acme.sh script, as I’m on Duck DNS.

    However, the last step my (ISP’s) router doesn’t seem to support, so I just thought I would skip that step, and to my surprise, it still works! Both internally and externally!

    Does this mean I did something wrong?

    1. Hi Mathias,

      When you say “last step”, are you referring to adding a CAA record to your DNS provider? If so, this isn’t done with your ISP, it will be done with whoever you have registered your domain with. For me, this is AWS so I added an entry in Route 53. It’s an entirely optional step, but it’s a setting that prevents other DNS Providers from issuing valid certificates for your domain. Given you’re using duck dns, I’m guessing you don’t own a domain, so it’s not something to worry about.

      Cheers,

      Sam

  16. Hey, thanks Samuel for the information. I’m in the process of writing this up which tends to be a lot more difficult than just setting things up since I need to completely verify every single step.

    Also I recently learned about GitHub pages. I have no idea why I hadn’t heard of this application before. It’s really neat and nice for hosting things like this. It uses Jekyll as its provisioning app. I believe you write with WordPress which is likely more fully featured, however it seems for this project at least the jekyll platform is mostly adequate. I’m learning about markdown and scss — seems like there is always something to learn.

    Anyway I have the template engine installed locally and have Travis CI set up in the background to do the provisioning. I suspect I may have this writeup done in a week or so and I’ll submit you a link.

    1. Hey Kev,

      I appreciate the offer, but I have some uncertainty about the future of this blog at the moment. Not about it going down, but I’m looking at ways to implement CI/CD so that I can author all of these posts with Markdown and deploy from git commits. This seems to be reasonably easy to do for static websites without comments, but for dynamic sites such as WordPress this appears more complicated. I’m exploring a range of other CMS providers so there’s a possibility (probability) that migration would affect the availability/preservation of the posts and their historical comments/metadata.

      With this in mind, it looks like you’ve found a good solution and I’d be keen to read your article when you post it – I recently bought a couple of YubiKeys and I’m still trying to work out a way to use them that works best for me 🙂

      Cheers

  17. I followed this tutorial and my reverse proxy is acting up. It first started with issues communicating with the FreeNAS host; the internal subdomain I set up kept getting 503s, I think I recall? Well, I found out it wasn’t able to receive pings back from the FreeNAS host, and as a last-ditch effort I changed the IP of the jail and it was able to see the FreeNAS host again.

    But now I’m having issues with the server refusing to connect. It happens about 10-30 minutes after Nginx is loaded. If I reload Nginx, the issue goes away. The logs don’t have anything on these events. In fact I deleted them yesterday, nothing is in the error log since.

    I’m forwarding TCP ports 80 and 443 from my Google Wifi router to the jail’s IP. Any help would be appreciated.

    1. Hi Tyler, I wish I could provide more help but this isn’t something I’ve ever seen before and I’m far from an expert on nginx. I suggest maybe checking out #nginx on freenode – they may be more helpful!

  18. Hi there,
    first I want to thank you for putting up your guides, I appreciate them a lot!
    I am a total beginner concerning networking and hope I am describing my problem in an accurate way.
    First of all: I am using a FreeNAS system. There I am trying to set up a reverse proxy in a jail with IP 192.168.0.10 and am trying to route traffic to my nextcloud jail which is at 192.168.0.10.
    While following the instructions several questions arose.
    1) What is the resolver IP in the setup of ssl-params.conf?
    I assume it is the IP of my network? (I am sorry for such a newbie question)
    2) I am using AWS as DNS resolver. There I have a DNS entry for: example.com
    Next I set up an alias (at AWS) for my nextcloud which looks like nextcloud.example.com. I hope this is correct?
    3) I forwarded all incoming traffic (ports 80 & 443) in my router to the reverse proxy (192.168.0.9)
    When I try https://nextcloud.example.com it refuses the connection due to a safety issue.
    I double checked all configuration files and I assume they are correct. Can anyone help me on this?
    Cheers

      Unfortunately I cannot edit my post.
      I was able to solve the problem, as you pointed out in the guide: using the intermediate ssl-config (with TLSv1.2) solved my issues.

      Feel free to remove this post.

      And once again : thank you for your guides! 😉
      Cheers

  19. Samuel – did you set your Nginx Reverse Proxy to Proxy to your Apache Reverse Proxy to Proxy to your Nextcloud? Did you terminate the SSL connection at the reverse proxy or re-encrypt to the backend? If there are “two reverse proxies in place” was there anything you did on the first reverse proxy to configure it for nextcloud? Do you have to change anything on the backend to make this work?

    1. Hey Kev,

      I haven’t changed anything from what I detail in my Nextcloud guide. From Nextcloud’s perspective, I proxy php requests to the fcgi handler with Apache. I also use the nginx reverse proxy to handle traffic to Nextcloud, with SSL termination. However, since I haven’t changed my Nextcloud configuration since I first set it up, Nextcloud currently still serves itself via HTTPS. I plan to change this so that it’s served over HTTP and no longer handles any certificate configuration itself, but time is a factor for me at the moment (too much studying!). Ultimately, this means that the vdomain configuration file for my nextcloud instance looks like the following:

      server {
              listen 443 ssl http2;
      
              server_name cloud.example.com;
              access_log /var/log/nginx/cloud.access.log;
              error_log /var/log/nginx/cloud.error.log;
      
              proxy_hide_header Strict-Transport-Security;
              include snippets/example.com.cert.conf;
              include snippets/ssl-params-intermediate.conf;
      
              location / {
                      include snippets/proxy-params.conf;
                      proxy_pass https://192.168.0.9;
              }
      }
      

      Note that the primary difference here is that it’s proxy_pass-ing to a HTTPS address and not a HTTP address. I suppose to answer your question, there’s no Apache reverse proxy, per-se.

  20. Hey Samuel — Quick question. I took your nextcloud blog information and just changed the webserver to nginx. It’s not that I don’t like Apache, its just there is a lot more info on configuring nextcloud with nginx. Anyway I want to put an nginx reverse proxy in front of my VM running nginx/nextcloud. I believe you have something similar with a VM running an nginx reverse proxy and an upstream VM with apache/nextcloud. Was there any specific headers you needed to use on the reverse proxy side when passing to the apache/nextcloud backend? I’m aware that the nextcloud config.php file likely needs the name of the reverse proxy included as a “trusted proxy”. Was there any additional changes you needed to make on the nextcloud end with the introduction of the reverse proxy? I’ve set up the reverse proxy and am in the process of trying to create a proxy pass to the backend using “proper” server and client authentication. I’ve got the https server authentication to the backend working on a test server (non nextcloud), and I’m slowly struggling but have a basic framework for client authentication certs with self-signed certs. I’m just missing the last nextcloud piece in the equation.

    1. Hi Kev,

      My vdomain configuration file for my nextcloud server looks like this:

      server {
              listen 443 ssl http2;
      
              server_name cloud.domain.com;
              access_log /var/log/nginx/cloud.access.log;
              error_log /var/log/nginx/cloud.error.log;
      
              proxy_hide_header Strict-Transport-Security;
              include snippets/domain.com.cert.conf;
              include snippets/ssl-params-intermediate.conf;
      
              location / {
                      include snippets/proxy-params.conf;
                      proxy_pass https://192.168.0.10;
              }
      }
      

      Note the https in the proxy_pass directive; this is there because I haven’t reconfigured nextcloud to serve via http yet; it’s still serving the page via https as per a previous iteration of my nextcloud installation guide (note: it still uses the wildcard certificate defined in my domain.com.cert.conf file). I haven’t made any adjustments to my config.php file or the “trusted_domains” field, it still works with just its own IP and the domain name, presumably because the proxy is forwarding the request with the ‘Host’: ‘cloud.domain.com’ header intact. Does this answer your question?

      Cheers

  21. @Samuel
    Hey thanks for the advice regarding the following header:
    proxy_hide_header Strict-Transport-Security;
    I wasn’t aware of this header

    In terms of proxy_pass https://….
    Nginx by default does not verify the upstream server. You could have the upstream server offer any certificate and nginx would accept it (by default). You have options, however, to verify the cert if you would like. For experimental purposes I’m trying to get my upstream working with server and client SSL certificates. Hopefully I’ll have a working example soon.

    1. Cheers Kev, that’s good to know. This is really just a stopgap until I can reconfigure everything. The intention is to have SSL terminate at the reverse proxy, so I’m planning on removing any of the responsibility of serving SSL/TLS from nextcloud entirely.

  22. Sorry to keep bothering you. I was able to set up an nginx reverse proxy in front of an nginx/nextcloud installation (I used your original nextcloud documentation, however I switched over to using nginx as the server rather than apache).

    I’m able to reverse proxy to nextcloud however I’m wondering if you have a collabora installation as well. My collabora docker container functions properly with nextcloud in the absence of a reverse proxy, however when I add in the reverse proxy, things don’t exactly work.

    1. Hey mate, I don’t unfortunately. I’d imagine it’s just a matter of forwarding the right traffic; but I haven’t looked at collabora at all. If you find a solution I’d be keen to hear what you had to do though!

      Cheers

    2. Hi Kevdog,

      I’m interested in also switching NextCloud over to Nginx, instead of Apache. Did you find a good set of steps and config to follow by?

      Angelo

    3. This is my vdomains file for collabora. Make sure to create a host override on your router so that, on your local network, collabora.mydomain.com resolves to your reverse proxy IP address.

      server {
              listen 443 ssl;
              server_name collabora.mydomain.com;

              server_tokens off;

              include snippets/mydomain.com.cert.conf;

              # static files
              location ^~ /loleaflet {
                      proxy_pass http://192.168.84.247:9980;
                      proxy_set_header Host $http_host;
              }

              # WOPI discovery URL
              location ^~ /hosting/discovery {
                      proxy_pass http://192.168.84.247:9980;
                      proxy_set_header Host $http_host;
              }

              # Capabilities
              location ^~ /hosting/capabilities {
                      proxy_pass http://192.168.84.247:9980;
                      proxy_set_header Host $http_host;
              }

              # main websocket
              location ~ ^/lool/(.*)/ws$ {
                      proxy_pass http://192.168.84.247:9980;
                      proxy_set_header Upgrade $http_upgrade;
                      proxy_set_header Connection "Upgrade";
                      proxy_set_header Host $http_host;
                      proxy_read_timeout 36000s;
              }

              # download, presentation and image upload
              location ~ ^/lool {
                      proxy_pass http://192.168.84.247:9980;
                      proxy_set_header Host $http_host;
              }

              # Admin Console websocket
              location ^~ /lool/adminws {
                      proxy_pass http://192.168.84.247:9980;
                      proxy_set_header Upgrade $http_upgrade;
                      proxy_set_header Connection "Upgrade";
                      proxy_set_header Host $http_host;
                      proxy_read_timeout 36000s;
              }
      }

  23. Hey Sam,

    How do you use this reverse proxy to redirect to your main domain blog without a subdomain?

    For example, I currently have successful reverse-proxying of cloud.fubar.com, but not www.fubar.com or fubar.com

    1. Found the solution! Run certbot with the syntax:

      certbot certonly --dns-route53 -d 'example.com,*.example.com'

      Then make sure your DNS A records for example.com and www.example.com point to your reverse proxy!
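
      On the nginx side, a minimal sketch of what the corresponding vdomain file could look like, assuming the same snippets layout as the guide (the backend IP is a placeholder for wherever the blog actually runs):

      server {
              listen 443 ssl http2;

              # Bare domain and www handled by one block
              server_name example.com www.example.com;

              include snippets/example.com.cert.conf;
              include snippets/ssl-params.conf;

              location / {
                      include snippets/proxy-params.conf;
                      proxy_pass http://192.168.0.12;
              }
      }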

  24. Hey, wish I had found this earlier, it would have saved me a lot of time. I set up my FreeNAS and ended up with a similar setup. But I’m stuck on two things.
    1. I have a Telekom Speedport router (the manufacturer is Huawei, I think) and found no way to do the NAT stuff.
    2. I can access my nextcloud from outside via my dynDNS domain and all seems to work fine, but the nextcloud app on Android (didn’t test others) loops endlessly in the last confirmation dialog that gives the app access to nextcloud. No error logs. The only thing I see in my access.log is that the app uses a POST request for login while the browser uses GET.
    Maybe you have an idea about my two issues.
    Thank you and greetings from Germany.

    1. I think you are having the same issue I had. I found a suggestion in a forum somewhere to add the line below to config.php, and it took care of the problem for me:
      'overwriteprotocol' => 'https',

  25. I figured out the reason why TLS 1.3 won’t work:

    FreeNAS is basically just FreeBSD 11.3, and so all jails run FreeBSD 11.3. The problem is that FreeBSD 11.3 ships with OpenSSL v1.0.2. The first version of OpenSSL that ships with TLS 1.3 support is v1.1.1, so the solution is to somehow upgrade the base OpenSSL package so that it has TLS 1.3 support. I’m not sure how to do that with FreeNAS, but if your host is running FreeBSD >=12, you should be able to just upgrade the jail and it’ll work out of the box, since FreeBSD >=12 ships with OpenSSL 1.1.1. Hope this helps others! (I’m so sorry if you spent more than 8 hours messing with nginx configs like me in a vain attempt to get it working, when it turned out to just be an out-of-date package.)

    1. I’m sure this is part of the story, but perhaps not the whole story. I have 1.1.1g installed in my jail and it’s not working for me either. To upgrade, you can simply execute the following from within the jail:

      pkg install openssl
      
      1. When you install openssl using pkg, the binary is installed into /usr/local/bin rather than /usr/bin. So when nginx calls openssl, it calls the one bundled with FreeBSD and not the newer version (you can confirm this by running which openssl, or openssl version). Simply moving the binary from /usr/local/bin didn’t work for me for some reason, so upgrading the OS itself did the trick.
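
        For what it’s worth, a quick way to see the two OpenSSL binaries side by side from inside the jail (a sketch; these are the standard FreeBSD paths):

        # Base system OpenSSL (what FreeBSD 11.x ships with)
        /usr/bin/openssl version

        # Package-installed OpenSSL from pkg
        /usr/local/bin/openssl version

        # Which one comes first in your PATH
        which openssl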

    2. @Nick Nnien
      Sorry you had to suffer 8 hours to figure this out.

      I’m running FreeNAS 11.U2-1.
      I have a nextcloud jail (as per Samuel Dowling’s Guide), and I also have nginx with openssl 1.1.1

      uname -mrs

      FreeBSD 11.3-RELEASE-p7 amd64

      nginx -V

      nginx version: nginx/1.17.9
      built with OpenSSL 1.1.1g 21 Apr 2020
      TLS SNI support enabled

      OpenSSL 1.1.1 introduces an entirely new API, so any application that depends on openssl needs to be recompiled against the new version (if you are installing from ports). If you’ve installed things from ports, you can check what is compiled against openssl via:

      pkg info -r security/openssl

      The following method will not change the base openssl for the system, just the one used by port-installed packages. (This is kind of confusing to explain, however you’ll see it on the command line):

      openssl version

      OpenSSL 1.0.2s-freebsd 28 May 2019

      pkg info openssl

      openssl-1.1.1g,1

      What you need to do is:
      #1 – install openssl 1.1.1

      pkg install openssl

      Check that 1.1.1g is installed via:

      pkg info openssl

      openssl-1.1.1g,1

      #2 Prepare to build nginx from ports
      a. Edit /etc/make.conf and add the following:
      www_nginx-devel_DEFAULT_VERSIONS+=ssl=openssl111
      DEFAULT_VERSIONS+=ssl=openssl

      b. Install nginx-devel (from ports). This will conflict with the nginx pkg if you have it installed, and it will remove it by default. If you’ve never installed from ports, you can do the following (these instructions vary a little depending on the source you read, but this is what I did):

      cd /usr/ports/www/nginx-devel

      make clean

      make config

      This will bring up an ncurses menu where you can select any additional packages or modules to build with nginx. The source is here: https://www.freshports.org/www/nginx-devel. I just went with the defaults. You can always reinstall later if you find a missing package.

      make

      make install (or reinstall if you are reinstalling)

      You might be prompted about the conflicting nginx package at this point since you are installing nginx-devel. Go ahead and install nginx-devel.

      Now check nginx version:

      nginx -V

      nginx version: nginx/1.17.9
      built with OpenSSL 1.1.1g 21 Apr 2020
      TLS SNI support enabled

      That should be about it. You’ll see that nginx-devel is now dependent on openssl 1.1.1:

      pkg info -r security/openssl

      openssl-1.1.1g,1:
      nginx-devel-1.17.9

      nginx-devel will now need to be manually updated from ports rather than through the pkg manager with this method (I believe).

      Hopefully this helps.

  26. Hi, thanks so much for this detailed write-up! I was able to get this working pretty easily. I have a question though: if I have a webpage or server that requires websocket support, how do I set that up? I was using the NGINX Reverse Proxy written by JC21 for docker; it has a web ui front end where I can enable websocket support. Is this possible with this particular install?

    Thanks

    1. Hi Joshua, certainly should be possible; there’s nothing special about this configuration that would exclude it. You would just need to add the right directives to nginx.conf. The stream directive might be appropriate; see if you can use the discussion here as a framework to adapt to your desired configuration
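
      If it helps, here is a rough sketch of the usual websocket-forwarding directives for a vdomain server block (the backend IP/port is just a placeholder):

      # In nginx.conf, inside the http block: translate the client's Upgrade request
      map $http_upgrade $connection_upgrade {
              default upgrade;
              ''      close;
      }

      # In the relevant vdomains/*.conf server block
      location / {
              include snippets/proxy-params.conf;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection $connection_upgrade;
              proxy_pass http://192.168.0.20:8080;
      }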

  27. Thanks for the well-written guide, and kudos on the streamlined command entry. A lot is packed into this, but it went quickly with a bit of prior nginx tinkering. I’ve been scouring the web for guides as specific to my use case as this one is, and this is the best one I’ve found.
    Cheers,
    Tim

  28. Samuel,

    Perfect guide on how to set this up! I followed it step by step and all works fine!
    I have created, according to this guide, a jail with this reverse proxy and a jail with nextcloud, which works like a charm!
    I also created a jail with a FAMP (Apache2) stack running WordPress. WordPress works fine if I go from my internal network to the IP address of the jail, but do you know what steps to take to have WordPress accessible from my external domain name? What would the vdomains file look like to have requests forwarded to the WordPress stack? Do I need to adapt the WordPress stack as well?

    1. I just did this very setup, here’s a cheat sheet:

      make your vdomains/example.com.conf:

      server {
          listen 443 ssl http2;
      
          server_name example.com;
      
          include snippets/example.com.cert.conf;
          include snippets/ssl-params.conf;
      
          location / {
              #include snippets/proxy-params.conf;
              proxy_set_header X-Forwarded-Proto https;
              proxy_pass http://192.168.1.221;
          }
      }
      

      If you are forwarding to www.example.com you do not need to change your SSL configuration, as the www subdomain is already covered by the wildcard. However, if you are using https://example.com you will need to add the bare domain to your Let’s Encrypt certificate:

      certbot certonly --dns-route53 -d 'example.com,*.example.com'

      Add the following lines to your wp-config.php:

      if (!empty($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
          $_SERVER['HTTPS'] = 'on';
      }
      define( 'WP_HOME', 'https://example.com' );
      define( 'WP_SITEURL', 'https://example.com' );

      1. Sam, before you approve moderation, can you please change my snippets/ .com domain on the above post and change it to example? I guess I didn’t proof-read, thanks!

  29. Hello, I have some questions.

    On a VM mounted on virtualbox, I have FreeNAS installed.
    This VM has a bridge configuration to take internet from my home network.
    I have created a jail, there I am configuring a reverse proxy to attend to all incoming requests to my freeNAS.
    My idea is to install a Let’s Encrypt SSL wildcard certificate in the jail with nginx. As investigated in:
    https://letsencrypt.org/es/docs/challenge-types/
    To be able to do it, Let’s Encrypt says that I have to do the validation by DNS (the DNS-01 challenge), creating some TXT records, and the DNS zone has to be accessible via API. They display a list of supported DNS services:
    https://community.letsencrypt.org/t/dns-providers-who-easily-integrate-with-lets-encrypt-dns-validation/86438
    In my case I plan to use Cloudflare.

    My # 1 question is:
    The proxy must be assigned a public IP so that it can resolve the DNS, but the jail has a local IP configured. In the jail I have a VNET + NAT configured without DHCP (fixed local IP).
    Should I use a Dynamic DNS service to be able to link my dynamic IP (from the ISP) with the local IP of the jail and then do a port forwarding on my router? The problem I am having is that the jail is under another subnet, the Jail IP is 172.6.0.2.
    When I want to configure port forwarding on my router with IP 172.6.0.2 it gives me the error that it is not on the same network.

    My # 2 question is:
    In the jail where I have the reverse proxy, how can I link my domain? What steps should I take?

    My # 3 question is:
    My FreeNAS private IP is 192.168.0.105 (NAT)
    My Jail’s private IP (r-proxy) is 172.6.0.2 (NAT)

    If I ping from my PC to the jail, I cannot access it.

    You can help?

    Cheers

    1. Hi Alejandro, a few points:
      1. The reverse proxy jail does not need a public IP. It just needs the appropriate traffic forwarded to it from your router. If this is to host a web server, usually this means ports 80 and 443, though there are some more uncommon ports that may also be appropriate.
      2. A dynamic DNS service updates a DNS name server with your public IP, so that whatever domain name you have points to the correct IP if it is non-static (residential IPs usually change semi-regularly).
      3. The jail should not be under another subnet. The problem you’re having is that it literally is not on the same network, and you haven’t set up the routes to enable that. Best to give the jail an IP on your primary network to mitigate the need to implement any additional routing.
      4. In my guide, you would specify your domain name in the server_name directive in the appropriate vdomain file. Strictly speaking, you just need a server block. This is how you handle requests to a given domain name.
      5. To make sure your server gets the requests, you need to configure a DNS entry for your domain (in Cloudflare) such that it points to your public IP.
      6. Make your reverse proxy jail IP 192.168.0.106.

      Spend some time going over the guide; I cover a lot of this in much more detail. It might also be worth watching some videos on how DNS works, and how networking works, to understand some of the principles if this guide hasn’t been sufficient.

      Hope this helps.
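
      Regarding point 6, a sketch of how you could move the existing jail onto your primary subnet with iocage (assuming the jail is named reverse-proxy as in the guide; adjust the IP and router to your network):

        iocage stop reverse-proxy
        iocage set ip4_addr="vnet0|192.168.0.106/24" reverse-proxy
        iocage set defaultrouter="192.168.0.1" reverse-proxy
        iocage start reverse-proxy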

      1. I have successfully installed the Let’s Encrypt certificate with certbot in my reverse proxy with nginx, in a jail in FreeNAS, using the --manual method (I am not using the Cloudflare plugin because the API is no longer accessible for free accounts).

        ...
        Press Enter to Continue
        Waiting for verification ...
        Cleaning up challenges
        
        IMPORTANT NOTES:
         - Congratulations! Your certificate and chain have been saved at:
           /usr/local/etc/letsencrypt/live/redacted/fullchain.pem
           Your key file has been saved at:
           /usr/local/etc/letsencrypt/live/redacted/privkey.pem
           Your cert will expire on 2020-09-22. To obtain a new or tweaked
           version of this certificate in the future, simply run certbot
           again. To non-interactively renew * all * of your certificates, run
           "certbot renew"
         - If you like Certbot, please consider supporting our work by:
        
        Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
           Donating to EFF: https://eff.org/donate-le
        ...
        

        The problem I am having is that when I run the command:

        openssl s_client -connect redacted:443
        

        Details of the FreeNAS self-signed certificate appear to me, not the certificate that I installed in the jail corresponding to redacted:

        ...
        root@reverse-proxy:~ # openssl s_client -connect redacted:443
        CONNECTED(00000004)
        depth=0 C = US, O = iXsystems, CN = localhost, emailAddress = info@ixsystems.com
        verify error:num=18:self signed certificate
        verify return:1
        depth=0 C = US, O = iXsystems, CN = localhost, emailAddress = info@ixsystems.com
        verify return:1

        Certificate chain
         0 s:/C=US/O=iXsystems/CN=localhost/emailAddress=info@ixsystems.com
           i:/C=US/O=iXsystems/CN=localhost/emailAddress=info@ixsystems.com
        
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIDKTCCAhGgAwIBAgIBATANBgkqhkiG9w0BAQsFADBYMQswCQYDVQQGEwJVUzES
        MBAGA1UECgwJaVhzeXN0ZW1zMRIwEAYDVQQDDAlsb2NhbGhvc3QxITAfBgkqhkiG
        9w0BCQEWEmluZm9AaXhzeXN0ZW1zLmNvbTAeFw0yMDA1MzAxNzQ2NTZaFw0yMjA5
        MDIxNzQ2NTZaMFgxCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlpWHN5c3RlbXMxEjAQ
        BgNVBAMMCWxvY2FsaG9zdDEhMB8GCSqGSIb3DQEJARYSaW5mb0BpeHN5c3RlbXMu
        Y29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA06hASiO87SyiCsKd
        tqDe2Ao3sUubKdWJNUd8bA++Yh4WEpQ5AVxAGgw/labzdmdrCLFtf6JcEA9pt8Hh
        p5WUPIFIfggxV8cw34irlnP+RFOOInOAIFIVVg4zBmm867X1S7l4X5+7KNxsvJkS
        JffPrwQu78PcT0ryst1AVgUWyy45KnsXwN+1/RzlD3wpxCNxoUbsmVWmlsvXt+Zu
        AAsTNhad+R7SR/GgkXVikO0MzdrXQtQP4MjlXXK1nOtaQUSNnh51pC7YevCILKIe
        JhPVv2ThFPKfd4AN0MyAZYDlUeMZ/FQ4CVdwVGbEF+KYHTYEXd+U3F74KLwkHw1Z
        UKjUjwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQCs89NZfM6D3Xjbz35mGXmHohmN
        tAmVTBy8RpDJtaM5fVFSSbc0gUx+Mzok+jXZpNLFUCfjFlop7bTTJFBjS91E8U53
        S6yNDTgJz6lz3sh1K9KvgERuxs2GF/BhDEF7CjZI+/3h1VDeQcMuXnaRM/eUVR7B
        99sSInqclxqHZkmSSUA9deLrq2fZ/NaKUbcAIQXQrzgKA2hzDM5pKMMAcgaC+bJt
        Jt21CVRqbZs9I8E+OfBouwOO1LS8zCdMote2xMcNmMftGwFjmZgM+QteL+DrWUaz
        jws+sCW5Ro/dkjLJNPwuaI2Qeb5GjvFrztvGpKzABfpyLqW12mFxxxxxxxxx
        -----END CERTIFICATE-----
        subject=/C=US/O=iXsystems/CN=localhost/emailAddress=info@ixsystems.com
        
        issuer=/C=US/O=iXsystems/CN=localhost/emailAddress=info@ixsystems.com
        
        No client certificate CA names sent
        Peer signing digest: SHA512
        
        Server Temp Key: ECDH, P-256, 256 bit
        
        SSL handshake has read 1488 bytes and written 433 bytes
        
        New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
        Server public key is 2048 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        No ALPN negotiated
        SSL-Session:
            Protocol: TLSv1.2
            Cipher: ECDHE-RSA-AES256-GCM-SHA384
            Session-ID: 6C08C6604EEF1468C83664C444812680E7EFB5292BA9243B692BC8EC9571DDC6
            Session-ID-ctx:
            Master-Key: 80B794193E13CBF87018CFB650F87273AA7C04BF10A3948D70F20CD126B5645A60D0DD76E37CC5C6254E68C1ADEEFE3F
            Key-Arg: None
            PSK identity: None
            PSK identity hint: None
            SRP username: None
            TLS session ticket lifetime hint: 7200 (seconds)
            TLS session ticket:
            0000 - 29 3e 80 97 df 5c a8 69-20 82 4b df bc 3f 85 42   )>...\.i .K..?.B
            0010 - 1d d2 4f f6 56 bc 8a 0d-d7 1f 02 e6 11 06 45 e0   ..O.V.........E.
            0020 - e0 e6 fa 02 69 a1 ea 34-53 e9 09 b0 d3 01 fd 73   ....i..4S......s
            0030 - 69 32 9a 74 1b 26 35 05-29 d3 8c 2d ad fa d4 fe   i2.t.&5.)..-....
            0040 - d9 72 f0 18 ce 93 07 36-ab a4 e3 e7 0c 30 2b 8e   .r.....6.....0+.
            0050 - b0 5c f8 b5 84 9d 73 38-3f 03 18 bd 86 00 87 e9   .\....s8?.......
            0060 - 05 ae 35 e4 48 19 c4 c2-e4 5d 77 eb ea fd 24 bb   ..5.H....]w...$.
            0070 - 38 79 2b 4f f2 b2 ba 6e-dc 19 e3 1d 5a f1 cb f3   8y+O...n....Z...
            0080 - 9a 75 5c 93 19 34 6f 58-4b 46 e5 1b 87 35 75 0e   .u\..4oXKF...5u.
            0090 - 85 79 62 e5 9d ba 51 d0-42 5f 34 12 d7 41 38 91   .yb...Q.B_4..A8.
            00a0 - 73 1d 1c bd 8a 4c 9e f1-1f 9c 31 1e b4 3b ad ed   s....L....1..;..
        
        Start Time: 1592971138
        Timeout: 300 (sec)
        
        Verify return code: 18 (self signed certificate)
        
        closed
        root@reverse-proxy:~ #
        ...
        

        I have configured my jail’s nginx.conf so that it listens on port 443:

        ...
        #user  nobody;
        worker_processes  1;
        
        # This default error log path is compiled-in to make sure configuration parsing
        # errors are logged somewhere, especially during unattended boot when stderr
        # isn't normally logged anywhere. This path will be touched on every nginx
        # start regardless of error log location configured here. See
        # https://trac.nginx.org/nginx/ticket/147 for more info.
        
        #
        #error_log  /var/log/nginx/error.log;
        #
        
        #pid        logs/nginx.pid;
        
        events {
            worker_connections  1024;
        }
        
        http {
            include       mime.types;
            default_type  application/octet-stream;
        
        #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
        
        #'$status $body_bytes_sent "$http_referer" '
        
        #'"$http_user_agent" "$http_x_forwarded_for"';
        
        #access_log  logs/access.log  main;
        
        sendfile        on;
        #tcp_nopush     on;
        
        #keepalive_timeout  0;
        keepalive_timeout  65;
        
        #gzip  on;
        
        server {
            listen       80;
            server_name  redacted;
            return 301 https://$server_name$request_uri;
        
        #charset koi8-r;
        
        #access_log  logs/host.access.log  main;
        
        location / {
            root   /usr/local/www/nginx;
            index  index.html index.htm;
        }
        
        #error_page  404              /404.html;
        
        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/local/www/nginx-dist;
        }
        
        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}
        
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}
        
        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
        
        }
        
        #listen       8000;
        #listen       somename:8080;
        #server_name  somename  alias  another.alias;
        #location / {
        #   root   html;
        #   index  index.html index.htm;
        #}
        #}
        
        #HTTPS server
        
        #
        #server {
        
        #   listen       443 ssl;
        #   server_name  redacted;
        
        #   ssl_certificate     /usr/local/etc/letsencrypt/live/redacted/fullchain.pem;
        #   ssl_certificate_key /usr/local/etc/letsencrypt/live/redacted/privkey.pem;
        #   ssl_session_cache    shared:SSL:1m;
        
        #   ssl_session_timeout  10m;
        #   ssl_ciphers  HIGH:!aNULL:!MD5;
        #   ssl_prefer_server_ciphers  on;
        #   location / {
        #       root   html;
        #       index  index.html index.htm;
        #   }
        
        #}
        
        }
        ...
        

        But by executing the following command, I get this result. Only port 80 is open:

        ...
        root@reverse-proxy:~ # netstat -an -p tcp | grep LISTEN
        tcp4       0      0 *.80                   *.*                    LISTEN
        ...
        

        I suspect the problem has to do with the CNAME record (redacted) pointing to a NO-IP dynamic DNS name. When that name is resolved, I end up at the general FreeNAS interface, when the correct behaviour would be to reach the jail I created.

        What could be the problem?

        My jail’s IP is 127.16.xxx.xxx (NAT), it is different from FreeNAS’s local IP (192.168.xxx.xxx). I understand that it is not possible to access the jail from the outside.

        In my router I can only define port forwarding to my FreeNAS at 192.168.xxx.xxx; I cannot do it with the jail’s IP, 127.xxx.xxx.xxx (NAT).
        What should I configure? Should I internally configure in FreeNAS a forward from 192.168.xxx.xxx/24 > 127.16.xxx.xxx/30?

        What advice can you give me?

        Regards

        1. Alejandro, I’ve edited your comment to redact your domain, and in the process I messed up some of the formatting. I’ve tried to reconstruct it, but it may not have been perfect so if I’ve added # in places it shouldn’t be, let me know.

          First of all, it doesn’t look like you’re using my guide. I don’t set my nginx.conf up this way. I’m not sure how I can be expected to support configurations that don’t follow my guide; I’m not an nginx guru or support channel.

          Secondly, this configuration shows all of your SSL parameters commented out. They aren’t in effect. You need to uncomment them if you expect a certificate to be issued.

          1. I have corrected the nginx.conf:


            #user nobody;
            worker_processes 1;

            # This default error log path is compiled-in to make sure configuration parsing
            # errors are logged somewhere, especially during unattended boot when stderr
            # isn't normally logged anywhere. This path will be touched on every nginx
            # start regardless of error log location configured here. See
            # https://trac.nginx.org/nginx/ticket/147 for more info.

            #
            #error_log /var/log/nginx/error.log;
            #

            #pid logs/nginx.pid;

            events {
            worker_connections 1024;
            }

            http {
            include mime.types;
            default_type application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            # '$status $body_bytes_sent "$http_referer" '
            # '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            #gzip on;

            server {
            listen 80;
            server_name r-proxy.nas.ethopolis.tech;
            return 301 https://$server_name$request_uri;

            #charset koi8-r;

            #access_log logs/host.access.log main;

            location / {
            root /usr/local/www/nginx;
            index index.html index.htm;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
            root /usr/local/www/nginx-dist;
            }

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            # proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            #location ~ \.php$ {
            # root html;
            # fastcgi_pass 127.0.0.1:9000;
            # fastcgi_index index.php;
            # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            # include fastcgi_params;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            # deny all;
            #}
            }

            # listen 8000;
            # listen somename:8080;
            # server_name somename alias another.alias;

            # location / {
            # root html;
            # index index.html index.htm;
            # }
            #}

            # HTTPS server
            #
            server {
            listen 443 ssl;
            server_name r-proxy.nas.ethopolis.tech;

            ssl_certificate /usr/local/etc/letsencrypt/live/r-proxy.nas.ethopolis.tech/fullchain.pem;
            ssl_certificate_key /usr/local/etc/letsencrypt/live/r-proxy.nas.ethopolis.tech/privkey.pem;

            ssl_session_cache shared:SSL:1m;
            ssl_session_timeout 10m;

            ssl_ciphers HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;

            location / {
            root html;
            index index.html index.htm;
            }
            }

            }

            But by executing the following command:

            openssl s_client -connect r-proxy.nas.ethopolis.tech:443

            I get the message “gethostbyname failure”:


            root@reverse-proxy:~ # openssl s_client -connect r-proxy.nas.ethopolis.tech:443
            gethostbyname failure
            connect:errno=0

          2. Yeah, I don’t know what to tell you dude. Follow the guide I wrote? It works; I didn’t write it for no reason.

        2. It looks like you have multiple problems here. I’d start with the networking issue, as nothing will work if you can’t get traffic to the jail. Like Samuel said, you have the jail on a completely different subnet and there are no routing rules anywhere to get the traffic to the jail; port forwarding that sends traffic to the FreeNAS IP will not do anything, as the jail is, in effect, a different box on a different network. While it is probably possible to put a janky forward rule in the FreeBSD firewall, it is probably better and easier to just reconfigure your jail to be on the same network. Once you fix this, I think you’d be surprised to find most of your other issues disappear.

  30. @ANGELO CORBO

    Do you still need help with an nginx setup? I’m sorry I didn’t see your questions until now.

  31. This was great! Thank you so much; I’ve been wanting to host some projects running on my FreeNAS publicly. I had a few issues setting up Route 53, but other than that all your steps were very easy to follow!

  32. Thanks very much for the guide. I do not have it working yet

    I got my wildcard cert up fine
    Nginx is running with no errors, used modern config for ssl
    In pfsense I could not figure out how to make my NAT look like your example…
    I should add that I can access Nextcloud on the local network using http when I put in a LAN rule. This was useful in reinstalling Nextcloud, which I did today.
    I can ping my router from the jail and vice versa.
    So, any suggestions would be super helpful. I’m inclined to think it is my routing.
    Route 53 confirms it’s working with the WAN addresses for pfsense

    Thanks in advance Nic Greene

    1. Nic, the modern configuration probably won’t work yet. From memory, the only protocol it lists is TLSv1.3, which requires OpenSSL 1.1.1. FreeBSD 11.3 doesn’t ship with this version, and so the nginx port isn’t built with compatibility for it out of the box. You could install the newer OpenSSL version and build the port manually against it if you so desire, or you could use the intermediate configuration. This thread has more information: https://forums.freebsd.org/threads/freebsd-11-tls-1-3.70968/

      See if using the intermediate configuration helps.
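
      To make that concrete, here is a sketch of the relevant protocol lines, assuming your ssl-params snippets roughly follow the Mozilla SSL configuration generator output:

        # ssl-params-intermediate.conf: still offers TLSv1.2, which the stock OpenSSL 1.0.2 in FreeBSD 11.x can negotiate
        ssl_protocols TLSv1.2 TLSv1.3;

        # A modern profile pins TLSv1.3 only, which needs nginx built against OpenSSL >= 1.1.1
        # ssl_protocols TLSv1.3;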

      Cheers.

      1. Dear Samuel,

        Thanks a lot . I’ll do that with the SSL Config,

        Other than that, I guess I am trying to debug the routing systematically; is there a way to figure out where things break down on a setup like this?

        Sincerely, Nic Greene

          1. This is so cool!

            I have these messages in Nextcloud:

            • The reverse proxy header configuration is incorrect, or you are accessing Nextcloud from a trusted proxy. If not, this is a security issue and can allow an attacker to spoof their IP address as visible to the Nextcloud. Further information can be found in the documentation.
            • Your web server is not properly set up to resolve “/.well-known/caldav”. Further information can be found in the documentation.

            I believe the CalDAV issue is addressed above.

            Has anyone had success with updating their Nextcloud config.php file with trusted proxy? Could you post how you set it up?

          2. Looks like you’ve got this solved, but note that this is addressed in the Nextcloud guide

  33. Hi Samuel,

    Thanks a lot for this and the nextcloud build tutorial; it worked almost like a charm.

    A question for the audience:

    My only issue, for now, is being stuck at the Account Access step when redirected from the v2 wizard authentication in the nextcloud desktop app.

    My nextcloud version is: 19.0.0 and the nextcloud desktop app version is:2.6.4

    Any help will be appreciated

    Thanks

  34. For trusted proxies: I’m running my reverse proxy on a different machine than my nextcloud (well, they are both virtualized). I also run pfSense as a router. Via a DHCP override I associated the physical IP addresses of both machines (Nextcloud, reverse proxy) with names:

    So for example
    Reverse Proxy – IP address – 10.0.1.86 – Name – reverseproxy.domain.com
    Nextcloud – IP address – 10.0.1.158 – Name – nextcloud.domain.com

    With this information, I manually edited the config.php file and added this to the file (/usr/local/www/nextcloud/config/config.php)

    'trusted_proxies' =>
      array (
        0 => 'reverseproxy.domain.com',
        1 => '10.0.1.86',
      ),

    There is a way to use the command line to do this to avoid syntax errors, but I just found it easier to do manually. Please be very careful of the syntax, since I think a malformed config.php file can render your nextcloud installation inoperable. I also don’t know if both the name and the IP address are required (possibly you could use just one or the other). This should, however, give you a starting point.

    Hope that helps.
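
    If you do want to go the command-line route mentioned above, here is a hedged sketch using Nextcloud’s occ tool (the path and web user assume the FreeBSD/FreeNAS Nextcloud jail layout; adjust them to your install):

      # Run inside the nextcloud jail, as the web server user
      su -m www -c "php /usr/local/www/nextcloud/occ config:system:set trusted_proxies 0 --value='10.0.1.86'"
      su -m www -c "php /usr/local/www/nextcloud/occ config:system:get trusted_proxies"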

    1. Kevdog – that’s helpful – if the reverse proxy, i.e. this NGINX setup in a nextcloud jail, doesn’t have a name, then just use the ip address of the Reverse Proxy?

      1. I tried this with a DHCP override too and had no luck; it seemed to bork my config.php file. I did check the syntax and fixed the funky single quotes, but still no joy.
        Basically my reverse proxy is on 192.168.1.xx and nextcloud is on 192.168.1.yy – how could you express that as a trusted proxy statement?

        1. @Nic Greene

          Be very careful of the spacing:

          'trusted_domains' =>
            array (
              0 => '192.168.1.yy',
              1 => 'nextcloud.gohilton.com',
            ),
          'trusted_proxies' =>
            array (
              0 => '192.168.1.xx',
            ),

          Make sure to backup your config.php prior to editing and if you have syntax error, we can try something else. Hard to know since you haven’t posted the error you are getting.

          1. Sorry, here is the corrected syntax; possibly the previous post could be redacted or deleted.

            'trusted_domains' =>
              array (
                0 => '192.168.1.yy',
              ),
            'trusted_proxies' =>
              array (
                0 => '192.168.1.xx',
              ),

  35. Sorry, here is the corrected syntax; possibly the previous post could be redacted or deleted.

    'trusted_domains' =>
      array (
        0 => '192.168.1.yy',
      ),
    'trusted_proxies' =>
      array (
        0 => '192.168.1.xx',
      ),

    1. Thanks – yes it was the syntax. I ended up copying and pasting the trusted domain statement and altering it carefully. Thank you – all checks passed now!

  36. Samuel –
    OK, I have everything working now and this is great – I added a subdomain for my Home Assistant RPi easily, using the same domain and a different A record. Awesome.

    I have one question – given that the Update Route 53 script/setup in your Nextcloud guide is basically only really pertinent to this reverse proxy now, and it is this proxy that is scripted to update my SSL certs etc., I don’t see why I couldn’t set that scripting up in this jail? Is there a reason you wouldn’t do that?

    1. This sounds like a reasonable thing to do, Nic. I might raise an issue on GitHub to move it from the nextcloud jail to the reverse proxy jail in a future update.

  37. I am following your guide.
    To be able to connect to the jail from outside, do I have to have pfSense? Or can I set special settings in FreeNAS?
    In my case I don’t have pfSense. Is it necessary to connect to the VLAN of the jail from outside?
    I am not able to connect to the internet from within the jail, nor can I access the jail from the outside.

    Regards

    1. Alejandro, the configuration you’ve posted so far doesn’t follow anything I’ve specified in my guide, so try to get the configuration I’ve specified working and you might have more luck. As Josh has mentioned, the networking is going to be the place to start. When creating the jail, you specified a value for the defaultrouter parameter (probably 192.168.0.1). This should be the IP address of your router. Assuming this is the IP address, your jail has to be on the same subnet. This means that in the jail creation command, you should be specifying something similar to the following:

      iocage create -n reverse-proxy -r 11.3-RELEASE ip4_addr="vnet0|192.168.0.110/24" defaultrouter="192.168.0.1" vnet="on" allow_raw_sockets="1" boot="on"
      

      If you create it with ip4_addr="vnet0|172.6.0.2/24" as you have indicated, it will not work unless you put routing rules in place to make this network accessible. I strongly advise against attempting to do this, as it seems like you’re new to networking and it’s an unnecessary complication. Better to start with the basics. This will give you internet access within the jail.

      Now, to connect to a domain from the outside, you need two things:
      1. A DNS A record entry to point at your public IP address (mine is with Route 53, other popular services include Cloudflare or Dynamic DNS services)
      2. A router that is capable of forwarding traffic using port forwards. I use pfSense for this, but it is not necessary – many routers should have this functionality but you’ll need to work out how to do it yourself.

      When you connect to another computer via TCP, you use a socket. A socket is an IP:port pair, for example 36.12.234.48:443. Port 443 is a common port, because this is the default port used for HTTPS connections. Since each DNS A record entry just points to an IP address, and you may have multiple subdomains, e.g. bitwarden.example.com and cloud.example.com, you end up with multiple subdomains pointing to the same IP address; both of these will resolve to the socket 36.12.234.48:443 when accessed via a web browser. The problem posed here is: if we have multiple services that all point to the same IP, how can we differentiate them? The solution is the reverse proxy. It uses the ‘Host’ header as a differentiator, which contains the subdomain name specified (e.g. cloud.example.com; this is the value specified in the server_name directive). This means that we need the reverse proxy to handle the traffic. To do this, we need to accept the traffic at the router and redirect it to the reverse proxy jail. This is what a port forward does.
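
      If you want to see this in action, curl can pin a hostname to an IP so you can watch the same socket serve different sites purely on the basis of the name (the IP below is the placeholder from the example above):

      # Both commands hit the same IP:port, but the proxy routes them to different backends
      curl --resolve cloud.example.com:443:36.12.234.48 https://cloud.example.com/
      curl --resolve bitwarden.example.com:443:36.12.234.48 https://bitwarden.example.com/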

      So to answer your question: no, you don’t need pfSense, though you do need a router capable of port forwarding. It’s worth noting that you could also set up a pfSense VM/jail and use that as your router. I don’t love this solution because it means connecting the unfiltered internet directly into your NAS, so you would want to make sure you have a separate NIC and have that NIC only available to the pfSense jail/VM, but this poses its own issues and probably requires a lot of familiarity with networking and *nix.

      There is nothing you can do from within FreeNAS to replace the role of the router, short of setting up a router in a VM or a jail.

  38. Very good tutorial, amazing, thank you. I have a question: I have a Comodo certificate, and GoDaddy for DNS. How do I set this up using that certificate? Thank you

  39. Hello Samuel and others!
    My nginx reverse proxy that I built using this guide is working great, but I’m trying to work through an issue I’m having.
    I’m not sure if there are any folks using Standard Notes, but I’m setting up a syncing server on my debian machine. The nginx is on my FreeNAS machine, and the standard notes server is on a separate debian machine. The server is running and working well. All notes are able to sync via windows, web, and iOS using my FQDN.

    I’m having issues with the extensions piece. It has to point to a specific folder on the debian machine located at: /home/phil/standardnotes-extensions/public. My nginx vdomain file is pasted below. I can navigate to the sync server just fine using notes.mydomain.com, but when I try to navigate to notes.mydomain.com/extensions/index.json I get a 404 (directory or file does not exist). I know the path is correct and the file does exist, and I can cat the index.json items just fine. Any help you can provide would be fantastic. Thank you in advance!

    Error log from nginx:
    2020/08/28 11:38:38 [error] 31289#102740: *48868 open() “/home/phil/standardnotes-extensions/public/index.json” failed (2: No such file or directory), client: 192.168.150.1, server: notes.mydomain.com, request: “GET /extensions/index.json HTTP/2.0”, host: “notes.mydomain.com”

    nginx vdomain file for the sync server:
    server {
        listen 443 ssl http2;

        server_name notes.mydomain.com;
        access_log /var/log/nginx/notes.access.log;
        error_log /var/log/nginx/notes.error.log;

        include snippets/mydomain.com.cert.conf;
        include snippets/ssl-params.conf;

        location / {
            include snippets/proxy-params.conf;
            proxy_pass http://192.168.150.20:3000;
        }

        location ^~ /extensions {
            autoindex off;
            alias /home/phil/standardnotes-extensions/public;
            # CORS HEADERS
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
                #
                # Custom headers and headers various browsers *should* be OK with but aren't
                #
                add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
                #
                # Tell client that this pre-flight info is valid for 20 days
                #
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain; charset=utf-8';
                add_header 'Content-Length' 0;
                return 204;
            }
            if ($request_method = 'POST') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
                add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
                add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
            }
            if ($request_method = 'GET') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
                add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
                add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
            }
        }
    }

    1. Hey Kevdog. You’ve been very helpful in the past.
      My nginx machine is on 192.168.150.15.
      My debian machine is on 192.168.150.20

  40. Figured it out; turns out it was DNS that was making trouble. Once I set Cloudflare to full encryption, everything is fixed.

    1. Hi zibellon, that’s far from a quick question and pretty far afield from the content I’ve presented, but here are some links that should direct your research:
      https://forums.freebsd.org/threads/install-mod_security-on-nginx-webserver.53286/
      https://www.nginx.com/blog/compiling-and-installing-modsecurity-for-open-source-nginx/
      https://www.freshports.org/security/modsecurity3-nginx/
      https://github.com/SpiderLabs/ModSecurity-nginx

      You can use pkg search $KEYWORDS to identify what the appropriate packages in the freebsd repositories might be.

      Hope this helps.

      Cheers.

  41. Suggestion: add log rotation; after a couple of months you will get too much history there.
    For me it’s a personal server with low traffic, so I set it to rotate once a month; you can edit $M1D0 to change that.
    https://gist.github.com/killercup/5698316

    how to: type

    nano /etc/newsyslog.conf.d/nginx-reverse.conf

    paste:

    # log rotation for nginx reverse proxy
    # logfilename          [owner:group]  mode  count  size  when   flags  [/pid_file]         [sig_num]
    /var/log/nginx/*.log                  640   7      *     $M1D0  GB     /var/run/nginx.pid  30
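
    To sanity-check the new rule without waiting for the rotation time, you can ask newsyslog for a verbose dry run against just that file (a sketch; -n shows what would happen without actually rotating anything):

    newsyslog -n -v -f /etc/newsyslog.conf.d/nginx-reverse.conf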

  42. Hi, I have this running and working, but I can’t work out how to get the reverse proxy working for multiple servers on the same subnet. How does nginx know what you are wanting when you just go to https://domain, or do you need to go to https://domain/ombi? Do you need to create a proxy_setup.conf and get nginx.conf to use it?

    Any help would be must appreciated and guide was the best I have found

    Thanks,

    Jay

    1. Hi Jay, Nginx uses the Host header to determine where the request should go. If you type https://subdomain.domain.com into the URL bar in a browser, ‘subdomain.domain.com’ will populate the ‘Host’ header in the request the browser sends. This is matched in the server block against the server_name directive. In order to have multiple servers, you need an A record that corresponds to each server, and a server block in your nginx configuration. If you’ve followed my guide, this is satisfied by simply creating a new .conf file in the vdomains/ directory, i.e. vdomains/subdomain1.domain.com.conf and vdomains/subdomain2.domain.com.conf, with appropriate values for the server_name directives. You’ll also need to make sure that the proxy_pass directive points to the actual IP of the server the service runs on. Whether these servers are on the same subnet or not is immaterial to this process provided you have the correct routing in place; if anything, having the servers on the same subnet actually makes everything easier.
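
      As a minimal sketch of what that looks like on disk (the subdomains, backend IPs and ports here are hypothetical):

      # vdomains/ombi.domain.com.conf
      server {
              listen 443 ssl http2;
              server_name ombi.domain.com;

              include snippets/domain.com.cert.conf;
              include snippets/ssl-params.conf;

              location / {
                      include snippets/proxy-params.conf;
                      proxy_pass http://192.168.0.20:3579;
              }
      }

      # vdomains/cloud.domain.com.conf
      server {
              listen 443 ssl http2;
              server_name cloud.domain.com;

              include snippets/domain.com.cert.conf;
              include snippets/ssl-params.conf;

              location / {
                      include snippets/proxy-params.conf;
                      proxy_pass http://192.168.0.21;
              }
      }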

      Hope this helps.

      Cheers,

      Sam

      Edit: I’ve just re-read your question. If you’re asking about how you can differentiate between servers pointing to the same URL, then you’re right, you can’t. This is not the point of a reverse proxy. You must have either a subdomain (https://subdomain.domain.com) OR a unique path (https://www.domain.com/servicename). The guide I’ve provided here is for the former.
