Like many of us, I have several services hosted at home. Most of my services run off Unraid in Docker these days, and a select few are exposed to the Internet behind Nginx Proxy Manager running on my OPNsense router.
I have been thinking a lot about security lately, especially with the services that are accessible from the outside.
I understand that using a reverse proxy like nginx increases security by being a solid, well-maintained service that accepts requests and forwards them to the internal server.
But how exactly does it increase security? An attacker would access the service just the same: accessing a URL opens the path to the upstream service. How does nginx come into play, given that it’s not visible and does not require any additional login (apart from things like geoblocking etc.)?
My router exposes ports 80 and 443 for nginx. All sites are HTTPS-only, redirect 80 to 443, and have valid Let’s Encrypt certificates.
A reverse proxy streamlines your approach of exposing web services.
The services are then, usually, only accessible by knowing a hostname or subdomain, not directly by visiting the IP address.
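Roughly, that name-based routing can be sketched like this (the hostnames and upstream addresses below are made up for illustration; nginx does the real thing with `server_name` blocks):

```python
# Hypothetical hostname-to-upstream map, the kind a reverse proxy keeps.
# A request is only forwarded if its Host header matches a known name;
# hitting the bare IP sends an unknown or empty Host and gets nothing.
UPSTREAMS = {
    "wiki.example.com": ("192.168.10.20", 8080),
    "photos.example.com": ("192.168.10.21", 2342),
}

def route(host_header: str):
    """Return the internal upstream for a Host header, or None (-> 404)."""
    return UPSTREAMS.get(host_header.strip().lower())

print(route("wiki.example.com"))  # ('192.168.10.20', 8080): forwarded inside
print(route("203.0.113.5"))       # None: raw IP, request is rejected
```

So a scanner that only knows your IP address never learns which names map to which internal services.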
The reverse proxy also manages your SSL certificates and ensures that HTTPS is provided. It also terminates SSL there: your end users always talk to the services over TLS-encrypted channels, while the reverse proxy itself can talk to the proxied services in plaintext. This increases speed and reduces load.
If you do not use any advanced configuration, a reverse proxy usually won’t provide any special security features out of the box. However, you can add various things into the mix, such as a WAF like ModSecurity, log monitoring solutions, or middlewares for IP whitelisting, rate limiting and so on.
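Rate limiting, for example, usually boils down to a token bucket per client IP. A toy sketch of the idea (the parameters are arbitrary; nginx’s `limit_req` does the real thing):

```python
import time

class TokenBucket:
    """Toy per-client rate limiter, like a reverse-proxy middleware applies."""
    def __init__(self, rate: float, burst: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = burst     # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the proxy would answer HTTP 429 here

limiter = TokenBucket(rate=1, burst=3)
results = [limiter.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]: burst of 3, then throttled
```

The point is that this logic lives in the proxy, so every backend gets it for free.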
As everything goes through the reverse proxy, you have a single point of entry and can easily manage access. You can easily combine it with other things too, like Cloudflare in front, an IdP like Authelia/Keycloak/Authentik for single sign-on, or CrowdSec/Fail2ban, which inspect the logs and ban misbehaving threat actors.
The logs are normalized and strictly formatted and do not vary between services. So analyzing logs, or applying security solutions based on those logs, is easier with a reverse proxy than with various individual web servers each doing and logging things their own way (Apache, Nginx, IIS and all the others).
The main use of a reverse proxy in this scenario is to impose restrictions on how the service can be accessed, for example rate limiting or filtering clients. As you rightfully pointed out, it requires more than just starting the reverse proxy and pointing it at a service; otherwise you would access the service in almost the same way (although by default NGINX does do a little bit of that already).
I look at a reverse proxy as being more of a convenience to the administrator than a more secure way of exposing services. I don’t feel warm and fuzzy about just letting my NPM instance hang out there, so I use pfBlockerNG in front of it for GeoIP blocking and threat lists. I also use CrowdSec and Fail2ban for additional security. My NPM VM also sits on an untrusted VLAN inside my network, with explicit rules about what it can reach on any segment of my internal network. It all boils down to your comfort level and the risks you are willing to accept.
I don’t know how “hackable” Nginx Proxy Manager, or any service I host from my home network, is. But you can add more layers of security than just a login page.
Starting from your home network, since you are running an OPNsense instance:
Put your web services in their own VLAN and set up rules so that the services cannot communicate with each other or with anything in your home network (sandbox them). You can even use Portainer and put every Docker container in its own Docker network, so they can no longer communicate with each other. Portainer also gives you a way to work behind Unraid’s “curtain”.
Use Cloudflare to hide your public IP and set up rules so ports 80/443 only accept incoming traffic from the Cloudflare proxy. This won’t work for things like Nextcloud or Plex, since that violates Cloudflare’s ToS. Just keep in mind that some things may not work properly (most do); for example, MeshCentral relies on the SSL certificates you issue on your NPM, so adding Cloudflare in front changes the hash of the SSL cert, which breaks validation of machines. But there are workarounds.
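The “only accept traffic from Cloudflare” rule lives in the firewall, but the check itself is simple. A sketch using Python’s stdlib `ipaddress` module (the two ranges below are placeholders; use the IP list Cloudflare actually publishes):

```python
import ipaddress

# Placeholder ranges for illustration; fetch the current list from Cloudflare.
CLOUDFLARE_RANGES = [ipaddress.ip_network(n) for n in (
    "173.245.48.0/20",
    "103.21.244.0/22",
)]

def from_cloudflare(client_ip: str) -> bool:
    """True if the source address falls inside an allowed proxy range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in CLOUDFLARE_RANGES)

print(from_cloudflare("173.245.48.10"))  # True: allowed through to NPM
print(from_cloudflare("198.51.100.7"))   # False: dropped at the firewall
```

Anyone scanning your real IP directly then never even reaches the proxy.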
Set up pfBlockerNG (I don’t know the equivalent service for OPNsense) with IP blocklists to block at least some or most known malicious IP addresses scanning your firewall. It can also be used to block whole countries. I ran a T-Pot instance for some time with pfBlockerNG IP blocklists in front, and most attempts I saw came from China, Russia and, if I remember correctly, Near Eastern countries. Also North America, mostly proxied Cloudflare IPs from there.
As an extra layer of security for login pages and brute-force protection: set up Authentik in front of your services and profit from single sign-on and multi-factor authentication. With Nginx Proxy Manager it’s fairly easy to set up. Either your service already supports SSO, or you set up forward auth for it. That way you can use things like OAuth/OpenID Connect, SAML, TOTP and user certificates to authenticate with your services, and all these authentication forms can be combined. It’s even possible to use your phone to authenticate with your fingerprint. For services which support OpenID Connect or SAML, you can just deactivate the normal user login; for the other services, use forward auth. Once authenticated with Authentik, you can access all services without authenticating again, and how long your login token stays valid is adjustable. I would go with a combination of user certificates plus a password or fingerprint for authentication; just keep in mind that you need a way to renew your user certs before they expire (I have not set up this form of authentication yet).
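The forward-auth pattern itself is small: before proxying a request, the proxy asks the IdP whether the session is valid. A rough sketch (`check_session` and the returned tuples are hypothetical stand-ins for the HTTP subrequest to Authentik and the proxy’s actions):

```python
# Hypothetical forward-auth flow. check_session stands in for the proxy's
# subrequest to the identity provider's auth endpoint.
def forward_auth(request: dict, check_session):
    if check_session(request.get("cookie")):
        # IdP answered 200: pass the request through to the upstream app.
        return ("proxy_to_upstream", request["path"])
    # Anything else: bounce the client to the IdP's login page.
    return ("redirect", "/auth/login")

valid_sessions = {"session=abc123"}          # stand-in session store
check = lambda cookie: cookie in valid_sessions

print(forward_auth({"cookie": "session=abc123", "path": "/app"}, check))
print(forward_auth({"cookie": None, "path": "/app"}, check))
```

The upstream app never sees the unauthenticated request at all, which is the whole brute-force benefit.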
In the end, you are just a normal consumer and not a company, so for the average “hacker” there is not much benefit in hacking your services, and even less when you provide some extra security. The only thing that comes to mind is an open VM they could use for something like farming crypto, or for building some kind of botnet to run DDoS attacks, or some other script-kiddie stuff.
You already have OPNsense running, so I assume you set it up in a proper way. You already have a more secure home network than 95% of the consumers out there.
Edit: Another thing: don’t set up DNS records like “thisismypasswordmanager.mydomain.com” for each service. Better to set up a wildcard subdomain “*.mydomain.com”, so it’s not obvious at first sight what you host. Then use your password manager and set different random passwords for everything, check on Have I Been Pwned whether your email and regular password combinations are compromised, and use a password for your password manager that you don’t use anywhere else. There are tons of lists out there, and people could try to brute-force your password manager.
I have added Authelia for MFA on my web services on top of normal authentication. In addition, I banned countries like Russia and China (I will never visit them anyway).
“How does nginx come into play even though it’s not visible”
Of course it’s “visible”. It’s the service that sits directly in front and answers the request. The user from the outside connects directly to nginx (or whatever reverse proxy you chose). If that proxy software has a security flaw, it could be exploited.
I’m a bit skeptical about running a proxy directly on something like OPNsense, for the simple reason of keeping it up to date: lots of OPNsense plugins lag a few versions behind their standalone counterparts. In the case of a security flaw in the proxy, that could be an issue.
Assuming the versions are identical, then sure, why not run it on the OPNsense box. It would maybe be ideal, though, to run it on a separate “device”: a dedicated VM, or at least a rootless container, something like that.
Ask /r/CyberSecurity, /r/CyberSecurityAdvice and /r/HomeNetworking, I guess.
The HTTP server code in a reverse proxy like nginx is very well tested, probably more so than whatever framework your individual apps are using. Many of those are not meant to be exposed publicly, e.g. Gunicorn (a popular Python server).
The reverse proxy should be the thing handling SSL, and you can be more confident that the implementation in any mainstream reverse proxy is at least as secure as, if not more secure than, the implementation in each individual server the proxy sits in front of.
It’s also more convenient for an admin, as all the SSL configuration is in one place instead of on every individual server.
Well, let’s imagine a case where you had to keep a really old application online, perhaps something still susceptible to the Heartbleed bug or the like. With a reverse proxy in place, the outside world would only be communicating with your proxy and would have little to no direct reach into the vulnerable system behind it.
A reverse proxy potentially protects you against some kinds of issues that could be exploited on the web servers behind it. Not all of them, but some. That said, you obviously should still keep everything behind your proxy as secure as possible.
Ideally your reverse proxy should be something minimal and different from what you are running on the backend. By that I mean, you wouldn’t get much benefit by having nginx in the front, and nginx in the back.
The theory is that your proxy has a smaller attack surface than your app server. The codebase can be more easily secured as it has less complexity because it only has a single relatively simple task.
There’s also expected to be less risk of lateral movement within your network if your proxy does get compromised because the proxy is supposed to have much less exposure to the rest of your network than your app server (proxy should only need https access to your app servers, your app server is likely to need access to database and/or other services).
But all of that assumes you are deploying your proxy in a secure manner and with best practices for network segmentation (which a home network is likely to lack).
Use a VPN. Nginx is not making your applications safer. There are thousands of scans each day on every IP in the world. One flaw and they are inside your network.
I have a Spring Boot app for scraping and storing certain information periodically. It has two exposed HTTP endpoints (one for scraping and one for returning stored results), and the app is configured to only accept connections from localhost. I expose the reading endpoint via nginx, while the scraping one is only called from cron and is not available to anything outside.
This setup also scales well - I can add new services without having to copy-paste any additional checks, all the configs for them are in the same nginx site, and if I ever add a login system, I can control access per-service from a single point.
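The localhost-only binding is the key part. Here is the same idea as a Python sketch (Spring Boot does the equivalent with the `server.address=127.0.0.1` property; port 0 below just lets the OS pick a free port for the demo):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ResultsHandler(BaseHTTPRequestHandler):
    """Stand-in for the read-only endpoint the proxy exposes."""
    def do_GET(self):
        body = b'{"results": []}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Binding to 127.0.0.1 (not 0.0.0.0) means only local processes -- the
# reverse proxy and the cron job -- can reach the app at all, even if a
# firewall rule slips. Port 0 = OS-assigned port, for demo purposes only.
server = HTTPServer(("127.0.0.1", 0), ResultsHandler)
print(server.server_address[0])  # 127.0.0.1
server.server_close()
```

Outside hosts can then only ever reach what nginx explicitly chooses to forward.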
“The services are then, usually, only accessible by knowing a hostname or subdomain, not directly by visiting the IP address.”
I get that this gives you the possibility of assigning its own domain to every port. But wouldn’t a port scan reveal all the ports in use?
For example: I have a reverse proxy which routes port 7000 to my server, and I gave this port route a custom domain, “mydomain.com”. Somebody does a port scan on mydomain.com. Wouldn’t they see that I’m using port 7000? Is the only benefit that they need to find that domain name in the first place?