Protecting your Internal Services with Nginx and OAuth2

françois Ruty
3 min read · Apr 16, 2018


When your company runs multiple internal services that coworkers need to access, using an SSO mechanism is good practice, so that people don't have to manage multiple passwords.
One of the most common SSO mechanisms is OAuth, for example if your company uses Google Apps.

In my current startup's infrastructure, we have multiple services such as pgWeb (a Postgres UI), the Concourse web UI, and other internal tools that people in our company need to use.
All of these services run in Docker containers and are interconnected via a VPN. I could issue OpenVPN keys to everyone (so that they could access the services directly over the VPN), but that would be a pain to manage, and it is a pain for users to remember VPN IP addresses to type into their browser.

So here is how we expose and protect all our internal services. We create a public DNS subdomain for each service. Each subdomain points to the same machine, our central server. On that server, an Nginx Docker container listens on port 80 and, depending on the subdomain in the request, routes it to the right container.
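As a sketch, the subdomain routing looks like this (the hostnames and upstream addresses below are placeholders, not taken from our actual config):

```nginx
# One server block per service; all blocks listen on the same port,
# and Nginx picks the block whose server_name matches the request's Host.
server {
    listen 80;
    server_name pgweb.example.com;              # placeholder subdomain
    location / {
        proxy_pass http://pgweb-container:8081; # placeholder upstream
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name concourse.example.com;          # placeholder subdomain
    location / {
        proxy_pass http://concourse-container:8080;
        proxy_set_header Host $host;
    }
}
```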

Then we use the awesome Nginx auth_request feature: Nginx can verify authentication via a subrequest to a third-party service. For this third-party service, we use Bitly's oauth2_proxy (https://github.com/bitly/oauth2_proxy), running inside a Docker container. We also created an OAuth2 client in Google Cloud, and we use its client ID, client secret, and redirect URL.
Instead of forwarding all traffic through this proxy, we can just send a subrequest to it from the Nginx config.
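Schematically, the auth_request mechanism works like this (a generic sketch with placeholder names, separate from our real config):

```nginx
location /protected/ {
    # Before serving anything here, Nginx fires a subrequest to /auth.
    # A 2xx response allows the request through; 401/403 denies it.
    auth_request /auth;
    proxy_pass http://some-internal-service;  # placeholder
}

location = /auth {
    internal;                       # not reachable from the outside
    proxy_pass http://auth-backend; # placeholder: whatever validates the session
    # The subrequest carries the original headers (e.g. cookies) but no body.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}
```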

Concretely, here is how it looks:

docker-compose.yml for oauth2 proxy:

version: '2'
services:
  authproxy:
    build: .
    ports:
      - "ip-of-oauth-proxy:4180"
    command: /usr/bin/oauth2_proxy --upstream=... --http-address="0.0.0.0:4180" --redirect-url=... --email-domain=... --cookie-secret=... --client-secret=${OAUTH2_SECRET} --client-id=${OAUTH2_KEY} --cookie-domain=...
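The --cookie-secret flag expects a random value; one common way to generate one (my suggestion here, not something the compose file above dictates) is:

```shell
# Generate a 32-byte random secret, base64-encoded (44 characters),
# suitable for oauth2_proxy's --cookie-secret flag.
openssl rand -base64 32
```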

Dockerfile to build the container:

FROM debian:stable-slim
RUN DEBIAN_FRONTEND=noninteractive apt-get update --fix-missing && \
    apt-get install -y --force-yes wget && \
    wget https://github.com/bitly/oauth2_proxy/releases/download/v2.2/oauth2_proxy-2.2.0.linux-amd64.go1.8.1.tar.gz && \
    tar xzvf /oauth2_proxy-2.2.0.linux-amd64.go1.8.1.tar.gz && \
    cp /oauth2_proxy-2.2.0.linux-amd64.go1.8.1/oauth2_proxy /usr/bin/oauth2_proxy
# Install CA certificates (needed for the proxy's outgoing HTTPS calls to the OAuth provider)
RUN apt-get update -y && apt-get install -y ca-certificates

Then we configured our Nginx load balancer to use this OAuth2 proxy as an authentication backend. When authentication succeeds, proxy_pass sends the traffic to its destination. Here is an example Nginx config:

server {
    listen 443;
    server_name XXXXX;
    include /etc/nginx/ssl-conf;
    ssl_certificate /etc/letsencrypt/live/XXXXX/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/XXXXX/privkey.pem;
    client_max_body_size 50M;
    gzip on;
    gzip_vary on;
    gzip_types application/json application/javascript text/css;

    location /oauth2/ {
        proxy_pass http://ip-of-oauth-proxy;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Auth-Request-Redirect $request_uri;
    }

    location = /oauth2/auth {
        proxy_pass http://ip-of-oauth-proxy;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        # nginx auth_request includes headers but not body
        proxy_set_header Content-Length "";
        proxy_pass_request_body off;
    }

    location / {
        auth_request /oauth2/auth;
        error_page 401 = /oauth2/sign_in;
        proxy_pass http://destination-for-your-traffic;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_http_version 1.1;
        proxy_connect_timeout 300;
        proxy_send_timeout 30;
        proxy_read_timeout 300;
        proxy_set_header Connection "";
    }
}
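One optional refinement: Nginx's auth_request_set directive can capture response headers from the auth subrequest and forward them upstream. If your oauth2_proxy version is configured to return the authenticated user's email (e.g. via its --set-xauthrequest flag — check the flags available in your release), the protected service can learn who is logged in. A sketch, under those assumptions:

```nginx
location / {
    auth_request /oauth2/auth;
    error_page 401 = /oauth2/sign_in;

    # Capture the header returned by the auth subrequest (the exact header
    # name depends on your oauth2_proxy version and flags) and pass it on.
    auth_request_set $auth_user $upstream_http_x_auth_request_email;
    proxy_set_header X-User $auth_user;

    proxy_pass http://destination-for-your-traffic;
}
```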

Thanks to Bitly's oauth2_proxy and Nginx's auth_request feature, you can protect all your internal services behind OAuth2 authentication with just two containers: the Nginx "front" web server that all incoming traffic goes through, and the OAuth2 proxy. The only cost is one extra block in the Nginx config for each service you want to protect.

Originally published at fruty.io on April 16, 2018.
