Introduction

It seems I’m not the only person who runs into something weird when it comes to DNS name resolution :).

Recently I had a problem at work: we have a very secure datacenter. It has no direct access to the Internet and internally resolves only our company’s internal domain names. My team was doing a step-by-step migration to AWS, and we introduced CNAME records pointing our subdomains to AWS. Because we weren’t aware of how DNS works in this datacenter and we used a newly registered domain, the servers in this datacenter were not able to reach resources in AWS. And here is where my workaround comes into action.

Prerequisites

In our case, we needed to resolve both our internal domains to internal IPs and external domains to public ones. Since this datacenter has no direct Internet connection, only access through a proxy server, we decided to use DNS-over-HTTPS for the external part.

On the other hand, we still needed to resolve our internal domains through our in-house DNS server, so we also needed something to route DNS queries from our internal machines. The best choice for this turned out to be a classic DNS server: Bind9.

Implementation

Cloudflared

Since we wanted a universal solution, we decided to go with a few Docker containers. For the DNS-over-HTTPS use case, we chose cloudflared, a lightweight DNS-over-HTTPS proxy developed by Cloudflare. It also works perfectly with a proxy server in between.

For our use case, we chose the Docker image from visibilityspots.
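Before wiring everything together, you can smoke-test the image on its own. Here is a minimal sketch, assuming the PORT environment variable the image accepts (it also appears in the Compose file below) and a placeholder proxy address:

# Run cloudflared as a DNS-over-HTTPS proxy on port 5053,
# sending its outbound HTTPS traffic through the corporate proxy
docker run --rm -d --name cloudflared-test \
  -e PORT=5053 \
  -e https_proxy=<your_proxy_server_address> \
  -p 5053:5053/tcp -p 5053:5053/udp \
  visibilityspots/cloudflared

# Ask it to resolve a public name
dig @127.0.0.1 -p 5053 example.com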

Bind9

As I mentioned above, we used Bind9 to route DNS queries between our in-house and the public DNS servers. Here is the configuration file we used for our setup:

options {
    directory "/var/lib/bind";
    listen-on { any; };
    listen-on-v6 { none; };
    // By default, forward everything to cloudflared, which listens
    // on the host (the Docker network gateway) at port 5053
    forwarders { 172.18.0.1 port 5053; };
    allow-query { any; };
    pid-file "/var/run/named/named.pid";
    allow-recursion { any; };
    // Don't return AAAA records over IPv4, so IPv4-only clients
    // don't try to reach unreachable IPv6 addresses
    filter-aaaa-on-v4 yes;
};
// Internal zones are forwarded to the in-house DNS server instead
zone "internal1.domain." {
  type forward;
  forwarders { <your_in-house_dns_server>; };
};
zone "internal2.domain." {
  type forward;
  forwarders { <your_in-house_dns_server>; };
};
zone "internal3.domain." {
  type forward;
  forwarders { <your_in-house_dns_server>; };
};
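A note on the forwarders address: 172.18.0.1 is the gateway of the Docker network the Bind9 container is attached to, which is the host itself, and that is where cloudflared listens on port 5053 thanks to host networking. If your Docker network uses a different subnet, you can look the gateway up like this (the network name dns_default is a hypothetical Compose default and depends on your project name):

# Print the gateway IP of the network Compose created for the stack
docker network inspect dns_default \
  --format '{{ (index .IPAM.Config 0).Gateway }}'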

As the base image for our setup, we chose the image from fike.

Docker Compose

We used systemd in our setup to start the containers (a unit file sketch follows the Compose example), but here is an example docker-compose.yaml which could be used for this setup.

version: '3.3'

services:
  bind9:
    depends_on:
      - cloudflared
    image: fike/bind9
    volumes:
      - <path_to_your_config>/named.conf:/etc/bind/named.conf
    restart: always
    ports:
      - "53:53"
      - "53:53/udp"

  cloudflared:
    image: visibilityspots/cloudflared
    restart: always
    # Host networking exposes port 5053 directly, so no "ports" mapping
    # is needed here (Docker ignores published ports in host network mode)
    network_mode: host
    environment:
      PORT: 5053
      http_proxy: <your_proxy_server_address>
      https_proxy: <your_proxy_server_address>
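As mentioned, we drove the stack with systemd rather than running Compose by hand. Here is a minimal unit file sketch, where the unit name, the working directory, and the docker-compose binary location are assumptions to adjust for your host:

# /etc/systemd/system/dns-stack.service -- hypothetical unit name
[Unit]
Description=Bind9 + cloudflared DNS stack
After=docker.service
Requires=docker.service

[Service]
# Directory containing docker-compose.yaml (adjust to your host)
WorkingDirectory=/opt/dns-stack
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down
Restart=always

[Install]
WantedBy=multi-user.target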

Update the Compose file and the Bind9 configuration with your parameters and just run it. After that, you need to update your DNS configuration to point to the IP address of the server where these containers run.
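To verify that the split forwarding works, you can query Bind9 directly; the hostnames below are placeholders for your own records:

# External name: Bind9 forwards to cloudflared, which resolves over HTTPS
dig @<server_ip> example.com +short

# Internal name: Bind9 forwards to the in-house DNS server
dig @<server_ip> host.internal1.domain +short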

Conclusion

This setup could also be reused to run a local DNS server that resolves domains blocked inside your provider/office/country network.