A few weeks ago I found that an application in my AWS VPC using Kerberos authentication needed to do reverse DNS lookups for instances in my on-premises network. The problem was that when instances in my VPC did a reverse DNS lookup, the results were resolved by the AmazonProvidedDNS, returning names such as ip-10-1-1-101.ec2.internal rather than the host name known by the on-premises network. As a result, when Kerberos saw that the .ec2.internal host name did not match the expected host name, it would refuse to authenticate.
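The symptom looked something like this from an instance inside the VPC (using the example address from above):

# Reverse (PTR) lookup from an instance in the VPC
dig -x 10.1.1.101 +short
# With the default AmazonProvidedDNS this came back as the Amazon-provided
# name (ip-10-1-1-101.ec2.internal.) instead of the on-premises host name.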
My initial reaction was to change the DHCP Option Set for my VPC to use only the on-premises nameserver, and not the AmazonProvidedDNS nameserver. This change did fix my application, but the documentation warns:
“Services that use the Hadoop framework, such as Amazon EMR, require instances to resolve their own fully qualified domain names (FQDN). In such cases, DNS resolution can fail if the domain-name-servers option is set to a custom value.”
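For completeness, the change itself was just a new DHCP Option Set pointing at the on-premises nameserver and an association with the VPC; something along these lines, with placeholder resource IDs and a made-up nameserver address:

# Create a DHCP option set that hands out only the on-premises nameserver
# (10.1.1.2 is a placeholder), then attach it to the VPC.
aws ec2 create-dhcp-options \
  --dhcp-configurations "Key=domain-name-servers,Values=10.1.1.2"
aws ec2 associate-dhcp-options \
  --dhcp-options-id dopt-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0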
Since I was using EMR within my VPC, I had to dig deeper. The behavior I wanted was for reverse DNS lookups within my subnet to use the AmazonProvidedDNS, and for all other reverse DNS lookups to defer to my on-premises nameserver. This can be accomplished using Route 53 Resolver, and the solution was simple: presuming my VPC’s subnet was 10.100.4.0/24, I had to add two Resolver rules:
FORWARD 10.in-addr.arpa TO $ON_PREM_NAMESERVER
SYSTEM 4.100.10.in-addr.arpa
In this case, SYSTEM simply means to use the AmazonProvidedDNS, and because Resolver picks the rule with the most specific domain match, it overrides the broader FORWARD rule for lookups within the VPC’s own subnet.
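If you are setting this up from the CLI, it looks roughly like the following; the outbound Resolver endpoint (which FORWARD rules require), the VPC ID, and the on-premises nameserver address are all placeholders here:

# Forward all other 10.x reverse lookups to the on-premises nameserver.
# FORWARD rules need an existing outbound Resolver endpoint.
aws route53resolver create-resolver-rule \
  --creator-request-id onprem-reverse-dns \
  --name onprem-reverse-dns \
  --rule-type FORWARD \
  --domain-name 10.in-addr.arpa \
  --resolver-endpoint-id rslvr-out-0123456789abcdef0 \
  --target-ips "Ip=10.1.1.2,Port=53"

# Keep reverse lookups for the VPC's own subnet on the AmazonProvidedDNS.
aws route53resolver create-resolver-rule \
  --creator-request-id vpc-reverse-dns \
  --name vpc-reverse-dns \
  --rule-type SYSTEM \
  --domain-name 4.100.10.in-addr.arpa

# Each rule then needs to be associated with the VPC.
aws route53resolver associate-resolver-rule \
  --resolver-rule-id rslvr-rr-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0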
I hadn’t encountered .in-addr.arpa notation before and ended up reading through RFC 2317 to learn more. While some nameservers (e.g. BIND, Windows) support in-addr.arpa subnet notation such as 64/26.100.168.192.in-addr.arpa, I was not able to get any sort of subnet notation to work with Route 53; so if the subnet boundaries do not fit neatly into a size-256 chunk, then quite a lot of rules will need to be generated. RFC 2317 addresses this directly:
“For each size-256 chunk split up using this method, there is a need to install close to 256 CNAME records in the parent zone. Some people might view this as ugly; we will not argue that particular point. It is however quite easy to automatically generate the CNAME resource records in the parent zone once and for all, if the way the address space is partitioned is known.”
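The same trick applies here, just with Resolver rules instead of CNAMEs. As a hypothetical example (not my actual setup), if the VPC had occupied only 10.100.4.0/26 instead of the full /24, a quick loop could emit the 64 per-address SYSTEM rules needed to carve it out of the broader FORWARD rule:

# Hypothetical: generate one SYSTEM rule per address in 10.100.4.0/26,
# since Route 53 Resolver rules only match on whole in-addr.arpa labels.
for last_octet in $(seq 0 63); do
  echo "SYSTEM ${last_octet}.4.100.10.in-addr.arpa"
done

Each emitted line could then be turned into an aws route53resolver create-resolver-rule call like the ones above.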