Setting up NodeLocal DNS Cache
To reduce the number of DNS queries sent to the cluster DNS service in a Kubernetes cluster, enable NodeLocal DNS Cache.
Tip
If the cluster has more than 50 nodes, use automatic DNS scaling.
By default, pods send DNS queries to the kube-dns service: the nameserver field in a pod's /etc/resolv.conf file is set to the ClusterIP of kube-dns, and connections to that ClusterIP are established through iptables rules.
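This default is easy to see from inside any pod. Below is a minimal sketch of what a pod's /etc/resolv.conf typically looks like and how the nameserver value can be extracted; the ClusterIP 10.96.128.2 is an assumed example value, not one taken from this article:

```shell
# Hypothetical pod resolv.conf; 10.96.128.2 stands in for the kube-dns ClusterIP.
cat <<'EOF' > /tmp/resolv.conf
nameserver 10.96.128.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
EOF

# Print the DNS server the pod would query.
awk '/^nameserver/ { print $2 }' /tmp/resolv.conf
# -> 10.96.128.2
```

Inside a real pod the same check is simply `cat /etc/resolv.conf`.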
When NodeLocal DNS Cache is enabled, a caching agent runs on every node as a DaemonSet (node-local-dns). User pods now send queries to the agent running on their node. If a query is in the agent's cache, the agent responds directly. Otherwise, it opens a TCP connection to the kube-dns ClusterIP. By default, the caching agent sends cache-miss requests to kube-dns for the cluster.local cluster zone. This helps avoid iptables DNAT rules and connection tracking for DNS traffic and reduces the load on kube-dns.
To set up DNS query caching:
Getting started
Create the infrastructure
Create a Kubernetes cluster and a node group with public access to the internet.
Prepare the environment
- If you don't have the Nebius AI command line interface yet, install and initialize it.

  The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.

- Install kubectl and configure it to work with the created cluster.

- Retrieve the service IP address for kube-dns:

  ```bash
  kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'
  ```
Install NodeLocal DNS
- Create a file named node-local-dns.yaml. In the node-local-dns DaemonSet settings, specify the kube-dns service IP address:

  ```yaml
  # Copyright 2018 The Kubernetes Authors.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  # Modified for Nebius AI Usage
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: node-local-dns
    namespace: kube-system
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: kube-dns-upstream
    namespace: kube-system
    labels:
      k8s-app: kube-dns
      kubernetes.io/name: "KubeDNSUpstream"
  spec:
    ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
    selector:
      k8s-app: kube-dns
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: node-local-dns
    namespace: kube-system
  data:
    Corefile: |
      cluster.local:53 {
          errors
          cache {
              success 9984 30
              denial 9984 5
          }
          reload
          loop
          bind 169.254.20.10 <kube-dns service IP address>
          forward . __PILLAR__CLUSTER__DNS__ {
              prefer_udp
          }
          prometheus :9253
          health 169.254.20.10:8080
      }
      in-addr.arpa:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns service IP address>
          forward . __PILLAR__CLUSTER__DNS__ {
              prefer_udp
          }
          prometheus :9253
      }
      ip6.arpa:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns service IP address>
          forward . __PILLAR__CLUSTER__DNS__ {
              prefer_udp
          }
          prometheus :9253
      }
      .:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns service IP address>
          forward . __PILLAR__UPSTREAM__SERVERS__ {
              prefer_udp
          }
          prometheus :9253
      }
  ---
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: node-local-dns
    namespace: kube-system
    labels:
      k8s-app: node-local-dns
  spec:
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 10%
    selector:
      matchLabels:
        k8s-app: node-local-dns
    template:
      metadata:
        labels:
          k8s-app: node-local-dns
        annotations:
          prometheus.io/port: "9253"
          prometheus.io/scrape: "true"
      spec:
        priorityClassName: system-node-critical
        serviceAccountName: node-local-dns
        hostNetwork: true
        dnsPolicy: Default  # Don't use cluster DNS.
        tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - effect: "NoExecute"
          operator: "Exists"
        - effect: "NoSchedule"
          operator: "Exists"
        containers:
        - name: node-cache
          image: k8s.gcr.io/dns/k8s-dns-node-cache:1.17.0
          resources:
            requests:
              cpu: 25m
              memory: 5Mi
          args: [ "-localip", "169.254.20.10,<kube-dns IP address>", "-conf", "/etc/Corefile", "-upstreamsvc", "kube-dns-upstream" ]
          securityContext:
            privileged: true
          ports:
          - containerPort: 53
            name: dns
            protocol: UDP
          - containerPort: 53
            name: dns-tcp
            protocol: TCP
          - containerPort: 9253
            name: metrics
            protocol: TCP
          livenessProbe:
            httpGet:
              host: 169.254.20.10
              path: /health
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
          volumeMounts:
          - mountPath: /run/xtables.lock
            name: xtables-lock
            readOnly: false
          - name: config-volume
            mountPath: /etc/coredns
          - name: kube-dns-config
            mountPath: /etc/kube-dns
        volumes:
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: kube-dns-config
          configMap:
            name: kube-dns
            optional: true
        - name: config-volume
          configMap:
            name: node-local-dns
            items:
            - key: Corefile
              path: Corefile.base
  ---
  # A headless service is a service with a service IP, but instead of load-balancing
  # it will return the IPs of our associated Pods.
  # We use this to expose metrics to Prometheus.
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      prometheus.io/port: "9253"
      prometheus.io/scrape: "true"
    labels:
      k8s-app: node-local-dns
    name: node-local-dns
    namespace: kube-system
  spec:
    clusterIP: None
    ports:
    - name: metrics
      port: 9253
      targetPort: 9253
    selector:
      k8s-app: node-local-dns
  ```
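The `<kube-dns service IP address>` and `<kube-dns IP address>` placeholders can be filled in with a quick `sed` pass. The sketch below is not part of the official instructions: 10.96.128.2 is an assumed example ClusterIP, and the commands operate on a two-line excerpt written to /tmp so the real manifest stays untouched:

```shell
# Example ClusterIP; use the value returned by the kubectl command
# in the "Prepare the environment" section.
KUBE_DNS_IP="10.96.128.2"

# Two-line excerpt of the manifest containing both placeholder spellings.
cat <<'EOF' > /tmp/excerpt.yaml
        bind 169.254.20.10 <kube-dns service IP address>
        args: [ "-localip", "169.254.20.10,<kube-dns IP address>", "-conf", "/etc/Corefile" ]
EOF

# Substitute both placeholder forms in place.
sed -i.bak \
  -e "s/<kube-dns service IP address>/${KUBE_DNS_IP}/g" \
  -e "s/<kube-dns IP address>/${KUBE_DNS_IP}/g" \
  /tmp/excerpt.yaml

# Prints the bind line with the ClusterIP substituted.
grep 'bind' /tmp/excerpt.yaml
```

Once the ClusterIP has been retrieved, the same `sed` invocation can be run against node-local-dns.yaml itself.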
- Create resources for NodeLocal DNS:

  ```bash
  kubectl apply -f node-local-dns.yaml
  ```

  Result:

  ```text
  serviceaccount/node-local-dns created
  service/kube-dns-upstream created
  configmap/node-local-dns created
  daemonset.apps/node-local-dns created
  service/node-local-dns created
  ```
- Make sure that the DaemonSet is successfully deployed and running:

  ```bash
  kubectl get ds -l k8s-app=node-local-dns -n kube-system
  ```

  Result:

  ```text
  NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
  node-local-dns   3         3         3       3            3           <none>          24m
  ```
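When reading this output, the key columns are DESIRED and READY: they should match (and equal the number of nodes). A small sketch of that comparison, using the sample output above as canned input:

```shell
# Sample `kubectl get ds` output, as shown above; with a live cluster,
# pipe the real command's output in instead.
cat <<'EOF' > /tmp/ds.txt
NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-local-dns   3         3         3       3            3           <none>          24m
EOF

# Compare the DESIRED (column 2) and READY (column 4) counts.
awk 'NR == 2 { if ($2 == $4) print "all replicas ready"; else print "replicas missing" }' /tmp/ds.txt
# -> all replicas ready
```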
Change the NodeLocal DNS Cache configuration
To change the configuration, edit the corresponding ConfigMap. For example, to enable DNS query logging for the cluster.local zone:
- Run the following command:

  ```bash
  kubectl -n kube-system edit configmap node-local-dns
  ```

- Add the log line to the cluster.local zone configuration:

  ```yaml
  ...
  apiVersion: v1
  data:
    Corefile: |
      cluster.local:53 {
          log
          errors
          cache {
              success 9984 30
              denial 9984 5
          }
  ...
  ```
- Save your changes.

  Result:

  ```text
  configmap/node-local-dns edited
  ```
It may take several minutes to update the configuration.
Run DNS queries
To run test queries:
- Run the pod:

  ```bash
  kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
  ```

  Result:

  ```text
  pod/dnsutils created
  ```
- Make sure the pod status has changed to Running:

  ```bash
  kubectl get pods dnsutils
  ```

  Result:

  ```text
  NAME       READY   STATUS    RESTARTS   AGE
  dnsutils   1/1     Running   0          26m
  ```
- Connect to the pod:

  ```bash
  kubectl exec -i -t dnsutils -- sh
  ```
- Get the IP address of the local DNS cache:

  ```bash
  nslookup kubernetes.default
  ```

  Result:

  ```text
  Server:   <kube-dns IP>
  Address:  <kube-dns IP>#53

  Name:     kubernetes.default.svc.cluster.local
  Address:  10.96.128.1
  ```
- Run the following queries:

  ```bash
  dig +short @169.254.20.10 www.com
  dig +short @<kube-dns IP> example.com
  ```

  Result:

  ```text
  # dig +short @169.254.20.10 www.com
  52.128.23.153
  # dig +short @<kube-dns IP> example.com
  93.184.216.34
  ```
After node-local-dns launches, iptables rules are configured so that the local DNS cache responds on both addresses: <kube-dns service IP>:53 and 169.254.20.10:53.

The kube-dns service can also be accessed at a new address: the ClusterIP of kube-dns-upstream. You may need this address to configure request forwarding.
Check logs
Run the following command:
```bash
kubectl logs --namespace=kube-system -l k8s-app=node-local-dns -f
```
To stop displaying a log, press Ctrl + C.
Result:

```text
...
[INFO] 10.112.128.7:50527 - 41658 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097538s
[INFO] 10.112.128.7:44256 - 26847 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.057075876s
...
```
Stop the DaemonSet
To disable NodeLocal DNS Cache, delete its DaemonSet and the related resources:

```bash
kubectl delete -f node-local-dns.yaml
```
Result:
serviceaccount "node-local-dns" deleted
service "kube-dns-upstream" deleted
configmap "node-local-dns" deleted
daemonset.apps "node-local-dns" deleted
service "node-local-dns" deleted
Delete the resources you created
Delete the resources you no longer need to avoid paying for them: