
If the cluster checks are clear, check the health of the AKS worker nodes. Determine the reason for any unhealthy node and resolve the issue.

1- Check the overall health of the worker nodes

A node can be unhealthy for various reasons. A common reason is that control plane to node communication is broken as a result of misconfigured routing and firewall rules. As a fix, you can allow the necessary ports and fully qualified domain names through the firewall according to the AKS egress traffic guidance. Another common reason is resource pressure on the node. In that case, add more compute, memory, or storage resources.

You can check node health in one of these ways:

- Azure Monitor - Containers health view: On the right pane, select Monitored clusters. Select the cluster to view the health of the nodes, user pods, and system pods.
- AKS - Nodes view: In the Azure portal, navigate to the cluster. Open the Node Conditions dashboard.

2- Verify the control plane and worker node connectivity

If the worker nodes are healthy, examine the connectivity between the managed AKS control plane and the worker nodes. Depending on the age and type of the cluster configuration, the connectivity pods are either tunnelfront or aks-link, and they're located in the kube-system namespace. If tunnelfront or aks-link connectivity isn't working, establish connectivity after checking that the appropriate AKS egress traffic rules have been allowed.

Check the logs and look for abnormalities:

kubectl logs -l app=aks-link -c openvpn-client --tail=50

This output shows logs for a working connection. If the logs show connection problems, restart the tunnelfront or aks-link pods. If restarting the pods doesn't fix the connection, continue to the next step.

You can also retrieve those logs by searching the container logs in the logging and monitoring service. This example searches Azure Monitor container insights to check for aks-link connectivity errors:

| where ClusterId =~ "/subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP/providers/Microsoft.ContainerService/managedClusters/YOUR_CLUSTER_ID"
| project LogEntrySource, LogEntry, TimeGenerated, Computer, Image, Name, ContainerID

Here's another example query to check for tunnelfront connectivity errors:

| where LogEntry has "ssh to tunnelend is not connected"

If you can't get the logs through kubectl or the queries, SSH into the node. This example finds the tunnelfront pod after connecting to the node through SSH:

kubectl get pods -n kube-system -o wide | grep tunnelfront
docker exec -it /bin/bash -c "ping "

3- Validate DNS resolution when restricting egress

DNS resolution is a critical component of your cluster. If DNS resolution isn't working, then control plane errors or container image pull failures may occur.

kubectl get pods -n kube-system -o wide | grep 

Follow these steps to make sure that DNS resolution is working. Exec into the pod and use nslookup or dig, if those tools are installed on the pod. If the pod doesn't have those tools, start a utility pod in the same namespace and retry with the tools. If those steps don't show insights, SSH to one of the nodes and try resolution from there. This step will help determine whether the issue is related to AKS or to the networking configuration. If DNS resolves from the node, then the issue is related to Kubernetes DNS and not a networking issue. Restart Kubernetes DNS and check whether the issue is resolved.
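The node-health check in step 1 can be scripted for quick triage. This is a minimal sketch that flags any node whose STATUS column isn't Ready; the node names and output are a hypothetical sample, and on a real cluster you would populate the variable from `kubectl get nodes --no-headers` instead.

```shell
# Hypothetical sample of `kubectl get nodes --no-headers` output.
# On a live cluster: nodes="$(kubectl get nodes --no-headers)"
nodes='aks-nodepool1-000000   Ready      agent   12d   v1.27.3
aks-nodepool1-000001   NotReady   agent   12d   v1.27.3'

# Print every node whose STATUS (second column) is not exactly "Ready".
printf '%s\n' "$nodes" | awk '$2 != "Ready" {print $1, "->", $2}'
```

Any node printed here is a candidate for `kubectl describe node`, which shows the node conditions (MemoryPressure, DiskPressure, and so on) behind the unhealthy status.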
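For step 2, the same failure string that the Azure Monitor query looks for ("ssh to tunnelend is not connected") can be grepped directly out of the connectivity pod logs. This sketch runs against a hypothetical log sample; on a live cluster you would feed it from `kubectl logs -l app=aks-link -c openvpn-client --tail=50`.

```shell
# Hypothetical log sample; replace with real output from:
#   kubectl logs -l app=aks-link -c openvpn-client --tail=50
logs='Initialization Sequence Completed
ssh to tunnelend is not connected, retrying
Peer Connection Initiated'

# Count lines containing the known tunnel failure message.
errors=$(printf '%s\n' "$logs" | grep -c 'ssh to tunnelend is not connected')
if [ "$errors" -gt 0 ]; then
  echo "tunnel errors found: $errors"
else
  echo "no tunnel errors in sampled logs"
fi
```

A nonzero count suggests the control plane tunnel is down, which points back at the egress rules check described above.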
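The DNS triage logic in step 3 boils down to two observations: does resolution work from a pod, and does it work from the node. This sketch encodes that decision as a small function with illustrative names (`classify_dns`, `pod_ok`, `node_ok` are not part of any tool; they stand in for the results of running nslookup in a pod and on the node).

```shell
# Sketch of the DNS triage decision (names are illustrative, not a real API).
# pod_ok:  "yes" if nslookup/dig succeeded from inside a pod
# node_ok: "yes" if resolution succeeded from the node over SSH
classify_dns() {
  pod_ok="$1"; node_ok="$2"
  if [ "$pod_ok" = "yes" ]; then
    echo "DNS is working"
  elif [ "$node_ok" = "yes" ]; then
    echo "issue is in Kubernetes DNS; restart Kubernetes DNS and recheck"
  else
    echo "issue is in the networking configuration (check egress rules)"
  fi
}

classify_dns no yes
```

The node-resolves-but-pod-doesn't branch is the one the text calls out: it isolates the failure to Kubernetes DNS rather than the network path.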
