Running Amazon EKS behind Customer HTTP Proxy without NAT

February 28, 2020

Written by Praful Tamrakar and Manoj S Rao, Senior Cloud Engineers, Powerupcloud Technologies

Many enterprise customers use a proxy for indirect network connections to other network services.

One of our customers has the following network configurations in AWS:

  • No AWS NAT Gateway or Internet Gateway for outbound traffic
  • All traffic to the Internet must go via the proxy to reduce the attack surface, and all such traffic is monitored proactively.
  • All URLs outside the VPC must be whitelisted.

[Diagram: DNS and proxy resolution]

Problem Statement:

With the above networking configuration, we had a requirement to run an EKS cluster in private subnets. We faced multiple challenges (described below) with EKS connectivity.

To join worker nodes to an EKS cluster, we need to execute a bootstrap command through the user-data script on the EC2 instance. With our networking configuration, the kubelet was not able to start after executing the bootstrap command, and we faced connection timed out issues in the two scenarios below:

  1. When the kubelet service tried to pull the pod-infra-container image via the Docker API.
  2. When the kubelet service tried to call the EC2 and ECR APIs.

With the solution below, we were able to resolve both issues and run EKS behind the proxy successfully.

In this article, we elaborate on how to achieve and automate the configuration of an HTTP proxy for Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes with the help of user data.

Assumptions and prerequisites:

  1. This solution can be used with either Terraform or AWS CloudFormation, for the initial setup of EKS cluster worker nodes or for upgrading the worker nodes.

The CloudFormation script can be found at:

https://github.com/powerupcloud/kubernetes-spot-webinar/tree/master/provision-eks-worker-nodes

The Terraform script can be found at:

https://learn.hashicorp.com/terraform/aws/eks-intro

  2. You must edit the user data in both of the above methods with the solution mentioned below.
  3. If the EKS cluster API endpoint is in a private subnet and there is no NAT Gateway, set up VPC endpoints for Amazon EC2 and Amazon ECR; a CLI sketch follows this list. (Ensure that the security groups of the EC2 and ECR endpoints are the same as the worker node security group.)
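For reference, here is a minimal sketch of creating those interface endpoints with the AWS CLI; the VPC, subnet, and security group IDs (and the region) are placeholders for your own values:

#Create interface endpoints for EC2 and ECR in the worker-node VPC
REGION=us-east-1    #placeholder; use your cluster's region
for SERVICE in ec2 ecr.api ecr.dkr; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.$REGION.$SERVICE \
    --subnet-ids subnet-0abc1234 \
    --security-group-ids sg-0abc1234 \
    --private-dns-enabled \
    --region $REGION
done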

Resolution

1. Let’s find out the CIDR block of the cluster:

kubectl get service kubernetes -o jsonpath='{.spec.clusterIP}'; echo

This will return either 10.100.0.1 or 172.20.0.1, which means that your cluster IP CIDR block is either 10.100.0.0/16 or 172.20.0.0/16.
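If you prefer to script this lookup, the /16 block can be derived directly from the cluster IP; a small sketch:

#Derive the cluster service CIDR from the kubernetes service IP
CLUSTER_IP=$(kubectl get service kubernetes -o jsonpath='{.spec.clusterIP}')
CLUSTER_CIDR="$(echo $CLUSTER_IP | cut -d. -f1-2).0.0/16"
echo $CLUSTER_CIDR    #prints 10.100.0.0/16 or 172.20.0.0/16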

2. Let’s create a ConfigMap file named proxy-env-vars-config.yaml.

If the output from the command in step 1 has an IP from the range 172.20.x.x, structure your ConfigMap file as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system
data:
  HTTPS_PROXY: http://customer.proxy.host:proxy_port
  HTTP_PROXY: http://customer.proxy.host:proxy_port
  NO_PROXY: 172.20.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,.s3.amazonaws.com,.s3.<aws-region-code>.amazonaws.com

If the output from the command in step 1 has an IP from the range 10.100.x.x, then structure your ConfigMap file as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system
data:
  HTTPS_PROXY: http://customer.proxy.host:proxy_port
  HTTP_PROXY: http://customer.proxy.host:proxy_port
  NO_PROXY: 10.100.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,.s3.amazonaws.com,.s3.<aws-region-code>.amazonaws.com

3. Now create the ConfigMap (replace customer.proxy.host, proxy_port, VPC_CIDR_RANGE, and <aws-region-code> with your own values first):

kubectl apply -f /path/to/yaml/proxy-env-vars-config.yaml
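To confirm the ConfigMap was created with the values you expect, you can print it back:

kubectl get configmap proxy-environment-variables -n kube-system -o yaml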

Consider the following:

  • If you use a VPC endpoint, add its public endpoint subdomain to NO_PROXY (for example, an Amazon Simple Storage Service (Amazon S3) endpoint in the region where you run your EKS cluster).
  • You don’t need a proxy configuration for kube-dns, because the kube-dns pod communicates directly with the Kubernetes service.
  • Verify that the NO_PROXY variable in the proxy-environment-variables ConfigMap (used by the kube-proxy and aws-node pods) includes the Kubernetes cluster IP address space; a quick check follows this list.
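A quick way to perform that last check is to print the NO_PROXY value from the ConfigMap and look for the cluster CIDR from step 1 (shown here assuming the 10.100.0.0/16 range):

kubectl get configmap proxy-environment-variables -n kube-system -o jsonpath='{.data.NO_PROXY}' | grep 10.100.0.0/16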

4. Now we come to bootstrapping the worker nodes: configure the Docker daemon and the kubelet by injecting user data into your worker nodes.

We must update or create yum, Docker, and kubelet configuration files before starting the Docker daemon and kubelet.

The following user data is injected into worker nodes using an AWS CloudFormation template launched from the AWS Management Console; see Launching Amazon EKS Worker Nodes.

#Set the proxy hostname and port
PROXY="http://customer.proxy.host:proxy_port"
VPC_CIDR=VPC_CIDR_RANGE

#Create the docker systemd directory
mkdir -p /etc/systemd/system/docker.service.d

#Configure yum to use the proxy
cat << EOF >> /etc/yum.conf
proxy=$PROXY
EOF

#Set the proxy for future processes, and use as an include file
cat << EOF >> /etc/environment
http_proxy=$PROXY
https_proxy=$PROXY
HTTP_PROXY=$PROXY
HTTPS_PROXY=$PROXY
no_proxy=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,.<aws-region-code>.eks.amazonaws.com
NO_PROXY=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,.<aws-region-code>.eks.amazonaws.com
EOF

#Configure docker with the proxy
tee <<EOF /etc/systemd/system/docker.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF


#Configure the kubelet with the proxy
mkdir -p /etc/systemd/system/kubelet.service.d
tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

The second part of the user data sources the proxy variables and then runs the EKS bootstrap script:

#!/bin/bash
set -o xtrace

#Set the proxy variables before running the bootstrap.sh script
set -a
source /etc/environment

/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
/opt/aws/bin/cfn-signal \
    --exit-code $? \
    --stack ${AWS::StackName} \
    --resource NodeGroup \
    --region ${AWS::Region}
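Once a node boots with this user data, a few sanity checks on the node itself (over SSH or SSM) will confirm that the proxy configuration took effect:

#Run on a worker node after boot
cat /etc/environment            #proxy variables written by the user data
docker info | grep -i proxy     #Docker daemon proxy settings
systemctl cat kubelet           #shows the proxy.conf drop-in for the kubelet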

5. To update the aws-node and kube-proxy pods, run the following commands:

kubectl patch -n kube-system -p '{ "spec": {"template": { "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset aws-node

kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "kube-proxy", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset kube-proxy
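Both patches trigger a rolling update of the DaemonSets; you can watch them complete with:

kubectl rollout status daemonset/aws-node -n kube-system
kubectl rollout status daemonset/kube-proxy -n kube-system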

6. If you change the ConfigMap later, apply the updates, and then set the ConfigMap in the pods again to initiate a rolling update:

kubectl set env daemonset/kube-proxy --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'

kubectl set env daemonset/aws-node --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'

Note: You must reapply any YAML modifications to the kube-proxy or aws-node Kubernetes objects when these objects are upgraded. To update a ConfigMap to its default value:

With eksctl:

eksctl utils update-kube-proxy
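Depending on your eksctl version, you may need to pass the cluster name explicitly, and there is a matching command for the aws-node DaemonSet; for example:

eksctl utils update-kube-proxy --cluster <cluster-name> --approve
eksctl utils update-aws-node --cluster <cluster-name> --approve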

Warning:

If the proxy loses connectivity to the API server, then the proxy becomes a single point of failure and your cluster’s behavior can become unpredictable. For this reason, it’s best practice to run your proxy behind a service discovery namespace or load balancer, and then scale as needed.

And that’s all! Hope you found it useful. Keep following our Kubernetes series for more interesting articles.
