I was experimenting with how I could expose applications in AWS Elastic Kubernetes Service (EKS) via Kubernetes Service resources and AWS load balancers. Out of curiosity, I also wanted to know if I could ssh into containers in EKS without using “kubectl exec” or any container runtime commands.
One scenario would be when I need to access the container’s filesystem to extract a log/config file, but 1) I do not have an EKS cluster admin role for more permissive actions, and 2) the kubectl environment is exposed via a structured CI/CD pipeline and is non-interactive.
This is how I envision the traffic flow:
User SSH connection <--> AWS Elastic Load Balancer <--> Service <--> Pod <--> Container
As I could not find any concrete examples/tutorials, here are my implementation setup and steps. Do note that allowing users to ssh externally into containers on an EKS cluster increases the attack surface for malicious users.
This post assumes:
- Some basic understanding of Kubernetes and EKS
- You have an EKS cluster with at least one public subnet
Step 1: Prerequisites
If not already done:
- Install and configure AWS CLI v1.18 or later (needed for the aws eks update-kubeconfig subcommand below)
- Install kubectl (an acceptable version for your cluster)
- Set up kubeconfig to point to the cluster:
$ aws eks update-kubeconfig --region <EKS_CLUSTER_REGION> --name <EKS_CLUSTER_NAME>
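Optionally, as a quick sanity check, you can confirm the tooling and cluster connectivity with standard AWS CLI/kubectl commands (the output will of course vary with your setup):

$ aws --version             # confirm AWS CLI v1.18 or later
$ kubectl version --client  # confirm a kubectl version compatible with your cluster
$ kubectl get nodes         # confirm the kubeconfig points at the intended EKS cluster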
Step 2: Find/Build an Image with openssh-server Installed and Running
Depending on your requirements, you may want to build your own container image running openssh-server and other utilities/services; for security reasons, you should. However, for the testing done in this post, I found s1ntaxe770r’s “ssh service inside a docker container” repo useful to avoid reinventing the wheel. The container image is hosted on GitHub Container Registry – the image name is ghcr.io/s1ntaxe770r/image.
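If you would like to smoke-test the image locally before deploying it to EKS (optional, and assuming you have Docker installed and use the “test” credentials preconfigured in the image’s Dockerfile), something like the following should work:

$ docker pull ghcr.io/s1ntaxe770r/image:latest
$ docker run -d --name ssh-test -p 2222:22 ghcr.io/s1ntaxe770r/image:latest
$ ssh test@localhost -p 2222   # log in with the image's preconfigured credentials
$ docker rm -f ssh-test        # remove the local test container afterwards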
Step 3: Define Kubernetes Service Resource
We will define a Service resource of type LoadBalancer that, when created in EKS, will spin up an ELB in a public subnet of the cluster. In my configuration below, TCP traffic sent to the Service/ELB port 2222 will be forwarded to port 22 of the Pod(s) that match the user: username label selector.
$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: workspace-svc
  labels:
    app: workspace
spec:
  type: LoadBalancer
  selector:
    user: username
  ports:
    - port: 2222
      targetPort: 22
      name: ssh
      protocol: TCP
Step 4: Define Kubernetes Pod Resource
Now, we’ll define a Pod running the container from Step 2. I used a StatefulSet here, but feel free to use the workload type of your preference (e.g. Pod, ReplicaSet, Deployment).
$ cat statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: workspace
spec:
  selector:
    matchLabels:
      user: username
  serviceName: workspace-svc
  replicas: 1
  template:
    metadata:
      labels:
        user: username
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: workspace
          image: ghcr.io/s1ntaxe770r/image:latest
          ports:
            - containerPort: 22
              name: ssh
              protocol: TCP
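Before applying the manifests, you can optionally have kubectl validate them without creating anything, using the standard dry-run flags:

$ kubectl apply -f service.yaml --dry-run=client
$ kubectl apply -f statefulset.yaml --dry-run=server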
Step 5: Apply and Test
Apply the above declarative YAML files in the cluster:
$ kubectl apply -f service.yaml
service/workspace-svc created
$ kubectl apply -f statefulset.yaml
statefulset.apps/workspace created
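Optionally, to reduce the attack surface mentioned earlier, you could restrict which source IPs may reach the ELB via the Service’s loadBalancerSourceRanges field. A sketch, assuming an illustrative CIDR of 203.0.113.0/24 (replace with your own network range):

$ kubectl patch service workspace-svc \
    -p '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24"]}}'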
Once you have waited a while for the workloads/resources to initialize, use kubectl to get the external address (the ELB DNS name) of the Service:
$ kubectl get all
NAME              READY   STATUS    RESTARTS   AGE
pod/workspace-0   1/1     Running   0          61s

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
service/kubernetes      ClusterIP      10.100.0.1      <none>           443/TCP          125m
service/workspace-svc   LoadBalancer   10.100.87.190   <ELB_DNS_NAME>   2222:31894/TCP   101s
Do note that the NodePort (i.e. 31894 in the above output) for your Service will be different, as Kubernetes randomly allocates it from the 30000-32767 range (the default range, if unchanged).
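If you only need the ELB DNS name (e.g. for scripting), a jsonpath query against the Service status returns it directly:

$ kubectl get service workspace-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'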
In the AWS console, you will notice that an ELB has been created in one of the public subnets of the EKS cluster. Being in a public subnet allows external traffic to reach the load balancer.
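If you prefer the CLI over the console, the classic ELB API can list it as well (the in-tree AWS cloud provider creates a Classic Load Balancer by default for LoadBalancer Services; the --query expression merely trims the output):

$ aws elb describe-load-balancers --region <EKS_CLUSTER_REGION> \
    --query 'LoadBalancerDescriptions[].{Name:LoadBalancerName,DNS:DNSName,Subnets:Subnets}'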
Now, attempt to ssh into the container via the <ELB_DNS_NAME> – our request will be passed to the Service, then to the selected Pod (and subsequently the container). Do note that the “test” credentials below belong to the user preconfigured in the image; refer to its Dockerfile for more details.
$ ssh test@<ELB_DNS_NAME> -p 2222
The authenticity of host '[<ELB_DNS_NAME>]:2222 ([<ELB_IP_ADDRESS>]:2222)' can't be established.
ECDSA key fingerprint is <SHA256_FINGERPRINT>.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[<ELB_DNS_NAME>]:2222,[<ELB_IP_ADDRESS>]:2222' (ECDSA) to the list of known hosts.
test@<ELB_DNS_NAME>'s password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 4.14.209-160.339.amzn2.x86_64 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

test@workspace-0:~$
Congratulations, you have managed to ssh into a container in your EKS cluster! 🙂
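Coming back to the original scenario of extracting a log/config file: with SSH working end to end, scp over the same ELB port does the job too (the file path below is only a placeholder):

$ scp -P 2222 test@<ELB_DNS_NAME>:/path/to/some.log .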
Step 6: Cleanup (Optional)
If you need to clean up the testing resources created in this post, run the kubectl delete subcommand:
$ kubectl delete -f service.yaml
service "workspace-svc" deleted
$ kubectl delete -f statefulset.yaml
statefulset.apps "workspace" deleted
You might also need to manually delete the ELB record in the AWS console.
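To confirm everything is gone, you can re-check from both the Kubernetes and AWS sides (the Service’s ELB should no longer be listed):

$ kubectl get statefulset,service
$ aws elb describe-load-balancers --region <EKS_CLUSTER_REGION>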