EKS Hybrid with Terraform: EC2 & Fargate

Naoufal EL GAFA
3 min read · Mar 10, 2023

It has been a long time since I last wrote a new story. There have been a lot of changes, a lot of stuff to manage, and many problems to resolve 😅.

But today, I am back with a new story where I will be talking about an EKS Hybrid Cluster with EC2 and Fargate, deployed with Terraform.

Context

When you join a new project at a large company, with a critical environment and a blueprint to maintain, you need a clear understanding of what has already been done before adding new features to that blueprint.

Furthermore, if you use the AWS EKS CLI, it installs all the necessary resources automatically, so there is nothing to customize 👌🏼.

The blueprint in question is an #EKS Blueprint with #EC2, and my task was to add the ability to use #EKS with #Fargate, keeping in mind that the Terraform EKS module has not been used in the blueprint yet.

Prerequisites:

There are a lot of tutorials that explain how to deploy EKS with Terraform, such as https://engineering.finleap.com/posts/2020-02-27-eks-fargate-terraform/.

However, in this story, I'll provide a SOLUTION to a significant problem that isn't mentioned in these tutorials when using #NodeGroups with #AWSLaunchTemplate and #Fargate 💪🏼.

4 Steps to Add Fargate:

  1. Create a Fargate Profile
  2. Create a Role with the principal service eks-fargate-pods.amazonaws.com
  3. Attach the managed policy arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy to this role
  4. Attach the role to the Fargate Profile

You can find all these steps in this Gist
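As a rough illustration of those four steps (this is a minimal sketch, not the original Gist), the Terraform could look like the following. The resource names, var.private_subnet_ids, and the fargate-example namespace are placeholders, not the blueprint's real values.

# 2. Role that Fargate pods assume, with the eks-fargate-pods.amazonaws.com principal
resource "aws_iam_role" "fargate_pod_execution" {
  name = "eks-fargate-pod-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        Service = "eks-fargate-pods.amazonaws.com"
      }
    }]
  })
}

# 3. Attach the managed Fargate pod execution policy to that role
resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_pod_execution.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}

# 1 & 4. Fargate profile, with the role attached and a selector for the
# namespace whose pods should be scheduled on Fargate
resource "aws_eks_fargate_profile" "fargate_example" {
  cluster_name           = aws_eks_cluster.eks_cluster.name
  fargate_profile_name   = "fargate-example"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = var.private_subnet_ids # Fargate requires private subnets

  selector {
    namespace = "fargate-example"
  }
}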

The end 😂😂😂😂😂,

I'm joking. Now, let's talk about the problem and the solution.

The Problem:

So, with this example, everything normally works fine, assuming your node groups are already up and running. When you deploy a new pod on Fargate, it starts without any issue 🙂.

However, DNS resolution between pods on Fargate and pods on EC2 will not work. 😮

For example, if you run this command from a pod deployed on Fargate, you will experience a DNS timeout:

kubectl exec -it [podname] -n fargate-example -- nslookup kube-dns.kube-system.svc.cluster.local

If you run the same command from a pod deployed on an EC2 node to resolve a pod running on Fargate, it works 😱😡.
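For comparison, the reverse lookup from a pod scheduled on an EC2 node resolves without any problem. The namespace ec2-example and the service my-fargate-service below are placeholders for your own names:

kubectl exec -it [podname] -n ec2-example -- nslookup my-fargate-service.fargate-example.svc.cluster.local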

Keep in mind that we already have a custom security group attached to the EKS control plane to allow communication between the control plane and nodes.

Solution:

After weeks of debugging and searching through the AWS documentation, we contacted AWS to see what the solution would be. 😎

Be ready, it's very simple yet surprisingly hard to find: when Terraform deploys the EKS cluster, EKS creates a default security group, and we must attach this default security group to our managed nodes as well, especially when our node groups use a launch template with its own custom security groups.

To do so:
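Here is a minimal sketch of the idea, assuming a managed node group backed by a launch template. The resource names (node_sg, eks_nodes, node_group) and var.private_subnet_ids are illustrative placeholders, not the blueprint's real identifiers; the important line is the one referencing cluster_security_group_id.

# Launch template used by the managed node group: include the default
# cluster security group alongside the custom node security group.
resource "aws_launch_template" "eks_nodes" {
  name_prefix = "eks-nodes-"

  vpc_security_group_ids = [
    aws_security_group.node_sg.id,                                       # custom node security group
    aws_eks_cluster.eks_cluster.vpc_config[0].cluster_security_group_id, # default cluster security group
  ]
}

# Managed node group that uses the launch template above
resource "aws_eks_node_group" "ec2_nodes" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "ec2-nodes"
  node_role_arn   = aws_iam_role.node_group.arn
  subnet_ids      = var.private_subnet_ids

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  launch_template {
    id      = aws_launch_template.eks_nodes.id
    version = aws_launch_template.eks_nodes.latest_version
  }
}

With the default cluster security group on the nodes, pods on EC2 and pods on Fargate share a security group that allows them to reach each other, so DNS lookups from Fargate to kube-dns on EC2 nodes resolve again.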

Conclusion:

"When you have a hybrid cluster with EC2 and Fargate, and you want to resolve DNS between pods deployed on Fargate and others on EC2, always remember to attach the default security group of your cluster to your node groups, especially when you customize the security group attached to your launch template."

Magic word 🤪: aws_eks_cluster.eks_cluster.vpc_config.0.cluster_security_group_id

For more stories, please subscribe and interact. If you have another subject in mind that you would like me to explain, please feel free to put it in the comments. 🙏🏼

See you ✌🏼
