
Careful with externalTrafficPolicy

A project I am working on is hosted in an EKS cluster with the NGINX ingress controller (the one maintained by the Kubernetes project). It is deployed using its official Helm chart, which, as I realized after a lengthy debugging session, was a mistake.

The initial setup I aimed to improve had several flaws. First, we had the AWS Classic Load Balancer in front of the nginx ingress in the cluster; the CLB has been deprecated for some time (years?), so continuing to use it made little sense to us.

The second issue was that we were running only one(!) nginx pod, which was quite sketchy since the exposed web services had essentially no high availability.

I switched to the Network Load Balancer (NLB), which was straightforward - I just needed to change the ingress-nginx service annotation to specify the load balancer type as NLB:

service.beta.kubernetes.io/aws-load-balancer-type: nlb
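
In the chart values, this annotation goes on the controller Service. A minimal sketch, assuming the chart's controller.service.annotations key (names here follow the ingress-nginx defaults):

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb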

However, increasing the replica count turned out to be tricky. When I bumped it up to two, I began to experience sporadic timeouts on new connections to the cluster services, but the logs showed no errors. 
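
The replica bump itself is just another values entry; a minimal sketch, assuming the chart's controller.replicaCount key:

controller:
  replicaCount: 2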

My first thought was that it could be related to leader election since, apparently, the pods should form a cluster. Running two pods could potentially lead to a split-brain scenario that disrupts service - or so I thought.

Ultimately, it was not that issue. After semi-blindly experimenting with different configurations without success, I joined the Kubernetes Slack channel and started searching through the message history, as LLMs were not providing the help I needed.

I eventually found the fix. The ingress service of type LoadBalancer (so the thing that handles incoming traffic from the internet-facing AWS NLB) had a misconfiguration: externalTrafficPolicy was set to Cluster. This effectively adds an extra load-spreading hop inside the cluster before the traffic reaches an nginx pod.
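
A quick way to see what the Service is actually doing is to read the field straight from its spec, for example (assuming the chart's default ingress-nginx namespace and ingress-nginx-controller Service name):

kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.externalTrafficPolicy}'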

And the fix is again a one-liner in the Helm values file:

externalTrafficPolicy: Local
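
In the chart values this sits under the controller's service block; a minimal sketch, assuming the chart's controller.service.externalTrafficPolicy key:

controller:
  service:
    externalTrafficPolicy: Local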

Although I don't fully understand why this change resolved the issue, it worked. It likely has to do with the node security groups; it may be that we are simply blocking traffic on the ingress port range.

As already mentioned, the root cause of this blunder was that we used the Helm chart to deploy the NGINX ingress. The official ingress documentation does not recommend this; instead, it provides a basic set of manifests for deployment. In those manifests, the externalTrafficPolicy is correctly set to Local. There are even open issues in the GitHub repository warning about the risks of using the Helm deployment in cloud-provided clusters.

