A project I am working on is hosted in an EKS cluster with the NGINX ingress controller (the one maintained by Kubernetes). It is deployed using its official Helm chart, which, after a lengthy debugging session, I realized was a mistake.

The initial setup I aimed to improve had several flaws. First, we were using the AWS Classic Load Balancer in front of the nginx ingress in the cluster, which has been deprecated for some time (years?). Continuing to use it makes little sense to us. The second issue was that we were running only one(!) nginx pod, which is quite sketchy since the exposed web services had essentially no high availability.

I switched to the Network Load Balancer (NLB), which was straightforward - I just needed to change the ingress-nginx service annotation to specify the load balancer type as NLB:

    service.beta.kubernetes.io/aws-load-balancer-type: nlb

However, increasing the replica count turned out to be tricky. When I bumped it up to two, I began to ...
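For reference, the two changes above boil down to a small Helm values override. This is a minimal sketch; the controller.replicaCount and controller.service.annotations keys are how the ingress-nginx chart exposes these settings as far as I know, so treat the exact layout as an assumption and check it against the chart's values.yaml.

    # values-override.yaml (sketch; key layout assumed from the ingress-nginx chart)
    controller:
      # run more than one nginx pod so the exposed services are not a single point of failure
      replicaCount: 2
      service:
        annotations:
          # ask AWS to provision a Network Load Balancer instead of a Classic Load Balancer
          service.beta.kubernetes.io/aws-load-balancer-type: nlb

Applied with something like helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -f values-override.yaml (the release and repository names here are placeholders).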
On a project I've been working on, I've been preparing for SOC 2 Type II certification. My responsibilities have mostly been on the engineering/IT side, ensuring that our SaaS is deployed and developed according to SOC 2 processes. This isn't something a developer would willingly or enthusiastically take on, right? I can't believe I'm saying this out loud, but actually ... it hasn't been that bad.

The biggest reason for this has definitely been Vanta. Vanta has distilled the rather hard-to-decipher process descriptions into actionable items. As far as I know, SOC 2 isn't a one-size-fits-all certification for SaaS providers; it differs according to the stack, which makes sense. Plugging in all our services, from cloud providers to issue trackers, produces a tailored task list. The task list can even include literal Terraform code examples, which you can copy-paste with minor changes. You could kind of get a similar list from AWS Audit Manager or AWS Security ...