
Vanta is a pretty good tool

On a project I've been working on, I've been preparing for SOC 2 Type II certification. My responsibilities have mostly been on the engineering/IT side, ensuring that our SaaS is deployed and developed according to SOC 2 processes.

This isn't something a developer would willingly or enthusiastically take on, right?


I can’t believe I’m saying this out loud, but actually... it hasn’t been that bad. The biggest reason for this has definitely been Vanta.



Vanta has distilled the rather hard-to-decipher process descriptions into actionable items. As far as I know, SOC 2 isn't a one-size-fits-all certification for SaaS providers; the requirements differ according to your stack - which makes sense. Plugging in all our services, from cloud providers to issue trackers, spits out a tailored task list.


The task list can even include literal Terraform code examples that you can copy-paste with minor changes. You could get a somewhat similar list from AWS Audit Manager or AWS Security Hub, but Vanta is cloud-agnostic.
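
To give a sense of what these look like, here is a made-up sketch of the kind of snippet a task might point to, assuming AWS - the bucket name and resource labels are mine, not Vanta's:

# Hypothetical example of a fix a task might suggest:
# enable default server-side encryption on an S3 bucket.
resource "aws_s3_bucket" "app_logs" {
  bucket = "example-app-logs"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "app_logs" {
  bucket = aws_s3_bucket.app_logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}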


It’s like a ticketing template on steroids - or one of those rare automated security tools that actually proves useful. It ensures that secure practices are followed in our organization not only during the audit period but continuously. It also provides a trust center page for transparency, where customers can eventually download our SOC 2 report 🤞


The biggest benefit is definitely the automated integration with cloud providers. If you connect all your third-party services, you get an offboarding and access review list out of the box.


Regarding process documentation, Vanta provides pre-filled document templates based on some basic information you provide about your organization, saving at least some time. The required steps are also outlined clearly, which makes the process easier to follow.


Committing to a process like SOC 2 is, on the whole, good for an organization. Sure, all of these certifications have their oddities, and some requirements can feel like pure waste. But as with a well-structured codebase, clearly defined boundaries in the overall engineering process are a definite plus.

