Working with Django in ASGI mode can benefit applications with long requests, where the synchronous mode may be a bit wasteful: you would likely need a ton of workers, especially if those workers are processes (e.g., Gunicorn). Your app may also run some fancy new LLM framework, which is often async-only. That said, async mode isn't a silver bullet for scaling. In Django 5.1, you still need threads to offload blocking database operations, which remain synchronous by design. Django provides async query wrappers like `aget`, `acount`, and `afirst`, but these are just syntactic sugar for offloading the operation to a worker thread via `asgiref.sync_to_async`.

For your own blocking code, you can use the asgiref package's `sync_to_async` utility to move the execution to a worker thread. By default, that is a `ThreadPoolExecutor` with a single worker. This thread is shared by all `sync_to_async` calls during the request context, which effectively means that each request spawns one worker thread for all of its sync work. So, to summarize...
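To make that concrete, here is a minimal sketch of an async view. The `Article` model and the `build_report` helper are hypothetical stand-ins; the point is that `aget`/`acount` offload to the worker thread under the hood, while everything else that blocks has to be wrapped explicitly with `sync_to_async`.

```python
from asgiref.sync import sync_to_async
from django.http import JsonResponse

from myapp.models import Article  # hypothetical app/model, for illustration only


def build_report(article):
    # Stand-in for arbitrary blocking work (sync ORM calls, a synchronous
    # HTTP client, etc.).
    return {"title": article.title}


async def article_view(request, pk):
    # aget()/acount() look async, but under the hood they hand the sync ORM
    # query to a worker thread via sync_to_async.
    article = await Article.objects.aget(pk=pk)
    total = await Article.objects.acount()

    # Any other blocking call has to be wrapped explicitly. With the default
    # thread_sensitive=True, this runs on the same single worker thread as
    # the ORM calls above, not on a fresh thread per call.
    report = await sync_to_async(build_report)(article)

    return JsonResponse({"total_articles": total, **report})
```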
A project I am working on is hosted in an EKS cluster with the NGINX ingress controller (the one maintained by the Kubernetes community). It is deployed using its official Helm chart, which, I realized after a lengthy debugging session, was a mistake. The initial setup I aimed to improve had several flaws. Firstly, we were using the AWS Classic Load Balancer in front of the nginx ingress in the cluster; it has been deprecated for some time (years?), and continuing to use it made little sense to us. The second issue was that we were running only one(!) nginx pod, which is quite sketchy, since the exposed web services had essentially no high availability.

I switched to the Network Load Balancer (NLB), which was straightforward: I just needed to change the ingress-nginx service annotation to specify the load balancer type as NLB (see the values sketch below):

`service.beta.kubernetes.io/aws-load-balancer-type: nlb`

However, increasing the replica count turned out to be tricky. When I bumped it up to two, I began to ...
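For reference, here is roughly what those two changes look like in the chart's values file. This is a sketch, assuming the standard `controller.replicaCount` and `controller.service.annotations` keys of the ingress-nginx Helm chart.

```yaml
# values.yaml for the ingress-nginx Helm chart (sketch; key paths follow
# the chart's controller.* layout)
controller:
  replicaCount: 2  # more than one nginx pod for basic high availability
  service:
    annotations:
      # Ask AWS for a Network Load Balancer instead of the deprecated
      # Classic Load Balancer
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
```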