Async with Django

Working with Django in ASGI mode can benefit applications with long requests, where synchronous mode may be a bit wasteful: you would likely need a ton of workers, especially if those workers are processes (e.g., Gunicorn). Your app may also run some fancy new LLM framework, which is often async-only.


That said, async mode isn't a silver bullet for scaling. In Django 5.1, you still need threads to offload blocking database operations, which remain synchronous by design.

Django provides async query wrappers like aget, acount, and afirst, but these are just syntactic sugar for offloading the operation to a worker thread via asgiref's sync_to_async.
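
As a rough sketch of that equivalence (the Book model here is hypothetical):

    from asgiref.sync import sync_to_async

    from myapp.models import Book  # hypothetical model


    async def fetch_book():
        # The async wrapper...
        book = await Book.objects.aget(pk=1)
        # ...is roughly equivalent to wrapping the sync call yourself:
        book = await sync_to_async(Book.objects.get)(pk=1)
        return book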

You can use the asgiref package's sync_to_async utility to move blocking calls off the event loop to a worker thread. By default, that worker is a ThreadPoolExecutor with a single thread. This thread is shared by all sync_to_async calls within the request context, which effectively means that each request spawns one worker thread for all of its sync work.
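
A minimal sketch of offloading a blocking call yourself; generate_report is a hypothetical blocking function:

    import time

    from asgiref.sync import sync_to_async


    def generate_report():
        time.sleep(2)  # stand-in for blocking CPU work or sync I/O
        return "report"


    async def build_report():
        # thread_sensitive=True is the default: every such call within the
        # same request context runs on one shared worker thread.
        return await sync_to_async(generate_report, thread_sensitive=True)()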

So, to summarize:

  • The async main thread handles HTTP calls or non-blocking tasks (e.g., LLM requests, httpx calls...).
  • But the moment your view touches the database (or any blocking operation), Django spawns a thread (see the sketch after this list).
  • That thread lives for the entire request, holding a DB connection open until the request is complete.
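
Putting the pieces together, here is a sketch of an async view (the Article model and the URL are hypothetical): the httpx call is awaited on the event loop, while the ORM call is offloaded to the request's worker thread and binds a DB connection to the request.

    import httpx
    from django.http import JsonResponse

    from myapp.models import Article  # hypothetical model


    async def summary_view(request):
        # Non-blocking HTTP call: stays on the event loop, no thread needed.
        async with httpx.AsyncClient() as client:
            response = await client.get("https://example.com/api/summary")

        # First ORM call: offloaded to the per-request worker thread, which
        # opens a DB connection that stays bound to this request.
        article = await Article.objects.aget(pk=1)

        return JsonResponse({"title": article.title, "summary": response.json()})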

This thread-per-request operation is mostly fine—this has been the default modus operandi for web apps for years, at least before event loops, green threads, and coroutines took over. Of course, it's a bit wasteful, but it's not like we don't do wasteful things as programmers.

So, you can choose between creating a lot of workers or spawning a thread per request. The latter is arguably better, since with threads you don't have to think about forking and shared memory, and a thread is likely much lighter than, say, a full-blown Gunicorn worker process.

Another thing to consider with long requests is, of course, DB connections. As said, the connection opens when you perform your first database operation and stays open for the full request. This is problematic for high-load systems, since connections to databases like Postgres are expensive.

Django 5.1 introduced the Psycopg 3 connection pooling option, which can help here. During your long request, you likely don't need to keep the connection open at all times; you can return it to the pool instead. That's a bit annoying, of course, since you need to manually find the spots where you wait on non-DB I/O (like a complex LLM call) for a prolonged time and close the connection there. It is performant, though, since the actual DB connection doesn't get closed; only the request's handle to it does.
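
A hedged sketch of what that could look like; the pool option is Django 5.1's built-in Psycopg 3 pooling, while the Document model and the call_llm coroutine are hypothetical:

    # settings.py: enable the built-in Psycopg 3 connection pool (Django 5.1+)
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "OPTIONS": {
                "pool": True,  # or a dict of psycopg_pool options
            },
        },
    }

    # views.py (sketch)
    from asgiref.sync import sync_to_async
    from django.db import connection

    from myapp.models import Document  # hypothetical model


    async def summarize(pk):
        doc = await Document.objects.aget(pk=pk)

        # Before a long non-DB wait, hand the connection back to the pool.
        # thread_sensitive=True runs close() on the same worker thread that
        # opened the connection.
        await sync_to_async(connection.close, thread_sensitive=True)()

        summary = await call_llm(doc.text)  # hypothetical long async I/O

        # The next ORM call checks out a fresh connection from the pool.
        await Document.objects.filter(pk=pk).aupdate(summary=summary)
        return summary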

In this era of LLM streaming and long polling, it would be fantastic if Django supported async DB connections; that would eliminate the need to spawn a thread for each request. Combined with that, being able to configure Django to return an idling DB connection to the Psycopg pool automatically would be great (though I'm not sure how feasible that is).
