I gave a tech talk at a Python meetup titled "Overengineering an LLM pipeline". It's based on my experiences building production-grade stuff with LLMs, though I'm not sure how overengineered it actually turned out. Experimental would be a better term, as it uses PydanticAI's graph library, which is in its very early stages as of writing this, although arguably already better than some of the pipeline libraries. Anyway, here is a link to it. It's a CLI poker app where you play one hand against an LLM. The LLM (theoretically) gets better through a self-correcting mechanism based on an evaluation score from another LLM: it uses the annotated past games as additional context to potentially improve its decision-making. https://github.com/juho-y/archipylago-poker
Working with Django in ASGI mode can benefit applications with long-running requests, where synchronous mode may be a bit wasteful: you would likely need a ton of workers, especially if those workers are processes (e.g., with Gunicorn). Your app may also run some fancy new LLM framework, which is often async-only. That said, async mode isn't a silver bullet for scaling. In Django 5.1, you still need threads to offload blocking database operations, which remain synchronous by design.

Django provides async query wrappers like aget, acount, and afirst, but these are just syntactic sugar for offloading the operation to a worker thread via asgiref's sync_to_async. You can use the same sync_to_async utility yourself to move blocking calls off the event loop. By default, that is a ThreadPoolExecutor with one worker. This thread is shared by all sync_to_async calls during the request context, effectively meaning that each request spawns one worker thread for all sync work. So, to summarize: even in async mode, each request still gets exactly one thread that handles all of its blocking database work.