
Practicing the zone of proximal development

I was at a LIVE tech conference last week! The keynote was about learning through tinkering. The talk had a section that resonated with me quite profoundly: the zone of proximal development. Essentially, it means a student can only learn things "around" a given subject if the subject itself is already familiar to them.

As I've written here a few times, one of the pieces of technology close to my heart is Postgres (and RDBMSs in general). I know the basics + some more and can tune some common knobs. Having said that, database internals present a fascinating mystery. I don't understand how they work under the hood - not even close.

I believe that delving into the details and getting hands-on experience can take my understanding to a whole new level. This is the essence of the zone of proximal development - learning by doing and pushing the boundaries of what we already know.

I stumbled upon the Carnegie Mellon University database lecture series during one of my investigatory rabbit holes. It starts from the fundamental algorithms behind how most databases work. Again, very cool, but can I get even the gist of them by watching YouTube videos? The answer is a harsh no. Glancing over some videos barely scratches the surface.

That's why I started to supply the graveyard of side projects with a fresh corpse. This time it is a key-value store. Can you think of a more wonderful project? Honestly, there are some pretty interesting problems to solve there, at least if you are really nerdy! 🤓

Why a key-value store if I am interested in Postgres? 

The answer is ingrained in the practice of the zone of proximal development. Learning about RDBMS internals is only possible if I know the basics. If you squint your eyes a bit, a relational database is a key-value store. Sure, there is the whole query language thingy, but before I can grasp that, I must know the foundation on which those algorithms are built.
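
To make the squinting concrete, here is a tiny, hypothetical sketch in Go. The names rowKey and kv are mine, and the JSON encoding is just for illustration - this is not how Postgres or my project stores rows - but it shows the idea: a row addressed by its table and primary key is, at the bottom, just a value stored under a key.

package main

import "fmt"

// rowKey derives a key-value store key from a table name and a primary key.
// Hypothetical illustration; not taken from any real storage engine.
func rowKey(table string, primaryKey int64) string {
	return fmt.Sprintf("%s/%d", table, primaryKey)
}

func main() {
	kv := map[string][]byte{} // stand-in for the storage engine

	// INSERT INTO users (id, name) VALUES (42, 'Ada') boils down to a put:
	kv[rowKey("users", 42)] = []byte(`{"id":42,"name":"Ada"}`)

	// SELECT * FROM users WHERE id = 42 boils down to a get:
	fmt.Println(string(kv[rowKey("users", 42)]))
}

Everything on top of that is machinery for deciding which keys to read and write.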

In short, I've implemented a linear hashing KVS with a simplistic implementation of how data is persisted to disk. That means 4K pages, page directory, page buffer, LRU cache, free pages, etc. I also intend to be mindful of excess allocations, handle mostly raw byte arrays, and occasionally benchmark the thingy. All of these are present in Postgres on some level but, of course, in a more sophisticated form.
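
To give a flavour of the kind of problems involved, here is a minimal sketch of linear hashing's bucket addressing in Go. It is not code from the repository - the names (hashTable, initialBuckets, level, splitPointer) are my own - and the real store also has to handle pages, overflow chains and the actual splitting, but the addressing logic below is the textbook core of linear hashing.

package main

import (
	"fmt"
	"hash/fnv"
)

// hashTable holds the linear hashing bookkeeping.
// Hypothetical sketch; field names are not from the repository.
type hashTable struct {
	initialBuckets uint64 // N: number of buckets at level 0
	level          uint64 // completed doubling rounds
	splitPointer   uint64 // next bucket to be split in the current round
}

// bucketFor picks the bucket for a key: hash with the current round's
// modulus (N * 2^level), and if that bucket has already been split this
// round, rehash with the next round's modulus (N * 2^(level+1)).
func (t *hashTable) bucketFor(key []byte) uint64 {
	h := fnv.New64a()
	h.Write(key)
	sum := h.Sum64()

	bucket := sum % (t.initialBuckets << t.level)
	if bucket < t.splitPointer {
		// Records in this bucket may already have moved to bucket + N*2^level.
		bucket = sum % (t.initialBuckets << (t.level + 1))
	}
	return bucket
}

func main() {
	t := &hashTable{initialBuckets: 4, level: 0, splitPointer: 1}
	fmt.Println(t.bucketFor([]byte("hello")))
	fmt.Println(t.bucketFor([]byte("world")))
}

What I like about the scheme is that buckets are split one at a time in a fixed order, so the table grows gradually instead of doubling and rehashing everything at once.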

It is in its early days, but here is the repository:

https://github.com/jjylik/aapari

