
RocksDB data recovery

I recently needed to do some maintenance on a RocksDB key-value store. The task was simple enough: just delete some keys, as the db served as a cache and did not contain any permanent data.

I used the RocksDB CLI administration tool ldb to erase the keys. After running a key scan with it, I got this error:

Failed: Corruption: Snappy not supported or corrupted Snappy compressed block contents
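
For reference, the invocations were nothing special, roughly of this form (the db path and key here are just placeholders):

ldb --db=/path/to/db delete mykey
ldb --db=/path/to/db scan
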
So a damaged database. Fortunately, there's a tool to fix it, and after running it, I had access to the db via the admin tool.
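
As far as I know, the fix is RocksDB's repair functionality, which ldb also exposes; the invocation was roughly:

ldb --db=/path/to/db repair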

All the data was lost, though. Adding and removing keys worked fine, but all the old keys were gone. It turned out that the corrupted data was all the data there was.

The recovery tool made a backup folder, and I recovered the data by taking the files from the backup folder and manually changing the CURRENT file to point to the old MANIFEST file, which is apparently how RocksDB knows which SST (table) files to use.
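
Roughly, the manual steps looked like this (the file names are made up for illustration; the CURRENT file is just a text file naming the MANIFEST file RocksDB should use):

cp backup/MANIFEST-000123 backup/*.sst db/
echo MANIFEST-000123 > db/CURRENT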

I could not access the data with the admin tool, but the native C API worked fine, so I just exported all the keys as raw key-value text strings with the native API and wrote a script which used the admin tool to recreate the database.
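
A minimal sketch of such an export with the native C API could look something like this (the db path and the tab-separated output format are illustrative, not the exact script I used):

#include <stdio.h>
#include <rocksdb/c.h>

int main(void) {
    char *err = NULL;

    /* Open the database with default options. */
    rocksdb_options_t *options = rocksdb_options_create();
    rocksdb_t *db = rocksdb_open(options, "/path/to/db", &err);
    if (err != NULL) {
        fprintf(stderr, "open failed: %s\n", err);
        return 1;
    }

    /* Walk over every key-value pair and dump them as raw text. */
    rocksdb_readoptions_t *ropts = rocksdb_readoptions_create();
    rocksdb_iterator_t *it = rocksdb_create_iterator(db, ropts);
    for (rocksdb_iter_seek_to_first(it); rocksdb_iter_valid(it); rocksdb_iter_next(it)) {
        size_t klen, vlen;
        const char *key = rocksdb_iter_key(it, &klen);
        const char *val = rocksdb_iter_value(it, &vlen);
        printf("%.*s\t%.*s\n", (int)klen, key, (int)vlen, val);
    }

    rocksdb_iter_destroy(it);
    rocksdb_readoptions_destroy(ropts);
    rocksdb_options_destroy(options);
    rocksdb_close(db);
    return 0;
}

The recreate side was then a matter of feeding those pairs back in via the admin tool (something like ldb --db=/path/to/newdb put <key> <value> per pair).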

Everything was working fine now: I could add and remove keys both via the native C API, which the actual application uses, and with the admin tool.

I returned to the DB a few days later, and as I feared, the DB had become corrupted again. Or so the admin tool claimed; the actual application was working fine, so I figured it probably had something to do with the data, or there was a bug in the admin tool. Investigation is still ongoing. The next steps are to upgrade the DB to a later version, check whether some configuration parameter is wrong, and try adding some of the newer keys manually to a fresh db to see if I can "break" it.


