
OpenShift logging issues

I have been digging into some logging issues in an OpenShift production system. The first problem we noticed was that pod logs viewed from the web console were clearly missing some lines. Initially, we thought this was due to rate limiting in the web console itself, but it turned out to be an issue at the OS level. Another issue, which we initially thought was related to the first one, was that the Elasticsearch cluster containing the aggregated logs from all nodes was missing logs as well. We even had the Elasticsearch cluster members crash a couple of times without being able to recover the cluster health.

It turned out that we had two separate issues with similar symptoms.

The first thing was to check why the web console was missing logs. OpenShift (Kubernetes) writes the container logs to journald. After tailing the journald logs for a while, everything seemed fine. Upon closer inspection, though, I saw something strange: containers that produced a lot of log lines wrote them in chunks, and then the whole log seemed to freeze for a while.
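For reference, this is roughly how I followed the container logs straight from journald on a node. The unit name and the CONTAINER_NAME field are assumptions based on a typical Docker journald log driver setup, so adjust them to your environment:

    # Follow everything the Docker daemon writes to journald (assumed unit name)
    journalctl -u docker.service -f

    # Follow a single container by name; the journald log driver attaches a
    # CONTAINER_NAME field to each entry
    journalctl -f CONTAINER_NAME=my-app-container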

After some googling around, I found out that there is a rate limit in journald, which by default allows about 1,000 log lines per 30 seconds from a single service; all lines exceeding the limit are dropped. After tailing the logs of the journald unit itself, I saw that this was indeed the issue. Soon after, I found that the OpenShift "high load" guide warns about this and describes example configurations for both journald and rsyslogd to mitigate the issue.
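As a sketch of what the mitigation looks like, the rate limit is controlled by two settings in /etc/systemd/journald.conf. The values below are illustrative only, not the exact ones we ended up with:

    # /etc/systemd/journald.conf
    [Journal]
    # Defaults are roughly RateLimitInterval=30s and RateLimitBurst=1000,
    # i.e. at most ~1000 messages per service within a 30-second window.
    RateLimitInterval=30s
    RateLimitBurst=10000

    # Apply with: systemctl restart systemd-journald
    # The drops show up as "Suppressed ... messages" lines when tailing
    # the journald unit itself: journalctl -u systemd-journald -f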

However, we could not simply increase the rate limit in the production system, since we were not sure what would happen to the already fragile Elasticsearch logging cluster. The master machines already had quite high CPU and IO usage.

First, we had to fix the Elasticsearch cluster, since we could not log into Kibana to view the logs. The problem was that there were too many indices out of sync, and when the ES cluster members started, the authentication part timed out. Since we had already lost a lot of logs due to the journald issue, we simply removed all indices and gave the ES nodes a "fresh start", so to speak.
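A minimal sketch of that clean-up against the ES REST API, assuming it is reachable on localhost:9200 from a logging node (host, port, and any required client certificates depend on your deployment):

    # List the indices and their health to see what is out of sync
    curl -s 'http://localhost:9200/_cat/indices?v'

    # Remove all indices to give the cluster a fresh start
    # (destructive: acceptable here only because the logs were already lost)
    curl -s -XDELETE 'http://localhost:9200/_all'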

Now that we had the aggregated logging working again, we fixed the rate limits one node at a time. We also modified the buffering configuration of the node Fluentd instances to decrease the rate at which they send logs from each node to the ES cluster. Finally, we gave a bit more resources to the page cache on the host machines.
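For illustration, the knobs in question live in the Fluentd Elasticsearch output configuration. The parameter names below are from the fluent-plugin-elasticsearch buffered output of that Fluentd generation, and the values are just an example of smoothing out the write load, not our exact settings:

    <match **>
      @type elasticsearch
      host logging-es
      port 9200

      # Buffer tuning: a file-backed buffer, smaller chunks and a longer flush
      # interval reduce the burstiness of writes from each node to the ES cluster
      buffer_type file
      buffer_path /var/lib/fluentd/buffer-output-es
      buffer_chunk_limit 8m
      buffer_queue_limit 32
      flush_interval 10s
      retry_wait 5s
      max_retry_wait 300s
    </match>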

If you are running OpenShift with its logging feature enabled, I recommend reading this: https://docs.openshift.com/enterprise/3.2/install_config/aggregate_logging_sizing.html
