
OpenShift logging issues

I have been digging into some logging issues in an OpenShift production system. The first problem we noticed was that the pod logs viewed from the web console were clearly missing some lines. Initially, we thought this was due to some rate limiting in the web console itself, but it turned out to be an issue at the OS level. Another issue, which we initially thought was related to the first one, was that the Elasticsearch cluster containing the aggregated logs from all nodes was missing some logs as well. We even had the Elasticsearch cluster members crashing a couple of times without being able to recover the cluster health.

It turned out that we had two separate issues with similar symptoms.

The first thing was to check why the web console was missing log lines. OpenShift (Kubernetes) writes the container logs to journald. After tailing the journald logs for a while, everything seemed fine. On closer inspection, though, I noticed something strange: containers that produced a lot of log lines wrote them in bursts, and then the whole log appeared to freeze for a while.
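
For reference, this is roughly how I was tailing the logs on a node. The commands are a sketch and assume the Docker journald log driver is in use, which tags each journal entry with the container name:

    # Follow everything the journal receives on this node
    journalctl -f

    # Follow a single container's output (field added by the journald log driver)
    journalctl -f CONTAINER_NAME=<container-name>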

After some googling around, I found out that journald has a rate limit, which by default allows 1,000 log lines per 30 seconds; all lines exceeding that are dropped. After tailing the journal filtered to the systemd-journald unit itself, I could see that this was indeed the issue. Soon after, I found out that the OpenShift "high load" guide warns about this and describes example configurations for both journald and rsyslogd to mitigate it.
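
On the journald side, the mitigation is a couple of lines in journald.conf. The snippet below is only a sketch with illustrative values, not the exact configuration from the OpenShift guide:

    # /etc/systemd/journald.conf
    [Journal]
    # Allow more messages per interval before journald starts dropping them
    RateLimitInterval=1s
    RateLimitBurst=10000

    # Apply the change
    systemctl restart systemd-journald

    # Check whether journald has been dropping ("suppressing") messages
    journalctl -u systemd-journald | grep -i suppress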

However, we could not simply increase the rate limit on the production system, since we were not sure what that would do to the already fragile Elasticsearch logging cluster. The master machines were already under quite high CPU and IO load.

First, we had to fix the Elasticsearch cluster, since we could not log into Kibana to view the logs. The problem was that too many indices were out of sync, and when the ES cluster members started, the authentication part timed out. Since we had already lost a lot of logs due to the journald issue, we simply removed all indices and gave Elasticsearch a "fresh start", so to speak.
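
The "fresh start" boils down to deleting the existing indices. A rough sketch of what that looks like, assuming the usual OpenShift logging setup where ES is reachable only from inside its own pod with the mounted admin certificates (the pod name below is a placeholder):

    oc exec logging-es-example-pod -- \
      curl -s --key /etc/elasticsearch/secret/admin-key \
              --cert /etc/elasticsearch/secret/admin-cert \
              --cacert /etc/elasticsearch/secret/admin-ca \
              -XDELETE "https://localhost:9200/_all"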

Once we had aggregated logging working again, we fixed the rate limits on each of the nodes one by one. We also modified the buffer configuration of the node Fluentds to decrease the rate at which they send logs from each node to the ES cluster, and we gave the host machines a bit more room for the page cache.
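
As a sketch, the Fluentd tuning is of this flavour, assuming the v0.12-style buffered Elasticsearch output the OpenShift logging stack used at the time; the values are illustrative, not our exact production settings:

    # In the Elasticsearch <match> section of fluent.conf
    <match **>
      # ... type, host, port and certificate settings stay as they are ...
      buffer_type file
      buffer_chunk_limit 8m    # smaller chunks per request to ES
      buffer_queue_limit 32    # cap the number of queued chunks
      flush_interval 30s       # flush less often to reduce load on ES
      retry_wait 10s
    </match>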

If you are running OpenShift with its aggregated logging feature enabled, I recommend reading this: https://docs.openshift.com/enterprise/3.2/install_config/aggregate_logging_sizing.html
