I have been digging into some logging issues in an OpenShift production system. The first problem we noticed was that pod logs viewed from the web console were clearly missing lines. Initially we thought this was due to rate limiting in the web console itself, but it turned out to be an issue at the OS level. Another issue, which we initially thought was related to the first one, was that the Elasticsearch cluster holding the aggregated logs from all nodes was missing logs as well, and the Elasticsearch cluster members even crashed a couple of times without the cluster health ever recovering.
It turned out that we had two separate issues with similar symptoms.
The first thing was to check why the web console was missing logs. OpenShift (Kubernetes) writes the container logs to journald. After tailing the journald logs for a while, everything seemed fine. On closer inspection, though, I noticed something strange: containers that produced a lot of log lines wrote them in chunks, and then the whole log seemed to freeze for a while.
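For reference, this is roughly how I was watching the logs. The exact invocation depends on how Docker's journald log driver tags the entries in your version, so treat the field name below as an assumption rather than gospel:

  # follow everything the docker unit writes to the journal
  journalctl -u docker.service -f

  # or follow a single container via the field the journald log driver sets
  journalctl CONTAINER_NAME=<name-of-the-container> -f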
After some googling around I found out that journald has a rate limit, by default 1,000 log lines per 30 seconds; everything above that is simply dropped. After tailing journald's own unit, I saw that this was indeed the issue. Soon after, I found that the OpenShift "high load" guide warns about exactly this and describes example configurations for both journald and rsyslogd to mitigate it.
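The telltale sign is journald complaining about a chatty unit in its own log stream; the exact wording varies by systemd version, but it looks something like this:

  journalctl -u systemd-journald -f
  ... Suppressed N messages from /system.slice/docker.service

The mitigation itself goes into /etc/systemd/journald.conf. The values below are only a sketch in the spirit of the OpenShift guide, not our exact production numbers, so pick them based on your own log volume:

  [Journal]
  # allow bursts of up to 10000 messages per second instead of 1000 per 30 seconds
  RateLimitInterval=1s
  RateLimitBurst=10000

followed by a 'systemctl restart systemd-journald' on the node for the change to take effect.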
However, we could not simply raise the rate limit in the production system, since we were not sure what that would do to the already fragile Elasticsearch logging cluster. The master machines already had quite high CPU and IO usage.
First, we had to fix the Elasticsearch cluster, since we could not even log into Kibana to view the logs. The problem was that too many indices were out of sync, and when the ES cluster members started, the authentication part timed out. Since we had already lost a lot of logs due to the journald issue, we simply removed all indices and gave the ES cluster a "fresh start", so to speak.
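In practice the "fresh start" was nothing more sophisticated than deleting every index. The plain-HTTP form below just shows the shape of the call; in the OpenShift logging stack ES is secured, so the real invocation needs the proper client certs (or has to be run from inside the ES pod). It is obviously destructive, and only acceptable here because the data was already lost:

  # drop every index in the cluster
  curl -XDELETE 'http://localhost:9200/_all'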
Now that we had aggregated logging working again, we fixed the rate limits on each node one by one. We also tuned the buffering configuration of the node-level Fluentd instances to decrease the rate at which they ship logs from each node to the ES cluster, and left a bit more memory headroom for the page cache on the host machines.
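On the Fluentd side this was just tuning the buffered output that ships to Elasticsearch. The snippet below is only a sketch of the kind of knobs we touched (file buffers, bigger chunks, less frequent flushes) using the standard fluent-plugin-elasticsearch buffering parameters; the connection settings are omitted, the buffer path is illustrative, and how you get this into the running Fluentd depends on how your logging deployment manages its configuration:

  <match **>
    @type elasticsearch
    # buffer to disk so bursts survive a Fluentd restart
    buffer_type file
    buffer_path /var/lib/fluentd/buffer-output-es
    # bigger chunks, flushed less often, to take pressure off ES
    buffer_chunk_limit 8m
    buffer_queue_limit 32
    flush_interval 30s
    retry_wait 10s
  </match>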
If you are running OpenShift with its aggregated logging feature enabled, I recommend reading this: https://docs.openshift.com/enterprise/3.2/install_config/aggregate_logging_sizing.html