

Showing posts from June, 2017

Embedded OrientDB on an OpenShift / Kubernetes cluster

A few tips on setting up an embedded OrientDB to run on an OpenShift / Kubernetes cluster:

- Set the ORIENTDB_NODE_NAME system property or environment variable. If your database volumes are host volumes, you can use the downward API field spec.nodeName. If the node name contains dots, replace them with dashes, for example.
- If you use something like OpenShift persistent volumes, make sure that the running pod's ORIENTDB_NODE_NAME matches the node name values it reads from the DB.
- Use at least 3 replicas, and don't use an even number, due to split-brain clustering issues.
- If you use a rolling upgrade strategy, give the pods some time to start up the DBs so that no more than one pod is unavailable at a time. This way syncing up the cluster status becomes smoother.
- Use the newNodeStrategy dynamic OrientDB distribution configuration parameter so unreachable nodes don't break the write quorum so easily.
- Use Hazelcast to discover the cluster members. There is a library for that.
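To illustrate the first tip, here is a minimal sketch of how the downward API can populate ORIENTDB_NODE_NAME in a pod spec. The container name and image are placeholders, not taken from the original post:

```yaml
# Fragment of a pod/deployment spec (container name and image are hypothetical)
containers:
  - name: orientdb
    image: my-orientdb-app:latest
    env:
      # Expose the Kubernetes node name to the process via the downward API
      - name: ORIENTDB_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
```

If the node names contain dots, the application (or an entrypoint script) still needs to replace them with dashes before handing the value to OrientDB, since the downward API injects the name verbatim.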

Flame graph from a Scala app

Apologies for the large SVG. I got inspired by a Devoxx talk about flame graphs and how they can visualize what is happening in a JVM process. Getting a graph is actually quite simple. You only need a recent enough Java 8 JDK, a subject JVM process running on Linux, perf (which is part of the kernel utilities in most distributions), and a couple of simple open source profiling tools. Detailed information can be found in a blog post by Nitsan Wakart here. So what is in that SVG? It illustrates what was happening in a Scala app I'm running on a DigitalOcean droplet, sampled 100 times a second during a 40-second period. The bars describe call stacks, and the topmost item is always the one running on the CPU. More details on how to read the graph can be found here. In this case the stacks are divided by thread. The leftmost stuff (thread) contains a
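The sampling step described above can be sketched roughly as follows. This assumes a JVM started with -XX:+PreserveFramePointer, plus local clones of the perf-map-agent and FlameGraph tools; the paths and the PID below are placeholders, not from the original post:

```shell
# pid of the target JVM (placeholder value)
PID=12345

# Sample all threads of the process at ~100 Hz for 40 seconds
perf record -F 100 -p "$PID" -g -- sleep 40

# Generate /tmp/perf-$PID.map so perf can resolve JIT-compiled Java frames
~/perf-map-agent/bin/create-java-perf-map.sh "$PID"

# Fold the recorded stacks and render the interactive SVG
perf script | ~/FlameGraph/stackcollapse-perf.pl \
            | ~/FlameGraph/flamegraph.pl > flame.svg
```

The 100 Hz rate over 40 seconds corresponds to the roughly 4000 samples behind the graph in the post.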
A few random things that I have been dealing with

My DigitalOcean droplet recently suffered a brute-force SSH attack. Unfortunately I noticed it a couple of days after the attack had happened, but luckily it caused little harm except very high CPU usage from the sshd process for a few days. I'm not sure how to really protect against such attacks (cheaply), but I decided to try out fail2ban. With fail2ban I could also protect the server against attacks towards nginx. Installing it was simple enough, and I saw it was working rather well. There were some 3K SSH login attempts per day, and the iptables-based port blocking reduced that to some hundreds. After a while, though, I noticed that fail2ban had stopped blocking unauthorized IPs. I took a look at the fail2ban GitHub repository and saw some issues with the SSH regex filters (fail2ban works by monitoring logs and matching them against predefined regexes). I made some small adjustments but still no luck; it did not ban anything. I turned on d
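For context, a minimal fail2ban SSH jail looks roughly like the fragment below. The specific values are illustrative assumptions, not the configuration from the post:

```
# /etc/fail2ban/jail.local (hypothetical minimal example)
[sshd]
enabled  = true
port     = ssh
maxretry = 5
# ban offending IPs for one hour
bantime  = 3600
```

The banning itself depends on the filter regexes matching the auth log lines, which is exactly the part that broke in the story above.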