Prometheus High Availability Proxy
Even though the application is pretty stable, keep in mind that it is still in active development. We use it in production, but that doesn't necessarily mean it is ready for you.
The Problem
A normal Prometheus upgrade procedure looks like this:
1. Switch traffic to the fallback server (automatic with an HA proxy, or manually)
2. Upgrade the primary server
3. Wait for it to become available
4. Repeat steps 1-3 for the secondary server
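The "wait for it to become available" step can be automated by polling the server's standard `/-/ready` endpoint. A minimal sketch in Python; the URL and timeout values are hypothetical, and this is not part of the proxy itself:

```python
import time
import urllib.request

def is_ready(url):
    """Return True if the Prometheus /-/ready endpoint answers 200."""
    try:
        with urllib.request.urlopen(url + "/-/ready", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def wait_until_ready(url, check=is_ready, timeout=120, interval=1.0):
    """Poll the server until it reports ready or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check(url):
            return True
        time.sleep(interval)
    return False
```

Only after `wait_until_ready` returns `True` is it safe to move on to the secondary server.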
Even if you do everything right, you may still see gaps in Grafana.
The Solution
To prevent gaps in Grafana, we send the same query to several servers and merge the responses into one unified response using a pretty naive JSON merging technique.
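The gap-filling idea can be sketched as follows. This is an illustration of the technique, not the proxy's actual code: for range (`matrix`) responses, union the samples of each label set across servers, so a sample missing on one server is taken from another.

```python
def merge_matrix(responses):
    """Naively merge Prometheus query_range ("matrix") responses.

    For each distinct label set, union the (timestamp, value) samples
    from all servers, so a gap on one server is filled from another.
    """
    series = {}  # label set -> {timestamp: value}
    for resp in responses:
        for item in resp["data"]["result"]:
            key = tuple(sorted(item["metric"].items()))
            samples = series.setdefault(key, {})
            for ts, val in item["values"]:
                samples.setdefault(ts, val)  # first server wins on conflicts
    return {
        "status": "success",
        "data": {
            "resultType": "matrix",
            "result": [
                {"metric": dict(key), "values": sorted(samples.items())}
                for key, samples in series.items()
            ],
        },
    }

# Server A restarted and is missing the sample at t=120.
a = {"status": "success", "data": {"resultType": "matrix", "result": [
    {"metric": {"job": "node"}, "values": [[60, "1"], [180, "1"]]}]}}
b = {"status": "success", "data": {"resultType": "matrix", "result": [
    {"metric": {"job": "node"}, "values": [[60, "1"], [120, "1"], [180, "1"]]}]}}

merged = merge_matrix([a, b])
```

In the merged result the series from server A no longer has a hole at t=120, which is exactly what keeps Grafana graphs continuous across a restart.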
How to try it out
docker run -d -p 9090:9090 anchorfree/prometheus-ha-proxy:master http://server1:9090 http://server2:9090
What this project is
This project is a simple and pretty dumb Prometheus output merger. With it you can:
- Use one endpoint in Grafana instead of several servers
- Have a truly highly available Prometheus setup for Grafana
- Start this proxy and do upgrades/restarts without any problems
What this project is NOT
- It doesn't do any kind of value aggregation over datasets, e.g. you cannot calculate total bandwidth across all of your datacenters; you need a federation server for that.
- It doesn't solve any HA problems for Alertmanager, or any querying issues.
Current state and future plans
We wanted to build a proof of concept and share our answer to the Grafana gaps problem we had with Prometheus at Anchorfree.
The planned evolution of this tool:
- Integrate the Prometheus web UI, to make the proxy look and feel like a real Prometheus server (currently only Grafana and curl queries are possible)
- Use remote reads + local TSDB storage to provide a single point of contact for our Ops team
Known problems and limitations
- If you count() something over a specific time range, and that range happens to include the restart of one Prometheus server, you will get incorrect results (a gap in the data). To prevent this you can use a federation server and run count() over the aggregated data, though this may not be applicable to every case.
- topk() can return more than the expected number of values.
- Instant vector output is merged naively without deduplication, which means double the results.
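The count() limitation is easy to see with toy numbers (a hypothetical example, not output of the proxy): a server that was restarting during part of the queried window simply has fewer samples in that window, and counting over them silently underreports.

```python
# One sample every 60 s over a 10-minute window = 10 samples.
full_window = [60 * i for i in range(1, 11)]

# The server was down between t=240 and t=420, so it is missing
# the samples at t=300, t=360, and t=420.
after_restart = [t for t in full_window if not 240 < t <= 420]

# A count_over_time-style count just counts the samples present,
# so the gap lowers the result with no error or warning.
assert len(full_window) == 10
assert len(after_restart) == 7
```

A federation server that scraped both replicas would still have all 10 samples, which is why counting over federated data avoids the problem.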
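The instant-vector duplication is likewise easy to reproduce with a toy merge (a sketch of the naive behavior, not the proxy's code): concatenating the `result` arrays from two healthy servers returns every series twice, since both servers scrape the same targets.

```python
def merge_vectors_naively(responses):
    """Concatenate instant-vector results without deduplication."""
    merged = []
    for resp in responses:
        merged.extend(resp["data"]["result"])
    return merged

# Both servers scrape the same target, so both return the same series.
up = {"data": {"result": [
    {"metric": {"job": "node"}, "value": [1700000000, "1"]}]}}

doubled = merge_vectors_naively([up, up])
```

Here `doubled` contains the `{"job": "node"}` series twice, which is the "double the results" effect described above.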