# Gopher-Guard

A highly available, distributed rate-limiting microservice written in Go. Gopher-Guard implements a strongly consistent, fault-tolerant cluster using the Raft Consensus Algorithm, enforcing API rate limits across multiple physical nodes without external dependencies like Redis.
## Technical highlights
- Consensus & High Availability: uses github.com/hashicorp/raft for leader election and log replication.
- Contract-First RPC: gRPC and Protocol Buffers for strict typing and high-performance RPCs.
- Sliding Window Engine: thread-safe sliding-window rate limiter, guarded by sync.RWMutex, that smooths out window-boundary spikes (see the sketch after this list).
- Embedded Storage: bbolt for fast, embedded persistence of Raft logs and stable state.
- Background Janitor: Periodic goroutine that sweeps the in-memory FSM to remove stale entries.
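
For intuition, the sliding-window engine and janitor can be pictured with the minimal sketch below. This is an illustrative, single-node approximation with hypothetical type and field names, not the project's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// window holds counts for the previous and current fixed windows; the two
// are blended into a weighted count to approximate a true sliding window.
type window struct {
	prevCount int64
	currCount int64
	currStart time.Time
}

// Limiter is a minimal, thread-safe sliding-window rate limiter.
type Limiter struct {
	mu      sync.RWMutex
	windows map[string]*window
}

func NewLimiter() *Limiter {
	return &Limiter{windows: make(map[string]*window)}
}

// Allow reports whether key may make another request under limit per window.
func (l *Limiter) Allow(key string, limit int64, per time.Duration) bool {
	now := time.Now()
	l.mu.Lock()
	defer l.mu.Unlock()

	w, ok := l.windows[key]
	if !ok {
		w = &window{currStart: now}
		l.windows[key] = w
	}

	// Roll the window forward once it has expired.
	if elapsed := now.Sub(w.currStart); elapsed >= per {
		if elapsed >= 2*per {
			w.prevCount = 0 // the previous window is too old to matter
		} else {
			w.prevCount = w.currCount
		}
		w.currCount = 0
		w.currStart = now
	}

	// Weight the previous window by how much of it still overlaps the
	// sliding window; this is what smooths out boundary spikes.
	overlap := 1 - float64(now.Sub(w.currStart))/float64(per)
	estimated := float64(w.prevCount)*overlap + float64(w.currCount)
	if int64(estimated) >= limit {
		return false
	}
	w.currCount++
	return true
}

// janitor periodically sweeps entries that have been idle too long.
func (l *Limiter) janitor(interval, maxIdle time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		l.mu.Lock()
		for key, w := range l.windows {
			if time.Since(w.currStart) > maxIdle {
				delete(l.windows, key)
			}
		}
		l.mu.Unlock()
	}
}

func main() {
	l := NewLimiter()
	go l.janitor(time.Minute, 10*time.Minute)

	for i := 0; i < 5; i++ {
		fmt.Println("allowed:", l.Allow("user_123", 3, time.Minute))
	}
}
```

The weighted blend of the previous and current window counts is what avoids the burst a naive fixed-window counter would allow at window boundaries.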
## Architecture
```
                 [ Client ]
                     │  (gRPC / HTTP/2)
                     ▼
        [ NGINX Load Balancer ]  (Port 80)
                     │
                     ▼  (Round-Robin with Failover)
     [ gRPC Server (Leader) ] ───► [ Sliding Window Engine ]
                     │
                     ▼  (Propose Log)
           [ HashiCorp Raft ] ───► [ BoltDB ]
                     │
                     ▼  (Log Replication)
           [ Node 2, 3, 4, 5 ]
```
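
The "Propose Log" arrow is the leader replicating each decision through Raft before answering the client. Below is a rough sketch of that path using github.com/hashicorp/raft; the command shape and FSM here are assumptions for illustration, not the project's actual types:

```go
package guard

import (
	"encoding/json"
	"io"
	"time"

	"github.com/hashicorp/raft"
)

// command is a hypothetical log entry replicated to every node.
type command struct {
	Key   string        `json:"key"`
	Limit int64         `json:"limit"`
	Per   time.Duration `json:"per"`
}

// fsm applies committed commands to the in-memory rate-limit state.
type fsm struct {
	// e.g. the sliding-window Limiter sketched earlier
}

// Apply runs on every node once a log entry is committed by the quorum.
func (f *fsm) Apply(l *raft.Log) interface{} {
	var cmd command
	if err := json.Unmarshal(l.Data, &cmd); err != nil {
		return err
	}
	// ...update the sliding-window state for cmd.Key here...
	return nil
}

// Snapshot/Restore are required by raft.FSM; omitted in this sketch.
func (f *fsm) Snapshot() (raft.FSMSnapshot, error) { return nil, nil }
func (f *fsm) Restore(rc io.ReadCloser) error      { return rc.Close() }

// propose shows how the leader would replicate a command before answering.
func propose(r *raft.Raft, cmd command) error {
	data, err := json.Marshal(cmd)
	if err != nil {
		return err
	}
	// Apply blocks until the entry is committed (or the timeout expires).
	return r.Apply(data, 500*time.Millisecond).Error()
}
```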
## Prerequisites
- Go 1.21+
- protoc (Protocol Buffers compiler)
- protoc-gen-go and protoc-gen-go-grpc plugins (install commands below)
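
If the plugins are not already installed, they can be added with the standard Go install commands:

```sh
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
```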
## Getting Started: Deploying the Cluster
Gopher-Guard is a self-hosted infrastructure tool. You must first boot the cluster before your applications can use it. The project now provides a Docker Compose configuration that boots a multi-node cluster (5 nodes by default) plus an NGINX load balancer and observability stack (Prometheus + Grafana). This is the recommended way to start a reproducible local environment.
### 1. Start the stack

```sh
docker compose up --build -d
```
This will build the image and start the nodes. Containers expose the following useful ports on localhost:
- gRPC / API (proxied via NGINX): 80 (the client in cmd/client connects to 127.0.0.1:80)
- Node admin ports: 8080-8084 (node-specific admin endpoints)
- Prometheus: 9090
- Grafana: 3000 (login: admin / admin)
### 2. Automatic Cluster Formation
Gopher-Guard features a fully automated "Zero-Touch" deployment. When you run the docker compose command above, a temporary bootstrapper container runs in the background and automatically links the 5 nodes together to form the Raft quorum. No manual setup is required!
To verify the cluster formed successfully, you can check the bootstrapper logs:
```sh
docker logs gopher-bootstrapper
```
Legacy: there is still a start_cluster.sh script for simple local bootstrapping without Docker. The Docker Compose flow is preferred for reproducible, observable environments.
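
Conceptually, whichever path you use, forming the quorum boils down to adding each follower as a voter on the leader. Here is a hedged sketch with hashicorp/raft, using placeholder node IDs and addresses; the real bootstrapper drives this through the nodes' admin endpoints:

```go
package bootstrap

import (
	"time"

	"github.com/hashicorp/raft"
)

// joinFollowers adds each follower as a voting member of the leader's
// configuration. IDs and addresses are placeholders for illustration.
func joinFollowers(leader *raft.Raft, followers map[string]string) error {
	for id, addr := range followers {
		f := leader.AddVoter(raft.ServerID(id), raft.ServerAddress(addr), 0, 10*time.Second)
		if err := f.Error(); err != nil {
			return err
		}
	}
	return nil
}
```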
## Integrating the Go SDK
Gopher-Guard provides a robust, auto-retrying Go client SDK. You do not need to manage gRPC connections or Raft leader failovers yourself; the SDK handles it automatically.
1. Install the SDK:

   ```sh
   go get github.com/AbhinavG786/Gopher-Guard/client
   ```
2. Protect your API (check out the /examples folder for a fully reproducible integration environment).
Example:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/AbhinavG786/Gopher-Guard/client"
)

func main() {
	// Connect to your Gopher-Guard load balancer.
	guard, err := client.New("localhost:80")
	if err != nil {
		panic(err)
	}
	defer guard.Close()

	// Ask the cluster for permission (e.g., 100 requests per minute).
	allowed, remaining, err := guard.Allow(context.Background(), "user_123", 100, time.Minute)
	if err != nil {
		fmt.Println("Cluster unavailable. Failsafe activated.")
		return
	}

	if !allowed {
		fmt.Println("HTTP 429: Too Many Requests")
		return
	}

	fmt.Printf("HTTP 200: Request processed. Remaining quota: %d\n", remaining)
}
```
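
In a real service the SDK call usually lives in HTTP middleware. Here is a minimal sketch, assuming client.New returns a *client.Client (as in the example above) and that a header-based key suits your API; both are assumptions for illustration, not the SDK's documented surface:

```go
package middleware

import (
	"fmt"
	"net/http"
	"time"

	"github.com/AbhinavG786/Gopher-Guard/client"
)

// RateLimit consults Gopher-Guard before passing the request to next.
// The key extraction and fail-open policy below are illustrative choices.
func RateLimit(guard *client.Client, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		key := r.Header.Get("X-API-Key") // hypothetical per-caller key

		allowed, remaining, err := guard.Allow(r.Context(), key, 100, time.Minute)
		if err != nil {
			// Cluster unreachable: fail open here; fail closed if you prefer.
			next.ServeHTTP(w, r)
			return
		}
		if !allowed {
			http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
			return
		}

		w.Header().Set("X-RateLimit-Remaining", fmt.Sprintf("%d", remaining))
		next.ServeHTTP(w, r)
	})
}
```

Whether to fail open or fail closed when the cluster is unreachable is a policy decision for your service.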
## Simulating fault tolerance
To test leader failover with Docker Compose:
- Identify the leader via container logs (look for election/leadership messages) or check the Grafana dashboard.
- Simulate an immediate crash (assassination test). To emulate a sudden crash (SIGKILL) and observe rapid leader re-election, use docker kill instead of docker stop:

  ```sh
  docker kill gopher-node-1
  ```

- Watch the logs of the other nodes to observe a new election and leadership change:

  ```sh
  docker logs -f gopher-node-2
  ```

The cluster should elect a new leader and continue serving requests without data loss.
## Observability & Dashboards
- Prometheus: scrapes metrics from all nodes every 5s (configured in prometheus.yml).
- Grafana: Dashboard is pre-provisioned via the repository's provisioning directory and is available at http://localhost:3000.
- Anonymous Access: Grafana is configured for anonymous read access, so the dashboard is viewable immediately without logging in.
Open Grafana to inspect cluster health, per-node request rates, and rate-limit spikes.
## Dependencies
- google.golang.org/grpc – gRPC framework
- github.com/hashicorp/raft – consensus algorithm
- github.com/hashicorp/raft-boltdb/v2 – Raft storage backend
- go.etcd.io/bbolt (bbolt) – embedded key/value store
- github.com/joho/godotenv – environment configuration helper
For development questions or help running the cluster, open an issue or contact the maintainer.
## License
This project is licensed under the MIT License. See the LICENSE file for details.