πŸ›‘οΈ Gopher-Guard

A highly available, distributed rate-limiting microservice written in Go. Gopher-Guard implements a strongly consistent, fault-tolerant cluster using the Raft Consensus Algorithm, enforcing API rate limits across multiple physical nodes without external dependencies like Redis.

🚀 Technical highlights

  • Consensus & High Availability: Uses github.com/hashicorp/raft for leader election and log replication.
  • Contract-First RPC: gRPC and Protocol Buffers for strict typing and high-performance RPCs.
  • Sliding Window Engine: Thread-safe sliding-window rate limiter guarded by sync.RWMutex; the sliding window avoids the boundary spikes of fixed windows (see the sketch after this list).
  • Embedded Storage: bbolt for fast, embedded persistence of Raft logs and stable state.
  • Background Janitor: Periodic goroutine that sweeps the in-memory FSM to remove stale entries.
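
For context on the engine design: a fixed window resets its counter at each boundary, so a client can burst at the edge of two adjacent windows; a sliding window counts only the requests inside the trailing interval. Below is a minimal, self-contained sketch of that idea in Go. It is not the project's actual engine; the type and method names are illustrative.

package limiter

import (
	"sync"
	"time"
)

// SlidingWindow tracks request timestamps per key and counts only
// those inside the trailing window, so the count rolls smoothly
// instead of resetting at fixed boundaries.
type SlidingWindow struct {
	mu      sync.RWMutex // RWMutex allows concurrent read-only lookups elsewhere
	history map[string][]time.Time
}

func NewSlidingWindow() *SlidingWindow {
	return &SlidingWindow{history: make(map[string][]time.Time)}
}

// Allow records the request and reports whether key is still within
// `limit` requests per `window`.
func (s *SlidingWindow) Allow(key string, limit int, window time.Duration) bool {
	s.mu.Lock()
	defer s.mu.Unlock()

	cutoff := time.Now().Add(-window)
	kept := s.history[key][:0]
	for _, t := range s.history[key] {
		if t.After(cutoff) { // keep only timestamps inside the window
			kept = append(kept, t)
		}
	}
	if len(kept) >= limit {
		s.history[key] = kept
		return false
	}
	s.history[key] = append(kept, time.Now())
	return true
}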

πŸ—οΈ Architecture

[ Client ]
    │ (gRPC / HTTP/2)
    ▼
[ NGINX Load Balancer ] (Port 80)
    │
    ▼ (Round-Robin with Failover)
[ gRPC Server (Leader) ] ──► [ Sliding Window Engine ]
    │                         │
    │                         ▼ (Propose Log)
    └─► [ HashiCorp Raft ] ◄──► [ BoltDB ]
             │
      (Log Replication) ──► [ Node 2, 3, 4, 5 ]
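
State-mutating rate-limit decisions are proposed as Raft log entries and applied on every node through a finite-state machine. As a hedged illustration, this is the shape that github.com/hashicorp/raft's FSM interface requires; the command encoding and field names below are assumptions, not the project's actual types.

package fsm

import (
	"encoding/json"
	"io"
	"sync"

	"github.com/hashicorp/raft"
)

// command is a hypothetical log-entry payload; the real project
// likely encodes commands with Protocol Buffers.
type command struct {
	Key string `json:"key"`
	TS  int64  `json:"ts"` // request timestamp (Unix nanos)
}

// limiterFSM is the replicated state every node rebuilds by
// applying committed log entries in order.
type limiterFSM struct {
	mu    sync.Mutex
	state map[string][]int64
}

// Apply runs once an entry is committed by a quorum.
func (f *limiterFSM) Apply(l *raft.Log) interface{} {
	var c command
	if err := json.Unmarshal(l.Data, &c); err != nil {
		return err
	}
	f.mu.Lock()
	defer f.mu.Unlock()
	f.state[c.Key] = append(f.state[c.Key], c.TS)
	return nil
}

// Snapshot and Restore let Raft compact its log; elided in this sketch.
func (f *limiterFSM) Snapshot() (raft.FSMSnapshot, error) { return nil, nil }
func (f *limiterFSM) Restore(rc io.ReadCloser) error      { return rc.Close() }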

Prerequisites

  • Go 1.21+
  • protoc (Protocol Buffers compiler)
  • protoc-gen-go and protoc-gen-go-grpc plugins
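
If you change the .proto definitions, regenerate the Go stubs before building. A typical invocation looks like the following; the exact proto path is an assumption, so check the repository's scripts for the canonical command.

protoc --go_out=. --go_opt=paths=source_relative \
       --go-grpc_out=. --go-grpc_opt=paths=source_relative \
       pb/*.proto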

Getting Started: Deploying the Cluster

Gopher-Guard is a self-hosted infrastructure tool. You must first boot the cluster before your applications can use it. The project now provides a Docker Compose configuration that boots a multi-node cluster (5 nodes by default) plus an NGINX load balancer and observability stack (Prometheus + Grafana). This is the recommended way to start a reproducible local environment.

1. Start the stack

docker compose up --build -d

This will build the image and start the nodes. Containers expose the following useful ports on localhost:

  • gRPC / API (proxied via NGINX): 80 (the client in cmd/client connects to 127.0.0.1:80)
  • Node admin ports: 8080..8084 (node-specific admin endpoints)
  • Prometheus: 9090
  • Grafana: 3000 (login: admin / admin)
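
To confirm the stack is healthy, list the containers and probe Prometheus's readiness endpoint:

docker compose ps
curl -s http://localhost:9090/-/ready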

2. Automatic Cluster Formation

Gopher-Guard features a fully automated "Zero-Touch" deployment. When you run the docker compose command above, a temporary bootstrapper container runs in the background and automatically links the 5 nodes together to form the Raft quorum. No manual setup is required!

To verify the cluster formed successfully, you can check the bootstrapper logs:

docker logs gopher-bootstrapper
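
Under the hood, forming the quorum amounts to asking the leader to add each follower as a voting member, which github.com/hashicorp/raft exposes as AddVoter. The sketch below shows how a node-side join handler is commonly wired up; the admin route and query parameters are assumptions, not Gopher-Guard's documented API.

package admin

import (
	"net/http"
	"time"

	"github.com/hashicorp/raft"
)

// joinHandler is a hypothetical admin endpoint; the bootstrapper
// would call it once per follower. AddVoter only succeeds on the
// current leader, which is why the bootstrapper targets the leader.
func joinHandler(r *raft.Raft) http.HandlerFunc {
	return func(w http.ResponseWriter, req *http.Request) {
		nodeID := req.URL.Query().Get("id")     // e.g. "node-2" (assumed parameter)
		raftAddr := req.URL.Query().Get("addr") // e.g. "node-2:7000" (assumed parameter)

		// Propose a configuration change adding the node as a voter.
		f := r.AddVoter(raft.ServerID(nodeID), raft.ServerAddress(raftAddr), 0, 10*time.Second)
		if err := f.Error(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	}
}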

Legacy: a start_cluster.sh script is still available for simple local bootstrapping without Docker. The Docker Compose flow is preferred for reproducible, observable environments.

🔌 Integrating the Go SDK

Gopher-Guard provides a robust, auto-retrying Go client SDK. You do not need to manage gRPC connections or Raft leader failover yourself; the SDK handles both automatically.

1. Install the SDK:

go get github.com/AbhinavG786/Gopher-Guard/client

2. Protect your API (💡 check out the /examples folder for a fully reproducible integration environment):

Example:

package main

import (
	"context"
	"fmt"
	"time"
	"[github.com/AbhinavG786/Gopher-Guard/client](https://github.com/AbhinavG786/Gopher-Guard/client)"
)

func main() {
	// Connect to your Gopher-Guard load balancer
	guard, err := client.New("localhost:80")
	if err != nil {
		panic(err)
	}
	defer guard.Close()

	// Ask the cluster for permission (e.g., 100 requests per minute)
	allowed, remaining, err := guard.Allow(context.Background(), "user_123", 100, time.Minute)
	
	if err != nil {
		fmt.Println("⚠️ Cluster unavailable. Failsafe activated.")
		return
	}

	if !allowed {
		fmt.Println("🚫 HTTP 429: Too Many Requests")
		return
	}

	fmt.Printf("✅ HTTP 200: Request processed. Remaining quota: %d\n", remaining)
}
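
In a real service the Allow call usually lives in middleware rather than main. Here is a sketch with net/http, reusing the client from the example above; the *client.Client type name, the per-key header, and the remaining counter's integer type are assumptions.

package middleware

import (
	"net/http"
	"strconv"
	"time"

	"github.com/AbhinavG786/Gopher-Guard/client"
)

// rateLimit consults Gopher-Guard before passing a request on.
// client.Client is assumed to be the SDK's exported type.
func rateLimit(guard *client.Client, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		key := r.Header.Get("X-API-Key") // illustrative per-caller key
		allowed, remaining, err := guard.Allow(r.Context(), key, 100, time.Minute)
		if err != nil {
			// Failsafe policy: this sketch fails closed; failing
			// open is equally valid depending on your traffic.
			http.Error(w, "rate limiter unavailable", http.StatusServiceUnavailable)
			return
		}
		w.Header().Set("X-RateLimit-Remaining", strconv.Itoa(int(remaining)))
		if !allowed {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}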

🧪 Simulating fault tolerance

To test leader failover with Docker Compose:

  1. Identify the leader via container logs (look for election/leadership messages) or check the Grafana dashboard.
  2. Simulate an immediate crash (assassination test). To emulate a sudden crash (SIGKILL) and observe rapid leader re-election, use docker kill instead of docker stop:

docker kill gopher-node-1

  3. Watch the logs of the other nodes to observe a new election and leadership change:

docker logs -f gopher-node-2

The cluster should elect a new leader and continue serving requests without data loss.
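
A simple way to measure the failover window is to probe the cluster continuously while you kill the leader. A small sketch using the SDK from the integration example:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/AbhinavG786/Gopher-Guard/client"
)

func main() {
	guard, err := client.New("localhost:80")
	if err != nil {
		panic(err)
	}
	defer guard.Close()

	// One probe every 200ms: during re-election you should see a
	// short burst of errors, after which responses resume.
	for {
		allowed, _, err := guard.Allow(context.Background(), "probe", 1000, time.Minute)
		if err != nil {
			fmt.Println("probe failed (election in progress?):", err)
		} else {
			fmt.Println("ok, allowed =", allowed)
		}
		time.Sleep(200 * time.Millisecond)
	}
}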

📊 Observability & Dashboards

  • Prometheus: Scrapes metrics from all nodes every 5s (configured in prometheus.yml).
  • Grafana: Dashboard is pre-provisioned via the repository's provisioning directory and is available at http://localhost:3000.
  • Anonymous Access: Grafana is configured for anonymous read access so the dashboard is immediately viewable (no manual provisioning required).

Open Grafana to inspect cluster health, per-node request rates, and rate-limit spikes.

📦 Dependencies

  • google.golang.org/grpc – gRPC framework
  • github.com/hashicorp/raft – Consensus algorithm
  • github.com/hashicorp/raft-boltdb/v2 – Raft storage backend
  • go.etcd.io/bbolt (bbolt) – embedded key/value store
  • github.com/joho/godotenv – environment configuration helper

For development questions or help running the cluster, open an issue or contact the maintainer.

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.

