# GoAlert Provisioning Operator

A Kubernetes operator that manages GoAlert resources via CRDs. It reconciles users, rotations, schedules, escalation policies, and services against GoAlert's GraphQL API, and syncs generated integration keys back to Kubernetes Secrets.
## Install

Requires a Kubernetes 1.26+ cluster, Helm 3, and a GoAlert instance with an admin API key.

```shell
helm install goalert-provisioning ./charts/goalert-provisioning \
  --namespace goalert-provisioning-system \
  --create-namespace \
  --set goalert.url=https://goalert.example.com \
  --set goalert.apiKey=your-admin-api-key
```
| Parameter | Description | Default |
| --- | --- | --- |
| `goalert.url` | GoAlert base URL | (required) |
| `goalert.apiKey` | GoAlert admin API key | (required) |
| `goalert.existingSecret` | Use a pre-existing Secret instead | `""` |
| `metrics.service.enabled` | Create metrics Service (use when scraping via PodMonitor / annotations) | `false` |
| `serviceMonitor.enabled` | Create metrics Service + ServiceMonitor for Prometheus | `false` |
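To keep the admin API key off the Helm command line, `goalert.existingSecret` lets you point the chart at a Secret you create yourself. A sketch of that Secret is below; the key names inside it (`url`, `apiKey`) are an assumption here — check the chart's templates in charts/goalert-provisioning for the names it actually expects.

```yaml
# Hypothetical example — verify the expected key names against the chart templates.
apiVersion: v1
kind: Secret
metadata:
  name: goalert-credentials
  namespace: goalert-provisioning-system
stringData:
  url: https://goalert.example.com
  apiKey: your-admin-api-key
```

Then install with `--set goalert.existingSecret=goalert-credentials` and omit `goalert.url` / `goalert.apiKey` from the command line.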
See docs/deployment/production.md for advanced configuration.
## Metrics

The operator exposes standard kubebuilder / controller-runtime metrics on `:8080/metrics`: `controller_runtime_reconcile_total`, `controller_runtime_reconcile_time_seconds_bucket`, the operator's own `goalert_provisioning_managed_resources` gauge, and Go runtime metrics.

Scraping is OFF by default. To make metrics reachable from Prometheus, set `serviceMonitor.enabled: true` in your Helm values:
```yaml
serviceMonitor:
  enabled: true
  # Add labels here if your Prometheus / Alloy filters ServiceMonitors by label
  # (e.g. `release: prometheus` for kube-prometheus-stack).
  labels: {}
  interval: 30s
```
This creates:

- A Service named `<release>-metrics` exposing port 8080.
- A ServiceMonitor selecting that Service. `jobLabel: app.kubernetes.io/name` sets Prometheus's `job` label to `goalert-provisioning`; match this in your alert rules.
If you use a different scrape mechanism (PodMonitor, Alloy with custom discovery, annotation-based scraping), set `metrics.service.enabled: true` instead; that creates the Service without the ServiceMonitor, leaving the scrape config to whatever you've already wired up.

Reference alert rules and dashboard are in `examples/observability/`.
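As a sketch of an alert keyed to that `job` label, the PrometheusRule below fires when reconciles are failing. The rule name and threshold are illustrative, not shipped by the chart — see `examples/observability/` for the maintained rules.

```yaml
# Illustrative only — the chart does not install this rule.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: goalert-provisioning-alerts
spec:
  groups:
    - name: goalert-provisioning
      rules:
        - alert: GoAlertProvisioningReconcileErrors
          # controller-runtime labels each reconcile result (success/error/requeue).
          expr: |
            sum(rate(controller_runtime_reconcile_total{job="goalert-provisioning", result="error"}[5m])) > 0
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: GoAlert provisioning operator is failing reconciles
```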
## Usage

Create a `GoAlertService` to have the operator provision a service in GoAlert, generate integration keys, and write them to a Secret:
```yaml
apiVersion: goalert.heystaq.com/v1alpha1
kind: GoAlertService
metadata:
  name: my-app-alerts
spec:
  serviceName: "My Application (Production)"
  escalationPolicyRef:
    name: production-oncall
  integrationKeys:
    - name: grafana
      secretRef:
        name: my-app-goalert-keys
        key: grafana-token
```

```shell
kubectl apply -f my-app-alerts.yaml
```
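The resulting Secret is a plain Kubernetes Secret, so workloads consume it in the usual ways. A minimal sketch follows; the Deployment itself is hypothetical, and only the Secret name and key come from the example above.

```yaml
# Hypothetical consumer — injects the generated Grafana integration key as an env var.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest
          env:
            - name: GOALERT_GRAFANA_TOKEN
              valueFrom:
                secretKeyRef:
                  name: my-app-goalert-keys   # written by the operator
                  key: grafana-token
```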
## gctl CLI

`gctl` is a standalone CLI for GoAlert incident response and RCA investigations. It uses session authentication (not API keys) and works independently of the operator.

```shell
# Install from source
go install gitlab.aknostic.com/aknostic/goalert-provisioning/cmd/gctl@latest

# Or extract from the container image
docker cp $(docker create --rm registry.gitlab.aknostic.com/aknostic/goalert-provisioning:latest):/gctl /usr/local/bin/gctl
```

```shell
gctl login --url https://goalert.example.com
gctl oncall
gctl alert list
gctl service status my-service
```
See docs/gctl.md for the full command reference and RCA walkthrough.
CRDs
| Kind |
Description |
GoAlertUser |
Users with contact methods (SMS, voice, email, Slack) |
GoAlertRotation |
On-call rotations with participants |
GoAlertSchedule |
Schedules with time-based assignment rules |
GoAlertEscalationPolicy |
Multi-step escalation policies |
GoAlertService |
Services with integration keys synced to Secrets |
GoAlertAdmin |
GoAlert system configuration |
See examples/ for complete configurations covering all CRDs.
## Brownfield Adoption

The operator can be deployed against an existing GoAlert instance. It automatically adopts pre-existing resources (integration keys, heartbeat monitors, contact methods) instead of failing with "already exists" errors. See docs/adoption.md for details.
## Documentation

Additional guides live in docs/ (deployment, gctl, brownfield adoption).
## License

Apache License 2.0. See LICENSE.