go-gin-react-playground
This project is a simple petshop-style application. It is meant to serve as a personal playground for experimenting with the tech and as a framework for future, more serious applications. It consists of a frontend written in React (although I consider myself a mediocre frontend dev) and a backend written in Go. The backend utilises some popular Go libraries, such as Gin and GORM. The project is meant to be deployed on Kubernetes and provides a Helm chart to do so. It also has a fully featured GitLab pipeline to help with that. Other features include database schema migration through Liquibase, routing production traffic through Cloudflare, authorization of the REST API through API tokens stored in Redis, and a rate limiter.
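To give a rough idea of the backend's shape, below is a minimal, hedged sketch of a Gin route backed by a GORM model. The model fields, DSN and wiring are illustrative assumptions, not the project's actual code:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

// User is an illustrative model only; the real project's schema may differ.
type User struct {
	ID   string `gorm:"primaryKey" json:"id"`
	Name string `json:"name"`
}

func main() {
	// Placeholder DSN; the real app reads its configuration from a properties file.
	db, err := gorm.Open(postgres.Open("host=localhost user=postgres dbname=playground"), &gorm.Config{})
	if err != nil {
		panic(err)
	}

	r := gin.Default()
	r.GET("/api/v1/user", func(c *gin.Context) {
		var users []User
		if err := db.Find(&users).Error; err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
		c.JSON(http.StatusOK, users)
	})
	_ = r.Run(":5000")
}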
Build
Backend
For local run:
CGO_ENABLED=0 go build
For deployment:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build
Frontend
cd frontend/
npm install
npm run-script build
Run locally
Backend
docker-compose up
./go-gin-react-playground --properties dev/properties.yml
# to cleanup:
docker-compose down && rm -rf _docker_compose_volumes/
Use the following commands to test the API:
GOGIN_HOST="http://localhost:5000"
# retrieve users
curl -s $GOGIN_HOST/api/v1/user | jq
curl -s $GOGIN_HOST/api/v1/user/0098d5b6-5986-4ffe-831f-5c3a59aeef50 | jq
# get access token
TOKEN=$(curl -X POST -H "Content-Type: application/json" -d '{"username":"username","password":"password"}' \
-s $GOGIN_HOST/api/v1/login | jq -r '.token')
# add user
curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" \
-d '{"name":"xxx","creditCards":[{"number":"0000 0000 0000 0000"}]}' $GOGIN_HOST/api/v1/user | jq
# modify user
curl -v -X PUT -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" \
-d '{"name":"John Doe","creditCards":[{"number":"1111 1111 1111 1111"}, {"number":"2222 2222 2222 2222"}]}' \
$GOGIN_HOST/api/v1/user/0098d5b6-5986-4ffe-831f-5c3a59aeef50
# delete user
curl -v -X DELETE -H "Authorization: Bearer $TOKEN" \
$GOGIN_HOST/api/v1/user/0098d5b6-5986-4ffe-831f-5c3a59aeef50 | jq
# to generate more test data
dev/random_data.sh "$GOGIN_HOST" "username" "password"
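The same flow can also be exercised from Go. The sketch below mirrors the login and "add user" calls above; endpoints and payloads are taken from the curl examples, and error handling is kept minimal:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const host = "http://localhost:5000"

func main() {
	// Obtain an access token, mirroring the curl login call above.
	loginBody, _ := json.Marshal(map[string]string{
		"username": "username",
		"password": "password",
	})
	resp, err := http.Post(host+"/api/v1/login", "application/json", bytes.NewReader(loginBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var login struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&login); err != nil {
		panic(err)
	}

	// Create a user with the bearer token, mirroring the "add user" call.
	userBody, _ := json.Marshal(map[string]interface{}{
		"name": "xxx",
		"creditCards": []map[string]string{
			{"number": "0000 0000 0000 0000"},
		},
	})
	req, _ := http.NewRequest(http.MethodPost, host+"/api/v1/user", bytes.NewReader(userBody))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+login.Token)

	createResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer createResp.Body.Close()
	fmt.Println("create user status:", createResp.Status)
}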
Frontend
cd frontend/
npm install
npm start
Visit http://localhost:8080/ in a browser.
Deploy on Kubernetes
- Make sure you have Helm installed (https://helm.sh/docs/intro/install/)
Prepare cluster
Install nginx ingress controller
Install the nginx ingress controller using Helm. The value of replicaCount depends on your needs and can be increased.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace=ingress-nginx \
--set controller.service.externalTrafficPolicy="Local" \
--set controller.replicaCount=1
Wait for the controller to wake up:
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
In Docker Desktop the Ingress controller listens on 127.0.0.1, on ports 80 and 443. In cloud environments, such as Azure, a public IP address is allocated for the controller, which may introduce some additional costs. Keep in mind the Ingress controller runs all the time, regardless of whether any Ingress resource is actually deployed. If no Ingress resources are deployed, the controller redirects all requests to the default-backend, which just returns 404 and a fake certificate.
Create a namespace for the project
kubectl create namespace go-gin
kubectl config set-context --current --namespace=go-gin
Deploy development version on Docker Desktop
Build Docker images locally with the dev tag:
dev/build.sh
Start the development environment. The script will upload application secrets to the cluster, start external dependencies such as Redis and Postgres, and feed the database with some test data:
dev/kubernetes-env/up.sh
Deploy:
deployment/deploy.sh
In order to clean up, simply run:
deployment/undeploy.sh
dev/kubernetes-env/down.sh
Deploy development version on any cluster
Build Docker images locally with the dev tag:
dev/build.sh
Push Docker images to the registry. Both your workstation and the target cluster should be able to access the specified registry:
dev/push.sh 192.168.1.100:32000
Start the development environment. The script will upload application secrets to the cluster, start external dependencies such as Redis and Postgres, and feed the database with some test data:
dev/kubernetes-env/up.sh
Deploy (remember to specify the full names of the pushed images):
deployment/deploy.sh \
--set backend.imageName="192.168.1.100:32000/go-gin-react-playground/backend" \
--set frontend.imageName="192.168.1.100:32000/go-gin-react-playground/frontend"
In order to clean up, simply run:
deployment/undeploy.sh
dev/kubernetes-env/down.sh
Deploy production version in the cloud
Provide access to GitLab registry
- Generate a GitLab personal access token with read_registry scope
- Generate AUTH_STRING with echo -n '<USERNAME>:<ACCESS_TOKEN>' | base64
- Create a docker.json file:
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "<AUTH_STRING>"
    }
  }
}
- Upload it to the cluster
kubectl create secret generic gitlab-docker-registry --namespace=kube-system \
--from-file=.dockerconfigjson=./docker.json --type="kubernetes.io/dockerconfigjson"
Create a secret with all the credentials
We assume the app running in the cloud makes use of external services provided as SaaS, such as Amazon RDS or Azure Database for PostgreSQL, and that these services are not part of the Kubernetes cluster itself. In this case we need to explicitly create a secret backend-secrets containing all the confidential properties required to run the app.
kubectl create secret generic backend-secrets \
--from-literal=postgres_dsn="<POSTGRES_DSN>" \
--from-literal=redis_dsn="<REDIS_DSN>" \
--from-literal=api_username="<API_USERNAME>" \
--from-literal=api_password="<API_PASSWORD>"
(Obviously replacing all the <placeholders> with the proper values.)
Generate HTTPS certificate
Either generate a self-signed cert
export DOMAIN="example.com"
openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout key.pem -out cert.pem -subj "/CN=$DOMAIN/O=$DOMAIN"
Or use Let's Encrypt to generate a proper one
# brew install certbot
export DOMAIN="example.com"
sudo certbot -d "$DOMAIN" --manual --preferred-challenges dns certonly
sudo cp "/etc/letsencrypt/live/$DOMAIN/fullchain.pem" ./cert.pem && sudo chown $USER ./cert.pem
sudo cp "/etc/letsencrypt/live/$DOMAIN/privkey.pem" ./key.pem && sudo chown $USER ./key.pem
# to renew later: sudo certbot renew -q
Then just upload it
kubectl create secret tls domain-specific-tls-cert --key key.pem --cert cert.pem
(Optional) Add security headers
It's a good idea to improve security by telling the Ingress controller to send some additional security headers. Run:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: security-headers
  namespace: ingress-nginx
data:
  X-Frame-Options: "DENY"
  X-Content-Type-Options: "nosniff"
  X-XSS-Protection: "0"
  Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  add-headers: "ingress-nginx/security-headers"
EOF
Deploy
deployment/deploy.sh \
--set app.version="v1.0.4" \
--set backend.imageName="registry.gitlab.com/mkorman/go-gin-react-playground/backend" \
--set frontend.imageName="registry.gitlab.com/mkorman/go-gin-react-playground/frontend" \
--set images.pullSecret="kube-system/gitlab-docker-registry" \
--set ingress.hostname="example.com" \
--set ingress.stictHostCheck=true \
--set ingress.useHttps=true \
--set ingress.tlsCertName="domain-specific-tls-cert"
Clean up
deployment/undeploy.sh
(Optional) Configure logs collection
Graylog can be deployed with:
deployment/extras/graylog/deploy.sh
Graylog UI runs on port 9000. Default credentials are admin/admin. By default, UDP ports 12201 (GELF) and 1514 (syslog) are opened.
In order to configure Graylog to receive messages from the backend and frontend:
- Add two new inputs: GELF UDP on port 12201 and Syslog UDP on port 1514
- Add a new pipeline and attach it to the All messages stream
- Add two new rules to the pipeline
Rule 1
rule "parse backend logs"
when
starts_with(to_string($message.source), "backend-")
then
let json_tree = parse_json(to_string($message.message));
let json_fields = select_jsonpath(json_tree, {
time: "$.time",
level: "$.level",
message: "$.message",
error: "$.error",
stack: "$.stack",
status: "$.status",
method: "$.method",
path: "$.path",
ip: "$.ip",
request_id: "$.request_id",
latency: "$.latency",
user_agent: "$.user_agent"
});
set_field("timestamp", flex_parse_date(to_string(json_fields.time)));
set_field("log_level", to_string(json_fields.level));
set_field("message", to_string(json_fields.message));
set_field("error", to_string(json_fields.error));
set_field("stack", to_string(json_fields.stack));
set_field("status", to_string(json_fields.status));
set_field("method", to_string(json_fields.method));
set_field("path", to_string(json_fields.path));
set_field("ip", to_string(json_fields.ip));
set_field("request_id", to_string(json_fields.request_id));
set_field("latency", to_string(json_fields.latency));
set_field("user_agent", to_string(json_fields.user_agent));
remove_field("time");
remove_field("level");
remove_field("line");
remove_field("file");
end
Rule 2
rule "receive frontend logs"
when
starts_with(to_string($message.source), "frontend-")
then
end
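For reference, Rule 1 extracts fields from a JSON payload carried in the log message. Below is a minimal, hedged sketch in Go of what such a log entry might look like; the field names are taken from the rule, but the backend's actual logger output may differ in details:

package main

import (
	"encoding/json"
	"os"
	"time"
)

// Illustrative only: emits the kind of JSON payload the "parse backend logs"
// rule above expects; values here are placeholders.
func main() {
	entry := map[string]interface{}{
		"time":       time.Now().Format(time.RFC3339),
		"level":      "info",
		"message":    "request handled",
		"status":     200,
		"method":     "GET",
		"path":       "/api/v1/user",
		"ip":         "127.0.0.1",
		"request_id": "example-request-id",
		"latency":    "1.2ms",
		"user_agent": "curl/8.0.1",
	}
	_ = json.NewEncoder(os.Stdout).Encode(entry)
}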
We also need to make the app aware of Graylog by passing additional flags to the deployment/deploy.sh script when deploying the app:
deployment/deploy.sh \
...
--set backend.config.remoteLogging.enabled=true \
--set frontend.config.remoteLogging.enabled=true
...
In order to clean up, run:
deployment/extras/graylog/undeploy.sh
(Optional) Configure metrics collection
The application is configured to automatically publish metrics in a format recognized by Prometheus. All you need to do is deploy Prometheus to your cluster:
deployment/extras/prometheus/deploy.sh
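The metrics endpoint itself is exposed by the application. As a point of reference, here is a minimal, hedged sketch of exposing Prometheus metrics from a Gin app using the official client_golang library; the project's actual wiring may differ:

package main

import (
	"github.com/gin-gonic/gin"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	r := gin.Default()
	// Prometheus scrapes this endpoint; promhttp.Handler() serves the default
	// registry in the standard text exposition format.
	r.GET("/metrics", gin.WrapH(promhttp.Handler()))
	_ = r.Run(":5000")
}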
Prometheus can be easily cleaned up with:
deployment/extras/prometheus/undeploy.sh
Grafana can be deployed in a similar way:
deployment/extras/grafana/deploy.sh
Grafana UI runs on port 3000. Default credentials are admin/admin. The address of the Prometheus data source is http://prometheus.monitoring.svc.cluster.local.
To clean up Grafana run:
deployment/extras/grafana/undeploy.sh
(Random Notes) Deployment on Microsoft Azure
Provision AKS (Kubernetes cluster)
- Open the Azure console, navigate to Kubernetes Services and click Add -> Add Kubernetes cluster
- Enter Kubernetes cluster name, Region and choose Availability Zones
- Specify the number of nodes in the primary pool and their type. For a testing environment 1 node is enough; for production specify 3 or more nodes for high availability. A2_v2 is probably the cheapest node type and is more than enough for a testing environment. For production choose general-purpose nodes like D2s_v3.
- In the next tab you can specify more node pools.
- In the Authentication tab select System-assigned managed identity. ENABLE Role-based access control (RBAC) and DISABLE AKS-managed Azure Active Directory
- In the Networking tab set Network configuration to kubenet and make sure Enable HTTP application routing is DISABLED. Under Network policy you may consider choosing Calico - it will allow you to create NetworkPolicy resources. They WILL NOT WORK if you choose None.
- In the Integrations tab DISABLE both Container monitoring and Azure Policy.
- After clicking Create a couple of resources will be provisioned: Kubernetes Services, Virtual Network (aks-vnet-*), Virtual Machine Scale set and Public IP Address (used for egress traffic from the cluster).
- After the cluster successfully provisions, you'll be able to get its connection details by clicking Connect. The procedure consists of installing azure-cli and retrieving cluster credentials through it:
brew install azure-cli
az login
az account set --subscription <SUBSCRIPTION_ID>
az aks get-credentials --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME>
Provision PostgreSQL instance
- Open the Azure console, navigate to Azure Database for PostgreSQL servers and click New
- Select Single server (it claims to provide 99.99% availability)
- Enter details like Server Name and Location. Under Compute + storage select either the Basic tier for a testing environment or General Purpose if the application requires geo-redundancy
- Enter Admin Username and generate Admin Password with something like dd if=/dev/urandom bs=48 count=1 | base64
- Create the database and wait for it to provision
- Copy the DSN from the Connection String section. If you need access from the Internet (NOT RECOMMENDED!), add firewall rule 0.0.0.0-255.255.255.255 under Connection Security
Provision Redis instance
- Open the Azure console, navigate to Azure Cache for Redis and click New
- Enter DNS name and Location. Under Cache Type select Basic C0 for a testing environment or Standard C1 (or higher) if the application requires high availability
- In the next section select either Public Endpoint (NOT RECOMMENDED!) or Private Endpoint. In the case of a private endpoint, you'll need to create it in the private network created for the AKS cluster (aks-vnet-*)
- Create the instance and wait for it to provision. The address and password will pop up when you click Keys: Show access keys...
Integrate Kubernetes cluster with GitLab
Integration with GitLab provides basic pod monitoring on the project's page and automatic configuration of kubectl in deployment CI jobs. Full instructions are available at https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html