Published: Jan 13, 2022 License: Apache-2.0 Imports: 5 Imported by: 48



|build-status| |go-report| |go-doc| |apache2| |chat| |codecov|

Gaia is an open source automation platform which makes it easy and fun to build powerful pipelines in any programming language. Based on `HashiCorp's go-plugin`_ and `gRPC`_, Gaia is efficient, fast, lightweight, and developer friendly.

Develop powerful `pipelines <What is a pipeline?_>`_ with the help of `SDKs <Why do I need an SDK?_>`_ and simply check your code into a git repository. Gaia automatically clones your code repository, compiles your code to a binary, and executes it on-demand. All results are streamed back and formatted as a user-friendly graphical output.

Check out the documentation to learn more.


.. begin-motivation

*Automation Engineer*, *DevOps Engineer*, *SRE*, *Cloud Engineer*,
*Platform Engineer* - they all have one thing in common:
the majority of tech people are not motivated to take up this work, and they are hard to recruit.

One of the main reasons for this is the abstraction and poor execution of many automation tools. They come with their own configuration specification (often `YAML`_ syntax) or limit the user to one specific programming language. Testing is nearly impossible because most automation tools lack the ability to mock services and subsystems. Even tiny things, for example parsing a JSON file, can be really painful because external, outdated libraries were used instead of the standard framework.

We believe it's time to remove all those abstractions and come back to our roots. Are you tired of writing endless lines of YAML code? Are you sick of spending days forced to write in a language that does not suit you and is not fun at all? Do you enjoy programming in a language you like? Then Gaia is for you.

How does it work?
-----------------

.. begin-architecture

Gaia is based on `HashiCorp's go-plugin`_. It's a `plugin system`_ that uses `gRPC`_ to communicate over `HTTP/2`_. Initially, HashiCorp developed this tool for `Packer`_ but now it's heavily used by `Terraform`_, `Nomad`_, and `Vault`_ too.

Plugins, also called `pipelines <What is a pipeline?_>`_, are applications which can be written in any programming language, as long as `gRPC`_ is supported. All functions, also called `jobs <What is a job?_>`_, are exposed to Gaia and can form a dependency graph that describes the order of execution.

Pipelines can be compiled locally or via the integrated build system. Gaia clones the git repository and automatically builds the included pipeline. If a change (`git push`_) happens, Gaia automatically rebuilds the pipeline for you*.

After a pipeline has been started, all log output is streamed back to Gaia and displayed in a detailed overview together with the final result status.

Gaia uses `boltDB`_ for storage. This makes the installation process super easy: no external database is currently required.

\* *This requires polling or webhook to be activated.*


.. begin-screenshots


Getting Started
---------------

.. begin-getting-started


The installation of Gaia is simple and usually takes only a few minutes.

Using docker
~~~~~~~~~~~~

The following command starts Gaia as a daemon process and mounts all data into the current folder. Afterwards, Gaia will be available on the host system on port 8080. Use the default user **admin** and password **admin** for the initial login. It is recommended to change the password afterwards.

.. code:: sh

    docker run -d -p 8080:8080 -v $PWD:/data gaiapipeline/gaia:latest

This uses the image with the *latest* tag which includes all required libraries and compilers for all supported languages. If you prefer a smaller image suited for your preferred language, have a look at the `available docker image tags`_.


Manually
~~~~~~~~

It is possible to install Gaia directly on the host system.
This can be achieved by downloading the binary from the `releases page`_.

Gaia will automatically detect the folder of the binary and will place all data next to it. You can change the data directory with the startup parameter *-home-path* if you want.

Using helm
~~~~~~~~~~

If you don't have an ingress controller pod yet, make sure ``kube-dns`` or ``coredns`` is enabled, then run this command to set one up.

.. code:: sh

    make kube-ingress

To init helm:

.. code:: sh

    helm init

To deploy gaia:

.. code:: sh

    make deploy-kube

Example Pipelines
-----------------


.. code:: go

    package main

    import (
        "log"

        sdk "github.com/gaia-pipeline/gosdk"
    )

    // This is one job. Add more if you want.
    func DoSomethingAwesome(args sdk.Arguments) error {
        log.Println("This output will be streamed back to gaia and will be displayed in the pipeline logs.")

        // An error occurred? Return it back so gaia knows that this job failed.
        return nil
    }

    func main() {
        jobs := sdk.Jobs{
            sdk.Job{
                Handler:     DoSomethingAwesome,
                Title:       "DoSomethingAwesome",
                Description: "This job does something awesome.",
            },
        }

        // Serve the jobs to Gaia.
        if err := sdk.Serve(jobs); err != nil {
            panic(err)
        }
    }


.. code:: python

    from gaiasdk import sdk
    import logging

    def MyAwesomeJob(args):
        logging.info("This output will be streamed back to gaia and will be displayed in the pipeline logs.")
        # Just raise an exception to tell Gaia if a job failed.
        # raise Exception("Oh no, this job failed!")

    def main():
        logging.basicConfig(level=logging.INFO)
        myjob = sdk.Job("MyAwesomeJob", "Do something awesome", MyAwesomeJob)
        sdk.serve([myjob])


.. code:: java

    package io.gaiapipeline;

    import io.gaiapipeline.javasdk.*;

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.logging.Logger;

    public class Pipeline
    {
        private static final Logger LOGGER = Logger.getLogger(Pipeline.class.getName());

        private static Handler MyAwesomeJob = (gaiaArgs) -> {
            LOGGER.info("This output will be streamed back to gaia and will be displayed in the pipeline logs.");
            // Just raise an exception to tell Gaia if a job failed.
            // throw new IllegalArgumentException("Oh no, this job failed!");
        };

        public static void main( String[] args )
        {
            PipelineJob myjob = new PipelineJob();
            myjob.setTitle("MyAwesomeJob");
            myjob.setDescription("Do something awesome.");
            myjob.setHandler(MyAwesomeJob);

            Javasdk sdk = new Javasdk();
            try {
                sdk.Serve(new ArrayList<>(Arrays.asList(myjob)));
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }


.. code:: cpp

   #include "cppsdk/sdk.h"
   #include <list>
   #include <iostream>

   void DoSomethingAwesome(std::list<gaia::argument> args) throw(std::string) {
      std::cerr << "This output will be streamed back to gaia and will be displayed in the pipeline logs." << std::endl;

      // An error occurred? Throw it back so gaia knows that this job failed.
      // throw "Uhh something badly happened!";
   }

   int main() {
      std::list<gaia::job> jobs;
      gaia::job awesomejob;
      awesomejob.handler = &DoSomethingAwesome;
      awesomejob.title = "DoSomethingAwesome";
      awesomejob.description = "This job does something awesome.";
      jobs.push_back(awesomejob);

      try {
         gaia::Serve(jobs);
      } catch (std::string e) {
         std::cerr << "Error: " << e << std::endl;
      }
   }


.. code:: ruby

   require 'rubysdk'

   class Main
       AwesomeJob = lambda do |args|
           STDERR.puts "This output will be streamed back to gaia and will be displayed in the pipeline logs."

           # An error occurred? Raise an exception and gaia will fail the pipeline.
           # raise "Oh gosh! Something went wrong!"
       end

       def self.main
           awesomejob = Interface::Job.new(title: "Awesome Job",
                                           handler: AwesomeJob,
                                           desc: "This job does something awesome.")

           begin
               RubySDK.serve([awesomejob])
           rescue => e
               puts "Error occurred: #{e}"
               exit(false)
           end
       end
   end

   Main.main


.. code:: javascript

   const nodesdk = require('@gaia-pipeline/nodesdk');

   function DoSomethingAwesome(args) {
       console.error('This output will be streamed back to gaia and will be displayed in the pipeline logs.');

       // An error occurred? Throw it back so gaia knows that this job failed.
       // throw new Error('My error message');
   }

   // Serve
   try {
       nodesdk.Serve([{
           handler: DoSomethingAwesome,
           title: 'DoSomethingAwesome',
           description: 'This job does something awesome.'
       }]);
   } catch (err) {
       console.error(err);
   }

Pipelines are defined by jobs and a function usually represents a job. You can define as many jobs in your pipeline as you want.

Every function accepts arguments. Those arguments can be requested from the pipeline itself and the values are passed back in from the UI.
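To make that concrete, here is a minimal sketch in Go of how a job might look up one of those key/value arguments. The ``Argument`` shape mirrors the model described in this document, but the ``argValue`` helper is purely illustrative and not part of the Gaia SDK:

```go
package main

import "fmt"

// Argument mirrors the shape described above: a key with a value
// filled in from the UI. (Illustrative only; the real SDK type may differ.)
type Argument struct {
	Key   string
	Value string
}

// argValue returns the value for a key, or "" if it is missing.
// This helper is hypothetical, not part of the Gaia SDK.
func argValue(args []Argument, key string) string {
	for _, a := range args {
		if a.Key == key {
			return a.Value
		}
	}
	return ""
}

func main() {
	args := []Argument{{Key: "username", Value: "michel"}}
	fmt.Println(argValue(args, "username"))
}
```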

Some pipeline jobs need a specific order of execution. ``DependsOn`` allows you to declare dependencies for every job.
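A minimal sketch of how such a ``DependsOn`` graph can be resolved into an execution order, via a depth-first topological walk. This is illustrative only, not Gaia's actual scheduler, and cycle detection is omitted for brevity:

```go
package main

import "fmt"

// Job mirrors the doc's model: a job may depend on other jobs.
type Job struct {
	Title     string
	DependsOn []*Job
}

// order returns job titles so that every dependency comes before its
// dependent, using a depth-first walk over the graph.
func order(jobs []*Job) []string {
	seen := map[*Job]bool{}
	var out []string
	var visit func(j *Job)
	visit = func(j *Job) {
		if seen[j] {
			return
		}
		seen[j] = true
		for _, d := range j.DependsOn {
			visit(d) // dependencies first
		}
		out = append(out, j.Title)
	}
	for _, j := range jobs {
		visit(j)
	}
	return out
}

func main() {
	build := &Job{Title: "build"}
	test := &Job{Title: "test", DependsOn: []*Job{build}}
	deploy := &Job{Title: "deploy", DependsOn: []*Job{test}}
	fmt.Println(order([]*Job{deploy, test, build})) // [build test deploy]
}
```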

You can find real examples and more information on `how to develop a pipeline`_ in the docs.


Security
--------

See the documentation located here: `security-docs`_.

Documentation and more
----------------------

Please find the docs on our documentation site. We also have a tutorials section there with examples and real use-case scenarios, for example `Kubernetes deployment with vault integration`_.

Questions and Answers (Q&A)
---------------------------

What problem does **Gaia** solve?
    Literally every tool that was designed for automation, continuous integration (CI), and continuous deployment (CD), such as Spinnaker, Jenkins, GitLab CI/CD, TravisCI, CircleCI, Codeship, Bamboo, and many more, introduced its own configuration format. Some of them don't even support *configuration/automation as code*. This works well for simple tasks like running ``go install`` or ``mvn clean install``, but in the real world there is more to do.

    Gaia is the first platform that does not limit the user and provides full support for almost all common programming languages, without losing the features offered by today's CI/CD tools.

What is a **pipeline**?
    A pipeline is a real application with at least one function (we call it a job). Any programming language can be used as long as gRPC is supported. We offer SDKs to support development.

What is a **job**?
    A job is a function, usually exposed globally to Gaia. Based on the dependency graph, Gaia executes these functions in a specific order.

Why do I need an **SDK**?
    The SDK implements the Gaia plugin gRPC interface and offers helper functions, such as serving the gRPC server. This lets you focus on the real problem instead of the boring plumbing.

Which programming languages are supported?
    We currently fully support Go, Java, Python, C++, Ruby, and Node.js.

When do you support programming language **XYZ**?
    We are working hard to support as many programming languages as possible, but our resources are limited and we are not experts in every language. If you are willing to contribute, feel free to open an issue and start working.


Gaia is currently available as a beta version. `Do not use it for mission critical jobs yet!`_

Feel free to open a new GitHub issue to request a new feature.


Gaia can only evolve and become a great product with the help of contributors. If you would like to contribute, please have a look at our `issues section`_. We do our best to mark issues suitable for new contributors with the label *good first issue*.

If you think you found a good first issue, please consider this list as a short guide:

* If the issue is clear and you have no questions, please leave a short comment that you have started working on it. The issue will usually be reserved for you for two weeks.
* If something is not clear or you are unsure what to do, please leave a comment so we can add a more detailed description.
* Make sure your development environment is configured and set up. You need `Go installed`_ on your machine as well as `nodeJS`_ for the frontend. Clone this repository and run the **make** command inside the cloned folder. This will start the backend. To start the frontend, open a new terminal window, change into the frontend folder, run **npm install**, and then **npm run serve**. This should automatically open a new browser window.
* Before you start your work, you should fork this repository and push changes to your fork. Afterwards, send a merge request back to upstream.


If you have any questions feel free to contact us on `slack`_.

.. _`HashiCorp's go-plugin`:
.. _`gRPC`:
.. _`Do not use it for mission critical jobs yet!`:
.. _`YAML`:
.. _`releases page`:
.. _`Packer`:
.. _`Terraform`:
.. _`Nomad`:
.. _`Vault`:
.. _`boltDB`:
.. _`Unix nice level`:
.. _`issues section`:
.. _`Go installed`:
.. _`nodeJS`:
.. _`go-example repo`:
.. _`slack`:
.. _`Kubernetes deployment with vault integration`:
.. _`git push`:
.. _`HTTP/2`:
.. _`security-docs`:
.. _`plugin system`:
.. _`available docker image tags`:
.. _`how to develop a pipeline`:

.. |build-status| image::
    :alt: Build Status
    :scale: 100%

.. |go-report| image::
    :alt: Go Report Card

.. |go-doc| image::
    :alt: GoDoc

.. |apache2| image::
    :alt: Apache licensed

.. |chat| image::
    :alt: Slack

.. |codecov| image::

.. |sh-login| image::
    :alt: gaia login screenshot
    :width: 650px

.. |sh-overview| image::
    :alt: gaia overview screenshot
    :width: 650px

.. |sh-create-pipeline| image::
    :alt: gaia create pipeline screenshot
    :width: 650px

.. |sh-vault| image::
    :alt: gaia Vault screenshot
    :width: 650px

.. |sh-pipeline-detailed| image::
    :alt: gaia pipeline detailed screenshot
    :width: 650px

.. |sh-pipeline-logs| image::
    :alt: gaia pipeline logs screenshot
    :width: 650px

.. |sh-settings| image::
    :alt: gaia settings screenshot
    :width: 650px




const (
	// PTypeUnknown unknown plugin type
	PTypeUnknown PipelineType = "unknown"

	// PTypeGolang golang plugin type
	PTypeGolang PipelineType = "golang"

	// PTypeJava java plugin type
	PTypeJava PipelineType = "java"

	// PTypePython python plugin type
	PTypePython PipelineType = "python"

	// PTypeCpp C++ plugin type
	PTypeCpp PipelineType = "cpp"

	// PTypeRuby ruby plugin type
	PTypeRuby PipelineType = "ruby"

	// PTypeNodeJS NodeJS plugin type
	PTypeNodeJS PipelineType = "nodejs"

	// CreatePipelineFailed status
	CreatePipelineFailed CreatePipelineType = "failed"

	// CreatePipelineRunning status
	CreatePipelineRunning CreatePipelineType = "running"

	// CreatePipelineSuccess status
	CreatePipelineSuccess CreatePipelineType = "success"

	// RunNotScheduled status
	RunNotScheduled PipelineRunStatus = "not scheduled"

	// RunScheduled status
	RunScheduled PipelineRunStatus = "scheduled"

	// RunFailed status
	RunFailed PipelineRunStatus = "failed"

	// RunSuccess status
	RunSuccess PipelineRunStatus = "success"

	// RunRunning status
	RunRunning PipelineRunStatus = "running"

	// RunCancelled status
	RunCancelled PipelineRunStatus = "cancelled"

	// RunReschedule status
	RunReschedule PipelineRunStatus = "reschedule"

	// JobWaitingExec status
	JobWaitingExec JobStatus = "waiting for execution"

	// JobSuccess status
	JobSuccess JobStatus = "success"

	// JobFailed status
	JobFailed JobStatus = "failed"

	// JobRunning status
	JobRunning JobStatus = "running"

	// ModeServer mode
	ModeServer Mode = "server"

	// ModeWorker mode
	ModeWorker Mode = "worker"

	// WorkerActive status
	WorkerActive WorkerStatus = "active"

	// WorkerInactive status
	WorkerInactive WorkerStatus = "inactive"

	// WorkerSuspended status
	WorkerSuspended WorkerStatus = "suspended"

	// LogsFolderName represents the Name of the logs folder in pipeline run folder
	LogsFolderName = "logs"

	// LogsFileName represents the file name of the logs output
	LogsFileName = "output.log"

	// APIVersion represents the current API version
	APIVersion = "v1"

	// SrcFolder folder name where the sources are stored
	SrcFolder = "src"

	// TmpFolder is the temp folder for temporary files
	TmpFolder = "tmp"

	// TmpPythonFolder is the name of the python temporary folder
	TmpPythonFolder = "python"

	// TmpGoFolder is the name of the golang temporary folder
	TmpGoFolder = "golang"

	// TmpCppFolder is the name of the c++ temporary folder
	TmpCppFolder = "cpp"

	// TmpRubyFolder is the name of the ruby temporary folder
	TmpRubyFolder = "ruby"

	// TmpNodeJSFolder is the name of the nodejs temporary folder
	TmpNodeJSFolder = "nodejs"

	// TmpJavaFolder is the name of the java temporary folder
	TmpJavaFolder = "java"

	// WorkerRegisterKey is the used key for worker registration secret
	WorkerRegisterKey = "WORKER_REGISTER_KEY"

	// ExecutablePermission is the permission used for gaia created executables.
	ExecutablePermission = 0700

	// StartReasonRemote label for pipelines which were triggered through a remote token.
	StartReasonRemote = "remote"

	// StartReasonManual label for pipelines which were triggered through the admin site.
	StartReasonManual = "manual"

	// StartReasonScheduled label for pipelines which were triggered automated process, i.e. cron job.
	StartReasonScheduled = "scheduled"

	// SecretNamePrefix defines the prefix for github secrets for pipelines.
	SecretNamePrefix = "GITHUB_WEBHOOK_SECRET_"

	// LegacySecretName is the old name for a secret that has been created by previous versions.
	// Deprecated
)
const JwtExpiry = 12 * 60 * 60

JwtExpiry is the default JWT expiry.


var Cfg = &Config{}

Cfg represents the global config instance




type Argument added in v0.2.1

type Argument struct {
	Description string `json:"desc,omitempty"`
	Type        string `json:"type,omitempty"`
	Key         string `json:"key,omitempty"`
	Value       string `json:"value,omitempty"`
}

Argument represents a single argument of a job

type Config

type Config struct {
	DevMode                 bool
	ModeRaw                 string
	Mode                    Mode
	VersionSwitch           bool
	Poll                    bool
	PVal                    int
	ListenPort              string
	HomePath                string
	Hostname                string
	VaultPath               string
	DataPath                string
	PipelinePath            string
	WorkspacePath           string
	Worker                  int
	JwtPrivateKeyPath       string
	JWTKey                  interface{}
	Logger                  hclog.Logger
	CAPath                  string
	WorkerServerPort        string
	PreventPrimaryWork      bool
	AutoDockerMode          bool
	DockerHostURL           string
	DockerRunImage          string
	DockerWorkerHostURL     string
	DockerWorkerGRPCHostURL string
	RBACEnabled             bool
	RBACDebug               bool

	// Worker
	WorkerName        string
	WorkerHostURL     string
	WorkerGRPCHostURL string
	WorkerSecret      string
	WorkerTags        string

	Bolt struct {
		Mode os.FileMode
	}
}

Config holds all config options

type CreatePipeline

type CreatePipeline struct {
	ID          string             `json:"id,omitempty"`
	Pipeline    Pipeline           `json:"pipeline,omitempty"`
	Status      int                `json:"status,omitempty"`
	StatusType  CreatePipelineType `json:"statustype,omitempty"`
	Output      string             `json:"output,omitempty"`
	Created     time.Time          `json:"created,omitempty"`
	GitHubToken string             `json:"githubtoken,omitempty"`
}

CreatePipeline represents a pipeline which is not yet compiled.

type CreatePipelineType

type CreatePipelineType string

CreatePipelineType represents the different status types a create pipeline can have.

type GitRepo

type GitRepo struct {
	URL            string     `json:"url,omitempty"`
	Username       string     `json:"user,omitempty"`
	Password       string     `json:"password,omitempty"`
	PrivateKey     PrivateKey `json:"privatekey,omitempty"`
	SelectedBranch string     `json:"selectedbranch,omitempty"`
	Branches       []string   `json:"branches,omitempty"`
	LocalDest      string     `json:"-"`
}

GitRepo represents a single git repository

type Job

type Job struct {
	ID           uint32      `json:"id,omitempty"`
	Title        string      `json:"title,omitempty"`
	Description  string      `json:"desc,omitempty"`
	DependsOn    []*Job      `json:"dependson,omitempty"`
	Status       JobStatus   `json:"status,omitempty"`
	Args         []*Argument `json:"args,omitempty"`
	FailPipeline bool        `json:"failpipeline,omitempty"`
}

Job represents a single job of a pipeline

type JobStatus

type JobStatus string

JobStatus represents the different status a job can have

type JwtCustomClaims added in v0.2.6

type JwtCustomClaims struct {
	Username string   `json:"username"`
	Roles    []string `json:"roles"`
}

JwtCustomClaims is the custom JWT claims for a Gaia session.

type Mode added in v0.2.4

type Mode string

Mode represents the different modes for Gaia

type Pipeline

type Pipeline struct {
	ID                int          `json:"id,omitempty"`
	Name              string       `json:"name,omitempty"`
	Repo              *GitRepo     `json:"repo,omitempty"`
	Type              PipelineType `json:"type,omitempty"`
	ExecPath          string       `json:"execpath,omitempty"`
	SHA256Sum         []byte       `json:"sha256sum,omitempty"`
	Jobs              []*Job       `json:"jobs,omitempty"`
	Created           time.Time    `json:"created,omitempty"`
	UUID              string       `json:"uuid,omitempty"`
	IsNotValid        bool         `json:"notvalid,omitempty"`
	PeriodicSchedules []string     `json:"periodicschedules,omitempty"`
	TriggerToken      string       `json:"trigger_token,omitempty"`
	Tags              []string     `json:"tags,omitempty"`
	Docker            bool         `json:"docker"`
	CronInst          *cron.Cron   `json:"-"`
}

Pipeline represents a single pipeline

type PipelineRun

type PipelineRun struct {
	UniqueID       string            `json:"uniqueid"`
	ID             int               `json:"id"`
	PipelineID     int               `json:"pipelineid"`
	StartDate      time.Time         `json:"startdate,omitempty"`
	StartReason    string            `json:"started_reason"`
	FinishDate     time.Time         `json:"finishdate,omitempty"`
	ScheduleDate   time.Time         `json:"scheduledate,omitempty"`
	Status         PipelineRunStatus `json:"status,omitempty"`
	Jobs           []*Job            `json:"jobs,omitempty"`
	PipelineType   PipelineType      `json:"pipelinetype,omitempty"`
	PipelineTags   []string          `json:"pipelinetags,omitempty"`
	Docker         bool              `json:"docker,omitempty"`
	DockerWorkerID string            `json:"dockerworkerid,omitempty"`
}

PipelineRun represents a single run of a pipeline.

type PipelineRunStatus

type PipelineRunStatus string

PipelineRunStatus represents the different status a run can have.

type PipelineType

type PipelineType string

PipelineType represents supported plugin types

func (PipelineType) String

func (p PipelineType) String() string

String returns a pipeline type string back
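The method is most likely a thin conversion; a minimal sketch of what such a ``String`` method typically looks like (assumed, not copied from the source):

```go
package main

import "fmt"

// PipelineType mirrors the string-based type above.
type PipelineType string

// String returns the pipeline type as a plain string, satisfying
// fmt.Stringer. (Sketch; the real implementation may differ.)
func (p PipelineType) String() string {
	return string(p)
}

func main() {
	fmt.Println(PipelineType("golang")) // golang
}
```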

type PrivateKey

type PrivateKey struct {
	Key      string `json:"key,omitempty"`
	Username string `json:"username,omitempty"`
	Password string `json:"password,omitempty"`
}

PrivateKey represents a pem encoded private key

type SHAPair added in v0.2.4

type SHAPair struct {
	Original   []byte `json:"original"`
	Worker     []byte `json:"worker"`
	PipelineID int    `json:"pipelineid"`
}

SHAPair struct contains the original sha of a pipeline executable and the new sha which was created when the worker had to rebuild it.

type StoreConfig added in v0.2.4

type StoreConfig struct {
	ID          int
	Poll        bool
	RBACEnabled bool
}

StoreConfig defines config settings to be stored in DB.

type User

type User struct {
	Username     string    `json:"username,omitempty"`
	Password     string    `json:"password,omitempty"`
	DisplayName  string    `json:"display_name,omitempty"`
	Tokenstring  string    `json:"tokenstring,omitempty"`
	JwtExpiry    int64     `json:"jwtexpiry,omitempty"`
	LastLogin    time.Time `json:"lastlogin,omitempty"`
	TriggerToken string    `json:"trigger_token,omitempty"`
}

User is the user object

type UserPermission added in v0.2.3

type UserPermission struct {
	Username string   `json:"username"`
	Roles    []string `json:"roles"`
	Groups   []string `json:"groups"`
}

UserPermission is stored in its own data structure away from the core user. It represents all permission data for a single user.

type UserRole added in v0.2.3

type UserRole struct {
	Name        string              `json:"name"`
	Description string              `json:"description"`
	APIEndpoint []*UserRoleEndpoint `json:"api_endpoints"`
}

UserRole represents a single permission role.

type UserRoleCategory added in v0.2.3

type UserRoleCategory struct {
	Name        string      `json:"name"`
	Description string      `json:"description"`
	Roles       []*UserRole `json:"roles"`
}

UserRoleCategory represents the top-level of the permission role system

type UserRoleEndpoint added in v0.2.3

type UserRoleEndpoint struct {
	Path   string `json:"path"`
	Method string `json:"method"`
}

UserRoleEndpoint represents the path and method of the API endpoint to be secured.

type Worker added in v0.2.4

type Worker struct {
	UniqueID     string       `json:"uniqueid"`
	Name         string       `json:"name"`
	Status       WorkerStatus `json:"status"`
	Slots        int32        `json:"slots"`
	RegisterDate time.Time    `json:"registerdate"`
	LastContact  time.Time    `json:"lastcontact"`
	FinishedRuns int64        `json:"finishedruns"`
	Tags         []string     `json:"tags"`
}

Worker represents a single registered worker.

type WorkerStatus added in v0.2.4

type WorkerStatus string

WorkerStatus represents the different status a worker can have
