package worker

Version: v0.22.3
Published: Oct 14, 2021 License: MIT Imports: 27 Imported by: 3

README

Cadence Worker

Cadence Worker is a role in the Cadence service that hosts the components responsible for performing background processing on the Cadence cluster.

Replicator

Replicator is a background worker responsible for consuming replication tasks generated by remote Cadence clusters and passing them down to the processor so they can be applied to the local Cadence cluster.

Quickstart for localhost development

  1. Setup Kafka by following instructions: Kafka Quickstart
  2. Create Kafka topics for the active and standby clusters if needed. By default the development Kafka should create topics in-flight (with 1 partition). If not, use the following commands to create the topics:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic active

and

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic standby
  3. Start the Cadence development server for the active zone:
./cadence-server --zone active start
  4. Start the Cadence development server for the standby (passive) zone:
./cadence-server --zone standby start
  5. Create a global domain:
cadence --do sample domain register --gd true --ac active --cl active standby
  6. Fail over between zones:

Failover to standby:

cadence --do sample domain update --ac standby

Failback to active:

cadence --do sample domain update --ac active

Create replication task using CLI

The Kafka CLI can be used to generate a replication task for testing purposes:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic standby

Replication task message:

{taskType: 0}

Archiver

Archiver handles the archival of workflow execution histories. It does this by hosting a Cadence client worker and running an archival system workflow. The archival client is used to initiate archival by sending signals to that workflow. The archiver shards work across several workflows.
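The sharding idea, routing each archival request deterministically to one of N system workflows by hashing its key, can be sketched as follows. This is not the archiver's actual implementation: the shard count, hash function, and key shape are all assumptions for illustration.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// numShards is an assumed, fixed shard count; in the real archiver
// the number of system workflows is configuration-driven.
const numShards = 4

// shardFor maps an archival request (keyed here by domain ID and
// workflow ID) to one of numShards archival system workflows, so
// the same execution always signals the same workflow.
func shardFor(domainID, workflowID string) int {
	h := fnv.New32a()
	h.Write([]byte(domainID + "/" + workflowID))
	return int(h.Sum32() % numShards)
}

func main() {
	fmt.Println(shardFor("sample-domain", "wf-1"))
}
```

Deterministic routing like this keeps each shard's workflow history bounded while letting unrelated archival requests proceed in parallel.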

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewService

func NewService(
	params *service.BootstrapParams,
) (resource.Resource, error)

NewService builds a new cadence-worker service

Types

type Config

type Config struct {
	ArchiverConfig *archiver.Config
	IndexerCfg     *indexer.Config
	ScannerCfg     *scanner.Config
	BatcherCfg     *batcher.Config

	ThrottledLogRPS                   dynamicconfig.IntPropertyFn
	PersistenceGlobalMaxQPS           dynamicconfig.IntPropertyFn
	PersistenceMaxQPS                 dynamicconfig.IntPropertyFn
	EnableBatcher                     dynamicconfig.BoolPropertyFn
	EnableParentClosePolicyWorker     dynamicconfig.BoolPropertyFn
	EnableFailoverManager             dynamicconfig.BoolPropertyFn
	EnableWorkflowShadower            dynamicconfig.BoolPropertyFn
	DomainReplicationMaxRetryDuration dynamicconfig.DurationPropertyFn
	// contains filtered or unexported fields
}

Config contains all the service config for worker
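The `dynamicconfig` property-fn fields above are functions that return the current value of a setting each time they are called, so the worker can pick up configuration changes without a restart. A self-contained sketch of that pattern; the local type names mirror `dynamicconfig` but are redefined here (the real ones take filter options), and the key name and default value are purely illustrative:

```go
package main

import "fmt"

// IntPropertyFn mirrors the shape of dynamicconfig.IntPropertyFn:
// a function that returns the latest value of an int setting.
type IntPropertyFn func() int

// getIntProperty wraps a mutable store so callers always observe
// the current value. The map-based store is an illustration; the
// real dynamicconfig client reads from a dynamic config source.
func getIntProperty(store map[string]int, key string, def int) IntPropertyFn {
	return func() int {
		if v, ok := store[key]; ok {
			return v
		}
		return def
	}
}

func main() {
	store := map[string]int{}
	// Key and default are hypothetical, not the worker's actual values.
	throttledLogRPS := getIntProperty(store, "worker.throttledLogRPS", 20)
	fmt.Println(throttledLogRPS()) // 20 (falls back to the default)

	store["worker.throttledLogRPS"] = 50 // simulate a dynamic update
	fmt.Println(throttledLogRPS())       // 50 (no restart needed)
}
```

Because Config holds these closures rather than plain ints and bools, every component that consults a setting re-evaluates it at the point of use.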

func NewConfig

func NewConfig(params *service.BootstrapParams) *Config

NewConfig builds the new Config for cadence-worker service

type Service

type Service struct {
	resource.Resource
	// contains filtered or unexported fields
}

Service represents the cadence-worker service. This service hosts all background processing needed for a Cadence cluster:

  1. Replicator: handles applying replication tasks generated by remote clusters.
  2. Indexer: handles uploading of visibility records to Elasticsearch.
  3. Archiver: handles archival of workflow histories.

func (*Service) Start

func (s *Service) Start()

Start is called to start the service

func (*Service) Stop

func (s *Service) Stop()

Stop is called to stop the service
