a1upgrade

package
v0.0.0-...-95abdeb
Published: Apr 2, 2024 License: Apache-2.0 Imports: 47 Imported by: 0

Documentation

Constants

const (
	// A1ChefServerCtlPath is the full path to the expected
	// location of chef-server-ctl from the old chef-server
	// installation. This is used to ensure we can access the
	// correct chef-server-ctl even after the new A2
	// chef-server-ctl has been binlinked into place.
	A1ChefServerCtlPath = "/opt/opscode/bin/chef-server-ctl"

	// MaintDotFlag is the path to the file which will make A1's nginx go into
	// maintenance mode. See also:
	// https://github.com/chef/a1/blob/e1278c5fbb4478fa31b0853788cad1f6714fecf7/cookbooks/delivery/templates/default/nginx-internal.conf.erb#L146
	MaintDotFlag = "/var/opt/delivery/delivery/etc/maint.flag"
)
const (
	// A1RequiredMajorVersion is the major component of the minimum
	// allowed Automate v1 version that we will upgrade from.
	A1RequiredMajorVersion = 1
	// A1RequiredMinorVersion is the minor component of the minimum
	// allowed Automate v1 version that we will upgrade from.
	A1RequiredMinorVersion = 8
	// A1RequiredPatchVersion is the patch component of the minimum
	// allowed Automate v1 version that we will upgrade from.
	A1RequiredPatchVersion = 38
)
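
For illustration, a minimal sketch of how these minimums might be enforced. The okToUpgrade helper and its parsed version arguments are hypothetical, not part of this package:

// okToUpgrade reports whether a parsed A1 version (major.minor.patch)
// meets the minimum version required for upgrade.
func okToUpgrade(major, minor, patch int) bool {
	if major != A1RequiredMajorVersion {
		return major > A1RequiredMajorVersion
	}
	if minor != A1RequiredMinorVersion {
		return minor > A1RequiredMinorVersion
	}
	return patch >= A1RequiredPatchVersion
}
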
const OmnibusInstallRoot = "/opt"
const SAMLConfigSQL = `SELECT EXISTS(SELECT 1 FROM saml_config)`

SAMLConfigSQL is the statement used to check whether any data exists in the saml_config table.

const UserRolesExportSQL = `` /* 195-byte string literal not displayed */

UserRolesExportSQL is the statement used to export user roles data from A1. It works analogously to UsersExportSQL.

const UsersExportSQL = `` /* 181-byte string literal not displayed */

UsersExportSQL is the statement used to export users data from A1. It returns one row of JSON, which we then feed into a file in local-user-service's data directory.

Variables

var (
	// ErrMigrationFailed is returned from HandleStatus if the migration status is failed
	ErrMigrationFailed = errors.New("Automate 1 data migrations failed")
	// ErrMigrationStatusUnknown is returned from HandleStatus if the migration status is unknown
	ErrMigrationStatusUnknown = errors.New("Automate 1 data migration status unknown")
)
var A1DeliveryDB = A1Database{
	// contains filtered or unexported fields
}

A1DeliveryDB is the delivery SQL database required to migrate from A1 to A2. This database has multiple tables used by multiple parts of A1; the ones listed here are the Compliance-related tables.

var A1StatusURL = "https://localhost/api/_status"

A1StatusURL is the HTTP URL for the A1 Status API endpoint

var A1VersionManifestPath = "/opt/delivery/version-manifest.txt"

A1VersionManifestPath is the path to the text-formatted omnibus version manifest for A1

var CSDatabases = []A1Database{
	{
		// contains filtered or unexported fields
	},
	{
		// contains filtered or unexported fields
	},
	{
		// contains filtered or unexported fields
	},
}

CSDatabases are the SQL databases required to migrate the All-In-One Chef server to A2

var FailureInt int
var PGDumpOutputDir = "/var/opt/delivery/a1_pg_export"

PGDumpOutputDir is the path that the pg_dumpall output from A1 will be written to. This is exposed as a package-level variable so that we can reference it from the pg_dumpall stub code.

Functions

func AutomateCtlCreateBackup

func AutomateCtlCreateBackup(name string) error

AutomateCtlCreateBackup runs `automate-ctl create-backup` -- used to back up A1 during upgrades.

func AutomateCtlStatus

func AutomateCtlStatus() error

AutomateCtlStatus runs `automate-ctl status` -- used to check automate v1 health before upgrading.

func AutomateCtlStop

func AutomateCtlStop() error

AutomateCtlStop runs `automate-ctl stop` -- used to shut down A1 during upgrades.

func BackupA1Postgres

func BackupA1Postgres(connInfo *pg.A1ConnInfo, dbs []A1Database, timeout time.Duration) error

BackupA1Postgres takes a backup of A1 postgres

func CheckSAMLConfig

func CheckSAMLConfig(connInfo *pg.A1ConnInfo) (bool, error)

CheckSAMLConfig checks for SAML config in the Automate 1 PostgreSQL database and returns true if it is found
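
A sketch of what this check presumably amounts to: run SAMLConfigSQL and scan the single boolean it returns. The *sql.DB handle stands in for the connection the package builds from connInfo:

import "database/sql"

// checkSAML runs the EXISTS query and scans its one boolean column.
func checkSAML(db *sql.DB) (bool, error) {
	var exists bool
	if err := db.QueryRow(SAMLConfigSQL).Scan(&exists); err != nil {
		return false, err
	}
	return exists, nil
}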

func ChefServerCtlStop

func ChefServerCtlStop() error

ChefServerCtlStop stops all Chef Server services via chef-server-ctl.

func ChefServerCtlStopService

func ChefServerCtlStopService(svcName string) error

ChefServerCtlStopService stops the given service using chef-server-ctl.

func EngageMaintMode

func EngageMaintMode() error

EngageMaintMode puts a1 into maintenance mode by creating the MaintDotFlag file.

func ExportUserData

func ExportUserData(connInfo *pg.A1ConnInfo, timeout time.Duration) (string, error)

ExportUserData retrieves user data from the Automate 1 PostgreSQL database and exports it as JSON for the local-user-service to consume at runtime.
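
A sketch of the shape this export implies: a statement returning one row with one JSON column, written out to a file. The handle, statement, and destination path are illustrative; the real function derives them internally:

import (
	"database/sql"
	"os"
)

// exportJSON runs a single-row export statement such as UsersExportSQL
// and writes the returned JSON document to path.
func exportJSON(db *sql.DB, stmt, path string) error {
	var doc string
	if err := db.QueryRow(stmt).Scan(&doc); err != nil {
		return err
	}
	return os.WriteFile(path, []byte(doc), 0600)
}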

func ExportUserRolesData

func ExportUserRolesData(connInfo *pg.A1ConnInfo, timeout time.Duration) (string, error)

ExportUserRolesData retrieves roles data from the Automate 1 PostgreSQL database and exports it as JSON for the authz-service to consume at runtime.

func Failure

func Failure(maybeFailure FailureScenario) bool

Failure determines if the given maybeFailure FailureScenario was set by the user. EX:

if Failure(A1ShutdownFail) {
  return aCannedError()
}

// normal program execution...

func IsEmptyDir

func IsEmptyDir(dir string) (bool, error)

IsEmptyDir returns true if and only if the given path is a directory (or symlink to a directory) and is empty. An error is returned if Stat() or ReadDir() fails.

func IsSameFilesystem

func IsSameFilesystem(path1 string, path2 string) (bool, error)

IsSameFilesystem returns true if both of the paths exist on the same filesystem.

func PsqlAlterUserWithPassword

func PsqlAlterUserWithPassword(user, password string) error

PsqlAlterUserWithPassword runs a SQL query to set a user's password.

func PubKeyFromPriv

func PubKeyFromPriv(keyData string) (string, error)

PubKeyFromPriv extracts the public key from a PEM-encoded PKCS1 private key and returns it as a PEM-encoded PKIX public key.

Currently the erchef bootstrap code expects the public key to be on disk if the private key is on disk, so we use this to create the public key from the private key during the Chef Server credential migration.
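
A sketch of the documented conversion using only the standard library; it mirrors the doc comment rather than the package's exact implementation:

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
)

// pubFromPriv decodes a PEM-encoded PKCS1 private key and re-encodes
// its public half as a PEM-encoded PKIX public key.
func pubFromPriv(keyData string) (string, error) {
	block, _ := pem.Decode([]byte(keyData))
	if block == nil {
		return "", errors.New("no PEM data found in private key")
	}
	priv, err := x509.ParsePKCS1PrivateKey(block.Bytes)
	if err != nil {
		return "", err
	}
	der, err := x509.MarshalPKIXPublicKey(&priv.PublicKey)
	if err != nil {
		return "", err
	}
	return string(pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: der})), nil
}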

func ReadURI

func ReadURI(uri string) ([]byte, error)

ReadURI reads the contents of the resource at the given uri

func RestoreA1PostgresToA2

func RestoreA1PostgresToA2(connInfo *pg.A2ConnInfo, dbs []A1Database, timeout time.Duration) error

RestoreA1PostgresToA2 restores the A1 postgres dump to A2

func RetrievePivotalKey

func RetrievePivotalKey() (string, error)

func RetrieveWebuiPrivKey

func RetrieveWebuiPrivKey() (string, error)

func RetrieveWebuiPubKey

func RetrieveWebuiPubKey() (string, error)

func SpaceRequiredToMove

func SpaceRequiredToMove(srcPath string, dstPath string) (uint64, error)

SpaceRequiredToMove attempts to determine how much space would be required on the destination filesystem if `mv src dst` were called, assuming rename(2) is used when src and dst are on the same filesystem.
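
A sketch of that reasoning expressed with this package's own helpers:

// spaceRequired: a same-filesystem `mv` degenerates to rename(2) and
// needs no extra space; a cross-filesystem move needs roughly the size
// of the source tree on the destination.
func spaceRequired(src, dst string) (uint64, error) {
	same, err := IsSameFilesystem(src, dst)
	if err != nil {
		return 0, err
	}
	if same {
		return 0, nil
	}
	return SpaceUsedByDir(src)
}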

func SpaceUsedByDir

func SpaceUsedByDir(path string) (uint64, error)

SpaceUsedByDir returns the disk space used in KB for the given directory.

func SystemCtlDisableChefServer

func SystemCtlDisableChefServer() error

SystemCtlDisableChefServer runs `systemctl disable private_chef-runsvdir-start.service`. It succeeds if Chef Server is disabled or was already disabled.

func SystemCtlDisableDelivery

func SystemCtlDisableDelivery() error

SystemCtlDisableDelivery runs `systemctl disable delivery-runsvdir-start.service`. It succeeds if delivery is disabled or was already disabled.

func SystemCtlIsEnabledChefServer

func SystemCtlIsEnabledChefServer() bool

SystemCtlIsEnabledChefServer runs `systemctl is-enabled private_chef-runsvdir-start.service`

func SystemCtlIsEnabledDelivery

func SystemCtlIsEnabledDelivery() bool

SystemCtlIsEnabledDelivery runs `systemctl is-enabled delivery-runsvdir-start.service`

func SystemCtlStopChefServer

func SystemCtlStopChefServer() error

SystemCtlStopChefServer runs `systemctl stop private_chef-runsvdir-start.service`. It succeeds if Chef Server is stopped or was already stopped.

func SystemCtlStopDelivery

func SystemCtlStopDelivery() error

SystemCtlStopDelivery runs `systemctl stop delivery-runsvdir-start.service`. It succeeds if delivery is stopped or was already stopped.

func VerifyA1QueueDrained

func VerifyA1QueueDrained(a1Config *A1Config) error

VerifyA1QueueDrained checks A1's rabbitmq to ensure all queued data from the data collector gets written to disk before we continue with the upgrade. It checks the rabbitmq management API every 10s for up to 10m for a queue depth of zero. If the queue depth is still not zero after 10m, it will return the last error it encountered while checking.
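
A sketch of that polling loop; queueDepth is a hypothetical stand-in for the call to the rabbitmq management API:

import (
	"fmt"
	"time"
)

// pollQueueDrained checks every 10s for up to 10m, returning nil once
// the queue depth reaches zero and the last error otherwise.
func pollQueueDrained(queueDepth func() (int, error)) error {
	deadline := time.Now().Add(10 * time.Minute)
	var lastErr error
	for time.Now().Before(deadline) {
		n, err := queueDepth()
		if err == nil && n == 0 {
			return nil
		}
		if err != nil {
			lastErr = err
		} else {
			lastErr = fmt.Errorf("queue depth still %d", n)
		}
		time.Sleep(10 * time.Second)
	}
	return lastErr
}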

func VerifyMaintMode

func VerifyMaintMode(conf *deployment.AutomateConfig) error

VerifyMaintMode makes one or more HTTP requests to the A1/Chef server endpoints to ensure that they're not accepting data. Verification succeeds if the requests return 503; any other response is considered a failure. If a check fails after five tries the last error encountered is returned.
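
A sketch of one such check; the HTTP client and URL are assumptions:

import (
	"fmt"
	"net/http"
)

// expect503 requests url up to five times, succeeding as soon as the
// endpoint answers 503 and otherwise returning the last error seen.
func expect503(client *http.Client, url string) error {
	var lastErr error
	for i := 0; i < 5; i++ {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusServiceUnavailable {
			return nil
		}
		lastErr = fmt.Errorf("expected 503, got %d", resp.StatusCode)
	}
	return lastErr
}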

func VersionStringFromA1Manifest

func VersionStringFromA1Manifest() (string, error)

VersionStringFromA1Manifest returns the version string from the Automate 1 manifest

func WaitForEsSnapshot

func WaitForEsSnapshot(w cli.FormatWriter, esURL, repoType, name string) error

WaitForEsSnapshot polls elasticsearch snapshot status at the given esURL and displays a progress bar showing the shard-wise progress of the snapshot.

Types

type A1Config

type A1Config struct {
	DeliveryRunningPath   string
	DeliverySecretsPath   string
	ChefServerRunningPath string
	DeliveryRunning       *DeliveryRunning
	DeliverySecrets       *DeliverySecrets
	ChefServerRunning     *ChefServerRunning
}

A1Config represents the A1 configuration

func NewA1Config

func NewA1Config() *A1Config

NewA1Config returns a new default instance of an A1Config.

func (*A1Config) LoadChefServerRunning

func (c *A1Config) LoadChefServerRunning() error

LoadChefServerRunning unmarshals chef-server-running.json into the ChefServerRunning struct.

func (*A1Config) LoadDeliveryRunning

func (c *A1Config) LoadDeliveryRunning() error

LoadDeliveryRunning unmarshals delivery-running.json into a DeliveryRunning struct.

func (*A1Config) LoadDeliverySecrets

func (c *A1Config) LoadDeliverySecrets() error

LoadDeliverySecrets unmarshals delivery-secrets.json into a DeliverySecrets struct.
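
Taken together, a typical load sequence might look like this (error handling abbreviated; the paths come from the defaults set by NewA1Config):

cfg := NewA1Config()
if err := cfg.LoadDeliveryRunning(); err != nil {
	return err
}
if err := cfg.LoadDeliverySecrets(); err != nil {
	return err
}
// cfg.DeliveryRunning and cfg.DeliverySecrets are now populated.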

type A1Database

type A1Database struct {
	// contains filtered or unexported fields
}

A1Database represents a SQL database to be migrated.

type A1Upgrade

type A1Upgrade struct {
	CreatedAt time.Time
	*A1Config
	A2Config             *dc.AutomateConfig
	SkipUpgradePreflight bool

	PgDumpWait                int
	PgRestoreWait             int
	SkipBackup                bool
	FileMoveTimeout           int
	SkipBackupCheck           bool
	SkipDisasterRecoveryCheck bool
	SkipExternalESCheck       bool
	SkipFIPSCheck             bool
	SkipSAMLCheck             bool
	SkipWorkflowCheck         bool
	EnableChefServer          bool
	EnableWorkflow            bool
	// contains filtered or unexported fields
}

A1Upgrade represents an upgrade from Automate 1 to Automate 2.

func NewA1Upgrade

func NewA1Upgrade(options ...Option) (*A1Upgrade, error)

NewA1Upgrade initializes a new A1Upgrade and applies any provided optional functions.

func (*A1Upgrade) Databases

func (u *A1Upgrade) Databases() []A1Database

Databases returns a slice of A1Databases that need to be migrated.

func (*A1Upgrade) GenerateA2ConfigIfNoneProvided

func (u *A1Upgrade) GenerateA2ConfigIfNoneProvided(a2ConfigPath string) error

GenerateA2ConfigIfNoneProvided creates an Automate 2 configuration from Automate 1 configuration if a2ConfigPath is the zero value for a string (""). If a2ConfigPath is not the zero value, it's assumed that

  1. an a2 config has been loaded via the initializer options, and:
  2. the a2 config that we loaded contains user customizations that we must not override.

func (*A1Upgrade) SetA2Config

func (u *A1Upgrade) SetA2Config(a2Config *dc.AutomateConfig) error

SetA2Config replaces the upgrade's A2Config with the given a2Config.

func (*A1Upgrade) SetAdminPassword

func (u *A1Upgrade) SetAdminPassword(password string) error

SetAdminPassword sets the admin password for the upgrade.

func (*A1Upgrade) SetChannel

func (u *A1Upgrade) SetChannel(channel string) error

SetChannel sets the channel for the upgrade.

func (*A1Upgrade) SetChefServerEnabled

func (u *A1Upgrade) SetChefServerEnabled(enabled bool) error

func (*A1Upgrade) SetHartifactsPath

func (u *A1Upgrade) SetHartifactsPath(path string) error

SetHartifactsPath overrides the hartifacts path configuration during initialization.

func (*A1Upgrade) SetManifestDir

func (u *A1Upgrade) SetManifestDir(dir string) error

SetManifestDir sets the manifest dir for the upgrade.

func (*A1Upgrade) SetOverrideOrigin

func (u *A1Upgrade) SetOverrideOrigin(origin string) error

SetOverrideOrigin sets the override origin for the upgrade.

func (*A1Upgrade) SetUpgradeStrategy

func (u *A1Upgrade) SetUpgradeStrategy(strategy string) error

SetUpgradeStrategy sets the upgrade strategy for the upgrade.

func (*A1Upgrade) SetWorkflowEnabled

func (u *A1Upgrade) SetWorkflowEnabled(enabled bool) error

type Bookshelf

type Bookshelf struct {
	AbandonedUploadCleanupInterval json.Number `json:"abandoned_upload_cleanup_interval"`
	DbPoolerTimeout                json.Number `json:"db_pooler_timeout"`
	DbPoolInit                     json.Number `json:"db_pool_init"`
	DbPoolMax                      json.Number `json:"db_pool_max"`
	DbPoolQueueMax                 json.Number `json:"db_pool_queue_max"`
	DbPoolSize                     json.Number `json:"db_pool_size"`
	DeleteDataCleanupInterval      json.Number `json:"delete_data_cleanup_interval"`
	Enable                         bool        `json:"enable"`
	LogRotation                    struct {
		FileMaxbytes json.Number `json:"file_maxbytes"`
		NumToKeep    json.Number `json:"num_to_keep"`
	} `json:"log_rotation"`
	SqlRetryCount  json.Number `json:"sql_retry_count"`
	SqlRetryDelay  json.Number `json:"sql_retry_delay"`
	StorageType    string      `json:"storage_type"`
	StreamDownload bool        `json:"stream_download"`
	SqlDbTimeout   json.Number `json:"sql_db_timeout"`
}

type CSNginx

type CSNginx struct {
	ClientMaxBodySize         string      `json:"client_max_body_size"`
	Gzip                      string      `json:"gzip"`
	GzipCompLevel             string      `json:"gzip_comp_level"`
	GzipHTTPVersion           string      `json:"gzip_http_version"`
	GzipProxied               string      `json:"gzip_proxied"`
	KeepaliveTimeout          json.Number `json:"keepalive_timeout"`
	ProxyConnectTimeout       json.Number `json:"proxy_connect_timeout"`
	Sendfile                  string      `json:"sendfile"`
	ServerNamesHashBucketSize json.Number `json:"server_names_hash_bucket_size"`
	TCPNodelay                string      `json:"tcp_nodelay"`
	TCPNopush                 string      `json:"tcp_nopush"`
	WorkerConnections         json.Number `json:"worker_connections"`
	WorkerProcesses           json.Number `json:"worker_processes"`
}

type CSRPostgreSQL

type CSRPostgreSQL struct {
	Enable   bool        `json:"enable"`
	External bool        `json:"external"`
	Vip      string      `json:"vip"`
	Port     json.Number `json:"port"`
}

type ChefServerRunning

type ChefServerRunning struct {
	PrivateChef PrivateChef `json:"private_chef"`
}

type CompatChecker

type CompatChecker struct {
	Warnings int
	Failures int
	Msgs     strings.Builder
}

func NewCompatChecker

func NewCompatChecker() CompatChecker

func (*CompatChecker) BackupConfigured

func (c *CompatChecker) BackupConfigured(backupType string, retention bool, backupLocation string, backupS3Bucket string) error

func (*CompatChecker) ChefServerBookshelfConfigValid

func (c *CompatChecker) ChefServerBookshelfConfigValid(csBookshelfConfig *Bookshelf) error

func (*CompatChecker) ChefServerElasticsearchConfigValid

func (c *CompatChecker) ChefServerElasticsearchConfigValid(csEsConfig *OpscodeSolr4,
	a1EsConfig *DeliveryRunningOpensearch,
	csErchefConfig *OpscodeErchef) error

func (*CompatChecker) ChefServerPostgresConfigValid

func (c *CompatChecker) ChefServerPostgresConfigValid(csPgConfig *CSRPostgreSQL, a1PgConfig *DeliveryRunningPostgreSQL) error

func (*CompatChecker) DisasterRecoveryConfigured

func (c *CompatChecker) DisasterRecoveryConfigured(primary string, standby string) error

func (*CompatChecker) ExternalElasticsearchConfigured

func (c *CompatChecker) ExternalElasticsearchConfigured(clusterURLS []string) error

func (*CompatChecker) FipsConfigured

func (c *CompatChecker) FipsConfigured(fips bool) error

func (*CompatChecker) OcIDNotUsed

func (c *CompatChecker) OcIDNotUsed(ocIDConfig *OcID) error

func (*CompatChecker) ProxyConfigured

func (c *CompatChecker) ProxyConfigured(proxy string)

func (*CompatChecker) RunAutomateChecks

func (c *CompatChecker) RunAutomateChecks(a1Config *A1Config, skip CompatCheckerSkips) error
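
A sketch of driving the checker and acting on its exported counters; a1Config and skips (and the errors/fmt imports) are assumed to be in scope:

checker := NewCompatChecker()
if err := checker.RunAutomateChecks(a1Config, skips); err != nil {
	return err
}
if checker.Failures > 0 {
	return errors.New(checker.Msgs.String())
}
if checker.Warnings > 0 {
	fmt.Print(checker.Msgs.String()) // surface warnings, but continue
}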

func (*CompatChecker) RunChefServerChecks

func (c *CompatChecker) RunChefServerChecks(a1Config *A1Config) error

func (*CompatChecker) RunWorkflowChecks

func (c *CompatChecker) RunWorkflowChecks(a1Config *A1Config) error

RunWorkflowChecks runs all the workflow checks to verify compatibility with A2

TODO: @afiune here we can verify everything we need to verify: correct config, existing git-repos, valid ssh-keys, etc.

func (*CompatChecker) RunningMarketplaceImage

func (c *CompatChecker) RunningMarketplaceImage(omnibusRoot string) error

func (*CompatChecker) SAMLConfigured

func (c *CompatChecker) SAMLConfigured(saml bool) error

func (*CompatChecker) UnsupportedCSAddOnsNotUsed

func (c *CompatChecker) UnsupportedCSAddOnsNotUsed(omnibusRoot string) error

func (*CompatChecker) WorkflowGitReposValid

func (c *CompatChecker) WorkflowGitReposValid(gitReposDir string) error

WorkflowGitReposValid checks that the Workflow git repositories under gitReposDir are valid.

func (*CompatChecker) WorkflowInUse

func (c *CompatChecker) WorkflowInUse(workflowDir string) error

WorkflowInUse checks whether Workflow is in use, based on the contents of workflowDir.

type CompatCheckerSkips

type CompatCheckerSkips struct {
	BackupCheck           bool
	DisasterRecoveryCheck bool
	ExternalESCheck       bool
	FIPSCheck             bool
	SAMLCheck             bool
	WorkflowCheck         bool
}

func (*CompatCheckerSkips) SkipWorkflowCheck

func (s *CompatCheckerSkips) SkipWorkflowCheck()

@afiune delete me when workflow feature is completed, as well as the skip flags

type DataCollector

type DataCollector struct {
	HTTPInitCount json.Number `json:"http_init_count"`
	HTTPMaxCount  json.Number `json:"http_max_count"`
	Timeout       json.Number `json:"timeout"`
}

type DeliveryRunning

type DeliveryRunning struct {
	Delivery struct {
		FQDN      string `json:"fqdn"`
		IPVersion string `json:"ip_version"`
		Backup    struct {
			S3AccessKeyID          string `json:"access_key_id"`
			S3Bucket               string `json:"bucket"`
			S3SecretAccessKey      string `json:"secret_access_key"`
			S3Region               string `json:"region"`
			S3ServerSideEncryption string `json:"server_side_encryption"`
			S3SSECustomerAlgorithm string `json:"sse_customer_algorithm"`
			S3SSECustomerKey       string `json:"sse_customer_key"`
			S3SSECustomerKeyMD5    string `json:"sse_customer_key_md5"`
			S3SSEKMSKeyID          string `json:"ssekms_key_id"`
			Opensearch             struct {
				S3AccessKeyID          string `json:"access_key_id"`
				S3Bucket               string `json:"bucket"`
				S3Region               string `json:"region"`
				S3SecretAccessKey      string `json:"secret_access_key"`
				S3ServerSideEncryption string `json:"server_side_encryption"`
				Location               string `json:"location"`
				Type                   string `json:"type"`
			} `json:"opensearch"`
			Location  string `json:"location"`
			Type      string `json:"type"`
			Retention struct {
				Enabled      bool        `json:"enabled"`
				MaxArchives  json.Number `json:"max_archives"`
				MaxSnapshots json.Number `json:"max_snapshots"`
				// I'm not sure if we'll honor this but we'll grab it anyway
				Notation string `json:"notation"` // eg: "0 0 * * *"
			} `json:"cron"`
		}
		Compliance struct {
			MarketPath   string `json:"market_path"`
			ProfilesPath string `json:"profiles_path"`
			// I'm not sure we care about log rotation since we're relying
			// on the systemd journal. I've added it because it was in the
			// spec but we might be able to drop these later.
			LogRotation struct {
				MaxBytes       json.Number `json:"file_maxbytes"`
				RetentionCount json.Number `json:"num_to_keep"`
			} `json:"log_rotation"`
		} `json:"compliance_profiles"`
		DataCollector struct {
			Token string `json:"token"`
		} `json:"data_collector"`
		Delivery struct {
			GitRepos           string      `json:"git_repos"`
			LDAPHosts          []string    `json:"ldap_hosts"`
			LDAPPort           json.Number `json:"ldap_port"`
			LDAPTimeout        json.Number `json:"ldap_timeout"`
			LDAPBaseDN         string      `json:"ldap_base_dn"`
			LDAPBindDN         string      `json:"ldap_bind_dn"`
			LDAPBindDNPassword string      `json:"ldap_bind_dn_password"`
			LDAPEncryption     string      `json:"ldap_encryption"`
			LDAPLogin          string      `json:"ldap_attr_login"`
			LDAPMail           string      `json:"ldap_attr_mail"`
			Proxy              struct {
				Host     string      `json:"host"`
				Port     json.Number `json:"port"`
				User     string      `json:"user"`
				Password string      `json:"password"`
				NoProxy  []string    `json:"no_proxy"`
			} `json:"proxy"`
			PrimaryIp         string                       `json:"primary_ip"`
			StandbyIp         string                       `json:"standby_ip"`
			NoSSLVerification []string                     `json:"no_ssl_verification"`
			SSLCertificates   map[string]map[string]string `json:"ssl_certificates"`
			SecretsKey        string                       `json:"secrets_key"`
			ErlCookie         string                       `json:"erl_cookie"`
		} `json:"delivery"`
		Opensearch DeliveryRunningOpensearch `json:"opensearch"`
		FIPS       struct {
			Enabled bool `json:"enable"`
		} `json:"fips"`
		Nginx struct {
			AccessLog struct {
				BufferSize string `json:"buffer_size"`
				FlushTime  string `json:"flush_time"`
			} `json:"access_log"`
			ClientMaxBodySize        string      `json:"client_max_body_size"`
			Dir                      string      `json:"dir"`
			GZip                     string      `json:"gzip"`
			GZipCompLevel            string      `json:"gzip_comp_level"`
			GZipHTTPVersion          string      `json:"gzip_http_version"`
			GZipProxied              string      `json:"gzip_proxied"`
			GZipTypes                []string    `json:"gzip_types"`
			HTTPSPort                json.Number `json:"ssl_port"`
			HTTPPort                 json.Number `json:"non_ssl_port"`
			KeepaliveTimeout         json.Number `json:"keepalive_timeout"`
			KeepaliveRequests        json.Number `json:"keepalive_requests"`
			LargeClientHeaderBuffers struct {
				Size   string      `json:"size"`
				Number json.Number `json:"number"`
			} `json:"large_client_header_buffers"`
			LogRotation struct {
				MaxBytes       json.Number `json:"file_maxbytes"`
				RetentionCount json.Number `json:"num_to_keep"`
			} `json:"log_rotation"`
			MultiAccept           string      `json:"multi_accept"`
			Sendfile              string      `json:"sendfile"`
			SSLCiphers            string      `json:"ssl_ciphers"`
			SSLProtocols          string      `json:"ssl_protocols"`
			TCPNoPush             string      `json:"tcp_nopush"`
			TCPNoDelay            string      `json:"tcp_nodelay"`
			WorkerConnections     json.Number `json:"worker_connections"`
			WorkerProcesses       json.Number `json:"worker_processes"`
			WorkerProcessorMethod string      `json:"worker_processor_method"`
		} `json:"nginx"`
		Notifications struct {
			RuleStore   string `json:"rule_store_file"`
			LogRotation struct {
				MaxBytes       json.Number `json:"file_maxbytes"`
				RetentionCount json.Number `json:"num_to_keep"`
			} `json:"log_rotation"`
		} `json:"notifications"`
		PostgreSQL DeliveryRunningPostgreSQL `json:"postgresql"`
		Reaper     struct {
			RetentionPeriod       json.Number `json:"retention_period_in_days"`
			Threshold             json.Number `json:"free_space_threshold_percent"`
			Enabled               bool        `json:"enable"`
			Mode                  string      `json:"mode"`
			ArchiveDestination    string      `json:"archive_destination"`
			ArchiveFilesystemPath string      `json:"archive_filesystem_path"`
		} `json:"reaper"`
		Insights struct {
			DataDirectory string `json:"data_directory"`
		} `json:"insights"`
	} `json:"delivery"`
}

DeliveryRunning represents the data we're extracting from the A1 /etc/delivery/delivery-running.json configuration file.

type DeliveryRunningOpensearch

type DeliveryRunningOpensearch struct {
	ClusterURLS     []string    `json:"urls"`
	MaxOpenFiles    json.Number `json:"max_open_files"`
	MaxMapCount     json.Number `json:"max_map_count"`
	MaxLockedMemory string      `json:"max_locked_memory"`
	HeapSize        string      `json:"memory"`
	NewMemory       string      `json:"new_memory_size"`
	NginxProxyURL   string      `json:"nginx_proxy_url"`
	LogRotation     struct {
		MaxBytes       json.Number `json:"file_maxbytes"`
		RetentionCount json.Number `json:"num_to_keep"`
	} `json:"log_rotation"`
	RepoPath struct {
		Data   string `json:"data"`
		Logs   string `json:"logs"`
		Backup string `json:"repo"`
	} `json:"path"`
}

type DeliveryRunningPostgreSQL

type DeliveryRunningPostgreSQL struct {
	// Vip and SuperuserUsername are used only during upgrade from A1.
	// We deliberately do not generate entries for them in the A2 config.
	CheckpointSegments         json.Number `json:"checkpoint_segments"`
	CheckpointTimeout          string      `json:"checkpoint_timeout"`
	CheckpointCompletionTarget json.Number `json:"checkpoint_completion_target"`
	CheckpointWarning          string      `json:"checkpoint_warning"`
	DataDirectory              string      `json:"data_dir"`
	EffectiveCacheSize         string      `json:"effective_cache_size"`
	LogRotation                struct {
		MaxBytes       json.Number `json:"file_maxbytes"`
		RetentionCount json.Number `json:"num_to_keep"`
	} `json:"log_rotation"`
	ListenAddress          string      `json:"listen_address"`
	MaxConnections         json.Number `json:"max_connections"`
	MD5AuthCIDRAddresses   []string    `json:"md5_auth_cidr_addresses"`
	Port                   json.Number `json:"port"`
	SharedBuffers          string      `json:"shared_buffers"`
	SHMMAX                 json.Number `json:"shmmax"`
	SHMALL                 json.Number `json:"shmall"`
	SuperuserUsername      string      `json:"superuser_username"`
	SuperuserEnable        bool        `json:"superuser_enable"`
	TrustAuthCIDRAddresses []string    `json:"trust_auth_cidr_addresses"`
	Username               string      `json:"username"`
	WorkMem                string      `json:"work_mem"`
	Vip                    string      `json:"vip"`
}

type DeliverySecrets

type DeliverySecrets struct {
	Delivery struct {
		SQLPassword     string `json:"sql_password"`
		SQLROPassword   string `json:"sql_ro_password"`
		SQLREPLPassword string `json:"sql_repl_password"`
		SecretsKey      string `json:"secrets_key"`
	} `json:"delivery"`
	Postgresql struct {
		SuperuserPassword string `json:"superuser_password"`
	} `json:"postgresql"`
	RabbitMQ struct {
		Password           string `json:"password"`
		ManagementPassword string `json:"management_password"`
	} `json:"rabbitmq"`
}

DeliverySecrets represents the data we're extracting from the A1 /etc/delivery/delivery-secrets.json secrets file.

type EsAggSnapshotStats

type EsAggSnapshotStats struct {
	// Snapshots is the JSON array of status objects for each requested snapshot
	Snapshots []EsSnapshotStats `json:"snapshots"`
}

EsAggSnapshotStats represents the JSON messages returned by elasticsearch's snapshot status API. The API is reached at a URL like:

http(s)://host:port/_snapshot/BACKUP_REPO_NAME/SNAPSHOT_NAME/_status

Since elasticsearch supports requesting the status of multiple snapshots in one request, the top-level json object is a one-key map of "snapshots" to an Array of status objects for individual snapshots:

{
  "snapshots": [ snap1_stats, snap2_stats, ... ]
}

type EsShardsStats

type EsShardsStats struct {
	Done  int `json:"done"`
	Total int `json:"total"`
}

EsShardsStats is the JSON object in an elasticsearch snapshot status message that contains status about how many shards there are and how many have been snapshotted

type EsSnapshotStats

type EsSnapshotStats struct {
	// State can be "IN_PROGRESS", "STARTED", "SUCCESS", or one of a few failure
	// states. es5 and below use "STARTED", es6 uses "IN_PROGRESS"
	State string `json:"state"`
	// ShardsStats contains stats about the progress of the snapshot in terms of
	// shards. It's the only reliable way to track the progress of the snapshot,
	// as elasticsearch does not pre-compute the number of bytes needed for a
	// snapshot and therefore the "total_size_in_bytes" metric in the "stats"
	// object is not constant.
	ShardsStats EsShardsStats `json:"shards_stats"`
}

EsSnapshotStats represents the status of an individual snapshot, as returned by elasticsearch's snapshot status API.
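
A sketch of decoding a status response with these types and reading shard progress; the io.Reader is assumed to be the HTTP response body:

import (
	"encoding/json"
	"errors"
	"io"
)

// snapshotProgress decodes one snapshot status response and reports
// shard progress for the first snapshot it contains.
func snapshotProgress(body io.Reader) (done, total int, err error) {
	var agg EsAggSnapshotStats
	if err := json.NewDecoder(body).Decode(&agg); err != nil {
		return 0, 0, err
	}
	if len(agg.Snapshots) == 0 {
		return 0, 0, errors.New("no snapshots in status response")
	}
	stats := agg.Snapshots[0].ShardsStats
	return stats.Done, stats.Total, nil
}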

type FailureScenario

type FailureScenario int

FailureScenario represents a set of failure conditions that we want to be able to test and demo. To invoke a given failure condition in an e2e test, set FAILURE=number on the command line. To check if the user has requested a certain failure scenario, use the Failure() function.

const (
	// NoFailure indicates that the user hasn't asked to exercise a specific
	// failure case
	NoFailure FailureScenario = 0

	// PreflightNoA1 is the case we're running upgrade on a host that doesn't
	// have automate v1 installed.
	PreflightNoA1 FailureScenario = 100

	// PreflightA1TooOld is the case that the installed automate v1 doesn't meet
	// our minimum version requirement for upgrading
	PreflightA1TooOld FailureScenario = 101

	// PreflightA1Down is the case that automate v1 is not running. We need a1 to
	// be up so we can export data from postgres
	PreflightA1Down FailureScenario = 102

	// PreflightA1Unhealthy is the case that a1's status check is failing. We
	// just don't want to deal with all the ways that could possibly break
	// upgrades.
	PreflightA1Unhealthy FailureScenario = 103

	// PreflightPostgresTooBig makes the database checker return 1TB as the
	// size of every postgres database, which will make the preflight fail
	// unless you have a few TB of disk.
	PreflightPostgresTooBig FailureScenario = 104

	// ConfigConflictFIPS is the case that a1 is configured for FIPS, which we
	// don't support currently
	ConfigConflictFIPS FailureScenario = 200

	// ConfigConflictNoBackup is the case that a1 doesn't have backups
	// configured. We want to very strongly encourage users to backup before
	// upgrading in case the upgrade doesn't work.
	ConfigConflictNoBackup FailureScenario = 201

	// ConfigConflictSAML is the case that a1 is configured with SAML, which we
	// don't migrate currently
	ConfigConflictSAML FailureScenario = 202

	// MaintModeDataCollectorNotDown is the case that we attempted to put a1 in
	// maintenance mode but the data collector URL isn't serving 503s
	MaintModeDataCollectorNotDown FailureScenario = 300

	// MaintModeDataCollectorRetryTest indicates that the test harness should
	// fail to return a 503 for the data collector URL a few times before
	// returning a 503
	MaintModeDataCollectorRetryTest FailureScenario = 302

	// MaintModeQueuedDataNotProcessed is the case that we put a1 in maintenance
	// mode but the data collector queue didn't fully drain within some timeout
	// period.
	MaintModeQueuedDataNotProcessed FailureScenario = 301

	// MaintModeQueuedDataRetryTest indicates that the test harness should return
	// a non-zero queue depth from the stub rabbitmq management API a few times
	// before returning a zero queue depth.
	MaintModeQueuedDataRetryTest FailureScenario = 303

	// MaintModeRequiredRecipeNotDown is the case that we attempted to put A1 in
	// maintenance mode but the Chef server isn't serving 503s
	MaintModeRequiredRecipeNotDown FailureScenario = 304

	// MaintModeRequiredRecipeRetryTest will fail to return a 503 for the
	// required_recipe URL a few times before returning a 503
	MaintModeRequiredRecipeRetryTest FailureScenario = 305

	// A1BackupFail is the case that the backup of a1 failed
	A1BackupFail FailureScenario = 400

	// ExtractA1DataFail is the case that we failed for some reason to dump a1's
	// postgres or copy some of the other data.
	ExtractA1DataFail FailureScenario = 500

	// A1ShutdownFail is the case that `automate-ctl stop` errored out for some
	// reason.
	A1ShutdownFail FailureScenario = 600

	// ChefServerShutdownFail is the case that `chef-server-ctl stop` errored out
	// for some reason
	ChefServerShutdownFail FailureScenario = 601

	// ChefServerShowSecretFail is the case that `chef-server-ctl
	// show-secret` errored out for some reason
	ChefServerShowSecretFail FailureScenario = 602

	// DataImportFailPostgres is the case that we couldn't load a1's pg data into
	// a2's pg
	DataImportFailPostgres FailureScenario = 700

	// DataImportFailElastic is the case that we couldn't load a1's es data into
	// a2's es
	DataImportFailElastic FailureScenario = 701

	// Stage1MigrationFail is the case that domain services failed to ingest the
	// staged a1 data for some reason (or otherwise failed to start)
	Stage1MigrationFail FailureScenario = 800
)

type FileMover

type FileMover struct {
	RelDestPath string
	ServiceName string
	SrcPath     string
	Timeout     time.Duration

	User     string
	Group    string
	RsyncCmd string
	// contains filtered or unexported fields
}

A FileMover moves directories of files on the same system. Abstracted into our own type so we can consistently add behaviors such as permissions setting and timeouts.

func FileMoversForConfig

func FileMoversForConfig(d *DeliveryRunning, workflow bool) []*FileMover

FileMoversForConfig returns the file migrations that should take place given the passed DeliveryRunning configuration

func NewFileMover

func NewFileMover(src string, serviceName string, relDst string, opts ...FileMoverOpt) *FileMover

NewFileMover returns a FileMover with the default timeout and commandExecutor.

func (*FileMover) AlreadyMoved

func (f *FileMover) AlreadyMoved() (bool, error)

AlreadyMoved returns true if the directory has already been migrated.

func (*FileMover) DestPath

func (f *FileMover) DestPath() string

func (*FileMover) Move

func (f *FileMover) Move(w cli.BodyWriter) error

Move performs the file move

func (*FileMover) MoveStarted

func (f *FileMover) MoveStarted() (bool, error)

MoveStarted returns true if we've previously attempted to move this directory.

type FileMoverOpt

type FileMoverOpt func(*FileMover)

FileMoverOpt sets properties of the file mover

func ForceCopy

func ForceCopy() FileMoverOpt

ForceCopy is a FileMover option that forces the file to be copied
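
A sketch of composing these pieces; the source path, service name, relative destination, and writer are illustrative assumptions:

mover := NewFileMover("/var/opt/delivery/compliance/profiles",
	"compliance-service", "profiles", ForceCopy())
already, err := mover.AlreadyMoved()
if err != nil {
	return err
}
if !already {
	if err := mover.Move(writer); err != nil {
		return err
	}
}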

type OcBifrost

type OcBifrost struct {
	DbPoolerTimeout json.Number `json:"db_pooler_timeout"`
	DbPoolInit      json.Number `json:"db_pool_init"`
	DbPoolMax       json.Number `json:"db_pool_max"`
	DbPoolQueueMax  json.Number `json:"db_pool_queue_max"`
	DbPoolSize      json.Number `json:"db_pool_size"`
	LogRotation     struct {
		FileMaxbytes         json.Number `json:"file_maxbytes"`
		MaxMessagesPerSecond json.Number `json:"max_messages_per_second"`
		NumToKeep            json.Number `json:"num_to_keep"`
	} `json:"log_rotation"`
}

type OcID

type OcID struct {
	Applications map[string]interface{} `json:"applications"`
}

type OpscodeErchef

type OpscodeErchef struct {
	AuthzFanout            json.Number `json:"authz_fanout"`
	AuthSkew               json.Number `json:"auth_skew"`
	AuthzTimeout           json.Number `json:"authz_timeout"`
	AuthzPoolerTimeout     json.Number `json:"authz_pooler_timeout"`
	BaseResourceURL        string      `json:"base_resource_url"`
	BulkFetchBatchSize     json.Number `json:"bulk_fetch_batch_size"`
	DBPoolerTimeout        json.Number `json:"db_pooler_timeout"`
	DBPoolInit             json.Number `json:"db_pool_init"`
	DBPoolMax              json.Number `json:"db_pool_max"`
	DBPoolQueueMax         json.Number `json:"db_pool_queue_max"`
	DepsolverPoolerTimeout json.Number `json:"depsolver_pooler_timeout"`
	DepsolverPoolQueueMax  json.Number `json:"depsolver_pool_queue_max"`
	DepsolverTimeout       json.Number `json:"depsolver_timeout"`
	DepsolverWorkerCount   json.Number `json:"depsolver_worker_count"`
	KeygenCacheSize        json.Number `json:"keygen_cache_size"`
	KeygenStartSize        json.Number `json:"keygen_start_size"`
	KeygenTimeout          json.Number `json:"keygen_timeout"`
	LogRotation            struct {
		FileMaxbytes         json.Number `json:"file_maxbytes"`
		MaxMessagesPerSecond json.Number `json:"max_messages_per_second"`
		NumToKeep            json.Number `json:"num_to_keep"`
	} `json:"log_rotation"`
	MaxRequestSize         json.Number `json:"max_request_size"`
	MemoryMaxbytes         json.Number `json:"max_bytes"`
	ReindexBatchSize       json.Number `json:"reindex_batch_size"`
	ReindexItemRetries     json.Number `json:"reindex_item_retries"`
	ReindexSleepMaxMs      json.Number `json:"reindex_sleep_max_ms"`
	ReindexSleepMinMs      json.Number `json:"reindex_sleep_min_ms"`
	SearchBatchSizeMaxSize json.Number `json:"search_batch_max_size"`
	SearchBatchSizeMaxWait json.Number `json:"search_batch_max_wait"`
	SearchProvider         string      `json:"search_provider"`
	SearchQueueMode        string      `json:"search_queue_mode"`
	SolrHTTPInitCount      json.Number `json:"solr_http_init_count"`
	SolrHTTPMaxCount       json.Number `json:"solr_http_max_count"`
	SolrTimeout            json.Number `json:"solr_timeout"`
	StrictSearchResultACLs bool        `json:"strict_search_result_acls"`
	SQLDBTimeout           json.Number `json:"sql_db_timeout"`
}

type OpscodeSolr4

type OpscodeSolr4 struct {
	Enable      bool   `json:"enable"`
	External    bool   `json:"external"`
	ExternalURL string `json:"external_url"`
}

type Option

type Option func(*A1Upgrade) error

Option represents an option that can be applied to an A1Upgrade. This is implemented using the "functional options" pattern. Various functions return an Option which can then be passed as arguments to NewA1Upgrade:

upgrade, err := NewA1Upgrade(
	a1upgrade.WithDeliverySecrets("/some/path.json"),
	a1upgrade.WithDeliveryRunning("/some/other/path.json"))

func SetFileMoveTimeout

func SetFileMoveTimeout(seconds int) Option

SetFileMoveTimeout returns an Option that sets the timeout in seconds for moving files during upgrade from A1.

func SetPostgresDumpWait

func SetPostgresDumpWait(seconds int) Option

SetPostgresDumpWait returns an Option that sets the PostgreSQL dump timeout in seconds.

func SetPostgresRestoreWait

func SetPostgresRestoreWait(seconds int) Option

SetPostgresRestoreWait returns an Option that sets the PostgreSQL restore timeout in seconds.

func SkipBackupConfiguredCheck

func SkipBackupConfiguredCheck(backup bool) Option

SkipBackupConfiguredCheck returns an Option that indicates whether to skip the A1 upgrade preflight check for configured backups.

func SkipDisasterRecoveryConfiguredCheck

func SkipDisasterRecoveryConfiguredCheck(dr bool) Option

SkipDisasterRecoveryConfiguredCheck returns an Option that indicates whether to skip the A1 upgrade preflight check for disaster recovery.

func SkipExternalESConfiguredCheck

func SkipExternalESConfiguredCheck(es bool) Option

SkipExternalESConfiguredCheck returns an Option that indicates whether to skip the A1 upgrade preflight check for external elasticsearch.

func SkipFIPSConfiguredCheck

func SkipFIPSConfiguredCheck(fips bool) Option

SkipFIPSConfiguredCheck returns an Option that indicates whether to skip the A1 upgrade preflight check for FIPS.

func SkipSAMLConfiguredCheck

func SkipSAMLConfiguredCheck(saml bool) Option

SkipSAMLConfiguredCheck returns an Option that indicates whether to skip the A1 upgrade preflight check for SAML.

func SkipUpgradeBackup

func SkipUpgradeBackup(backup bool) Option

SkipUpgradeBackup returns an Option that indicates whether to skip A1 backup during upgrade.

func SkipUpgradePreflight

func SkipUpgradePreflight(skip bool) Option

SkipUpgradePreflight returns an Option that indicates whether to skip the upgrade preflight checks.

func SkipWorkflowConfiguredCheck

func SkipWorkflowConfiguredCheck(workflow bool) Option

SkipWorkflowConfiguredCheck returns an Option that indicates whether to skip the A1 upgrade preflight check for workflow.

func WithA2Config

func WithA2Config(a2Config *dc.AutomateConfig) Option

WithA2Config returns an Option that sets the a2 config that should be used for the upgrade.

func WithA2ConfigPath

func WithA2ConfigPath(path string, options ...dc.AutomateConfigOpt) Option

WithA2ConfigPath returns an Option that sets the path to a TOML representation of the configuration that should be used for the upgrade.

func WithAdminPassword

func WithAdminPassword(password string) Option

WithAdminPassword returns an Option that sets the admin password in the A2 configuration.

func WithChannel

func WithChannel(channel string) Option

WithChannel returns an Option that sets the channel in the A2 config.

func WithChefServerEnabled

func WithChefServerEnabled(enabled bool) Option

func WithChefServerRunning

func WithChefServerRunning(path string, EnableChefServer bool) Option

WithChefServerRunning returns an Option that sets the path to chef-server-running.json and loads it if EnableChefServer is true.

func WithDeliveryRunning

func WithDeliveryRunning(path string) Option

WithDeliveryRunning returns an Option that sets the path to the Automate v1 configuration that should be used during the upgrade.

func WithDeliverySecrets

func WithDeliverySecrets(path string) Option

WithDeliverySecrets returns an Option that sets the path to the Automate v1 Secrets file that should be used during the upgrade.

func WithHartifactsPath

func WithHartifactsPath(path string) Option

WithHartifactsPath returns an Option that sets the hartifacts path in the A2 config.

func WithManifestDir

func WithManifestDir(path string) Option

WithManifestDir returns an Option that sets the manifest directory in the A2 config.

func WithOverrideOrigin

func WithOverrideOrigin(origin string) Option

WithOverrideOrigin returns an Option that sets the override origin in the A2 config.

func WithUpgradeStrategy

func WithUpgradeStrategy(strategy string) Option

WithUpgradeStrategy returns an Option that sets the upgrade strategy in the A2 config.

func WithWorkflowEnabled

func WithWorkflowEnabled(enabled bool) Option

type PreflightRunner

type PreflightRunner struct {
	// contains filtered or unexported fields
}

The PreflightRunner aggregates failures across our upgrade preflight checks and presents them to the user once all checks have been run.

func NewPreflightRunner

func NewPreflightRunner(
	writer cli.FormatWriter,
	deliveryRunning *DeliveryRunning,
	deliverySecrets *DeliverySecrets,
	checkChefServer bool,
	checkWorkflow bool) *PreflightRunner

NewPreflightRunner sets up a PreflightRunner

func (*PreflightRunner) Run

func (p *PreflightRunner) Run() error

Run the preflight checks
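
A sketch of typical usage; writer, cfg, and the two feature flags are assumed to be in scope:

runner := NewPreflightRunner(writer, cfg.DeliveryRunning,
	cfg.DeliverySecrets, enableChefServer, enableWorkflow)
if err := runner.Run(); err != nil {
	// Run aggregates the individual check failures into this error.
	return err
}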

type PrivateChef

type PrivateChef struct {
	Bookshelf     Bookshelf     `json:"bookshelf"`
	CSNginx       CSNginx       `json:"nginx"`
	DataCollector DataCollector `json:"data_collector"`
	OcBifrost     OcBifrost     `json:"oc_bifrost"`
	OcID          OcID          `json:"oc_id"`
	OpscodeErchef OpscodeErchef `json:"opscode-erchef"`
	OpscodeSolr4  OpscodeSolr4  `json:"opscode-solr4"`
	Postgresql    CSRPostgreSQL `json:"postgresql"`
}

type RabbitStats

type RabbitStats struct {
	Messages int `json:"messages"`
}

RabbitStats maps the JSON from rabbit's management API to a struct. We discard almost all of it except for the Messages count.
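
A sketch of pulling the count out of a management API response body:

import (
	"encoding/json"
	"io"
)

// queueMessages decodes a queue object from the management API and
// returns its message count; every other field is discarded.
func queueMessages(body io.Reader) (int, error) {
	var stats RabbitStats
	if err := json.NewDecoder(body).Decode(&stats); err != nil {
		return 0, err
	}
	return stats.Messages, nil
}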

type Reindexer

type Reindexer struct {
	// contains filtered or unexported fields
}

func NewReindexer

func NewReindexer(w cli.FormatWriter, elasticsearchURL string) (*Reindexer, error)

func (*Reindexer) DiskRequirementsInBytes

func (r *Reindexer) DiskRequirementsInBytes() (largestIndexBytes int64, err error)

func (*Reindexer) RunReindex

func (r *Reindexer) RunReindex() error

func (*Reindexer) UnknownIndicesError

func (r *Reindexer) UnknownIndicesError() error

type StatusHandler

type StatusHandler struct {
	// contains filtered or unexported fields
}

StatusHandler is a client side backup event stream handler.

func NewStatusHandler

func NewStatusHandler(opts ...StatusHandlerOpt) *StatusHandler

NewStatusHandler returns a new instance of a backup event handler

func (*StatusHandler) HandleStatus

func (sh *StatusHandler) HandleStatus(status *api.A1UpgradeStatusResponse) (done bool, err error)

HandleStatus handles an A1 upgrade status event. It reports done == true once the upgrade has completed, and returns an error if the migration failed or its status is unknown.

type StatusHandlerOpt

type StatusHandlerOpt func(*StatusHandler)

StatusHandlerOpt represents a configuration function for the event handler.

func NoTTY

func NoTTY() StatusHandlerOpt

NoTTY configures the status handler to produce simple output suitable for non-TTY writers.

func WithWriter

func WithWriter(writer cli.FormatWriter) StatusHandlerOpt

WithWriter configures the status handler's writer.
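
Putting the handler together with the package's error variables; fetchStatus is a hypothetical poll of the upgrade status API:

handler := NewStatusHandler(WithWriter(writer), NoTTY())
for {
	status := fetchStatus() // hypothetical: returns *api.A1UpgradeStatusResponse
	done, err := handler.HandleStatus(status)
	if err != nil {
		// e.g. ErrMigrationFailed or ErrMigrationStatusUnknown
		return err
	}
	if done {
		return nil
	}
}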
