playback

command module
v0.1.10 Latest
Published: Nov 20, 2015 License: MIT Imports: 2 Imported by: 0

README

Playback

Playback is an OpenStack provisioning DevOps tool that deploys all of the OpenStack components automatically and with high availability.

Define an inventory file

The inventory file lives at inventory; the default settings target the Vagrant testing nodes. Change the parameters to match your environment.

Define your variables in vars/openstack

All of the parameters live in vars/openstack/openstack.yml.

  • openstack.yml

For OpenStack HA

Deploy Playback
pip install playback
Initialize the configuration
playback --init
cd .playback
Configure the storage network (DEPRECATED: use playback-nic instead)
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=lb01 storage_ip=192.168.1.10 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=lb02 storage_ip=192.168.1.11 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=controller01 storage_ip=192.168.1.12 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=controller02 storage_ip=192.168.1.13 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=compute01 storage_ip=192.168.1.14 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=compute02 storage_ip=192.168.1.20 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=compute03 storage_ip=192.168.1.18 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=compute04 storage_ip=192.168.1.17 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=compute05 storage_ip=192.168.1.16 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
playback --ansible 'openstack_interfaces.yml --extra-vars "node_name=compute06 storage_ip=192.168.1.15 storage_mask=255.255.255.0 storage_network=192.168.1.0 storage_broadcast=192.168.1.255" -vvvv'
Using playback-nic

Purge the existing configuration and set the address of eth1 on host 192.169.150.19 to 192.169.151.19 as the public interface:

playback-nic --purge --public --host 192.169.150.19 --user ubuntu --address 192.169.151.19 --nic eth1 --netmask 255.255.255.0 --gateway 192.169.151.1 --dns-nameservers "192.169.11.11 192.169.11.12"

Set the address of eth2 on host 192.169.150.19 to 192.168.1.12 as the private interface:

playback-nic --private --host 192.169.150.19 --user ubuntu --address 192.168.1.12 --nic eth2 --netmask 255.255.255.0
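
Under the hood, playback-nic presumably ends up writing a standard Ubuntu /etc/network/interfaces stanza on the target. For the private-interface example above, that would look roughly like this (an illustrative sketch derived from the flags, not the tool's verbatim output):

auto eth2
iface eth2 inet static
    address 192.168.1.12
    netmask 255.255.255.0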
HAProxy and Keepalived
playback --ansible 'openstack_haproxy.yml --extra-vars "host=lb01 router_id=lb01 state=MASTER priority=150" -vvvv'
playback --ansible 'openstack_haproxy.yml --extra-vars "host=lb02 router_id=lb02 state=SLAVE priority=100" -vvvv'
python patch-limits.py
Prepare OpenStack basic environment
playback --ansible 'openstack_basic_environment.yml -vvvv'
MariaDB Cluster
playback --ansible 'openstack_mariadb.yml --extra-vars "host=controller01 my_ip=192.169.151.19" -vvvv'
playback --ansible 'openstack_mariadb.yml --extra-vars "host=controller02 my_ip=192.169.151.17" -vvvv'
python keepalived.py
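
Once both controllers are deployed, you can check that the Galera cluster actually formed (a quick verification, assuming you know the MariaDB root password configured for the cluster):

mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"

The reported cluster size should be 2 for this two-controller layout.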
RabbitMQ Cluster
playback --ansible 'openstack_rabbitmq.yml --extra-vars "host=controller01" -vvvv'
playback --ansible 'openstack_rabbitmq.yml --extra-vars "host=controller02" -vvvv'
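
To confirm that the two nodes joined the same RabbitMQ cluster, check the cluster status on either controller:

sudo rabbitmqctl cluster_status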
Keystone
playback --ansible 'openstack_keystone.yml --extra-vars "host=controller01" -vvvv'
playback --ansible 'openstack_keystone.yml --extra-vars "host=controller02" -vvvv'
Format devices for Swift Storage (sdb1 and sdc1)

On each of the Swift storage nodes, the devices /dev/sdb and /dev/sdc must contain a suitable partition table with one partition (sdb1 and sdc1) occupying the entire device. Although the Object Storage service supports any file system with extended attributes (xattr), testing and benchmarking indicate the best performance and reliability on XFS.

playback --ansible 'openstack_storage_partitions.yml --extra-vars "host=compute05" -vvvv'
playback --ansible 'openstack_storage_partitions.yml --extra-vars "host=compute06" -vvvv'
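
If you prefer to prepare the disks by hand instead, one way to create a single whole-disk XFS partition on each storage node looks roughly like this (an illustrative sketch, not necessarily what the playbook runs):

sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs -f /dev/sdb1
sudo parted -s /dev/sdc mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs -f /dev/sdc1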
Swift Storage
playback --ansible 'openstack_swift_storage.yml --extra-vars "host=compute05 my_storage_ip=192.168.1.16" -vvvv'
playback --ansible 'openstack_swift_storage.yml --extra-vars "host=compute06 my_storage_ip=192.168.1.15" -vvvv'
Swift Proxy
playback --ansible 'openstack_swift_proxy.yml --extra-vars "host=controller01" -vvvv'
playback --ansible 'openstack_swift_proxy.yml --extra-vars "host=controller02" -vvvv'                    
Initialize the Swift rings
playback --ansible 'openstack_swift_builder_file.yml -vvvv'
playback --ansible 'openstack_swift_add_node_to_the_ring.yml --extra-vars "swift_storage_storage_ip=192.168.1.16 device_name=sdb1 device_weight=100" -vvvv'
playback --ansible 'openstack_swift_add_node_to_the_ring.yml --extra-vars "swift_storage_storage_ip=192.168.1.16 device_name=sdc1 device_weight=100" -vvvv'
playback --ansible 'openstack_swift_add_node_to_the_ring.yml --extra-vars "swift_storage_storage_ip=192.168.1.15 device_name=sdb1 device_weight=100" -vvvv'
playback --ansible 'openstack_swift_add_node_to_the_ring.yml --extra-vars "swift_storage_storage_ip=192.168.1.15 device_name=sdc1 device_weight=100" -vvvv'
playback --ansible 'openstack_swift_rebalance_ring.yml -vvvv'
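
For reference, adding a device to the rings by hand follows the standard Swift install guide and looks roughly like this on the node holding the builder files (a sketch only; ports 6002/6001/6000 are the Kilo-era defaults and may differ from what the playbooks configure):

cd /etc/swift
swift-ring-builder account.builder add r1z1-192.168.1.16:6002/sdb1 100
swift-ring-builder container.builder add r1z1-192.168.1.16:6001/sdb1 100
swift-ring-builder object.builder add r1z1-192.168.1.16:6000/sdb1 100

Repeat for each device and storage IP, then rebalance all three builders.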
Distribute ring configuration files

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and any additional nodes running the proxy service.
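
For example, assuming the rings were built on controller01 and using the hosts from this guide (a hypothetical loop; adjust the node list to your layout):

for node in controller02 compute05 compute06; do
  scp /etc/swift/*.ring.gz ubuntu@$node:/tmp/
  ssh ubuntu@$node 'sudo mv /tmp/*.ring.gz /etc/swift/'
done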

Finalize swift installation
playback --ansible 'openstack_swift_finalize_installation.yml --extra-vars "hosts=swift_proxy" -vvvv'
playback --ansible 'openstack_swift_finalize_installation.yml --extra-vars "hosts=swift_storage" -vvvv'
Glance (Swift backend)
playback --ansible 'openstack_glance.yml --extra-vars "host=controller01" -vvvv'
playback --ansible 'openstack_glance.yml --extra-vars "host=controller02" -vvvv'
To deploy the Ceph admin node

The admin node must have password-less SSH access to the Ceph nodes. When ceph-deploy logs in to a Ceph node as a particular user, that user must have passwordless sudo privileges.
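
If the deployment user does not already have passwordless sudo on the Ceph nodes, the usual way to grant it (assuming the user is ubuntu) is:

echo "ubuntu ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ubuntu
sudo chmod 0440 /etc/sudoers.d/ubuntu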

Copy the SSH public key from the Ceph admin node to each Ceph node

ssh-keygen
ssh-copy-id ubuntu@ceph_node
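
Here ceph_node is a placeholder; repeat for every node in the deployment, for example:

for node in controller01 controller02 compute01 compute02 compute03 compute04 compute05 compute06; do
  ssh-copy-id ubuntu@$node
done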

Deploy the Ceph admin node

playback --ansible 'openstack_ceph_admin.yml -vvvv'
To deploy the Ceph initial monitor
playback --ansible 'openstack_ceph_initial_mon.yml -vvvv'
To deploy the Ceph clients
playback --ansible 'openstack_ceph_client.yml --extra-vars "client=controller01" -vvvv'
playback --ansible 'openstack_ceph_client.yml --extra-vars "client=controller02" -vvvv'
playback --ansible 'openstack_ceph_client.yml --extra-vars "client=compute01" -vvvv'
playback --ansible 'openstack_ceph_client.yml --extra-vars "client=compute02" -vvvv'
playback --ansible 'openstack_ceph_client.yml --extra-vars "client=compute03" -vvvv'
playback --ansible 'openstack_ceph_client.yml --extra-vars "client=compute04" -vvvv'
playback --ansible 'openstack_ceph_client.yml --extra-vars "client=compute05" -vvvv'
playback --ansible 'openstack_ceph_client.yml --extra-vars "client=compute06" -vvvv'
To add Ceph initial monitor(s) and gather the keys
playback --ansible 'openstack_ceph_gather_keys.yml -vvvv'
To add Ceph OSDs
playback --ansible 'openstack_ceph_osd.yml --extra-vars "node=compute01 disk=sdb partition=sdb1" -vvvv'
playback --ansible 'openstack_ceph_osd.yml --extra-vars "node=compute01 disk=sdc partition=sdc1" -vvvv'
playback --ansible 'openstack_ceph_osd.yml --extra-vars "node=compute02 disk=sdb partition=sdb1" -vvvv'
playback --ansible 'openstack_ceph_osd.yml --extra-vars "node=compute02 disk=sdc partition=sdc1" -vvvv'
playback --ansible 'openstack_ceph_osd.yml --extra-vars "node=compute03 disk=sdb partition=sdb1" -vvvv'
playback --ansible 'openstack_ceph_osd.yml --extra-vars "node=compute03 disk=sdc partition=sdc1" -vvvv'
playback --ansible 'openstack_ceph_osd.yml --extra-vars "node=compute04 disk=sdb partition=sdb1" -vvvv'
playback --ansible 'openstack_ceph_osd.yml --extra-vars "node=compute04 disk=sdc partition=sdc1" -vvvv'
To add Ceph monitors
playback --ansible 'openstack_ceph_mon.yml --extra-vars "node=compute01" -vvvv'
playback --ansible 'openstack_ceph_mon.yml --extra-vars "node=compute02" -vvvv'
playback --ansible 'openstack_ceph_mon.yml --extra-vars "node=compute03" -vvvv'
playback --ansible 'openstack_ceph_mon.yml --extra-vars "node=compute04" -vvvv'
To copy the Ceph keys to nodes

Copy the configuration file and admin key to your admin node and your Ceph nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
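
Done by hand, this is the standard ceph-deploy step (shown for reference only; the playbook below presumably wraps something equivalent), followed by making the admin keyring readable on each node:

ceph-deploy admin controller01 controller02 compute01 compute02 compute03 compute04 compute05 compute06
sudo chmod +r /etc/ceph/ceph.client.admin.keyring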

playback --ansible 'openstack_ceph_copy_keys.yml --extra-vars "node=controller01" -vvvv'
playback --ansible 'openstack_ceph_copy_keys.yml --extra-vars "node=controller02" -vvvv'
playback --ansible 'openstack_ceph_copy_keys.yml --extra-vars "node=compute01" -vvvv'
playback --ansible 'openstack_ceph_copy_keys.yml --extra-vars "node=compute02" -vvvv'
playback --ansible 'openstack_ceph_copy_keys.yml --extra-vars "node=compute03" -vvvv'
playback --ansible 'openstack_ceph_copy_keys.yml --extra-vars "node=compute04" -vvvv'
playback --ansible 'openstack_ceph_copy_keys.yml --extra-vars "node=compute05" -vvvv'
playback --ansible 'openstack_ceph_copy_keys.yml --extra-vars "node=compute06" -vvvv'
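
With the keys distributed, you can verify cluster health, monitor quorum and the OSD tree from any of those nodes:

sudo ceph -s
sudo ceph osd tree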
Create the cinder Ceph user and pool
playback --ansible 'openstack_ceph_cinder_pool_user.yml -vvvv'
cinder-api
playback --ansible 'openstack_cinder_api.yml --extra-vars "host=controller01" -vvvv'
playback --ansible 'openstack_cinder_api.yml --extra-vars "host=controller02" -vvvv'
Install cinder-volume on the controller nodes (Ceph backend)
playback --ansible 'openstack_cinder_volume_ceph.yml --extra-vars "host=controller01" -vvvv'
playback --ansible 'openstack_cinder_volume_ceph.yml --extra-vars "host=controller02" -vvvv'

Copy ceph.client.cinder.keyring from the ceph-admin node to /etc/ceph/ceph.client.cinder.keyring on the cinder-volume nodes and nova-compute nodes so that they can use the Ceph client.

ceph auth get-or-create client.cinder | ssh ubuntu@controller01 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh ubuntu@controller02 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh ubuntu@compute01 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh ubuntu@compute02 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh ubuntu@compute03 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh ubuntu@compute04 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh ubuntu@compute05 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh ubuntu@compute06 sudo tee /etc/ceph/ceph.client.cinder.keyring
Restart the volume service dependencies so the Ceph backend takes effect
python restart_cindervol_deps.py ubuntu@controller01 ubuntu@controller02
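
With the keyrings in place and the dependencies restarted, cinder-volume should report as up (run from a node with the OpenStack admin credentials loaded):

cinder service-list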
Nova Controller
playback --ansible 'openstack_compute_controller.yml --extra-vars "host=controller01" -vvvv'
playback --ansible 'openstack_compute_controller.yml --extra-vars "host=controller02" -vvvv'
Add Dashboard
playback --ansible 'openstack_horizon.yml -vvvv'
Nova Computes
playback --ansible 'openstack_compute_node.yml --extra-vars "host=compute01 my_ip=192.169.151.16" -vvvv'
playback --ansible 'openstack_compute_node.yml --extra-vars "host=compute02 my_ip=192.169.151.22" -vvvv'
playback --ansible 'openstack_compute_node.yml --extra-vars "host=compute03 my_ip=192.169.151.18" -vvvv'
playback --ansible 'openstack_compute_node.yml --extra-vars "host=compute04 my_ip=192.169.151.25" -vvvv'
playback --ansible 'openstack_compute_node.yml --extra-vars "host=compute05 my_ip=192.169.151.12" -vvvv'
playback --ansible 'openstack_compute_node.yml --extra-vars "host=compute06 my_ip=192.169.151.14" -vvvv'
Install legacy networking nova-network (FlatDHCP only)
playback --ansible 'openstack_nova_network_compute.yml --extra-vars "host=compute01 my_ip=192.169.151.16" -vvvv'
playback --ansible 'openstack_nova_network_compute.yml --extra-vars "host=compute02 my_ip=192.169.151.22" -vvvv'
playback --ansible 'openstack_nova_network_compute.yml --extra-vars "host=compute03 my_ip=192.169.151.18" -vvvv'
playback --ansible 'openstack_nova_network_compute.yml --extra-vars "host=compute04 my_ip=192.169.151.25" -vvvv'
playback --ansible 'openstack_nova_network_compute.yml --extra-vars "host=compute05 my_ip=192.169.151.12" -vvvv'
playback --ansible 'openstack_nova_network_compute.yml --extra-vars "host=compute06 my_ip=192.169.151.14" -vvvv'

Create the initial network. For example, use an exclusive slice of 172.16.0.0/16 with the IP address range 172.16.0.1 to 172.16.255.254:

nova network-create ext-net --bridge br100 --multi-host T --fixed-range-v4 172.16.0.0/16
nova floating-ip-bulk-create --pool ext-net 192.169.151.65/26
nova floating-ip-bulk-list

Extend the ext-net pool:

nova floating-ip-bulk-create --pool ext-net 192.169.151.128/26
nova floating-ip-bulk-create --pool ext-net 192.169.151.192/26
nova floating-ip-bulk-list
Orchestration components (heat)
playback --ansible 'openstack_heat_controller.yml --extra-vars "host=controller01" -vvvv'
playback --ansible 'openstack_heat_controller.yml --extra-vars "host=controller02" -vvvv'
Enable service auto start
python patch-autostart.py
DNS as a Service
playback --ansible openstack_dns.yml

Execute this on controller01:

bash /mnt/designate-keystone-setup
nohup designate-central > /dev/null 2>&1 &
nohup designate-api > /dev/null 2>&1 &
Convert KVM to Docker (OPTIONAL)
playback --novadocker --user ubuntu --hosts compute06

Documentation

There is no documentation for this package.

Directories

Path Synopsis
config
The config package contains the OpenStack configuration.
libs
azure-sdk-for-go/core/http
Package http provides HTTP client and server implementations.
azure-sdk-for-go/core/http/cgi
Package cgi implements CGI (Common Gateway Interface) as specified in RFC 3875.
azure-sdk-for-go/core/http/cookiejar
Package cookiejar implements an in-memory RFC 6265-compliant http.CookieJar.
azure-sdk-for-go/core/http/fcgi
Package fcgi implements the FastCGI protocol.
azure-sdk-for-go/core/http/httptest
Package httptest provides utilities for HTTP testing.
azure-sdk-for-go/core/http/httputil
Package httputil provides HTTP utility functions, complementing the more common ones in the net/http package.
azure-sdk-for-go/core/http/pprof
Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool.
azure-sdk-for-go/core/tls
Package tls partially implements TLS 1.2, as specified in RFC 5246.
azure-sdk-for-go/management
Package management provides the main API client to construct other clients and make requests to the Microsoft Azure Service Management REST API.
azure-sdk-for-go/management/hostedservice
Package hostedservice provides a client for Hosted Services.
azure-sdk-for-go/management/location
Package location provides a client for Locations.
azure-sdk-for-go/management/networksecuritygroup
Package networksecuritygroup provides a client for Network Security Groups.
azure-sdk-for-go/management/osimage
Package osimage provides a client for Operating System Images.
azure-sdk-for-go/management/storageservice
Package storageservice provides a client for Storage Services.
azure-sdk-for-go/management/testutils
Package testutils contains some test utilities for the Azure SDK
azure-sdk-for-go/management/virtualmachine
Package virtualmachine provides a client for Virtual Machines.
azure-sdk-for-go/management/virtualmachinedisk
Package virtualmachinedisk provides a client for Virtual Machine Disks.
azure-sdk-for-go/management/virtualmachineimage
Package virtualmachineimage provides a client for Virtual Machine Images.
azure-sdk-for-go/management/virtualnetwork
Package virtualnetwork provides a client for Virtual Networks.
azure-sdk-for-go/management/vmutils
Package vmutils provides convenience methods for creating Virtual Machine Role configurations.
azure-sdk-for-go/storage
Package storage provides clients for Microsoft Azure Storage Services.
