fastforward

package module
v0.0.10
Published: Jul 21, 2016 License: MIT Imports: 0 Imported by: 0

README

FastForward

FastForward is a DevOps automation platform.

Quick start: provision OpenStack (Mitaka) on Xenial or Trusty

Requirements
  • The OpenStack bare-metal hosts are in a MAAS environment (recommended)
  • All hosts have at least two NICs (external and internal)
  • We assume that you have Ceph installed; the Cinder backend defaults to Ceph, and running instances default to Ceph for their local storage. For more about Ceph, visit http://docs.ceph.com/docs/master/rbd/rbd-openstack/ or see the (optional) Ceph Guide below.
  • For instance resize, the nova user must be able to log in to each compute node via passwordless SSH (including passwordless sudo), and libvirt-bin must be restarted on all compute nodes to enable live migration (see the sketch after this list)
  • The FastForward node is the same as the ceph-deploy node, and it must be able to log in to each OpenStack node with passwordless SSH and passwordless sudo
  • The FastForward node uses the ~/.ssh/id_rsa private key by default to log in to remote servers
  • You need to restart the nova-compute, cinder-volume, and glance-api services to finalize the installation if you have selected Ceph as their backend
  • FastForward supports consistency groups for future use, but the default LVM and Ceph drivers do not support consistency groups yet because the consistency technology is not available at that storage level
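
The resize and live-migration prerequisite above can be satisfied with something like the following sketch, run on each compute node. This is only an illustrative outline under the assumption that the nova user's home directory is /var/lib/nova and that the Ubuntu service name is libvirt-bin; adapt paths and service names to your deployment.

sudo usermod -s /bin/bash nova                                     # give the nova user a login shell
sudo -u nova mkdir -p /var/lib/nova/.ssh
sudo -u nova ssh-keygen -t rsa -N '' -f /var/lib/nova/.ssh/id_rsa  # generate a key for the nova user
# copy /var/lib/nova/.ssh/id_rsa.pub into /var/lib/nova/.ssh/authorized_keys on every other compute node
echo "nova ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/nova # passwordless sudo for nova
sudo service libvirt-bin restart                                   # required for live migration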
Install FastForward

Install FastForward on FASTFORWARD-NODE

pip install fastforward
Prepare environment

Prepare the OpenStack environment. NOTE: do NOT configure eth1 in /etc/network/interfaces (see the interfaces sketch after the command below).

ff --user ubuntu --hosts \
HAPROXY1,\
HAPROXY2,\
CONTROLLER1,\
CONTROLLER2,\
COMPUTE1,\
COMPUTE2,\
COMPUTE3,\
COMPUTE4,\
COMPUTE5,\
COMPUTE6,\
COMPUTE7,\
COMPUTE8,\
COMPUTE9,\
COMPUTE10 \
environment \
prepare-host --public-interface eth1
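
For reference, a minimal /etc/network/interfaces on each host could look like the sketch below. The addresses are placeholders and this is only an assumption about a typical layout; the essential point is that eth1 (the public interface) carries no static configuration here, since prepare-host takes care of it.

# /etc/network/interfaces (sketch) -- eth1 intentionally left unconfigured
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address MANAGEMENT_IP
    netmask 255.255.255.0
    gateway GATEWAY_IP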
MySQL HA

Deploy to CONTROLLER1

ff --user ubuntu --hosts CONTROLLER1 openstack mysql install
ff --user ubuntu --hosts CONTROLLER1 openstack mysql config --wsrep-cluster-address "gcomm://CONTROLLER1,CONTROLLER2" --wsrep-node-name="galera1" --wsrep-node-address="CONTROLLER1"

Deploy to CONTROLLER2

ff --user ubuntu --hosts CONTROLLER2 openstack mysql install
ff --user ubuntu --hosts CONTROLLER2 openstack mysql config --wsrep-cluster-address "gcomm://CONTROLLER1,CONTROLLER2" --wsrep-node-name="galera2" --wsrep-node-address="CONTROLLER2"

Start the cluster

ff --user ubuntu --hosts CONTROLLER1 openstack mysql manage --wsrep-new-cluster
ff --user ubuntu --hosts CONTROLLER2 openstack mysql manage --start
ff --user ubuntu --hosts CONTROLLER1 openstack mysql manage --change-root-password changeme

Show the cluster status

ff --user ubuntu --hosts CONTROLLER1 openstack mysql manage --show-cluster-status --root-db-pass changeme
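
If you prefer to inspect the Galera state directly with the MySQL client (assuming it is installed on the controller), the standard wsrep status variables give the same information; with both controllers joined, wsrep_cluster_size should report 2.

mysql -uroot -pchangeme -e "SHOW STATUS LIKE 'wsrep_cluster_size'; SHOW STATUS LIKE 'wsrep_cluster_status';"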
HAProxy HA

Deploy to HAPROXY1

ff --user ubuntu --hosts HAPROXY1 openstack haproxy install

Deploy to HAPROXY2

ff --user ubuntu --hosts HAPROXY2 openstack haproxy install

Generate the HAProxy configuration and upload it to the target hosts (do not forget to edit the generated configuration before uploading)

ff openstack haproxygen-conf
ff --user ubuntu --hosts HAPROXY1,HAPROXY2 openstack haproxy config --upload-conf haproxy.cfg
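
Before uploading the edited haproxy.cfg, you can optionally validate its syntax with HAProxy's check mode (this assumes the haproxy binary is available on the machine where you edit the file).

haproxy -c -f haproxy.cfg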

Configure Keepalived

ff --user ubuntu --hosts HAPROXY1 openstack haproxy config --configure-keepalived --router_id lb1 --priority 150 --state MASTER --interface eth0 --vip CONTROLLER_VIP
ff --user ubuntu --hosts HAPROXY2 openstack haproxy config --configure-keepalived --router_id lb2 --priority 100 --state SLAVE --interface eth0 --vip CONTROLLER_VIP
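
To confirm keepalived is working, you can check that the VIP has been brought up on the MASTER node (a simple sanity check, assuming standard iproute2 tooling on the host).

ssh ubuntu@HAPROXY1 ip addr show eth0 | grep CONTROLLER_VIP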
RabbitMQ HA

Deploy to CONTROLLER1 and CONTROLLER2

ff --user ubuntu --hosts CONTROLLER1,CONTROLLER2 openstack rabbitmq install --erlang-cookie changemechangeme --rabbit-user openstack --rabbit-pass changeme

Create the cluster (ensure CONTROLLER2 can reach CONTROLLER1 by hostname)

ff --user ubuntu --hosts CONTROLLER2 openstack rabbitmq join-cluster --name rabbit@CONTROLLER1
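
If hostname resolution is not already handled by MAAS or DNS, one way to satisfy the hostname requirement is an /etc/hosts entry on each controller (the addresses below are placeholders), after which rabbitmqctl should show both nodes in the cluster.

echo "CONTROLLER1_MANAGEMENT_IP CONTROLLER1" | sudo tee -a /etc/hosts # on CONTROLLER2
echo "CONTROLLER2_MANAGEMENT_IP CONTROLLER2" | sudo tee -a /etc/hosts # on CONTROLLER1
sudo rabbitmqctl cluster_status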
Keystone HA

Create keystone database

ff --user ubuntu --hosts CONTROLLER1 openstack keystone create-keystone-db --root-db-pass changeme --keystone-db-pass changeme

Install keystone on CONTROLLER1 and CONTROLLER2

ff --user ubuntu --hosts CONTROLLER1 openstack keystone install --admin-token changeme --connection mysql+pymysql://keystone:changeme@CONTROLLER_VIP/keystone --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --populate
ff --user ubuntu --hosts CONTROLLER2 openstack keystone install --admin-token changeme --connection mysql+pymysql://keystone:changeme@CONTROLLER_VIP/keystone --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211

Create the service entity and API endpoints

ff --user ubuntu --hosts CONTROLLER1 openstack keystone create-entity-and-endpoint --os-token changeme --os-url http://CONTROLLER_VIP:35357/v3 --public-endpoint http://CONTROLLER_VIP:5000/v3 --internal-endpoint http://CONTROLLER_VIP:5000/v3 --admin-endpoint http://CONTROLLER_VIP:35357/v3

Create projects, users, and roles

ff --user ubuntu --hosts CONTROLLER1 openstack keystone create-projects-users-roles --os-token changeme --os-url http://CONTROLLER_VIP:35357/v3 --admin-pass changeme --demo-pass changeme

(Optional) You will need to create the OpenStack client environment scripts. admin-openrc.sh:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=changeme
export OS_AUTH_URL=http://CONTROLLER_VIP:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_VERSION=3

demo-openrc.sh

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=changeme
export OS_AUTH_URL=http://CONTROLLER_VIP:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_VERSION=3
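
To verify the scripts and the Keystone deployment, you can source one of them and request a token (this assumes the python-openstackclient package is installed on the node where you run it).

source admin-openrc.sh
openstack token issue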
Glance HA

Create glance database

ff --user ubuntu --hosts CONTROLLER1 openstack glance create-glance-db --root-db-pass changeme --glance-db-pass changeme

Create service credentials

ff --user ubuntu --hosts CONTROLLER1 openstack glance create-service-credentials --os-password changeme --os-auth-url http://CONTROLLER_VIP:35357/v3 --glance-pass changeme --public-endpoint http://CONTROLLER_VIP:9292 --internal-endpoint http://CONTROLLER_VIP:9292 --admin-endpoint http://CONTROLLER_VIP:9292

Install glance on CONTROLLER1 and CONTROLLER2

ff --user ubuntu --hosts CONTROLLER1 openstack glance install --connection mysql+pymysql://glance:GLANCE_PASS@CONTROLLER_VIP/glance --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --glance-pass changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --populate
ff --user ubuntu --hosts CONTROLLER2 openstack glance install --connection mysql+pymysql://glance:GLANCE_PASS@CONTROLLER_VIP/glance --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --glance-pass changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211
Nova HA

Create nova database

ff --user ubuntu --hosts CONTROLLER1 openstack nova create-nova-db --root-db-pass changeme --nova-db-pass changeme

Create service credentials

ff --user ubuntu --hosts CONTROLLER1 openstack nova create-service-credentials --os-password changeme --os-auth-url http://CONTROLLER_VIP:35357/v3 --nova-pass changeme --public-endpoint 'http://CONTROLLER_VIP:8774/v2.1/%\(tenant_id\)s' --internal-endpoint 'http://CONTROLLER_VIP:8774/v2.1/%\(tenant_id\)s' --admin-endpoint 'http://CONTROLLER_VIP:8774/v2.1/%\(tenant_id\)s'

Install nova on CONTROLLER1

ff --user ubuntu --hosts CONTROLLER1 openstack nova install --connection mysql+pymysql://nova:NOVA_PASS@CONTROLLER_VIP/nova --api-connection mysql+pymysql://nova:NOVA_PASS@CONTROLLER_VIP/nova_api --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --nova-pass changeme --my-ip MANAGEMENT_IP --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass changeme --glance-api-servers http://CONTROLLER_VIP:9292 --neutron-endpoint http://CONTROLLER_VIP:9696 --neutron-pass changeme --metadata-proxy-shared-secret changeme --populate

Install nova on CONTROLLER2

ff --user ubuntu --hosts CONTROLLER2 openstack nova install --connection mysql+pymysql://nova:NOVA_PASS@CONTROLLER_VIP/nova --api-connection mysql+pymysql://nova:NOVA_PASS@CONTROLLER_VIP/nova_api --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --nova-pass changeme --my-ip MANAGEMENT_IP --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass changeme --glance-api-servers http://CONTROLLER_VIP:9292 --neutron-endpoint http://CONTROLLER_VIP:9696 --neutron-pass changeme --metadata-proxy-shared-secret changeme
Nova Compute

Add nova compute nodes (use uuidgen to generate the Ceph UUID)

ff --user ubuntu --hosts COMPUTE1 openstack nova-compute install --my-ip MANAGEMENT_IP --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --nova-pass changeme --novncproxy-base-url http://CONTROLLER_VIP:6080/vnc_auto.html --glance-api-servers http://CONTROLLER_VIP:9292 --neutron-endpoint http://CONTROLLER_VIP:9696 --neutron-pass changeme --rbd-secret-uuid changeme-changeme-changeme-changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211
ff --user ubuntu --hosts COMPUTE2 openstack nova-compute install --my-ip MANAGEMENT_IP --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --nova-pass changeme --novncproxy-base-url http://CONTROLLER_VIP:6080/vnc_auto.html --glance-api-servers http://CONTROLLER_VIP:9292 --neutron-endpoint http://CONTROLLER_VIP:9696 --neutron-pass changeme --rbd-secret-uuid changeme-changeme-changeme-changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211

libvirt defaults to using Ceph as shared storage, and the Ceph pool for running instances is vms. If you are not using Ceph as the backend, you must remove the following parameters:

images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = changeme-changeme-changeme-changeme
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
Neutron HA

Create neutron database

ff --user ubuntu --hosts CONTROLLER1 openstack neutron create-neutron-db --root-db-pass changeme --neutron-db-pass changeme

Create service credentials

ff --user ubuntu --hosts CONTROLLER1 openstack neutron create-service-credentials --os-password changeme --os-auth-url http://CONTROLLER_VIP:35357/v3 --neutron-pass changeme --public-endpoint http://CONTROLLER_VIP:9696 --internal-endpoint http://CONTROLLER_VIP:9696 --admin-endpoint http://CONTROLLER_VIP:9696

Install Neutron for self-service

ff --user ubuntu --hosts CONTROLLER1 openstack neutron install --connection mysql+pymysql://neutron:NEUTRON_PASS@CONTROLLER_VIP/neutron --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --neutron-pass changeme --nova-url http://CONTROLLER_VIP:8774/v2.1 --nova-pass changeme --public-interface eth1 --local-ip MANAGEMENT_INTERFACE_IP --nova-metadata-ip CONTROLLER_VIP --metadata-proxy-shared-secret changeme-changeme-changeme-changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --populate
ff --user ubuntu --hosts CONTROLLER2 openstack neutron install --connection mysql+pymysql://neutron:NEUTRON_PASS@CONTROLLER_VIP/neutron --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --neutron-pass changeme --nova-url http://CONTROLLER_VIP:8774/v2.1 --nova-pass changeme --public-interface eth1 --local-ip MANAGEMENT_INTERFACE_IP --nova-metadata-ip CONTROLLER_VIP --metadata-proxy-shared-secret changeme-changeme-changeme-changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211
Neutron Agent

Install neutron agent on compute nodes

ff --user ubuntu --hosts COMPUTE1 openstack neutron-agent install --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --neutron-pass changeme --public-interface eth1 --local-ip MANAGEMENT_INTERFACE_IP --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211
ff --user ubuntu --hosts COMPUTE2 openstack neutron-agent install --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --neutron-pass changeme --public-interface eth1 --local-ip MANAGEMENT_INTERFACE_IP --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211
Horizon HA

Install horizon on controller nodes

ff --user ubuntu --hosts CONTROLLER1,CONTROLLER2 openstack horizon install --openstack-host CONTROLLER_VIP  --memcached-servers CONTROLLER1:11211 --time-zone Asia/Shanghai
Cinder HA

Create cinder database

ff --user ubuntu --hosts CONTROLLER1 openstack cinder create-cinder-db --root-db-pass changeme --cinder-db-pass changeme

Create cinder service credentials

ff --user ubuntu --hosts CONTROLLER1 openstack cinder create-service-credentials --os-password changeme --os-auth-url http://CONTROLLER_VIP:35357/v3 --cinder-pass changeme --public-endpoint-v1 'http://CONTROLLER_VIP:8776/v1/%\(tenant_id\)s' --internal-endpoint-v1 'http://CONTROLLER_VIP:8776/v1/%\(tenant_id\)s' --admin-endpoint-v1 'http://CONTROLLER_VIP:8776/v1/%\(tenant_id\)s' --public-endpoint-v2 'http://CONTROLLER_VIP:8776/v2/%\(tenant_id\)s' --internal-endpoint-v2 'http://CONTROLLER_VIP:8776/v2/%\(tenant_id\)s' --admin-endpoint-v2 'http://CONTROLLER_VIP:8776/v2/%\(tenant_id\)s'

Install cinder-api and cinder-volume on the controller nodes; the volume backend defaults to Ceph (you must have Ceph installed)

ff --user ubuntu --hosts CONTROLLER1 openstack cinder install --connection mysql+pymysql://cinder:CINDER_PASS@CONTROLLER_VIP/cinder --rabbit-user openstack --rabbit-pass changeme --rabbit-hosts CONTROLLER1,CONTROLLER2 --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --cinder-pass changeme --my-ip MANAGEMENT_INTERFACE_IP --glance-api-servers http://CONTROLLER_VIP:9292 --rbd-secret-uuid changeme-changeme-changeme-changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --populate
ff --user ubuntu --hosts CONTROLLER2 openstack cinder install --connection mysql+pymysql://cinder:CINDER_PASS@CONTROLLER_VIP/cinder --rabbit-user openstack --rabbit-pass changeme --rabbit-hosts CONTROLLER1,CONTROLLER2 --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --cinder-pass changeme --my-ip MANAGEMENT_INTERFACE_IP --glance-api-servers http://CONTROLLER_VIP:9292 --rbd-secret-uuid changeme-changeme-changeme-changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211
Swift proxy HA

Create the Identity service credentials

ff --user ubuntu --hosts CONTROLLER1 openstack swift create-service-credentials --os-password changeme --os-auth-url http://CONTROLLER_VIP:35357/v3 --swift-pass changeme --public-endpoint 'http://CONTROLLER_VIP:8080/v1/AUTH_%\(tenant_id\)s' --internal-endpoint 'http://CONTROLLER_VIP:8080/v1/AUTH_%\(tenant_id\)s' --admin-endpoint http://CONTROLLER_VIP:8080/v1

Install swift proxy

ff --user ubuntu --hosts CONTROLLER1,CONTROLLER2 openstack swift install --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --swift-pass changeme --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211
Swift storage

Prepare disks on the storage nodes

ff --user ubuntu --hosts OBJECT1,OBJECT2 openstack swift-storage prepare-disks --name sdb,sdc,sdd,sde

Install swift storage on the storage nodes

ff --user ubuntu --hosts OBJECT1 openstack swift-storage install --address MANAGEMENT_INTERFACE_IP --bind-ip MANAGEMENT_INTERFACE_IP
ff --user ubuntu --hosts OBJECT2 openstack swift-storage install --address MANAGEMENT_INTERFACE_IP --bind-ip MANAGEMENT_INTERFACE_IP

Create account ring on controller node

ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage create-account-builder-file --partitions 10 --replicas 3 --moving 1
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdb --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdc --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdd --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sde --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdb --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdc --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdd --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sde --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage account-builder-rebalance

Create container ring on controller node

ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage create-container-builder-file --partitions 10 --replicas 3 --moving 1
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdb --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdc --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdd --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sde --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdb --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdc --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdd --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sde --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage container-builder-rebalance

Create object ring on controller node

ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage create-object-builder-file --partitions 10 --replicas 3 --moving 1
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdb --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdc --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sdd --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-add --region 1 --zone 1 --ip OBJECT1_MANAGEMENT_IP --device sde --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdb --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdc --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sdd --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-add --region 1 --zone 1 --ip OBJECT2_MANAGEMENT_IP --device sde --weight 100
ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage object-builder-rebalance
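
Before syncing the builder files, you can optionally sanity-check each ring with swift-ring-builder, which prints the devices and partition balance (this assumes the builder files were created in /etc/swift on CONTROLLER1, the usual location).

cd /etc/swift
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder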

Sync the builder files from the controller node to each storage node and to any other proxy nodes

ff --user ubuntu --hosts CONTROLLER1 openstack swift-storage sync-builder-file --to CONTROLLER2,OBJECT1,OBJECT2

Finalize installation on all nodes

ff --user ubuntu --hosts CONTROLLER1,CONTROLLER2,OBJECT1,OBJECT2 openstack swift finalize-install --swift-hash-path-suffix changeme --swift-hash-path-prefix changeme
Ceph Guide

For more information about the Ceph backend, visit:

preflight

Cinder and Glance driver

On Xenial, please use ceph-deploy version 1.5.34

Install ceph-deploy (1.5.34)

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph-deploy
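
Since the Xenial instructions depend on ceph-deploy 1.5.34, it is worth confirming the installed version before continuing.

ceph-deploy --version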

Create ceph cluster directory

mkdir ceph-cluster
cd ceph-cluster

Create cluster and add initial monitor(s) to the ceph.conf

ceph-deploy new CONTROLLER1 CONTROLLER2 COMPUTE1 COMPUTE2 BLOCK1 BLOCK2
echo "osd pool default size = 2" | tee -a ceph.conf

Install the Ceph client. Optionally, you can use --release jewel to install the Jewel release (ceph-deploy 1.5.34 defaults to Jewel), and you can use --repo-url http://your-local-repo.example.org/mirror/download.ceph.com/debian-jewel to specify a local repository.

ceph-deploy install PLAYBACK-NODE CONTROLLER1 CONTROLLER2 COMPUTE1 COMPUTE2 BLOCK1 BLOCK2

Add the initial monitor(s) and gather the keys

ceph-deploy mon create-initial

If you want to add additional monitors, do so with:

ceph-deploy mon add {additional-monitor}

Add ceph osd(s)

ceph-deploy osd create --zap-disk BLOCK1:/dev/sdb
ceph-deploy osd create --zap-disk BLOCK1:/dev/sdc
ceph-deploy osd create --zap-disk BLOCK2:/dev/sdb
ceph-deploy osd create --zap-disk BLOCK2:/dev/sdc
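
At this point you can check the cluster health and the OSD layout; with two OSD hosts and osd pool default size = 2, the cluster should eventually report HEALTH_OK.

ceph -s
ceph osd tree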

Sync admin key

ceph-deploy admin PLAYBACK-NODE CONTROLLER1 CONTROLLER2 COMPUTE1 COMPUTE2 BLOCK1 BLOCK2
sudo chmod +r /etc/ceph/ceph.client.admin.keyring # On all ceph clients node

Create the OSD pools for Cinder volumes, running instances, and Glance images

ceph osd pool create volumes 512
ceph osd pool create vms 512
ceph osd pool create images 512

Setup ceph client authentication

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

Add the keyrings for client.cinder and client.glance to appropriate nodes and change their ownership

ceph auth get-or-create client.cinder | sudo tee /etc/ceph/ceph.client.cinder.keyring # On all cinder-volume nodes
sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring # On all cinder-volume nodes

ceph auth get-or-create client.glance | sudo tee /etc/ceph/ceph.client.glance.keyring # On all glance-api nodes
sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring # On all glance-api nodes

Nodes running nova-compute need the keyring file for the nova-compute process

ceph auth get-or-create client.cinder | sudo tee /etc/ceph/ceph.client.cinder.keyring # On all nova-compute nodes

They also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the nodes running nova-compute

ceph auth get-key client.cinder | tee client.cinder.key # On all nova-compute nodes

Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key (the UUID is the same as your --rbd-secret-uuid option, so save it for later)

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

# The following steps on all nova-compute nodes
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
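
You can confirm the secret was registered in libvirt before moving on (run on each compute node).

sudo virsh secret-list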

(Optional) On every compute node, edit your Ceph configuration file and add the client section

[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

(Optional) On every glance-api node, edit your Ceph configuration file and add the client section

[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring

(Optional) If you want to remove an OSD

sudo stop ceph-mon-all && sudo stop ceph-osd-all # On osd node
ceph osd out {OSD-NUM}
ceph osd crush remove osd.{OSD-NUM}
ceph auth del osd.{OSD-NUM}
ceph osd rm {OSD-NUM}
ceph osd crush remove {HOST}

(Optional) If you want to remove a monitor

ceph mon remove {MON-ID}

Note: you need to restart the nova-compute, cinder-volume, and glance-api services to finalize the installation.
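
On Ubuntu Trusty/Xenial that typically means something like the following; the service names are assumptions and may differ on your systems.

sudo service nova-compute restart  # on all compute nodes
sudo service cinder-volume restart # on all cinder-volume nodes
sudo service glance-api restart    # on all glance-api nodes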

Shared File Systems service

Create manila database and service credentials

ff --user ubuntu --hosts CONTROLLER1 openstack manila create-manila-db --root-db-pass CHANGEME --manila-db-pass CHANGEME
ff --user ubuntu --hosts CONTROLLER1 openstack manila create-service-credentials --os-password CHANGEME --os-auth-url http://CONTROLLER_VIP:35357/v3 --manila-pass CHANGEME --public-endpoint-v1 "http://CONTROLLER_VIP:8786/v1/%\(tenant_id\)s" --internal-endpoint-v1 "http://CONTROLLER_VIP:8786/v1/%\(tenant_id\)s" --admin-endpoint-v1 "http://CONTROLLER_VIP:8786/v1/%\(tenant_id\)s" --public-endpoint-v2 "http://CONTROLLER_VIP:8786/v2/%\(tenant_id\)s" --internal-endpoint-v2 "http://CONTROLLER_VIP:8786/v2/%\(tenant_id\)s" --admin-endpoint-v2 "http://CONTROLLER_VIP:8786/v2/%\(tenant_id\)s"

Install manila on CONTROLLER1 and CONTROLLER2

ff --user ubuntu --hosts CONTROLLER1 openstack manila install --connection mysql+pymysql://manila:CHANGEME@CONTROLLER_VIP/manila --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --manila-pass CHANGEME --my-ip CONTROLLER1 --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass CHANGEME --populate
ff --user ubuntu --hosts CONTROLLER2 openstack manila install --connection mysql+pymysql://manila:CHANGEME@CONTROLLER_VIP/manila --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --manila-pass CHANGEME --my-ip CONTROLLER2 --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass CHANGEME

Install manila share on CONTROLLER1 and CONTROLLER2

ff --user ubuntu --hosts CONTROLLER1 openstack manila-share install --connection mysql+pymysql://manila:CHANGEME@CONTROLLER_VIP/manila --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --manila-pass CHANGEME --my-ip CONTROLLER1 --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass CHANGEME --neutron-endpoint http://CONTROLLER_VIP:9696 --neutron-pass CHANGEME --nova-pass CHANGEME --cinder-pass CHANGEME
ff --user ubuntu --hosts CONTROLLER2 openstack manila-share install --connection mysql+pymysql://manila:CHANGEME@CONTROLLER_VIP/manila --auth-uri http://CONTROLLER_VIP:5000 --auth-url http://CONTROLLER_VIP:35357 --manila-pass CHANGEME --my-ip CONTROLLER2 --memcached-servers CONTROLLER1:11211,CONTROLLER2:11211 --rabbit-hosts CONTROLLER1,CONTROLLER2 --rabbit-user openstack --rabbit-pass CHANGEME --neutron-endpoint http://CONTROLLER_VIP:9696 --neutron-pass CHANGEME --nova-pass CHANGEME --cinder-pass CHANGEME

Create the service image for manila

http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-manila.html

Create shares with share server management support

http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-manila-dhss-true-option2.html

Documentation

Overview

Copyright 2015 nofdev. Licensed under the GPLv2, see LICENCE file for details.

FastForward is a DevOps automation platform.

Project homepage: https://github.com/nofdev/fastforward

For more information please refer to the README file in this directory.

Directories

Path Synopsis
Godeps
_workspace/src/github.com/Azure/go-autorest/autorest
Package autorest implements an HTTP request pipeline suitable for use across multiple go-routines and provides the shared routines relied on by AutoRest (see https://github.com/Azure/autorest/) generated Go code.
_workspace/src/github.com/Azure/go-autorest/autorest/azure
Package azure provides Azure-specific implementations used with AutoRest.
_workspace/src/github.com/Azure/go-autorest/autorest/date
Package date provides time.Time derivatives that conform to the Swagger.io (https://swagger.io/) defined date formats: Date and DateTime.
_workspace/src/github.com/Azure/go-autorest/autorest/mocks
Package mocks provides mocks and helpers used in testing.
_workspace/src/github.com/Azure/go-autorest/autorest/to
Package to provides helpers to ease working with pointer values of marshalled structures.
_workspace/src/github.com/alyu/configparser
Package configparser provides a simple parser for reading/writing configuration (INI) files.
_workspace/src/github.com/davecgh/go-spew/spew
Package spew implements a deep pretty printer for Go data structures to aid in debugging.
_workspace/src/github.com/dgrijalva/jwt-go
Package jwt is a Go implementation of JSON Web Tokens: http://self-issued.info/docs/draft-jones-json-web-token.html See README.md for more info.
A useful example app.
_workspace/src/github.com/gorilla/context
Package context stores values shared during a request lifetime.
_workspace/src/github.com/gorilla/mux
Package gorilla/mux implements a request router and dispatcher.
_workspace/src/github.com/gorilla/rpc/v2
Package gorilla/rpc is a foundation for RPC over HTTP services, providing access to the exported methods of an object through HTTP requests.
_workspace/src/github.com/gorilla/rpc/v2/json
Package gorilla/rpc/json provides a codec for JSON-RPC over HTTP services.
_workspace/src/github.com/gorilla/rpc/v2/protorpc
Package gorilla/rpc/protorpc provides a codec for ProtoRPC over HTTP services.
_workspace/src/github.com/jiasir/playback/config
The config package is that OpenStack configuration.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/core/http
Package http provides HTTP client and server implementations.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/core/http/cgi
Package cgi implements CGI (Common Gateway Interface) as specified in RFC 3875.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/core/http/cookiejar
Package cookiejar implements an in-memory RFC 6265-compliant http.CookieJar.
Package fcgi implements the FastCGI protocol.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/core/http/httptest
Package httptest provides utilities for HTTP testing.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/core/http/httputil
Package httputil provides HTTP utility functions, complementing the more common ones in the net/http package.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/core/http/pprof
Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/core/tls
Package tls partially implements TLS 1.2, as specified in RFC 5246.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management
Package management provides the main API client to construct other clients and make requests to the Microsoft Azure Service Management REST API.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/hostedservice
Package hostedservice provides a client for Hosted Services.
Package location provides a client for Locations.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/networksecuritygroup
Package networksecuritygroup provides a client for Network Security Groups.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/osimage
Package osimage provides a client for Operating System Images.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/storageservice
Package storageservice provides a client for Storage Services.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/testutils
Package testutils contains some test utilities for the Azure SDK
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/virtualmachine
Package virtualmachine provides a client for Virtual Machines.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/virtualmachinedisk
Package virtualmachinedisk provides a client for Virtual Machine Disks.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/virtualmachineimage
Package virtualmachineimage provides a client for Virtual Machine Images.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/virtualnetwork
Package virtualnetwork provides a client for Virtual Networks.
_workspace/src/github.com/jiasir/playback/libs/azure-sdk-for-go/management/vmutils
Package vmutils provides convenience methods for creating Virtual Machine Role configurations.
_workspace/src/github.com/pmezard/go-difflib/difflib
Package difflib is a partial port of Python difflib module.
_workspace/src/github.com/russross/blackfriday
Blackfriday markdown processor.
_workspace/src/github.com/shurcooL/sanitized_anchor_name
Package sanitized_anchor_name provides a func to create sanitized anchor names.
_workspace/src/github.com/spf13/cobra
Package cobra is a commander providing a simple interface to create powerful modern CLI interfaces.
_workspace/src/github.com/spf13/pflag
Package pflag is a drop-in replacement for Go's flag package, implementing POSIX/GNU-style --flags.
_workspace/src/github.com/stretchr/testify/assert
Package assert provides a set of comprehensive testing tools for use with the normal Go testing system.
_workspace/src/github.com/wingedpig/loom
Package loom implements a set of functions to interact with remote servers using SSH.
_workspace/src/golang.org/x/crypto/curve25519
Package curve25519 provides an implementation of scalar multiplication on the elliptic curve known as curve25519.
_workspace/src/golang.org/x/crypto/ssh
Package ssh implements an SSH client and server.
_workspace/src/golang.org/x/crypto/ssh/agent
Package agent implements a client to an ssh-agent daemon.
_workspace/src/golang.org/x/crypto/ssh/terminal
Package terminal provides support functions for dealing with terminals, as commonly found on UNIX systems.
_workspace/src/golang.org/x/crypto/ssh/test
This package contains integration tests for the golang.org/x/crypto/ssh package.
_workspace/src/gopkg.in/check.v1
Package check is a rich testing extension for Go's testing package.
_workspace/src/gopkg.in/yaml.v2
Package yaml implements YAML support for the Go language.
Package base is a library for FastForward.
Package bin is the prebuild binary for FastForward.
Package config holds a FastForward configuration parser.
Package main is the ff command line interface.
Package library are ansible modules for provision OpenStack.
Package monitoring provides an API of JSON-RPC 2.0 for monitoring.
Package orchestration provides an API of JSON-RPC 2.0 for orchestration.
Package provisioning provides an API of JSON-RPC 2.0 Example Request: ./jsonrpctest.py http://YOUR_FF_SERVER:7000/v1 \ Provisioning.Exec \ "{'User': 'ubuntu', \ 'Host': 'YOUR_REMOTE_SERVER', \ 'DisplayOutput': true, \ 'AbortOnError': true, \ 'AptCache': false, \ 'UseSudo': true, \ 'CmdLine': 'echo FastForward'}" Example Response: {u'id': 1, u'result': u'FastForward\n', u'error': None} Query Parameters: User - The username for remote server.
api/rpc/json
Package main is a provisioning api based on JSON-RPC 2.0
api/rpc/json/openstack
Package openstack provides an API of JSON-RPC 2.0 for Playback.
api/rpc/json/openstack/client
Package client provides a playback api client for ff command line.
api/rpc/json/openstack/server
Package main is the JSON-RPC 2.0 API server for playback.
Package ris is a GUI of FastForward that holds a single page application web interface.
Package tools providing utils for FastForward.
