Libvirt actuator

This command allows you to interact directly with the libvirt actuator.

How to prepare environment

The following steps set up an environment for running the libvirt actuator with Fedora 28 Cloud images.

  1. Provision an instance in packet.net:

    $ ./hack/packet-provision.sh install
    

    and get the IP address (assuming it is 147.75.96.139).

  2. Update ~/.ssh/config to contain:

    Host libvirtactuator 147.75.96.139
    Hostname 147.75.96.139
    User root
    StrictHostKeyChecking no
    PasswordAuthentication no
    UserKnownHostsFile ~/.ssh/aws_known_hosts
    IdentityFile /tmp/packet_id_rsa
    IdentitiesOnly yes
    Compression yes
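
    You can sanity-check the new entry before proceeding, e.g.:

    $ ssh libvirtactuator hostname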
    
  3. Run the init script to upload the private SSH key for accessing created guests, to create the default volume pool, and to install genisoimage:

    $ bash cmd/libvirt-actuator/resources/init.sh
    

How to prepare machine resources

By default, the provided user data (in the user-data.sh file) contains a shell script that deploys a Kubernetes master node. Feel free to modify the file to your needs.

The libvirt actuator expects the user data to be provided through a Kubernetes secret referenced by the spec.providerSpec.value.cloudInit.userDataSecret field. See userdata.yml for an example.

At the same time, spec.providerSpec.value.uri needs to be set to a libvirt URI, e.g. qemu+ssh://root@147.75.96.139/system when the libvirt instance is accessible over SSH at 147.75.96.139.
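
For orientation, here is a minimal sketch of how the two pieces fit together. The secret name and its data key are assumptions; examples/userdata.yml and examples/machine-with-userdata.yml in the repository are the authoritative references:

    # Hypothetical sketch -- see examples/userdata.yml for the real schema.
    apiVersion: v1
    kind: Secret
    metadata:
      name: worker-user-data            # name is an assumption
    type: Opaque
    data:
      userData: <base64-encoded contents of user-data.sh>  # data key is an assumption
    ---
    # Fragment of a machine manifest wiring the secret and the libvirt URI:
    spec:
      providerSpec:
        value:
          uri: qemu+ssh://root@147.75.96.139/system
          cloudInit:
            userDataSecret: worker-user-data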

To build the libvirt-actuator binary:

You'll need libvirt-dev[el] installed on the system where you build and run the binary, e.g. apt-get -y install libvirt-dev or yum -y install libvirt-devel.

CGO_ENABLED=1 go build -o bin/libvirt-actuator -a github.com/openshift/cluster-api-provider-libvirt/cmd/libvirt-actuator
Create a libvirt instance based on a machine manifest with user data
$ ./bin/libvirt-actuator --logtostderr create -m examples/machine-with-userdata.yml -c examples/cluster.yaml -u examples/userdata.yml

Once the libvirt instance is created, you can SSH into it. First, run domifaddr to get the guest IP address:

$ virsh -c qemu+ssh://root@147.75.96.139/system domifaddr worker-example

Then SSH in using guest.pem (from within the libvirt instance).
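
For example, assuming init.sh left guest.pem in root's home directory on the libvirt host:

$ ssh libvirtactuator
# ssh -i guest.pem fedora@<guest IP from domifaddr>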

In the meantime, you can check /root/user-data.logs to follow the progress of the Kubernetes master node deployment:

$ watch -n 1 sudo tail -20 /root/user-data.logs

Once the deployment is done, you can list the master node:

$ sudo kubectl get nodes
NAME                   STATUS    ROLES     AGE       VERSION
fedora.example.local   Ready     master    17m       v1.11.3
How to run kubectl get nodes from your laptop
  1. Get the private IP address of the master node (e.g. 192.168.122.51), as shown below:
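
    The domifaddr command shown earlier reports it:

    $ virsh -c qemu+ssh://root@147.75.96.139/system domifaddr worker-example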

  2. Tunnel to the master guest (port 8443 for querying apiserver):

    $ ssh -L 8443:192.168.122.51:8443 libvirtactuator
    

    If the command complains that port 8443 is already in use, check which process is using the port (lsof -ti:8443) and terminate it.
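
    For example:

    $ kill $(lsof -ti:8443)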

  3. Pull the kubeconfig from the master guest node, either by logging into the running master guest or by:

    1. Tunneling to the master guest:
    $ sudo ssh -N -L 22:192.168.122.51:22 -i /tmp/packet_id_rsa root@147.75.96.139
    
    2. Pulling the kubeconfig through the tunnel:
    $ ssh -i cmd/libvirt-actuator/resources/guest.pem fedora@127.0.0.1 'sudo cat /etc/kubernetes/admin.conf' > kubeconfig
    

    Once pulled, you can terminate the tunnel.

  4. Modify the kubeconfig:

    $ export KUBECONFIG=$PWD/kubeconfig
    $ kubectl config set-cluster kubernetes --server=https://127.0.0.1:8443
    
  5. List nodes:

    $ kubectl get nodes
    NAME             STATUS    ROLES     AGE       VERSION
    192.168.122.51   Ready     master    14m       v1.11.3
    
Test whether a libvirt instance exists based on a machine manifest
$ ./bin/libvirt-actuator --logtostderr exists -m examples/machine.yaml -c examples/cluster.yaml
Delete a libvirt instance based on a machine manifest
$ ./bin/libvirt-actuator --logtostderr delete -m examples/machine.yaml -c examples/cluster.yaml

How to bootstrap the cluster API stack in packet.net

Since all libvirt guests have only private addresses, the bootstrap command needs to be run from within the libvirt instance:

  1. Copy the private keys and the libvirt-actuator binary to the libvirt instance:

    $ scp /tmp/packet_id_rsa libvirtactuator:/libvirt.pem
    $ scp cmd/libvirt-actuator/resources/guest.pem libvirtactuator:/guest.pem
    $ scp ./bin/libvirt-actuator libvirtactuator:/.
    
  2. Run the bootstrap command:

    $ ssh libvirtactuator "/libvirt-actuator bootstrap --logtostderr --libvirt-uri qemu:///system --in-cluster-libvirt-uri 'qemu+ssh://root@147.75.91.169/system?no_verify=1&keyfile=/root/.ssh/actuator.pem/privatekey' --libvirt-private-key=/libvirt.pem --master-guest-private-key /guest.pem"
    

    Expected output:

    I1125 20:48:13.848962    2220 main.go:208] Creating secret with the libvirt PK from /libvirt.pem
    I1125 20:48:13.849674    2220 main.go:250] Creating master machine
    I1125 20:48:13.850026    2220 actuator.go:87] Creating machine "root-master-machine-6d6eca" for cluster "root".
    I1125 20:48:13.851041    2220 domain.go:430] Created libvirt connection: 0xc00000e908
    I1125 20:48:13.851070    2220 volume.go:108] Create a libvirt volume with name root-master-machine-6d6eca for pool default from the base volume /var/lib/libvirt/images/fedora_base
    I1125 20:48:13.966248    2220 volume.go:196] Volume ID: /var/lib/libvirt/images/root-master-machine-6d6eca
    I1125 20:48:13.966273    2220 domain.go:475] Create resource libvirt_domain
    I1125 20:48:13.974496    2220 domain.go:162] Capabilities of host
     {XMLName:{Space: Local:capabilities} Host:{UUID:00000000-0000-0000-0000-0cc47a86250e CPU:0xc0003919a0 PowerManagement:0xc00041d280 IOMMU:<nil> MigrationFeatures:0xc000420000 NUMA:0xc00000ea18 Cache:0xc000432540 MemoryBandwidth:<nil> SecModel:[{Name:none DOI:0 Labels:[]} {Name:dac DOI:0 Labels:[{Type:kvm Value:+107:+107} {Type:qemu Value:+107:+107}]}]} Guests:[{OSType:hvm Arch:{Name:i686 WordSize:32 Emulator:/usr/libexec/qemu-kvm Loader: Machines:[{Name:pc-i440fx-rhel7.5.0 MaxCPUs:240 Canonical:} {Name:pc MaxCPUs:240 Canonical:pc-i440fx-rhel7.5.0} {Name:pc-i440fx-rhel7.0.0 MaxCPUs:240 Canonical:} {Name:rhel6.3.0 MaxCPUs:240 Canonical:} {Name:rhel6.4.0 MaxCPUs:240 Canonical:} {Name:rhel6.0.0 MaxCPUs:240 Canonical:} {Name:pc-i440fx-rhel7.1.0 MaxCPUs:240 Canonical:} {Name:pc-i440fx-rhel7.2.0 MaxCPUs:240 Canonical:} {Name:pc-q35-rhel7.3.0 MaxCPUs:255 Canonical:} {Name:rhel6.5.0 MaxCPUs:240 Canonical:} {Name:pc-q35-rhel7.4.0 MaxCPUs:384 Canonical:} {Name:rhel6.6.0 MaxCPUs:240 Canonical:} {Name:rhel6.1.0 MaxCPUs:240 Canonical:} {Name:rhel6.2.0 MaxCPUs:240 Canonical:} {Name:pc-i440fx-rhel7.3.0 MaxCPUs:240 Canonical:} {Name:pc-i440fx-rhel7.4.0 MaxCPUs:240 Canonical:} {Name:pc-q35-rhel7.5.0 MaxCPUs:384 Canonical:} {Name:q35 MaxCPUs:384 Canonical:pc-q35-rhel7.5.0}] Domains:[{Type:qemu Emulator: Machines:[]} {Type:kvm Emulator:/usr/libexec/qemu-kvm Machines:[]}]} Features:0xc00043cf80} {OSType:hvm Arch:{Name:x86_64 WordSize:64 Emulator:/usr/libexec/qemu-kvm Loader: Machines:[{Name:pc-i440fx-rhel7.5.0 MaxCPUs:240 Canonical:} {Name:pc MaxCPUs:240 Canonical:pc-i440fx-rhel7.5.0} {Name:pc-i440fx-rhel7.0.0 MaxCPUs:240 Canonical:} {Name:rhel6.3.0 MaxCPUs:240 Canonical:} {Name:rhel6.4.0 MaxCPUs:240 Canonical:} {Name:rhel6.0.0 MaxCPUs:240 Canonical:} {Name:pc-i440fx-rhel7.1.0 MaxCPUs:240 Canonical:} {Name:pc-i440fx-rhel7.2.0 MaxCPUs:240 Canonical:} {Name:pc-q35-rhel7.3.0 MaxCPUs:255 Canonical:} {Name:rhel6.5.0 MaxCPUs:240 Canonical:} {Name:pc-q35-rhel7.4.0 MaxCPUs:384 Canonical:} {Name:rhel6.6.0 MaxCPUs:240 Canonical:} {Name:rhel6.1.0 MaxCPUs:240 Canonical:} {Name:rhel6.2.0 MaxCPUs:240 Canonical:} {Name:pc-i440fx-rhel7.3.0 MaxCPUs:240 Canonical:} {Name:pc-i440fx-rhel7.4.0 MaxCPUs:240 Canonical:} {Name:pc-q35-rhel7.5.0 MaxCPUs:384 Canonical:} {Name:q35 MaxCPUs:384 Canonical:pc-q35-rhel7.5.0}] Domains:[{Type:qemu Emulator: Machines:[]} {Type:kvm Emulator:/usr/libexec/qemu-kvm Machines:[]}]} Features:0xc00044aa00}]}
    I1125 20:48:13.974554    2220 domain.go:168] Checking for x86_64/hvm against i686/hvm
    I1125 20:48:13.974557    2220 domain.go:168] Checking for x86_64/hvm against x86_64/hvm
    I1125 20:48:13.974559    2220 domain.go:170] Found 18 machines in guest for x86_64/hvm
    I1125 20:48:13.974562    2220 domain.go:178] getCanonicalMachineName
    I1125 20:48:13.974564    2220 domain.go:168] Checking for x86_64/hvm against i686/hvm
    I1125 20:48:13.974566    2220 domain.go:168] Checking for x86_64/hvm against x86_64/hvm
    I1125 20:48:13.974568    2220 domain.go:170] Found 18 machines in guest for x86_64/hvm
    I1125 20:48:13.974571    2220 domain.go:488] setCoreOSIgnition
    I1125 20:48:13.974656    2220 cloudinit.go:70] cloudInitDef: {Name:root-master-machine-6d6eca_cloud-init PoolName:default MetaData:
    instance-id: root-master-machine-6d6eca; local-hostname: root-master-machine-6d6eca
     UserData:
    #cloud-config
    
    # Hostname management
    preserve_hostname: False
    hostname: whatever
    fqdn: whatever.example.local
    
    runcmd:
      # Set the hostname to its IP address so every kubernetes node has unique name
      - hostnamectl set-hostname $(ip route get 1 | cut -d' ' -f7)
      # Run the user data script
      - echo 'IyEvYmluL2Jhc2gKCmNhdCA8PEhFUkVET0MgPiAvcm9vdC91c2VyLWRhdGEuc2gKIyEvYmluL2Jhc2gKCmNhdCA8PEVPRiA+IC9ldGMveXVtLnJlcG9zLmQva3ViZXJuZXRlcy5yZXBvCltrdWJlcm5ldGVzXQpuYW1lPUt1YmVybmV0ZXMKYmFzZXVybD1odHRwczovL3BhY2thZ2VzLmNsb3VkLmdvb2dsZS5jb20veXVtL3JlcG9zL2t1YmVybmV0ZXMtZWw3LXg4Nl82NAplbmFibGVkPTEKZ3BnY2hlY2s9MQpyZXBvX2dwZ2NoZWNrPTEKZ3Bna2V5PWh0dHBzOi8vcGFja2FnZXMuY2xvdWQuZ29vZ2xlLmNvbS95dW0vZG9jL3l1bS1rZXkuZ3BnIGh0dHBzOi8vcGFja2FnZXMuY2xvdWQuZ29vZ2xlLmNvbS95dW0vZG9jL3JwbS1wYWNrYWdlLWtleS5ncGcKZXhjbHVkZT1rdWJlKgpFT0YKc2V0ZW5mb3JjZSAwCnl1bSBpbnN0YWxsIC15IGRvY2tlcgpzeXN0ZW1jdGwgZW5hYmxlIGRvY2tlcgpzeXN0ZW1jdGwgc3RhcnQgZG9ja2VyCnl1bSBpbnN0YWxsIC15IGt1YmVsZXQtMS4xMS4zIGt1YmVhZG0tMS4xMS4zIGt1YmVjdGwtMS4xMS4zIC0tZGlzYWJsZWV4Y2x1ZGVzPWt1YmVybmV0ZXMKCmNhdCA8PEVPRiA+IC9ldGMvZGVmYXVsdC9rdWJlbGV0CktVQkVMRVRfS1VCRUFETV9FWFRSQV9BUkdTPS0tY2dyb3VwLWRyaXZlcj1zeXN0ZW1kCkVPRgoKZWNobyAnMScgPiAvcHJvYy9zeXMvbmV0L2JyaWRnZS9icmlkZ2UtbmYtY2FsbC1pcHRhYmxlcwoKa3ViZWFkbSBpbml0IC0tYXBpc2VydmVyLWJpbmQtcG9ydCA4NDQzIC0tdG9rZW4gMmlxenFtLjg1YnMweDZtaXl4MW5tN2wgLS1hcGlzZXJ2ZXItY2VydC1leHRyYS1zYW5zPTEyNy4wLjAuMSAgLS1wb2QtbmV0d29yay1jaWRyPTE5Mi4xNjguMC4wLzE2IC12IDYKCiMgRW5hYmxlIG5ldHdvcmtpbmcgYnkgZGVmYXVsdC4Ka3ViZWN0bCBhcHBseSAtZiBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vY2xvdWRuYXRpdmVsYWJzL2t1YmUtcm91dGVyL21hc3Rlci9kYWVtb25zZXQva3ViZWFkbS1rdWJlcm91dGVyLnlhbWwgLS1rdWJlY29uZmlnIC9ldGMva3ViZXJuZXRlcy9hZG1pbi5jb25mCgpta2RpciAtcCAvcm9vdC8ua3ViZQpjcCAtaSAvZXRjL2t1YmVybmV0ZXMvYWRtaW4uY29uZiAvcm9vdC8ua3ViZS9jb25maWcKY2hvd24gJChpZCAtdSk6JChpZCAtZykgL3Jvb3QvLmt1YmUvY29uZmlnCkhFUkVET0MKCmJhc2ggL3Jvb3QvdXNlci1kYXRhLnNoID4gL3Jvb3QvdXNlci1kYXRhLmxvZ3MK' | base64 -d | bash
      # Remove cloud-init when finished with it
      - [ yum, -y, remove, cloud-init ]
    
    # Configure where output will go
    output:
      all: ">> /var/log/cloud-init.log"
    
    
    # configure interaction with ssh server
    ssh_svcname: ssh
    ssh_deletekeys: True
    ssh_genkeytypes: ['rsa', 'ecdsa']
    
    # Install public ssh key to the first user-defined user configured
    # in cloud.cfg in the template (which is fedpra for Fedora cloud images)
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCkvgGhhYwEjWjD+ACW8s+DIanHqYJIC7RbgBRrvAqJQuWE87jfTtREHuW+o0qU1eIPPJzebu58VPgy3SscnrN2fKuMT2PAkevmjj4ARQmdsR/BBrmzdibe/Wnd8WEMNX82L+YrkuHoVkgafFkreSZgf/j8glGNl7IQe5gi2XDG1e+BQ+e94dxAExeRlldhQsbFvQJ+qLmDhHE4zdf/d/CqY6PwoIHlrOVLux7/pBV5SGg5eKlGCPi80oEf23LbwHYjkUXzEreBqUrWSwsdp6jIQ9zzADRQJ0+C47K6uwxy1RIe3q6t7f1eJwjmOaYYS2Sc+U1cpPHrWY3OzZJkbIZ3Fva8qVdbqhMW2ASqJ7oGpdwiRp7FTvoKlEktcc6JUK19sZ6dft79PF9nRy8nfz4obKowCZn7aqVBOW41DhaoC5oB9pfBgSPnObGnpkXITWrx/oUQ1zwrPIH150X3XuDdYXfrmDk/k+cQS7hjG328pfJs8oBhqUmyikUxjnXvDX/LQzacwDF3XKCy6Xq98bemFp8lnAG7c3tW8tYpn3Non6M3XaS2W/ece9JRZKOOCaqC52U7sg6nL/Yv11Sg9WSfJtINzNN1cKxZsIaPvorPflwqNlLWH3dPCb4KQry/54HCBvsKm1+s/yud31zk9C/CI5bFV959bLq+6ra6hAMBTw== Libvirt guest key
    
     NetworkConfig:}
    I1125 20:48:13.974663    2220 cloudinit.go:133] Creating new ISO
    I1125 20:48:13.974666    2220 cloudinit.go:165] Creating ISO contents
    I1125 20:48:13.974798    2220 cloudinit.go:184] ISO contents created
    I1125 20:48:13.974813    2220 cloudinit.go:152] About to execute cmd: &{Path:/usr/bin/mkisofs Args:[mkisofs -output /tmp/cloudinit178511077/root-master-machine-6d6eca_cloud-init -volid cidata -joliet -rock /tmp/cloudinit178511077/user-data /tmp/cloudinit178511077/meta-data /tmp/cloudinit178511077/network-config] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>}
    I1125 20:48:13.977148    2220 cloudinit.go:156] ISO created at /tmp/cloudinit178511077/root-master-machine-6d6eca_cloud-init
    I1125 20:48:13.995211    2220 ignition.go:224] 376832 bytes uploaded
    I1125 20:48:13.995760    2220 cloudinit.go:81] key: /var/lib/libvirt/images/root-master-machine-6d6eca_cloud-init
    I1125 20:48:13.995772    2220 domain.go:505] setDisks
    I1125 20:48:13.995780    2220 domain.go:288] LookupStorageVolByKey
    I1125 20:48:13.995848    2220 domain.go:293] diskVolume
    I1125 20:48:13.995920    2220 domain.go:299] DomainDiskSource
    I1125 20:48:13.995927    2220 domain.go:514] setNetworkInterfaces
    I1125 20:48:13.996243    2220 domain.go:368] Networkaddress 192.168.122.0/24
    I1125 20:48:13.996270    2220 domain.go:379] Adding IP/MAC/host=192.168.122.51/92:9d:31:13:27:b4/root-master-machine-6d6eca to default
    I1125 20:48:13.996283    2220 network.go:92] Updating host with XML:
      <host mac="92:9d:31:13:27:b4" name="root-master-machine-6d6eca" ip="192.168.122.51"></host>
    I1125 20:48:14.102369    2220 domain.go:536] Creating libvirt domain at qemu:///system
    I1125 20:48:14.102971    2220 domain.go:543] Creating libvirt domain with XML:
      <domain type="kvm">
          <name>root-master-machine-6d6eca</name>
          <memory unit="MiB">2048</memory>
          <vcpu>2</vcpu>
          <os>
              <type arch="x86_64" machine="pc-i440fx-rhel7.5.0">hvm</type>
          </os>
          <features>
              <pae></pae>
              <acpi></acpi>
              <apic></apic>
          </features>
          <cpu></cpu>
          <devices>
              <emulator>/usr/libexec/qemu-kvm</emulator>
              <disk type="file" device="cdrom">
                  <driver name="qemu" type="raw"></driver>
                  <source file="/var/lib/libvirt/images/root-master-machine-6d6eca_cloud-init"></source>
                  <target dev="hdd" bus="ide"></target>
              </disk>
              <disk type="file" device="disk">
                  <driver name="qemu" type="qcow2"></driver>
                  <source file="/var/lib/libvirt/images/root-master-machine-6d6eca"></source>
                  <target dev="vda" bus="virtio"></target>
              </disk>
              <interface type="network">
                  <mac address="92:9d:31:13:27:b4"></mac>
                  <source network="default"></source>
                  <model type="virtio"></model>
              </interface>
              <console type="pty">
                  <target type="virtio" port="0"></target>
              </console>
              <channel>
                  <target type="virtio" name="org.qemu.guest_agent.0"></target>
              </channel>
              <graphics type="spice" autoport="yes"></graphics>
              <rng model="virtio">
                  <backend model="random"></backend>
              </rng>
          </devices>
      </domain>
    I1125 20:48:14.283535    2220 domain.go:564] Domain ID: ef67ca57-e729-42a3-9f74-4cb491f319d1
    I1125 20:48:14.283558    2220 domain.go:620] Lookup domain by name: "root-master-machine-6d6eca"
    I1125 20:48:14.283637    2220 actuator.go:265] Updating status for root-master-machine-6d6eca
    I1125 20:48:14.283937    2220 actuator.go:316] Machine root-master-machine-6d6eca status has changed:
    object.ProviderStatus:
      a: <nil>
      b: &runtime.RawExtension{Raw:[]uint8{0x7b, 0x22, 0x6b, 0x69, 0x6e, 0x64, 0x22, 0x3a...
    object.Addresses:
      a: []v1.NodeAddress(nil)
      b: []v1.NodeAddress{}
    I1125 20:48:14.283965    2220 domain.go:44] Closing libvirt connection: 0xc00000e908
    I1125 20:48:24.285095    2220 main.go:274] Master machine running at 192.168.122.138
    I1125 20:48:24.285106    2220 main.go:283] Collecting master kubeconfig
    I1125 20:50:54.411129    2220 main.go:310] Waiting for all nodes to come up
    I1125 20:51:19.413533    2220 main.go:316] Creating "test" namespace
    I1125 20:51:19.417442    2220 main.go:321] Creating "test" secret
    I1125 20:51:19.421546    2220 main.go:307] Deploying cluster API stack components
    I1125 20:51:19.421556    2220 main.go:307] Deploying cluster CRD manifest
    I1125 20:51:24.450670    2220 main.go:307] Deploying machine CRD manifest
    I1125 20:51:29.456859    2220 main.go:307] Deploying machineset CRD manifest
    I1125 20:51:34.465142    2220 main.go:307] Deploying machinedeployment CRD manifest
    I1125 20:51:39.479004    2220 main.go:307] Deploying cluster role
    I1125 20:51:39.584024    2220 main.go:307] Deploying controller manager
    I1125 20:51:39.670993    2220 main.go:307] Deploying machine controller
    I1125 20:51:39.760995    2220 main.go:307] Waiting until cluster objects can be listed
    I1125 20:51:44.763339    2220 main.go:307] Cluster API stack deployed
    I1125 20:51:44.763349    2220 main.go:307] Creating "root" cluster
    I1125 20:51:59.776044    2220 main.go:307] Creating "root-worker-machineset-f4186b" machineset
    I1125 20:52:09.783611    2220 main.go:307] Verify machineset's underlying instances is running
    I1125 20:52:09.785800    2220 main.go:307] Waiting for "root-worker-machineset-f4186b-qvsp8" machine
    I1125 20:52:14.788451    2220 main.go:307] Verify machine's underlying instance is running
    

    It takes some time before the worker node is bootstrapped and joins the cluster. After a successful deployment, you can query the cluster nodes (from within the master guest):

    # kubectl get nodes
    NAME              STATUS    ROLES     AGE       VERSION
    192.168.122.121   Ready     compute   32m       v1.11.3
    192.168.122.51    Ready     master    40m       v1.11.3
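
    You can also inspect the cluster API objects themselves (a hedged example, assuming the CRDs deployed above are served under these plural resource names):

    # kubectl get clusters,machinesets,machines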
    
