package azure

Version: v0.7.4. Published: Dec 3, 2018. License: Apache-2.0.

README

Kubernetes Virtual Kubelet with ACI

Azure Container Instances (ACI) provide a hosted environment for running containers in Azure. When using ACI, there is no need to manage the underlying compute infrastructure, Azure handles this management for you. When running containers in ACI, you are charged by the second for each running container.

The Azure Container Instances provider for the Virtual Kubelet configures an ACI instance as a node in any Kubernetes cluster. When using the Virtual Kubelet ACI provider, pods can be scheduled on an ACI instance as if the ACI instance is a standard Kubernetes node. This configuration allows you to take advantage of both the capabilities of Kubernetes and the management value and cost benefit of ACI.

This document details configuring the Virtual Kubelet ACI provider.

Table of Contents

  * Prerequisites
  * Cluster and Azure Account Setup
  * Quick set-up with the ACI Connector
  * Manual set-up
  * Create an AKS cluster with VNet
  * Deploy Virtual Kubelet
  * Validate the Virtual Kubelet ACI provider
  * Schedule a pod in ACI
  * Workarounds for the ACI Connector
  * Upgrade the ACI Connector
  * Remove the Virtual Kubelet

Prerequisites

This guide assumes that you have a Kubernetes cluster up and running (can be minikube) and that kubectl is already configured to talk to it.

The other prerequisites are:

Install the Azure CLI

Install az by following the instructions for your operating system. See the full installation instructions if yours isn't listed below.

MacOS
brew install azure-cli
Windows

Download and run the Azure CLI Installer (MSI).

Ubuntu 64-bit
  1. Add the azure-cli repo to your sources:
    echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | \
         sudo tee /etc/apt/sources.list.d/azure-cli.list
    
  2. Run the following commands to install the Azure CLI and its dependencies:
    sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 52E16F86FEE04B979B07E28DB02C46DF417A0893
    sudo apt-get install apt-transport-https
    sudo apt-get update && sudo apt-get install azure-cli
    
Install the Kubernetes CLI

Install kubectl by running the following command:

az aks install-cli
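
To confirm the client is available on your PATH, you can check its version:

```cli
kubectl version --client
```
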
Install the Helm CLI

Helm is a tool for installing pre-configured applications on Kubernetes. Install helm using the method for your operating system:

MacOS
brew install kubernetes-helm
Windows
  1. Download the latest Helm release.
  2. Decompress the tar file.
  3. Copy helm.exe to a directory on your PATH.
Linux
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
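
The chart commands later in this guide use Helm 2 flags (--name, --purge), so Tiller must also be running in the cluster before helm install will work. A minimal sketch, assuming the default Tiller setup is acceptable for your test cluster:

```cli
helm init       # installs Tiller into the cluster (Helm 2 only)
helm version    # confirms both the client and Tiller respond
```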

Cluster and Azure Account Setup

Now that we have all the tools, we will set up your Azure account to work with ACI.

Configure your Azure account

First let's identify your Azure subscription and save it for use later on in the quickstart.

  1. Run az login and follow the instructions in the command output to authorize az to use your account

  2. List your Azure subscriptions:

    az account list -o table
    
  3. Copy your subscription ID and save it in an environment variable:

    Bash

    export AZURE_SUBSCRIPTION_ID="<SubscriptionId>"
    

    PowerShell

    $env:AZURE_SUBSCRIPTION_ID = "<SubscriptionId>"
    
  4. Enable ACI in your subscription:

    az provider register -n Microsoft.ContainerInstance
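
Registration can take a few minutes. To confirm it has completed, one way is:

```cli
az provider show -n Microsoft.ContainerInstance --query registrationState -o tsv
# Prints "Registered" once registration has finished.
```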
    

Quick set-up with the ACI Connector

The Azure CLI can be used to install the ACI provider. Azure's implementation of the Virtual Kubelet provider is called the ACI Connector, and that is the name used throughout this section. Note that this installation method has no virtual networking support. It uses Azure Kubernetes Service (AKS) to deploy and install the connector, so it assumes that you have already created an AKS cluster. If you follow this section, you can skip ahead to the "Schedule a pod in ACI" section.

To install the ACI Connector, use the az CLI with the aks command group. Supply the resource group and name of the AKS cluster you created; the connector name can be anything you like. Choose one of the commands below to install the Linux connector, the Windows connector, or both.

Note: Due to a bug in the az CLI, you might need to specify --aci-resource-group explicitly, pointing it at the auto-generated node resource group. To find that group's name, open the resource groups view in the Azure Portal and look for the group whose name starts with MC_ and contains your AKS cluster name, resource group, and location.

  1. Install the Linux ACI Connector

    Bash

    az aks install-connector --resource-group <aks cluster rg> --name <aks cluster name> 
    
  2. Install the Windows ACI Connector

    Bash

    az aks install-connector --resource-group <aks cluster rg> --name <aks cluster name> --os-type windows 
    
  3. Install both the Windows and Linux ACI Connectors

    Bash

    az aks install-connector --resource-group <aks cluster rg> --name <aks cluster name> --os-type both 
    

Now you are ready to deploy a pod to the connector so skip to the "Schedule a pod in ACI" section.
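
Before you skip ahead, you can confirm that the connector registered itself as a node. The node name depends on the connector name and OS type you chose (for example, something like virtual-kubelet-myconnector-linux):

```cli
kubectl get nodes -o wide
```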

Manual set-up

Create a Resource Group for ACI

To use Azure Container Instances, you must provide a resource group. Create one with the az cli using the following command.

export ACI_REGION=eastus
az group create --name aci-group --location "$ACI_REGION"
export AZURE_RG=aci-group
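
If you want to double-check that the group exists before continuing:

```cli
az group show --name "$AZURE_RG" -o table
```
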
Create a service principal

This creates an identity for the Virtual Kubelet ACI provider to use when provisioning resources on your account on behalf of Kubernetes. This step is optional if you are provisioning Virtual Kubelet on AKS.

  1. Create a service principal with RBAC enabled for the quickstart:

    az ad sp create-for-rbac --name virtual-kubelet-quickstart -o table
    
  2. Save the values from the command output in environment variables:

    Bash

    export AZURE_TENANT_ID=<Tenant>
    export AZURE_CLIENT_ID=<AppId>
    export AZURE_CLIENT_SECRET=<Password>
    

    PowerShell

    $env:AZURE_TENANT_ID = "<Tenant>"
    $env:AZURE_CLIENT_ID = "<AppId>"
    $env:AZURE_CLIENT_SECRET = "<Password>"
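
If you prefer not to copy the values by hand, steps 1 and 2 can be combined into a small Bash sketch. This assumes jq is installed; the field names (tenant, appId, password) match the service principal JSON shown later in this guide.

```cli
SP_JSON=$(az ad sp create-for-rbac --name virtual-kubelet-quickstart -o json)
export AZURE_TENANT_ID=$(echo "$SP_JSON" | jq -r .tenant)
export AZURE_CLIENT_ID=$(echo "$SP_JSON" | jq -r .appId)
export AZURE_CLIENT_SECRET=$(echo "$SP_JSON" | jq -r .password)
```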
    

Deployment of the ACI provider in your cluster

Run these commands to deploy the Virtual Kubelet, which connects your Kubernetes cluster to Azure Container Instances.

export VK_RELEASE=virtual-kubelet-latest

Grab the public master URI for your Kubernetes cluster and save the value.

kubectl cluster-info
export MASTER_URI=<public uri>
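
If you would rather capture the URI programmatically than copy it from the output, one option is to read it from your current kubeconfig context:

```cli
export MASTER_URI=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
echo "$MASTER_URI"
```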

If your cluster is an AKS cluster:

RELEASE_NAME=virtual-kubelet
NODE_NAME=virtual-kubelet
CHART_URL=https://github.com/virtual-kubelet/virtual-kubelet/raw/master/charts/$VK_RELEASE.tgz

helm install "$CHART_URL" --name "$RELEASE_NAME" \
  --set provider=azure \
  --set providers.azure.targetAKS=true \
  --set providers.azure.masterUri=$MASTER_URI

For any other type of Kubernetes cluster:

RELEASE_NAME=virtual-kubelet
NODE_NAME=virtual-kubelet
CHART_URL=https://github.com/virtual-kubelet/virtual-kubelet/raw/master/charts/$VK_RELEASE.tgz

helm install "$CHART_URL" --name "$RELEASE_NAME" \
  --set provider=azure \
  --set rbac.install=true \
  --set providers.azure.targetAKS=false \
  --set providers.azure.aciResourceGroup=$AZURE_RG \
  --set providers.azure.aciRegion=$ACI_REGION \
  --set providers.azure.tenantId=$AZURE_TENANT_ID \
  --set providers.azure.subscriptionId=$AZURE_SUBSCRIPTION_ID \
  --set providers.azure.clientId=$AZURE_CLIENT_ID \
  --set providers.azure.clientKey=$AZURE_CLIENT_SECRET \
  --set providers.azure.masterUri=$MASTER_URI

If your cluster has RBAC disabled, set rbac.install=false.

Output:

NAME:   virtual-kubelet
LAST DEPLOYED: Thu Feb 15 13:17:01 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                             TYPE    DATA  AGE
virtual-kubelet-virtual-kubelet  Opaque  3     1s

==> v1beta1/Deployment
NAME                             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
virtual-kubelet-virtual-kubelet  1        1        1           0          1s

==> v1/Pod(related)
NAME                                              READY  STATUS             RESTARTS  AGE
virtual-kubelet-virtual-kubelet-7bcf5dc749-6mvgp  0/1    ContainerCreating  0         1s


NOTES:
The virtual kubelet is getting deployed on your cluster.

To verify that virtual kubelet has started, run:

```cli
  kubectl --namespace=default get pods -l "app=virtual-kubelet-virtual-kubelet"
```

Create an AKS cluster with VNet

Run the following commands to create an AKS cluster with a new Azure virtual network and two subnets: one dedicated to the cluster nodes and one delegated to Azure Container Instances.

Create an Azure virtual network and subnets

First, set the following variables for your VNet range and two subnet ranges within that VNet. The following ranges are recommended for those just trying out the connector with VNet.

Bash

  export VNET_RANGE=10.0.0.0/8  
  export CLUSTER_SUBNET_RANGE=10.240.0.0/16 
  export ACI_SUBNET_RANGE=10.241.0.0/16 
  export VNET_NAME=myAKSVNet 
  export CLUSTER_SUBNET_NAME=myAKSSubnet 
  export ACI_SUBNET_NAME=myACISubnet 
  export AKS_CLUSTER_RG=myresourcegroup 
  export KUBE_DNS_IP=10.0.0.10

Run the following command to create a virtual network within Azure, and a subnet within that VNet. The subnet will be dedicated to the nodes in the AKS cluster.

```cli
az network vnet create \
--resource-group $AKS_CLUSTER_RG \
--name $VNET_NAME \
--address-prefixes $VNET_RANGE \
--subnet-name $CLUSTER_SUBNET_NAME \
--subnet-prefix $CLUSTER_SUBNET_RANGE
```

Create a subnet that will be delegated to ACI resources only. Note that this must be an empty subnet, within the same VNet that you just created.

az network vnet subnet create \
    --resource-group $AKS_CLUSTER_RG \
    --vnet-name $VNET_NAME \
    --name $ACI_SUBNET_NAME \
    --address-prefix $ACI_SUBNET_RANGE
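
To confirm that both subnets now exist in the VNet:

```cli
az network vnet subnet list --resource-group $AKS_CLUSTER_RG --vnet-name $VNET_NAME -o table
```
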
Create a service principal (OPTIONAL)

Create an Azure Active Directory service principal to allow AKS to interact with other Azure resources. You can use a pre-created service principal too.

az ad sp create-for-rbac -n "virtual-kubelet-sp" --skip-assignment

The output should look similar to the following.

{
  "appId": "bef76eb3-d743-4a97-9534-03e9388811fc",
  "displayName": "azure-cli-2018-08-29-22-29-29",
  "name": "http://azure-cli-2018-08-29-22-29-29",
  "password": "1d257915-8714-4ce7-xxxxxxxxxxxxx",
  "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db48"
}

Save the values from the command output in environment variables.

export AZURE_TENANT_ID=<Tenant>
export AZURE_CLIENT_ID=<AppId>
export AZURE_CLIENT_SECRET=<Password>

Pass these values to az aks create using the --service-principal $AZURE_CLIENT_ID and --client-secret $AZURE_CLIENT_SECRET flags.

Integrating Azure VNet Resource

If you want to integrate an already created Azure VNet resource with your AKS cluster, then follow these steps. Grab the virtual network resource id with the following command:

az network vnet show --resource-group $AKS_CLUSTER_RG --name $VNET_NAME --query id -o tsv

Grant the AKS cluster access to the virtual network by creating a role assignment for the service principal, scoped to the VNet.

az role assignment create --assignee $AZURE_CLIENT_ID --scope <vnetId> --role NetworkContributor
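
To avoid pasting the id by hand, you can capture it in a variable and reuse it; a small sketch combining the two commands above:

```cli
VNET_ID=$(az network vnet show --resource-group $AKS_CLUSTER_RG --name $VNET_NAME --query id -o tsv)
az role assignment create --assignee $AZURE_CLIENT_ID --scope "$VNET_ID" --role NetworkContributor
```
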
Create an AKS cluster with a virtual network

Grab the id of the cluster subnet you created earlier with the following command.

az network vnet subnet show --resource-group $AKS_CLUSTER_RG --vnet-name $VNET_NAME --name $CLUSTER_SUBNET_NAME --query id -o tsv

Save the entire output, starting with "/subscriptions/...", in the following environment variable.

export VNET_SUBNET_ID=<subnet-resource>

Use the following command to create an AKS cluster with the virtual network you've already created.

az aks create \
    --resource-group $AKS_CLUSTER_RG \
    --name myAKSCluster \
    --node-count 1 \
    --network-plugin azure \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip $KUBE_DNS_IP \
    --docker-bridge-address 172.17.0.1/16 \
    --vnet-subnet-id $VNET_SUBNET_ID \
    --service-principal $AZURE_CLIENT_ID \
    --client-secret $AZURE_CLIENT_SECRET
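
Once the cluster is provisioned, merge its credentials into your kubeconfig so the kubectl and helm steps below talk to the new cluster (use the same resource group and cluster name as in the create command):

```cli
az aks get-credentials --resource-group $AKS_CLUSTER_RG --name myAKSCluster
```
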
Deploy Virtual Kubelet

Now manually deploy the Virtual Kubelet; the environment variables used below were set earlier. You do need to pass in the subnet you created for ACI, otherwise the container instances will not be able to communicate with the other pods in the cluster subnet.

Grab the public master URI for your Kubernetes cluster and save the value.

kubectl cluster-info
export MASTER_URI=<public uri>

Set the following values for the helm chart.

RELEASE_NAME=virtual-kubelet
NODE_NAME=virtual-kubelet
CHART_URL=https://github.com/virtual-kubelet/virtual-kubelet/raw/master/charts/$VK_RELEASE.tgz

If your cluster is an AKS cluster:

helm install "$CHART_URL" --name "$RELEASE_NAME" \
  --set provider=azure \
  --set providers.azure.targetAKS=true \
  --set providers.azure.vnet.enabled=true \
  --set providers.azure.vnet.subnetName=$ACI_SUBNET_NAME \
  --set providers.azure.vnet.subnetCidr=$ACI_SUBNET_RANGE \
  --set providers.azure.vnet.clusterCidr=$CLUSTER_SUBNET_RANGE \
  --set providers.azure.vnet.kubeDnsIp=$KUBE_DNS_IP \
  --set providers.azure.masterUri=$MASTER_URI

For any other type of cluster:

helm install "$CHART_URL" --name "$RELEASE_NAME" \
  --set provider=azure \
  --set providers.azure.targetAKS=false \
  --set providers.azure.vnet.enabled=true \
  --set providers.azure.vnet.subnetName=$ACI_SUBNET_NAME \
  --set providers.azure.vnet.subnetCidr=$ACI_SUBNET_RANGE \
  --set providers.azure.vnet.kubeDnsIp=$KUBE_DNS_IP \
  --set providers.azure.tenantId=$AZURE_TENANT_ID \
  --set providers.azure.subscriptionId=$AZURE_SUBSCRIPTION_ID \
  --set providers.azure.aciResourceGroup=$AZURE_RG \
  --set providers.azure.aciRegion=$ACI_REGION \
  --set providers.azure.masterUri=$MASTER_URI

Validate the Virtual Kubelet ACI provider

To validate that the Virtual Kubelet has been installed, return a list of Kubernetes nodes using the kubectl get nodes command. You should see a node that matches the name given to the ACI connector.

kubectl get nodes

Output:

NAME                                        STATUS    ROLES     AGE       VERSION
virtual-kubelet-virtual-kubelet             Ready     <none>    2m        v1.8.3
aks-nodepool1-39289454-0                    Ready     agent     22h       v1.7.7
aks-nodepool1-39289454-1                    Ready     agent     22h       v1.7.7
aks-nodepool1-39289454-2                    Ready     agent     22h       v1.7.7

Schedule a pod in ACI

Create a file named virtual-kubelet-test.yaml and copy in the following YAML.

apiVersion: v1
kind: Pod
metadata:
  name: helloworld
spec:
  containers:
  - image: microsoft/aci-helloworld
    imagePullPolicy: Always
    name: helloworld
    resources:
      requests:
        memory: 1G
        cpu: 1
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
  dnsPolicy: ClusterFirst
  nodeSelector:
    kubernetes.io/role: agent
    beta.kubernetes.io/os: linux
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
  - key: azure.com/aci
    effect: NoSchedule

Note that Virtual Kubelet nodes are tainted by default to keep unexpected pods (for example, kube-proxy or other virtual-kubelet pods) from running on them. To schedule a pod onto them, you need to add a toleration and a node selector to the pod spec:

  nodeSelector:
    kubernetes.io/role: agent
    beta.kubernetes.io/os: linux
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
  - key: azure.com/aci
    effect: NoSchedule

The nodeSelector is what forces the pods onto the Virtual Kubelet node:

  nodeSelector:
    kubernetes.io/role: agent
    beta.kubernetes.io/os: linux
    type: virtual-kubelet

Run the application with the kubectl create command.

kubectl create -f virtual-kubelet-test.yaml

Use the kubectl get pods command with the -o wide argument to output a list of pods with the scheduled node.

kubectl get pods -o wide

Notice that the helloworld pod is running on the virtual-kubelet node.

NAME                                            READY     STATUS    RESTARTS   AGE       IP             NODE
aci-helloworld-2559879000-8vmjw                 1/1       Running   0          39s       52.179.3.180   virtual-kubelet

If the AKS cluster was configured with a virtual network, the output will look like the following. The container instance gets a private IP address rather than a public one.

NAME                            READY     STATUS    RESTARTS   AGE       IP           NODE
aci-helloworld-9b55975f-bnmfl   1/1       Running   0          4m        10.241.0.4   virtual-kubelet

To validate that the container is running in an Azure Container Instance, use the az container list Azure CLI command.

az container list -o table

Output:

Name                             ResourceGroup    ProvisioningState    Image                     IP:ports         CPU/Memory       OsType    Location
-------------------------------  ---------------  -------------------  ------------------------  ---------------  ---------------  --------  ----------
helloworld-2559879000-8vmjw  myResourceGroup    Succeeded            microsoft/aci-helloworld  52.179.3.180:80  1.0 core/1.5 gb  Linux     eastus
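
Since this (non-VNet) container group exposes port 80 on a public IP, you can also reach the application directly with the IP shown in the output above:

```cli
curl http://52.179.3.180
```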

Workarounds for the ACI Connector

If a pod scheduled onto the Virtual Kubelet node is stuck in a Pending state, apply the following workarounds to your Virtual Kubelet deployment.

First, grab the logs from your ACI Connector pod with the following command (the pod name will differ in your cluster).

kubectl logs virtual-kubelet-virtual-kubelet-7bcf5dc749-6mvgp 
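
If you would rather not look up the generated pod name, you can select the pod by the same label used in the verification step earlier (this assumes the default Helm release name used in this guide):

```cli
kubectl logs -l "app=virtual-kubelet-virtual-kubelet" --tail=50
```
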
Stream or pod watcher errors

If you see the following errors in the logs:

ERROR: logging before flag.Parse: E0914 00:02:01.546132       1 streamwatcher.go:109] Unable to decode an event from the watch stream: stream error: stream ID 181; INTERNAL_ERROR
time="2018-09-14T00:02:01Z" level=error msg="Pod watcher connection is closed unexpectedly" namespace= node=virtual-kubelet-myconnector-linux operatingSystem=Linux provider=azure

Then copy the Kubernetes master URI from the output of kubectl cluster-info.

kubectl cluster-info

Output:

Kubernetes master is running at https://aksxxxx-xxxxx-xxxx-xxxxxxx.hcp.uksouth.azmk8s.io:443

Edit your aci-connector deployment by first getting the deployment name.

kubectl get deploy 

Output:

NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
virtual-kubelet-virtual-kubelet 1         1         1            1           5d
aci-helloworld                  1         1         1            0           12m

Edit the deployment.

kubectl edit deploy virtual-kubelet-virtual-kubelet 

Add the following name and value to the env section of the deployment, using the AKS master URI you copied.

- name: MASTER_URI
  value: https://aksxxxx-xxxxx-xxxx-xxxxxxx.hcp.uksouth.azmk8s.io:443
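
Alternatively, if you prefer not to edit the deployment by hand, kubectl set env can add the same variable (a sketch using the example URI above; substitute your own master URI):

```cli
kubectl set env deployment/virtual-kubelet-virtual-kubelet MASTER_URI=https://aksxxxx-xxxxx-xxxx-xxxxxxx.hcp.uksouth.azmk8s.io:443
```
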
Taint deprecation errors

If you see the following errors in the logs:

Flag --taint has been deprecated, Taint key should now be configured using the VK_TAINT_KEY environment variable

Then edit your aci-connector deployment by first grabbing the deployment name.

kubectl get deploy 

Output:

NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
virtual-kubelet-virtual-kubelet 1         1         1            1           5d
aci-helloworld                  1         1         1            0           12m

Edit the connector deployment.

kubectl edit deploy virtual-kubelet-virtual-kubelet 

Add the following environment variable to the deployment.

- name: VK_TAINT_KEY
  value: azure.com/aci

Also, delete the following arguments from the container args in the pod spec:

- --taint
- azure.com/aci
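
As above, the environment variable can also be added with kubectl set env (removing the --taint argument still requires editing the deployment):

```cli
kubectl set env deployment/virtual-kubelet-virtual-kubelet VK_TAINT_KEY=azure.com/aci
```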

Upgrade the ACI Connector

If you installed Virtual Kubelet with the Azure CLI (that is, you are using the ACI Connector implementation), you can also upgrade the connector to the latest release. Run the following command to upgrade your ACI Connector.

az aks upgrade-connector --resource-group <aks cluster rg> --name <aks cluster name> --connector-name virtual-kubelet --os-type linux

Remove the Virtual Kubelet

You can remove your Virtual Kubelet node by deleting the Helm deployment. Run the following command:

helm delete virtual-kubelet --purge

If you used the ACI Connector installation, use the following command to remove the ACI Connector from your cluster.

az aks remove-connector --resource-group <aks cluster rg> --name <aks cluster name> --connector-name virtual-kubelet --os-type linux

Documentation


Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AADMock

type AADMock struct {
	OnAcquireToken func(http.ResponseWriter, *http.Request)
	// contains filtered or unexported fields
}

AADMock implements an AAD mock server.

func NewAADMock

func NewAADMock() *AADMock

NewAADMock creates a new AAD server mocker.

func (*AADMock) Close

func (mock *AADMock) Close()

Close terminates the AAD server mocker.

func (*AADMock) GetServerURL

func (mock *AADMock) GetServerURL() string

GetServerURL returns the mock server URL.

type ACIMock

type ACIMock struct {
	OnCreate             func(string, string, string, *aci.ContainerGroup) (int, interface{})
	OnGetContainerGroups func(string, string) (int, interface{})
	OnGetContainerGroup  func(string, string, string) (int, interface{})
	// contains filtered or unexported fields
}

ACIMock implements an Azure Container Instance mock server.

func NewACIMock

func NewACIMock() *ACIMock

NewACIMock creates a new Azure Container Instance mock server.

func (*ACIMock) Close

func (mock *ACIMock) Close()

Close terminates the Azure Container Instance mock server.

func (*ACIMock) GetServerURL

func (mock *ACIMock) GetServerURL() string

GetServerURL returns the mock server URL.

type ACIProvider

type ACIProvider struct {
	// contains filtered or unexported fields
}

ACIProvider implements the virtual-kubelet provider interface and communicates with Azure's ACI APIs.

func NewACIProvider

func NewACIProvider(config string, rm *manager.ResourceManager, nodeName, operatingSystem string, internalIP string, daemonEndpointPort int32) (*ACIProvider, error)

NewACIProvider creates a new ACIProvider.

func (*ACIProvider) Capacity

func (p *ACIProvider) Capacity(ctx context.Context) v1.ResourceList

Capacity returns a resource list containing the capacity limits set for ACI.

func (*ACIProvider) CreatePod

func (p *ACIProvider) CreatePod(ctx context.Context, pod *v1.Pod) error

CreatePod accepts a Pod definition and creates an ACI deployment

func (*ACIProvider) DeletePod

func (p *ACIProvider) DeletePod(ctx context.Context, pod *v1.Pod) error

DeletePod deletes the specified pod out of ACI.

func (*ACIProvider) ExecInContainer added in v0.4.1

func (p *ACIProvider) ExecInContainer(name string, uid types.UID, container string, cmd []string, in io.Reader, out, errstream io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error

ExecInContainer executes a command in a container in the pod, copying data between in/out/err and the container's stdin/stdout/stderr.

func (*ACIProvider) GetContainerLogs

func (p *ACIProvider) GetContainerLogs(ctx context.Context, namespace, podName, containerName string, tail int) (string, error)

GetContainerLogs returns the logs of a pod by name that is running inside ACI.

func (*ACIProvider) GetPod

func (p *ACIProvider) GetPod(ctx context.Context, namespace, name string) (*v1.Pod, error)

GetPod returns a pod by name that is running inside ACI. It returns nil if a pod by that name is not found.

func (*ACIProvider) GetPodFullName added in v0.4.1

func (p *ACIProvider) GetPodFullName(namespace string, pod string) string

GetPodFullName returns the full pod name as defined in the provider context.

func (*ACIProvider) GetPodStatus

func (p *ACIProvider) GetPodStatus(ctx context.Context, namespace, name string) (*v1.PodStatus, error)

GetPodStatus returns the status of a pod by name that is running inside ACI. It returns nil if a pod by that name is not found.

func (*ACIProvider) GetPods

func (p *ACIProvider) GetPods(ctx context.Context) ([]*v1.Pod, error)

GetPods returns a list of all pods known to be running within ACI.

func (*ACIProvider) GetStatsSummary added in v0.5.2

func (p *ACIProvider) GetStatsSummary(ctx context.Context) (summary *stats.Summary, err error)

GetStatsSummary returns the stats summary for pods running on ACI

func (*ACIProvider) NodeAddresses

func (p *ACIProvider) NodeAddresses(ctx context.Context) []v1.NodeAddress

NodeAddresses returns a list of addresses for the node status within Kubernetes.

func (*ACIProvider) NodeConditions

func (p *ACIProvider) NodeConditions(ctx context.Context) []v1.NodeCondition

NodeConditions returns a list of conditions (Ready, OutOfDisk, etc), for updates to the node status within Kubernetes.

func (*ACIProvider) NodeDaemonEndpoints

func (p *ACIProvider) NodeDaemonEndpoints(ctx context.Context) *v1.NodeDaemonEndpoints

NodeDaemonEndpoints returns NodeDaemonEndpoints for the node status within Kubernetes.

func (*ACIProvider) OperatingSystem

func (p *ACIProvider) OperatingSystem() string

OperatingSystem returns the operating system that was provided by the config.

func (*ACIProvider) UpdatePod

func (p *ACIProvider) UpdatePod(ctx context.Context, pod *v1.Pod) error

UpdatePod is a noop, ACI currently does not support live updates of a pod.

type AcsCredential

type AcsCredential struct {
	Cloud             string `json:"cloud"`
	TenantID          string `json:"tenantId"`
	SubscriptionID    string `json:"subscriptionId"`
	ClientID          string `json:"aadClientId"`
	ClientSecret      string `json:"aadClientSecret"`
	ResourceGroup     string `json:"resourceGroup"`
	Region            string `json:"location"`
	VNetName          string `json:"vnetName"`
	VNetResourceGroup string `json:"vnetResourceGroup"`
}

AcsCredential represents the credential file for ACS

func NewAcsCredential

func NewAcsCredential(p string) (*AcsCredential, error)

NewAcsCredential returns an AcsCredential struct from file path

type AuthConfig

type AuthConfig struct {
	Username      string `json:"username,omitempty"`
	Password      string `json:"password,omitempty"`
	Auth          string `json:"auth,omitempty"`
	Email         string `json:"email,omitempty"`
	ServerAddress string `json:"serveraddress,omitempty"`
	IdentityToken string `json:"identitytoken,omitempty"`
	RegistryToken string `json:"registrytoken,omitempty"`
}

AuthConfig is the secret returned from an ImageRegistryCredential

Directories

Path              Synopsis
                  Package azure and subpackages are used to perform operations using the Azure Resource Manager (ARM).
aci               Package aci provides tools for interacting with the Azure Container Instances API.
api               Package api contains the common code shared by all Azure API libraries.
resourcegroups    Package resourcegroups provides tools for interacting with the Azure Resource Manager resource groups API.
