dataproc

package
v0.18.3
Published: Apr 5, 2019 License: Apache-2.0 Imports: 2 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cluster

type Cluster struct {
	// contains filtered or unexported fields
}

Manages a Cloud Dataproc cluster resource within GCP. For more information see [the official dataproc documentation](https://cloud.google.com/dataproc/).

!> **Warning:** Due to limitations of the API, all arguments except `labels`, `cluster_config.worker_config.num_instances`, and `cluster_config.preemptible_worker_config.num_instances` are non-updatable. Changing any other argument forces recreation of the whole cluster!

func GetCluster

func GetCluster(ctx *pulumi.Context,
	name string, id pulumi.ID, state *ClusterState, opts ...pulumi.ResourceOpt) (*Cluster, error)

GetCluster gets an existing Cluster resource's state with the given name, ID, and optional state properties that are used to uniquely qualify the lookup (nil if not required).
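
As a minimal sketch of a lookup inside a Pulumi program, assuming the `dataproc` import shown elsewhere on this page and an illustrative ID value (the real ID format is provider-assigned and not documented here):

// Look up an existing cluster by name and provider-assigned ID. The nil
// state argument means no extra properties are needed to qualify the lookup.
cluster, err := dataproc.GetCluster(ctx, "existing-cluster",
	pulumi.ID("my-cluster"), nil)
if err != nil {
	return err
}
// Export one of the adopted resource's outputs as a stack output.
ctx.Export("adoptedClusterName", cluster.Name())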

func NewCluster

func NewCluster(ctx *pulumi.Context,
	name string, args *ClusterArgs, opts ...pulumi.ResourceOpt) (*Cluster, error)

NewCluster registers a new resource with the given unique name, arguments, and options.
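
A complete program sketch follows. The import paths and the plain-value arguments are assumptions based on the pre-1.0 SDK signatures shown on this page, not a verified build against a specific release:

package main

import (
	"github.com/pulumi/pulumi-gcp/sdk/go/gcp/dataproc"
	"github.com/pulumi/pulumi/sdk/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Register a small cluster. Because the ClusterArgs fields are
		// interface{} in this SDK version, plain Go values are accepted.
		cluster, err := dataproc.NewCluster(ctx, "simple-cluster", &dataproc.ClusterArgs{
			Region: "us-central1",
			Labels: map[string]string{"env": "dev"},
		})
		if err != nil {
			return err
		}
		// Export the cluster name as a stack output.
		ctx.Export("clusterName", cluster.Name())
		return nil
	})
}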

func (*Cluster) ClusterConfig

func (r *Cluster) ClusterConfig() *pulumi.Output

Allows you to configure various aspects of the cluster. Structure defined below.

func (*Cluster) ID

func (r *Cluster) ID() *pulumi.IDOutput

ID is this resource's unique identifier assigned by its provider.

func (*Cluster) Labels

func (r *Cluster) Labels() *pulumi.MapOutput

The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some itself, including `goog-dataproc-cluster-name`, which is the name of the cluster.

func (*Cluster) Name

func (r *Cluster) Name() *pulumi.StringOutput

The name of the cluster, unique within the project and zone.

func (*Cluster) Project

func (r *Cluster) Project() *pulumi.StringOutput

The ID of the project in which the `cluster` will exist. If it is not provided, the provider project is used.

func (*Cluster) Region

func (r *Cluster) Region() *pulumi.StringOutput

The region in which the cluster and associated nodes will be created. Defaults to `global`.

func (*Cluster) URN

func (r *Cluster) URN() *pulumi.URNOutput

URN is this resource's unique name assigned by Pulumi.

type ClusterArgs

type ClusterArgs struct {
	// Allows you to configure various aspects of the cluster.
	// Structure defined below.
	ClusterConfig interface{}
	// The list of labels (key/value pairs) to be applied to
	// instances in the cluster. GCP generates some itself, including
	// `goog-dataproc-cluster-name`, which is the name of the cluster.
	Labels interface{}
	// The name of the cluster, unique within the project and
	// zone.
	Name interface{}
	// The ID of the project in which the `cluster` will exist. If it
	// is not provided, the provider project is used.
	Project interface{}
	// The region in which the cluster and associated nodes will be created.
	// Defaults to `global`.
	Region interface{}
}

The set of arguments for constructing a Cluster resource.
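
Because every field is typed as `interface{}`, nested configuration can be passed as maps (or outputs). The nested key names below mirror the provider's `cluster_config` block and are assumptions, since that structure is not reproduced on this page:

// A hypothetical ClusterArgs literal with a nested config map. Per the
// warning above, only labels and the two num_instances fields can change
// without recreating the cluster.
args := &dataproc.ClusterArgs{
	Name:   "accounting-cluster",
	Region: "us-central1",
	ClusterConfig: map[string]interface{}{
		"workerConfig": map[string]interface{}{
			"numInstances": 2, // one of the few updatable fields
		},
	},
}
cluster, err := dataproc.NewCluster(ctx, "accounting-cluster", args)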

type ClusterState

type ClusterState struct {
	// Allows you to configure various aspects of the cluster.
	// Structure defined below.
	ClusterConfig interface{}
	// The list of labels (key/value pairs) to be applied to
	// instances in the cluster. GCP generates some itself, including
	// `goog-dataproc-cluster-name`, which is the name of the cluster.
	Labels interface{}
	// The name of the cluster, unique within the project and
	// zone.
	Name interface{}
	// The ID of the project in which the `cluster` will exist. If it
	// is not provided, the provider project is used.
	Project interface{}
	// The region in which the cluster and associated nodes will be created.
	// Defaults to `global`.
	Region interface{}
}

Input properties used for looking up and filtering Cluster resources.

type Job

type Job struct {
	// contains filtered or unexported fields
}

Manages a job resource within a Dataproc cluster in GCP. For more information see [the official dataproc documentation](https://cloud.google.com/dataproc/).

!> **Note:** This resource does not support 'update'; changing any attribute will cause the resource to be recreated.

func GetJob

func GetJob(ctx *pulumi.Context,
	name string, id pulumi.ID, state *JobState, opts ...pulumi.ResourceOpt) (*Job, error)

GetJob gets an existing Job resource's state with the given name, ID, and optional state properties that are used to uniquely qualify the lookup (nil if not required).
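
The lookup mirrors `GetCluster`; a sketch, again with an illustrative ID value:

// Adopt an existing job into the program; the nil state argument means
// no extra lookup filters are applied.
job, err := dataproc.GetJob(ctx, "existing-job", pulumi.ID("my-job-id"), nil)
if err != nil {
	return err
}
// Export the driver's stdout location from the adopted job.
ctx.Export("driverOutput", job.DriverOutputResourceUri())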

func NewJob

func NewJob(ctx *pulumi.Context,
	name string, args *JobArgs, opts ...pulumi.ResourceOpt) (*Job, error)

NewJob registers a new resource with the given unique name, arguments, and options.
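
A sketch of submitting a Spark job against the cluster from the `NewCluster` example above. `Placement` and `SparkConfig` are `interface{}` here, so nested maps stand in for the structured blocks; their key names are assumptions modeled on the provider's job resource:

// Submit a Spark job placed on an existing cluster. Outputs such as
// cluster.Name() can be passed wherever interface{} inputs are accepted.
job, err := dataproc.NewJob(ctx, "spark-job", &dataproc.JobArgs{
	Region: cluster.Region(),
	Placement: map[string]interface{}{
		"clusterName": cluster.Name(),
	},
	SparkConfig: map[string]interface{}{
		"mainClass": "org.apache.spark.examples.SparkPi", // assumed key names
		"args":      []string{"1000"},
	},
})
if err != nil {
	return err
}
ctx.Export("jobStatus", job.Status())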

func (*Job) DriverControlsFilesUri

func (r *Job) DriverControlsFilesUri() *pulumi.StringOutput

If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as `driver_output_uri`.

func (*Job) DriverOutputResourceUri

func (r *Job) DriverOutputResourceUri() *pulumi.StringOutput

A URI pointing to the location of the stdout of the job's driver program.

func (*Job) ForceDelete

func (r *Job) ForceDelete() *pulumi.BoolOutput

By default, you can only delete inactive jobs within Dataproc. Setting this to true, and calling destroy, will ensure that the job is first cancelled before issuing the delete.

func (*Job) HadoopConfig

func (r *Job) HadoopConfig() *pulumi.Output

func (*Job) HiveConfig

func (r *Job) HiveConfig() *pulumi.Output

func (*Job) ID

func (r *Job) ID() *pulumi.IDOutput

ID is this resource's unique identifier assigned by its provider.

func (*Job) Labels

func (r *Job) Labels() *pulumi.MapOutput

The list of labels (key/value pairs) to add to the job.

func (*Job) PigConfig

func (r *Job) PigConfig() *pulumi.Output

func (*Job) Placement

func (r *Job) Placement() *pulumi.Output

func (*Job) Project

func (r *Job) Project() *pulumi.StringOutput

The project in which the `cluster` can be found and jobs subsequently run against. If it is not provided, the provider project is used.

func (*Job) PysparkConfig

func (r *Job) PysparkConfig() *pulumi.Output

func (*Job) Reference

func (r *Job) Reference() *pulumi.Output

func (*Job) Region

func (r *Job) Region() *pulumi.StringOutput

The Cloud Dataproc region. This essentially determines which clusters are available for this job to be submitted to. If not specified, defaults to `global`.

func (*Job) Scheduling

func (r *Job) Scheduling() *pulumi.Output

Optional. Job scheduling configuration.

func (*Job) SparkConfig

func (r *Job) SparkConfig() *pulumi.Output

func (*Job) SparksqlConfig

func (r *Job) SparksqlConfig() *pulumi.Output

func (*Job) Status

func (r *Job) Status() *pulumi.Output

func (*Job) URN

func (r *Job) URN() *pulumi.URNOutput

URN is this resource's unique name assigned by Pulumi.

type JobArgs

type JobArgs struct {
	// By default, you can only delete inactive jobs within
	// Dataproc. Setting this to true, and calling destroy, will ensure that the
	// job is first cancelled before issuing the delete.
	ForceDelete  interface{}
	HadoopConfig interface{}
	HiveConfig   interface{}
	// The list of labels (key/value pairs) to add to the job.
	Labels    interface{}
	PigConfig interface{}
	Placement interface{}
	// The project in which the `cluster` can be found and jobs
	// subsequently run against. If it is not provided, the provider project is used.
	Project       interface{}
	PysparkConfig interface{}
	Reference     interface{}
	// The Cloud Dataproc region. This essentially determines which clusters are available
	// for this job to be submitted to. If not specified, defaults to `global`.
	Region interface{}
	// Optional. Job scheduling configuration.
	Scheduling     interface{}
	SparkConfig    interface{}
	SparksqlConfig interface{}
}

The set of arguments for constructing a Job resource.
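
One more illustrative literal, this time opting into cancellation before delete via `ForceDelete`; the `PysparkConfig` key name is an assumption:

// With ForceDelete set, destroy cancels an active job before deleting it.
args := &dataproc.JobArgs{
	Region:      "us-central1",
	ForceDelete: true,
	Placement: map[string]interface{}{
		"clusterName": "my-cluster",
	},
	PysparkConfig: map[string]interface{}{
		"mainPythonFileUri": "gs://my-bucket/jobs/job.py", // assumed key name
	},
}
job, err := dataproc.NewJob(ctx, "pyspark-job", args)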

type JobState

type JobState struct {
	// If present, the location of miscellaneous control files which may be used
	// as part of job setup and handling. If not present, control files may be
	// placed in the same location as `driver_output_uri`.
	DriverControlsFilesUri interface{}
	// A URI pointing to the location of the stdout of the job's driver program.
	DriverOutputResourceUri interface{}
	// By default, you can only delete inactive jobs within
	// Dataproc. Setting this to true, and calling destroy, will ensure that the
	// job is first cancelled before issuing the delete.
	ForceDelete  interface{}
	HadoopConfig interface{}
	HiveConfig   interface{}
	// The list of labels (key/value pairs) to add to the job.
	Labels    interface{}
	PigConfig interface{}
	Placement interface{}
	// The project in which the `cluster` can be found and jobs
	// subsequently run against. If it is not provided, the provider project is used.
	Project       interface{}
	PysparkConfig interface{}
	Reference     interface{}
	// The Cloud Dataproc region. This essentially determines which clusters are available
	// for this job to be submitted to. If not specified, defaults to `global`.
	Region interface{}
	// Optional. Job scheduling configuration.
	Scheduling     interface{}
	SparkConfig    interface{}
	SparksqlConfig interface{}
	Status         interface{}
}

Input properties used for looking up and filtering Job resources.
