terraform-provider-kafka


A Terraform plugin for managing Apache Kafka.


Installation

terraform-provider-kafka is available on the Terraform Registry. To install it, add the following to your main.tf and run terraform init:

terraform {
  required_providers {
    kafka = {
      source = "Mongey/kafka"
    }
  }
}

provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
  tls_enabled       = true
}

Alternatively, install the provider by downloading and extracting the latest release into your Terraform plugin directory (typically ~/.terraform.d/plugins/).
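
For example, a manual install might look like the following; the version and archive name are illustrative, so check the actual asset names on the releases page:

# Download a release archive (version and platform shown are illustrative)
curl -LO https://github.com/Mongey/terraform-provider-kafka/releases/download/v0.12.1/terraform-provider-kafka_0.12.1_linux_amd64.zip

# Extract into the Terraform plugin directory
mkdir -p ~/.terraform.d/plugins/
unzip terraform-provider-kafka_0.12.1_linux_amd64.zip -d ~/.terraform.d/plugins/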

Developing
  1. Install Go
  2. Clone the repository to $GOPATH/src/github.com/Mongey/terraform-provider-kafka:
    mkdir -p $GOPATH/src/github.com/Mongey/terraform-provider-kafka; cd $GOPATH/src/github.com/Mongey/
    git clone https://github.com/Mongey/terraform-provider-kafka.git
    cd terraform-provider-kafka

  3. Build the provider: make build
  4. Run the tests: make test
  5. Start a TLS-enabled Kafka cluster: docker-compose up
  6. Run the acceptance tests: make testacc

Provider Configuration

Example

Example provider with TLS client authentication.

provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
  tls_enabled       = true
}

Example provider with aws-iam (assume role) client authentication.

provider "kafka" {
  bootstrap_servers = ["localhost:9098"]
  tls_enabled       = true
  sasl_mechanism    = "aws-iam"
  sasl_aws_region   = "us-east-1"
  sasl_aws_role_arn = "arn:aws:iam::account:role/role-name"
}

Example provider with aws-iam (AWS profile) client authentication.

provider "kafka" {
  bootstrap_servers = ["localhost:9098"]
  tls_enabled       = true
  sasl_mechanism    = "aws-iam"
  sasl_aws_region   = "us-east-1"
  sasl_aws_profile  = "dev"
}

Example provider with aws-iam (AWS profile from a non-default shared config file path) client authentication.

provider "kafka" {
  bootstrap_servers             = ["localhost:9098"]
  tls_enabled                   = true
  sasl_mechanism                = "aws-iam"
  sasl_aws_region               = "us-east-1"
  sasl_aws_profile              = "dev"
  sasl_aws_shared_config_files  = ["/path/to/custom/aws/config"]
}

Example provider with aws-iam (static credentials) client authentication using explicit credentials, here issued by Vault.

provider "vault" {
  auth_login_jwt {
    role = "jwt-role-name"
  }
}

data "vault_aws_access_credentials" "creds" {
  backend = "aws"
  type    = "sts"
  role    = "sts-role-name"
}

provider "kafka" {
  bootstrap_servers   = ["localhost:9098"]
  tls_enabled         = true
  sasl_mechanism      = "aws-iam"
  sasl_aws_region     = "us-east-1"
  sasl_aws_access_key = data.vault_aws_access_credentials.creds.access_key
  sasl_aws_secret_key = data.vault_aws_access_credentials.creds.secret_key
  sasl_aws_token      = data.vault_aws_access_credentials.creds.security_token
}

Example provider with aws-iam (static credentials) client authentication via environment variables. You must export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, plus AWS_SESSION_TOKEN if you are using temporary credentials.

provider "kafka" {
  bootstrap_servers = ["localhost:9098"]
  tls_enabled       = true
  sasl_mechanism    = "aws-iam"
  sasl_aws_region   = "us-east-1"
}

Example provider with aws-iam (container credentials) client authentication. You must export AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE and AWS_CONTAINER_CREDENTIALS_FULL_URI.

provider "kafka" {
  bootstrap_servers = ["localhost:9098"]
  tls_enabled       = true
  sasl_mechanism    = "aws-iam"
  sasl_aws_region   = "us-east-1"
}
Compatibility with Redpanda

provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  kafka_version     = "2.1.0"
}

Because Redpanda does not implement some Metadata APIs, the Kafka protocol version must be set explicitly when configuring the provider.

| Property | Description | Default |
|---|---|---|
| bootstrap_servers | A list of host:port addresses that will be used to discover the full set of alive brokers | Required |
| ca_cert | The CA certificate or path to a CA certificate file in PEM format used to validate the server's certificate. | "" |
| client_cert | The client certificate or path to a file containing the client certificate in PEM format, used for client authentication to Kafka. If you have intermediate CA certificates, append them to client_cert. | "" |
| client_key | The private key, or path to a file containing the private key, that the client certificate was issued for. | "" |
| client_key_passphrase | The passphrase for the private key that the certificate was issued for. | "" |
| kafka_version | The version of the Kafka protocol to use, in $MAJOR.$MINOR.$PATCH format. Some features may not be available on older versions. | "" |
| tls_enabled | Enable communication with the Kafka cluster over TLS. | true |
| skip_tls_verify | Skip TLS verification. | false |
| sasl_username | Username for SASL authentication. | "" |
| sasl_password | Password for SASL authentication. | "" |
| sasl_mechanism | Mechanism for SASL authentication. Allowed values are plain, aws-iam, scram-sha256, scram-sha512, or oauthbearer. | plain |
| sasl_aws_region | AWS region for IAM authentication. | "" |
| sasl_aws_container_authorization_token_file | Path to a file containing the AWS pod identity authorization token. | "" |
| sasl_aws_container_credentials_full_uri | URI to retrieve AWS credentials from. | "" |
| sasl_aws_role_arn | ARN of the AWS IAM role to assume for IAM authentication. | "" |
| sasl_aws_profile | AWS profile to use for IAM authentication. | "" |
| sasl_aws_shared_config_files | List of paths to AWS shared config files. | "" |
| sasl_aws_access_key | AWS access key. | "" |
| sasl_aws_secret_key | AWS secret key. | "" |
| sasl_aws_token | AWS session token. | "" |
| sasl_aws_creds_debug | Enable debug logging for AWS authentication. | false |
| sasl_token_url | The URL to retrieve OAuth2 tokens from when using the oauthbearer SASL mechanism. | "" |
| sasl_oauth_scopes | OAuth scopes to request when using the oauthbearer mechanism. | [] |
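
For reference, here is a sketch of a provider block using SASL/SCRAM over TLS; the username and password are placeholders:

provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  tls_enabled       = true
  sasl_mechanism    = "scram-sha512"
  sasl_username     = "terraform" # placeholder username
  sasl_password     = "changeme"  # placeholder password
}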

Resources

kafka_topic

A resource for managing Kafka topics. The partition count can be increased without destroying the topic.

Example
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
}

resource "kafka_topic" "logs" {
  name               = "systemd_logs"
  replication_factor = 2
  partitions         = 100

  config = {
    "segment.ms"     = "20000"
    "cleanup.policy" = "compact"
  }
}
Properties
| Property | Description |
|---|---|
| name | The name of the topic |
| partitions | The number of partitions the topic should have |
| replication_factor | The number of replicas the topic should have |
| config | A map of string K/V attributes |
Importing Existing Topics

You can import existing topics with the following command:

terraform import kafka_topic.logs systemd_logs
kafka_acl

A resource for managing Kafka ACLs.

Example
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
}

resource "kafka_acl" "test" {
  resource_name       = "syslog"
  resource_type       = "Topic"
  acl_principal       = "User:Alice"
  acl_host            = "*"
  acl_operation       = "Write"
  acl_permission_type = "Deny"
}
Properties
| Property | Description | Valid values |
|---|---|---|
| acl_principal | Principal that is being allowed or denied | * |
| acl_host | Host from which the principal listed in acl_principal will have access | * |
| acl_operation | Operation that is being allowed or denied | Unknown, Any, All, Read, Write, Create, Delete, Alter, Describe, ClusterAction, DescribeConfigs, AlterConfigs, IdempotentWrite |
| acl_permission_type | Type of permission | Unknown, Any, Allow, Deny |
| resource_name | The name of the resource | * |
| resource_type | The type of resource | Unknown, Any, Topic, Group, Cluster, TransactionalID |
| resource_pattern_type_filter | The pattern type of the resource | Prefixed, Any, Match, Literal |
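
For example, a prefixed ACL that allows reads on every topic whose name starts with syslog might look like this (the principal and prefix are illustrative):

resource "kafka_acl" "prefixed_read" {
  resource_name                = "syslog"
  resource_type                = "Topic"
  resource_pattern_type_filter = "Prefixed"
  acl_principal                = "User:Alice" # illustrative principal
  acl_host                     = "*"
  acl_operation                = "Read"
  acl_permission_type          = "Allow"
}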
Importing Existing ACLs

To import an ACL, pass the fields separated by the | character as the import ID. Quote it to avoid shell expansion.

# Fields in shell notation are
# ${acl_principal}|${acl_host}|${acl_operation}|${acl_permission_type}|${resource_type}|${resource_name}|${resource_pattern_type_filter}
terraform import kafka_acl.admin 'User:12345|*|Describe|Allow|Topic|experimental-topic|Prefixed'
kafka_quota

A resource for managing Kafka Quotas.

Example
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
}

resource "kafka_quota" "test" {
  entity_name       = "client1"
  entity_type       = "client-id"
  config = {
    "consumer_byte_rate" = "4000000"
    "producer_byte_rate" = "3500000"
  }
}

resource "kafka_quota" "default_user_quota" {
  entity_type = "user"
  config = {
    "consumer_byte_rate" = "2000000"
    "producer_byte_rate" = "1500000"
  }
}
Properties
| Property | Description |
|---|---|
| entity_name | The name of the entity (if entity_name is not provided, an entity-default Kafka quota is created) |
| entity_type | The entity type (client-id, user, ip) |
| config | A map of string attributes for the entity |
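
An ip quota follows the same shape. The connection_creation_rate key below is Kafka's broker-side IP quota setting (an assumption to verify against your Kafka version):

resource "kafka_quota" "ip_quota" {
  entity_name = "192.168.1.10" # client IP address (illustrative)
  entity_type = "ip"
  config = {
    "connection_creation_rate" = "10" # assumed IP quota key; verify for your Kafka version
  }
}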
kafka_user_scram_credential

A resource for managing Kafka SCRAM user credentials.

Example
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
}

# Legacy usage with 'password' (deprecated)
resource "kafka_user_scram_credential" "test" {
  username               = "user1"
  scram_mechanism        = "SCRAM-SHA-256"
  scram_iterations       = "8192"
  password               = "password"
}

# Recommended usage with write-only password (Terraform 1.11+); the password is no longer stored in tfstate
resource "kafka_user_scram_credential" "secure" {
  username               = "user2"
  scram_mechanism        = "SCRAM-SHA-256"
  scram_iterations       = "8192"
  password_wo            = "secure-password"
  password_wo_version    = "1"
}

You can populate password_wo_version from your secret engine's metadata; for example, HashiCorp Vault returns a version in its data sources.
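
A minimal sketch, assuming the password lives in a Vault KV v2 secret; the mount, secret path, and password key are hypothetical, and the version is passed in explicitly:

variable "kafka_password_version" {
  type    = string
  default = "1" # bump on rotation, e.g. from your secret engine's metadata
}

data "vault_kv_secret_v2" "kafka_user" {
  mount = "secret"      # hypothetical KV v2 mount
  name  = "kafka/user2" # hypothetical secret path
}

resource "kafka_user_scram_credential" "secure" {
  username            = "user2"
  scram_mechanism     = "SCRAM-SHA-256"
  password_wo         = data.vault_kv_secret_v2.kafka_user.data["password"] # hypothetical key
  password_wo_version = var.kafka_password_version
}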

Importing Existing SCRAM user credentials

To import, pass the fields separated by the | character as the import ID. Quote it to avoid shell expansion.

# Fields in shell notation are
# ${username}|${scram_mechanism}|${password} (legacy format)
# or
# ${username}|${scram_mechanism} (for write-only passwords)
terraform import kafka_user_scram_credential.test 'user1|SCRAM-SHA-256|password'
# or for write-only passwords (password_wo and password_wo_version must be set manually after import)
terraform import kafka_user_scram_credential.test 'user1|SCRAM-SHA-256'
Properties
| Property | Description |
|---|---|
| username | The username |
| scram_mechanism | The SCRAM mechanism (SCRAM-SHA-256 or SCRAM-SHA-512) |
| scram_iterations | The number of SCRAM iterations (must be >= 4096). Default: 4096 |
| password | The password for the user (deprecated, use password_wo instead) |
| password_wo | The write-only password for the user (recommended, requires Terraform 1.11+) |
| password_wo_version | Version identifier for the write-only password, used to track changes |

Note: Either password or password_wo must be specified, but not both. The password_wo field is recommended for better security as it's write-only and never returned by the API.

Common Issues and Troubleshooting

Provider Crashes

If you encounter "Empty Summary" errors or nil pointer dereferences, common causes include:

  • Empty bootstrap_servers list - ensure you always provide valid broker addresses
  • Insufficient IAM permissions when using AWS MSK - see the AWS MSK Integration Guide
  • Attempting to modify immutable properties on MSK Serverless
AWS MSK Authentication

For IAM authentication issues:

  • Ensure you're using the correct port (9098 for IAM, 9096 for SASL/SCRAM)
  • For EKS/ECS, set sasl_aws_role_arn = "" to use pod/task credentials (see the sketch after this list)
  • Check your IAM policy includes necessary kafka-cluster:* permissions
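
A minimal sketch of that EKS/ECS setup, assuming the pod or task role already has the needed kafka-cluster permissions; the broker address is illustrative:

provider "kafka" {
  bootstrap_servers = ["b-1.example.kafka.us-east-1.amazonaws.com:9098"] # illustrative MSK broker
  tls_enabled       = true
  sasl_mechanism    = "aws-iam"
  sasl_aws_region   = "us-east-1"
  sasl_aws_role_arn = "" # empty so the provider falls back to pod/task credentials
}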
Dynamic Configuration

The provider requires bootstrap_servers at initialization time. For dynamic environments:

  • Consider using separate Terraform workspaces/states
  • Use count or for_each on resources instead of conditional provider configuration (see the sketch below)
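
For example, a hypothetical enable_logs flag can gate topic creation with for_each without touching the provider block:

variable "enable_logs" {
  type    = bool
  default = true
}

# Gate topic creation on the flag instead of making the provider conditional
resource "kafka_topic" "logs" {
  for_each = var.enable_logs ? toset(["systemd_logs"]) : toset([])

  name               = each.key
  replication_factor = 2
  partitions         = 100
}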

For detailed troubleshooting, see our Troubleshooting Guide.
