memphis

Memphis.dev is a highly scalable, painless, and effortless data streaming platform, made to enable developers and data teams to collaborate and build real-time and streaming apps fast.

Installation

After installing and running the Memphis broker, run the following in your project's directory:

go get github.com/memphisdev/memphis.go

Importing

import "github.com/memphisdev/memphis.go"
Connecting to Memphis
c, err := memphis.Connect("<memphis-host>", 
	"<application type username>", 
	memphis.ConnectionToken("<connection-token>"), // you will get it on application type user creation
	memphis.Password("<password>")) // depends on how Memphis is deployed - default is connection-token-based authentication

The Connect function establishes a connection to Memphis. Connecting to Memphis (cloud or open-source) is required in order to use any of the SDK's other functionality. Upon connection, all of Memphis' features are available.

Configuring the connection to Memphis in the Go SDK can be done by passing in the different configuration functions to the Connect function.

// function params
c, err := memphis.Connect("<memphis-host>", 
	"<application type username>", 
	memphis.ConnectionToken("<connection-token>"), // you will get it on application type user creation
	memphis.Password("<password>"), // depends on how Memphis is deployed - default is connection-token-based authentication
	memphis.AccountId(<int>), // You can find it on the profile page in the Memphis UI. This field should be sent only on the cloud version of Memphis, otherwise it will be ignored
	memphis.Port(<int>), // defaults to 6666
	memphis.Reconnect(<bool>), // defaults to true
	memphis.MaxReconnect(<int>), // set the maximum number of reconnection attempts. The default value is -1, which means unlimited reconnection attempts
	memphis.ReconnectInterval(<time.Duration>), // defaults to 1 second
	memphis.Timeout(<time.Duration>), // defaults to 15 seconds
	// for TLS connection:
	memphis.Tls("<cert-client.pem>", "<key-client.pem>", "<rootCA.pem>"),
	)

Here is an example of connecting to Memphis using a password (using the default user:root password:memphis login with Memphis open-source):

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

Connecting to Memphis cloud will require the account id and broker hostname. You may find these on the Overview page of the Memphis cloud UI at the top of the page. Here is an example of connecting to a cloud broker that is located in US East:

conn, err := memphis.Connect("aws-us-east-1.cloud.memphis.dev", "my_client_username", memphis.Password("memphis"), memphis.AccountId(123456789))

It is possible to use a token-based connection to Memphis as well, where multiple users can share the same token to connect to Memphis. Here is an example of using memphis.Connect with a token:

conn, err := memphis.Connect("localhost", "root", memphis.ConnectionToken("memphis"))

The token will be made available when creating new users.

Memphis open-source needs to be configured to use token-based connections. See the docs for help doing this.

To use a TLS-based connection, the Tls function will need to be invoked:

func Tls(TlsCert string, TlsKey string, CaFile string) Option {
	return func(o *Options) error {
		o.TLSOpts = TLSOpts{
			TlsCert: TlsCert,
			TlsKey:  TlsKey,
			CaFile:  CaFile,
		}
		return nil
	}
}

Using this to connect to Memphis looks like this:

conn, err := memphis.Connect("localhost", "root", memphis.Tls(
    "~/tls_file_path.key",
    "~/tls_cert_file_path.crt",
    "~/tls_cert_file_path.crt",
))

To configure memphis to use TLS see the docs.

Disconnecting from Memphis

To disconnect from Memphis, call Close() on the Memphis connection object.

c.Close()
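
In many programs it is convenient to close the connection when the surrounding function returns, e.g. with defer. A minimal sketch, reusing the connection style of the earlier examples:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))
if err != nil {
    // Handle err
}
defer conn.Close()
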
Creating a Station

Stations are distributed units that store messages. Producers add messages to stations and Consumers take messages from them. Each station stores messages until its retention policy causes them to be deleted or moved to remote storage.

A station will be created automatically when a consumer or producer is used, if no station with the given name exists.

Stations can be created from a Memphis connection (Conn), passing optional parameters using functions.
If a station with the given name already exists when this function is called, the existing station is left unchanged.

s0, err = c.CreateStation("<station-name>")

s1, err = c.CreateStation("<station-name>", 
 memphis.RetentionTypeOpt(<Messages/MaxMessageAgeSeconds/Bytes/AckBased>), // AckBased - cloud only
 memphis.RetentionVal(<int>), // defaults to 3600
 memphis.StorageTypeOpt(<Memory/Disk>), 
 memphis.Replicas(<int>), 
 memphis.IdempotencyWindow(<time.Duration>), // defaults to 2 minutes
 memphis.SchemaName(<string>),
 memphis.SendPoisonMsgToDls(<bool>), // defaults to true
 memphis.SendSchemaFailedMsgToDls(<bool>), // defaults to true
 memphis.TieredStorageEnabled(<bool>), // defaults to false
 memphis.PartitionsNumber(<int>), // default is 1 partition
 memphis.DlsStation(<string>), // defaults to "" (no DLS station) - if set, DLS events will also be sent to this station
)

The CreateStation function is used to create a station. Using the different arguments, one can programmatically create many different types of stations. The Memphis UI can also be used to create stations to the same effect.

A minimal example, using all default values would simply create a station with the given name:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation("myStation")

To change what criteria the station uses to decide if a message should be retained in the station, change the retention type. The different types of retention are documented here in the go README.

The unit of the retention value will vary depending on the RetentionType. The previous link also describes what units will be used.

Here is an example of a station which will only hold up to 10 messages:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.RetentionTypeOpt(memphis.Messages),
    memphis.RetentionVal(10),
    )

Memphis stations can either store messages on disk or in memory. A comparison of those types of storage can be found here.

Here is an example of how to create a station that uses Memory as its storage type:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.StorageTypeOpt(memphis.Memory),
    )

In order to make a station more redundant, replicas can be used. Read more about replicas here. Note that replicas are only available in cluster mode. Cluster mode can be enabled in the Helm settings when deploying Memphis with Kubernetes.

Here is an example of creating a station with 3 replicas:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.Replicas(3),
    )

Idempotency defines how Memphis will prevent duplicate messages from being stored or consumed. The duration of time the message IDs will be stored in the station can be set with the IdempotencyWindow StationOpt. If the environment Memphis is deployed in has an unreliable connection and/or a lot of latency, increasing this value might be desirable. The default duration of time is set to two minutes. Read more about idempotency here.

Here is an example of changing the idempotency window to 3 minutes:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.IdempotencyWindow(3 * time.Minute),
    )

The SchemaName is used to set a schema to be enforced by the station. The default value ensures that no schema is enforced. Here is an example of changing the schema to a defined schema in schemaverse called "sensorLogs":

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.SchemaName("sensorLogs")
    )

There are two parameters for sending messages to the dead-letter station (DLS). Use the functions SendPoisonMsgToDls and SendSchemaFailedMsgToDls to set these parameters.

Here is an example of sending poison messages to the DLS but not messages which fail to conform to the given schema.

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.SchemaName("SensorLogs"),
    memphis.SendPoisonMsgToDls(true),
    memphis.SendSchemaFailedMsgToDls(false),
    )

When either of the DLS flags is set to true, a station can also be set to handle these events. To choose the station to which schema-failed or poison messages will be sent, use the DlsStation StationOpt:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.SchemaName("SensorLogs"),
    memphis.SendPoisonMsgToDls(true),
    memphis.SendSchemaFailedMsgToDls(false),
    memphis.DlsStation("badSensorMessagesStation")
    )

When the retention value is met, Memphis by default will delete old messages. If tiered storage is set up, Memphis can instead move messages to tier 2 storage. Read more about tiered storage here. Enable this setting with the respective StationOpt:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.TieredStorageEnabled(true),
    )

Partitioning might be useful for a station. To have a station partitioned, simply set the PartitionsNumber StationOpt:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

station, err := conn.CreateStation(
    "myStation",
    memphis.PartitionsNumber(3),
    )
Retention Types

Retention types define the methodology behind how a station behaves with its messages. Memphis currently supports the following retention types:

memphis.MaxMessageAgeSeconds

When the retention type is set to MAX_MESSAGE_AGE_SECONDS, messages will persist in the station for the number of seconds specified in the retention_value.

memphis.Messages

When the retention type is set to MESSAGES, the station will only hold up to retention_value messages. The station will delete the oldest messages to maintain a retention_value number of messages.

memphis.Bytes

When the retention type is set to BYTES, the station will only hold up to retention_value BYTES. The oldest messages will be deleted in order to maintain at maximum retention_value BYTES in the station.

memphis.AckBased // for cloud users only

When the retention type is set to ACK_BASED, messages in the station will be deleted after they are acked by all subscribed consumer groups.

Retention Values

The unit of the retention value changes depending on the retention type specified.

All retention values are of type int. The following units are used based on the respective retention type:

memphis.MaxMessageAgeSeconds is represented in seconds
memphis.Messages is a number of messages
memphis.Bytes is a number of bytes
With memphis.AckBased, the retentionValue is ignored.
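
For example, a size-bound station might combine memphis.Bytes with a byte count. A minimal sketch; the 1073741824-byte limit (roughly 1 GiB) is an arbitrary illustrative value:

station, err := conn.CreateStation(
    "myStation",
    memphis.RetentionTypeOpt(memphis.Bytes),
    memphis.RetentionVal(1073741824), // keep at most ~1 GiB of messages
)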

Storage Types

Memphis currently supports the following types of message storage:

memphis.Disk

When storage is set to DISK, messages are stored on disk.

memphis.Memory

When storage is set to MEMORY, messages are stored in the system memory (RAM).

Destroying a Station

Destroying a station will remove all its resources (including producers and consumers).

err := s.Destroy()
Creating a new Schema

If the schema already exists, a new version will be created.

err := conn.CreateSchema("<schema-name>", "<schema-type>", "<schema-file-path>")
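
For example, registering a JSON schema from a local file might look like the following sketch (the schema name, type string, and file path are placeholders):

err := conn.CreateSchema("sensorLogs", "json", "./sensor_logs_schema.json")

// Handle err
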
Enforcing a Schema on an Existing Station
err := conn.EnforceSchema("<schema-name>", "<station-name>")
Deprecated - Attaching Schema

use EnforceSchema instead

err := conn.AttachSchema("<schema-name>", "<station-name>")
Detaching a Schema from Station
err := conn.DetachSchema("<station-name>")
Produce and Consume Messages

The most common client operations are producing messages and consuming messages.

Messages are published to a station with a Producer and consumed from it with a Consumer, by creating a consumer and calling its Consume function with a message-handler callback.

Alternatively, consumers may call the Fetch function to only consume a specific number of messages.

Consumers are pull-based and consume all the messages in a station unless you are using a consumer group, in which case messages are spread across all members of the group.

Memphis messages are payload agnostic. Payloads are byte slices, i.e., []byte.

In order to stop receiving messages, you have to call consumer.StopConsume().

The consumer will stop even if there are messages currently being sent to it.
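
A minimal sketch of stopping a running consumer (consumer and handler are assumed to have been created as shown later in this README):

consumer.Consume(handler)

// ... later, when the application no longer needs to process messages:
consumer.StopConsume()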

Creating a Producer
// from a Conn
p0, err := c.CreateProducer(
	"<station-name>",
	"<producer-name>",
) 

// from a Station
p1, err := s.CreateProducer("<producer-name>")
Producing a message

Both producers and connections can use the Produce function. To produce a message from a connection, simply call connection.Produce. This function will create a producer if none with the given name exists; otherwise it will pull the producer from a cache and use it to produce the message.

Here is an example of producing from a connection (a receiver function of the connection struct):

c.Produce("station_name_c_produce", "producer_name_a", []byte("Hey There!"), []memphis.ProducerOpt{}, []memphis.ProduceOpt{})

Here is an example of producing from a producer p (a receiver function of the producer struct).

Creating a producer and calling Produce on it will increase the performance of producing messages, as it reduces the latency of having to get a producer from the cache.

p.Produce("<message in []byte or map[string]interface{}/[]byte or protoreflect.ProtoMessage or map[string]interface{}(schema validated station - protobuf)/struct with json tags or map[string]interface{} or interface{}(schema validated station - json schema) or []byte/string (schema validated station - graphql schema) or []byte or map[string]interface{} or struct with avro tags(schema validated station - avro schema)>", memphis.AckWaitSec(15)) // defaults to 15 seconds

Note: When producing a message using the Avro format ([]byte or map[string]interface{}), int types are converted to float64, and a Golang float64 corresponds to an Avro double. So when creating an Avro schema, it can't have int types; use double instead. E.g.:

myData := map[string]interface{}{
	"username": "John",
	"age":      30,
}
{
	"type": "record",
	"namespace": "com.example",
	"name": "test_schema",
	"fields": [
		{ "name": "username", "type": "string" },
		{ "name": "age", "type": "double" }
	]
}
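
Given the data and schema above, producing the map is a regular Produce call; a sketch, assuming p is a producer attached to a station with this Avro schema:

err := p.Produce(myData)

// Handle err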

Note: When producing to a station with more than one partition, the producer will produce messages in a Round Robin fashion between the different partitions.

For message data formats see here.

Here is an example of a produce function call that waits up to 30 seconds for an acknowledgement from Memphis:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

producer, err := conn.CreateProducer(
    "StationToProduceFor",
    "MyNewProducer",
)

// Handle err

err = producer.Produce(
    []byte("My Message :)"),
    memphis.AckWaitSec(30),
)

// Handle err

As discussed before in the station section, idempotency is an important feature of Memphis. To achieve idempotency, an id must be assigned to messages that are being produced. Use the MsgId ProduceOpt for this purpose.

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

producer, err := conn.CreateProducer(
    "StationToProduceFor",
    "MyNewProducer",
)

// Handle err

err = producer.Produce(
    []byte("My Message :)"),
    memphis.MsgId("UniqueMessageID"), // a unique, per-message id (placeholder)
)

// Handle err

To add message headers to the message, use the MsgHeaders ProduceOpt. Headers can help with observability when using certain 3rd-party tools to monitor the behavior of Memphis. See here for more details.

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

producer, err := conn.CreateProducer(
    "StationToProduceFor",
    "MyNewProducer",
)

// Handle err

hdrs := memphis.Headers{}
hdrs.New()
err = hdrs.Add("key", "value")

// Handle err

err = producer.Produce(
    []byte("My Message :)"),
    memphis.MsgHeaders(hdrs),
)

// Handle err

Lastly, Memphis can produce to a specific partition in a station. To do so, use the ProducerPartitionKey ProduceOpt:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

producer, err := conn.CreateProducer(
    "StationToProduceFor",
    "MyNewProducer",
)

// Handle err

err = producer.Produce(
    []byte("My Message :)"),
    memphis.ProducerPartitionKey("2ndPartition"),
)

// Handle err
Async produce

For better performance, the client won't wait for an acknowledgment before sending more messages.

p.Produce(
	"<message in []byte or map[string]interface{}/[]byte or protoreflect.ProtoMessage or map[string]interface{}(schema validated station - protobuf)/struct with json tags or map[string]interface{} or interface{}(schema validated station - json schema) or []byte/string (schema validated station - graphql schema) or []byte or map[string]interface{} or struct with avro tags(schema validated station - avro schema)>",
    memphis.AckWaitSec(15),
	memphis.AsyncProduce(),
)
Sync produce

For better reliability, the client will wait for an acknowledgement from the broker before sending another message.

p.Produce(
	"<message in []byte or map[string]interface{}/[]byte or protoreflect.ProtoMessage or map[string]interface{}(schema validated station - protobuf)/struct with json tags or map[string]interface{} or interface{}(schema validated station - json schema) or []byte/string (schema validated station - graphql schema) or []byte or map[string]interface{} or struct with avro tags(schema validated station - avro schema)>",
    memphis.AckWaitSec(15),
	memphis.SyncProduce(),
)
Produce using partition number

The partition number will be used to produce messages to a specific partition.

p.Produce(
	"<message in []byte or map[string]interface{}/[]byte or protoreflect.ProtoMessage or map[string]interface{}(schema validated station - protobuf)/struct with json tags or map[string]interface{} or interface{}(schema validated station - json schema) or []byte/string (schema validated station - graphql schema) or []byte or map[string]interface{} or struct with avro tags(schema validated station - avro schema)>",
    memphis.ProducerPartitionNumber(<int>),
)
Produce to multiple stations

Producing to multiple stations can be done by creating a producer with multiple stations and then calling produce on that producer.

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

producer, err := conn.CreateProducer(
    []string{"station1", "station2", "station3"},
    "MyNewProducer",
)

// Handle err

err = producer.Produce(
    []byte("My Message :)"),
    memphis.AckWaitSec(30),
)

// Handle err

In this example, the producer sends a message to three different stations: station1, station2, and station3. Alternatively, it is also possible to produce to multiple stations using the connection:

conn.Produce([]string{"station1", "station2", "station3"}, "producer_name_a", []byte("Hey There!"), []memphis.ProducerOpt{}, []memphis.ProduceOpt{})
Destroying a Producer
p.Destroy()
Creating a Consumer
// creation from a Station
consumer0, err = s.CreateConsumer("<consumer-name>",
  memphis.ConsumerGroup("<consumer-group>"), // defaults to consumer name
  memphis.PullInterval(<time.Duration>), // defaults to 1 second
  memphis.BatchSize(<int>), // defaults to 10
  memphis.BatchMaxWaitTime(<time.Duration>), // defaults to 5 seconds, has to be at least 1 ms
  memphis.MaxAckTime(<time.Duration>), // defaults to 30 sec
  memphis.MaxMsgDeliveries(<int>), // defaults to 2
  memphis.ConsumerErrorHandler(func(*Consumer, error){}),
  memphis.StartConsumeFromSequence(<uint64>), // start consuming from a specific sequence. defaults to 1
  memphis.LastMessages(<int64>), // consume the last N messages, defaults to -1 (all messages in the station)
)

// creation from a Conn
consumer1, err = c.CreateConsumer("<station-name>", "<consumer-name>", ...) 

Consumers are used to pull messages from a station. Here is how to create a consumer with all of the default parameters:
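
(A minimal sketch; the station and consumer names below are placeholders.)

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

consumer, err := conn.CreateConsumer(
    "MyStation",
    "MyNewConsumer",
)

// Handle err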

Note: When consuming from a station with more than one partition, the consumer will consume messages in Round Robin fashion from the different partitions.

To create a consumer in a consumer group, add the ConsumerGroup parameter:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

consumer, err := conn.CreateConsumer(
    "MyStation",
    "MyNewConsumer",
    memphis.ConsumerGroup("ConsumerGroup1"),
)

// Handle err

When using the Consume function from a consumer, the consumer will continue to consume in an infinite loop. To change the rate at which the consumer polls, change the PullInterval consumer option:

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

consumer, err := conn.CreateConsumer(
    "MyStation",
    "MyNewConsumer",
    memphis.PullInterval(2 * time.Second),
)

// Handle err

Every time the consumer pulls from the station, the consumer will try to take BatchSize number of elements from the station. However, sometimes there are not enough messages in the station for the consumer to consume a full batch. In this case, the consumer will continue to wait until either BatchSize messages are gathered or the duration specified by BatchMaxWaitTime is reached.

Here is an example of a consumer that will try to pull 100 messages every 10 seconds while waiting up to 15 seconds for all messages to reach the consumer.

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

consumer, err := conn.CreateConsumer(
    "MyStation",
    "MyNewConsumer",
    memphis.PullInterval(10 * time.Second),
    memphis.BatchSize(100),
    memphis.BatchMaxWaitTime(15 * time.Second),
)

// Handle err

The MaxMsgDeliveries ConsumerOpt sets the maximum number of times a message will be delivered to the consumer group when it is not acknowledged in time.

conn, err := memphis.Connect("localhost", "root", memphis.Password("memphis"))

// Handle err

consumer, err := conn.CreateConsumer(
    "MyStation",
    "MyNewConsumer",
    memphis.PullInterval(10 * time.Second),
    memphis.BatchSize(100),
    memphis.BatchMaxWaitTime(15 * time.Second),
    memphis.MaxMsgDeliveries(100),
)

// Handle err
Passing a context to a message handler
ctx := context.Background()
ctx = context.WithValue(ctx, "key", "value")
consumer.SetContext(ctx)
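
The value set this way can then be read from the ctx argument inside the message handler (handlers are described in the next section); a minimal sketch:

func handler(msgs []*memphis.Msg, err error, ctx context.Context) {
    fmt.Println(ctx.Value("key")) // prints "value", as set via SetContext above
    // ... process msgs as usual
}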
Processing Messages

First, create a callback function that receives a slice of pointers to memphis.Msg and an error.

Then, pass this callback into consumer.Consume function.

The consumer will try to fetch messages every PullInterval (set at the consumer's creation) and call the defined message handler.

func handler(msgs []*memphis.Msg, err error, ctx context.Context) {
	if err != nil {
		fmt.Printf("Fetch failed: %v", err)
		return
	}

	for _, msg := range msgs {
		fmt.Println(string(msg.Data()))
		msg.Ack()
	}
}

consumer.Consume(handler, 
				memphis.ConsumerPartitionKey(<string>), // use the partition key to consume from a specific partition (if not specified, consume in a Round Robin fashion)
)

consumer.Consume(handler, 
				memphis.ConsumerPartitionNumber(<int>),
)
Consumer schema deserialization

To get messages deserialized, use msg.DataDeserialized().

func handler(msgs []*memphis.Msg, err error, ctx context.Context) {
	if err != nil {
		fmt.Printf("Fetch failed: %v", err)
		return
	}

	for _, msg := range msgs {
		data, err := msg.DataDeserialized() // returns (any, error)
		if err != nil {
			fmt.Printf("Deserialization failed: %v", err)
			continue
		}
		fmt.Println(data)
		msg.Ack()
	}
}

There may be some instances where you apply a schema after a station has received some messages. In order to consume those messages, DataDeserialized may be used to consume the messages without trying to apply the schema to them. As an example, if you produced a string to a station and then attached a protobuf schema, using DataDeserialized will not try to deserialize the string as a protobuf-formatted message.

Fetch a single batch of messages
msgs, err := conn.FetchMessages("<station-name>", "<consumer-name>",
  memphis.FetchBatchSize(<int>), // defaults to 10
  memphis.FetchConsumerGroup("<consumer-group>"), // defaults to consumer name
  memphis.FetchBatchMaxWaitTime(<time.Duration>), // defaults to 100 millis, has to be at least 100 ms
  memphis.FetchMaxAckTime(<time.Duration>), // defaults to 10 sec
  memphis.FetchMaxMsgDeliveries(<int>), // defaults to 2
  memphis.FetchConsumerErrorHandler(func(*Consumer, error){}),
  memphis.FetchStartConsumeFromSeq(<uint64>), // start consuming from a specific sequence. defaults to 1
  memphis.FetchLastMessages(<int64>), // consume the last N messages, defaults to -1 (all messages in the station)
  memphis.FetchPartitionKey(<string>), // use the partition key to consume from a specific partition (if not specified, consume in a Round Robin fashion)
)
Fetch a single batch of messages after creating a consumer

Setting prefetch = true will prefetch the next batch of messages and save it in memory for future Fetch() requests.
Note: Use a higher MaxAckTime, as the messages will sit in a local cache for some time before being processed and Ack'd.

msgs, err := consumer.Fetch(<batch-size> int,
							<prefetch> bool,
							memphis.ConsumerPartitionKey(<string>), // use the partition key to consume from a specific partition (if not specified, consume in a Round Robin fashion)
							)
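
A concrete sketch, fetching up to 50 messages without prefetching (the batch size is an arbitrary illustrative value):

msgs, err := consumer.Fetch(50, false)

// Handle err

for _, msg := range msgs {
    fmt.Println(string(msg.Data()))
    msg.Ack()
}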
Acknowledging a Message

Acknowledging a message indicates to the Memphis server not to
re-send the same message again to the same consumer or consumer group.

message.Ack()
Nacking a Message

Mark the message as not acknowledged - the broker will resend the message immediately to the same consumer group, instead of waiting for the configured max ack time.

message.Nack()
Sending a message to the dead-letter

Sending the message to the dead-letter station (DLS) - the broker won't resend the message again to the same consumer group and will place the message inside the dead-letter station (DLS) with the given reason. The message will still be available to other consumer groups.

message.DeadLetter("reason")
Delay the message after a given duration

Delay the message and tell the Memphis server to re-send the same message again to the same consumer group.
The message will be redelivered only if Consumer.MaxMsgDeliveries has not been reached yet.

message.Delay(<time.Duration>)
Get headers

Get headers per message

headers := msg.GetHeaders()
Get message sequence number

Get message sequence number

sequenceNumber, err := msg.GetSequenceNumber()
Get message time sent

Get message time sent

timeSent, err := msg.GetTimeSent()
Destroying a Consumer
consumer.Destroy()
Check if broker is connected
conn.IsConnected()
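
For example, an application can check the connection state before producing (a minimal sketch):

if !conn.IsConnected() {
    // the connection is down - reconnect or stop producing
}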

Documentation


Constants

const (
	SEED                      = 31
	JetstreamOperationTimeout = 30
)

Variables

var (
	ConsumerErrStationUnreachable = errors.New("station unreachable")
	ConsumerErrConsumeInactive    = errors.New("consumer is inactive")
	ConsumerErrDelayDlsMsg        = errors.New("cannot delay DLS message")
)

Functions

func DefaultConsumerErrHandler added in v0.1.5

func DefaultConsumerErrHandler(c *Consumer, err error)

func DefaultErrHandler added in v1.1.2

func DefaultErrHandler(nc *nats.Conn)

Types

type Conn

type Conn struct {
	ConnId string
	// contains filtered or unexported fields
}

Conn - holds the connection with memphis.

func Connect

func Connect(host, username string, options ...Option) (*Conn, error)

Connect - creates connection with memphis.

func (*Conn) AttachSchema added in v0.1.8

func (c *Conn) AttachSchema(name string, stationName string) error

Deprecated - use EnforceSchema instead

func (*Conn) Close

func (c *Conn) Close()

func (*Conn) CreateConsumer

func (c *Conn) CreateConsumer(stationName, consumerName string, opts ...ConsumerOpt) (*Consumer, error)

CreateConsumer - creates a consumer.

func (*Conn) CreateProducer

func (c *Conn) CreateProducer(stationName interface{}, name string, opts ...ProducerOpt) (*Producer, error)

CreateProducer - creates a producer.

func (*Conn) CreateSchema added in v1.0.3

func (c *Conn) CreateSchema(name, schemaType, path string, options ...RequestOpt) error

CreateSchema - validates and uploads a new schema to the Broker. If the schema already exists, a new version will be created.

func (*Conn) CreateStation

func (c *Conn) CreateStation(Name string, opts ...StationOpt) (*Station, error)

func (*Conn) DetachSchema added in v0.1.8

func (c *Conn) DetachSchema(stationName string, options ...RequestOpt) error

func (*Conn) EnforceSchema added in v1.0.3

func (c *Conn) EnforceSchema(name string, stationName string, options ...RequestOpt) error

EnforceSchema - enforce a schema on a chosen station.

func (*Conn) FetchMessages added in v0.2.2

func (c *Conn) FetchMessages(stationName string, consumerName string, opts ...FetchOpt) ([]*Msg, error)

FetchMessages - Consume a batch of messages.

func (*Conn) GetPartitionFromKey added in v1.1.3

func (c *Conn) GetPartitionFromKey(key string, stationName string) (int, error)

func (*Conn) IsConnected added in v0.1.0

func (c *Conn) IsConnected() bool

IsConnected - check if connected to broker - returns boolean

func (*Conn) Produce added in v0.2.1

func (c *Conn) Produce(stationName interface{}, name string, message any, opts []ProducerOpt, pOpts []ProduceOpt) error

Produce - produce a message without creating a new producer, using only the connection. In cases where extra performance is needed, the recommended way is to create a producer first and produce messages using its Produce receiver function.

func (*Conn) ValidatePartitionNumber added in v1.1.4

func (c *Conn) ValidatePartitionNumber(partitionNumber int, stationName string) error

type ConsumeHandler added in v0.1.0

type ConsumeHandler func([]*Msg, error, context.Context)

ConsumeHandler - handler for consumed messages

type Consumer

type Consumer struct {
	Name               string
	ConsumerGroup      string
	PullInterval       time.Duration
	BatchSize          int
	BatchMaxTimeToWait time.Duration
	MaxAckTime         time.Duration
	MaxMsgDeliveries   int

	StartConsumeFromSequence uint64
	LastMessages             int64

	PartitionGenerator *RoundRobinProducerConsumerGenerator
	// contains filtered or unexported fields
}

Consumer - memphis consumer object.

func (*Consumer) Consume added in v0.1.0

func (c *Consumer) Consume(handlerFunc ConsumeHandler, opts ...ConsumingOpt) error

Consumer.Consume - start consuming messages according to the interval configured in the consumer object. When a batch is consumed the handlerFunc will be called.

func (*Consumer) Destroy added in v0.0.6

func (c *Consumer) Destroy(options ...RequestOpt) error

Destroy - destroy this consumer.

func (*Consumer) Fetch added in v0.0.7

func (c *Consumer) Fetch(batchSize int, prefetch bool, opts ...ConsumingOpt) ([]*Msg, error)

Fetch - immediately fetch a batch of messages.

func (*Consumer) SetContext added in v0.2.0

func (c *Consumer) SetContext(ctx context.Context)

Consumer.SetContext - set a context that will be passed to each message handler function call

func (*Consumer) StopConsume added in v0.1.0

func (c *Consumer) StopConsume()

StopConsume - stops the continuous consume operation.

type ConsumerErrHandler added in v0.1.5

type ConsumerErrHandler func(*Consumer, error)

ConsumerErrHandler is used to process asynchronous errors.

type ConsumerOpt

type ConsumerOpt func(*ConsumerOpts) error

ConsumerOpt - a function on the options for consumers.

func BatchMaxWaitTime added in v0.1.0

func BatchMaxWaitTime(batchMaxWaitTime time.Duration) ConsumerOpt

BatchMaxWaitTime - max time to wait between pulls, default is 5 seconds.

func BatchSize

func BatchSize(batchSize int) ConsumerOpt

BatchSize - pull batch size.

func ConsumerErrorHandler added in v0.1.5

func ConsumerErrorHandler(ceh ConsumerErrHandler) ConsumerOpt

ConsumerErrorHandler - handler for consumer errors.

func ConsumerGenUniqueSuffix deprecated added in v0.1.5

func ConsumerGenUniqueSuffix() ConsumerOpt

Deprecated: will no longer be supported after November 1st, 2023. ConsumerGenUniqueSuffix - whether to generate a unique suffix for this consumer.

func ConsumerGroup

func ConsumerGroup(cg string) ConsumerOpt

ConsumerGroup - consumer group name, default is "".

func ConsumerTimeoutRetry added in v1.2.0

func ConsumerTimeoutRetry(timeoutRetry int) ConsumerOpt

ConsumerTimeoutRetry - number of retries for consumer timeout. The default value is 5.

func LastMessages added in v0.1.9

func LastMessages(lastMessages int64) ConsumerOpt

func MaxAckTime added in v0.1.0

func MaxAckTime(maxAckTime time.Duration) ConsumerOpt

MaxAckTime - max time to ack a message; if a message is not acked within this time period, Memphis will resend it.

func MaxMsgDeliveries

func MaxMsgDeliveries(maxMsgDeliveries int) ConsumerOpt

MaxMsgDeliveries - max number of message deliveries, by default is 2.

func PullInterval added in v0.1.0

func PullInterval(pullInterval time.Duration) ConsumerOpt

PullInterval - interval between pulls, default is 1 second.

func StartConsumeFromSequence added in v0.1.9

func StartConsumeFromSequence(startConsumeFromSequence uint64) ConsumerOpt

type ConsumerOpts

type ConsumerOpts struct {
	Name                     string
	StationName              string
	ConsumerGroup            string
	PullInterval             time.Duration
	BatchSize                int
	BatchMaxTimeToWait       time.Duration
	MaxAckTime               time.Duration
	MaxMsgDeliveries         int
	GenUniqueSuffix          bool
	ErrHandler               ConsumerErrHandler
	StartConsumeFromSequence uint64
	LastMessages             int64
	TimeoutRetry             int
}

ConsumerOpts - configuration options for a consumer.

type ConsumersMap added in v0.2.2

type ConsumersMap map[string]*Consumer

type ConsumingOpt added in v1.1.3

type ConsumingOpt func(*ConsumingOpts) error

func ConsumerPartitionKey added in v1.1.3

func ConsumerPartitionKey(ConsumerPartitionKey string) ConsumingOpt

ConsumerPartitionKey - Partition key for the consumer to consume from

func ConsumerPartitionNumber added in v1.1.4

func ConsumerPartitionNumber(ConsumerPartitionNumber int) ConsumingOpt

ConsumerPartitionNumber - Partition number for the consumer to consume from

type ConsumingOpts added in v1.1.3

type ConsumingOpts struct {
	ConsumerPartitionKey    string
	ConsumerPartitionNumber int
}

ConsumingOpts - configuration options for consuming messages

type DlsMessage added in v0.1.8

type DlsMessage struct {
	StationName     string            `json:"station_name"`
	Producer        ProducerDetails   `json:"producer"`
	Message         MessagePayloadDls `json:"message"`
	ValidationError string            `json:"validation_error"`
}

type FetchOpt added in v0.2.2

type FetchOpt func(*FetchOpts) error

FetchOpt - a function on the options fetch.

func FetchBatchMaxWaitTime added in v0.2.2

func FetchBatchMaxWaitTime(batchMaxWaitTime time.Duration) FetchOpt

BatchMaxWaitTime - max time to wait between pulls, default is 5 seconds.

func FetchBatchSize added in v0.2.2

func FetchBatchSize(batchSize int) FetchOpt

BatchSize - pull batch size.

func FetchConsumerErrorHandler added in v0.2.2

func FetchConsumerErrorHandler(ceh ConsumerErrHandler) FetchOpt

FetchConsumerErrorHandler - handler for consumer errors.

func FetchConsumerGenUniqueSuffix deprecated added in v0.2.2

func FetchConsumerGenUniqueSuffix() FetchOpt

Deprecated: will no longer be supported after November 1st, 2023. FetchConsumerGenUniqueSuffix - whether to generate a unique suffix for this consumer.

func FetchConsumerGroup added in v0.2.2

func FetchConsumerGroup(cg string) FetchOpt

ConsumerGroup - consumer group name, default is "".

func FetchMaxAckTime added in v0.2.2

func FetchMaxAckTime(maxAckTime time.Duration) FetchOpt

MaxAckTime - max time to ack a message; if a message is not acked within this time period, Memphis will resend it.

func FetchMaxMsgDeliveries added in v0.2.2

func FetchMaxMsgDeliveries(maxMsgDeliveries int) FetchOpt

MaxMsgDeliveries - max number of message deliveries, by default is 2.

func FetchPartitionKey added in v1.1.3

func FetchPartitionKey(partitionKey string) FetchOpt

PartitionKey - partition key to consume from.

func FetchPartitionNumber added in v1.3.0

func FetchPartitionNumber(partitionNumber int) FetchOpt

PartitionNumber - partition number to consume from.

func FetchPrefetch added in v1.0.2

func FetchPrefetch() FetchOpt

FetchPrefetch - whether to prefetch next batch for consumption

type FetchOpts added in v0.2.2

type FetchOpts struct {
	ConsumerName             string
	StationName              string
	ConsumerGroup            string
	BatchSize                int
	BatchMaxTimeToWait       time.Duration
	MaxAckTime               time.Duration
	MaxMsgDeliveries         int
	GenUniqueSuffix          bool
	ErrHandler               ConsumerErrHandler
	StartConsumeFromSequence uint64
	LastMessages             int64
	Prefetch                 bool
	FetchPartitionKey        string
	FetchPartitionNumber     int
}

FetchOpts - configuration options for fetch.

type FunctionsUpdate added in v1.2.0

type FunctionsUpdate struct {
	Functions map[int]int `json:"functions"`
}

type Headers added in v0.1.5

type Headers struct {
	MsgHeaders map[string][]string
}

func (*Headers) Add added in v0.1.5

func (hdr *Headers) Add(key, value string) error

func (*Headers) New added in v0.1.5

func (hdr *Headers) New()

type MessagePayloadDls added in v0.1.8

type MessagePayloadDls struct {
	Size    int               `json:"size"`
	Data    string            `json:"data"`
	Headers map[string]string `json:"headers"`
}

type Msg added in v0.0.7

type Msg struct {
	// contains filtered or unexported fields
}

Msg - a received message, can be acked.

func (*Msg) Ack added in v0.0.7

func (m *Msg) Ack() error

Msg.Ack - ack the message.

func (*Msg) Data added in v0.0.7

func (m *Msg) Data() []byte

Msg.Data - get message's data.

func (*Msg) DataDeserialized added in v1.1.4

func (m *Msg) DataDeserialized() (any, error)

Msg.DataDeserialized - get message's deserialized data.

func (*Msg) DeadLetter added in v1.3.1

func (m *Msg) DeadLetter(reason string) error

Msg.DeadLetter - send the message to the dead-letter station (DLS). The broker won't resend the message again to the same consumer group and will place the message inside the dead-letter station (DLS) with the given reason. The message will still be available to other consumer groups.

func (*Msg) Delay added in v0.2.2

func (m *Msg) Delay(duration time.Duration) error

Msg.Delay - Delay a message redelivery

func (*Msg) GetHeaders added in v0.1.5

func (m *Msg) GetHeaders() map[string]string

Msg.GetHeaders - get headers per message

func (*Msg) GetSequenceNumber added in v0.1.9

func (m *Msg) GetSequenceNumber() (uint64, error)

Msg.GetSequenceNumber - get message's sequence number

func (*Msg) GetTimeSent added in v1.3.2

func (m *Msg) GetTimeSent() (time.Time, error)

Msg.GetTimeSent - get message's time sent

func (*Msg) Nack added in v1.3.1

func (m *Msg) Nack() error

Msg.Nack - negatively acknowledge a message, meaning the message will be redelivered to the same consumer group immediately, without waiting for its ack wait time.

type NackedDlsMessage added in v1.3.1

type NackedDlsMessage struct {
	StationName string `json:"station_name"`
	Error       string `json:"error"`
	Partition   int    `json:"partition"`
	CgName      string `json:"cg_name"`
	Seq         uint64 `json:"seq"`
}

type Notification added in v0.1.7

type Notification struct {
	Title string
	Msg   string
	Code  string
	Type  string
}

type Option

type Option func(*Options) error

Option is a function on the options for a connection.

func AccountId added in v1.0.2

func AccountId(accountId int) Option

AccountId - default is 1.

func ConnectionToken

func ConnectionToken(connectionToken string) Option

ConnectionToken - string connection token.

func MaxReconnect

func MaxReconnect(maxReconnect int) Option

MaxReconnect - the maximum number of reconnect attempts.

func Password added in v0.2.2

func Password(password string) Option

Password - string password.

func Port added in v0.1.5

func Port(port int) Option

Port - default is 6666.

func Reconnect

func Reconnect(reconnect bool) Option

Reconnect - whether to do reconnect while connection is lost.

func ReconnectInterval added in v0.1.0

func ReconnectInterval(reconnectInterval time.Duration) Option

ReconnectInterval - interval between reconnect attempts.

func Timeout added in v0.1.0

func Timeout(timeout time.Duration) Option

Timeout - connection timeout.

func Tls added in v0.1.9

func Tls(TlsCert string, TlsKey string, CaFile string) Option

Tls - paths to tls cert, key and ca files.

type Options

type Options struct {
	Host              string
	Port              int
	Username          string
	AccountId         int
	ConnectionToken   string
	Reconnect         bool
	MaxReconnect      int // MaxReconnect is the maximum number of reconnection attempts. The default value is -1 which means reconnect indefinitely.
	ReconnectInterval time.Duration
	Timeout           time.Duration
	TLSOpts           TLSOpts
	Password          string
}

type PMsgToAck added in v0.1.7

type PMsgToAck struct {
	ID     int    `json:"id"`
	CgName string `json:"cg_name"`
}

type PartitionsUpdate added in v1.1.0

type PartitionsUpdate struct {
	PartitionsList []int `json:"partitions_list"`
}

type PrefetchedMsgs added in v1.0.2

type PrefetchedMsgs struct {
	// contains filtered or unexported fields
}

type ProduceOpt

type ProduceOpt func(*ProduceOpts) error

ProduceOpt - a function on the options for produce operations.

func AckWaitSec

func AckWaitSec(ackWaitSec int) ProduceOpt

AckWaitSec - max time in seconds to wait for an ack from memphis.

func AsyncProduce added in v0.1.5

func AsyncProduce() ProduceOpt

AsyncProduce - produce operation won't wait for broker acknowledgement

func MsgHeaders added in v0.1.5

func MsgHeaders(hdrs Headers) ProduceOpt

MsgHeaders - set headers to a message

func MsgId added in v0.1.7

func MsgId(id string) ProduceOpt

MsgId - set an id for a message for idempotent producer

func ProducerPartitionKey added in v1.1.3

func ProducerPartitionKey(partitionKey string) ProduceOpt

ProducerPartitionKey - set a partition key for a message

func ProducerPartitionNumber added in v1.1.4

func ProducerPartitionNumber(partitionNumber int) ProduceOpt

ProducerPartitionNumber - set a specific partition number for a message

func SyncProduce added in v1.1.2

func SyncProduce() ProduceOpt

SyncProduce - produce operation will wait for broker acknowledgement

type ProduceOpts

type ProduceOpts struct {
	Message                 any
	AckWaitSec              int
	MsgHeaders              Headers
	AsyncProduce            bool
	ProducerPartitionKey    string
	ProducerPartitionNumber int
}

ProduceOpts - configuration options for produce operations.

type Producer

type Producer struct {
	Name string

	PartitionGenerator *RoundRobinProducerConsumerGenerator
	// contains filtered or unexported fields
}

Producer - memphis producer object.

func (*Producer) Destroy added in v0.0.6

func (p *Producer) Destroy(options ...RequestOpt) error

Destroy - destroy this producer.

func (*Producer) Produce

func (p *Producer) Produce(message any, opts ...ProduceOpt) error

Producer.Produce - produces a message into a station. The message is of type []byte, or protoreflect.ProtoMessage in case it is a schema-validated station.

type ProducerDetails added in v0.1.8

type ProducerDetails struct {
	Name         string `json:"name"`
	ConnectionId string `json:"connection_id"`
}

type ProducerOpt added in v0.1.5

type ProducerOpt func(*ProducerOpts) error

ProducerOpt - a function on the options for producer creation.

func ProducerGenUniqueSuffix deprecated added in v0.1.5

func ProducerGenUniqueSuffix() ProducerOpt

Deprecated: will no longer be supported after November 1st, 2023. ProducerGenUniqueSuffix - whether to generate a unique suffix for this producer.

func ProducerTimeoutRetry added in v1.2.0

func ProducerTimeoutRetry(timeoutRetry int) ProducerOpt

ProducerTimeoutRetry - set the number of retries for timeout requests

type ProducerOpts added in v0.1.5

type ProducerOpts struct {
	GenUniqueSuffix bool
	TimeoutRetry    int
}

ProducerOpts - configuration options for producer creation.

type ProducersMap added in v0.2.1

type ProducersMap map[string]*Producer

type RequestOpt added in v1.2.0

type RequestOpt func(*RequestOpts) error

RequestOpt - a function on the options request.

func TimeoutRetry added in v1.2.0

func TimeoutRetry(retries int) RequestOpt

TimeoutRetry - number of retries in case of timeout, default is 5.

type RequestOpts added in v1.2.0

type RequestOpts struct {
	TimeoutRetries int
}

type RetentionType

type RetentionType int

RetentionType - station's message retention type

const (
	MaxMessageAgeSeconds RetentionType = iota
	Messages
	Bytes
	AckBased
)

func (RetentionType) String

func (r RetentionType) String() string

type RoundRobinProducerConsumerGenerator added in v1.1.0

type RoundRobinProducerConsumerGenerator struct {
	NumberOfPartitions int
	Partitions         []int
	Current            int
	// contains filtered or unexported fields
}

func (*RoundRobinProducerConsumerGenerator) Next added in v1.1.0

type Schema added in v1.0.3

type Schema struct {
	Name              string `json:"name"`
	Type              string `json:"type"`
	CreatedByUsername string `json:"created_by_username"`
	SchemaContent     string `json:"schema_content"`
	MessageStructName string `json:"message_struct_name"`
}

type SchemaUpdate added in v0.1.5

type SchemaUpdate struct {
	UpdateType SchemaUpdateType
	Init       SchemaUpdateInit `json:"init,omitempty"`
}

type SchemaUpdateInit added in v0.1.5

type SchemaUpdateInit struct {
	SchemaName    string        `json:"schema_name"`
	ActiveVersion SchemaVersion `json:"active_version"`
	SchemaType    string        `json:"type"`
}

type SchemaUpdateType added in v0.1.5

type SchemaUpdateType int
const (
	SchemaUpdateTypeInit SchemaUpdateType = iota + 1
	SchemaUpdateTypeDrop
)

type SchemaVersion added in v0.1.5

type SchemaVersion struct {
	VersionNumber     int    `json:"version_number"`
	Descriptor        string `json:"descriptor"`
	Content           string `json:"schema_content"`
	MessageStructName string `json:"message_struct_name"`
}

type SdkClientsUpdate added in v0.2.2

type SdkClientsUpdate struct {
	StationName string `json:"station_name"`
	Type        string `json:"type"`
	Update      bool   `json:"update"`
}

type Station

type Station struct {
	Name              string
	RetentionType     RetentionType
	RetentionValue    int
	StorageType       StorageType
	Replicas          int
	IdempotencyWindow time.Duration

	SchemaName           string
	DlsConfiguration     dlsConfiguration
	TieredStorageEnabled bool
	PartitionsNumber     int
	DlsStation           string
	// contains filtered or unexported fields
}

Station - memphis station object.

func (*Station) CreateConsumer

func (s *Station) CreateConsumer(name string, opts ...ConsumerOpt) (*Consumer, error)

Station.CreateConsumer - creates a consumer attached to this station.

func (*Station) CreateProducer

func (s *Station) CreateProducer(name string, opts ...ProducerOpt) (*Producer, error)

Station.CreateProducer - creates a producer attached to this station.

func (*Station) Destroy added in v0.1.0

func (s *Station) Destroy(options ...RequestOpt) error

type StationName

type StationName string

type StationOpt

type StationOpt func(*StationOpts) error

StationOpt - a function on the options for a station.

func DlsStation added in v1.1.4

func DlsStation(name string) StationOpt

DlsStation - if set, DLS events will also be sent to the selected station.

func IdempotencyWindow added in v0.1.7

func IdempotencyWindow(idempotencyWindow time.Duration) StationOpt

IdempotencyWindow - time frame in which idempotency tracks messages, default is 2 minutes. This feature is enabled only for messages that contain a Msg Id.

func Name

func Name(name string) StationOpt

Name - station's name

func PartitionsNumber added in v1.1.0

func PartitionsNumber(partitionsNumber int) StationOpt

func Replicas

func Replicas(replicas int) StationOpt

Replicas - number of replicas for the station's messages, default is 1.

func RetentionTypeOpt

func RetentionTypeOpt(retentionType RetentionType) StationOpt

RetentionTypeOpt - retention type, default is MaxMessageAgeSeconds.

func RetentionVal

func RetentionVal(retentionVal int) StationOpt

RetentionVal - number which represents the retention based on the retentionType, default is 604800.

func SchemaName added in v0.1.8

func SchemaName(schemaName string) StationOpt

SchemaName - schema name to attach.

func SendPoisonMsgToDls added in v0.1.8

func SendPoisonMsgToDls(sendPoisonMsgToDls bool) StationOpt

SendPoisonMsgToDls - send poison message to dls, default is true

func SendSchemaFailedMsgToDls added in v0.1.8

func SendSchemaFailedMsgToDls(sendSchemaFailedMsgToDls bool) StationOpt

SendSchemaFailedMsgToDls - send messages to the DLS after schema validation failure, default is true.

func StationTimeoutRetry added in v1.2.0

func StationTimeoutRetry(timeoutRetry int) StationOpt

StationTimeoutRetry - number of retries for timeout errors, default is 5.

func StorageTypeOpt

func StorageTypeOpt(storageType StorageType) StationOpt

StorageTypeOpt - persistent storage type for the station's messages, default is Disk (file storage).

func TieredStorageEnabled added in v0.2.1

func TieredStorageEnabled(tieredStorageEnabled bool) StationOpt

TieredStorageEnabled - enable tiered storage, default is false

type StationOpts

type StationOpts struct {
	Name                     string
	RetentionType            RetentionType
	RetentionVal             int
	StorageType              StorageType
	Replicas                 int
	IdempotencyWindow        time.Duration
	SchemaName               string
	SendPoisonMsgToDls       bool
	SendSchemaFailedMsgToDls bool
	TieredStorageEnabled     bool
	PartitionsNumber         int
	DlsStation               string
	TimeoutRetry             int
}

StationOpts - configuration options for a station.

func GetStationDefaultOptions

func GetStationDefaultOptions() StationOpts

GetStationDefaultOptions - returns default configuration options for the station.

type StorageType

type StorageType int

StorageType - station's message storage type

const (
	Disk StorageType = iota
	Memory
)

func (StorageType) String

func (s StorageType) String() string

type TLSOpts added in v0.1.9

type TLSOpts struct {
	TlsCert string
	TlsKey  string
	CaFile  string
}

