README
A generic load-test utility to generate load against various endpoints.
The following endpoints are supported:
- HTTP
- gRPC
- Redis
- MySQL
- PostgreSQL
- Cassandra
- MongoDB
- SMTP
- Kafka
Features:
- Metrics: latency distributions (percentiles) and response-time histograms are reported (using HdrHistogram)
- Programmable via Lua: you can mix and match any of these endpoints, and metrics are collected and reported for each of them
- Custom metrics (e.g. "make two gRPC calls and one MySQL query, and report the total for these three calls combined as a single unit")
- SQL metrics: queries are fingerprinted, and metrics are computed per fingerprinted signature
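The SQL fingerprinting idea can be sketched as follows: literals in a query are replaced with a placeholder, and metrics are keyed on a digest of the normalized text, so queries that differ only in their values are grouped together. This is an illustrative sketch; the normalization rules and digest format lg actually uses may differ:

```python
import hashlib
import re

def fingerprint(query: str) -> tuple[str, str]:
    """Normalize a SQL query and return (digest, normalized_query).

    Replaces string and numeric literals with '?' so that queries
    differing only in literal values share one fingerprint.
    """
    normalized = query.strip().lower()
    normalized = re.sub(r"'[^']*'", "?", normalized)          # string literals
    normalized = re.sub(r"\b\d+(\.\d+)?\b", "?", normalized)  # numeric literals
    normalized = re.sub(r"\s+", " ", normalized)              # collapse whitespace
    digest = hashlib.sha256(normalized.encode()).hexdigest()[:16].upper()
    return digest, normalized

d1, q1 = fingerprint("SELECT 1")
d2, q2 = fingerprint("SELECT 2")
# both normalize to "select ?" and therefore share the same digest
```

This is why the MySQL and PostgreSQL reports below show a digest column plus a "digest to query mapping" such as `select ?`.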
Installation
You can download a prebuilt binary from the Releases page, or compile from source (requires Go):
make
Usage
It can be invoked directly for simple test cases, or driven by a Lua script for more complex operations.
HTTP
Generate HTTP load:
lg http --duration 10s --concurrency 1 --requestrate 1 http://google.com
Output:
INFO[2019-10-17 18:09:57.334] Starting ... Id=0
INFO[2019-10-17 18:10:00.334] Warmup done (2 seconds) Id=0
http://google.com:
+-----+--------+--------+--------+--------+--------+--------+--------+--------+------+--------+-----+-----+-----+-----+
| URL | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL| AVGRPS | 2XX | 3XX | 4XX | 5XX |
+-----+--------+--------+--------+--------+--------+--------+--------+--------+------+--------+-----+-----+-----+-----+
| / | 134.10 | 10.55 | 125.06 | 164.48 | 130.30 | 164.48 | 164.48 | 164.48 | 20| 1.00 | 0 | 10 | 0 | 0 |
+-----+--------+--------+--------+--------+--------+--------+--------+--------+------+--------+-----+-----+-----+-----+
Response time histogram (ms):
/:
125.056 [ 1] |■■■■■■■■
128.998 [ 1] |■■■■■■■■
132.940 [ 5] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
136.882 [ 1] |■■■■■■■■
140.824 [ 1] |■■■■■■■■
144.766 [ 0] |
148.708 [ 0] |
152.650 [ 0] |
156.592 [ 0] |
160.534 [ 0] |
164.479 [ 1] |■■■■■■■■
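The percentile columns in the table above are derived from the recorded latency distribution. A minimal sketch of how percentiles can be computed from raw samples (lg itself uses HdrHistogram, which buckets values rather than storing every sample; the data below is made up for illustration):

```python
def percentile(samples: list[float], p: int) -> float:
    """Return the value at or below which p percent of samples fall
    (nearest-rank method)."""
    ordered = sorted(samples)
    # nearest rank: ceil(p/100 * N), clamped to at least 1
    rank = max(1, -(-len(ordered) * p // 100))  # ceiling division
    return ordered[int(rank) - 1]

# illustrative sample latencies in milliseconds
latencies_ms = [125.06, 130.30, 128.4, 131.2, 132.9,
                129.8, 134.1, 140.8, 136.8, 164.48]
p50 = percentile(latencies_ms, 50)   # median
p99 = percentile(latencies_ms, 99)   # tail latency
```

The histogram output is the same data viewed as bucket counts instead of ranks.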
See here for how to do this via a Lua script.
gRPC
Generate gRPC load. It expects a gRPC method and a payload (as JSON, which is converted to protobuf) as input.
First, find the gRPC method name and generate the corresponding payload. There are two options:
- Use server reflection (preferred):
  lg grpc --template --plaintext grpcb.in:9000
  Output:
  ========= hello.HelloService =========
  --method 'hello.HelloService.SayHello' --data '{"greeting":""}'
  --method 'hello.HelloService.LotsOfReplies' --data '{"greeting":""}'
  --method 'hello.HelloService.LotsOfGreetings' --data '{"greeting":""}'
  --method 'hello.HelloService.BidiHello' --data '{"greeting":""}'
- Use .proto files:
  lg grpc --template --proto ~/go/src/google.golang.org/grpc/examples/helloworld/helloworld/helloworld.proto --import-path ~/go/src/google.golang.org/grpc/examples/helloworld/helloworld/
  Output:
  ========= helloworld.Greeter =========
  --method 'helloworld.Greeter.SayHello' --data '{"name":""}'
Then, we can generate load:
lg grpc --requestrate 1 --concurrency 1 --duration 10s \
--method 'hello.HelloService.SayHello' --data '{"greeting":"meow"}' \
--plaintext grpcb.in:9000
Output:
INFO[0000] Starting ...
INFO[0005] Warmup done (5s seconds)
GRPC Metrics:
grpcb.in:9000:
+-----------------------------+--------+--------+--------+--------+--------+--------+--------+--------+-------+--------+--------+----------+
| METHOD | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL | AVGRPS | ERRORS | DEADLINE |
+-----------------------------+--------+--------+--------+--------+--------+--------+--------+--------+-------+--------+--------+----------+
| hello.HelloService.SayHello | 155.61 | 0.34 | 155.14 | 156.29 | 155.65 | 156.29 | 156.29 | 156.29 | 10 | 1.00 | 0 | 0 |
+-----------------------------+--------+--------+--------+--------+--------+--------+--------+--------+-------+--------+--------+----------+
Response time histogram (ms):
hello.HelloService.SayHello:
155.136 [ 2] |■■■■■■■■■■■■■■■■■■■■■■■■■■■
155.251 [ 2] |■■■■■■■■■■■■■■■■■■■■■■■■■■■
155.481 [ 1] |■■■■■■■■■■■■■
155.596 [ 3] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
155.941 [ 1] |■■■■■■■■■■■■■
156.287 [ 1] |■■■■■■■■■■■■■
See here for how to do this via a Lua script.
MySQL
Generate MySQL load:
lg mysql --duration 10s --requestrate 0 --query 'SELECT 1' 'root:@tcp(127.0.0.1:3306)/'
Output:
INFO[2019-10-17 18:22:07.232] Starting ... Id=0
INFO[2019-10-17 18:22:10.233] Warmup done (2 seconds) Id=0
:
+------------------+------+--------+-------+-------+-------+-------+-------+--------+-------+----------+
| QUERY | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL | AVGRPS |
+------------------+------+--------+-------+-------+-------+-------+-------+--------+-------+----------+
| 16219655761820A2 |0.032 | 0.007 | 0.026 | 2.025 | 0.031 | 0.040 | 0.053 | 0.180 |268889 | 26884.80 |
+------------------+------+--------+-------+-------+-------+-------+-------+--------+-------+----------+
Response time histogram (ms):
16219655761820A2:
0.026 [ 116] |
0.225 [ 268754] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.424 [ 9] |
0.623 [ 6] |
0.822 [ 2] |
1.021 [ 0] |
1.220 [ 1] |
1.419 [ 0] |
1.618 [ 0] |
1.817 [ 0] |
2.025 [ 1] |
Digest to query mapping:
16219655761820A2 : select ?
See here for how to do this via a Lua script.
Postgres
Generate PostgreSQL load:
./lg psql --duration 10s --requestrate 1 --query "SELECT 1" "postgresql://postgres@127.0.0.1:5432/postgres"
Output:
PostgresQL Metrics:
127.0.0.1:
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| QUERY | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL | AVGRPS | ERRORS |
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| 16219655761820A2 | 0.70 | 0.27 | 0.25 | 1.00 | 0.81 | 1.00 | 1.00 | 1.00 | 10 | 1.00 | 0 |
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
Response time histogram (ms):
16219655761820A2:
0.252 [ 2] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.326 [ 1] |■■■■■■■■■■■■■■■■■■■■
0.622 [ 1] |■■■■■■■■■■■■■■■■■■■■
0.770 [ 1] |■■■■■■■■■■■■■■■■■■■■
0.844 [ 2] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.918 [ 2] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.997 [ 1] |■■■■■■■■■■■■■■■■■■■■
Digest to query mapping:
16219655761820A2 : select ?
See here for how to do this via a Lua script.
CQL
Generate Cassandra load:
lg cql --requestrate 1 --disable-peers-lookup --username user --password password --plaintext --query 'select * from system.peers LIMIT 1' localhost:9042
Output:
Cassandra Metrics:
:
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| QUERY | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL | AVGRPS | ERRORS |
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| 818FE829A3B44923 | 1.43 | 0.08 | 1.28 | 1.56 | 1.42 | 1.56 | 1.56 | 1.56 | 8 | 1.00 | 0 |
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
Response time histogram (ms):
818FE829A3B44923:
1.284 [ 1] |■■■■■■■■■■■■■■■■■■■■
1.311 [ 1] |■■■■■■■■■■■■■■■■■■■■
1.392 [ 2] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
1.446 [ 1] |■■■■■■■■■■■■■■■■■■■■
1.473 [ 2] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
1.558 [ 1] |■■■■■■■■■■■■■■■■■■■■
See here for how to do this via a Lua script.
Also see here for how we run a full application load test against a database (by making the CQL calls behind a given piece of business logic) and report low-level CQL metrics as well as higher-level, more meaningful metrics.
MongoDB
Generate MongoDB load:
# Basic find operation
lg mongo --database mydb --collection users --operation find --filter '{"status":"active"}' --requestrate 10 --duration 30s mongodb://localhost:27017
# Insert operation
lg mongo --database mydb --collection users --operation insert --document '{"name":"John","age":30,"status":"active"}' --requestrate 5 --duration 30s mongodb://localhost:27017
# Update operation
lg mongo --database mydb --collection users --operation update --filter '{"name":"John"}' --update '{"$set":{"age":31,"updated_at":"2024-01-01"}}' --requestrate 2 --duration 30s mongodb://localhost:27017
# Delete operation
lg mongo --database mydb --collection users --operation delete --filter '{"status":"inactive"}' --requestrate 1 --duration 30s mongodb://localhost:27017
# Aggregate operation
lg mongo --database mydb --collection users --operation aggregate --filter '[{"$match":{"age":{"$gte":18}}},{"$group":{"_id":"$status","count":{"$sum":1}}}]' --requestrate 1 --duration 30s mongodb://localhost:27017
# Lua script execution
lg script --duration 12s --requestrate 3 --warmup 3s ./scripts/mongo_comprehensive.lua
lg script --duration 12s --requestrate 3 --warmup 3s ./scripts/mongo_simple.lua
Authentication Support
The MongoDB load generator supports several authentication methods:
# Username/password authentication
lg mongo --database mydb --collection users --operation find --username user --password pass --auth-db admin mongodb://localhost:27017
# Connection string with authentication
lg mongo --database mydb --collection users --operation find mongodb://user:pass@localhost:27017/mydb?authSource=admin
# TLS connection
lg mongo --database mydb --collection users --operation find --tls mongodb://localhost:27017
Supported Operations
Operation | Description | Required Parameters
---|---|---
find | Query documents | --filter (JSON query)
insert | Insert documents | --document (JSON document)
update | Update documents | --filter (JSON query), --update (JSON update)
delete | Delete documents | --filter (JSON query)
aggregate | Aggregation pipeline | --filter (JSON pipeline array)
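Each of these parameters is parsed as JSON before being sent to MongoDB. A quick way to sanity-check a filter or pipeline string before a long test run (a generic sketch, not part of lg; note that JSON requires double quotes):

```python
import json

def check_mongo_arg(raw: str, expect_array: bool = False) -> bool:
    """Return True if raw parses as JSON of the expected shape:
    an object for --filter/--document/--update, an array for an
    aggregation pipeline."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, list) if expect_array else isinstance(parsed, dict)

check_mongo_arg('{"status":"active"}')                     # filter: valid
check_mongo_arg('[{"$match":{"age":{"$gte":18}}}]', True)  # pipeline: valid
check_mongo_arg("{'status':'active'}")                     # single quotes: invalid JSON
```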
Output:
INFO[0000] Starting ...
INFO[0005] Warmup done (5s seconds)
MongoDB Metrics:
mydb.users:
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| OPERATION | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL | AVGRPS | ERRORS |
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| find | 2.45 | 0.85 | 1.20 | 4.50 | 2.30 | 4.20 | 4.50 | 4.50 | 150 | 10.00 | 0 |
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
Response time histogram (ms):
find:
1.200 [ 12] |■■■■■■■■■■■
1.530 [ 25] |■■■■■■■■■■■■■■■■■■■■■■■
1.860 [ 38] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
2.190 [ 42] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
2.520 [ 18] |■■■■■■■■■■■■■■■■
2.850 [ 8] |■■■■■■■
3.180 [ 4] |■■■
3.510 [ 2] |■
3.840 [ 0] |
4.170 [ 0] |
4.500 [ 1] |■
See here for how to do this via a Lua script.
Redis
Generate Redis load:
lg --duration 20s --requestrate 0 --redis --redis-cmd 'get' --redis-arg hello 127.0.0.1:6379
Output:
INFO[2019-10-17 18:34:04.411] Starting ... Id=0
INFO[2019-10-17 18:34:07.411] Warmup done (2 seconds) Id=0
:
+-------+------+--------+------+------+------+------+------+--------+-------+----------+--------+
| QUERY | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL | AVGRPS | ERRORS |
+-------+------+--------+------+------+------+------+------+--------+-------+----------+--------+
| get | 0.02 | 0.01 | 0.02 | 3.17 | 0.02 | 0.03 | 0.04 | 0.12 |773807 | 38691.20 | 0 |
+-------+------+--------+------+------+------+------+------+--------+-------+----------+--------+
Response time histogram (ms):
get:
0.018 [ 3] |
0.333 [ 773783] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.648 [ 9] |
0.963 [ 4] |
1.278 [ 3] |
1.593 [ 2] |
1.908 [ 1] |
2.223 [ 1] |
2.538 [ 0] |
2.853 [ 0] |
3.169 [ 1] |
SMTP
Generate SMTP load:
lg smtp --from bar@bar.com --to foo@foo.com --username username --password password "127.0.0.1:1025" --plaintext
Output:
SMTP Metrics:
localhost:1025:
+----------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| KEY | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL | AVGRPS | ERRORS |
+----------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| localhost:1025 | 1.23 | 0.68 | 0.23 | 1.92 | 1.68 | 1.92 | 1.92 | 1.92 | 9 | 1.00 | 0 |
+----------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
Response time histogram (ms):
localhost:1025:
0.225 [ 1] |■■■■■■■■■■■■■
0.394 [ 2] |■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.563 [ 1] |■■■■■■■■■■■■■
1.577 [ 1] |■■■■■■■■■■■■■
1.746 [ 3] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
1.918 [ 1] |■■■■■■■■■■■■■
See here for how to do this via a Lua script.
Kafka
Generate Kafka load by producing messages to or consuming messages from a Kafka topic:
# Produce messages to a Kafka topic
lg kafka --brokers "localhost:9092" --topic "test-topic" --message "Hello World" --requestrate 10 --duration 30s
# Consume messages from a Kafka topic
lg kafka --brokers "localhost:9092" --topic "test-topic" --group "consumer-group-1" --read --requestrate 10 --duration 30s
SCRAM Authentication Support
The load generator supports SASL SCRAM authentication for Kafka connections. Both SCRAM-SHA-256 and SCRAM-SHA-512 mechanisms are supported, with or without TLS.
# Using SCRAM authentication with TLS
lg kafka --brokers "kafka-broker:9092" --topic "test-topic" --message "Hello World" \
--username "user" --password "password" --sasl-mechanism "SCRAM-SHA-512" --tls \
--requestrate 10 --duration 30s
# Using SCRAM authentication without TLS
lg kafka --brokers "kafka-broker:9092" --topic "test-topic" --message "Hello World" \
--username "user" --password "password" --sasl-mechanism "SCRAM-SHA-256" \
--requestrate 10 --duration 30s
Authentication Options
Option | Description
---|---
--username | SASL username for authentication
--password | SASL password for authentication
--sasl-mechanism | SASL mechanism (SCRAM-SHA-256 or SCRAM-SHA-512; default: SCRAM-SHA-512)
--tls | Enable TLS for Kafka connections (optional with SCRAM authentication)
Topic Creation Options
The load generator can automatically create topics if they don't exist. This feature is enabled by default.
Option | Description
---|---
--auto-create-topic | Automatically create the topic if it doesn't exist (default: true)
--partitions | Number of partitions for topic creation (default: 1)
--replication-factor | Replication factor for topic creation (default: 1)
Testing with Docker Compose
The repository includes a Docker Compose configuration with both a regular Kafka instance and a Kafka instance with SCRAM authentication:
# Start Kafka with SCRAM authentication
docker compose up -d zookeeper kafka-scram
Automatic Topic Creation
When running the load generator, topics will be automatically created if they don't exist:
# Run load generator with SCRAM authentication - topic will be created automatically if it doesn't exist
lg kafka --brokers "localhost:9093" --topic "scram-test-topic" --message "Test message" \
--username "admin" --password "admin-secret" --sasl-mechanism "SCRAM-SHA-512" \
--requestrate 10 --duration 30s
Testing with the Load Generator
# Test with SCRAM-SHA-512 authentication
lg kafka --brokers "localhost:9093" --topic "scram-test-topic" --message "Hello World" \
--username "user" --password "user-secret" --sasl-mechanism "SCRAM-SHA-512" \
--requestrate 10 --duration 30s
# Test with SCRAM-SHA-256 authentication
lg kafka --brokers "localhost:9093" --topic "scram-test-topic" --message "Hello World" \
--username "user" --password "user-secret" --sasl-mechanism "SCRAM-SHA-256" \
--requestrate 10 --duration 30s
Predefined Users
The Kafka SCRAM service comes with two predefined users:
- Client user: username user, password user-secret. Use this for general client operations with the load generator.
- Admin user: username admin, password admin-secret. Use this for administrative operations such as creating topics.
The Kafka SCRAM service is configured to support both SCRAM-SHA-256 and SCRAM-SHA-512 authentication mechanisms.
Output:
INFO[0000] Starting ...
INFO[0005] Warmup done (5s seconds)
Kafka Metrics:
localhost:9092:
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| KEY | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL | AVGRPS | ERRORS |
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
| write:test-topic | 1.25 | 0.35 | 0.85 | 2.15 | 1.20 | 1.85 | 2.15 | 2.15 | 300 | 10.00 | 0 |
+------------------+------+--------+------+------+------+------+------+--------+-------+--------+--------+
Response time histogram (ms):
write:test-topic:
0.850 [ 15] |■■■■■■■■■■■
0.980 [ 42] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
1.110 [ 78] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
1.240 [ 69] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
1.370 [ 41] |■■■■■■■■■■■■■■■■■■■■■■■■■■■
1.500 [ 25] |■■■■■■■■■■■■■■■■
1.630 [ 15] |■■■■■■■■■■■
1.760 [ 8] |■■■■■
1.890 [ 4] |■■
2.020 [ 2] |■
2.150 [ 1] |■
Lua
NOTE: cookies are retained automatically in the HTTP client object (by watching for set-cookie response headers), so logging in once is sufficient; the cookies are sent automatically on subsequent HTTP requests.
Generate load using Lua (a script can make use of all the supported gRPC/HTTP/Redis/MySQL etc. modules); see here for examples.
lg script --duration 10s --requestrate 1 ./scripts/http.lua
Output:
INFO[2019-10-17 18:35:19.367] Global called Id=1
INFO[2019-10-17 18:35:19.367] Initializing ... Id=1
INFO[2019-10-17 18:35:19.367] Request rate: 1 Id=1
INFO[2019-10-17 18:35:19.368] Scripts args: <nil> Id=1
INFO[2019-10-17 18:35:19.368] Initialization done, took 168.029µs Id=1
INFO[2019-10-17 18:35:19.368] Starting ... Id=0
INFO[2019-10-17 18:35:20.731] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:21.542] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:22.368] Warmup done (2 seconds) Id=0
INFO[2019-10-17 18:35:22.548] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:23.567] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:24.549] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:25.544] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:26.548] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:27.560] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:28.540] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:29.539] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:30.549] 301 Moved Permanently Id=1
INFO[2019-10-17 18:35:31.550] 301 Moved Permanently Id=1
http://google.com:
+-----+--------+--------+--------+--------+--------+--------+--------+--------+------+--------+-----+-----+-----+-----+
| URL | AVG | STDDEV | MIN | MAX | P50 | P95 | P99 | P99.99 | TOTAL| AVGRPS | 2XX | 3XX | 4XX | 5XX |
+-----+--------+--------+--------+--------+--------+--------+--------+--------+------+--------+-----+-----+-----+-----+
| / | 181.59 | 8.15 | 171.39 | 199.55 | 180.22 | 199.55 | 199.55 | 199.55 | 10 | 1.00 | 0 | 10 | 0 | 0 |
+-----+--------+--------+--------+--------+--------+--------+--------+--------+------+--------+-----+-----+-----+-----+
Response time histogram (ms):
/:
171.392 [ 1] |■■■■■■■■■■
174.207 [ 1] |■■■■■■■■■■
177.022 [ 1] |■■■■■■■■■■
179.837 [ 1] |■■■■■■■■■■
182.652 [ 4] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
185.467 [ 0] |
188.282 [ 0] |
191.097 [ 0] |
193.912 [ 1] |■■■■■■■■■■
196.727 [ 0] |
199.551 [ 1] |■■■■■■■■■■
A script can also combine multiple kinds of workload (an HTTP call, a gRPC call, a Redis call, a MySQL call, and so on). These can be combined to mimic a real application, and metrics can be tracked individually as well as overall. See here for an example.
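The "individually as well as overall" idea amounts to timing each call separately while also recording one combined duration per iteration under a custom metric name. A language-neutral sketch of that bookkeeping (the metric names, the sleep-based stand-in work, and the checkout_flow label are all made up for illustration; this is not lg's Lua API):

```python
import time
from collections import defaultdict

# metric name -> list of recorded durations in milliseconds
metrics: dict[str, list[float]] = defaultdict(list)

def timed(name: str, fn) -> float:
    """Run fn, record its duration (ms) under name, and return the duration."""
    start = time.perf_counter()
    fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    metrics[name].append(elapsed_ms)
    return elapsed_ms

def one_iteration() -> None:
    # stand-ins for e.g. a gRPC call and a MySQL query
    total = timed("grpc.SayHello", lambda: time.sleep(0.002))
    total += timed("mysql.select_user", lambda: time.sleep(0.001))
    # combined custom metric for the whole business operation
    metrics["checkout_flow"].append(total)

one_iteration()
# metrics now holds per-call series plus the combined "checkout_flow" series
```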
Server/Client
In this mode, a server instance of lg runs in listen mode, and multiple lg clients run independently (to distribute the load over N machines). The server receives metrics from all the clients, aggregates them, and publishes a single report. It also exposes a UI for viewing reports (CLI table output, JSON export, latency graphs).
Run the server (localhost example, but can be remote as well):
lg server :1234
Run the client(s):
lg --duration 5s --warmup 1s --server :1234 http http://google.com
To view the UI, visit http://localhost:1234
For example, visiting http://localhost:1234/graphs should show something like this:
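Aggregating across clients can be sketched as a histogram merge: since each client reports bucket counts rather than raw samples, the server can sum the counts bucket by bucket and recompute totals and percentiles from the merged result. A simplified sketch with made-up bucket data (lg's actual wire format and its HdrHistogram merge are not shown here):

```python
from collections import Counter

def merge_histograms(client_hists: list[dict[float, int]]) -> dict[float, int]:
    """Sum per-bucket counts from several clients into one histogram."""
    merged: Counter = Counter()
    for hist in client_hists:
        merged.update(hist)  # adds counts for shared bucket boundaries
    return dict(merged)

# bucket upper bound (ms) -> request count, per client
client_a = {0.225: 100, 0.424: 3}
client_b = {0.225: 80, 0.424: 1, 0.623: 2}
combined = merge_histograms([client_a, client_b])
total_requests = sum(combined.values())
```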
Docker Compose for Testing
A Docker Compose configuration is included to easily spin up services for testing the load generator.
Available Services
The docker-compose.yml file includes the following services:
- Kafka & Zookeeper: For testing Kafka producer and consumer functionality
- Redis: For testing Redis commands
- PostgreSQL: For testing PostgreSQL queries
- MySQL: For testing MySQL queries
- Cassandra: For testing CQL queries
- MongoDB: For testing MongoDB operations
- MailHog: A development SMTP server with web interface for testing SMTP functionality
Usage
Starting All Services
To start all services:
docker compose up -d
Starting a Specific Service
To start a specific service (and its dependencies):
docker compose up -d <service-name>
For example, to start only Kafka and its dependency (Zookeeper):
docker compose up -d kafka
Stopping Services
To stop all services:
docker compose down
Service Connection Details
Refer to each command section above for examples of how to connect to each service.
Creating Test Topics in Kafka
To create a test topic in Kafka, you can use the following command:
docker exec -it kafka kafka-topics --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
To list topics:
docker exec -it kafka kafka-topics --list --bootstrap-server localhost:9092