This producer is intended for load testing Google Cloud Managed Kafka, so you can use it for sizing and cost estimation.
Use this dataflow consumer to achieve the best performance and to retrieve stats information.
- Install the server:
  ```
  ./authserver install
  ```
- Start the server:
  ```
  ./authserver start
  ```
- Check server status:
  ```
  ./authserver status
  ```

You can check `authserver.log` for runtime information and errors, or pass the `stop` and `restart` parameters accordingly for server control.
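For example, a typical control session might look like the following (the `stop` and `restart` subcommands are used the same way as `start`):

```
# Install and start the auth server, then verify it is running.
./authserver install
./authserver start
./authserver status

# Follow the log for runtime information and errors.
tail -f authserver.log

# Stop or restart the server when needed.
./authserver stop
./authserver restart
```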
- `topicName` is the Kafka topic you want to send to.
- `bootstrap.servers`: replace the value with your own Kafka server, for example:
  ```
  "bootstrap.servers": "bootstrap.dingo-kafka.us-central1.managedkafka.du-hast-mich.cloud.goog:9092",
  ```
- The publishing interval in `case <-time.After(100 * time.Millisecond):` controls the producing rate: 1000 means 1 message per second per publisher, and a smaller number means a higher producing rate. Turning this off completely could result in 429 push-back from Kafka. The program has a backoff strategy, so it shouldn't crash, but use it wisely (see the sketch below).
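As a rough illustration of how this interval gates the producing rate, here is a minimal, self-contained sketch of a rate-gated publisher loop. The names `dataCh`, `stopCh`, and `produce` are hypothetical placeholders, not the actual identifiers in this project:

```go
package main

import (
	"fmt"
	"time"
)

// publisher is a hypothetical sketch of this project's rate gate:
// each tick of time.After releases one message to Kafka.
func publisher(dataCh <-chan string, stopCh <-chan struct{}, interval time.Duration) {
	for {
		select {
		case <-stopCh:
			return
		case <-time.After(interval): // 1000ms here would mean 1 message/sec per publisher
			msg := <-dataCh // take one pre-generated message
			produce(msg)    // send to Kafka; the real program backs off on 429 push-back
		}
	}
}

// produce stands in for the real Kafka publish call.
func produce(msg string) {
	fmt.Println("published:", msg)
}

func main() {
	dataCh := make(chan string, 10)
	stopCh := make(chan struct{})
	go func() { // toy data generator keeping the channel filled
		for i := 0; ; i++ {
			dataCh <- fmt.Sprintf("msg-%d", i)
		}
	}()
	go publisher(dataCh, stopCh, 100*time.Millisecond)
	time.Sleep(time.Second)
	close(stopCh)
}
```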
- Optional:
  - `numPublishers`: the number of concurrent Kafka publishers.
  - `numDataGenThreads`: the number of data generation threads; only increase this if the data pool is constantly empty, which may affect publishing performance.
  - `numWorkers`: best kept the same as `numPublishers`; the workers only pull data from the pool and fill the data channel dedicated to each publisher (see the sketch after this list).
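A minimal sketch of how these three knobs could fit together, assuming a shared pool channel feeding per-publisher channels (all names here are hypothetical). It also illustrates why `numWorkers` is best kept equal to `numPublishers`: each worker fills the dedicated channel of exactly one publisher.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		numPublishers     = 2
		numWorkers        = numPublishers // one worker per publisher channel
		numDataGenThreads = 1             // raise only if the pool runs empty
	)

	pool := make(chan string, 100) // shared data pool

	// Data generation threads fill the shared pool.
	for g := 0; g < numDataGenThreads; g++ {
		go func() {
			for i := 0; ; i++ {
				pool <- fmt.Sprintf("payload-%d", i)
			}
		}()
	}

	// Each publisher gets a dedicated channel, filled by its own worker.
	for p := 0; p < numPublishers; p++ {
		ch := make(chan string, 10)
		go func() { // worker: pull from the pool, fill this publisher's channel
			for msg := range pool {
				ch <- msg
			}
		}()
		go func(id int) { // publisher: drain its channel and send to Kafka
			for msg := range ch {
				fmt.Printf("publisher %d sending %s\n", id, msg)
			}
		}(p)
	}

	time.Sleep(time.Second) // let the sketch run briefly
}
```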
- Init the project (you only need to do this once):
  ```
  make init
  ```
- Build the Golang code:
  ```
  make build
  ```
- Dump the data:

  You should be able to see a binary named `main` in the project root directory. Simply run it with `./main` or `make run`, and press `Ctrl + C` to stop it.
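Putting the steps together, a full build-and-run pass might look like this:

```
make init    # one-time project setup
make build   # compile the Golang code
./main       # run the producer (or: make run); Ctrl + C to stop
```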