Package: github.com/mistsys/go_kafka_client (module)
Version: 0.0.0-20161104193346-eb414aa04321
Repository: https://github.com/mistsys/go_kafka_client.git
Documentation: pkg.go.dev

# README

Go Kafka Client

The Apache Kafka Client Library for Go is sponsored by [CrowdStrike](http://www.crowdstrike.com/) and developed by [Big Data Open Source Security LLC](http://stealth.ly).


Ideas and goals behind the Go Kafka Client:

1) Partition Ownership

We implement multiple strategies for partition ownership, including static assignment. The concept of re-balancing is preserved, but there are now several re-balancing strategies that can run at different times depending on what is going on (for example, while a blue/green deploy is happening). For more on blue/green deployments check out this video.

2) Fetch Management

This is what “fills up the reservoir”, as I like to call it: the processing (either sequential or in batch) always has data available, if there is data to have, without making a network hop. The fetcher has to stay ahead here, keeping the processing tap full (or, if empty, controlled) by pulling the data for the Kafka partition(s) it owns.

3) Work Management

For the Go consumer we currently only support “fan out” using goroutines and channels. If you have ever used Go this will be familiar to you; if not, you should drop everything and learn Go.

4) Offset Management

Offsets are managed on a per-batch basis: the highest offset seen in each batch is committed on a per-partition basis.

Prerequisites:

  1. Install Golang http://golang.org/doc/install
  2. Make sure env variables GOPATH and GOROOT exist and point to correct places
  3. Install GPM https://github.com/pote/gpm
  4. mkdir -p $GOPATH/src/github.com/mistsys && cd $GOPATH/src/github.com/mistsys
  5. git clone https://github.com/mistsys/go_kafka_client.git && cd go_kafka_client
  6. gpm install

Optional (for all tests to work):

  1. Install Docker https://docs.docker.com/installation/#installation
  2. cd $GOPATH/src/github.com/mistsys/go_kafka_client
  3. Build docker image: docker build -t mistsys/go_kafka_client .
  4. docker run -v $(pwd):/go_kafka_client mistsys/go_kafka_client

After this is done you're ready to write some code!

For email support: https://groups.google.com/forum/#!forum/kafka-clients

Related docs:

  1. Offset Storage configuration.
  2. Log and metrics emitters.


# Functions

BootstrapBrokers queries the ConsumerCoordinator for all known brokers in the cluster to be used later as a bootstrap list for the LowLevelClient.
ConsumerConfigFromFile is a helper function that loads a consumer's configuration from file.
Writes a given message with a given tag to log with level Critical.
Formats a given message according to given params with a given tag to log with level Critical.
Writes a given message with a given tag to log with level Debug.
Formats a given message according to given params with a given tag to log with level Debug.
DefaultConsumerConfig creates a ConsumerConfig with sane defaults.
Writes a given message with a given tag to log with level Error.
Formats a given message according to given params with a given tag to log with level Error.
Writes a given message with a given tag to log with level Info.
Formats a given message according to given params with a given tag to log with level Info.
Loads a property file located at Path.
Creates a new BlackList topic filter for a given regex.
NewConsumer creates a new Consumer with a given configuration.
Creates a new DefaultLogger that is configured to write messages to console with minimum log level Level.
NewEmptyEmitter creates a new EmptyEmitter.
Creates a new FailureCounter with threshold FailedThreshold and time window WorkerThresholdTimeWindow.
NewKafkaLogEmitter creates a new KafkaLogEmitter with a provided configuration.
NewKafkaLogEmitterConfig creates a new KafkaLogEmitterConfig with log level set to Info.
Creates an empty MarathonEventProducerConfig.
Creates a new MirrorMaker using given MirrorMakerConfig.
Creates an empty MirrorMakerConfig.
Creates a new ProcessingFailedResult for given TaskId.
Creates a new SaramaClient using a given ConsumerConfig.
Creates a new SiestaClient using a given ConsumerConfig.
Constructs a new TopicsToNumStreams for a consumer with id Consumerid that works within consumer group Groupid.
Creates a new SuccessfulResult for given TaskId.
Creates an empty SyslogProducerConfig.
Constructs a new TopicsToNumStreams for a consumer with id Consumerid that works within consumer group Groupid.
Creates a new WhiteList topic filter for a given regex.
Creates a new WorkerManager with a given id, using a given ConsumerConfig, that is responsible for managing a given TopicAndPartition.
Creates a new ZookeeperConfig with sane defaults.
Creates a new ZookeeperCoordinator with a given configuration.
ProducerConfigFromFile is a helper function that loads a producer's configuration information from file.
Writes a given message with a given tag to log with level Trace.
Formats a given message according to given params with a given tag to log with level Trace.
Writes a given message with a given tag to log with level Warn.
Formats a given message according to given params with a given tag to log with level Warn.
ZookeeperConfigFromFile is a helper function that loads zookeeper configuration information from file.

# Constants

A coordinator event that informs a consumer group of newly deployed topics.
Tells the worker manager to ignore the failure and continue normally.
Tells the worker manager to commit offset and stop processing the current batch.
Use CriticalLevel to indicate fatal errors that may cause data corruption or loss.
Logtypeid field value for LogLine indicating Critical log level.
Use DebugLevel for detailed system reports and diagnostic messages.
Logtypeid field value for LogLine indicating Debug log level.
Tells the worker manager to continue processing new messages but not to commit offset that failed.
Tells the worker manager not to commit offset and stop processing the current batch.
Use ErrorLevel to indicate severe errors that affect application workflow and are not handled automatically.
Logtypeid field value for LogLine indicating Error log level.
Use InfoLevel for general information about a running application.
Logtypeid field value for LogLine indicating Info log level.
Offset with invalid value.
Reset the offset to the largest offset if it is out of range.
Logtypeid field value for LogLine indicating Metrics data.
Range partitioning works on a per-topic basis.
A regular coordinator event that should normally trigger consumer rebalance.
Coordinator event that should trigger consumer re-registration.
The round-robin partition assignor lays out all the available partitions and all the available consumer threads, then proceeds to do a round-robin assignment from partition to consumer thread.
Reset the offset to the smallest offset if it is out of range.
Use TraceLevel for debugging to find problems in functions, variables etc.
Logtypeid field value for LogLine indicating Trace log level.
Use WarnLevel to indicate small errors and failures that should not happen normally but are recovered automatically.
Logtypeid field value for LogLine indicating Warn log level.

# Variables

LogEmitter used by this client.
Logger used by this client.

# Structs

BlackList is a topic filter that will match every topic that does not match a given regex.
DeployedTopics contains information needed to do a successful blue-green deployment.
General information about Kafka broker.
ConstantPartitioner implements the Partitioner interface by just returning a constant value.
Consumer is a high-level Kafka consumer designed to work within a consumer group.
ConsumerConfig defines configuration options for Consumer.
General information about Kafka consumer.
Consumer routine id.
Default implementation of KafkaLogger interface used in this client.
A counter used to track whether we reached the configurable threshold of failed messages within a given time window.
Partitioner sends messages to partitions that correspond to message keys.
HashPartitioner implements the Partitioner interface.
KafkaLogEmitter implements LogEmitter and KafkaLogger and sends all structured log data to a Kafka topic encoded as Avro.
KafkaLogEmitterConfig provides multiple configuration entries for KafkaLogEmitter.
Single Kafka message that is sent to user-defined Strategy.
MirrorMaker is a tool to mirror source Kafka cluster into a target (mirror) Kafka cluster.
MirrorMakerConfig defines configuration options for MirrorMaker.
An implementation of WorkerResult interface representing a failure to process incoming message.
RandomPartitioner implements the Partitioner interface by choosing a random partition each time.
RoundRobinPartitioner implements the Partitioner interface by walking through the available partitions one at a time.
SaramaClient implements LowLevelClient and uses github.com/Shopify/sarama as underlying implementation.
SiestaClient implements LowLevelClient and OffsetStorage and uses github.com/mistsys/siesta as underlying implementation.
Represents a consumer state snapshot.
TopicsToNumStreams implementation representing a static consumer subscription.
An implementation of WorkerResult interface representing a successfully processed incoming message.
SyslogProducerConfig defines configuration options for SyslogProducer.
Represents a single task for a worker.
Type representing a task id.
An implementation of WorkerResult interface representing a timeout to process incoming message.
Type representing a single Kafka topic and partition.
General information about Kafka topic.
WhiteList is a topic filter that will match every topic that matches a given regex.
TopicsToNumStreams implementation representing either whitelist or blacklist consumer subscription.
Represents a worker that is able to process a single message.
WorkerManager is responsible for splitting the incoming batches of messages between a configured number of workers.
ZookeeperConfig is used to pass multiple configuration entries to ZookeeperCoordinator.
ZookeeperCoordinator implements ConsumerCoordinator and OffsetStorage interfaces and is used to coordinate multiple consumers that work within the same consumer group as well as storing and retrieving their offsets.

# Interfaces

ConsumerCoordinator is used to coordinate actions of multiple consumers within the same consumer group.
Logger interface.
LogEmitter is an interface that handles structured log data.
LowLevelClient is a low-level Kafka client that manages broker connections and is responsible for fetching metadata and handling Fetch and Offset requests. TODO: not sure that's a good name for this interface.
OffsetStorage is used to store and retrieve consumer offsets.
Either a WhiteList or BlackList consumer topic filter.
Information on Consumer subscription.
Interface that represents a result of processing incoming message.

# Type aliases

CoordinatorEvent is sent by consumer coordinator representing some state change.
EmptyEmitter implements LogEmitter and ignores all incoming messages.
A callback that is triggered when a worker fails to process a single message.
A callback that is triggered when a worker fails to process ConsumerConfig.WorkerRetryThreshold messages within ConsumerConfig.WorkerThresholdTimeWindow.
Defines what to do when worker fails to process a message.
Represents a logging level.
Defines what to do with a single Kafka message.