github.com/remerge/sarama
module / package
v1.9.0
Repository: https://github.com/remerge/sarama.git
Documentation: pkg.go.dev

# README

sarama


Sarama is an MIT-licensed Go client library for Apache Kafka version 0.8 (and later).

Getting started

  • API documentation and examples are available via godoc.
  • Mocks for testing are available in the mocks subpackage.
  • The examples directory contains more elaborate example applications.
  • The tools directory contains command line tools that can be useful for testing, diagnostics, and instrumentation.

Compatibility and API stability

Sarama provides a "2 releases + 2 months" compatibility guarantee: we support the two latest stable releases of Kafka and Go, and we provide a two-month grace period for older releases. This means we currently officially support Go 1.6 and 1.5, and Kafka 0.9.0 and 0.8.2, although older releases are still likely to work.

Sarama follows semantic versioning and provides API stability via the gopkg.in service. You can import a version with a guaranteed stable API via http://gopkg.in/Shopify/sarama.v1. A changelog is maintained in the repository.

Contributing

# Packages

No description provided by the author
Package mocks provides mocks that can be used for testing applications that use Sarama.
No description provided by the author

# Functions

NewAsyncProducer creates a new AsyncProducer using the given broker addresses and configuration.
NewAsyncProducerFromClient creates a new AsyncProducer using the given client.
NewBroker creates and returns a Broker targeting the given host:port address.
NewClient creates a new Client.
NewConfig returns a new configuration instance with sane defaults.
NewConsumer creates a new consumer using the given broker addresses and configuration.
NewConsumerFromClient creates a new consumer using the given client.
NewHashPartitioner returns a Partitioner which behaves as follows: if the message's key is nil, a random partition is chosen; otherwise the FNV-32a hash of the encoded key bytes is used, modulus the number of partitions, so that messages with the same key always land on the same partition.
NewManualPartitioner returns a Partitioner which uses the partition manually set in the provided ProducerMessage's Partition field as the partition to produce to.
NewMockBroker launches a fake Kafka broker.
NewMockBrokerAddr behaves like NewMockBroker but listens on the address you give it rather than just some ephemeral port.
No description provided by the author
NewOffsetManagerFromClient creates a new OffsetManager from the given client.
NewRandomPartitioner returns a Partitioner which chooses a random partition each time.
NewRoundRobinPartitioner returns a Partitioner which walks through the available partitions one at a time.
NewSyncProducer creates a new SyncProducer using the given broker addresses and configuration.
NewSyncProducerFromClient creates a new SyncProducer using the given client.

# Constants

No description provided by the author
Numeric error codes returned by the Kafka server.
GroupGenerationUndefined is a special value for the group generation field of Offset Commit Requests that should be used when a consumer group does not rely on Kafka for partition management.
NoResponse doesn't send any response; the TCP ACK is all you get.
OffsetNewest stands for the log head offset, i.e. the offset that will be assigned to the next message produced to the partition.
OffsetOldest stands for the oldest offset available on the broker for a partition.
ReceiveTime is a special value for the timestamp field of Offset Commit Requests which tells the broker to set the timestamp to the time at which the request was received.
WaitForAll waits for all replicas to commit before responding.
WaitForLocal waits for only the local commit to succeed before responding.

# Variables

ErrAlreadyConnected is the error returned when calling Open() on a Broker that is already connected or connecting.
ErrClosedClient is the error returned when a method is called on a client that has been closed.
ErrIncompleteResponse is the error returned when the server returns a syntactically valid response, but it does not contain the expected information.
ErrInsufficientData is returned when decoding and the packet is truncated.
ErrInvalidPartition is the error returned when a partitioner returns an invalid partition index (meaning one outside of the range [0...numPartitions-1]).
ErrMessageTooLarge is returned when the next message to consume is larger than the configured Consumer.Fetch.Max.
ErrNotConnected is the error returned when trying to send or call Close() on a Broker that is not connected.
ErrOutOfBrokers is the error returned when the client has run out of brokers to talk to because all of them errored or otherwise failed to respond.
ErrShuttingDown is returned when a producer receives a message during shutdown.
Logger is the instance of a StdLogger interface that Sarama writes connection management events to.
MaxRequestSize is the maximum size (in bytes) of any request that Sarama will attempt to send.
MaxResponseSize is the maximum size (in bytes) of any response that Sarama will attempt to parse.
PanicHandler is called for recovering from panics spawned internally to the library (and thus not recoverable by the caller's goroutine).

# Structs

Broker represents a single Kafka broker connection.
Config is used to pass multiple configuration options to Sarama's constructors.
ConsumerError is what is provided to the user when an error occurs.
No description provided by the author
No description provided by the author
ConsumerMessage encapsulates a Kafka message returned by the consumer.
No description provided by the author
MockBroker is a mock Kafka broker that is used in unit tests.
MockConsumerMetadataResponse is a `ConsumerMetadataResponse` builder.
MockFetchResponse is a `FetchResponse` builder.
MockMetadataResponse is a `MetadataResponse` builder.
MockOffsetCommitResponse is an `OffsetCommitResponse` builder.
MockOffsetFetchResponse is an `OffsetFetchResponse` builder.
MockOffsetResponse is an `OffsetResponse` builder.
MockProduceResponse is a `ProduceResponse` builder.
MockSequence is a mock response builder that is created from a sequence of concrete responses.
MockWrapper is a mock response builder that returns a particular concrete response regardless of the actual request passed to the `For` method.
No description provided by the author
PacketDecodingError is returned when there was an error (other than truncated data) decoding the Kafka broker's response.
PacketEncodingError is returned from a failure while encoding a Kafka packet.
No description provided by the author
No description provided by the author
ProducerError is the type of error generated when the producer fails to deliver a message.
No description provided by the author
No description provided by the author
ProducerMessage is the collection of elements passed to the Producer in order to send a message.
RequestResponse represents a Request/Response pair processed by MockBroker.
No description provided by the author

# Interfaces

AsyncProducer publishes Kafka messages using a non-blocking API.
Client is a generic Kafka client.
Consumer manages PartitionConsumers which process Kafka messages from brokers.
Encoder is a simple interface for any type that can be encoded as an array of bytes in order to be sent as the key or value of a Kafka message.
MockResponse is a response builder interface: it defines one method that generates a response based on a request body.
OffsetManager uses Kafka to store and fetch consumed partition offsets.
PartitionConsumer processes Kafka messages from a given topic and partition.
Partitioner is anything that, given a Kafka message and a number of partitions indexed [0...numPartitions-1], decides to which partition to send the message.
PartitionOffsetManager uses Kafka to store and fetch consumed partition offsets.
StdLogger is used to log error messages.
SyncProducer publishes Kafka messages, blocking until they have been acknowledged.
TestReporter has methods matching Go's testing.T to avoid importing `testing` in the main part of the library.

# Type aliases

ByteEncoder implements the Encoder interface for Go byte slices so that they can be used as the Key or Value in a ProducerMessage.
CompressionCodec represents the various compression codecs recognized by Kafka in messages.
ConfigurationError is the type of error returned from a constructor (e.g. NewClient, or NewConsumer) when the specified configuration is invalid.
ConsumerErrors is a type that wraps a batch of errors and implements the Error interface.
KError is the type of error that can be returned directly by the Kafka broker.
PartitionerConstructor is the type for a function capable of constructing new Partitioners.
ProducerErrors is a type that wraps a batch of "ProducerError"s and implements the Error interface.
RequiredAcks is used in Produce Requests to tell the broker how many replica acknowledgements it must see before responding.
StringEncoder implements the Encoder interface for Go strings so that they can be used as the Key or Value in a ProducerMessage.