# Packages
Package mocks provides mocks that can be used for testing applications
that use Sarama.
# Functions
NewAsyncProducer creates a new AsyncProducer using the given broker addresses and configuration.
NewAsyncProducerFromClient creates a new AsyncProducer using the given client.
NewBroker creates and returns a Broker targeting the given host:port address.
NewClient creates a new Client.
NewClusterAdmin creates a new ClusterAdmin using the given broker addresses and configuration.
NewConfig returns a new configuration instance with sane defaults.
NewConsumer creates a new consumer using the given broker addresses and configuration.
NewConsumerFromClient creates a new consumer using the given client.
NewConsumerGroup creates a new consumer group using the given broker addresses and configuration.
NewConsumerGroupFromClient creates a new consumer group using the given client.
NewCustomHashPartitioner is a wrapper around NewHashPartitioner, allowing the use of a custom hasher.
NewCustomPartitioner creates a default Partitioner but lets you specify the behavior of each component via options.
NewHashPartitioner returns a Partitioner which behaves as follows: if the message's key is nil, a random partition is chosen; otherwise the FNV-1a hash of the encoded bytes of the message key is used, modulus the number of partitions.
NewManualPartitioner returns a Partitioner which uses the partition manually set in the provided ProducerMessage's Partition field as the partition to produce to.
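For example, a minimal sketch of wiring a partitioner into the producer configuration (the topic name is hypothetical, and the import path assumes the `github.com/Shopify/sarama` module):

```go
package main

import "github.com/Shopify/sarama"

func main() {
	config := sarama.NewConfig()
	// Route each message to the partition set on ProducerMessage.Partition
	// instead of hashing the message key.
	config.Producer.Partitioner = sarama.NewManualPartitioner

	// With the manual partitioner, this message would land on partition 2
	// once handed to a producer built from config (see NewSyncProducer below).
	msg := &sarama.ProducerMessage{
		Topic:     "events", // hypothetical topic
		Partition: 2,
		Value:     sarama.StringEncoder("hello"),
	}
	_, _ = config, msg
}
```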
NewMockBroker launches a fake Kafka broker.
NewMockBrokerAddr behaves like NewMockBroker but listens on the address you give it rather than just some ephemeral port.
NewMockBrokerListener behaves like NewMockBrokerAddr but accepts connections on the listener specified.
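A sketch of how these fit into a unit test, assuming the `github.com/Shopify/sarama` import path (the test and package names are hypothetical):

```go
package example

import (
	"testing"

	"github.com/Shopify/sarama"
)

func TestClientAgainstMockBroker(t *testing.T) {
	mb := sarama.NewMockBroker(t, 1) // broker ID 1 on an ephemeral port
	defer mb.Close()

	// Advertise the mock itself as the only broker in metadata responses.
	mb.SetHandlerByMap(map[string]sarama.MockResponse{
		"MetadataRequest": sarama.NewMockMetadataResponse(t).
			SetBroker(mb.Addr(), mb.BrokerID()),
	})

	client, err := sarama.NewClient([]string{mb.Addr()}, sarama.NewConfig())
	if err != nil {
		t.Fatal(err)
	}
	defer client.Close()
}
```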
NewOffsetManagerFromClient creates a new OffsetManager from the given client.
NewRandomPartitioner returns a Partitioner which chooses a random partition each time.
NewReferenceHashPartitioner is like NewHashPartitioner except that it handles absolute values in the same way as the reference Java implementation.
NewRoundRobinPartitioner returns a Partitioner which walks through the available partitions one at a time.
NewSyncProducer creates a new SyncProducer using the given broker addresses and configuration.
NewSyncProducerFromClient creates a new SyncProducer using the given client.
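A minimal end-to-end sketch, assuming a broker at localhost:9092 and a hypothetical topic named "events":

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // required by SyncProducer

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "events", // hypothetical topic
		Value: sarama.StringEncoder("hello, kafka"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("delivered to partition %d at offset %d", partition, offset)
}
```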
WithAbsFirst means that the partitioner handles absolute values in the same way as the reference Java implementation.
WithCustomFallbackPartitioner lets you specify what HashPartitioner should be used in case a Distribution Key is empty.
WithCustomHashFunction lets you specify what hash function to use for the partitioning.
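A sketch of the two custom-hash entry points; swapping in 32-bit FNV-1 here is an arbitrary illustration, not a recommendation:

```go
package main

import (
	"hash/fnv"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()

	// Wrap NewHashPartitioner around a caller-supplied hash.Hash32 factory.
	config.Producer.Partitioner = sarama.NewCustomHashPartitioner(fnv.New32)

	// Or build the partitioner from options, additionally matching the
	// reference Java implementation's handling of absolute values.
	config.Producer.Partitioner = sarama.NewCustomPartitioner(
		sarama.WithAbsFirst(),
		sarama.WithCustomHashFunction(fnv.New32),
	)
	_ = config
}
```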
# Constants
ACL operation constants; ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/acl/AclOperation.java.
ACL permission type constants; ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/acl/AclPermissionType.java.
ACL resource type constants; ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/resource/ResourceType.java.
CompressionLevelDefault is the constant to use in CompressionLevel to have the default compression level for any codec.
Numeric error codes returned by the Kafka server.
GroupGenerationUndefined is a special value for the group generation field of Offset Commit Requests that should be used when a consumer group does not rely on Kafka for partition management.
NoResponse doesn't send any response; the TCP ACK is all you get.
OffsetNewest stands for the log head offset, i.e. the offset that will be assigned to the next message that will be produced to the partition.
OffsetOldest stands for the oldest offset available on the broker for a partition.
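Both values can be passed wherever an offset is expected; a brief sketch (broker address and topic hypothetical):

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// Where to begin when no committed offset exists (default: OffsetNewest).
	config.Consumer.Offsets.Initial = sarama.OffsetOldest

	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	// Or start a partition consumer explicitly from the oldest retained offset.
	pc, err := consumer.ConsumePartition("events", 0, sarama.OffsetOldest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()
}
```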
ReceiveTime is a special value for the timestamp field of Offset Commit Requests which tells the broker to set the timestamp to the time at which the request was received.
WaitForAll waits for all in-sync replicas to commit before responding.
WaitForLocal waits for only the local commit to succeed before responding.
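For instance, a short sketch of choosing an acknowledgement level on the producer config:

```go
package main

import "github.com/Shopify/sarama"

func main() {
	config := sarama.NewConfig()
	// Trade latency for durability: block until all in-sync replicas commit.
	// The default is sarama.WaitForLocal.
	config.Producer.RequiredAcks = sarama.WaitForAll
	_ = config
}
```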
# Variables
BalanceStrategyRange is the default and assigns partitions as ranges to consumer group members.
BalanceStrategyRoundRobin assigns partitions to members in alternating order.
ErrAlreadyConnected is the error returned when calling Open() on a Broker that is already connected or connecting.
ErrClosedClient is the error returned when a method is called on a client that has been closed.
ErrClosedConsumerGroup is the error returned when a method is called on a consumer group that has been closed.
ErrConsumerOffsetNotAdvanced is returned when a partition consumer didn't advance its offset after parsing a RecordBatch.
ErrControllerNotAvailable is returned when the server did not return a correct controller id.
ErrIncompleteResponse is the error returned when the server returns a syntactically valid response, but it does not contain the expected information.
ErrInsufficientData is returned when decoding and the packet is truncated.
ErrInvalidPartition is the error returned when a partitioner returns an invalid partition index (meaning one outside of the range [0...numPartitions-1]).
ErrMessageTooLarge is returned when the next message to consume is larger than the configured Consumer.Fetch.Max.
ErrNotConnected is the error returned when trying to send or call Close() on a Broker that is not connected.
ErrNoTopicsToUpdateMetadata is returned when Metadata.Full is set to false but no specific topics were found to update the metadata.
ErrOutOfBrokers is the error returned when the client has run out of brokers to talk to because all of them errored or otherwise failed to respond.
ErrShuttingDown is returned when a producer receives a message during shutdown.
Logger is the instance of a StdLogger interface that Sarama writes connection management events to.
MaxRequestSize is the maximum size (in bytes) of any request that Sarama will attempt to send.
MaxResponseSize is the maximum size (in bytes) of any response that Sarama will attempt to parse.
Effective constants defining the supported Kafka versions.
PanicHandler is called for recovering from panics spawned internally to the library (and thus not recoverable by the caller's goroutine).
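These feed Config.Version, which gates protocol features; a brief sketch (the chosen version is arbitrary):

```go
package main

import "github.com/Shopify/sarama"

func main() {
	config := sarama.NewConfig()
	// Enable protocol features up to this broker version; Config.Version
	// defaults to the minimum version Sarama supports.
	config.Version = sarama.V1_0_0_0
	_ = config
}
```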
# Structs
Broker represents a single Kafka broker connection.
Config is used to pass multiple configuration options to Sarama's constructors.
ConsumerError is what is provided to the user when an error occurs.
ConsumerMessage encapsulates a Kafka message returned by the consumer.
FetchRequest (API key 1) will fetch Kafka messages.
KafkaVersion instances represent versions of the upstream Kafka broker.
MockBroker is a mock Kafka broker that is used in unit tests.
MockConsumerMetadataResponse is a `ConsumerMetadataResponse` builder.
MockFetchResponse is a `FetchResponse` builder.
MockFindCoordinatorResponse is a `FindCoordinatorResponse` builder.
MockMetadataResponse is a `MetadataResponse` builder.
MockOffsetCommitResponse is an `OffsetCommitResponse` builder.
MockOffsetFetchResponse is an `OffsetFetchResponse` builder.
MockOffsetResponse is an `OffsetResponse` builder.
MockProduceResponse is a `ProduceResponse` builder.
MockSequence is a mock response builder that is created from a sequence of concrete responses.
MockWrapper is a mock response builder that returns a particular concrete response regardless of the actual request passed to the `For` method.
PacketDecodingError is returned when there was an error (other than truncated data) decoding the Kafka broker's response.
PacketEncodingError is returned from a failure while encoding a Kafka packet.
ProducerError is the type of error generated when the producer fails to deliver a message.
ProducerMessage is the collection of elements passed to the Producer in order to send a message.
Records implements a union type containing either a RecordBatch or a legacy MessageSet.
RequestResponse represents a Request/Response pair processed by MockBroker.
# Interfaces
AsyncProducer publishes Kafka messages using a non-blocking API.
BalanceStrategy is used to balance topics and partitions across members of a consumer group.
Client is a generic Kafka client.
ClusterAdmin is the administrative client for Kafka, which supports managing and inspecting topics, brokers, configurations and ACLs.
Consumer manages PartitionConsumers which process Kafka messages from brokers.
ConsumerGroup is responsible for dividing up processing of topics and partitions over a collection of processes (the members of the consumer group).
ConsumerGroupClaim processes Kafka messages from a given topic and partition within a consumer group.
ConsumerGroupHandler instances are used to handle individual topic/partition claims.
ConsumerGroupSession represents a consumer group member session.
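Taken together, a minimal sketch of these consumer-group interfaces in use (the broker address, group ID, and topic are hypothetical; the import path assumes `github.com/Shopify/sarama`):

```go
package main

import (
	"context"
	"log"

	"github.com/Shopify/sarama"
)

// handler is a hypothetical ConsumerGroupHandler: it logs each message and
// marks it as consumed so its offset can be committed.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		log.Printf("topic=%s partition=%d offset=%d", msg.Topic, msg.Partition, msg.Offset)
		sess.MarkMessage(msg, "")
	}
	return nil
}

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V1_0_0_0 // consumer groups need 0.10.2.0 or newer

	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "example-group", config)
	if err != nil {
		log.Fatal(err)
	}
	defer group.Close()

	// Consume blocks for the lifetime of one session; call it in a loop so
	// the member rejoins after each rebalance.
	for {
		if err := group.Consume(context.Background(), []string{"events"}, handler{}); err != nil {
			log.Fatal(err)
		}
	}
}
```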
DynamicConsistencyPartitioner can optionally be implemented by Partitioners in order to allow more flexibility than is originally allowed by the RequiresConsistency method in the Partitioner interface.
Encoder is a simple interface for any type that can be encoded as an array of bytes in order to be sent as the key or value of a Kafka message.
MockResponse is a response builder interface; it defines one method that generates a response based on a request body.
OffsetManager uses Kafka to store and fetch consumed partition offsets.
PartitionConsumer processes Kafka messages from a given topic and partition.
Partitioner is anything that, given a Kafka message and a number of partitions indexed [0...numPartitions-1], decides to which partition to send the message.
PartitionOffsetManager uses Kafka to store and fetch consumed partition offsets.
StdLogger is used to log error messages.
SyncProducer publishes Kafka messages, blocking until they have been acknowledged.
TestReporter has methods matching Go's testing.T to avoid importing `testing` in the main part of the library.
# Type aliases
BalanceStrategyPlan is the results of any BalanceStrategy.Plan attempt.
ByteEncoder implements the Encoder interface for Go byte slices so that they can be used as the Key or Value in a ProducerMessage.
CompressionCodec represents the various compression codecs recognized by Kafka in messages.
ConfigurationError is the type of error returned from a constructor (e.g. NewClient, or NewProducer) when the specified configuration is invalid.
ConsumerErrors is a type that wraps a batch of errors and implements the Error interface.
HashPartitionerOption lets you modify default values of the partitioner.
KError is the type of error that can be returned directly by the Kafka broker.
PartitionerConstructor is the type for a function capable of constructing new Partitioners.
ProducerErrors is a type that wraps a batch of "ProducerError"s and implements the Error interface.
RequestNotifierFunc is invoked when a mock broker processes a request successfully and provides the number of bytes read and written.
RequiredAcks is used in Produce Requests to tell the broker how many replica acknowledgements it must see before responding.
StringEncoder implements the Encoder interface for Go strings so that they can be used as the Key or Value in a ProducerMessage.