github.com/psy-core/sarama
module / package
Version: 1.26.2
Repository: https://github.com/psy-core/sarama.git
Documentation: pkg.go.dev

# README

sarama


Sarama is an MIT-licensed Go client library for Apache Kafka version 0.8 (and later).

Getting started

  • API documentation and examples are available via godoc.
  • Mocks for testing are available in the mocks subpackage.
  • The examples directory contains more elaborate example applications.
  • The tools directory contains command line tools that can be useful for testing, diagnostics, and instrumentation.

You might also want to look at the Frequently Asked Questions.
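
For orientation, here is a minimal synchronous-producer sketch using the constructors documented below; the broker address and topic name are placeholders, not anything this package defaults to:

```go
package main

import (
	"log"

	"github.com/psy-core/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	// SyncProducer requires successes to be returned on the channel
	// it drains internally.
	cfg.Producer.Return.Successes = true

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "my-topic",
		Value: sarama.StringEncoder("hello"),
	})
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("stored at partition %d, offset %d", partition, offset)
}
```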

Compatibility and API stability

Sarama provides a "2 releases + 2 months" compatibility guarantee: we support the two latest stable releases of Kafka and Go, and we provide a two month grace period for older releases. This means we currently officially support Go 1.12 through 1.13, and Kafka 2.1 through 2.4, although older releases are still likely to work.

Sarama follows semantic versioning and provides API stability via the gopkg.in service. You can import a version with a guaranteed stable API via http://gopkg.in/Shopify/sarama.v1. A changelog is available in the repository.


# Packages

Package mocks provides mocks that can be used for testing applications that use Sarama.

# Functions

NewAsyncProducer creates a new AsyncProducer using the given broker addresses and configuration.
NewAsyncProducerFromClient creates a new AsyncProducer using the given client.
NewBroker creates and returns a Broker targeting the given host:port address.
NewClient creates a new Client.
NewClusterAdmin creates a new ClusterAdmin using the given broker addresses and configuration.
NewClusterAdminFromClient creates a new ClusterAdmin using the given client.
NewConfig returns a new configuration instance with sane defaults.
NewConsumer creates a new consumer using the given broker addresses and configuration.
NewConsumerFromClient creates a new consumer using the given client.
NewConsumerGroup creates a new consumer group using the given broker addresses and configuration.
NewConsumerGroupFromClient creates a new consumer group using the given client.
NewCustomHashPartitioner is a wrapper around NewHashPartitioner, allowing the use of a custom hasher.
NewCustomPartitioner creates a default Partitioner but lets you specify the behavior of each component via options.
NewHashPartitioner returns a Partitioner which behaves as follows: if the message's key is nil, a random partition is chosen; otherwise the FNV-1a hash of the encoded key is used, modulo the number of partitions, so messages with the same key always land on the same partition.
NewKerberosClient creates a Kerberos client used to obtain TGT and TGS tokens. It uses the gokrb5 library, a pure Go Kerberos client with some GSS-API capabilities and SPNEGO support.
NewManualPartitioner returns a Partitioner which uses the partition manually set in the provided ProducerMessage's Partition field as the partition to produce to.
NewMockBroker launches a fake Kafka broker.
NewMockBrokerAddr behaves like NewMockBroker but listens on the address you give it rather than just some ephemeral port.
NewMockBrokerListener behaves like NewMockBrokerAddr but accepts connections on the listener specified.
NewOffsetManagerFromClient creates a new OffsetManager from the given client.
NewRandomPartitioner returns a Partitioner which chooses a random partition each time.
NewReferenceHashPartitioner is like NewHashPartitioner except that it handles absolute values in the same way as the reference Java implementation.
NewRoundRobinPartitioner returns a Partitioner which walks through the available partitions one at a time.
NewSyncProducer creates a new SyncProducer using the given broker addresses and configuration.
NewSyncProducerFromClient creates a new SyncProducer using the given client.
ParseKafkaVersion parses a string and returns the corresponding KafkaVersion, or an error if the string cannot be parsed.
WithAbsFirst means that the partitioner handles absolute values in the same way as the reference Java implementation.
WithCustomFallbackPartitioner lets you specify what HashPartitioner should be used in case a Distribution Key is empty.
WithCustomHashFunction lets you specify what hash function to use for the partitioning.
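
The partitioner and version helpers above compose through Config. A small sketch (the version string and the FNV-1 hash function are arbitrary choices for illustration):

```go
package main

import (
	"hash/fnv"
	"log"

	"github.com/psy-core/sarama"
)

func main() {
	cfg := sarama.NewConfig()

	// Pin the wire protocol version; ParseKafkaVersion returns an error
	// for strings it cannot parse.
	v, err := sarama.ParseKafkaVersion("2.4.0")
	if err != nil {
		log.Fatalln(err)
	}
	cfg.Version = v

	// Swap the default partitioner for a hash partitioner with a custom
	// hash function.
	cfg.Producer.Partitioner = sarama.NewCustomHashPartitioner(fnv.New32)

	_ = cfg // pass cfg to NewClient / NewSyncProducer / NewConsumer, etc.
}
```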

# Constants

AclOperation constants; ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/acl/AclOperation.java.
PatternType constants; ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/resource/PatternType.java.
AclPermissionType constants; ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/acl/AclPermissionType.java.
ResourceType constants; ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/resource/ResourceType.java.
APIKeySASLAuth is the API key for the SaslAuthenticate Kafka API.
BrokerLoggerResource constant type.
BrokerResource constant type.
CompressionGZIP compression using GZIP.
CompressionLevelDefault is the constant to use in CompressionLevel to have the default compression level for any codec.
CompressionLZ4 compression using LZ4.
CompressionNone no compression.
CompressionSnappy compression using snappy.
CompressionZSTD compression using ZSTD.
ControlRecordAbort is a control record for abort.
ControlRecordCommit is a control record for commit.
ControlRecordUnknown is a control record of unknown type.
Numeric error codes returned by the Kafka server.
GroupGenerationUndefined is a special value for the group generation field of Offset Commit Requests that should be used when a consumer group does not rely on Kafka for partition management.
NoResponse doesn't send any response; the TCP ACK is all you get.
OffsetNewest stands for the log head offset, i.e. the offset that will be assigned to the next message that will be produced to the partition.
OffsetOldest stands for the oldest offset available on the broker for a partition.
RangeBalanceStrategyName identifies strategies that use the range partition assignment strategy.
ReceiveTime is a special value for the timestamp field of Offset Commit Requests which tells the broker to set the timestamp to the time at which the request was received.
RoundRobinBalanceStrategyName identifies strategies that use the round-robin partition assignment strategy.
SASLExtKeyAuth is the reserved extension key name sent as part of the SASL/OAUTHBEARER initial client response.
SASLHandshakeV0 is v0 of the Kafka SASL handshake protocol.
SASLHandshakeV1 is v1 of the Kafka SASL handshake protocol.
SASLTypeOAuth represents the SASL/OAUTHBEARER mechanism (Kafka 2.0.0+).
SASLTypePlaintext represents the SASL/PLAIN mechanism.
SASLTypeSCRAMSHA256 represents the SCRAM-SHA-256 mechanism.
SASLTypeSCRAMSHA512 represents the SCRAM-SHA-512 mechanism.
StickyBalanceStrategyName identifies strategies that use the sticky-partition assignment strategy.
TopicResource constant type.
UnknownResource constant type.
WaitForAll waits for all in-sync replicas to commit before responding.
WaitForLocal waits for only the local commit to succeed before responding.
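
Several of these constants are plugged straight into Config fields. A brief sketch of the common ones (the values chosen here are illustrative, not recommendations):

```go
package main

import "github.com/psy-core/sarama"

func main() {
	cfg := sarama.NewConfig()

	// Start reading at the oldest available offset when no committed
	// offset exists for a partition.
	cfg.Consumer.Offsets.Initial = sarama.OffsetOldest

	// Wait for all in-sync replicas to commit before the broker responds.
	cfg.Producer.RequiredAcks = sarama.WaitForAll

	// Compress produced batches with Snappy at the codec's default level.
	cfg.Producer.Compression = sarama.CompressionSnappy
	cfg.Producer.CompressionLevel = sarama.CompressionLevelDefault

	_ = cfg
}
```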

# Variables

BalanceStrategyRange is the default and assigns partitions as ranges to consumer group members.
BalanceStrategyRoundRobin assigns partitions to members in alternating order.
BalanceStrategySticky assigns partitions to members with an attempt to preserve earlier assignments while maintaining a balanced partition distribution.
ErrAlreadyConnected is the error returned when calling Open() on a Broker that is already connected or connecting.
ErrClosedClient is the error returned when a method is called on a client that has been closed.
ErrClosedConsumerGroup is the error returned when a method is called on a consumer group that has been closed.
ErrConsumerOffsetNotAdvanced is returned when a partition consumer didn't advance its offset after parsing a RecordBatch.
ErrControllerNotAvailable is returned when the server didn't give a correct controller id.
ErrIncompleteResponse is the error returned when the server returns a syntactically valid response, but it does not contain the expected information.
ErrInsufficientData is returned when decoding and the packet is truncated.
ErrInvalidPartition is the error returned when a partitioner returns an invalid partition index (meaning one outside of the range [0...numPartitions-1]).
ErrMessageTooLarge is returned when the next message to consume is larger than the configured Consumer.Fetch.Max.
ErrNotConnected is the error returned when trying to send or call Close() on a Broker that is not connected.
ErrNoTopicsToUpdateMetadata is returned when Meta.Full is set to false but no specific topics were found to update the metadata.
ErrOutOfBrokers is the error returned when the client has run out of brokers to talk to because all of them errored or otherwise failed to respond.
ErrShuttingDown is returned when a producer receives a message during shutdown.
Logger is the instance of a StdLogger interface that Sarama writes connection management events to.
MaxRequestSize is the maximum size (in bytes) of any request that Sarama will attempt to send.
MaxResponseSize is the maximum size (in bytes) of any response that Sarama will attempt to parse.
Effective constants defining the supported kafka versions.
PanicHandler is called for recovering from panics spawned internally to the library (and thus not recoverable by the caller's goroutine).
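
A short sketch of how these variables come into play: redirecting Logger and testing for ErrOutOfBrokers. The broker address is a placeholder, and a nil config makes NewClient fall back to NewConfig defaults:

```go
package main

import (
	"log"
	"os"

	"github.com/psy-core/sarama"
)

func main() {
	// Sarama writes connection-management events to this logger;
	// by default they are discarded.
	sarama.Logger = log.New(os.Stdout, "[sarama] ", log.LstdFlags)

	client, err := sarama.NewClient([]string{"localhost:9092"}, nil)
	if err == sarama.ErrOutOfBrokers {
		log.Fatalln("no reachable brokers:", err)
	} else if err != nil {
		log.Fatalln(err)
	}
	defer client.Close()
}
```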

# Structs

AccessToken contains an access token used to authenticate a SASL/OAUTHBEARER client along with associated metadata.
Acl holds information about acl type.
AclCreation is a wrapper around Resource and Acl type.
AclCreationResponse is an acl creation response type.
AddOffsetsToTxnRequest is a request to add offsets to a transaction.
AddOffsetsToTxnResponse is the response to an AddOffsetsToTxnRequest.
AddPartitionsToTxnRequest is a request to add partitions to a transaction.
AddPartitionsToTxnResponse is the response to an AddPartitionsToTxnRequest; it carries per-partition errors.
AlterConfigsRequest is an alter config request type.
AlterConfigsResource is an alter config resource type.
AlterConfigsResourceResponse is a response type for alter config resource.
AlterConfigsResponse is a response type for alter config.
ApiVersionsRequest asks the broker which API versions it supports.
ApiVersionsResponse is an api version response type.
ApiVersionsResponseBlock is an api version response block type.
Broker represents a single Kafka broker connection.
Config is used to pass multiple configuration options to Sarama's constructors.
ConsumerError is what is provided to the user when an error occurs.
ConsumerGroupMemberAssignment holds the member assignment for a consumer group.
ConsumerGroupMemberMetadata holds the metadata for a consumer group.
ConsumerMessage encapsulates a Kafka message returned by the consumer.
ConsumerMetadataRequest is used for metadata requests.
ConsumerMetadataResponse holds the response for a consumer group metadata request.
Control records are returned as a record by fetchRequest. However, unlike "normal" records, they mean nothing application-wise.
CreateAclsRequest is an acl creation request.
CreateAclsResponse is an acl creation response type.
DeleteAclsRequest is a delete acl request.
DeleteAclsResponse is a delete acl response.
DescribeAclsRequest is a describe acl request type.
DescribeAclsResponse is a describe acl response type.
DescribeLogDirsRequest is a describe request to get partitions' log size.
DescribeLogDirsRequestTopic is a describe request about the log dir of one or more partitions within a Topic.
DescribeLogDirsResponsePartition describes a partition's log directory.
DescribeLogDirsResponseTopic contains a topic's partitions descriptions.
ErrDeleteRecords is the type of error returned when deleting the required records fails.
FetchRequest (API key 1) will fetch Kafka messages.
FilterResponse is a filter response type.
KafkaVersion instances represent versions of the upstream Kafka broker.
MatchingAcl is a matching acl type.
Message is a kafka message type.
MockBroker is a mock Kafka broker that is used in unit tests.
MockConsumerMetadataResponse is a `ConsumerMetadataResponse` builder.
MockFetchResponse is a `FetchResponse` builder.
MockFindCoordinatorResponse is a `FindCoordinatorResponse` builder.
MockMetadataResponse is a `MetadataResponse` builder.
MockOffsetCommitResponse is an `OffsetCommitResponse` builder.
MockOffsetFetchResponse is an `OffsetFetchResponse` builder.
MockOffsetResponse is an `OffsetResponse` builder.
MockProduceResponse is a `ProduceResponse` builder.
MockSequence is a mock response builder that is created from a sequence of concrete responses.
MockWrapper is a mock response builder that returns a particular concrete response regardless of the actual request passed to the `For` method.
MultiError is used to bundle multiple errors into a single error value.
PacketDecodingError is returned when there was an error (other than truncated data) decoding the Kafka broker's response.
PacketEncodingError is returned from a failure while encoding a Kafka packet.
PartitionError is a partition error type.
ProducerError is the type of error generated when the producer fails to deliver a message.
partition_responses in the protocol.
ProducerMessage is the collection of elements passed to the Producer in order to send a message.
Record is kafka record type.
RecordHeader stores key and value for a record header.
Records implements a union type containing either a RecordBatch or a legacy MessageSet.
RequestResponse represents a Request/Response pair processed by MockBroker.
Resource holds information about acl resource type.
ResourceAcls is an acl resource type.
StickyAssignorUserDataV0 holds topic partition information for an assignment.
StickyAssignorUserDataV1 holds topic partition information for an assignment.
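
The mock types above are intended for tests. A hedged sketch of wiring MockBroker with a canned metadata response ("my-topic" is a placeholder):

```go
package sarama_test

import (
	"testing"

	"github.com/psy-core/sarama"
)

func TestClientAgainstMockBroker(t *testing.T) {
	// NewMockBroker launches a fake broker on an ephemeral port.
	broker := sarama.NewMockBroker(t, 1)
	defer broker.Close()

	// Answer metadata requests with this broker as leader of partition 0.
	broker.SetHandlerByMap(map[string]sarama.MockResponse{
		"MetadataRequest": sarama.NewMockMetadataResponse(t).
			SetBroker(broker.Addr(), broker.BrokerID()).
			SetLeader("my-topic", 0, broker.BrokerID()),
	})

	client, err := sarama.NewClient([]string{broker.Addr()}, nil)
	if err != nil {
		t.Fatal(err)
	}
	defer client.Close()
}
```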

# Interfaces

AccessTokenProvider is the interface that encapsulates how implementors can generate access tokens for Kafka broker authentication.
AsyncProducer publishes Kafka messages using a non-blocking API.
BalanceStrategy is used to balance topics and partitions across members of a consumer group.
Client is a generic Kafka client.
ClusterAdmin is the administrative client for Kafka, which supports managing and inspecting topics, brokers, configurations and ACLs.
Consumer manages PartitionConsumers which process Kafka messages from brokers.
ConsumerGroup is responsible for dividing up processing of topics and partitions over a collection of processes (the members of the consumer group).
ConsumerGroupClaim processes Kafka messages from a given topic and partition within a consumer group.
ConsumerGroupHandler instances are used to handle individual topic/partition claims.
ConsumerGroupSession represents a consumer group member session.
DynamicConsistencyPartitioner can optionally be implemented by Partitioners in order to allow more flexibility than is originally allowed by the RequiresConsistency method in the Partitioner interface.
Encoder is a simple interface for any type that can be encoded as an array of bytes in order to be sent as the key or value of a Kafka message.
MockResponse is a response builder interface it defines one method that allows generating a response based on a request body.
OffsetManager uses Kafka to store and fetch consumed partition offsets.
PartitionConsumer processes Kafka messages from a given topic and partition.
Partitioner is anything that, given a Kafka message and a number of partitions indexed [0...numPartitions-1], decides to which partition to send the message.
PartitionOffsetManager uses Kafka to store and fetch consumed partition offsets.
SCRAMClient is an interface to a SCRAM client implementation.
StdLogger is used to log error messages.
SyncProducer publishes Kafka messages, blocking until they have been acknowledged.
TestReporter has methods matching go's testing.T to avoid importing `testing` in the main part of the library.
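
A sketch of the consumer-group interfaces in use: a minimal ConsumerGroupHandler plus the Consume loop. Broker address, group ID, and topic are placeholders; the version pin is one valid choice, since consumer groups require Kafka 0.10.2 or newer:

```go
package main

import (
	"context"
	"log"

	"github.com/psy-core/sarama"
)

// handler implements ConsumerGroupHandler: Setup and Cleanup bracket each
// session, and ConsumeClaim runs once per claimed topic/partition.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		log.Printf("%s/%d@%d: %s", msg.Topic, msg.Partition, msg.Offset, msg.Value)
		sess.MarkMessage(msg, "") // mark as consumed; committed per config
	}
	return nil
}

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_4_0_0

	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "example-group", cfg)
	if err != nil {
		log.Fatalln(err)
	}
	defer group.Close()

	for {
		// Consume blocks for the lifetime of a session and must be called
		// again after each rebalance.
		if err := group.Consume(context.Background(), []string{"my-topic"}, handler{}); err != nil {
			log.Fatalln(err)
		}
	}
}
```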

# Type aliases

BalanceStrategyPlan is the results of any BalanceStrategy.Plan attempt.
ByteEncoder implements the Encoder interface for Go byte slices so that they can be used as the Key or Value in a ProducerMessage.
CompressionCodec represents the various compression codecs recognized by Kafka in messages.
ConfigResourceType is a type for resources that have configs.
ConfigurationError is the type of error returned from a constructor (e.g. NewClient, or NewConsumer) when the specified configuration is invalid.
ConsumerErrors is a type that wraps a batch of errors and implements the Error interface.
ControlRecordType identifies the kind of control record (commit, abort, or unknown).
HashPartitionOption lets you modify default values of the partitioner.
KError is the type of error that can be returned directly by the Kafka broker.
PartitionerConstructor is the type for a function capable of constructing new Partitioners.
ProducerErrors is a type that wraps a batch of "ProducerError"s and implements the Error interface.
RequestNotifierFunc is invoked when a mock broker processes a request successfully and provides the number of bytes read and written.
RequiredAcks is used in Produce Requests to tell the broker how many replica acknowledgements it must see before responding.
SASLMechanism specifies the SASL mechanism the client uses to authenticate with the broker.
StringEncoder implements the Encoder interface for Go strings so that they can be used as the Key or Value in a ProducerMessage.
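
StringEncoder and ByteEncoder cover the common cases; any type satisfying Encoder can serve as a message Key or Value. A sketch of a hypothetical JSON encoder (jsonEncoder and newJSONEncoder are illustrative names, not part of the package):

```go
package main

import (
	"encoding/json"

	"github.com/psy-core/sarama"
)

// jsonEncoder carries a pre-marshalled JSON payload and satisfies Encoder.
type jsonEncoder struct{ b []byte }

func newJSONEncoder(v interface{}) (sarama.Encoder, error) {
	b, err := json.Marshal(v)
	return jsonEncoder{b}, err
}

func (e jsonEncoder) Encode() ([]byte, error) { return e.b, nil }
func (e jsonEncoder) Length() int             { return len(e.b) }

func main() {
	enc, err := newJSONEncoder(map[string]int{"answer": 42})
	if err != nil {
		panic(err)
	}
	msg := &sarama.ProducerMessage{Topic: "my-topic", Value: enc}
	_ = msg // hand msg to an AsyncProducer or SyncProducer
}
```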