Module: github.com/streamdal/ibm-sarama (package)
Version: 1.42.2-streamdal1
Repository: https://github.com/streamdal/ibm-sarama.git
Documentation: pkg.go.dev

# README

Sarama: a Go client library for Apache Kafka™ (instrumented with Streamdal)

This library has been instrumented with Streamdal's Go SDK.

Getting Started

The following environment variables must be set before launching a producer or consumer:

  1. STREAMDAL_ADDRESS
    • Address for the streamdal server (Example: localhost:8082)
  2. STREAMDAL_AUTH_TOKEN
    • Authentication token used by the server (Example: 1234)
  3. STREAMDAL_SERVICE_NAME
    • How this application/service will be identified in Streamdal Console (Example: kafkacat)

By default, the library does not have Streamdal instrumentation enabled; to enable it, set EnableStreamdal to true on the main Config struct (see the What's changed? section below).
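A minimal sketch of opting in (the broker address and import alias are assumptions; the EnableStreamdal field is described under What's changed? below):

package main

import (
	"log"

	sarama "github.com/streamdal/ibm-sarama"
)

func main() {
	// NewConfig returns a configuration with sane defaults.
	cfg := sarama.NewConfig()

	// Opt in to the Streamdal shim (added by this fork; off by default).
	cfg.EnableStreamdal = true

	// SyncProducer requires Return.Successes to be enabled.
	cfg.Producer.Return.Successes = true

	// Assumed local broker address; adjust for your environment.
	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()
}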

🎉 That's it - you're ready to run the example! 🎉

For a more in-depth explanation of the changes and available settings, see the What's changed? section below.

Example

A fully working example is provided in examples/go-kafkacat-streamdal.

To run the example:

  1. Change directory to examples/go-kafkacat-streamdal
  2. Start a local Kafka instance: docker-compose up -d
  3. Install & start Streamdal: curl -sSL https://sh.streamdal.com | sh
  4. Open a browser to verify you can see the streamdal UI at: http://localhost:8080
    • Screenshot: streamdal-console-1
  5. Launch a consumer:
    STREAMDAL_ADDRESS=localhost:8082 \
    STREAMDAL_AUTH_TOKEN=1234 \
    STREAMDAL_SERVICE_NAME=kafkacat \
    go run go-kafkacat-streamdal.go --broker localhost consume --group testgroup test
    
  6. In another terminal, launch a producer:
    STREAMDAL_ADDRESS=localhost:8082 \
    STREAMDAL_AUTH_TOKEN=1234 \
    STREAMDAL_SERVICE_NAME=kafkacat \
    go run go-kafkacat-streamdal.go produce --broker localhost --topic test --key-delim=":"
    
  7. In the producer terminal, produce some data by pasting: testKey:{"email":"[email protected]"}
  8. In the consumer terminal, you should see: {"email":"[email protected]"}
  9. Open the Streamdal Console in a browser: http://localhost:8080
    • Screenshot: streamdal-console-2
  10. Create a pipeline that detects and masks PII fields & attach it to the consumer
    • Screenshot: streamdal-console-3
  11. Produce a message in producer terminal: testKey:{"email":"[email protected]"}
  12. You should see a masked message in the consumer terminal: {"email":"fo*********"}
    • Tip: If you detach the pipeline from the consumer and paste the same message again, you will see the original, unmasked message.

Passing "runtime" settings to the shim

By default, the shim will set the ComponentName to "kafka" and the OperationName to the name of the topic you are producing to or consuming from.

Also, by default, if the shim runs into any errors executing streamdal.Process(), it will swallow the errors and return the original value.

When producing, you can set StreamdalRuntimeConfig in the ProducerMessage:

msg := &ProducerMessage{
    Topic: "test",
    Value: StringEncoder("hello, world"), // Value is an Encoder, not a raw []byte
    StreamdalRuntimeConfig: &StreamdalRuntimeConfig{
        ComponentName: "kafka",
        OperationName: "produce",
    },
}

// And then pass the msg to the producer.Input() channel as usual
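Expanding on that, a hedged sketch of the full producing flow in the same unqualified style as the snippet above (broker address is an assumption; imports and error handling are trimmed):

cfg := NewConfig()
cfg.EnableStreamdal = true

// AsyncProducer publishes messages via the non-blocking Input() channel.
producer, err := NewAsyncProducer([]string{"localhost:9092"}, cfg)
if err != nil {
	log.Fatal(err)
}
defer producer.AsyncClose()

producer.Input() <- msg // msg from the snippet above

// In real code, also drain producer.Errors() (and producer.Successes(),
// if enabled) so the producer never blocks on its feedback channels.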

Passing StreamdalRuntimeConfig in the consumer is not implemented yet!

What's changed?

The goal of any shim is to make minimally invasive changes so that the original library remains backwards-compatible and does not present any "surprises" at runtime.

NOTE: IBM/sarama is significantly more complex than other Go Kafka libraries, so the integration is a bit more invasive than other shims.

The following changes have been made to the original library:

  1. Added EnableStreamdal bool to the main Config struct
    • This is how you enable the Streamdal instrumentation in the library.
  2. Added Streamdal setup in newAsyncProducer() and newConsumer()
    • newAsyncProducer() is used for both NewSyncProducer() and NewAsyncProducer()
  3. Updated async_producer.go:dispatcher() to call Streamdal's `Process()`
    • This function is called for every message produced to Kafka via the Input() channel
  4. Updated consumer.go:responseFeeder() to call Streamdal's `Process()`
    • This function feeds messages to Consume()
  5. A new file ./streamdal.go has been added to the library that contains helper funcs, structs and vars used for simplifying Streamdal instrumentation in the core library.
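To illustrate changes 2 and 4 from the consuming side, a hedged sketch (broker address, topic, partition, and offset choice are assumptions; imports are trimmed). With EnableStreamdal set, messages arriving on the Messages() channel have already been run through Streamdal's Process() by responseFeeder():

cfg := NewConfig()
cfg.EnableStreamdal = true

consumer, err := NewConsumer([]string{"localhost:9092"}, cfg)
if err != nil {
	log.Fatal(err)
}
defer consumer.Close()

// OffsetNewest starts reading at the log head (see Constants below).
pc, err := consumer.ConsumePartition("test", 0, OffsetNewest)
if err != nil {
	log.Fatal(err)
}
defer pc.Close()

for m := range pc.Messages() {
	// Value reflects any pipeline attached in the Streamdal Console.
	log.Printf("key=%s value=%s", m.Key, m.Value)
}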

# Packages

Package mocks provides mocks that can be used for testing applications that use Sarama.

# Functions

NewAsyncProducer creates a new AsyncProducer using the given broker addresses and configuration.
NewAsyncProducerFromClient creates a new Producer using the given client.
NewBalanceStrategyRange returns a range balance strategy, which is the default and assigns partitions as ranges to consumer group members.
NewBalanceStrategyRoundRobin returns a round-robin balance strategy, which assigns partitions to members in alternating order.
NewBalanceStrategySticky returns a sticky balance strategy, which assigns partitions to members with an attempt to preserve earlier assignments while maintaining a balanced partition distribution.
NewBroker creates and returns a Broker targeting the given host:port address.
NewClient creates a new Client.
NewClusterAdmin creates a new ClusterAdmin using the given broker addresses and configuration.
NewClusterAdminFromClient creates a new ClusterAdmin using the given client.
NewConfig returns a new configuration instance with sane defaults.
NewConsistentCRCHashPartitioner is like NewHashPartitioner except that it uses the *unsigned* crc32 hash of the encoded bytes of the message key modulo the number of partitions.
NewConsumer creates a new consumer using the given broker addresses and configuration.
NewConsumerFromClient creates a new consumer using the given client.
NewConsumerGroup creates a new consumer group using the given broker addresses and configuration.
NewConsumerGroupFromClient creates a new consumer group using the given client.
NewCustomHashPartitioner is a wrapper around NewHashPartitioner, allowing the use of a custom hasher.
NewCustomPartitioner creates a default Partitioner but lets you specify the behavior of each component via options.
NewHashPartitioner returns a Partitioner which behaves as follows.
NewKerberosClient creates kerberos client used to obtain TGT and TGS tokens.
NewManualPartitioner returns a Partitioner which uses the partition manually set in the provided ProducerMessage's Partition field as the partition to produce to.
NewMockBroker launches a fake Kafka broker.
NewMockBrokerAddr behaves like newMockBroker but listens on the address you give it rather than just some ephemeral port.
NewMockBrokerListener behaves like newMockBrokerAddr but accepts connections on the listener specified.
NewOffsetManagerFromClient creates a new OffsetManager from the given client.
NewRandomPartitioner returns a Partitioner which chooses a random partition each time.
NewReferenceHashPartitioner is like NewHashPartitioner except that it handles absolute values in the same way as the reference Java implementation.
NewRoundRobinPartitioner returns a Partitioner which walks through the available partitions one at a time.
NewSyncProducer creates a new SyncProducer using the given broker addresses and configuration.
NewSyncProducerFromClient creates a new SyncProducer using the given client.
ParseKafkaVersion parses and returns a Kafka version (or an error) from a string; see the sketch after this list.
WithAbsFirst means that the partitioner handles absolute values in the same way as the reference Java implementation.
WithCustomFallbackPartitioner lets you specify what HashPartitioner should be used in case a Distribution Key is empty.
WithCustomHashFunction lets you specify what hash function to use for the partitioning.
WithHashUnsigned means the partitioner treats the hashed value as unsigned when partitioning.
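A hedged sketch combining a few of these constructors (the version string and partitioner choice are arbitrary; imports are trimmed):

cfg := sarama.NewConfig()

// ParseKafkaVersion turns a version string into a KafkaVersion.
v, err := sarama.ParseKafkaVersion("3.6.0")
if err != nil {
	log.Fatal(err)
}
cfg.Version = v

// Hash-partition on the message key, treating the hash as unsigned.
cfg.Producer.Partitioner = sarama.NewCustomPartitioner(
	sarama.WithHashUnsigned(),
)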

# Constants

ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/acl/AclOperation.java.
ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/resource/PatternType.java.
ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/acl/AclPermissionType.java.
ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/resource/ResourceType.java.
APIKeySASLAuth is the API key for the SaslAuthenticate Kafka API.
BrokerLoggerResource constant type.
BrokerResource constant type.
CompressionGZIP compression using GZIP.
CompressionLevelDefault is the constant to use in CompressionLevel to have the default compression level for any codec.
CompressionLZ4 compression using LZ4.
CompressionNone no compression.
CompressionSnappy compression using snappy.
CompressionZSTD compression using ZSTD.
ControlRecordAbort is a control record for abort.
ControlRecordCommit is a control record for commit.
ControlRecordUnknown is a control record of unknown type.
Errors.BROKER_NOT_AVAILABLE.
Errors.CLUSTER_AUTHORIZATION_FAILED.
Errors.CONCURRENT_TRANSACTIONS.
Errors.COORDINATOR_NOT_AVAILABLE.
Errors.DELEGATION_TOKEN_AUTH_DISABLED.
Errors.DELEGATION_TOKEN_AUTHORIZATION_FAILED.
Errors.DELEGATION_TOKEN_EXPIRED.
Errors.DELEGATION_TOKEN_NOT_FOUND.
Errors.DELEGATION_TOKEN_OWNER_MISMATCH.
Errors.DELEGATION_TOKEN_REQUEST_NOT_ALLOWED.
Errors.DUPLICATE_SEQUENCE_NUMBER.
Errors.ELECTION_NOT_NEEDED.
Errors.ELIGIBLE_LEADERS_NOT_AVAILABLE.
Errors.FENCED_INSTANCE_ID.
Errors.FENCED_LEADER_EPOCH.
Errors.FETCH_SESSION_ID_NOT_FOUND.
Errors.GROUP_AUTHORIZATION_FAILED.
Errors.GROUP_ID_NOT_FOUND.
Errors.GROUP_MAX_SIZE_REACHED.
Errors.GROUP_SUBSCRIBED_TO_TOPIC.
Errors.ILLEGAL_GENERATION.
Errors.ILLEGAL_SASL_STATE.
Errors.INCONSISTENT_GROUP_PROTOCOL.
Errors.INVALID_COMMIT_OFFSET_SIZE.
Errors.INVALID_CONFIG.
Errors.INVALID_FETCH_SESSION_EPOCH.
Errors.INVALID_GROUP_ID.
Errors.CORRUPT_MESSAGE.
Errors.INVALID_FETCH_SIZE.
Errors.INVALID_PARTITIONS.
Errors.INVALID_PRINCIPAL_TYPE.
Errors.INVALID_PRODUCER_EPOCH.
Errors.INVALID_PRODUCER_ID_MAPPING.
Errors.INVALID_RECORD.
Errors.INVALID_REPLICA_ASSIGNMENT.
Errors.INVALID_REPLICATION_FACTOR.
Errors.INVALID_REQUEST.
Errors.INVALID_REQUIRED_ACKS.
Errors.INVALID_SESSION_TIMEOUT.
Errors.INVALID_TIMESTAMP.
Errors.INVALID_TOPIC_EXCEPTION.
Errors.INVALID_TRANSACTION_TIMEOUT.
Errors.INVALID_TXN_STATE.
Errors.KAFKA_STORAGE_ERROR.
Errors.LEADER_NOT_AVAILABLE.
Errors.LISTENER_NOT_FOUND.
Errors.LOG_DIR_NOT_FOUND.
Errors.MEMBER_ID_REQUIRED.
Errors.RECORD_LIST_TOO_LARGE.
Errors.MESSAGE_TOO_LARGE.
Errors.NETWORK_EXCEPTION.
Errors.NONE.
Errors.NON_EMPTY_GROUP.
Errors.NO_REASSIGNMENT_IN_PROGRESS.
Errors.NOT_CONTROLLER.
Errors.NOT_COORDINATOR.
Errors.NOT_ENOUGH_REPLICAS.
Errors.NOT_ENOUGH_REPLICAS_AFTER_APPEND.
Errors.NOT_LEADER_OR_FOLLOWER.
Errors.OFFSET_METADATA_TOO_LARGE.
Errors.OFFSET_NOT_AVAILABLE.
Errors.OFFSET_OUT_OF_RANGE.
Errors.COORDINATOR_LOAD_IN_PROGRESS.
Errors.OPERATION_NOT_ATTEMPTED.
Errors.OUT_OF_ORDER_SEQUENCE_NUMBER.
Errors.POLICY_VIOLATION.
Errors.PREFERRED_LEADER_NOT_AVAILABLE.
Errors.PRODUCER_FENCED.
Errors.REASSIGNMENT_IN_PROGRESS.
Errors.REBALANCE_IN_PROGRESS.
Errors.REPLICA_NOT_AVAILABLE.
Errors.REQUEST_TIMED_OUT.
Errors.SASL_AUTHENTICATION_FAILED.
Errors.SECURITY_DISABLED.
Errors.STALE_BROKER_EPOCH.
Errors.STALE_CONTROLLER_EPOCH.
Errors.THROTTLING_QUOTA_EXCEEDED.
Errors.TOPIC_ALREADY_EXISTS.
Errors.TOPIC_AUTHORIZATION_FAILED.
Errors.TOPIC_DELETION_DISABLED.
Errors.TRANSACTIONAL_ID_AUTHORIZATION_FAILED.
Errors.TRANSACTION_COORDINATOR_FENCED.
Errors.UNKNOWN_SERVER_ERROR.
Errors.UNKNOWN_LEADER_EPOCH.
Errors.UNKNOWN_MEMBER_ID.
Errors.UNKNOWN_PRODUCER_ID.
Errors.UNKNOWN_TOPIC_OR_PARTITION.
Errors.UNSTABLE_OFFSET_COMMIT.
Errors.UNSUPPORTED_COMPRESSION_TYPE.
Errors.UNSUPPORTED_FOR_MESSAGE_FORMAT.
Errors.UNSUPPORTED_SASL_MECHANISM.
Errors.UNSUPPORTED_VERSION.
GroupGenerationUndefined is a special value for the group generation field of Offset Commit Requests that should be used when a consumer group does not rely on Kafka for partition management.
NoResponse doesn't send any response, the TCP ACK is all you get.
OffsetNewest stands for the log head offset, i.e. the offset that will be assigned to the next message that will be produced to the partition.
OffsetOldest stands for the oldest offset available on the broker for a partition.
ProducerTxnFlagAbortableError when the producer encounters an abortable error; AbortTxn must be called in this case.
ProducerTxnFlagAbortingTransaction when aborting a transaction.
ProducerTxnFlagCommittingTransaction when committing a transaction.
ProducerTxnFlagEndTransaction when the transaction is about to be committed.
ProducerTxnFlagFatalError when the producer encounters a fatal error; it must be closed and recreated.
ProducerTxnFlagInError when an abortable or fatal error has occurred.
ProducerTxnFlagInitializing when the txnmgr is initializing.
ProducerTxnFlagInTransaction when a transaction has started.
ProducerTxnFlagReady when the producer is ready to begin a transaction.
ProducerTxnFlagUninitialized when the txnmgr has been created.
ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/quota/ClientQuotaEntity.java.
ref: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/DescribeClientQuotasRequest.java.
RangeBalanceStrategyName identifies strategies that use the range partition assignment strategy.
ReceiveTime is a special value for the timestamp field of Offset Commit Requests which tells the broker to set the timestamp to the time at which the request was received.
RoundRobinBalanceStrategyName identifies strategies that use the round-robin partition assignment strategy.
SASLExtKeyAuth is the reserved extension key name sent as part of the SASL/OAUTHBEARER initial client response.
SASLHandshakeV0 is v0 of the Kafka SASL handshake protocol.
SASLHandshakeV1 is v1 of the Kafka SASL handshake protocol.
SASLTypeOAuth represents the SASL/OAUTHBEARER mechanism (Kafka 2.0.0+).
SASLTypePlaintext represents the SASL/PLAIN mechanism.
SASLTypeSCRAMSHA256 represents the SCRAM-SHA-256 mechanism.
SASLTypeSCRAMSHA512 represents the SCRAM-SHA-512 mechanism.
StickyBalanceStrategyName identifies strategies that use the sticky-partition assignment strategy.
TopicResource constant type.
UnknownResource constant type.
WaitForAll waits for all in-sync replicas to commit before responding.
WaitForLocal waits for only the local commit to succeed before responding.

# Variables

Deprecated: use NewBalanceStrategyRange to avoid data race issue.
Deprecated: use NewBalanceStrategyRoundRobin to avoid data race issue.
Deprecated: use NewBalanceStrategySticky to avoid data race issue.
DebugLogger is the instance of a StdLogger that Sarama writes more verbose debug information to.
Effective constants defining the supported kafka versions.
ErrAddPartitionsToTxn is returned when AddPartitionsToTxn failed multiple times.
ErrAlreadyConnected is the error returned when calling Open() on a Broker that is already connected or connecting.
ErrBrokerNotFound is the error returned when there's no broker found for the requested ID.
ErrCannotTransitionNilError is returned when a transition is attempted with a nil error.
ErrClosedClient is the error returned when a method is called on a client that has been closed.
ErrClosedConsumerGroup is the error returned when a method is called on a consumer group that has been closed.
ErrConsumerOffsetNotAdvanced is returned when a partition consumer didn't advance its offset after parsing a RecordBatch.
ErrControllerNotAvailable is returned when the server didn't give a correct controller id.
ErrCreateACLs is the type of error returned when ACL creation failed.
ErrDeleteRecords is the type of error returned when deleting the required records fails.
ErrIncompleteResponse is the error returned when the server returns a syntactically valid response, but it does not contain the expected information.
ErrInsufficientData is returned when decoding and the packet is truncated.
ErrInvalidPartition is the error returned when a partitioner returns an invalid partition index (meaning one outside of the range [0...numPartitions-1]).
ErrMessageTooLarge is returned when the next message to consume is larger than the configured Consumer.Fetch.Max.
ErrNonTransactedProducer is returned when calling BeginTxn, CommitTxn, or AbortTxn on a non-transactional producer.
ErrNotConnected is the error returned when trying to send or call Close() on a Broker that is not connected.
ErrNoTopicsToUpdateMetadata is returned when Meta.Full is set to false but no specific topics were found to update the metadata.
ErrOutOfBrokers is the error returned when the client has run out of brokers to talk to because all of them errored or otherwise failed to respond.
ErrReassignPartitions is returned when altering partition assignments for a topic fails.
ErrShuttingDown is returned when a producer receives a message during shutdown.
ErrTransactionNotReady when transaction status is invalid for the current action.
ErrTransitionNotAllowed when txnmgr state transition is not valid.
ErrTxnOffsetCommit is returned when TxnOffsetCommit failed multiple times.
ErrTxnUnableToParseResponse when response is nil.
ErrUnknownScramMechanism is returned when a user tries to AlterUserScramCredentials with an unknown SCRAM mechanism.
Logger is the instance of a StdLogger interface that Sarama writes connection management events to; see the sketch after this list.
MaxRequestSize is the maximum size (in bytes) of any request that Sarama will attempt to send.
MaxResponseSize is the maximum size (in bytes) of any response that Sarama will attempt to parse.
MultiErrorFormat specifies the formatter applied to format multierrors.
PanicHandler is called for recovering from panics spawned internally to the library (and thus not recoverable by the caller's goroutine).
Effective constants defining the supported kafka versions.
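A hedged sketch of overriding the package loggers mentioned above (destination and prefixes are assumptions):

import (
	"log"
	"os"

	sarama "github.com/streamdal/ibm-sarama"
)

func init() {
	// Logger receives connection management events; DebugLogger is more verbose.
	sarama.Logger = log.New(os.Stdout, "[sarama] ", log.LstdFlags)
	sarama.DebugLogger = log.New(os.Stdout, "[sarama-debug] ", log.LstdFlags)
}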

# Structs

AccessToken contains an access token used to authenticate a SASL/OAUTHBEARER client along with associated metadata.
Acl holds information about acl type.
AclCreation is a wrapper around Resource and Acl type.
AclCreationResponse is an acl creation response type.
AddOffsetsToTxnRequest adds offsets to a transaction request.
AddOffsetsToTxnResponse is a response type for adding offsets to txns.
AddPartitionsToTxnRequest is an add-partitions request.
AddPartitionsToTxnResponse is the response type for adding partitions to a transaction.
AlterConfigsRequest is an alter config request type.
AlterConfigsResource is an alter config resource type.
AlterConfigsResourceResponse is a response type for alter config resource.
AlterConfigsResponse is a response type for alter config.
ApiVersionsResponseKey contains the APIs supported by the broker.
Broker represents a single Kafka broker connection.
Config is used to pass multiple configuration options to Sarama's constructors.
ConsumerError is what is provided to the user when an error occurs.
ConsumerGroupMemberAssignment holds the member assignment for a consumer group https://github.com/apache/kafka/blob/trunk/clients/src/main/resources/common/message/ConsumerProtocolAssignment.json.
ConsumerGroupMemberMetadata holds the metadata for consumer group https://github.com/apache/kafka/blob/trunk/clients/src/main/resources/common/message/ConsumerProtocolSubscription.json.
ConsumerMessage encapsulates a Kafka message returned by the consumer.
ConsumerMetadataRequest is used for metadata requests.
ConsumerMetadataResponse holds the response for a consumer group metadata request.
Control records are returned as records by FetchRequest; however, unlike "normal" records, they carry no application-level meaning.
CreateAclsRequest is an acl creation request.
CreateAclsResponse is an acl creation response type.
DeleteAclsRequest is a delete acl request.
DeleteAclsResponse is a delete acl response.
DescribeAclsRequest is a describe acl request type.
DescribeAclsResponse is a describe acl response type.
A filter to be applied to matching client quotas.
DescribeLogDirsRequest is a describe request to get partitions' log size.
DescribeLogDirsRequestTopic is a describe request about the log dir of one or more partitions within a Topic.
DescribeLogDirsResponsePartition describes a partition's log directory.
DescribeLogDirsResponseTopic contains a topic's partitions descriptions.
DescribeUserScramCredentialsRequest is a request to get list of SCRAM user names.
DescribeUserScramCredentialsRequestUser is a describe request about specific user name.
FetchRequest (API key 1) will fetch Kafka messages.
FilterResponse is a filter response type.
GroupDescription contains each described group.
GroupMemberDescription contains the group members.
IncrementalAlterConfigsRequest is an incremental alter config request type.
IncrementalAlterConfigsResponse is a response type for incremental alter config.
KafkaVersion instances represent versions of the upstream Kafka broker.
MatchingAcl is a matching acl type.
Message is a Kafka message type.
MockBroker is a mock Kafka broker that is used in unit tests.
MockConsumerMetadataResponse is a `ConsumerMetadataResponse` builder.
MockFetchResponse is a `FetchResponse` builder.
MockFindCoordinatorResponse is a `FindCoordinatorResponse` builder.
MockInitProducerIDResponse is an `InitProducerIDResponse` builder.
MockMetadataResponse is a `MetadataResponse` builder.
MockOffsetCommitResponse is an `OffsetCommitResponse` builder.
MockOffsetFetchResponse is an `OffsetFetchResponse` builder.
MockOffsetResponse is an `OffsetResponse` builder.
MockProduceResponse is a `ProduceResponse` builder.
MockSequence is a mock response builder that is created from a sequence of concrete responses.
MockWrapper is a mock response builder that returns a particular concrete response regardless of the actual request passed to the `For` method.
PacketDecodingError is returned when there was an error (other than truncated data) decoding the Kafka broker's response.
PacketEncodingError is returned from a failure while encoding a Kafka packet.
PartitionError is a partition error type.
PartitionMetadata contains each partition in the topic.
ProducerError is the type of error generated when the producer fails to deliver a message.
partition_responses in protocol.
ProducerMessage is the collection of elements passed to the Producer in order to send a message.
Describe a component for applying a client quota filter.
Record is a Kafka record type.
RecordHeader stores key and value for a record header.
Records implements a union type containing either a RecordBatch or a legacy MessageSet.
RequestResponse represents a Request/Response pair processed by MockBroker.
Resource holds information about acl resource type.
ResourceAcls is an acl resource type.
StickyAssignorUserDataV0 holds topic partition information for an assignment.
StickyAssignorUserDataV1 holds topic partition information for an assignment.
StreamdalRuntimeConfig is an optional configuration structure that can be set on a ProducerMessage to influence streamdal shim behavior (consumer-side support is not yet implemented; see the README above).
TopicMetadata contains each topic in the response.

# Interfaces

AccessTokenProvider is the interface that encapsulates how implementors can generate access tokens for Kafka broker authentication.
AsyncProducer publishes Kafka messages using a non-blocking API.
BalanceStrategy is used to balance topics and partitions across members of a consumer group.
Client is a generic Kafka client.
ClusterAdmin is the administrative client for Kafka, which supports managing and inspecting topics, brokers, configurations and ACLs.
Consumer manages PartitionConsumers which process Kafka messages from brokers.
ConsumerGroup is responsible for dividing up processing of topics and partitions over a collection of processes (the members of the consumer group).
ConsumerGroupClaim processes Kafka messages from a given topic and partition within a consumer group.
ConsumerGroupHandler instances are used to handle individual topic/partition claims; see the sketch after this list.
ConsumerGroupSession represents a consumer group member session.
ConsumerInterceptor allows you to intercept (and possibly mutate) the records received by the consumer before they are sent to the messages channel.
DynamicConsistencyPartitioner can optionally be implemented by Partitioners in order to allow more flexibility than is originally allowed by the RequiresConsistency method in the Partitioner interface.
Encoder is a simple interface for any type that can be encoded as an array of bytes in order to be sent as the key or value of a Kafka message.
MockResponse is a response builder interface; it defines one method that allows generating a response based on a request body.
OffsetManager uses Kafka to store and fetch consumed partition offsets.
PartitionConsumer processes Kafka messages from a given topic and partition.
Partitioner is anything that, given a Kafka message and a number of partitions indexed [0...numPartitions-1], decides to which partition to send the message.
PartitionOffsetManager uses Kafka to store and fetch consumed partition offsets.
ProducerInterceptor allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster.
SCRAMClient is an interface to a SCRAM client implementation.
StdLogger is used to log error messages.
SyncProducer publishes Kafka messages, blocking until they have been acknowledged.
TestReporter has methods matching go's testing.T to avoid importing `testing` in the main part of the library.
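To show how ConsumerGroup, ConsumerGroupHandler, ConsumerGroupSession, and ConsumerGroupClaim fit together, a hedged sketch (group ID, topic, and broker address are assumptions; imports are trimmed):

type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		log.Printf("topic=%s offset=%d", msg.Topic, msg.Offset)
		sess.MarkMessage(msg, "") // record progress via the offset manager
	}
	return nil
}

func consume(ctx context.Context) error {
	cfg := sarama.NewConfig()

	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "testgroup", cfg)
	if err != nil {
		return err
	}
	defer group.Close()

	// Consume blocks for the lifetime of one session; call it in a loop
	// so the member rejoins after every rebalance.
	for {
		if err := group.Consume(ctx, []string{"test"}, handler{}); err != nil {
			return err
		}
		if err := ctx.Err(); err != nil {
			return err
		}
	}
}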

# Type aliases

BalanceStrategyPlan is the results of any BalanceStrategy.Plan attempt.
ByteEncoder implements the Encoder interface for Go byte slices so that they can be used as the Key or Value in a ProducerMessage.
CompressionCodec represents the various compression codecs recognized by Kafka in messages.
ConfigResourceType is a type for resources that have configs.
ConfigurationError is the type of error returned from a constructor (e.g. NewClient) when the specified configuration is invalid.
ConsumerErrors is a type that wraps a batch of errors and implements the Error interface.
ControlRecordType ...
HashPartitionerOption lets you modify default values of the partitioner.
KError is the type of error that can be returned directly by the Kafka broker.
PartitionerConstructor is the type for a function capable of constructing new Partitioners.
ProduceCallback function is called once the produce response has been parsed or could not be read.
ProducerErrors is a type that wraps a batch of "ProducerError"s and implements the Error interface.
ProducerTxnStatusFlag mark current transaction status.
RequestNotifierFunc is invoked when a mock broker processes a request successfully and provides the number of bytes read and written.
RequiredAcks is used in Produce Requests to tell the broker how many replica acknowledgements it must see before responding.
SASLMechanism specifies the SASL mechanism the client uses to authenticate with the broker.
StringEncoder implements the Encoder interface for Go strings so that they can be used as the Key or Value in a ProducerMessage.