Module: github.com/segmentio/kafka-go
Version: 0.4.47
Repository: https://github.com/segmentio/kafka-go.git
Documentation: pkg.go.dev

# README

kafka-go

Motivations

We rely on both Go and Kafka a lot at Segment. Unfortunately, the state of the Go client libraries for Kafka at the time of this writing was not ideal. The available options were:

  • sarama, which is by far the most popular but is quite difficult to work with. It is poorly documented, the API exposes low level concepts of the Kafka protocol, and it doesn't support recent Go features like contexts. It also passes all values as pointers which causes large numbers of dynamic memory allocations, more frequent garbage collections, and higher memory usage.

  • confluent-kafka-go is a cgo based wrapper around librdkafka, which means it introduces a dependency on a C library for all Go code that uses the package. It has much better documentation than sarama but still lacks support for Go contexts.

  • goka is a more recent Kafka client for Go which focuses on a specific usage pattern. It provides abstractions for using Kafka as a message passing bus between services rather than an ordered log of events, but this is not the typical use case of Kafka for us at Segment. The package also depends on sarama for all interactions with Kafka.

This is where kafka-go comes into play. It provides both low and high level APIs for interacting with Kafka, mirroring concepts and implementing interfaces of the Go standard library to make it easy to use and integrate with existing software.

Note:

In order to better align with our newly adopted Code of Conduct, the kafka-go project has renamed our default branch to main. For the full details of our Code Of Conduct see this document.

Kafka versions

kafka-go is currently tested with Kafka versions 0.10.1.0 to 2.7.1. While it should also be compatible with later versions, newer features available in the Kafka API may not yet be implemented in the client.

Go versions

kafka-go requires Go version 1.15 or later.

Connection

The Conn type is the core of the kafka-go package. It wraps around a raw network connection to expose a low-level API to a Kafka server.

Here are some examples showing typical use of a connection object:

// to produce messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetWriteDeadline(time.Now().Add(10*time.Second))
_, err = conn.WriteMessages(
    kafka.Message{Value: []byte("one!")},
    kafka.Message{Value: []byte("two!")},
    kafka.Message{Value: []byte("three!")},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}

// to consume messages
topic := "my-topic"
partition := 0

conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", topic, partition)
if err != nil {
    log.Fatal("failed to dial leader:", err)
}

conn.SetReadDeadline(time.Now().Add(10*time.Second))
batch := conn.ReadBatch(10e3, 1e6) // fetch 10KB min, 1MB max

b := make([]byte, 10e3) // 10KB max per message
for {
    n, err := batch.Read(b)
    if err != nil {
        break
    }
    fmt.Println(string(b[:n]))
}

if err := batch.Close(); err != nil {
    log.Fatal("failed to close batch:", err)
}

if err := conn.Close(); err != nil {
    log.Fatal("failed to close connection:", err)
}

To Create Topics

By default, Kafka has auto.create.topics.enable='true' (KAFKA_AUTO_CREATE_TOPICS_ENABLE='true' in the wurstmeister/kafka Docker image). If this value is set to 'true', topics will be created as a side effect of kafka.DialLeader like so:

// to create topics when auto.create.topics.enable='true'
conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
if err != nil {
    panic(err.Error())
}

If auto.create.topics.enable='false' then you will need to create topics explicitly like so:

// to create topics when auto.create.topics.enable='false'
topic := "my-topic"

conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var controllerConn *kafka.Conn
controllerConn, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer controllerConn.Close()


topicConfigs := []kafka.TopicConfig{
    {
        Topic:             topic,
        NumPartitions:     1,
        ReplicationFactor: 1,
    },
}

err = controllerConn.CreateTopics(topicConfigs...)
if err != nil {
    panic(err.Error())
}

To Connect To Leader Via a Non-leader Connection

// to connect to the kafka leader via an existing non-leader connection rather than using DialLeader
conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()
controller, err := conn.Controller()
if err != nil {
    panic(err.Error())
}
var connLeader *kafka.Conn
connLeader, err = kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
    panic(err.Error())
}
defer connLeader.Close()

To list topics

conn, err := kafka.Dial("tcp", "localhost:9092")
if err != nil {
    panic(err.Error())
}
defer conn.Close()

partitions, err := conn.ReadPartitions()
if err != nil {
    panic(err.Error())
}

m := map[string]struct{}{}

for _, p := range partitions {
    m[p.Topic] = struct{}{}
}
for k := range m {
    fmt.Println(k)
}

Because it is low level, the Conn type turns out to be a great building block for higher level abstractions, like the Reader for example.

Reader

A Reader is another concept exposed by the kafka-go package, which intends to make it simpler to implement the typical use case of consuming from a single topic-partition pair. A Reader also automatically handles reconnections and offset management, and exposes an API that supports asynchronous cancellations and timeouts using Go contexts.

Note that it is important to call Close() on a Reader when a process exits. The kafka server needs a graceful disconnect to stop it from continuing to attempt to send messages to the connected clients. The given example will not call Close() if the process is terminated with SIGINT (ctrl-c at the shell) or SIGTERM (as docker stop or a kubernetes restart does). This can result in a delay when a new reader on the same topic connects (e.g. a new process started or a new container running). Use a signal.Notify handler to close the reader on process shutdown, as sketched after the example below.

// make a new reader that consumes from topic-A, partition 0, at offset 42
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092","localhost:9093", "localhost:9094"},
    Topic:     "topic-A",
    Partition: 0,
    MaxBytes:  10e6, // 10MB
})
r.SetOffset(42)

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}
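As noted above, a shutdown handler can make sure the reader is closed when the process receives SIGINT or SIGTERM. The following is a minimal sketch using the os, os/signal, and syscall packages; the channel name and goroutine layout are illustrative only, and r is the Reader from the example above:

// close the reader when the process receives SIGINT or SIGTERM
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

go func() {
    <-sigs
    // Close performs the graceful disconnect; any blocked ReadMessage call
    // returns an error afterwards, which ends the read loop above.
    if err := r.Close(); err != nil {
        log.Fatal("failed to close reader:", err)
    }
}()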

Consumer Groups

kafka-go also supports Kafka consumer groups including broker managed offsets. To enable consumer groups, simply specify the GroupID in the ReaderConfig.

ReadMessage automatically commits offsets when using consumer groups.

// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:   "consumer-group-id",
    Topic:     "topic-A",
    MaxBytes:  10e6, // 10MB
})

for {
    m, err := r.ReadMessage(context.Background())
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}

There are a number of limitations when using consumer groups:

  • (*Reader).SetOffset will return an error when GroupID is set
  • (*Reader).Offset will always return -1 when GroupID is set
  • (*Reader).Lag will always return -1 when GroupID is set
  • (*Reader).ReadLag will return an error when GroupID is set
  • (*Reader).Stats will return a partition of -1 when GroupID is set

Explicit Commits

kafka-go also supports explicit commits. Instead of calling ReadMessage, call FetchMessage followed by CommitMessages.

ctx := context.Background()
for {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        break
    }
    fmt.Printf("message at topic/partition/offset %v/%v/%v: %s = %s\n", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value))
    if err := r.CommitMessages(ctx, m); err != nil {
        log.Fatal("failed to commit messages:", err)
    }
}

When committing messages in consumer groups, the message with the highest offset for a given topic/partition determines the value of the committed offset for that partition. For example, if messages at offsets 1, 2, and 3 of a single partition were retrieved by calls to FetchMessage, calling CommitMessages with the message at offset 3 will also result in committing the messages at offsets 1 and 2 for that partition.
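As an illustration, here is a minimal sketch (assuming r is the consumer-group Reader from the example above and that the fetched messages all come from a single partition) that fetches a few messages but commits only the last one:

ctx := context.Background()

// fetch a few messages, remembering them in order
var msgs []kafka.Message
for i := 0; i < 3; i++ {
    m, err := r.FetchMessage(ctx)
    if err != nil {
        log.Fatal("failed to fetch message:", err)
    }
    msgs = append(msgs, m)
}

// committing the message with the highest offset also commits every earlier
// offset fetched for that topic/partition
last := msgs[len(msgs)-1]
if err := r.CommitMessages(ctx, last); err != nil {
    log.Fatal("failed to commit messages:", err)
}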

Managing Commits

By default, CommitMessages will synchronously commit offsets to Kafka. For improved performance, you can instead periodically commit offsets to Kafka by setting CommitInterval on the ReaderConfig.

// make a new reader that consumes from topic-A
r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    MaxBytes:       10e6, // 10MB
    CommitInterval: time.Second, // flushes commits to Kafka every second
})

Writer

To produce messages to Kafka, a program may use the low-level Conn API, but the package also provides a higher level Writer type which is more appropriate to use in most cases as it provides additional features:

  • Automatic retries and reconnections on errors.
  • Configurable distribution of messages across available partitions.
  • Synchronous or asynchronous writes of messages to Kafka.
  • Asynchronous cancellation using contexts.
  • Flushing of pending messages on close to support graceful shutdowns.
  • Creation of a missing topic before publishing a message. Note: this was the default behaviour up to version v0.4.30.

// make a writer that produces to topic-A, using the least-bytes distribution
w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:   "topic-A",
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
	kafka.Message{
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}

Missing topic creation before publication

// Make a writer that publishes messages to topic-A.
// The topic will be created if it is missing.
w := &kafka.Writer{
    Addr:                   kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:                  "topic-A",
    AllowAutoTopicCreation: true,
}

messages := []kafka.Message{
    {
        Key:   []byte("Key-A"),
        Value: []byte("Hello World!"),
    },
    {
        Key:   []byte("Key-B"),
        Value: []byte("One!"),
    },
    {
        Key:   []byte("Key-C"),
        Value: []byte("Two!"),
    },
}

var err error
const retries = 3
for i := 0; i < retries; i++ {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    
    // attempt to create topic prior to publishing the message
    err = w.WriteMessages(ctx, messages...)
    if errors.Is(err, kafka.LeaderNotAvailable) || errors.Is(err, context.DeadlineExceeded) {
        time.Sleep(time.Millisecond * 250)
        continue
    }

    if err != nil {
        log.Fatalf("unexpected error %v", err)
    }
    break
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}

Writing to multiple topics

Normally, the Writer's Topic (or WriterConfig.Topic) is used to initialize a single-topic writer. By leaving that configuration out, you can instead define the topic on a per-message basis by setting Message.Topic.

w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    // NOTE: When Topic is not defined here, each Message must define it instead.
	Balancer: &kafka.LeastBytes{},
}

err := w.WriteMessages(context.Background(),
    // NOTE: Each Message has Topic defined, otherwise an error is returned.
	kafka.Message{
        Topic: "topic-A",
		Key:   []byte("Key-A"),
		Value: []byte("Hello World!"),
	},
	kafka.Message{
        Topic: "topic-B",
		Key:   []byte("Key-B"),
		Value: []byte("One!"),
	},
	kafka.Message{
        Topic: "topic-C",
		Key:   []byte("Key-C"),
		Value: []byte("Two!"),
	},
)
if err != nil {
    log.Fatal("failed to write messages:", err)
}

if err := w.Close(); err != nil {
    log.Fatal("failed to close writer:", err)
}

NOTE: These two patterns are mutually exclusive. If you set Writer.Topic, you must not also explicitly define Message.Topic on the messages you are writing. The opposite applies when you do not define a topic for the writer. The Writer will return an error if it detects this ambiguity.

Compatibility with other clients

Sarama

If you're switching from Sarama and need/want to use the same algorithm for message partitioning, you can either use the kafka.Hash balancer or the kafka.ReferenceHash balancer:

  • kafka.Hash = sarama.NewHashPartitioner
  • kafka.ReferenceHash = sarama.NewReferenceHashPartitioner

The kafka.Hash and kafka.ReferenceHash balancers would route messages to the same partitions that the two aforementioned Sarama partitioners would route them to.

w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: &kafka.Hash{},
}

librdkafka and confluent-kafka-go

Use the kafka.CRC32Balancer balancer to get the same behaviour as librdkafka's default consistent_random partition strategy.

w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: kafka.CRC32Balancer{},
}

Java

Use the kafka.Murmur2Balancer balancer to get the same behaviour as the canonical Java client's default partitioner. Note: the Java class allows you to directly specify the partition, which is not permitted by this balancer.

w := &kafka.Writer{
	Addr:     kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:    "topic-A",
	Balancer: kafka.Murmur2Balancer{},
}

Compression

Compression can be enabled on the Writer by setting the Compression field:

w := &kafka.Writer{
	Addr:        kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:       "topic-A",
	Compression: kafka.Snappy,
}

The Reader determines whether the consumed messages are compressed by examining the message attributes. However, the package(s) for all expected codecs must be imported so that they get loaded correctly.

Note: in versions prior to 0.4, programs had to import compression packages to install codecs and support reading compressed messages from kafka. This is no longer the case and importing the compression packages is now a no-op.

TLS Support

For a bare-bones Conn type, or in the Reader/Writer configs, you can specify a dialer option for TLS support. If the TLS field is nil, it will not connect with TLS. Note: connecting to a Kafka cluster with TLS enabled without configuring TLS on the Conn/Reader/Writer can manifest as opaque io.ErrUnexpectedEOF errors.

Connection

dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")

Reader

dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    Dialer:         dialer,
})

Writer

Direct Writer creation

w := kafka.Writer{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Topic:     "topic-A",
    Balancer:  &kafka.Hash{},
    Transport: &kafka.Transport{
        TLS: &tls.Config{},
    },
}

Using kafka.NewWriter

dialer := &kafka.Dialer{
    Timeout:   10 * time.Second,
    DualStack: true,
    TLS:       &tls.Config{...tls config...},
}

w := kafka.NewWriter(kafka.WriterConfig{
	Brokers: []string{"localhost:9092", "localhost:9093", "localhost:9094"},
	Topic:   "topic-A",
	Balancer: &kafka.Hash{},
	Dialer:   dialer,
})

Note that kafka.NewWriter and kafka.WriterConfig are deprecated and will be removed in a future release.

SASL Support

You can specify an option on the Dialer to use SASL authentication. The Dialer can be used directly to open a Conn or it can be passed to a Reader or Writer via their respective configs. If the SASLMechanism field is nil, it will not authenticate with SASL.

SASL Authentication Types

Plain

mechanism := plain.Mechanism{
    Username: "username",
    Password: "password",
}
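The plain and scram mechanisms live in the github.com/segmentio/kafka-go/sasl/plain and github.com/segmentio/kafka-go/sasl/scram sub-packages, which must be imported in addition to the main package. A minimal sketch of wiring the plain mechanism above into a Dialer (the scram mechanism below is used the same way):

// plain.Mechanism satisfies the sasl.Mechanism interface, so it can be
// assigned to Dialer.SASLMechanism (or Transport.SASL) directly
dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

conn, err := dialer.DialContext(context.Background(), "tcp", "localhost:9093")
if err != nil {
    panic(err)
}
defer conn.Close()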

SCRAM

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

Connection

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

conn, err := dialer.DialContext(ctx, "tcp", "localhost:9093")

Reader

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

dialer := &kafka.Dialer{
    Timeout:       10 * time.Second,
    DualStack:     true,
    SASLMechanism: mechanism,
}

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:        []string{"localhost:9092","localhost:9093", "localhost:9094"},
    GroupID:        "consumer-group-id",
    Topic:          "topic-A",
    Dialer:         dialer,
})

Writer

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources,
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

w := kafka.Writer{
	Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
	Topic:     "topic-A",
	Balancer:  &kafka.Hash{},
	Transport: sharedTransport,
}

Client

mechanism, err := scram.Mechanism(scram.SHA512, "username", "password")
if err != nil {
    panic(err)
}

// Transports are responsible for managing connection pools and other resources,
// it's generally best to create a few of these and share them across your
// application.
sharedTransport := &kafka.Transport{
    SASL: mechanism,
}

client := &kafka.Client{
    Addr:      kafka.TCP("localhost:9092", "localhost:9093", "localhost:9094"),
    Timeout:   10 * time.Second,
    Transport: sharedTransport,
}
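The Client is a request/response style API. As a sketch (reusing the client constructed above; the request fields shown are only one possibility), a metadata request can be used to list the topics in the cluster:

// request cluster metadata and print the topic names
resp, err := client.Metadata(context.Background(), &kafka.MetadataRequest{
    Addr: client.Addr,
})
if err != nil {
    panic(err)
}

for _, t := range resp.Topics {
    fmt.Println(t.Name)
}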

Reading all messages within a time range

startTime := time.Now().Add(-time.Hour)
endTime := time.Now()
batchSize := int(10e6) // 10MB

r := kafka.NewReader(kafka.ReaderConfig{
    Brokers:   []string{"localhost:9092", "localhost:9093", "localhost:9094"},
    Topic:     "my-topic1",
    Partition: 0,
    MaxBytes:  batchSize,
})

r.SetOffsetAt(context.Background(), startTime)

for {
    m, err := r.ReadMessage(context.Background())

    if err != nil {
        break
    }
    if m.Time.After(endTime) {
        break
    }
    // TODO: process message
    fmt.Printf("message at offset %d: %s = %s\n", m.Offset, string(m.Key), string(m.Value))
}

if err := r.Close(); err != nil {
    log.Fatal("failed to close reader:", err)
}

Logging

For visibility into the operations of the Reader/Writer types, configure a logger on creation.

Reader

func logf(msg string, a ...interface{}) {
	fmt.Printf(msg, a...)
	fmt.Println()
}

r := kafka.NewReader(kafka.ReaderConfig{
	Brokers:     []string{"localhost:9092", "localhost:9093", "localhost:9094"},
	Topic:       "my-topic1",
	Partition:   0,
	Logger:      kafka.LoggerFunc(logf),
	ErrorLogger: kafka.LoggerFunc(logf),
})

Writer

func logf(msg string, a ...interface{}) {
	fmt.Printf(msg, a...)
	fmt.Println()
}

w := &kafka.Writer{
	Addr:        kafka.TCP("localhost:9092"),
	Topic:       "topic",
	Logger:      kafka.LoggerFunc(logf),
	ErrorLogger: kafka.LoggerFunc(logf),
}

Testing

Subtle behavior changes in later Kafka versions have caused some historical tests to break. If you are running against Kafka 2.3.1 or later, exporting the KAFKA_SKIP_NETTEST=1 environment variable will skip those tests.

Run Kafka locally in docker

docker-compose up -d

Run tests

KAFKA_VERSION=2.3.1 \
  KAFKA_SKIP_NETTEST=1 \
  go test -race ./...

# Packages

Package gzip does nothing, it's kept for backward compatibility to avoid breaking the majority of programs that imported it to install the compression codec, which is now always included.
Package lz4 does nothing, it's kept for backward compatibility to avoid breaking the majority of programs that imported it to install the compression codec, which is now always included.
Package snappy does nothing, it's kept for backward compatibility to avoid breaking the majority of programs that imported it to install the compression codec, which is now always included.
Package topics is an experimental package that provides additional tooling around Kafka Topics.
Package zstd does nothing, it's kept for backward compatibility to avoid breaking the majority of programs that imported it to install the compression codec, which is now always included.

# Functions

Dial is a convenience wrapper for DefaultDialer.Dial.
DialContext is a convenience wrapper for DefaultDialer.DialContext.
DialLeader is a convenience wrapper for DefaultDialer.DialLeader.
DialPartition is a convenience wrapper for DefaultDialer.DialPartition.
FirstOffsetOf constructs an OffsetRequest which asks for the first offset of the partition given as argument.
LastOffsetOf constructs an OffsetRequest which asks for the last offset of the partition given as argument.
LookupPartition is a convenience wrapper for DefaultDialer.LookupPartition.
LookupPartitions is a convenience wrapper for DefaultDialer.LookupPartitions.
Marshal encodes v into a binary representation of the value in the kafka data format.
NewBrokerResolver constructs a Resolver from r.
NewBytes constructs a Bytes value from a byte slice.
NewConn returns a new kafka connection for the given topic and partition.
NewConnWith returns a new kafka connection configured with config.
NewConsumerGroup creates a new ConsumerGroup.
NewReader creates and returns a new Reader configured with config.
NewRecordReader constructs a RecordReader which exposes the sequence of records passed as arguments.
NewWriter creates and returns a new Writer configured with config.
ReadAll reads b into a byte slice.
TCP constructs an address with the network set to "tcp".
TimeOffsetOf constructs an OffsetRequest which asks for a partition offset at a given time.
Unmarshal decodes a binary representation from b into v.

# Constants

CoordinatorKeyTypeConsumer type is used when looking for a Group coordinator.
CoordinatorKeyTypeTransaction type is used when looking for a Transaction coordinator.
The least recent offset available for a partition.
The most recent offset available for a partition.
PatternTypeAny matches any resource pattern type.
PatternTypeLiteral represents a literal name.
PatternTypeMatch perform pattern matching.
PatternTypePrefixed represents a prefixed name.
PatternTypeUnknown represents any PatternType which this client cannot understand.
See https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigResource.java#L36.
Seek to an absolute offset.
Seek relative to the current offset.
This flag may be combined to any of the SeekAbsolute and SeekCurrent constants to skip the bound check that the connection would do otherwise.
Seek relative to the last offset available in the partition.
Seek relative to the first offset available in the partition.
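As a brief illustration, the seek constants are intended for use with (*Conn).Seek; a minimal sketch, assuming conn was obtained from kafka.DialLeader as in the README examples above:

// move the connection's read cursor to absolute offset 42; SeekDontCheck
// skips the offset bound check and avoids an extra round trip to the broker
offset, err := conn.Seek(42, kafka.SeekAbsolute|kafka.SeekDontCheck)
if err != nil {
    log.Fatal("failed to seek:", err)
}
fmt.Println("reading from offset", offset)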

# Variables

DefaultClientID is the default value used as ClientID of kafka connections.
DefaultDialer is the default dialer used when none is specified.
DefaultTransport is the default transport used by kafka clients in this package.
ErrGenerationEnded is returned by the context.Context issued by the Generation's Start function when the context has been closed.
ErrGroupClosed is returned by ConsumerGroup.Next when the group has already been closed.

# Structs

AddOffsetsToTxnRequest is the request structure for the AddOffsetsToTxn function.
AddOffsetsToTxnResponse is the response structure for the AddOffsetsToTxn function.
AddPartitionsToTxnRequest is the request structure for the AddPartitionsToTxn function.
AddPartitionsToTxnResponse is the response structure for the AddPartitionsToTxn function.
AddPartitionToTxn represents a partition to be added to a transaction.
AddPartitionToTxnPartition represents the state of a single partition in response to adding to a transaction.
AlterClientQuotasRequest represents a request sent to a kafka broker to alter client quotas.
AlterClientQuotasResponse represents a response from a kafka broker to an alter client quotas request.
AlterConfigsRequest represents a request sent to a kafka broker to alter configs.
AlterConfigsResponse represents a response from a kafka broker to an alter config request.
AlterConfigsResponseResource helps map errors to specific resources in an alter config response.
AlterPartitionReassignmentsRequest is a request to the AlterPartitionReassignments API.
AlterPartitionReassignmentsRequestAssignment contains the requested reassignments for a single partition.
AlterPartitionReassignmentsResponse is a response from the AlterPartitionReassignments API.
AlterPartitionReassignmentsResponsePartitionResult contains the detailed result of doing reassignments for a single partition.
AlterUserScramCredentialsRequest represents a request sent to a kafka broker to alter user scram credentials.
AlterUserScramCredentialsResponse represents a response from a kafka broker to an alter user credentials request.
ApiVersionsRequest is a request to the ApiVersions API.
ApiVersionsResponse is a response from the ApiVersions API.
ApiVersionsResponseApiKey includes the details of which versions are supported for a single API.
A Batch is an iterator over a sequence of messages fetched from a kafka server.
Broker represents a kafka broker in a kafka cluster.
Client is a high-level API to interact with kafka brokers.
Conn represents a connection to a kafka broker.
ConnConfig is a configuration object used to create new instances of Conn.
ConsumerGroup models a Kafka consumer group.
ConsumerGroupConfig is a configuration object used to create new instances of ConsumerGroup.
CRC32Balancer is a Balancer that uses the CRC32 hash function to determine which partition to route messages to.
CreateACLsRequest represents a request sent to a kafka broker to add new ACLs.
CreateACLsResponse represents a response from a kafka broker to an ACL creation request.
CreatePartitionsRequest represents a request sent to a kafka broker to create and update topic partitions.
CreatePartitionsResponse represents a response from a kafka broker to a partition creation request.
CreateTopicsRequest represents a request sent to a kafka broker to create new topics.
CreateTopicsResponse represents a response from a kafka broker to a topic creation request.
DeleteACLsRequest represents a request sent to a kafka broker to delete ACLs.
DeleteACLsResponse represents a response from a kafka broker to an ACL deletion request.
DeleteGroupsRequest represents a request sent to a kafka broker to delete consumer groups.
DeleteGroupsResponse represents a response from a kafka broker to a consumer group deletion request.
DeleteTopicsRequest represents a request sent to a kafka broker to delete topics.
DeleteTopicsResponse represents a response from a kafka broker to a topic deletion request.
DescribeACLsRequest represents a request sent to a kafka broker to describe existing ACLs.
DescribeACLsResponse represents a response from a kafka broker to an ACL describe request.
DescribeClientQuotasRequest represents a request sent to a kafka broker to describe client quotas.
DescribeClientQuotasResponse represents a response from a kafka broker to a describe client quotas request.
DescribeConfigResponseConfigEntry.
DescribeConfigResponseConfigSynonym.
DescribeConfigResponseResource.
DescribeConfigsRequest represents a request sent to a kafka broker to describe configs.
DescribeConfigsResponse represents a response from a kafka broker to a describe config request.
DescribeGroupsRequest is a request to the DescribeGroups API.
DescribeGroupsResponse is a response from the DescribeGroups API.
GroupMemberAssignmentsInfo stores the topic partition assignment data for a group member.
DescribeGroupsResponseGroup contains the response details for a single group.
MemberInfo represents the membership information for a single group member.
GroupMemberMetadata stores metadata associated with a group member.
DescribeUserScramCredentialsRequest represents a request sent to a kafka broker to describe user scram credentials.
DescribeUserScramCredentialsResponse represents a response from a kafka broker to a describe user credentials request.
The Dialer type mirrors the net.Dialer API but is designed to open kafka connections instead of raw network connections.
DurationStats is a data structure that carries a summary of observed duration values.
ElectLeadersRequest is a request to the ElectLeaders API.
ElectLeadersResponse is a response from the ElectLeaders API.
ElectLeadersResponsePartitionResult contains the response details for a single partition.
EndTxnRequest represents a request sent to a kafka broker to end a transaction.
EndTxnResponse represents a response from a kafka broker to an end transaction request.
FetchRequest represents a request sent to a kafka broker to retrieve records from a topic partition.
FetchResponse represents a response from a kafka broker to a fetch request.
FindCoordinatorRequest is the request structure for the FindCoordinator function.
FindCoordinatorResponse is the response structure for the FindCoordinator function.
FindCoordinatorResponseCoordinator contains details about the found coordinator.
Generation represents a single consumer group generation.
GroupMember describes a single participant in a consumer group.
GroupMemberTopic is a mapping from a topic to a list of partitions in the topic.
GroupProtocol represents a consumer group protocol.
GroupProtocolAssignment represents an assignment of topics and partitions for a group member.
Hash is a Balancer that uses the provided hash function to determine which partition to route messages to.
HeartbeatRequest represents a heartbeat sent to kafka to indicate consumer liveness.
HeartbeatResponse represents a response from a heartbeat request.
IncrementalAlterConfigsRequest is a request to the IncrementalAlterConfigs API.
IncrementalAlterConfigsRequestConfig describes a single config key/value pair that should be altered.
IncrementalAlterConfigsRequestResource contains the details of a single resource type whose configs should be altered.
IncrementalAlterConfigsResponse is a response from the IncrementalAlterConfigs API.
IncrementalAlterConfigsResponseResource contains the response details for a single resource whose configs were updated.
InitProducerIDRequest is the request structure for the InitProducerId function.
InitProducerIDResponse is the response structure for the InitProducerId function.
JoinGroupRequest is the request structure for the JoinGroup function.
JoinGroupResponse is the response structure for the JoinGroup function.
JoinGroupResponseMember represents a group member in a response to a JoinGroup request.
LeastBytes is a Balancer implementation that routes messages to the partition that has received the least amount of data.
LeaveGroupRequest is the request structure for the LeaveGroup function.
LeaveGroupRequestMember represents the identity of a member leaving a group.
LeaveGroupResponse is the response structure for the LeaveGroup function.
LeaveGroupResponseMember represents a member leaving the group.
ListGroupsRequest is a request to the ListGroups API.
ListGroupsResponse is a response from the ListGroups API.
ListGroupsResponseGroup contains the response details for a single group.
ListOffsetsRequest represents a request sent to a kafka broker to list of the offsets of topic partitions.
ListOffsetsResponse represents a response from a kafka broker to an offset listing request.
ListPartitionReassignmentsRequest is a request to the ListPartitionReassignments API.
ListPartitionReassignmentsRequestTopic contains the requested partitions for a single topic.
ListPartitionReassignmentsResponse is a response from the ListPartitionReassignments API.
ListPartitionReassignmentsResponsePartition contains the detailed result of ongoing reassignments for a single partition.
ListPartitionReassignmentsResponseTopic contains the detailed result of ongoing reassignments for a topic.
Message is a data structure representing kafka messages.
MetadataRequest represents a request sent to a kafka broker to retrieve its cluster metadata.
MetadataResponse represents a response from a kafka broker to a metadata request.
Murmur2Balancer is a Balancer that uses the Murmur2 hash function to determine which partition to route messages to.
OffsetCommit represents the commit of an offset to a partition.
OffsetCommitPartition represents the state of a single partition in responses to committing offsets.
OffsetCommitRequest represents a request sent to a kafka broker to commit offsets for a partition.
OffsetCommitResponse represents a response from a kafka broker to an offset commit request.
OffsetDelete deletes the offset for a consumer group on a particular topic for a particular partition.
OffsetDeletePartition represents the status of a partition in response to deleting offsets.
OffsetDeleteRequest represents a request sent to a kafka broker to delete the offsets for a partition on a given topic associated with a consumer group.
OffsetDeleteResponse represents a response from a kafka broker to a delete offset request.
OffsetFetchPartition represents the state of a single partition in a consumer group.
OffsetFetchRequest represents a request sent to a kafka broker to read the currently committed offsets of topic partitions.
OffsetFetchResponse represents a response from a kafka broker to an offset fetch request.
OffsetRequest represents a request to retrieve a single partition offset.
Partition carries the metadata associated with a kafka partition.
PartitionAssignment represents the starting state of a partition that has been assigned to a consumer.
PartitionOffsets carries information about offsets available in a topic partition.
ProduceRequest represents a request sent to a kafka broker to produce records to a topic partition.
ProduceResponse represents a response from a kafka broker to a produce request.
ProducerSession contains useful information about the producer session from the broker's response.
RackAffinityGroupBalancer makes a best effort to pair up consumers with partitions whose leader is in the same rack.
RangeGroupBalancer groups consumers by partition. Example: with 5 partitions and 2 consumers, C0 gets [0, 1, 2] and C1 gets [3, 4]; with 6 partitions and 3 consumers, C0 gets [0, 1], C1 gets [2, 3], and C2 gets [4, 5].
RawProduceRequest represents a request sent to a kafka broker to produce records to a topic partition.
ReadBatchConfig is a configuration object used for reading batches of messages.
Reader provides a high-level API for consuming messages from kafka.
ReaderConfig is a configuration object used to create new instances of Reader.
ReaderStats is a data structure returned by a call to Reader.Stats that exposes details about the behavior of the reader.
ReferenceHash is a Balancer that uses the provided hash function to determine which partition to route messages to.
RoundRobin is a Balancer implementation that equally distributes messages across all available partitions.
RoundrobinGroupBalancer divides partitions evenly among consumers. Example: with 5 partitions and 2 consumers, C0 gets [0, 2, 4] and C1 gets [1, 3]; with 6 partitions and 3 consumers, C0 gets [0, 3], C1 gets [1, 4], and C2 gets [2, 5].
SummaryStats is a data structure that carries a summary of observed values.
SyncGroupRequest is the request structure for the SyncGroup function.
SyncGroupRequestAssignment represents an assignment for a group member.
SyncGroupResponse is the response structure for the SyncGroup function.
Topic represents a topic in a kafka cluster.
TopicAndGroup pairs a ConsumerGroup and a Topic; as these are both strings, we define a type for clarity when passing them to the Client as a function argument. N.B. TopicAndGroup is currently experimental! Therefore, it is subject to change, including breaking changes between MINOR and PATCH releases.
Transport is an implementation of the RoundTripper interface.
TxnOffsetCommit represents the commit of an offset to a partition within a transaction.
TxnOffsetCommitPartition represents the state of a single partition in responses to committing offsets within a transaction.
TxnOffsetCommitRequest represents a request sent to a kafka broker to commit offsets for a partition within a transaction.
TxnOffsetCommitResponse represents a response from a kafka broker to an offset commit request within a transaction.
The Writer type provides the implementation of a producer of kafka messages that automatically distributes messages across partitions of a single topic using a configurable balancing policy.
WriterConfig is a configuration type used to create new instances of Writer.
WriterStats is a data structure returned by a call to Writer.Stats that exposes details about the behavior of the writer.

# Interfaces

The Balancer interface provides an abstraction of the message distribution logic used by Writer instances to route messages to the partitions available on a kafka cluster.
BrokerResolver is an interface implemented by types that translate host names into a network address.
GroupBalancer encapsulates the client side rebalancing logic.
Logger interface API for log.Logger.
The Resolver interface is used as an abstraction to provide service discovery of the hosts of a kafka cluster.
RoundTripper is an interface implemented by types which support interacting with kafka brokers.

# Type aliases

BalancerFunc is an implementation of the Balancer interface that makes it possible to use regular functions to distribute messages across partitions.
Bytes is an interface representing a sequence of bytes.
CoordinatorKeyType is used to specify the type of coordinator to look for.
Error represents the different error codes that may be returned by kafka.
GroupMemberAssignments holds MemberID => topic => partitions.
Header is a key/value pair type representing headers set on records.
LoggerFunc is a bridge between Logger and any third-party logger. Usage: given some logger l, set Logger: kafka.LoggerFunc(l.Infof) and ErrorLogger: kafka.LoggerFunc(l.Errorf) in the ReaderConfig passed to kafka.NewReader.
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/resource/PatternType.java.
Record is an interface representing a single kafka record.
RecordReader is an interface representing a sequence of records.
Request is an interface implemented by types that represent messages sent from kafka clients to brokers.
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/resource/ResourceType.java.
Response is an interface implemented by types that represent messages sent from kafka brokers in response to client requests.
Version represents a version number for kafka APIs.
WriteError is returned by kafka.(*Writer).WriteMessages when the writer is not configured to write messages asynchronously.