package parquet
Version: 25.1.0+incompatible
Repository: https://github.com/cockroachdb/cockroach.git
Documentation: pkg.go.dev

# Functions

MakeReaderMetadata returns column type metadata that will be written to all parquet files.
NewSchema generates a SchemaDefinition.
NewWriter constructs a new Writer which outputs to the given sink.
ReadFile reads a parquet file and returns the contained metadata and datums.
ReadFileAndVerifyDatums asserts that a parquet file's metadata matches the metadata from the writer and its data matches writtenDatums.
ValidateDatum validates that the "contents" of the expected datum match the contents of the actual datum.
WithCompressionCodec specifies the compression codec to use when writing columns.
WithMaxRowGroupLength specifies the maximum number of rows to include in a row group when writing data.
WithMetadata adds arbitrary key-value metadata to the parquet file, which can later be read back by a reader.
WithVersion specifies the parquet version to use when writing data.
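
Together, NewSchema, NewWriter, and the With* options form the write path. The sketch below shows how they might fit together; it is a minimal, hedged example, and the exact signatures (column names plus *types.T for NewSchema, an AddRow/Close method pair on Writer) are assumptions based on the descriptions above, so consult the package documentation before relying on them.

```go
// Minimal write-path sketch. Method names AddRow and Close on Writer are
// assumptions; they are not part of the index above.
package main

import (
	"os"

	"github.com/cockroachdb/cockroach/pkg/sql/sem/tree"
	"github.com/cockroachdb/cockroach/pkg/sql/types"
	"github.com/cockroachdb/cockroach/pkg/util/parquet"
)

func main() {
	// Describe the columns to be written.
	sch, err := parquet.NewSchema(
		[]string{"id", "name"},
		[]*types.T{types.Int, types.String},
	)
	if err != nil {
		panic(err)
	}

	// The Writer outputs to any io.Writer sink; here, a local file.
	f, err := os.Create("example.parquet")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Configure the writer with optional settings.
	w, err := parquet.NewWriter(sch, f,
		parquet.WithCompressionCodec(parquet.CompressionSnappy),
		parquet.WithMaxRowGroupLength(1024),
	)
	if err != nil {
		panic(err)
	}

	// Write one row of datums per call (method name assumed).
	if err := w.AddRow([]tree.Datum{
		tree.NewDInt(tree.DInt(1)),
		tree.NewDString("carl"),
	}); err != nil {
		panic(err)
	}

	// Close flushes buffered rows and finalizes the file (method name assumed).
	if err := w.Close(); err != nil {
		panic(err)
	}
}
```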

# Constants

CompressionBrotli is the Brotli compression codec.
CompressionGZIP is the GZIP compression codec.
CompressionNone represents no compression.
CompressionSnappy is the Snappy compression codec.
CompressionZSTD is the ZSTD compression codec.

# Structs

ReadDatumsMetadata contains metadata from a parquet file that was read.
A SchemaDefinition stores a parquet schema.
A Writer writes datums into an io.Writer sink.
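
On the read side, ReadFile returns the file's ReadDatumsMetadata along with the datums it contains. The snippet below is a hedged sketch: the return shape (metadata, rows of tree.Datum, error) is an assumption inferred from the descriptions above.

```go
// Minimal read-path sketch; the ReadFile return values shown here are assumed.
package main

import (
	"fmt"

	"github.com/cockroachdb/cockroach/pkg/util/parquet"
)

func main() {
	meta, datums, err := parquet.ReadFile("example.parquet")
	if err != nil {
		panic(err)
	}
	// Print the file-level metadata and the number of rows read.
	fmt.Printf("file metadata: %+v\n", meta)
	fmt.Printf("read %d rows\n", len(datums))
}
```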

# Type aliases

A CompressionCodec is the codec used to compress columns when writing parquet files.
An Option is a configurable setting for the Writer.
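
Options appear to follow the functional-options pattern implied by NewWriter's description. The helper below is hypothetical and assumes MakeReaderMetadata takes the schema and returns key-value metadata suitable for WithMetadata; treat it as a sketch rather than the package's prescribed usage.

```go
package example

import (
	"io"

	"github.com/cockroachdb/cockroach/pkg/util/parquet"
)

// newConfiguredWriter is a hypothetical helper that bundles a few Options
// before constructing a Writer. The MakeReaderMetadata call and its use with
// WithMetadata are assumptions based on the function descriptions above.
func newConfiguredWriter(sch *parquet.SchemaDefinition, sink io.Writer) (*parquet.Writer, error) {
	opts := []parquet.Option{
		parquet.WithCompressionCodec(parquet.CompressionZSTD),
		parquet.WithMaxRowGroupLength(4096),
		parquet.WithMetadata(parquet.MakeReaderMetadata(sch)),
	}
	return parquet.NewWriter(sch, sink, opts...)
}
```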