github.com/iambus/parquet-go (module)
Version: 0.4.0
Repository: https://github.com/iambus/parquet-go.git
Documentation: pkg.go.dev

# README

parquet-go


parquet-go is an implementation of the Apache Parquet file format in Go. It provides functionality to read and write parquet files, as well as high-level functionality to manage the data schema of parquet files, to write Go objects directly to parquet files using automatic or custom marshalling, and to read records from parquet files into Go objects using automatic or custom unmarshalling.

parquet is a file format to store nested data structures in a flat columnar format. By storing in a column-oriented way, it allows for efficient reading of individual columns without having to read and decode complete rows. This allows for efficient reading and faster processing when using the file format in conjunction with distributed data processing frameworks like Apache Hadoop or distributed SQL query engines like Presto and AWS Athena.

This implementation is divided into several packages. The top-level package is the low-level implementation of the parquet file format. It is accompanied by the sub-packages parquetschema and floor. parquetschema provides functionality to parse textual schema definitions as well as the data types to manually or programmatically construct schema definitions. floor is a high-level wrapper around the low-level package. It provides functionality to open parquet files to read from them or write to them using automated or custom marshalling and unmarshalling.

Supported Features

| Feature | Read | Write | Note |
| --- | --- | --- | --- |
| Compression | Yes | Yes | Only GZIP and SNAPPY are supported out of the box, but it is possible to add other compressors; see the RegisterBlockCompressor function and the sketch below this table |
| Dictionary Encoding | Yes | Yes | |
| Run Length Encoding / Bit-Packing Hybrid | Yes | Yes | The reader can read RLE/bit-packed encoding, but the writer only uses bit-packing |
| Delta Encoding | Yes | Yes | |
| Byte Stream Split | No | No | |
| Data page V1 | Yes | Yes | |
| Data page V2 | Yes | Yes | |
| Statistics in page meta data | No | No | |
| Index Pages | No | No | |
| Dictionary Pages | Yes | Yes | |
| Encryption | No | No | |
| Bloom Filter | No | No | |
| Logical Types | Yes | Yes | Support for logical types is in the high-level package (floor); the low-level parquet library only supports the basic types, see the type mapping table |
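
Registering a custom compressor could look like the following sketch. RegisterBlockCompressor and the parquet.CompressionCodec_ZSTD constant come from this library (the import paths below use the upstream github.com/fraugster/parquet-go module, matching the install commands later in this README); the exact method set of the BlockCompressor interface (CompressBlock/DecompressBlock) is an assumption here, so check the package documentation before relying on it:

```go
package compressors

import (
	goparquet "github.com/fraugster/parquet-go"
	"github.com/fraugster/parquet-go/parquet"
)

// zstdCompressor is a hypothetical adapter around a zstd library of your
// choice; the BlockCompressor method set shown here is an assumption.
type zstdCompressor struct{}

func (zstdCompressor) CompressBlock(block []byte) ([]byte, error) {
	// compress the raw page block here and return the compressed bytes
	return block, nil // placeholder: no-op
}

func (zstdCompressor) DecompressBlock(block []byte) ([]byte, error) {
	// decompress a previously compressed block here
	return block, nil // placeholder: no-op
}

func init() {
	// make the codec available to all readers and writers in this process
	goparquet.RegisterBlockCompressor(parquet.CompressionCodec_ZSTD, zstdCompressor{})
}
```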

Supported Data Types

| Type in parquet | Type in Go | Note |
| --- | --- | --- |
| boolean | bool | |
| int32 | int32 | See the note about the int type |
| int64 | int64 | See the note about the int type |
| int96 | [12]byte | |
| float | float32 | |
| double | float64 | |
| byte_array | []byte | |
| fixed_len_byte_array(N) | [N]byte, []byte | Use any positive number for N |

Note: the low-level implementation only supports int32 for the INT32 type and int64 for the INT64 type in Parquet. Plain int or uint are not supported. The high-level floor package contains more extensive support for these data types.
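
To illustrate, here is a minimal sketch of writing a file with the low-level API. NewFileWriter, WithSchemaDefinition, WithCompressionCodec, AddData and Close are all part of this package; the schema definition syntax is explained in the Schema Definition section below. Note that the values passed to AddData must use the exact Go types from the table above:

```go
package main

import (
	"log"
	"os"

	goparquet "github.com/fraugster/parquet-go"
	"github.com/fraugster/parquet-go/parquet"
	"github.com/fraugster/parquet-go/parquetschema"
)

func main() {
	sd, err := parquetschema.ParseSchemaDefinition(
		`message test { required int64 id; optional int32 elevation; }`)
	if err != nil {
		log.Fatalf("parsing schema definition failed: %v", err)
	}

	f, err := os.Create("test.parquet")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	fw := goparquet.NewFileWriter(f,
		goparquet.WithSchemaDefinition(sd),
		goparquet.WithCompressionCodec(parquet.CompressionCodec_SNAPPY),
	)

	// int64 columns take int64 values, int32 columns take int32 values;
	// a plain int would be rejected by the low-level writer.
	if err := fw.AddData(map[string]interface{}{
		"id":        int64(42),
		"elevation": int32(125),
	}); err != nil {
		log.Fatalf("adding data failed: %v", err)
	}

	if err := fw.Close(); err != nil {
		log.Fatal(err)
	}
}
```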

Supported Logical Types

| Logical Type | Mapped to Go types | Note |
| --- | --- | --- |
| STRING | string, []byte | |
| DATE | int32, time.Time | int32: days since Unix epoch (Jan 01 1970 00:00:00 UTC); time.Time only in floor |
| TIME | int32, int64, time.Time | int32: TIME(MILLIS, ...); int64: TIME(MICROS, ...), TIME(NANOS, ...); time.Time only in floor |
| TIMESTAMP | int64, int96, time.Time | time.Time only in floor; see the int96 conversion sketch below this table |
| UUID | [16]byte | |
| LIST | []T | Slices of any type |
| MAP | map[T1]T2 | Maps with any key and value types |
| ENUM | string, []byte | |
| BSON | []byte | |
| DECIMAL | []byte, [N]byte | |
| INT | {,u}int{8,16,32,64} | The implementation is loose and will allow any INT logical type to be mapped to any signed or unsigned Go integer type |
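
For int96 timestamps, the package provides the TimeToInt96 and Int96ToTime helpers (listed under Functions below). A short round-trip sketch, assuming they operate on the raw [12]byte representation shown in the type mapping table:

```go
package main

import (
	"fmt"
	"time"

	goparquet "github.com/fraugster/parquet-go"
)

func main() {
	// convert a time.Time to the 12-byte Julian-date int96 representation
	// and back; the [12]byte signature is an assumption based on the
	// int96 mapping in the table above.
	ts := goparquet.TimeToInt96(time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC))
	fmt.Println(goparquet.Int96ToTime(ts)) // 2020-01-01 00:00:00 +0000 UTC
}
```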

Supported Converted Types

| Converted Type | Mapped to Go types | Note |
| --- | --- | --- |
| UTF8 | string, []byte | |
| TIME_MILLIS | int32 | Number of milliseconds since the beginning of the day |
| TIME_MICROS | int64 | Number of microseconds since the beginning of the day |
| TIMESTAMP_MILLIS | int64 | Number of milliseconds since Unix epoch (Jan 01 1970 00:00:00 UTC) |
| TIMESTAMP_MICROS | int64 | Number of microseconds since Unix epoch (Jan 01 1970 00:00:00 UTC) |
| {,U}INT_{8,16,32,64} | {,u}int{8,16,32,64} | The implementation is loose and will allow any converted type with any Go integer type |
| INTERVAL | [12]byte | |

Please note that converted types are deprecated; logical types should be used instead.

Schema Definition

parquet-go comes with support for textual schema definitions. The sub-package parquetschema comes with a parser to turn the textual schema definition into the right data type to use elsewhere to specify parquet schemas. The syntax has been mostly reverse-engineered from a similar format also supported but barely documented in Parquet's Java implementation.

For the full syntax, please have a look at the parquetschema package Go documentation.

Generally, the schema definition describes the structure of a message. Parquet will then flatten this into a purely column-based structure when writing the actual data to parquet files.

A message consists of a number of fields. Each field either has a type or is a group. A group itself consists of a number of fields, which in turn can each have a type or be a group themselves. This allows for theoretically unlimited levels of hierarchy.

Each field has a repetition type, describing whether a field is required (i.e. a value has to be present), optional (i.e. a value can be present but doesn't have to be) or repeated (i.e. zero or more values can be present). Optionally, each field (including groups) can have an annotation, which contains a logical type or converted type that says something about the general structure at this point, e.g. LIST indicates a more complex list structure, and MAP a key-value map structure, both following certain conventions. Optionally, a typed field can also have a numeric field ID. The field ID has no purpose intrinsic to the parquet file format.

Here is a simple example of a message with a few typed fields:

```
message coordinates {
    required double latitude;
    required double longitude;
    optional int32 elevation = 1;
    optional binary comment (STRING);
}
```

In this example, we have a message with four typed fields, two of them required and two of them optional. double, int32 and binary describe the fundamental data type of the field, while latitude, longitude, elevation and comment are the field names. The parentheses contain the annotation STRING, which indicates that the field is a string, encoded as binary data, i.e. a byte array. The field elevation also has a field ID of 1, indicated as a numeric literal and separated from the field name by the equal sign =.
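
This schema definition can be parsed at runtime with the parquetschema package. ParseSchemaDefinition is part of this library; the String method used for printing is assumed here to render the normalized definition:

```go
package main

import (
	"fmt"
	"log"

	"github.com/fraugster/parquet-go/parquetschema"
)

func main() {
	sd, err := parquetschema.ParseSchemaDefinition(`message coordinates {
		required double latitude;
		required double longitude;
		optional int32 elevation = 1;
		optional binary comment (STRING);
	}`)
	if err != nil {
		log.Fatalf("parsing schema definition failed: %v", err)
	}

	// assumed to print the normalized textual schema definition
	fmt.Println(sd.String())
}
```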

In the following example, we will introduce a plain group as well as two nested groups annotated with logical types to indicate certain data structures:

```
message transaction {
    required fixed_len_byte_array(16) txn_id (UUID);
    required int32 amount;
    required int96 txn_ts;
    optional group attributes {
        optional int64 shop_id;
        optional binary country_code (STRING);
        optional binary postcode (STRING);
    }
    required group items (LIST) {
        repeated group list {
            required int64 item_id;
            optional binary name (STRING);
        }
    }
    optional group user_attributes (MAP) {
        repeated group key_value {
            required binary key (STRING);
            required binary value (STRING);
        }
    }
}
```

In this example, we see a number of top-level fields, some of which are groups. The first group is simply a group of typed fields, named attributes.

The second group, items, is annotated as a LIST and in turn contains a repeated group list, which in turn contains a number of typed fields. When a group is annotated as LIST, it needs to follow a particular convention: it has to contain a repeated group named list. Inside this group, any fields can be present.

The third group, user_attributes, is annotated as MAP. Similar to LIST, it follows certain conventions: it has to contain a single repeated group named key_value, which in turn contains exactly two fields, one named key, the other named value. This represents a map structure in which each key is associated with one value.
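
The floor package's automatic marshalling can map Go structs onto these LIST and MAP conventions. The following is a minimal sketch, assuming floor matches struct fields to columns via parquet struct tags; check the floor package documentation for the exact mapping rules:

```go
package main

import (
	"log"

	goparquet "github.com/fraugster/parquet-go"
	"github.com/fraugster/parquet-go/floor"
	"github.com/fraugster/parquet-go/parquetschema"
)

type item struct {
	ItemID int64  `parquet:"item_id"`
	Name   string `parquet:"name"`
}

type transaction struct {
	Amount         int32             `parquet:"amount"`
	Items          []item            `parquet:"items"`
	UserAttributes map[string]string `parquet:"user_attributes"`
}

func main() {
	sd, err := parquetschema.ParseSchemaDefinition(`message transaction {
		required int32 amount;
		required group items (LIST) {
			repeated group list {
				required int64 item_id;
				optional binary name (STRING);
			}
		}
		optional group user_attributes (MAP) {
			repeated group key_value {
				required binary key (STRING);
				required binary value (STRING);
			}
		}
	}`)
	if err != nil {
		log.Fatal(err)
	}

	w, err := floor.NewFileWriter("transactions.parquet",
		goparquet.WithSchemaDefinition(sd))
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// slices map onto the LIST convention, maps onto the MAP convention
	if err := w.Write(transaction{
		Amount:         4200,
		Items:          []item{{ItemID: 1, Name: "widget"}},
		UserAttributes: map[string]string{"channel": "web"},
	}); err != nil {
		log.Fatal(err)
	}
}
```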

Examples

For examples of how to use both the low-level and high-level APIs of this library, please see the directory examples. You can also check out the accompanying tools (see below) for more advanced examples. The tools are located in the cmd directory.
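
Reading with the low-level API follows the same pattern in reverse. NewFileReader, NumRows and NextRow are part of this package; the sketch assumes test.parquet was produced as in the write example earlier:

```go
package main

import (
	"fmt"
	"log"
	"os"

	goparquet "github.com/fraugster/parquet-go"
)

func main() {
	f, err := os.Open("test.parquet")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	fr, err := goparquet.NewFileReader(f)
	if err != nil {
		log.Fatal(err)
	}

	// iterate over all rows; each row is a map of column name to value
	for i := int64(0); i < fr.NumRows(); i++ {
		row, err := fr.NextRow()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("row %d: %v\n", i, row)
	}
}
```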

Tools

parquet-go comes with tooling to inspect and generate parquet files.

parquet-tool

parquet-tool allows you to inspect the meta data, the schema and the number of rows as well as print the content of a parquet file. You can also use it to split an existing parquet file into multiple smaller files.

Install it by running go get github.com/fraugster/parquet-go/cmd/parquet-tool on your command line. For more detailed help on how to use the tool, consult parquet-tool --help.

csv2parquet

csv2parquet makes it possible to convert an existing CSV file into a parquet file. By default, all columns are simply turned into strings, but you can provide type hints to influence the generated parquet schema.

You can install this tool by running go get github.com/fraugster/parquet-go/cmd/csv2parquet on your command line. For more help, consult csv2parquet --help.

Contributing

If you want to hack on this repository, please read the short CONTRIBUTING.md guide first.

Versioning

We use SemVer for versioning. For the versions available, see the tags on this repository.

Authors

See also the list of contributors who participated in this project.

License

Copyright 2020 Fraugster GmbH

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

# Packages

Package floor provides a high-level interface to read from and write to parquet files.
Package parquetschema contains functions and data types to manage schema definitions for the parquet-go package.

# Functions

FileVersion sets the version of the file itself.
GetRegisteredBlockCompressors returns a map of compression codecs to block compressors that are currently registered.
Int96ToTime is a utility function to convert a Int96 Julian Date timestamp (https://en.wikipedia.org/wiki/Julian_day) to a time.Time.
NewBooleanStore creates new column store to store boolean values.
NewByteArrayStore creates a new column store to store byte arrays.
NewDataColumn creates a new data column of the provided field repetition type, using the provided column store to write data.
NewDoubleStore creates a new column store to store double (float64) values.
NewFileReader creates a new FileReader.
NewFileWriter creates a new FileWriter.
NewFixedByteArrayStore creates a new column store to store fixed size byte arrays.
NewFloatStore creates a new column store to store float (float32) values.
NewInt32Store create a new column store to store int32 values.
NewInt64Store creates a new column store to store int64 values.
NewInt96Store creates a new column store to store int96 values.
NewListColumn returns a new LIST column, which is a group of converted type LIST with a repeated group named "list" as its child, which in turn contains a child that is the element column.
NewMapColumn returns a new MAP column, which is a group of converted type MAP with a repeated group named "key_value" of converted type MAP_KEY_VALUE.
RegisterBlockCompressor registers additional block compressors with the package.
TimeToInt96 is a utility function to convert a time.Time to an Int96 Julian Date timestamp (https://en.wikipedia.org/wiki/Julian_day).
WithCompressionCodec sets the compression codec used when writing the file.
WithCreator sets the creator in the meta data of the file.
WithDataPageV2 enables the writer to write pages in the new V2 format.
WithMaxRowGroupSize sets the rough maximum size of a row group before it is flushed automatically.
WithMetaData sets the key-value meta data on the file.
WithRowGroupMetaData adds key-value metadata to all columns.
WithRowGroupMetaDataForColumn adds key-value metadata to a particular column that is identified by its full dotted-notation name.
WithSchemaDefinition sets the schema definition to use for this parquet file.
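
As a sketch of how several of these options could be combined when creating a FileWriter (the option names are taken from the list above; the concrete values and the newWriter helper are illustrative only):

```go
package writerutil

import (
	"os"

	goparquet "github.com/fraugster/parquet-go"
	"github.com/fraugster/parquet-go/parquet"
	"github.com/fraugster/parquet-go/parquetschema"
)

// newWriter combines several FileWriterOptions; the values passed here are
// placeholders, not recommendations.
func newWriter(f *os.File, sd *parquetschema.SchemaDefinition) *goparquet.FileWriter {
	return goparquet.NewFileWriter(f,
		goparquet.WithSchemaDefinition(sd),
		goparquet.WithCompressionCodec(parquet.CompressionCodec_SNAPPY),
		goparquet.WithCreator("example-app v0.1"),
		goparquet.WithMaxRowGroupSize(128*1024*1024),
		goparquet.WithMetaData(map[string]string{"origin": "example"}),
	)
}
```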

# Variables

DefaultHashFunc is used to generate a hash value to detect and handle duplicate values.

# Structs

Column is composed of a schema definition for the column, a column store that contains the implementation to write the data to a parquet file, and any additional parameters that are necessary to correctly write the data.
ColumnParameters contains common parameters related to a column.
ColumnStore is the read/write implementation for a column.
FileReader is used to read data from a parquet file.
FileWriter is used to write data to a parquet file.

# Interfaces

SchemaCommon contains methods shared by FileReader and FileWriter to retrieve and set information related to the parquet schema and columns that are used by the reader and writer, respectively.
SchemaReader is an interface with methods necessary in the FileReader.
SchemaWriter is an interface with methods necessary in the FileWriter to add groups and columns and to write data.

# Type aliases

FileWriterOption describes an option function that is applied to a FileWriter when it is created.
FlushRowGroupOption is an option to pass additional configuration to FlushRowGroup.