# Functions
DecimalSize returns the minimum number of bytes necessary to represent a decimal with the requested precision.
DefaultWriterProps returns the default properties for the arrow writer: memory.DefaultAllocator for allocations and arrow.Second as the timestamp coercion unit.
FromParquet generates an arrow Schema from a provided Parquet Schema.
NewArrowWriteContext creates a reusable context object that holds writer properties and other reusable buffers for writing.
NewArrowWriterProperties creates a new writer properties object by passing in a set of options to control the properties.
NewFileReader constructs a reader for converting to Arrow objects from an existing parquet file reader object.
NewFileWriter returns a writer for writing Arrow directly to a parquet file. Unlike the ArrowColumnWriter and WriteArrow functions, which write arrow data to an existing file.Writer, this creates a new file.Writer based on the schema provided.
NewSchemaManifest creates a manifest for mapping a parquet schema to a given arrow schema.
ReadTable is a convenience function to quickly and easily read a parquet file into an arrow table.
ToParquet generates a Parquet Schema from an arrow Schema using the given properties to make decisions when determining the logical/physical types of the columns.
WithAllocator specifies the allocator to be used by the writer whenever allocating buffers and memory.
WithCoerceTimestamps coerces timestamp units to a specific time unit when constructing the schema and writing data, so that regardless of the unit used by the datatypes being written, they will be converted to the desired time unit.
WithDeprecatedInt96Timestamps enables conversion of arrow timestamps to int96 columns when constructing the schema.
WithStoreSchema enables writing a binary serialized arrow schema to the file in metadata to enable certain read options (like "read_dictionary") to be set automatically.
If called, the arrow schema is serialized and base64 encoded before being added to the metadata of the parquet file with the key "ARROW:schema".
WithTruncatedTimestamps, when called with true, disables the error that would otherwise be returned if coercing a timestamp unit would cause a loss of data, such as converting from nanoseconds to seconds.
WriteArrowToColumn writes apache arrow columnar data directly to a ColumnWriter.
WriteTable is a convenience function to create and write a full array.Table to a parquet file.
# Structs
ArrowReadProperties defines the properties for reading a parquet file into arrow arrays.
ArrowWriterProperties are used to determine how to manipulate the arrow data when writing it to a parquet file.
ColumnChunkReader is a reader that reads only a single column chunk from a single column in a single row group.
ColumnReader is used for reading batches of data from a specific column across multiple row groups to return a chunked arrow array.
FileReader is the base object for reading a parquet file into arrow object types.
FileWriter is an object for writing Arrow directly to a parquet file.
RowGroupReader is a reader for getting data only from a single row group of the file rather than having to repeatedly pass the index to functions on the reader.
SchemaField is a holder that defines a specific logical field in the schema which could potentially refer to multiple physical columns in the underlying parquet file if it is a nested type.
SchemaManifest represents a full manifest for mapping a Parquet schema to an arrow Schema.
# Interfaces
RecordReader is a Record Batch Reader that meets the interfaces for both array.RecordReader and arrio.Reader to allow easy progressive reading of record batches from the parquet file.
# Type aliases
WriterOption is a convenience for building up arrow writer properties.