package distsqlpb
20.2.19+incompatible
Repository: https://github.com/cockroachdb/cockroach.git
Documentation: pkg.go.dev
# Functions
ConvertToColumnOrdering converts an Ordering type (as defined in data.proto) to a sqlbase.ColumnOrdering type.
ConvertToMappedSpecOrdering converts a sqlbase.ColumnOrdering type to an Ordering type (as defined in data.proto), using the column indices contained in planToStreamColMap.
ConvertToSpecOrdering converts a sqlbase.ColumnOrdering type to an Ordering type (as defined in data.proto); a round-trip sketch appears after this list.
DeserializeExpr deserializes expr, binds the indexed variables to the provided IndexedVarHelper, and evaluates any constants in the expression.
ExprFmtCtxBase produces a FmtCtx used for serializing expressions; a proper IndexedVar formatting function needs to be added on.
GeneratePlanDiagram generates the data for a flow diagram.
GeneratePlanDiagramURL generates the JSON data for a flow diagram and a URL which encodes the diagram.
GetAggregateConstructor processes the specification of a single aggregate function.
GetAggregateFuncIdx converts the aggregate function name to the enum value with the same string representation.
GetAggregateInfo returns the aggregate constructor and the return type for the given aggregate function when applied on the given type.
GetMetricsMeta returns a metadata object from the pool of metrics metadata.
GetProducerMeta returns a producer metadata object from the pool.
GetWindowFuncIdx converts the window function name to the enum value with the same string representation.
GetWindowFunctionInfo returns the windowFunc constructor and the return type when the given fn is applied to the given inputTypes.
LocalMetaToRemoteProducerMeta converts a ProducerMetadata struct to RemoteProducerMetadata.
MakeEvalContext serializes some of the fields of a tree.EvalContext into a distsqlpb.EvalContext proto.
No description provided by the author
NewError creates an Error from an error, to be sent on the wire.
No description provided by the author
RemoteProducerMetaToLocalMeta converts a RemoteProducerMetadata struct to ProducerMetadata and returns whether the conversion was successful or not.
StartMockDistSQLServer starts a MockDistSQLServer and returns the address on which it's listening.
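A minimal round-trip sketch for the ordering conversion helpers listed above. The import paths, the sqlbase.ColumnOrdering element fields, and the exact function signatures are assumptions based on the CockroachDB 20.x source layout rather than anything stated in this listing.

```go
package main

import (
	"fmt"

	// Assumed import paths for this version of the repository.
	"github.com/cockroachdb/cockroach/pkg/sql/distsqlpb"
	"github.com/cockroachdb/cockroach/pkg/sql/sqlbase"
	"github.com/cockroachdb/cockroach/pkg/util/encoding"
)

func main() {
	// Order by column 1 ascending, then column 0 descending.
	colOrdering := sqlbase.ColumnOrdering{
		{ColIdx: 1, Direction: encoding.Ascending},
		{ColIdx: 0, Direction: encoding.Descending},
	}

	// Convert to the wire form defined in data.proto...
	spec := distsqlpb.ConvertToSpecOrdering(colOrdering)

	// ...and back; the result should equal the original ordering.
	fmt.Println(distsqlpb.ConvertToColumnOrdering(spec))
}
```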
# Constants
No description provided by the author
No description provided by the author
This setting exists just for backwards compatibility; it's equivalent to SCALAR when there are no grouping columns, and to NON_SCALAR when there are grouping columns.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
JSONB_AGG is an alias for JSON_AGG; they do the same thing.
No description provided by the author
No description provided by the author
No description provided by the author
A non-scalar aggregation returns no rows if there are no input rows; it may or may not have grouping columns.
No description provided by the author
No description provided by the author
A scalar aggregation has no grouping columns and always returns one result row.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
FlowIDTagKey is the key used for flow id tags in tracing spans.
The input streams are guaranteed to be ordered according to the column ordering field; rows from the streams are interleaved to preserve that ordering.
Rows from the input streams are interleaved arbitrarily.
No description provided by the author
No description provided by the author
Each row is sent to one stream, chosen by hashing certain columns of the row (specified by the hash_columns field); a toy sketch of this routing rule appears after this list.
Each row is sent to one stream, chosen according to preset boundaries for the values of certain columns of the row.
Each row is sent to all output streams.
Single output stream.
ProcessorIDTagKey is the key used for processor id tags in tracing spans.
No description provided by the author
No description provided by the author
This is the github.com/axiomhq/hyperloglog binary format (as of commit 730eea1) for a sketch with precision 14.
Stream that is part of the local flow.
Stream that has the other endpoint on a different node.
Special stream used when in "sync flow" mode.
StreamIDTagKey is the key used for stream id tags in tracing spans.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
GROUPS specifies the frame in terms of peer groups (where "peers" means rows that are not distinct in the ordering columns).
No description provided by the author
No description provided by the author
Offsets are stored within Bound.
RANGE specifies the frame in terms of a logical range of values (e.g. rows whose ordering-column value is within a given offset of the current row's value).
ROWS specifies the frame in terms of physical offsets (e.g. a given number of rows before or after the current row).
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
These mirror window functions from window_builtins.go.
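The BY_HASH output router described above picks exactly one output stream per row by hashing the configured hash columns. The snippet below is a self-contained toy illustration of that routing rule; it is not the package's actual router implementation.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// chooseStream picks the output stream for a row by hashing the values of the
// configured hash columns, mirroring the BY_HASH routing rule.
func chooseStream(row []string, hashColumns []int, numStreams int) int {
	h := fnv.New32a()
	for _, col := range hashColumns {
		h.Write([]byte(row[col]))
	}
	return int(h.Sum32() % uint32(numStreams))
}

func main() {
	rows := [][]string{
		{"alice", "2020-01-01", "42"},
		{"alice", "2020-01-02", "7"},
		{"bob", "2020-01-01", "13"},
	}
	// Hash on column 0 only: every row with the same value in column 0 is
	// routed to the same one of the 4 output streams.
	for _, r := range rows {
		fmt.Printf("%v -> stream %d\n", r, chooseStream(r, []int{0}, 4))
	}
}
```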
# Variables
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
# Structs
AggregatorSpec is the specification for an "aggregator" (processor core type, not the logical plan computation stage).
No description provided by the author
BackfillerSpec is the specification for a "schema change backfiller".
No description provided by the author
BulkRowWriterSpec is the specification for a processor that consumes rows and writes them to a target table using AddSSTable.
CallbackMetadataSource is a utility struct that implements the MetadataSource interface by calling a provided callback.
ChangeAggregatorSpec is the specification for a processor that watches for changes in a set of spans.
No description provided by the author
ChangeFrontierSpec is the specification for a processor that receives span-level resolved timestamps, tracks them, and emits the changefeed-level resolved timestamp whenever it changes.
No description provided by the author
ConsumerHandshake is the first one or two messages sent in the consumer->producer direction on a stream.
ConsumerSignal represents the messages flowing from consumer to producer (so, from RPC server to client) for the FlowStream RPC.
CSVWriterSpec is the specification for a processor that consumes rows and writes them to CSV files at uri.
No description provided by the author
No description provided by the author
DistSQLDrainingInfo represents the DistSQL draining state that gets gossiped for each node.
DistSQLVersionGossipInfo represents the DistSQL server version information that gets gossiped for each node.
No description provided by the author
Error is a generic representation including a string message.
EvalContext is used to marshal some planner.EvalContext members.
Expression is the representation of a SQL expression; a small construction sketch appears after this list.
ExprHelper implements the common logic around evaluating an expression that depends on a set of values.
FlowID identifies a flow.
FlowSpec describes a "flow" which is a subgraph of a distributed SQL computation consisting of processors and streams.
HashJoinerSpec is the specification for a hash join processor.
InboundStreamNotification is the MockDistSQLServer's way to tell its clients that a new gRPC call has arrived and thus a stream has arrived.
IndexSkipTableReaderSpec is the specification for a table reader that is performing a loose index scan over rows in the table.
InputSyncSpec is the specification for an input synchronizer; it decides how to interleave rows from multiple input streams.
InterleavedReaderJoinerSpec is the specification for a processor that performs KV operations to retrieve rows from 2+ tables from an interleaved hierarchy, performs intermediate filtering on rows from each table, and performs a join on the rows from the 2+ tables.
No description provided by the author
InvertedFiltererSpec is the specification of a processor that does filtering on a table by evaluating an invertedexpr.SpanExpressionProto on an inverted index of the table.
InvertedJoinerSpec is the specification for an inverted join.
JobProgress identifies the job to report progress on.
JoinReaderSpec is the specification for a "join reader".
LocalPlanNodeSpec is the specification for a local planNode wrapping processor.
MergeJoinerSpec is the specification for a merge join processor.
No description provided by the author
No description provided by the author
MockDialer is a mocked implementation of the Outbox's `Dialer` interface.
MockDistSQLServer implements the DistSQLServer (gRPC) interface and allows clients to control the inbound streams.
NoopCoreSpec indicates a "no-op" processor core.
Ordering defines an order - specifically a list of column indices and directions.
No description provided by the author
The specification for a WITH ORDINALITY processor.
OutputRouterSpec is the specification for the output router of a processor; it decides how to send results to multiple output streams.
No description provided by the author
No description provided by the author
Span matches bytes in [start, end).
PostProcessSpec describes the processing required to obtain the output (filtering, projection).
No description provided by the author
Each processor has the following components:
- one or more input synchronizers; each one merges rows between one or more input streams;
- a processor "core" which encapsulates the inner logic of each processor;
- a post-processing stage which allows "inline" post-processing on results (like projection or filtering);
- one or more output synchronizers; each one directs rows to one or more output streams.
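A self-contained conceptual sketch of that pipeline: an ordered input synchronizer merges two sorted streams, the core passes rows through, post-processing filters and projects, and a single pass-through output emits the results. None of the types below come from the package; they only illustrate how rows flow through a processor.

```go
package main

import "fmt"

type row []int

// mergeByFirstColumn interleaves two streams already sorted on the first
// column, preserving that ordering (what an ordered input synchronizer does).
func mergeByFirstColumn(a, b []row) []row {
	var out []row
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		if a[i][0] <= b[j][0] {
			out = append(out, a[i])
			i++
		} else {
			out = append(out, b[j])
			j++
		}
	}
	out = append(out, a[i:]...)
	return append(out, b[j:]...)
}

func main() {
	// Two already-sorted input streams.
	in1 := []row{{1, 10}, {3, 30}}
	in2 := []row{{2, 20}, {4, 40}}

	for _, r := range mergeByFirstColumn(in1, in2) {
		// "Core": a no-op here; a real core might aggregate or join.
		// Post-processing: keep rows with r[0] > 1 (filter) and emit only the
		// second column (projection).
		if r[0] > 1 {
			out := row{r[1]}
			// Output: a single pass-through stream.
			fmt.Println(out)
		}
	}
}
```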
ProducerData is a message that can be sent multiple times as part of a stream from a producer to a consumer.
ProducerHeader is a message that is sent once at the beginning of a stream.
No description provided by the author
ProducerMetadata represents a metadata record flowing through a DistSQL flow.
ProjectSetSpec is the specification of a processor which applies a set of expressions, which may be set-returning functions, to its input.
No description provided by the author
No description provided by the author
RemoteProducerMetadata represents records that a producer wants to pass to a consumer, other than data rows.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Metrics are unconditionally emitted by table readers.
No description provided by the author
No description provided by the author
No description provided by the author
RowNum is used to count the rows sent from a processor.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
RestoreDataEntry is specified at planning time to the SplitAndScatter processors; those processors then stream these entries, encoded as bytes in rows, to the RestoreDataProcessors.
RunSyncFlowCall is the MockDistSQLServer's way to tell its clients that a RunSyncFlow call has arrived.
SampleAggregatorSpec is the specification of a processor that aggregates the results from multiple sampler processors and writes out the statistics to system.table_statistics.
SamplerSpec is the specification of a "sampler" processor which returns a sample (random subset) of the input columns and computes cardinality estimation sketches on sets of columns.
SequenceState is used to marshal the sessiondata.SequenceState struct.
Seq represents the last value of one sequence modified by the session.
No description provided by the author
No description provided by the author
SketchSpec contains the specification for a generated statistic.
SorterSpec is the specification for a "sorting aggregator".
No description provided by the author
No description provided by the author
StreamEndpointSpec describes one of the endpoints (input or output) of a physical stream.
No description provided by the author
TableReaderSpec is the specification for a "table reader".
ValuesCoreSpec is the core of a processor that has no inputs and generates "pre-canned" rows.
WindowerSpec is the specification of a processor that performs computations of window functions that have the same PARTITION BY clause.
Frame is the specification of a single window frame for a window function.
Bound specifies the type of boundary and the offset (if present).
Bounds specifies boundaries of the window frame.
Func specifies which function to compute.
WindowFn is the specification of a single window function.
ZigzagJoinerSpec is the specification for a zigzag join processor.
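A small construction sketch for Expression. The Expr field name and the @1/@2 ordinal-reference convention for indexed variables are assumptions inferred from the descriptions above (ExprFmtCtxBase, DeserializeExpr), not verified against this exact version.

```go
package main

import (
	"fmt"

	// Assumed import path for this version of the repository.
	"github.com/cockroachdb/cockroach/pkg/sql/distsqlpb"
)

func main() {
	// A filter over the input row: "@1" refers to the first input column,
	// "@2" to the second, and so on (the IndexedVar convention that
	// ExprFmtCtxBase / DeserializeExpr work with).
	expr := distsqlpb.Expression{Expr: "@1 > 3 AND @2 LIKE 'a%'"}
	fmt.Println(expr.Expr)
}
```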
# Interfaces
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
DistSQLClient is the client API for DistSQL service.
DistSQLServer is the server API for DistSQL service.
DistSQLSpanStats is a tracing.SpanStats that returns a list of stats to output on a query plan.
FlowDiagram is a plan diagram that can be made into a URL.
MetadataSource is an interface implemented by processors and columnar operators that can produce metadata.
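A sketch of draining metadata through this interface, assuming MetadataSource exposes a DrainMeta(context.Context) []ProducerMetadata method and that CallbackMetadataSource's callback field is named DrainMetaCb; both details are assumptions, not confirmed by this listing.

```go
package main

import (
	"context"
	"fmt"

	// Assumed import path for this version of the repository.
	"github.com/cockroachdb/cockroach/pkg/sql/distsqlpb"
)

func main() {
	// Wrap a callback so it can be drained like any other MetadataSource.
	// DrainMetaCb is the assumed name of the callback field.
	src := distsqlpb.CallbackMetadataSource{
		DrainMetaCb: func(ctx context.Context) []distsqlpb.ProducerMetadata {
			return nil // nothing buffered in this toy example
		},
	}

	// MetadataSources is just a slice of MetadataSource values; drain each one.
	sources := distsqlpb.MetadataSources{src}
	for _, s := range sources {
		fmt.Println(len(s.DrainMeta(context.Background())))
	}
}
```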
# Type aliases
AggregateConstructor is a function that creates an aggregate function.
These mirror the aggregate functions supported by sql/parser.
No description provided by the author
No description provided by the author
BytesEncodeFormat is the configuration for bytes to string conversions.
DistSQLVersion identifies DistSQL engine versions.
FileCompression is the list of compression codecs currently supported for the CSVWriter spec.
No description provided by the author
MetadataSources is a slice of MetadataSource.
The direction of the desired ordering for a column.
No description provided by the author
ScanVisibility controls which columns are seen by scans - just normal columns, or normal columns and also in-progress schema change columns.
No description provided by the author
No description provided by the author
StreamID identifies a stream; it may be local to a flow or it may cross machine boundaries.
BoundType indicates which type of boundary is used.
Exclusion specifies the type of frame exclusion.
Mode indicates which mode of framing is used.
No description provided by the author