package
Version: 0.9.0
Repository: https://github.com/datadog/datadog-agent.git
Documentation: pkg.go.dev

# Functions

NewObfuscator creates a new obfuscator.
NewSQLTokenizer creates a new SQLTokenizer for the given SQL string.
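To illustrate what the tokenizer/obfuscator pair does, here is a self-contained sketch of a reduced SQL scanner that replaces literals with "?". All types and functions below (`tokenizer`, `obfuscate`, the token kinds) are simplified stand-ins for illustration, not this package's actual implementation or signatures:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// TokenKind mirrors the idea of the package's reduced token set
// (illustrative subset only).
type TokenKind int

const (
	EndChar TokenKind = iota // scanner finished reading the query
	ID
	Number
	String
	Operator
)

// tokenizer is a simplified stand-in for SQLTokenizer: it scans a SQL
// string into (kind, text) pairs without a full-fledged grammar.
type tokenizer struct {
	in  string
	pos int
}

func newTokenizer(sql string) *tokenizer { return &tokenizer{in: sql} }

// Scan returns the next token, or EndChar when the input is exhausted.
func (t *tokenizer) Scan() (TokenKind, string) {
	for t.pos < len(t.in) && unicode.IsSpace(rune(t.in[t.pos])) {
		t.pos++
	}
	if t.pos >= len(t.in) {
		return EndChar, ""
	}
	start := t.pos
	c := t.in[t.pos]
	switch {
	case unicode.IsLetter(rune(c)):
		for t.pos < len(t.in) && (unicode.IsLetter(rune(t.in[t.pos])) || t.in[t.pos] == '_') {
			t.pos++
		}
		return ID, t.in[start:t.pos]
	case unicode.IsDigit(rune(c)):
		for t.pos < len(t.in) && unicode.IsDigit(rune(t.in[t.pos])) {
			t.pos++
		}
		return Number, t.in[start:t.pos]
	case c == '\'':
		t.pos++
		for t.pos < len(t.in) && t.in[t.pos] != '\'' {
			t.pos++
		}
		if t.pos < len(t.in) {
			t.pos++ // consume closing quote
		}
		return String, t.in[start:t.pos]
	default:
		t.pos++
		return Operator, t.in[start:t.pos]
	}
}

// obfuscate replaces literal tokens with "?" — the core idea behind
// query obfuscation, in miniature.
func obfuscate(sql string) string {
	t := newTokenizer(sql)
	var out []string
	for {
		kind, text := t.Scan()
		if kind == EndChar {
			break
		}
		if kind == Number || kind == String {
			text = "?"
		}
		out = append(out, text)
	}
	return strings.Join(out, " ")
}

func main() {
	fmt.Println(obfuscate("SELECT name FROM users WHERE id = 42"))
}
```

The real obfuscator does much more (comments, bound parameters, dialect quirks), but the scan-classify-replace loop above is the shape of the work.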

# Constants

Most token-kind constants share a single doc comment: list of available tokens; this list has been reduced because we don't need a full-fledged tokenizer to implement a Lexer.
a dollar-quoted string delimited by the tag "$func$"; gets special treatment when the feature "dollar_quoted_func" is set. See https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING.
EndChar is used to signal that the scanner has finished reading the query.
Filtered specifies that the token is a comma and was discarded by one of the filters.
FilteredBracketedIdentifier specifies that we are currently discarding a bracketed identifier (MSSQL).
FilteredGroupable specifies that the given token has been discarded by one of the token filters and that it is groupable together with consecutive FilteredGroupable tokens.
FilteredGroupableParenthesis is a parenthesis marked as filtered groupable.
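The dollar-quoted string constant above refers to PostgreSQL's `$tag$ ... $tag$` syntax. As a rough sketch of how such a token can be recognized (the function and its return convention are illustrative, not this package's internals):

```go
package main

import (
	"fmt"
	"strings"
)

// scanDollarQuoted recognizes a PostgreSQL dollar-quoted string such as
// $tag$ ... $tag$ at the start of s. It returns the quoted body and the
// number of bytes consumed, or ("", 0) if s does not start with one.
func scanDollarQuoted(s string) (body string, n int) {
	if len(s) == 0 || s[0] != '$' {
		return "", 0
	}
	// Find the '$' that closes the opening tag, e.g. "$func$".
	end := strings.IndexByte(s[1:], '$')
	if end < 0 {
		return "", 0
	}
	tag := s[:end+2]
	rest := s[len(tag):]
	// The string runs until the same tag appears again.
	closeIdx := strings.Index(rest, tag)
	if closeIdx < 0 {
		return "", 0
	}
	return rest[:closeIdx], len(tag)*2 + closeIdx
}

func main() {
	body, n := scanDollarQuoted("$func$SELECT 1$func$ AS f")
	fmt.Printf("%q %d\n", body, n)
}
```

Because no escaping happens inside the delimiters, dollar quoting is popular for function bodies — which is why the tag "$func$" gets its own treatment under the "dollar_quoted_func" feature.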

# Structs

ObfuscatedQuery specifies information about an obfuscated SQL query.
Obfuscator quantizes and obfuscates spans.
SQLOptions holds options that change the behavior of the obfuscator for SQL.
SQLTokenizer is the struct used to generate SQL tokens for the parser.
A SyntaxError is a description of a JSON syntax error.
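To show how SQLOptions and ObfuscatedQuery fit together from a caller's point of view, here is a hedged sketch: the option name `ReplaceDigits` and the single-field result struct are illustrative stand-ins (check the package docs for the real fields), but the pattern — options in, obfuscated text out — is the same:

```go
package main

import (
	"fmt"
	"strings"
)

// SQLOptions sketch: one illustrative flag standing in for the package's
// option set (the field name here is a hypothetical simplification).
type SQLOptions struct {
	ReplaceDigits bool // if set, digit runs in identifiers are replaced too
}

// ObfuscatedQuery sketch: the result a caller consumes.
type ObfuscatedQuery struct {
	Query string // the obfuscated SQL text
}

// obfuscateIdent shows how an option changes behavior: with ReplaceDigits
// set, each run of digits in an identifier collapses to a single '?'.
func obfuscateIdent(ident string, opts SQLOptions) ObfuscatedQuery {
	if !opts.ReplaceDigits {
		return ObfuscatedQuery{Query: ident}
	}
	var b strings.Builder
	inDigits := false
	for _, r := range ident {
		if r >= '0' && r <= '9' {
			if !inDigits {
				b.WriteByte('?')
				inDigits = true
			}
			continue
		}
		inDigits = false
		b.WriteRune(r)
	}
	return ObfuscatedQuery{Query: b.String()}
}

func main() {
	fmt.Println(obfuscateIdent("users_2023", SQLOptions{ReplaceDigits: true}).Query)
}
```

Replacing digits in identifiers lets sharded table names like `users_2023` and `users_2024` normalize to one form, which keeps query grouping stable across shards.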

# Type aliases

TokenKind specifies the type of the token being scanned.
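A kind-per-token integer type is the usual shape for such an alias. The sketch below is illustrative (the constant set and String method are assumptions, not this package's definitions) and shows why a small named integer type beats bare ints for scanner output — debug printing and exhaustive switches come for free:

```go
package main

import "fmt"

// TokenKind sketch: the scanner classifies each token with a small
// integer kind rather than building a full parse tree.
type TokenKind uint32

const (
	EndChar           TokenKind = iota // scanner finished reading the query
	ID                                 // identifier
	Number                             // numeric literal
	FilteredGroupable                  // discarded by a filter; groupable with neighbors
)

// String makes kinds readable in debug output.
func (k TokenKind) String() string {
	switch k {
	case EndChar:
		return "EndChar"
	case ID:
		return "ID"
	case Number:
		return "Number"
	case FilteredGroupable:
		return "FilteredGroupable"
	default:
		return fmt.Sprintf("TokenKind(%d)", uint32(k))
	}
}

func main() {
	fmt.Println(Number, FilteredGroupable)
}
```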