package
0.0.1-beta11
Repository: https://github.com/violet-eva-01/spark-connect.git
Documentation: pkg.go.dev

# Functions

Abs - Computes the absolute value.
Acos - Computes inverse cosine of the input column.
Acosh - Computes inverse hyperbolic cosine of the input column.
AddMonths - Returns the date that is `months` months after `start`.
AesDecrypt - Returns a decrypted value of `input` using AES in `mode` with `padding`.
AesEncrypt - Returns an encrypted value of `input` using AES in given `mode` with the specified `padding`.
ApproxCountDistinct - Aggregate function: returns a new column for the approximate distinct count of column `col`.
Array - Creates a new array column.
ArrayAgg - Aggregate function: returns a list of objects with duplicates.
ArrayCompact - Collection function: removes null values from the array.
ArrayDistinct - Collection function: removes duplicate values from the array.
ArrayExcept - Collection function: returns an array of the elements in col1 but not in col2, without duplicates.
ArrayIntersect - Collection function: returns an array of the elements in the intersection of col1 and col2, without duplicates.
ArrayJoin - Concatenates the elements of `column` using the `delimiter`.
ArrayMax - Collection function: returns the maximum value of the array.
ArrayMin - Collection function: returns the minimum value of the array.
ArrayRepeat - Collection function: creates an array containing a column repeated count times.
ArraySize - Returns the total number of elements in the array.
ArraysOverlap - Collection function: returns true if the arrays contain any common non-null element; if not, returns null if both the arrays are non-empty and any of them contains a null element; returns false otherwise.
ArraysZip - Collection function: Returns a merged array of structs in which the N-th struct contains all N-th values of input arrays.
ArrayUnion - Collection function: returns an array of the elements in the union of col1 and col2, without duplicates.
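The array helpers above compose the way their Spark SQL counterparts do. Below is a minimal sketch of chaining a few of them; the import path and the Column-in/Column-out signatures are assumptions modeled on the PySpark API this package mirrors, not confirmed by this index:

```go
package example

// Sketch only: the import alias, Column type, and function signatures
// are assumed to mirror the PySpark API; verify against the package.
import sc "github.com/violet-eva-01/spark-connect"

// cleanTags drops nulls and duplicates from an array column and
// joins the remaining elements with a comma.
func cleanTags(tags sc.Column) sc.Column {
	deduped := sc.ArrayDistinct(sc.ArrayCompact(tags)) // remove nulls, then duplicates
	return sc.ArrayJoin(deduped, ",")                  // ["a","b"] -> "a,b"
}
```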
Asc - Returns a sort expression based on the ascending order of the given column name.
Ascii - Computes the numeric value of the first character of the string column.
AscNullsFirst - Returns a sort expression based on the ascending order of the given column name, and null values return before non-null values.
AscNullsLast - Returns a sort expression based on the ascending order of the given column name, and null values appear after non-null values.
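Asc, Desc, and their NullsFirst/NullsLast variants build sort expressions rather than values, so they are typically handed to an ordering method. A hedged sketch reusing the assumed `sc` import above; the DataFrame type and OrderBy method are likewise assumptions:

```go
// Sketch only: DataFrame and OrderBy are assumed, modeled on PySpark.
func latestFirst(df sc.DataFrame) sc.DataFrame {
	return df.OrderBy(
		sc.AscNullsLast("score"), // null scores sort after non-null ones
		sc.Desc("updated_at"),    // newest rows first within equal scores
	)
}
```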
Asin - Computes inverse sine of the input column.
Asinh - Computes inverse hyperbolic sine of the input column.
Atan - Compute inverse tangent of the input column.
Atan2 - Computes the two-argument inverse tangent of `col1` and `col2` (each a column or a float), the Go equivalent of atan2.
Atanh - Computes inverse hyperbolic tangent of the input column.
Avg - Aggregate function: returns the average of the values in a group.
Base64 - Computes the BASE64 encoding of a binary column and returns it as a string column.
Bin - Returns the string representation of the binary value of the given column.
BitAnd - Aggregate function: returns the bitwise AND of all non-null input values, or null if none.
BitCount - Returns the number of bits that are set in the argument expr as an unsigned 64-bit integer, or NULL if the argument is NULL.
BitGet - Returns the value of the bit (0 or 1) at the specified position.
BitLength - Calculates the bit length for the specified string column.
BitmapBitPosition - Returns the bit position for the given input column.
BitmapBucketNumber - Returns the bucket number for the given input column.
BitmapConstructAgg - Returns a bitmap with the positions of the bits set from all the values from the input column.
BitmapCount - Returns the number of set bits in the input bitmap.
BitmapOrAgg - Returns a bitmap that is the bitwise OR of all of the bitmaps from the input column.
BitOr - Aggregate function: returns the bitwise OR of all non-null input values, or null if none.
BitwiseNot - Computes bitwise not.
BitwiseNOT - Computes bitwise not.
BitXor - Aggregate function: returns the bitwise XOR of all non-null input values, or null if none.
BoolAnd - Aggregate function: returns true if all values of `col` are true.
BoolOr - Aggregate function: returns true if at least one value of `col` is true.
Bround - Round the given value to `scale` decimal places using HALF_EVEN rounding mode if `scale` >= 0 or at integral part when `scale` < 0.
Btrim - Remove the leading and trailing `trim` characters from `str`.
CallFunction - Call a SQL function.
Cardinality - Collection function: returns the length of the array or map stored in the column.
Cbrt - Computes the cube-root of the given value.
Ceil - Computes the ceiling of the given value.
Ceiling - Computes the ceiling of the given value.
Char - Returns the ASCII character having the binary equivalent to `col`.
CharacterLength - Returns the character length of string data or number of bytes of binary data.
CharLength - Returns the character length of string data or number of bytes of binary data.
Coalesce - Returns the first column that is not null.
CollectList - Aggregate function: returns a list of objects with duplicates.
CollectSet - Aggregate function: returns a set of objects with duplicate elements eliminated.
Concat - Concatenates multiple input columns together into a single column.
ConcatWs - Concatenates multiple input string columns together into a single string column, using the given separator.
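Concat and ConcatWs differ in more than the separator: ConcatWs skips null inputs, while Concat returns null if any input is null, which usually makes ConcatWs the safer choice for display strings. A sketch under the same assumptions as above (Col is also assumed):

```go
// Sketch only: Col and ConcatWs's variadic signature are assumed.
func displayName() sc.Column {
	// "Doe, Jane"; a null first_name is skipped rather than
	// turning the whole result null.
	return sc.ConcatWs(", ", sc.Col("last_name"), sc.Col("first_name"))
}
```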
Contains - Returns true if the string `left` contains the string `right`.
Conv - Convert a number in a string column from one base to another.
Corr - Returns a new column for the Pearson correlation coefficient of `col1` and `col2`.
Cos - Computes cosine of the input column.
Cosh - Computes hyperbolic cosine of the input column.
Cot - Computes cotangent of the input column.
Count - Aggregate function: returns the number of items in a group.
CountDistinct - Returns a new column for the distinct count of `col` or `cols`.
CountIf - Returns the number of `TRUE` values for the `col`.
CountMinSketch - Returns a count-min sketch of a column with the given eps, confidence and seed.
CovarPop - Returns a new column for the population covariance of `col1` and `col2`.
CovarSamp - Returns a new column for the sample covariance of `col1` and `col2`.
Crc32 - Calculates the cyclic redundancy check value (CRC32) of a binary column and returns the value as a bigint.
CreateMap - Creates a new map column.
Csc - Computes cosecant of the input column.
CumeDist - Window function: returns the cumulative distribution of values within a window partition, i.e. the fraction of rows that are below the current row.
Curdate - Returns the current date at the start of query evaluation as a `DateType` column.
CurrentCatalog - Returns the current catalog.
CurrentDatabase - Returns the current database.
CurrentDate - Returns the current date at the start of query evaluation as a `DateType` column.
CurrentSchema - Returns the current database.
CurrentTimestamp - Returns the current timestamp at the start of query evaluation as a `TimestampType` column.
CurrentTimezone - Returns the current session local timezone.
CurrentUser - Returns the current user.
Dateadd - Returns the date that is `days` days after `start`.
DateAdd - Returns the date that is `days` days after `start`.
Datediff - Returns the number of days from `start` to `end`.
DateDiff - Returns the number of days from `start` to `end`.
DateFormat - Converts a date/timestamp/string to a value of string in the format specified by the date format given by the second argument.
DateFromUnixDate - Create date from the number of `days` since 1970-01-01.
Datepart - Extracts a part of the date/timestamp or interval source.
DatePart - Extracts a part of the date/timestamp or interval source.
DateSub - Returns the date that is `days` days before `start`.
DateTrunc - Returns timestamp truncated to the unit specified by the format.
Day - Extract the day of the month of a given date/timestamp as integer.
Dayofmonth - Extract the day of the month of a given date/timestamp as integer.
Dayofweek - Extract the day of the week of a given date/timestamp as integer.
Dayofyear - Extract the day of the year of a given date/timestamp as integer.
Days - Partition transform function: A transform for timestamps and dates to partition data into days.
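The date helpers cover the usual arithmetic: DateAdd and DateSub shift by days, Datediff measures spans, and DateFormat renders the result. A sketch, same assumptions as above (integer day counts are assumed to be accepted directly):

```go
// Sketch only: argument types are assumed to follow the PySpark API.
func dueDateCols() (due, daysLeft, label sc.Column) {
	due = sc.DateAdd(sc.Col("ordered_at"), 30)    // ordered_at + 30 days
	daysLeft = sc.Datediff(due, sc.CurrentDate()) // days from today until due
	label = sc.DateFormat(due, "yyyy-MM-dd")      // render for display
	return
}
```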
Decode - Computes the first argument into a string from a binary using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
Degrees - Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
DenseRank - Window function: returns the rank of rows within a window partition, without any gaps.
Desc - Returns a sort expression based on the descending order of the given column name.
DescNullsFirst - Returns a sort expression based on the descending order of the given column name, and null values appear before non-null values.
DescNullsLast - Returns a sort expression based on the descending order of the given column name, and null values appear after non-null values.
E - Returns Euler's number.
Elt - Returns the `n`-th input, e.g., returns `input2` when `n` is 2.
Encode - Computes the first argument into a binary from a string using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
Endswith - Returns true if `str` ends with `suffix`.
EqualNull - Returns the same result as the EQUAL(=) operator for non-null operands, but returns true if both are null and false if one of them is null.
Every - Aggregate function: returns true if all values of `col` are true.
Exp - Computes the exponential of the given value.
Explode - Returns a new row for each element in the given array or map.
ExplodeOuter - Returns a new row for each element in the given array or map.
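The difference between Explode and ExplodeOuter shows up on empty or null arrays: Explode drops those rows entirely, while ExplodeOuter keeps them with a null element. A sketch, same assumptions as above (Select and Alias are assumed method names):

```go
// Sketch only: Select and Alias are assumed, modeled on PySpark.
func itemRows(df sc.DataFrame) sc.DataFrame {
	return df.Select(
		sc.Col("order_id"),
		sc.ExplodeOuter(sc.Col("items")).Alias("item"), // null item for empty carts
	)
}
```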
Expm1 - Computes the exponential of the given value minus one.
Extract - Extracts a part of the date/timestamp or interval source.
Factorial - Computes the factorial of the given value.
FindInSet - Returns the index (1-based) of the given string (`str`) in the comma-delimited list (`strArray`).
Flatten - Collection function: creates a single array from an array of arrays.
Floor - Computes the floor of the given value.
FormatNumber - Formats the number X to a format like '#,###,###.##', rounded to d decimal places with HALF_EVEN round mode, and returns the result as a string.
FormatString - Formats the arguments in printf-style and returns the result as a string column.
FromUnixtime - Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the given format.
FromUtcTimestamp - This is a common function for databases supporting TIMESTAMP WITHOUT TIMEZONE: it takes a timezone-agnostic timestamp, interprets it as a time in UTC, and renders it as a timestamp in the given time zone.
Get - Collection function: Returns element of array at given (0-based) index.
Getbit - Returns the value of the bit (0 or 1) at the specified position.
GetJsonObject - Extracts json object from a json string based on json `path` specified, and returns json string of the extracted json object.
Greatest - Returns the greatest value of the list of column names, skipping null values.
Grouping - Aggregate function: indicates whether a specified column in a GROUP BY list is aggregated or not, returns 1 for aggregated or 0 for not aggregated in the result set.
GroupingId - Aggregate function: returns the level of grouping, equal to (grouping(c1) << (n-1)) + (grouping(c2) << (n-2)) + ... + grouping(cn).
Hash - Calculates the hash code of given columns, and returns the result as an int column.
Hex - Computes the hex value of the given column, which could be StringType, BinaryType, IntegerType or LongType.
HistogramNumeric - Computes a histogram on numeric `col` using `nb` bins.
HllSketchEstimate - Returns the estimated number of unique values given the binary representation of a Datasketches HllSketch.
Hour - Extract the hours of a given timestamp as integer.
Hours - Partition transform function: A transform for timestamps to partition data into hours.
Hypot - Computes `sqrt(a^2 + b^2)` without intermediate overflow or underflow.
Ifnull - Returns `col2` if `col1` is null, or `col1` otherwise.
Initcap - Translate the first letter of each word to upper case in the sentence.
Inline - Explodes an array of structs into a table.
InlineOuter - Explodes an array of structs into a table.
InputFileBlockLength - Returns the length of the block being read, or -1 if not available.
InputFileBlockStart - Returns the start offset of the block being read, or -1 if not available.
InputFileName - Creates a string column for the file name of the current Spark task.
Instr - Locate the position of the first occurrence of substr column in the given string.
Isnan - An expression that returns true if the column is NaN.
Isnotnull - Returns true if `col` is not null, or false otherwise.
Isnull - An expression that returns true if the column is null.
JavaMethod - Calls a method with reflection.
JsonArrayLength - Returns the number of elements in the outermost JSON array.
JsonObjectKeys - Returns all the keys of the outermost JSON object as an array.
JsonTuple - Creates a new row for a json column according to the given field names.
Kurtosis - Aggregate function: returns the kurtosis of the values in a group.
LastDay - Returns the last day of the month which the given date belongs to.
Lcase - Returns `str` with all characters changed to lowercase.
Least - Returns the least value of the list of column names, skipping null values.
Left - Returns the leftmost `len` characters from the string `str` (`len` may be string type); if `len` is less than or equal to 0, the result is an empty string.
Length - Computes the character length of string data or number of bytes of binary data.
Levenshtein - Computes the Levenshtein distance of the two given strings.
Ln - Returns the natural logarithm of the argument.
Localtimestamp - Returns the current timestamp without time zone at the start of query evaluation as a timestamp without time zone column.
Locate - Locate the position of the first occurrence of substr in a string column, after position pos.
Log - Returns the first argument-based logarithm of the second argument.
Log10 - Computes the logarithm of the given value in Base 10.
Log1p - Computes the natural logarithm of the "given value plus one".
Log2 - Returns the base-2 logarithm of the argument.
Lower - Converts a string expression to lower case.
Lpad - Left-pad the string column to width `len` with `pad`.
Ltrim - Trim the spaces from left end for the specified string value.
MakeDate - Returns a column with a date built from the year, month and day columns.
MakeDtInterval - Make DayTimeIntervalType duration from days, hours, mins and secs.
MakeInterval - Make interval from years, months, weeks, days, hours, mins and secs.
MakeTimestamp - Create timestamp from years, months, days, hours, mins, secs and timezone fields.
MakeTimestampLtz - Create the current timestamp with local time zone from years, months, days, hours, mins, secs and timezone fields.
MakeTimestampNtz - Create local date-time from years, months, days, hours, mins, secs fields.
MakeYmInterval - Make year-month interval from years, months.
MapConcat - Returns the union of all the given maps.
MapEntries - Collection function: Returns an unordered array of all entries in the given map.
MapFromArrays - Creates a new map from two arrays.
MapFromEntries - Collection function: Converts an array of entries (key value struct types) to a map of values.
MapKeys - Collection function: Returns an unordered array containing the keys of the map.
MapValues - Collection function: Returns an unordered array containing the values of the map.
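The map helpers convert between maps and arrays of entries: MapFromArrays zips a keys column with a values column, and MapKeys/MapValues/MapEntries go the other way. A sketch, same assumptions as above:

```go
// Sketch only: signatures assumed to mirror the PySpark API.
func attrCols() (attrs, names sc.Column) {
	attrs = sc.MapFromArrays(sc.Col("attr_names"), sc.Col("attr_values"))
	names = sc.MapKeys(attrs) // unordered array of the map's keys
	return
}
```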
Mask - Masks the given string value.
Max - Aggregate function: returns the maximum value of the expression in a group.
MaxBy - Returns the value associated with the maximum value of ord.
Md5 - Calculates the MD5 digest and returns the value as a 32 character hex string.
Mean - Aggregate function: returns the average of the values in a group.
Median - Returns the median of the values in a group.
Min - Aggregate function: returns the minimum value of the expression in a group.
MinBy - Returns the value associated with the minimum value of ord.
Minute - Extract the minutes of a given timestamp as integer.
Mode - Returns the most frequent value in a group.
MonotonicallyIncreasingId - A column that generates monotonically increasing 64-bit integers.
Month - Extract the month of a given date/timestamp as integer.
Months - Partition transform function: A transform for timestamps and dates to partition data into months.
NamedStruct - Creates a struct with the given field names and values.
Nanvl - Returns col1 if it is not NaN, or col2 if col1 is NaN.
Negate - Returns the negative value.
Negative - Returns the negative value.
NextDay - Returns the first date which is later than the value of the date column based on second `week day` argument.
Now - Returns the current timestamp at the start of query evaluation.
Ntile - Window function: returns the ntile group id (from 1 to `n` inclusive) in an ordered window partition.
Nullif - Returns null if `col1` equals to `col2`, or `col1` otherwise.
Nvl - Returns `col2` if `col1` is null, or `col1` otherwise.
Nvl2 - Returns `col2` if `col1` is not null, or `col3` otherwise.
OctetLength - Calculates the byte length for the specified string column.
Overlay - Overlay the specified portion of `src` with `replace`, starting from byte position `pos` of `src` and proceeding for `len` bytes.
ParseUrl - Extracts a part from a URL.
PercentRank - Window function: returns the relative rank (i.e. percentile) of rows within a window partition.
Pi - Returns Pi.
Pmod - Returns the positive value of dividend mod divisor.
Posexplode - Returns a new row for each element with position in the given array or map.
PosexplodeOuter - Returns a new row for each element with position in the given array or map.
Position - Returns the position of the first occurrence of `substr` in `str` after position `start`.
Positive - Returns the value.
Pow - Returns the value of the first argument raised to the power of the second argument.
Printf - Formats the arguments in printf-style and returns the result as a string column.
Product - Aggregate function: returns the product of the values in a group.
Quarter - Extract the quarter of a given date/timestamp as integer.
Radians - Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
Rand - Generates a random column with independent and identically distributed (i.i.d.) samples uniformly distributed in [0.0, 1.0).
Randn - Generates a column with independent and identically distributed (i.i.d.) samples from the standard normal distribution.
Rank - Window function: returns the rank of rows within a window partition.
Reflect - Calls a method with reflection.
Regexp - Returns true if `str` matches the Java regex `regexp`, or false otherwise.
RegexpCount - Returns a count of the number of times that the Java regex pattern `regexp` is matched in the string `str`.
RegexpExtract - Extract a specific group matched by the Java regex `regexp`, from the specified string column.
RegexpLike - Returns true if `str` matches the Java regex `regexp`, or false otherwise.
RegexpSubstr - Returns the substring that matches the Java regex `regexp` within the string `str`.
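The Regexp* family uses Java regex syntax throughout: RegexpLike tests, RegexpExtract pulls a capture group by index, RegexpCount tallies matches. A sketch, same assumptions as above (Lit and the exact pattern-argument types are additional assumptions):

```go
// Sketch only: Lit and the pattern-argument types are assumed.
func emailCols() (isEmail, host sc.Column) {
	isEmail = sc.RegexpLike(sc.Col("contact"), sc.Lit(`^[^@]+@[^@]+$`))
	host = sc.RegexpExtract(sc.Col("contact"), `^[^@]+@([^@]+)$`, 1) // capture group 1
	return
}
```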
RegrAvgx - Aggregate function: returns the average of the independent variable for non-null pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
RegrAvgy - Aggregate function: returns the average of the dependent variable for non-null pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
RegrCount - Aggregate function: returns the number of non-null number pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
RegrIntercept - Aggregate function: returns the intercept of the univariate linear regression line for non-null pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
RegrR2 - Aggregate function: returns the coefficient of determination for non-null pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
RegrSlope - Aggregate function: returns the slope of the linear regression line for non-null pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
RegrSxx - Aggregate function: returns REGR_COUNT(y, x) * VAR_POP(x) for non-null pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
RegrSxy - Aggregate function: returns REGR_COUNT(y, x) * COVAR_POP(y, x) for non-null pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
RegrSyy - Aggregate function: returns REGR_COUNT(y, x) * VAR_POP(y) for non-null pairs in a group, where `y` is the dependent variable and `x` is the independent variable.
Repeat - Repeats a string column n times, and returns it as a new string column.
Replace - Replaces all occurrences of `search` with `replace`.
Reverse - Collection function: returns a reversed string or an array with reverse order of elements.
Right - Returns the rightmost `len` characters from the string `str` (`len` may be string type); if `len` is less than or equal to 0, the result is an empty string.
Rint - Returns the double value that is closest in value to the argument and is equal to a mathematical integer.
Rlike - Returns true if `str` matches the Java regex `regexp`, or false otherwise.
Round - Round the given value to `scale` decimal places using HALF_UP rounding mode if `scale` >= 0 or at integral part when `scale` < 0.
RowNumber - Window function: returns a sequential number starting at 1 within a window partition.
Rpad - Right-pad the string column to width `len` with `pad`.
Rtrim - Trim the spaces from right end for the specified string value.
Sec - Computes secant of the input column.
Second - Extract the seconds of a given date as integer.
Sentences - Splits a string into arrays of sentences, where each sentence is an array of words.
Sequence - Generate a sequence of integers from `start` to `stop`, incrementing by `step`.
Sha - Returns a sha1 hash value as a hex string of the `col`.
Sha1 - Returns the hex string result of SHA-1.
Sha2 - Returns the hex string result of SHA-2 family of hash functions (SHA-224, SHA-256, SHA-384, and SHA-512).
Shiftleft - Shift the given value numBits left.
ShiftLeft - Shift the given value numBits left.
Shiftright - (Signed) shift the given value numBits right.
ShiftRight - (Signed) shift the given value numBits right.
Shiftrightunsigned - Unsigned shift the given value numBits right.
ShiftRightUnsigned - Unsigned shift the given value numBits right.
Shuffle - Collection function: Generates a random permutation of the given array.
Sign - Computes the signum of the given value.
Signum - Computes the signum of the given value.
Sin - Computes sine of the input column.
Sinh - Computes hyperbolic sine of the input column.
Size - Collection function: returns the length of the array or map stored in the column.
Skewness - Aggregate function: returns the skewness of the values in a group.
Slice - Collection function: returns an array containing all the elements in `x` from index `start` (array indices start at 1, or from the end if `start` is negative) with the specified `length`.
Some - Aggregate function: returns true if at least one value of `col` is true.
Soundex - Returns the SoundEx encoding for a string.
SparkPartitionId - A column for partition ID.
Split - Splits str around matches of the given pattern.
SplitPart - Splits `str` by delimiter and return requested part of the split (1-based).
Sqrt - Computes the square root of the specified float value.
Stack - Separates `col1`, ..., `colk` into `n` rows.
Startswith - Returns true if `str` starts with `prefix`.
Std - Aggregate function: alias for stddev_samp.
Stddev - Aggregate function: alias for stddev_samp.
StddevPop - Aggregate function: returns population standard deviation of the expression in a group.
StddevSamp - Aggregate function: returns the unbiased sample standard deviation of the expression in a group.
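Std and Stddev are aliases for StddevSamp (the unbiased sample estimator), with StddevPop as the population variant; Variance below likewise aliases VarSamp. A sketch of grouped statistics, same assumptions as above (GroupBy and Agg are assumed method names):

```go
// Sketch only: GroupBy and Agg are assumed, modeled on PySpark.
func latencyStats(df sc.DataFrame) sc.DataFrame {
	return df.GroupBy(sc.Col("endpoint")).Agg(
		sc.Avg(sc.Col("latency_ms")),        // mean per endpoint
		sc.StddevSamp(sc.Col("latency_ms")), // sample stddev per endpoint
	)
}
```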
StrToMap - Creates a map after splitting the text into key/value pairs using delimiters.
Struct - Creates a new struct column.
Substr - Returns the substring of `str` that starts at `pos` and is of length `len`, or the slice of byte array that starts at `pos` and is of length `len`.
Substring - Substring starts at `pos` and is of length `len` when str is String type or returns the slice of byte array that starts at `pos` in byte and is of length `len` when str is Binary type.
SubstringIndex - Returns the substring from string str before count occurrences of the delimiter delim.
Sum - Aggregate function: returns the sum of all values in the expression.
SumDistinct - Aggregate function: returns the sum of distinct values in the expression.
Tan - Computes tangent of the input column.
Tanh - Computes hyperbolic tangent of the input column.
TimestampMicros - Creates timestamp from the number of microseconds since UTC epoch.
TimestampMillis - Creates timestamp from the number of milliseconds since UTC epoch.
TimestampSeconds - Converts the number of seconds from the Unix epoch (1970-01-01T00:00:00Z) to a timestamp.
ToBinary - Converts the input `col` to a binary value based on the supplied `format`.
ToChar - Convert `col` to a string based on the `format`.
ToDate - Converts a column into DateType using the optionally specified format.
ToDegrees - Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
ToNumber - Convert string 'col' to a number based on the string format 'format'.
ToRadians - Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
ToTimestamp - Converts a column into TimestampType using the optionally specified format.
ToTimestampLtz - Parses the `timestamp` with the `format` to a timestamp with local time zone.
ToTimestampNtz - Parses the `timestamp` with the `format` to a timestamp without time zone.
ToUnixTimestamp - Returns the UNIX timestamp of the given time.
ToUtcTimestamp - This is a common function for databases supporting TIMESTAMP WITHOUT TIMEZONE: it takes a timezone-agnostic timestamp, interprets it as a time in the given time zone, and renders it as a timestamp in UTC.
ToVarchar - Convert `col` to a string based on the `format`.
Translate - Translates any character in `srcCol` that appears in `matching` to the corresponding character in `replace`.
Trim - Trim the spaces from both ends for the specified string column.
Trunc - Returns date truncated to the unit specified by the format.
TryAdd - Returns the sum of `left` and `right`, and the result is null on overflow.
TryAesDecrypt - This is a special version of `aes_decrypt` that performs the same operation, but returns a NULL value instead of raising an error if the decryption cannot be performed.
TryAvg - Returns the mean calculated from values of a group and the result is null on overflow.
TryDivide - Returns `dividend`/`divisor`.
TryElementAt - Returns the element of the array at the given (1-based) index, or the value for the given key in a map; returns null instead of raising an error on an invalid index.
TryMultiply - Returns `left`*`right` and the result is null on overflow.
TrySubtract - Returns `left`-`right` and the result is null on overflow.
TrySum - Returns the sum calculated from values of a group and the result is null on overflow.
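The Try* arithmetic variants trade runtime errors for nulls: overflow, invalid input, or division by zero yields null instead of failing the query. A sketch, same assumptions as above:

```go
// Sketch only: signatures assumed to mirror the PySpark API.
func safeMath() (total, ratio sc.Column) {
	total = sc.TryAdd(sc.Col("a"), sc.Col("b"))        // null on overflow
	ratio = sc.TryDivide(sc.Col("num"), sc.Col("den")) // null on divide-by-zero
	return
}
```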
TryToBinary - This is a special version of `to_binary` that performs the same operation, but returns a NULL value instead of raising an error if the conversion cannot be performed.
TryToNumber - Convert string 'col' to a number based on the string format `format`.
TryToTimestamp - Parses the `col` with the `format` to a timestamp.
Typeof - Returns a DDL-formatted type string for the data type of the input.
Ucase - Returns `str` with all characters changed to uppercase.
Unbase64 - Decodes a BASE64 encoded string column and returns it as a binary column.
Unhex - Inverse of hex.
UnixDate - Returns the number of days since 1970-01-01.
UnixMicros - Returns the number of microseconds since 1970-01-01 00:00:00 UTC.
UnixMillis - Returns the number of milliseconds since 1970-01-01 00:00:00 UTC.
UnixSeconds - Returns the number of seconds since 1970-01-01 00:00:00 UTC.
UnixTimestamp - Converts a time string with the given pattern ('yyyy-MM-dd HH:mm:ss' by default) to a Unix timestamp in seconds, using the default timezone and the default locale; returns null on failure.
Upper - Converts a string expression to upper case.
UrlDecode - Decodes a `str` in 'application/x-www-form-urlencoded' format using a specific encoding scheme.
UrlEncode - Translates a string into 'application/x-www-form-urlencoded' format using a specific encoding scheme.
User - Returns the current user.
Variance - Aggregate function: alias for var_samp.
VarPop - Aggregate function: returns the population variance of the values in a group.
VarSamp - Aggregate function: returns the unbiased sample variance of the values in a group.
Version - Returns the Spark version.
Weekday - Returns the day of the week for date/timestamp (0 = Monday, 1 = Tuesday, ..., 6 = Sunday).
Weekofyear - Extract the week number of a given date as integer.
WidthBucket - Returns the bucket number into which the value of this expression would fall after being evaluated.
Window - Bucketize rows into one or more time windows given a timestamp specifying column.
WindowTime - Computes the event time from a window column.
Xpath - Returns a string array of values within the nodes of xml that match the XPath expression.
XpathBoolean - Returns true if the XPath expression evaluates to true, or if a matching node is found.
XpathDouble - Returns a double value, the value zero if no match is found, or NaN if a match is found but the value is non-numeric.
XpathFloat - Returns a float value, the value zero if no match is found, or NaN if a match is found but the value is non-numeric.
XpathInt - Returns an integer value, or zero if no match is found or if a match is found but the value is non-numeric.
XpathLong - Returns a long integer value, or zero if no match is found or if a match is found but the value is non-numeric.
XpathNumber - Returns a double value, the value zero if no match is found, or NaN if a match is found but the value is non-numeric.
XpathShort - Returns a short integer value, or zero if no match is found or if a match is found but the value is non-numeric.
XpathString - Returns the text contents of the first xml node that matches the XPath expression.
Xxhash64 - Calculates the hash code of given columns using the 64-bit variant of the xxHash algorithm, and returns the result as a long column.
Year - Extract the year of a given date/timestamp as integer.
Years - Partition transform function: A transform for timestamps and dates to partition data into years.