# Functions
Segments a text document into segments of Chinese text separated by either punctuation or non-Chinese text.
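
As a rough sketch of what this segmentation step might do, the hypothetical `segment` function below starts a new segment whenever the input switches between Han characters and everything else; punctuation falls on the non-Chinese side because CJK punctuation is not in the `unicode.Han` range. The function name, signature, and `TextSegment` fields are assumptions for this example, not the package's actual API.

```go
package main

import (
	"fmt"
	"unicode"
)

// TextSegment mirrors the struct described in the Structs section: one
// run of either Chinese or non-Chinese text (field names are assumptions).
type TextSegment struct {
	Text    string
	Chinese bool
}

// segment splits text at script boundaries: a new segment starts whenever
// the input switches between Han characters and everything else.
func segment(text string) []TextSegment {
	var segs []TextSegment
	var cur []rune
	chinese := false
	for _, r := range text {
		isHan := unicode.Is(unicode.Han, r)
		if len(cur) > 0 && isHan != chinese {
			// Script class changed: close out the current segment.
			segs = append(segs, TextSegment{string(cur), chinese})
			cur = nil
		}
		chinese = isHan
		cur = append(cur, r)
	}
	if len(cur) > 0 {
		segs = append(segs, TextSegment{string(cur), chinese})
	}
	return segs
}

func main() {
	for _, s := range segment("他说: hello, 世界!") {
		fmt.Printf("chinese=%v text=%q\n", s.Chinese, s.Text)
	}
}
```

Running this prints four segments: the Chinese runs 他说 and 世界, and the non-Chinese runs containing the Latin text and punctuation.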
# Structs
Tokenizes Chinese text using a dictionary.
A text segment that contains either Chinese or non-Chinese text.
A text token contains the results of tokenizing a string.
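
The descriptions above suggest struct shapes roughly like the following sketch. All names and fields here are assumptions for illustration, not the package's declared API.

```go
package tokenizer

// DictTokenizer tokenizes Chinese text by looking up substrings in a
// dictionary keyed by headword (hypothetical field layout).
type DictTokenizer struct {
	Dict map[string]string // headword -> gloss; stand-in for a richer dictionary type
}

// TextSegment holds one run of text that is either entirely Chinese or
// entirely non-Chinese.
type TextSegment struct {
	Text    string // the segment text
	Chinese bool   // true when the segment is Chinese
}

// TextToken holds the result of tokenizing a string: the matched token
// and its dictionary data.
type TextToken struct {
	Token string // the matched token text
	Gloss string // stand-in for dictionary match details
}
```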
# Interfaces
Tokenizes Chinese text.
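
To show how the pieces might fit together, here is a minimal sketch of the interface and a dictionary-backed implementation using greedy longest match. The method name `Tokenize`, its signature, and all field names are assumptions; the real package's dictionary type and matching strategy may differ.

```go
package main

import "fmt"

// Tokenizer is a sketch of the interface; the method name and signature
// are assumptions for this example.
type Tokenizer interface {
	Tokenize(text string) []TextToken
}

// TextToken pairs a matched token with its gloss (hypothetical fields).
type TextToken struct {
	Token string
	Gloss string
}

// DictTokenizer satisfies Tokenizer by greedy longest match against a
// dictionary keyed by headword.
type DictTokenizer struct {
	Dict map[string]string // headword -> gloss
}

// Tokenize scans left to right, taking the longest dictionary entry that
// matches at each position and falling back to a single character.
func (t DictTokenizer) Tokenize(text string) []TextToken {
	runes := []rune(text)
	var tokens []TextToken
	for i := 0; i < len(runes); {
		matched, gloss := 1, ""
		end := i + 4 // longest headword considered; 4 is an arbitrary cap
		if end > len(runes) {
			end = len(runes)
		}
		for j := end; j > i; j-- { // try the longest candidate first
			if g, ok := t.Dict[string(runes[i:j])]; ok {
				matched, gloss = j-i, g
				break
			}
		}
		tokens = append(tokens, TextToken{string(runes[i : i+matched]), gloss})
		i += matched
	}
	return tokens
}

func main() {
	var tk Tokenizer = DictTokenizer{Dict: map[string]string{
		"你好": "hello", "世界": "world",
	}}
	for _, tok := range tk.Tokenize("你好世界") {
		fmt.Printf("%s\t%s\n", tok.Token, tok.Gloss)
	}
}
```

Greedy longest match is a common baseline for dictionary-driven Chinese segmentation; characters absent from the dictionary pass through as single-character tokens with no gloss.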