# Functions
IsValid reports whether the model cache is still valid (less than one hour old).
Load loads the model cache from the file.
No description provided by the author
Save saves the model cache to the file.
StreamingSSEResponse handles streaming responses from OpenAI's API.
No description provided by the author
WithMaxCompletionTokens specifies the maximum number of completion tokens to generate.
WithMaxTokens specifies the maximum number of tokens to generate.
WithModel specifies which model name to use.
No description provided by the author
WithN sets how many chat completion choices to generate for each input message.
No description provided by the author
WithSeed sets a seed for deterministic sampling.
No description provided by the author
WithTemperature specifies the model temperature, a hyperparameter that regulates the randomness, or creativity, of the AI's responses.
No description provided by the author
WithTopP sets the top-p (nucleus) sampling parameter.
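The `With*` helpers above follow Go's functional-options pattern: each returns a function that mutates a request. The sketch below shows how such options compose; the field names, option signatures, and the `NewRequest` constructor are assumptions for illustration, not the package's actual API.

```go
package main

import "fmt"

// ChatCompletionRequest holds a few illustrative request fields.
type ChatCompletionRequest struct {
	Model       string
	MaxTokens   int
	Temperature float64
	N           int
}

// RequestOption configures a ChatCompletionRequest.
type RequestOption func(*ChatCompletionRequest)

func WithModel(model string) RequestOption {
	return func(r *ChatCompletionRequest) { r.Model = model }
}

func WithMaxTokens(n int) RequestOption {
	return func(r *ChatCompletionRequest) { r.MaxTokens = n }
}

func WithTemperature(t float64) RequestOption {
	return func(r *ChatCompletionRequest) { r.Temperature = t }
}

// NewRequest applies each option in order over sensible defaults.
func NewRequest(opts ...RequestOption) *ChatCompletionRequest {
	req := &ChatCompletionRequest{Model: "default-model", N: 1}
	for _, opt := range opts {
		opt(req)
	}
	return req
}

func main() {
	req := NewRequest(WithModel("gpt-4o"), WithMaxTokens(256), WithTemperature(0.2))
	fmt.Printf("%+v\n", *req)
}
```

Because each option is just a closure over the request, callers pass only the settings they care about and defaults cover the rest.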
# Constants
No description provided by the author
Chat message roles defined by the OpenAI API.
No description provided by the author
Provider constants for all supported providers.
No description provided by the author
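The role constants above are plain strings defined by the OpenAI chat API. The constant names in this sketch are illustrative stand-ins (the package's own names are not shown in this index), but the string values are the ones the API defines.

```go
package main

import "fmt"

// Illustrative role constants; the values match the OpenAI API.
const (
	RoleSystem    = "system"
	RoleUser      = "user"
	RoleAssistant = "assistant"
)

// ChatMessage pairs a role with its content.
type ChatMessage struct {
	Role    string
	Content string
}

func main() {
	conversation := []ChatMessage{
		{Role: RoleSystem, Content: "You are a helpful assistant."},
		{Role: RoleUser, Content: "What is a functional option?"},
	}
	for _, m := range conversation {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```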
# Variables
No description provided by the author
# Structs
No description provided by the author
ChatCompletionRequest represents a request structure for chat completion API.
ChatCompletionResponse represents a response structure for chat completion API.
No description provided by the author
CompletionTokensDetails is a breakdown of the tokens used in a completion.
No description provided by the author
LogProb represents the probability information for a token.
LogProbs is the top-level structure containing the log probability information.
Model represents a language model in the LLM system.
ModelCache represents the cache of models.
No description provided by the author
PromptTokensDetails is a breakdown of the tokens used in the prompt.
Provider represents a provider of LLM services.
No description provided by the author
Usage represents the total token usage per request to OpenAI.
No description provided by the author
# Type aliases
No description provided by the author
Deprecated: use FunctionDefinition instead.
No description provided by the author
ProviderType is a string type for the names of supported LLM providers.
RequestOption is a function that configures a ChatCompletionRequest.
No description provided by the author