# AI Util
A unified platform to build apps with AI.
## Features
- Supported AI Providers: OpenAI, Replicate
- Conversation Controls
- Token Management + Limits
- Resource Management
## Installation
```bash
go get github.com/ztkent/ai-util
```
## Example
```go
client, _ := aiutil.NewAIClient("openai", "gpt-3.5-turbo", 0.5)
conversation := aiutil.NewConversation("You are an example assistant.", 100000, true)
// ctx is a context.Context, e.g. one created with context.WithTimeout.
response, _ := client.SendCompletionRequest(ctx, conversation, "Say hello!")
```
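A more complete, runnable version of the same flow is sketched below. It assumes the return types implied by the snippet above (an error as the second return value from `NewAIClient` and `SendCompletionRequest`, and a printable response); check the package documentation for the exact signatures.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	aiutil "github.com/ztkent/ai-util"
)

func main() {
	// Create a client for the chosen provider, model, and temperature.
	client, err := aiutil.NewAIClient("openai", "gpt-3.5-turbo", 0.5)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	// Start a conversation with a system prompt, a token limit, and the
	// boolean flag shown in the quickstart snippet.
	conversation := aiutil.NewConversation("You are an example assistant.", 100000, true)

	// Bound the request with a timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	response, err := client.SendCompletionRequest(ctx, conversation, "Say hello!")
	if err != nil {
		log.Fatalf("completion request failed: %v", err)
	}
	fmt.Println(response)
}
```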
## Required API Keys

| Service | Environment Variable |
|---|---|
| OpenAI | `OPENAI_API_KEY` |
| Replicate | `REPLICATE_API_TOKEN` |
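The library checks that the right environment variable is set for the chosen provider (see the Functions section below). A minimal pre-flight check of your own might look like this; `requireEnv` is a hypothetical helper, not part of ai-util.

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// requireEnv is a hypothetical helper (not part of ai-util) that fails fast
// when a provider's API key is missing from the environment.
func requireEnv(name string) string {
	value := os.Getenv(name)
	if value == "" {
		log.Fatalf("%s is not set", name)
	}
	return value
}

func main() {
	// Variable names come from the table above.
	openAIKey := requireEnv("OPENAI_API_KEY")
	fmt.Println("OpenAI key loaded, length:", len(openAIKey))
}
```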
## Available Models

### OpenAI Models

| Model Name | Model Identifier | Cost (IN/OUT per 1M tokens) |
|---|---|---|
| GPT-3.5 Turbo | `gpt-3.5-turbo` | $0.50 / $1.50 |
| GPT-4 | `gpt-4` | $30.00 / $60.00 |
| GPT-4 Turbo | `gpt-4-turbo` | $10.00 / $30.00 |
### Replicate Models

| Model Name | Model Identifier | Cost (IN/OUT per 1M tokens) |
|---|---|---|
| Meta Llama 2-70b | `meta/llama-2-70b` | $0.65 / $2.75 |
| Meta Llama 2-13b | `meta/llama-2-13b` | $0.10 / $0.50 |
| Meta Llama 2-7b | `meta/llama-2-7b` | $0.05 / $0.25 |
| Meta Llama 2-13b Chat | `meta/llama-2-13b-chat` | $0.10 / $0.50 |
| Meta Llama 2-70b Chat | `meta/llama-2-70b-chat` | $0.65 / $2.75 |
| Meta Llama 2-7b Chat | `meta/llama-2-7b-chat` | $0.05 / $0.25 |
| Meta Llama 3-8b | `meta/meta-llama-3-8b` | $0.05 / $0.25 |
| Meta Llama 3-70b | `meta/meta-llama-3-70b` | $0.65 / $2.75 |
| Meta Llama 3-8b Instruct | `meta/meta-llama-3-8b-instruct` | $0.05 / $0.25 |
| Meta Llama 3-70b Instruct | `meta/meta-llama-3-70b-instruct` | $0.65 / $2.75 |
| Mistral 7B | `mistralai/mistral-7b-v0.1` | $0.05 / $0.25 |
| Mistral 7B Instruct | `mistralai/mistral-7b-instruct-v0.2` | $0.05 / $0.25 |
| Mixtral 8x7B Instruct | `mistralai/mixtral-8x7b-instruct-v0.1` | $0.30 / $1.00 |
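Any identifier from these tables can be passed to `NewAIClient`. A hedged sketch, continuing from the quickstart above and assuming the provider string for Replicate-hosted models is `"replicate"`:

```go
// Select a Replicate-hosted model by provider name and model identifier
// from the table above; the temperature value here is arbitrary.
client, err := aiutil.NewAIClient("replicate", "meta/meta-llama-3-8b-instruct", 0.7)
if err != nil {
	log.Fatal(err)
}
// Use the client exactly as in the OpenAI example.
_ = client
```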
## Functions

The package's documented functions cover the following behaviors:
- Generate a resource message based on the path and type, and return the message to append to the conversation.
- Estimate the number of tokens in a message using the OpenAI tokenizer (see the sketch after this list).
- Ensure the required environment variables are set for the given source.
- Determine whether the user's input contains a resource command (there is usually some limit to the number of tokens).
- Start a new conversation with a system prompt. The system prompt defines the initial context of the conversation, including the persona of the bot and any information you want to provide to the model.
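The token estimate above is described as using the OpenAI tokenizer. The sketch below shows the same idea with the third-party `tiktoken-go` tokenizer; it illustrates the technique and is not necessarily how ai-util implements it.

```go
package main

import (
	"fmt"
	"log"

	tiktoken "github.com/pkoukk/tiktoken-go"
)

func main() {
	// Load the tokenizer used by gpt-3.5-turbo (cl100k_base).
	enc, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
	if err != nil {
		log.Fatal(err)
	}

	// Encode the message and count tokens to estimate usage against a limit.
	message := "Say hello!"
	tokens := enc.Encode(message, nil, nil)
	fmt.Printf("estimated tokens: %d\n", len(tokens))
}
```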
## Constants

Model identifier constants are defined for the OpenAI models and for the open-source models available via Replicate (see the model tables above).
## Structs
The OAIClient struct is a wrapper around the OpenAI client.