github.com/dataleap-labs/llm
module package
v0.0.0-20250225232307-7b226075a081
Repository: https://github.com/dataleap-labs/llm.git
Documentation: pkg.go.dev

# README

The purpose of this project is to provide a common interface for LLMs (Claude Sonnet, GPT-4o, etc.) across different API providers (Anthropic, Bedrock, Vertex AI, OpenAI, ...).

Currently we support the following LLMs and API providers:

  • Claude models on Anthropic and Vertex (Google)
  • OpenAI models on OpenAI and Azure
  • Gemini models on Vertex

Examples

Simple text message

openAIKey := "abc"
openai := llm.NewOpenAILLM(openAIKey)

systemPrompt := "You only use Emojis."
textRequest := llm.ChatCompletionRequest{
		SystemPrompt: &systemPrompt,
		Model:        llm.ModelGPT4o,
		Messages: []llm.InputMessage{
			{
				Role: llm.RoleUser,
				MultiContent: []llm.ContentPart{
					{
						Type: llm.ContentTypeText,
						Text: `What is the purpose of life?`,
					},
				},
			},
		},
		JSONMode:    false,
		MaxTokens:   1000,
		Temperature: 0,
	}

response, err := openai.CreateChatCompletion(context.Background(), textRequest)
if err != nil {
	panic(err)
}

fmt.Println(response)

Initialize Claude, Gemini, and OpenAI clients

// OpenAI model from OpenAI API
openAIKey := ""
openai := llm.NewOpenAILLM(openAIKey)

// Claude model from Anthropic API
anthropicKey := ""
claude := llm.NewAnthropicLLM(anthropicKey)

// Gemini model from Vertex API
geminiKey := ""
gemini, err := llm.NewGeminiLLM(geminiKey)
if err != nil {
    panic(err)
}

// Claude model from Vertex API
// read google credentials from file
credBytes, err := os.ReadFile("google.json")
if err != nil {
    panic(err)
}

claudeVertex := llm.NewVertexLLM(credBytes, "project-id", "location")

Message with image input

openAIKey := ""
openai := llm.NewOpenAILLM(openAIKey)


// read images from images/dogs.png and images/cats.png and convert to base64
imageBytesDogs, _ := os.ReadFile("images/dogs.png")
imageBytesCats, _ := os.ReadFile("images/cats.png")


imageBase64Dogs := base64.StdEncoding.EncodeToString(imageBytesDogs)
imageBase64Cats := base64.StdEncoding.EncodeToString(imageBytesCats)

systemPrompt := "You are a cat and speak in cat language."
imageRequest := llm.ChatCompletionRequest{
	SystemPrompt: &systemPrompt,
	Model:        llm.ModelGPT4o,
	Messages: []llm.InputMessage{
		{
			Role: llm.RoleUser,
			MultiContent: []llm.ContentPart{
				{
					Type: llm.ContentTypeText,
					Text: "Image number 1",
				},
				{
					Type:      llm.ContentTypeImage,
					Data:      imageBase64Cats,
					MediaType: "image/png",
				},
				{
					Type: llm.ContentTypeText,
					Text: "Image number 2",
				},
				{
					Type:      llm.ContentTypeImage,
					Data:      imageBase64Dogs,
					MediaType: "image/png",
				},
				{
					Type: llm.ContentTypeText,
					Text: `Compare the two images`,
				},
			},
		},
	},
	JSONMode:    false,
	MaxTokens:   1000,
	Temperature: 0,
}

response, err := openai.CreateChatCompletion(context.Background(), imageRequest)
if err != nil {
	panic(err)
}
fmt.Println(response)

Message with tool use

openAIKey := ""
openai := llm.NewOpenAILLM(openAIKey)

toolRequestWithToolResponse := llm.ChatCompletionRequest{
	Model: llm.ModelGPT4o,
	Messages: []llm.InputMessage{
		{
			Role: llm.RoleUser,
			MultiContent: []llm.ContentPart{
				{
					Type: llm.ContentTypeText,
					Text: "What is the weather in Paris, France?",
				},
			},
		},
		{
			Role: llm.RoleAssistant,
			ToolCalls: []llm.ToolCall{{
				ID:   "123",
				Type: "function",
				Function: llm.ToolCallFunction{
					Name:      "get_weather",
					Arguments: "{\"location\": \"Paris, France\"}",
				},
			}},
		},
		{
			Role:         llm.RoleTool,
			MultiContent: nil,
			ToolResults: []llm.ToolResult{
				{
					ToolCallID:   "123",
					FunctionName: "get_weather",
					Result:       "15 degrees",
					IsError:      false,
				},
			},
		},
	},
	Tools: []llm.Tool{
		{
			Type: "function",
			Function: &llm.Function{
				Name:        "get_weather",
				Description: "Get current temperature for a given location.",
				Parameters: map[string]interface{}{
					"type": "object",
					"properties": map[string]interface{}{
						"location": map[string]interface{}{
							"type":        "string",
							"description": "City and country e.g. Bogotá, Colombia",
						},
					},
					"required":             []string{"location"},
					"additionalProperties": false,
				},
			},
		},
	},
	JSONMode:    false,
	MaxTokens:   1000,
	Temperature: 0,
}

response, _ := openai.CreateChatCompletion(context.Background(), toolRequestWithToolResponse)
fmt.Println(response)

Overview

The package structure is simple and built around two core interfaces. The first is the LLM interface, which is implemented for every LLM model; for example, the file claude.go contains the implementation of that interface for the Claude models.

type LLM interface {
	CreateChatCompletion(ctx context.Context, req ChatCompletionRequest) (ChatCompletionResponse, error)
	CreateChatCompletionStream(ctx context.Context, req ChatCompletionRequest) (ChatCompletionStream, error)
}
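Because every provider implements the same interface, calling code can stay provider-agnostic. The following is a minimal, self-contained sketch of that pattern; it uses simplified stand-in request/response types and a mock provider rather than the package's real ones, purely to illustrate how callers depend only on the interface:

```go
package main

import (
	"context"
	"fmt"
)

// Simplified stand-ins for the package's real request/response types.
type ChatCompletionRequest struct{ Prompt string }
type ChatCompletionResponse struct{ Text string }

// LLM mirrors the package's core interface (streaming omitted for brevity).
type LLM interface {
	CreateChatCompletion(ctx context.Context, req ChatCompletionRequest) (ChatCompletionResponse, error)
}

// mockLLM is a fake provider showing that callers never touch provider SDKs.
type mockLLM struct{ name string }

func (m mockLLM) CreateChatCompletion(ctx context.Context, req ChatCompletionRequest) (ChatCompletionResponse, error) {
	return ChatCompletionResponse{Text: m.name + ": " + req.Prompt}, nil
}

// ask works with any provider that satisfies LLM.
func ask(model LLM, prompt string) (string, error) {
	resp, err := model.CreateChatCompletion(context.Background(), ChatCompletionRequest{Prompt: prompt})
	if err != nil {
		return "", err
	}
	return resp.Text, nil
}

func main() {
	// Swap providers without changing the calling code.
	for _, m := range []LLM{mockLLM{name: "claude"}, mockLLM{name: "gpt"}} {
		out, _ := ask(m, "hello")
		fmt.Println(out)
	}
}
```

In real code, `mockLLM` would be replaced by the clients returned from `llm.NewOpenAILLM`, `llm.NewAnthropicLLM`, etc.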

The second interface is the StreamHandler interface. You can use it to define your own stream handler, e.g. for SSE streaming over a REST API. The streaming folder in the examples directory contains a simple implementation that just prints incoming tokens.

type StreamHandler interface {
	// Called once right before tokens start streaming.
	OnStart()

	// Called whenever the LLM produces a new token (partial output).
	OnToken(token string)

	// If the LLM triggers an external tool call, you can capture it here.
	OnToolCall(toolCall ToolCall)

	// Called when the LLM produces a final complete message.
	OnComplete(message OutputMessage)

	// Called if an error occurs during the streaming process.
	OnError(err error)
}
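A handler that just prints incoming tokens, in the spirit of the repository's streaming example, can be sketched as follows. Note that the ToolCall and OutputMessage types below are simplified stand-ins so the snippet is self-contained; the real types live in the llm package:

```go
package main

import "fmt"

// Simplified stand-ins for the package's real types.
type ToolCall struct{ ID, Name string }
type OutputMessage struct{ Content string }

// StreamHandler mirrors the package's streaming callback interface.
type StreamHandler interface {
	OnStart()
	OnToken(token string)
	OnToolCall(toolCall ToolCall)
	OnComplete(message OutputMessage)
	OnError(err error)
}

// printHandler writes each token to stdout as it arrives.
type printHandler struct{}

func (printHandler) OnStart()                 { fmt.Println("stream started") }
func (printHandler) OnToken(tok string)       { fmt.Print(tok) }
func (printHandler) OnToolCall(tc ToolCall)   { fmt.Printf("\ntool call: %s\n", tc.Name) }
func (printHandler) OnComplete(OutputMessage) { fmt.Println("\ndone") }
func (printHandler) OnError(err error)        { fmt.Println("error:", err) }

func main() {
	// Drive the handler manually to show the callback order; in real use,
	// CreateChatCompletionStream invokes these callbacks for you.
	var h StreamHandler = printHandler{}
	h.OnStart()
	for _, tok := range []string{"Hello", ", ", "world"} {
		h.OnToken(tok)
	}
	h.OnComplete(OutputMessage{Content: "Hello, world"})
}
```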

Roadmap

  • Accurate token counting across models
  • Tests
  • Contribute a Bedrock adapter to https://github.com/liushuangls/go-anthropic so it can also be used here
  • Add support for computer use
  • Better error messages for features that are only supported for some models or API providers (e.g. caching)
  • Batch processing

Credits

OpenAI model integration https://github.com/sashabaranov/go-openai

Anthropic model integration https://github.com/liushuangls/go-anthropic

# Functions

NewAnthropicLLM creates a new Claude LLM client (via Anthropic API).
NewGeminiLLM creates a new Gemini LLM client.
NewOpenAILLM creates a new OpenAI LLM client.
NewVertexLLM creates a new Claude LLM client (via Vertex AI custom integration).

# Constants

ContentTypeImage indicates that a content part is an image.
ContentTypeText indicates that a content part is text.

# Structs

ChatCompletionRequest represents a request for a chat completion.
ChatCompletionResponse represents the response from a chat completion request.
Choice represents a single completion choice.
ClaudeLLM implements the LLM interface for Anthropic's Claude.
Function represents a function definition.
GeminiLLM implements the LLM interface for Google's Gemini.
GeminiOptions contains configuration options for the Gemini model.
Message represents a single message in a conversation.
OpenAILLM implements the LLM interface for OpenAI.
Tool represents a function that can be called by the LLM.
ToolCall represents a tool/function call from the LLM.
Usage represents token usage information.

# Interfaces

ChatCompletionStream represents a streaming chat completion.
LLM defines the interface that all LLM providers must implement.
StreamHandler defines how to handle streaming tokens, tool calls, and completion events from an LLM.

# Type aliases

LLMProvider represents different LLM SDK providers.
Role represents the role of a conversation participant.