github.com/sozercan/aikit (module), version 0.16.0
Repository: https://github.com/sozercan/aikit.git
Documentation: pkg.go.dev

# README

AIKit ✨


AIKit is a comprehensive platform for quickly hosting, deploying, building, and fine-tuning large language models (LLMs).

AIKit offers two main capabilities:

  • Inference: AIKit uses LocalAI, which supports a wide range of inference capabilities and formats. LocalAI provides a drop-in replacement REST API that is OpenAI API compatible, so you can use any OpenAI API-compatible client, such as Kubectl AI, Chatbot-UI, and many more, to send requests to open LLMs!

  • Fine-Tuning: AIKit offers an extensible fine-tuning interface. It supports Unsloth for a fast, memory-efficient, and easy fine-tuning experience.

👉 For full documentation, please see the AIKit website!


Quick Start

You can get started with AIKit quickly on your local machine without a GPU!

docker run -d --rm -p 8080:8080 ghcr.io/sozercan/llama3.1:8b

After running this, navigate to http://localhost:8080/chat to access the WebUI!
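The container needs a moment to load the model before it can serve requests. A minimal readiness-poll sketch, assuming the standard OpenAI-compatible `/v1/models` endpoint is exposed (the function name and retry parameters here are illustrative, not part of AIKit):

```python
import time
import urllib.request


def wait_until_ready(base_url: str, attempts: int = 30, delay: float = 1.0) -> bool:
    """Poll the OpenAI-compatible /v1/models endpoint until it responds."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(f"{base_url}/v1/models", timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            # Container is still starting (connection refused / timeout).
            time.sleep(delay)
    return False


if __name__ == "__main__":
    print(wait_until_ready("http://localhost:8080"))
```

Once this returns True, both the WebUI and the API below are ready to use.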

API

AIKit provides an OpenAI API compatible endpoint, so you can use any OpenAI API compatible client to send requests to open LLMs!

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
    "model": "llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "explain kubernetes in a sentence"}]
  }'

Output should be similar to:

{
  // ...
    "model": "llama-3.1-8b-instruct",
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": "Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of applications and services, allowing developers to focus on writing code rather than managing infrastructure."
            }
        }
    ],
  // ...
}

That's it! 🎉 The API is OpenAI-compatible, so this is a drop-in replacement for any OpenAI API-compatible client.
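The same request can be made from code using only the standard library. A sketch of the request/response shapes shown above; the helper names (`build_chat_request`, `extract_reply`, `chat`) are illustrative, not part of AIKit:

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> bytes:
    """Encode an OpenAI-style chat-completion request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")


def extract_reply(response: dict) -> str:
    """Pull the assistant message out of an OpenAI-style response."""
    return response["choices"][0]["message"]["content"]


def chat(base_url: str, model: str, prompt: str) -> str:
    """POST a chat completion to an OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))


if __name__ == "__main__":
    # Assumes a container from the Quick Start is running locally.
    print(chat("http://localhost:8080", "llama-3.1-8b-instruct",
               "explain kubernetes in a sentence"))
```

Any client built for the OpenAI API (the official SDKs included) can be pointed at `http://localhost:8080/v1` the same way.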

Pre-made Models

AIKit comes with pre-made models that you can use out-of-the-box!

If a specific model isn't included, you can always create your own images and host them in a container registry of your choice!

CPU

[!NOTE] AIKit supports both AMD64 and ARM64 CPUs. You can run the same command on either architecture, and Docker will automatically pull the correct image for your CPU.

Depending on your CPU capabilities, AIKit will automatically select the most optimized instruction set.

| Model | Optimization | Parameters | Command | Model Name | License |
|-------|--------------|------------|---------|------------|---------|
| 🦙 Llama 3.2 | Instruct | 1B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/llama3.2:1b` | `llama-3.2-1b-instruct` | Llama |
| 🦙 Llama 3.2 | Instruct | 3B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/llama3.2:3b` | `llama-3.2-3b-instruct` | Llama |
| 🦙 Llama 3.1 | Instruct | 8B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/llama3.1:8b` | `llama-3.1-8b-instruct` | Llama |
| 🦙 Llama 3.3 | Instruct | 70B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/llama3.3:70b` | `llama-3.3-70b-instruct` | Llama |
| Ⓜ️ Mixtral | Instruct | 8x7B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/mixtral:8x7b` | `mixtral-8x7b-instruct` | Apache |
| 🅿️ Phi 3.5 | Instruct | 3.8B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/phi3.5:3.8b` | `phi-3.5-3.8b-instruct` | MIT |
| 🔔 Gemma 2 | Instruct | 2B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/gemma2:2b` | `gemma-2-2b-instruct` | Gemma |
| ⌨️ Codestral 0.1 | Code | 22B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/codestral:22b` | `codestral-22b` | MNLP |
| QwQ | | 32B | `docker run -d --rm -p 8080:8080 ghcr.io/sozercan/qwq:32b` | `qwq-32b-preview` | Apache 2.0 |

NVIDIA CUDA

[!NOTE] To enable GPU acceleration, please see GPU Acceleration.

Please note that the only difference between the CPU and GPU sections is the `--gpus all` flag in the command, which enables GPU acceleration.

| Model | Optimization | Parameters | Command | Model Name | License |
|-------|--------------|------------|---------|------------|---------|
| 🦙 Llama 3.2 | Instruct | 1B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/llama3.2:1b` | `llama-3.2-1b-instruct` | Llama |
| 🦙 Llama 3.2 | Instruct | 3B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/llama3.2:3b` | `llama-3.2-3b-instruct` | Llama |
| 🦙 Llama 3.1 | Instruct | 8B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/llama3.1:8b` | `llama-3.1-8b-instruct` | Llama |
| 🦙 Llama 3.3 | Instruct | 70B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/llama3.3:70b` | `llama-3.3-70b-instruct` | Llama |
| Ⓜ️ Mixtral | Instruct | 8x7B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/mixtral:8x7b` | `mixtral-8x7b-instruct` | Apache |
| 🅿️ Phi 3.5 | Instruct | 3.8B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/phi3.5:3.8b` | `phi-3.5-3.8b-instruct` | MIT |
| 🔔 Gemma 2 | Instruct | 2B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/gemma2:2b` | `gemma-2-2b-instruct` | Gemma |
| ⌨️ Codestral 0.1 | Code | 22B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/codestral:22b` | `codestral-22b` | MNLP |
| QwQ | | 32B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/qwq:32b` | `qwq-32b-preview` | Apache 2.0 |
| 📸 Flux 1 Dev | Text to image | 12B | `docker run -d --rm --gpus all -p 8080:8080 ghcr.io/sozercan/flux1:dev` | `flux-1-dev` | FLUX.1 [dev] Non-Commercial License |

Apple Silicon (experimental)

[!NOTE] To enable GPU acceleration on Apple Silicon, please see the Podman Desktop documentation. For more information, please see GPU Acceleration.

Apple Silicon is an experimental runtime and may change in the future. This runtime is specific to Apple Silicon, and it will not work as expected on other architectures, including Intel Macs.

Only GGUF models are supported on Apple Silicon.

| Model | Optimization | Parameters | Command | Model Name | License |
|-------|--------------|------------|---------|------------|---------|
| 🦙 Llama 3.2 | Instruct | 1B | `podman run -d --rm --device /dev/dri -p 8080:8080 ghcr.io/sozercan/applesilicon/llama3.2:1b` | `llama-3.2-1b-instruct` | Llama |
| 🦙 Llama 3.2 | Instruct | 3B | `podman run -d --rm --device /dev/dri -p 8080:8080 ghcr.io/sozercan/applesilicon/llama3.2:3b` | `llama-3.2-3b-instruct` | Llama |
| 🦙 Llama 3.1 | Instruct | 8B | `podman run -d --rm --device /dev/dri -p 8080:8080 ghcr.io/sozercan/applesilicon/llama3.1:8b` | `llama-3.1-8b-instruct` | Llama |
| 🅿️ Phi 3.5 | Instruct | 3.8B | `podman run -d --rm --device /dev/dri -p 8080:8080 ghcr.io/sozercan/applesilicon/phi3.5:3.8b` | `phi-3.5-3.8b-instruct` | MIT |
| 🔔 Gemma 2 | Instruct | 2B | `podman run -d --rm --device /dev/dri -p 8080:8080 ghcr.io/sozercan/applesilicon/gemma2:2b` | `gemma-2-2b-instruct` | Gemma |

What's next?

👉 For more information, including how to fine-tune models or create your own images, please see the AIKit website!
