Storm

Storm is a microservice for managing user status updates using an event-driven architecture. The service is built with Go and uses gRPC for its service interface and Kafka for event streaming.

This service is designed to be lightweight, scalable, and production-ready, with Kubernetes as its orchestration layer. This documentation includes:

  • Local development setup.
  • Production deployment approaches:
    1. End-to-End Automation (one command).
    2. Manual Steps with Automation.

Table of Contents

  1. Prerequisites
  2. Running Locally
  3. Project Architecture
  4. Production Deployment

Prerequisites

Tools and Versions

To run or deploy the project, ensure you have the following installed:

Development:

Tool             | Version | Purpose
Go               | 1.22.7+ | Building and running the service.
Docker + Compose | 27.1.1+ | Managing dependencies locally.
Protobuf         | 28.3+   | Generating gRPC code from .proto files.

Production:

Tool          | Version  | Purpose
Terraform     | v1.9.6+  | Automating resource provisioning.
Ansible       | 2.17.1+  | Setting up Kubernetes clusters.
Kubectl       | v1.31.2+ | Managing Kubernetes clusters.
Python3 + pip | 3.12.4+  | Running auxiliary scripts.
Bash          | Latest   | Executing automation scripts.

Running Locally

Steps to Get Started

  1. Clone the Repository:

    git clone https://github.com/0xAFz/storm.git
    cd storm
    
  2. Start Dependencies (Kafka):

    docker compose up -d
    
  3. Set Up Environment Variables:

    cp .env.example .env
    vim .env
    

    Update the values in .env as needed (e.g., Kafka broker addresses, service ports).

  4. Generate gRPC Code: Run the following command to compile the .proto files into Go code:

    make proto
    
  5. Run the Application:

    make run
    # or
    go run main.go
    
  6. Test the Service: Use grpcurl to list the service's gRPC endpoints (a minimal Go client sketch follows these steps):

    grpcurl -plaintext localhost:50051 list
    

Project Architecture

Logical Architecture

  1. A service sends a status update → received via the gRPC interface.
  2. A Kafka event is published to a specified topic (sketched below).
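
In code, this flow amounts to a gRPC handler that serializes the update and hands it to a Kafka producer. The sketch below uses github.com/segmentio/kafka-go and hypothetical type, field, and topic names; the gRPC request/response wrapper types are omitted for brevity, so this shows the shape of the handler rather than the repository's actual implementation.

    package server

    import (
        "context"
        "encoding/json"
        "fmt"

        "github.com/segmentio/kafka-go"
    )

    // StatusEvent is a hypothetical event payload; the real schema lives in the repo.
    type StatusEvent struct {
        UserID int64 `json:"user_id"`
        Status bool  `json:"status"`
    }

    type Server struct {
        writer *kafka.Writer // broker addresses and topic come from .env
    }

    // UpdateStatus receives a status update over gRPC and publishes it as a Kafka event.
    func (s *Server) UpdateStatus(ctx context.Context, userID int64, status bool) error {
        payload, err := json.Marshal(StatusEvent{UserID: userID, Status: status})
        if err != nil {
            return fmt.Errorf("marshal event: %w", err)
        }
        // Keying by user ID keeps each user's updates ordered within a partition.
        return s.writer.WriteMessages(ctx, kafka.Message{
            Key:   []byte(fmt.Sprintf("%d", userID)),
            Value: payload,
        })
    }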

Key Components

Component      | Description
gRPC Service   | Handles incoming user status updates.
Kafka          | Stores and forwards events for consumers (see the consumer sketch below).
Configurations | Managed via .env files for simplicity.
Kubernetes     | Ensures the service is scalable and fault-tolerant in production.
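
Downstream consumers are outside this repository, but for illustration, a hedged sketch of one reading the stream with kafka-go follows; the broker address, topic, and group ID are placeholders, not values taken from the project.

    package main

    import (
        "context"
        "log"

        "github.com/segmentio/kafka-go"
    )

    func main() {
        // Placeholder connection details; real values belong in .env or flags.
        r := kafka.NewReader(kafka.ReaderConfig{
            Brokers: []string{"localhost:9092"},
            Topic:   "status-updates",
            GroupID: "status-consumer",
        })
        defer r.Close()

        for {
            // ReadMessage blocks until the next event and commits offsets for the group.
            msg, err := r.ReadMessage(context.Background())
            if err != nil {
                log.Fatalf("read: %v", err)
            }
            log.Printf("user %s status event: %s", msg.Key, msg.Value)
        }
    }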

Production Deployment

Production deployment supports two approaches: end-to-end automation, or manual steps with automation.


End-to-End Automation

This approach automates everything from VM provisioning to Kubernetes resource deployment.

Steps

  1. Navigate to Production Directory:

    cd prod/
    
  2. Set Up Environment Variables:

    cp .env.example .env
    vim .env
    
    cp terraform/compute/.env.example terraform/compute/.env
    vim terraform/compute/.env
    
    cp terraform/storm/.env.example terraform/storm/.env
    vim terraform/storm/.env
    

    Update the values for:

    • OpenStack credentials (for Terraform).
    • Cloudflare credentials (for DNS management).
    • GitLab repo credentials (for the container registry).
    • Kubernetes configurations.
  3. Set Up Ansible Variables:

    vim ansible/inventory/group_vars/...
    

    Replace placeholders with actual values.

  4. Run the Deployment Script:

    ./deploy.sh up
    
  5. Clean Up All Resources:

    ./deploy.sh down
    

What Happens:

  1. Terraform provisions VMs using OpenStack.
  2. A Python script generates an Ansible inventory.
  3. A Python script adds DNS records on Cloudflare.
  4. Ansible:
    • Installs Kubernetes (K3s).
    • Configures the cluster (e.g., ingress-nginx, cert-manager).
  5. Terraform deploys:
    • Kafka Cluster: A highly available Kafka setup.
    • Storm Microservice: Configurations, deployment, service, and ingress.

Manual Steps

This approach allows more flexibility and supports different cloud providers.

Steps

  1. Manually Create VMs: Create VMs with your preferred cloud provider, and make sure DNS records are configured (e.g., an A record mapping storm.domain.tld to 192.168.1.100).

  2. Write the Ansible Inventory: Update the inventory file with your server details:

    vim ansible/inventory/hosts.yml
    
  3. Set Up Ansible Variables:

    vim ansible/inventory/group_vars/...
    

    Replace placeholders with actual values.

  4. Run Ansible Playbook: Install Kubernetes on your nodes:

    ansible-playbook -i ansible/inventory/hosts.yml ansible/playbooks/cluster.yml
    

    Deploying with YAML manifests

    1. Set Config for Kubectl: Ansible writes a kubeconfig to ~/.kube/storm/config on your local machine:

      export KUBECONFIG=~/.kube/storm/config
      
    2. Deploy Kafka: Use the Strimzi operator:

      # add the Strimzi Helm repo
      helm repo add strimzi https://strimzi.io/charts/
      helm repo update
      
      # create a namespace for kafka
      kubectl create namespace kafka
      
      # install the Strimzi operator into the namespace (release name is arbitrary)
      helm install strimzi-operator strimzi/strimzi-kafka-operator -n kafka
      
      # apply kafka manifest
      kubectl apply -f k8s/kafka/kafka-cluster.yml
      
    3. Deploy Storm Microservice: Apply resources:

      kubectl apply -f k8s/storm/storm.yml
      

    Deploying with Terraform

    1. Deploy Kafka:
         terraform -chdir=terraform/kafka init
         terraform -chdir=terraform/kafka apply
      
    2. Deploy Storm Microservice:
         terraform -chdir=terraform/storm init
         terraform -chdir=terraform/storm apply
      

Testing in Production

  1. Verify Kafka is Running:

    kubectl get po -n kafka
    
  2. Check Storm Service: Ensure the service is running:

    kubectl get po -n storm
    
  3. Test Service Endpoints:

    • List gRPC methods:
      grpcurl storm.domain.tld:443 list
      
    • Send a request:
      grpcurl -d '{"user_id": 1234, "status": true}' storm.domain.tld:443 status.v1.Status/UpdateStatus
      
  4. Expected Response:

    {
        "message": "ok"
    }