Module: github.com/kubescape/host-scanner
Version: 0.0.0-20240902123000-27d63d6d9c47
Repository: https://github.com/kubescape/host-scanner.git
Documentation: pkg.go.dev

# README

Kubescape Host-Scanner

Description

Host-scanner is a data acquisition component in the Kubescape project. It collects information about the Kubernetes node host for further security posture evaluation by Kubescape.

Deployment

Host-scanner is deployed as a privileged Kubernetes DaemonSet in the cluster. It exposes an HTTP API (port 7888) that clients use to read host information.
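
Once deployed, there should be one host-scanner pod per node. A quick way to check, assuming the DaemonSet name (host-sensor) and namespace (armo-kube-host-sensor) used in the example manifest later in this README:

kubectl get daemonset host-sensor -n armo-kube-host-sensor
kubectl get pods -n armo-kube-host-sensor -o wide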

Supported APIs

| endpoint | test command | description | example |
|---|---|---|---|
| /healthz | kubectl curl "http://<host-scanner-pod-name>:7888/healthz" -n <NAMESPACE> | Returns the liveness status of host-scanner. | {"alive": true} |
| /readyz | kubectl curl "http://<host-scanner-pod-name>:7888/readyz" -n <NAMESPACE> | Returns the readiness status of host-scanner. Returns 503 if host-scanner is not ready yet. | {"ready": true} |
| /controlplaneinfo | kubectl curl "http://<host-scanner-pod-name>:7888/controlplaneinfo" -n <NAMESPACE> | Returns control-plane related information. | example |
| /cniinfo | kubectl curl "http://<host-scanner-pod-name>:7888/cniinfo" -n <NAMESPACE> | Returns container network interface (CNI) information. | example |
| /kernelversion | kubectl curl "http://<host-scanner-pod-name>:7888/kernelversion" -n <NAMESPACE> | Returns the kernel version. | example |
| /kubeletinfo | kubectl curl "http://<host-scanner-pod-name>:7888/kubeletinfo" -n <NAMESPACE> | Returns kubelet information. | example |
| /kubeproxyinfo | kubectl curl "http://<host-scanner-pod-name>:7888/kubeproxyinfo" -n <NAMESPACE> | Returns kube-proxy command line information. | example |
| /cloudproviderinfo | kubectl curl "http://<host-scanner-pod-name>:7888/cloudproviderinfo" -n <NAMESPACE> | Returns cloud provider metadata. | example |
| /osrelease | kubectl curl "http://<host-scanner-pod-name>:7888/osrelease" -n <NAMESPACE> | Returns information on the node's operating system. | example |
| /openedports | kubectl curl "http://<host-scanner-pod-name>:7888/openedports" -n <NAMESPACE> | Returns information on open ports. | example |
| /linuxsecurityhardening | kubectl curl "http://<host-scanner-pod-name>:7888/linuxsecurityhardening" -n <NAMESPACE> | Returns information about Linux security hardening features. | example |
| /version | kubectl curl "http://<host-scanner-pod-name>:7888/version" -n <NAMESPACE> | Returns the build version of host-scanner. | --- |
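
The kubectl curl command used above is typically provided by a kubectl plugin. If it is not available, a standard port-forward plus plain curl works as an alternative; pod name and namespace are placeholders, and the expected responses are the ones shown in the table:

kubectl port-forward <host-scanner-pod-name> 7888:7888 -n <NAMESPACE> &
curl "http://localhost:7888/healthz"
# {"alive": true}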

Local usage - Setup, Build and Test

1. Prerequisites

2. Run Host-Scanner in a local environment

Using Armo's built-in script (with minikube)

To build it:

python3 ./scripts/build-host-scanner-local.py --build

To revert it:

python3 ./scripts/build-host-scanner-local.py --revert

For help:

python3 ./scripts/build-host-scanner-local.py --help

Using private cloud providers

Setup

Connect to your cluster (different cloud providers have different methods for connecting to a cluster). This should be done once for each new cluster.

Make sure your cluster is the current kubectl context:

kubectl config current-context

Create a docker-registry secret. The namespace is defined in the deployment YAML. To find the full path of the Docker config file, run: ls ~/.docker/config.json

kubectl create secret docker-registry <MySecretName> --from-file=.dockerconfigjson=</path/to/.docker/config.json> -n <namespace>
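
To confirm the secret exists in the target namespace (same placeholders as above):

kubectl get secret <MySecretName> -n <namespace>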

Log in, build and push the image to a remote image repository

Docker Hub

Log in to Docker Hub

docker login

Create a new repository in Docker Hub and mark it as private.

"myRepoName" should be the exact name of the image repository in Docker Hub:

docker build -f build/Dockerfile . -t <myRepoName>:<MyImageTag>

Push the image to the remote repository (Docker Hub in our case)

docker push <myRepoName>:<MyImageTag>
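
As a worked example with hypothetical names (Docker Hub user myuser, repository host-scanner, tag v0.0.1):

docker build -f build/Dockerfile . -t myuser/host-scanner:v0.0.1
docker push myuser/host-scanner:v0.0.1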

AWS ECR

Follow these instructions.
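
As a rough sketch of the ECR flow (region, account ID and repository name are placeholders; the AWS instructions referenced above are authoritative):

aws ecr create-repository --repository-name <myRepoName> --region <region>
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
docker build -f build/Dockerfile . -t <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<myRepoName>:<MyImageTag>
docker push <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<myRepoName>:<MyImageTag>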

Configure deployment.yaml

Configure k8s-deployment.yaml with the pushed image and imagePullSecrets. Example:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app: host-sensor
    kubernetes.io/metadata.name: armo-kube-host-sensor
    tier: armo-kube-host-sensor-control-plane
  name: armo-kube-host-sensor

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: host-sensor
  namespace: armo-kube-host-sensor
  labels:
    k8s-app: armo-kube-host-sensor
spec:
  selector:
    matchLabels:
      name: host-sensor
  template:
    metadata:
      labels:
        name: host-sensor
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: host-sensor
        image: myRepoName:MyImageTag
        securityContext:
          privileged: true
          readOnlyRootFilesystem: true
          procMount: Unmasked
        ports:
          - name: http
            hostPort: 7888
            containerPort: 7888
        resources:
          limits:
            cpu: 1m
            memory: 200Mi
          requests:
            cpu: 1m
            memory: 200Mi
        volumeMounts:
        - mountPath: /host_fs
          name: host-filesystem
      imagePullSecrets:
      - name: MySecretName
      terminationGracePeriodSeconds: 120
      dnsPolicy: ClusterFirstWithHostNet
      automountServiceAccountToken: false
      volumes:
      - hostPath:
          path: /
          type: Directory
        name: host-filesystem
      hostNetwork: true
      hostPID: true
      hostIPC: true
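
Before applying, the edited manifest can be sanity-checked client-side without changing the cluster (the path matches the one used in the apply step below):

kubectl apply --dry-run=client -f deployment/k8s-deployment.yaml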

3. Apply and Test

Create host-scanner pod

kubectl apply -f https://raw.githubusercontent.com/kubescape/kubescape/master/core/pkg/hostsensorutils/hostsensor.yaml

If the command fails, use the one below instead:

kubectl apply -f deployment/k8s-deployment.yaml

Verify the pod launched successfully

kubectl get pods -A

Test an API

kubectl curl "http://<host-scanner-pod-name>:7888/<api-endpoint>" --namespace <namespace>
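
For example, the liveness endpoint should return the JSON shown in the API table above (pod name and namespace are placeholders):

kubectl curl "http://<host-scanner-pod-name>:7888/healthz" --namespace <namespace>
# {"alive": true}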

View pod logs

In a new terminal, view the pod logs:

kubectl logs <host-scanner-pod-name> --namespace <namespace> -f

Contributions

Thanks to all our contributors! Check out our CONTRIBUTING file to learn how to join them.