You can install the KubeClarity backend using Helm, or you can build and run it locally.
Prerequisites
KubeClarity requires these Kubernetes permissions:
| Permission | Reason |
|---|---|
| Read secrets in `CREDS_SECRET_NAMESPACE` (default: `kubeclarity`) | Allows you to configure image pull secrets for scanning private image repositories. |
| Read config maps in the KubeClarity deployment namespace. | Required for getting the configured template of the scanner job. |
| List pods in cluster scope. | Required for calculating the target pods that need to be scanned. |
| List namespaces. | Required for fetching the target namespaces to scan in the K8s runtime scan UI. |
| Create and delete jobs in cluster scope. | Required for managing the jobs that scan the target pods in their namespaces. |
Prerequisites for AWS
If you are installing KubeClarity on AWS, complete the following steps. They are needed because KubeClarity uses a persistent PostgreSQL database, which requires a persistent volume.
- Make sure that your EKS cluster runs Kubernetes version 1.23 or later.
- Install the EBS CSI Driver EKS add-on. For details, see Amazon EKS add-ons.
- Configure the EBS CSI Driver with IAMServiceRole and policies. For details, see Creating the Amazon EBS CSI driver IAM role.
Install using Helm
- Add the Helm repository.
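  A typical invocation, assuming the chart is published at the OpenClarity project's repository location (verify the URL against the current KubeClarity docs):

  ```shell
  helm repo add kubeclarity https://openclarity.github.io/kubeclarity
  helm repo update
  ```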
- Save the default KubeClarity chart values.
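  For example, assuming the repository was added under the name `kubeclarity`:

  ```shell
  helm show values kubeclarity/kubeclarity > values.yaml
  ```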
- (Optional) Check the configuration in the `values.yaml` file and update the required values if needed. You can skip this step to use the default configuration.
  - To enable and configure the supported SBOM generators and vulnerability scanners, check the `analyzer` and `scanner` configurations under the `vulnerability-scanner` section.
- Deploy KubeClarity with Helm.
  - If you have customized the `values.yaml` file, run:
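    A sketch of the install command, assuming a `kubeclarity` release name and namespace and the repository name used above:

    ```shell
    helm install --values values.yaml --create-namespace kubeclarity kubeclarity/kubeclarity -n kubeclarity
    ```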
  - To use the default configuration, run:
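    The same sketch without the customized values file (release name and namespace are assumptions):

    ```shell
    helm install --create-namespace kubeclarity kubeclarity/kubeclarity -n kubeclarity
    ```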
  - For an OpenShift Restricted SCC compatible installation, run:
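    The chart exposes an OpenShift toggle; the flag name below (`global.openShiftRestricted`) is an assumption taken from the upstream chart and should be verified against your chart version:

    ```shell
    helm install --set global.openShiftRestricted=true \
      --create-namespace kubeclarity kubeclarity/kubeclarity -n kubeclarity
    ```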
- Wait until all the pods are in the 'Running' state.
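  You can check the pod status as follows, assuming the release was installed into the `kubeclarity` namespace:

  ```shell
  kubectl get pods -n kubeclarity
  ```

  All pods in the output should report `Running` before you continue.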
- Port-forward to the KubeClarity UI.
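  A sketch of the port-forward, assuming the default release name produces a service called `kubeclarity-kubeclarity` listening on port 8080; adjust the service name if you changed the release name:

  ```shell
  kubectl port-forward -n kubeclarity svc/kubeclarity-kubeclarity 9999:8080
  ```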
- (Optional) Install a sample application (sock shop) to run your scans on.
  - Create a namespace for the application.
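    For example, using `sock-shop` as the namespace name:

    ```shell
    kubectl create namespace sock-shop
    ```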
  - Install the application.
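    The sock shop demo is published in the `microservices-demo` repository; the manifest path below is the commonly used all-in-one deployment and may change upstream:

    ```shell
    kubectl apply -n sock-shop -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml
    ```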
  - Check that the installation was successful: wait until all the sock shop pods are in the 'Running' state.
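    Assuming the `sock-shop` namespace from the previous step:

    ```shell
    kubectl get pods -n sock-shop
    ```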
- Open the KubeClarity UI in your browser at http://localhost:9999/. The KubeClarity dashboard should appear. After a fresh install, KubeClarity has no vulnerability data to report, so the dashboard is empty.
- If you also want to try KubeClarity using its command-line tool, install the CLI. Otherwise, you can run runtime scans from the dashboard.
Uninstall using Helm
When you have finished experimenting with KubeClarity, you can delete the backend by completing the following steps.
- Uninstall the Helm release.
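  Assuming the `kubeclarity` release name and namespace used during installation:

  ```shell
  helm uninstall kubeclarity -n kubeclarity
  ```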
- Clean up the resources. By default, Helm doesn't remove the PVCs and PVs of the StatefulSets. Run the following command to delete them all:
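  One way to remove the leftover volumes, assuming the chart labels its resources with the release name via the standard `app.kubernetes.io/instance` label (verify the label on your PVCs first with `kubectl get pvc --show-labels`):

  ```shell
  kubectl delete pvc -l app.kubernetes.io/instance=kubeclarity -n kubeclarity
  ```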
Build and run locally with demo data
- Build the UI and the backend, then start the backend locally, either using Docker or without it:
  - Using Docker:
    - Build the UI and backend (the image tag is set using `VERSION`):
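      Assuming the Makefile target name from the upstream repository (verify against your checked-out source):

      ```shell
      VERSION=test make docker-backend
      ```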
    - Run the backend using demo data:
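      A sketch of running the freshly built image with the demo-data environment variables; the variable names and image reference are assumptions based on the upstream repository and should be checked against the source:

      ```shell
      docker run -p 8080:8080 \
        -e FAKE_RUNTIME_SCAN_RESULTS=true \
        -e FAKE_DATA=true \
        -e DATABASE_DRIVER=LOCAL \
        ghcr.io/openclarity/kubeclarity:test run
      ```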
  - Local build:
    - Build the UI and backend:
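      Assuming the Makefile target names from the upstream repository:

      ```shell
      make ui && make backend
      ```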
    - Copy the built site:
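      The backend serves the UI from a local `site` directory in the upstream layout; the paths below are assumptions and should be verified against the source tree:

      ```shell
      cp -r ./ui/build ./site
      ```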
    - Run the backend locally using demo data:
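      A sketch using the same demo-data environment variables as the Docker variant; the binary path and variable names are assumptions based on the upstream repository:

      ```shell
      FAKE_RUNTIME_SCAN_RESULTS=true FAKE_DATA=true DATABASE_DRIVER=LOCAL \
        ./backend/bin/backend run
      ```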
- Open the KubeClarity UI in your browser: http://localhost:9999/