When running the KubeClarity CLI to scan for vulnerabilities, the CLI needs to download the relevant vulnerability databases to the machine on which it runs. Running the CLI in a CI/CD pipeline therefore downloads the databases on every run, wasting time and bandwidth. For this reason, several of the supported scanners have a remote mode in which a server is responsible for managing the databases and, in some cases, for scanning the artifacts as well.
Note: The examples below show each scanner on its own, but the scanners can be combined and run together, just as they can in non-remote mode.
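All of the scan commands below take an SBOM file (nginx.sbom) as input. If you don't already have one, it can typically be produced with the KubeClarity CLI's analyze command; the image name and output path below are illustrative assumptions, not required values:

# Generate an SBOM for the nginx image to use as scan input (image name is an example)
./kubeclarity_cli analyze nginx:latest --input-type image -o nginx.sbom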
1 - Trivy
The Trivy scanner supports remote mode using the Trivy server. The Trivy server can be deployed as documented here: Trivy client-server mode.
Instructions to install the Trivy CLI are available here: Trivy install.
The Aqua team provides an official container image that can be used to run the server in Kubernetes or Docker; we'll use it in the examples below.
Start the server:
docker run -p 8080:8080 --rm aquasec/trivy:0.41.0 server --listen 0.0.0.0:8080
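If you would rather run the server in Kubernetes than in Docker, a minimal sketch could look like the following; the pod and service names, and running it as a bare pod rather than a Deployment, are assumptions for illustration:

# Run the Trivy server as a single pod and expose it inside the cluster
kubectl run trivy-server --image=aquasec/trivy:0.41.0 --port=8080 -- server --listen 0.0.0.0:8080
kubectl expose pod trivy-server --port=8080 --name=trivy-server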
Run a scan using the server:
SCANNERS_LIST="trivy"SCANNER_TRIVY_SERVER_ADDRESS="http://<trivy server address>:8080" ./kubeclarity_cli scan --input-type sbom nginx.sbom
Authentication
The Trivy server also provides token-based authentication to prevent unauthorized use of a Trivy server instance. You can enable it by running the server with the --token flag:
docker run -p 8080:8080 --rm aquasec/trivy:0.41.0 server --listen 0.0.0.0:8080 --token mytoken
Then pass the token to the scanner:
SCANNERS_LIST="trivy"SCANNER_TRIVY_SERVER_ADDRESS="http://<trivy server address>:8080"SCANNER_TRIVY_SERVER_TOKEN="mytoken" ./kubeclarity_cli scan --input-type sbom nginx.sbom
2 - Grype
Grype supports remote mode using grype-server, a RESTful grype wrapper that provides an API which receives an SBOM and returns the grype scan results for that SBOM. Grype-server ships as a container image, so it can be run in Kubernetes or standalone via Docker.
Start the server:
docker run -p 9991:9991 --rm gcr.io/eticloud/k8sec/grype-server:v0.1.5
Run a scan using the server:
SCANNERS_LIST="grype"SCANNER_GRYPE_MODE="remote"SCANNER_REMOTE_GRYPE_SERVER_ADDRESS="<grype server address>:9991"SCANNER_REMOTE_GRYPE_SERVER_SCHEMES="https" ./kubeclarity_cli scan --input-type sbom nginx.sbom
If the grype server is deployed with TLS, you can override the default URL scheme like this:
SCANNERS_LIST="grype"SCANNER_GRYPE_MODE="remote"SCANNER_REMOTE_GRYPE_SERVER_ADDRESS="<grype server address>:9991"SCANNER_REMOTE_GRYPE_SERVER_SCHEMES="https" ./kubeclarity_cli scan --input-type sbom nginx.sbom
3 - Dependency track
Generate certificates
First generate a self-signed RSA key and certificate that the server can use for TLS.
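A minimal sketch of this step using openssl, assuming the hostname dependency-track-apiserver.dependency-track that is mapped in the DNS step below and illustrative file names:

# Self-signed RSA key and certificate for the Dependency-Track API server (file names are examples)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout apiserver.key -out apiserver.crt \
  -subj "/CN=dependency-track-apiserver.dependency-track"

Depending on how the ingress is configured, the key and certificate would then typically be loaded into a Kubernetes TLS secret (for example with kubectl create secret tls).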
Next, look up the external IP of the NGINX ingress controller's load balancer; it is used for the DNS record in the next step:

INGRESSGATEWAY_SERVICE_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $INGRESSGATEWAY_SERVICE_IP
34.135.8.34
Add a DNS record
Add a DNS record to the /etc/hosts file for the NGINX load balancer IP address. For example, for INGRESSGATEWAY_SERVICE_IP=34.135.8.34:
$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost

# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section

34.135.8.34 dependency-track-apiserver.dependency-track
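The record can be appended without editing the file by hand, and the mapping can then be sanity-checked against the Dependency-Track API server. The /api/version endpoint and the use of -k to accept the self-signed certificate are assumptions about how the server is exposed behind the ingress:

# Append the record to /etc/hosts (requires sudo); the mapping matches the entry shown above
echo "$INGRESSGATEWAY_SERVICE_IP dependency-track-apiserver.dependency-track" | sudo tee -a /etc/hosts

# Quick check that the hostname reaches the API server; -k skips verification of the self-signed certificate
curl -k https://dependency-track-apiserver.dependency-track/api/version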