# Robusta KRR

Prometheus-based Kubernetes Resource Recommendations
Usage docs »

Report Bug · Request Feature · Slack Channel

## About The Project

![Product Name Screen Shot][product-screenshot]

Robusta KRR (Kubernetes Resource Recommender) is a CLI tool for optimizing resource allocation in Kubernetes clusters. It gathers pod usage data from Prometheus and recommends requests and limits for CPU and memory, reducing costs and improving performance.

### Features

- No Agent Required: Robusta KRR is a CLI tool that runs on your local machine. It does not require running Pods in your cluster.
- Prometheus Integration: Gather resource usage data using built-in Prometheus queries, with support for custom queries coming soon.
- Extensible Strategies: Easily create and use your own strategies for calculating resource recommendations.
- Future Support: Upcoming versions will support custom resources (e.g. GPUs) and custom metrics.

### Resource Allocation Statistics

According to a recent [Sysdig study](https://sysdig.com/blog/millions-wasted-kubernetes/), on average, Kubernetes clusters have:

- 69% unused CPU
- 18% unused memory

By right-sizing your containers with KRR, you can save an average of 69% on cloud costs.

### How it works

#### Metrics Gathering

Robusta KRR uses the following Prometheus queries to gather usage data:

- CPU Usage:

```
sum(irate(container_cpu_usage_seconds_total{namespace="{object.namespace}", pod="{pod}", container="{object.container}"}[{step}]))
```

- Memory Usage:

```
sum(container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!="", namespace="{object.namespace}", pod="{pod}", container="{object.container}"})
```

[_Need to customize the metrics? Tell us and we'll add support._](https://github.com/robusta-dev/krr/issues/new)

#### Algorithm

By default, we use a _simple_ strategy to calculate resource recommendations. It works as follows (_the exact numbers can be customized via CLI arguments_):

- For CPU, we set the request at the 99th percentile with no limit. In other words, in 99% of cases your CPU usage stays within the request. For the remaining 1%, there is no limit, so your pod can burst and use any CPU available on the node - e.g. CPU that other pods requested but aren't using right now.
- For memory, we take the maximum value over the past week and add a 5% buffer.

A minimal numeric sketch of this calculation is shown below.

#### Prometheus connection

See the [Prometheus auto-discovery](#prometheus-auto-discovery) section below to learn how KRR finds the default Prometheus to connect to.
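To make the simple strategy concrete, here is a minimal, hypothetical sketch of the arithmetic described above. This is not KRR's actual implementation (KRR pulls the usage series from Prometheus and exposes the percentile and buffer as strategy settings); the sample values are made up purely for illustration.

```python
import numpy as np


def recommend_cpu_request(cpu_usage_samples: list[float], percentile: float = 99.0) -> float:
    """Request = the given percentile of observed CPU usage (in cores); no CPU limit is set."""
    return float(np.percentile(cpu_usage_samples, percentile))


def recommend_memory(memory_usage_samples: list[float], buffer: float = 0.05) -> float:
    """Recommended memory = max observed working-set memory (in bytes) plus a safety buffer."""
    return max(memory_usage_samples) * (1.0 + buffer)


# Hypothetical usage samples gathered over the past week:
cpu_samples = [0.12, 0.20, 0.35, 0.30, 0.95, 0.25]   # cores
memory_samples = [200e6, 220e6, 310e6, 280e6, 250e6]  # bytes

print(f"CPU request:           {recommend_cpu_request(cpu_samples):.2f} cores (no limit)")
print(f"Memory recommendation: {recommend_memory(memory_samples) / 1e6:.0f} MB (max + 5% buffer)")
```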
### Difference with Kubernetes VPA

| Feature 🛠️ | Robusta KRR 🚀 | Kubernetes VPA 🌐 |
| --- | --- | --- |
| Resource Recommendations 💡 | ✅ CPU/Memory requests and limits | ✅ CPU/Memory requests and limits |
| Installation Location 🌍 | ✅ Not required to be installed inside the cluster, can be used on your own device, connected to a cluster | ❌ Must be installed inside the cluster |
| Workload Configuration 🔧 | ✅ No need to configure a VPA object for each workload | ❌ Requires VPA object configuration for each workload |
| Immediate Results ⚡ | ✅ Gets results immediately (given Prometheus is running) | ❌ Requires time to gather data and provide recommendations |
| Reporting 📊 | ✅ Detailed CLI report, web UI in [Robusta.dev](https://home.robusta.dev/) | ❌ Not supported |
| Extensibility 🔧 | ✅ Add your own strategies with a few lines of Python | :warning: Limited extensibility |
| Custom Metrics 📏 | 🔄 Support in future versions | ❌ Not supported |
| Custom Resources 🎛️ | 🔄 Support in future versions (e.g., GPU) | ❌ Not supported |
| Explainability 📖 | 🔄 Support in future versions (Robusta will send you additional graphs) | ❌ Not supported |
| Autoscaling 🔀 | 🔄 Support in future versions | ✅ Automatic application of recommendations |

### Robusta UI integration

If you are using [Robusta SaaS](https://platform.robusta.dev/), KRR is integrated starting from [v0.10.15](https://github.com/robusta-dev/robusta/releases/tag/0.10.15). You can view all your recommendations (including previous ones), and filter and sort them by cluster, namespace, or name.

More features (like graphs showing the data behind each recommendation) are coming soon. [Tell us what you need the most!](https://github.com/robusta-dev/krr/issues/new)

![Robusta UI Screen Shot][ui-screenshot]

(back to top)

## Getting Started

### Installation

1. Make sure you have [Python 3.9](https://www.python.org/downloads/) (or greater) installed
2. Clone the repo:
   ```sh
   git clone https://github.com/robusta-dev/krr
   ```
3. Navigate to the project root directory (`cd ./krr`)
4. Install requirements:
   ```sh
   pip install -r requirements.txt
   ```
5. Run the tool:
   ```sh
   python krr.py --help
   ```

(back to top)

## Usage

Straightforward usage, to run the simple strategy:

```sh
python krr.py simple
```

If you want only specific namespaces (default and ingress-nginx):

```sh
python krr.py simple -n default -n ingress-nginx
```

By default, krr will run in the current context. If you want to run it in a different context:

```sh
python krr.py simple -c my-cluster-1 -c my-cluster-2
```

If you want the output in JSON format (`--logtostderr` is required so no logs go to the result file):

```sh
python krr.py simple --logtostderr -f json > result.json
```

If you want the output in YAML format:

```sh
python krr.py simple --logtostderr -f yaml > result.yaml
```

If you want to see additional debug logs:

```sh
python krr.py simple -v
```

More specific information on Strategy Settings can be found using:

```sh
python krr.py simple --help
```

(back to top)

## Prometheus auto-discovery

By default, KRR will try to auto-discover the running Prometheus by scanning for these labels:

```python
"app=kube-prometheus-stack-prometheus"
"app=prometheus,component=server"
"app=prometheus-server"
"app=prometheus-operator-prometheus"
"app=prometheus-msteams"
"app=rancher-monitoring-prometheus"
"app=prometheus-prometheus"
```

If none of those labels lead to finding Prometheus, you will get an error and will have to pass the working URL explicitly (using the `-p` flag).
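For reference, label-based discovery roughly amounts to asking the Kubernetes API which services carry one of the labels above. The snippet below is an illustrative sketch using the official `kubernetes` Python client against your current kubectl context; it is not KRR's actual discovery code.

```python
from kubernetes import client, config

# The same label selectors listed above.
PROMETHEUS_LABELS = [
    "app=kube-prometheus-stack-prometheus",
    "app=prometheus,component=server",
    "app=prometheus-server",
    "app=prometheus-operator-prometheus",
    "app=prometheus-msteams",
    "app=rancher-monitoring-prometheus",
    "app=prometheus-prometheus",
]

config.load_kube_config()  # uses the current kubectl context
v1 = client.CoreV1Api()

for selector in PROMETHEUS_LABELS:
    services = v1.list_service_for_all_namespaces(label_selector=selector)
    for svc in services.items:
        print(f"Candidate Prometheus service: {svc.metadata.namespace}/{svc.metadata.name}")
```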

(back to top)

## Example of using port-forward for Prometheus

If your Prometheus is not auto-connecting, you can use `kubectl port-forward` to forward Prometheus manually. For example, if you have a Prometheus Pod called `kube-prometheus-st-prometheus-0`, run this command to port-forward it:

```sh
kubectl port-forward pod/kube-prometheus-st-prometheus-0 9090
```

Then, open another terminal and run krr in it, giving an explicit Prometheus URL:

```sh
python krr.py simple -p http://127.0.0.1:9090
```
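Optionally, before running KRR you can sanity-check that the forwarded endpoint answers queries. This small snippet uses the standard Prometheus HTTP API (`/api/v1/query`) via the `requests` library; it is just a convenience check, not something KRR requires.

```python
import requests

PROMETHEUS_URL = "http://127.0.0.1:9090"

# 'up' is a built-in metric every Prometheus exposes, so a non-empty result
# means the port-forward is working and Prometheus is answering queries.
resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": "up"}, timeout=5)
resp.raise_for_status()
data = resp.json()
print(f"Prometheus status: {data['status']}, series returned: {len(data['data']['result'])}")
```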

(back to top)

## Creating a Custom Strategy/Formatter

Look into the `examples` directory for examples on how to create a custom strategy/formatter.
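The real base classes and registration mechanism ship with the `robusta_krr` package; consult the `examples` directory for the actual imports. As a purely hypothetical illustration of the shape such an extension takes, a custom formatter is essentially a named function that turns the scan result into a string. All names and the result layout below are made up for illustration.

```python
from typing import Callable

# Hypothetical registry standing in for KRR's real formatter mechanism.
FORMATTERS: dict[str, Callable[[dict], str]] = {}


def register_formatter(name: str):
    """Register a formatter under a name selectable from the CLI (illustrative only)."""
    def decorator(func: Callable[[dict], str]) -> Callable[[dict], str]:
        FORMATTERS[name] = func
        return func
    return decorator


@register_formatter("my_csv")
def csv_formatter(result: dict) -> str:
    """Render each recommendation as a CSV row (hypothetical result layout)."""
    lines = ["namespace,name,cpu_request,memory_request"]
    for scan in result.get("scans", []):
        lines.append(f'{scan["namespace"]},{scan["name"]},{scan["cpu_request"]},{scan["memory_request"]}')
    return "\n".join(lines)


print(csv_formatter({"scans": [{"namespace": "default", "name": "my-app",
                                "cpu_request": 0.25, "memory_request": "512Mi"}]}))
```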

(back to top)

## Building

_We are planning to use pyinstaller to build binaries for distribution. Right now you can build the binaries yourself, but we're not distributing them yet._

1. Install the project manually (see above)
2. Navigate to the project root directory
3. Install [poetry](https://python-poetry.org/docs/#installing-with-the-official-installer)
4. Install requirements with dev dependencies:
   ```sh
   poetry install --group dev
   ```
5. Build the binary:
   ```sh
   poetry run pyinstaller krr.py
   ```
6. The binary will be located in the `dist` directory. Test that it works:
   ```sh
   cd ./dist/krr
   ./krr --help
   ```

(back to top)

## Testing

_We use pytest to run tests._

1. Install the project manually (see above)
2. Navigate to the project root directory
3. Install [poetry](https://python-poetry.org/docs/#installing-with-the-official-installer)
4. Install dev dependencies:
   ```sh
   poetry install --group dev
   ```
5. Install robusta_krr as an editable dependency:
   ```sh
   pip install -e .
   ```
6. Run the tests:
   ```sh
   poetry run pytest
   ```

(back to top)

## Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

(back to top)

## License

Distributed under the MIT License. See `LICENSE.txt` for more information.

(back to top)

## Contact

If you have any questions, feel free to contact support@robusta.dev.

Project Link: [https://github.com/robusta-dev/krr](https://github.com/robusta-dev/krr)

(back to top)

[product-screenshot]: images/screenshot.jpeg
[ui-screenshot]: images/ui_screenshot.jpeg