4 minute read · December 18, 2023

Kubernetes Autoscaling in Dremio 24.3

Jonathan Dixon · Product Management, Dremio

With the release of Dremio Software Enterprise Edition 24.3, we’ve added Kubernetes autoscaling to Dremio. It streamlines resource management by scaling on memory and CPU utilization in Dremio workloads. Dremio on Kubernetes now scales automatically, reducing the time spent in administration and forensically sizing clusters.

Understanding Kubernetes Autoscaling

Kubernetes autoscaling is a dynamic resource management feature within the Kubernetes container orchestration system. It automatically adjusts the number of running instances based on real-time or historical metrics. In the context of Dremio, this means the number of executors can scale up and down without manual intervention.
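
For readers unfamiliar with the mechanics, below is a minimal sketch of a standard Kubernetes HorizontalPodAutoscaler using the autoscaling/v2 API. The resource names are illustrative assumptions; Dremio’s Helm chart generates the equivalent configuration for you.

```yaml
# Minimal, illustrative HorizontalPodAutoscaler (standard autoscaling/v2 API).
# Names are assumptions for illustration; the Dremio Helm chart produces the
# equivalent configuration out of the box.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: dremio-executor-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet          # assumed scale target for executor pods
    name: dremio-executor
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add executors when average CPU exceeds 75%
```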

Historically, it was possible to scale up the number of Dremio executors. The challenge was scaling down while queries were running: before Kubernetes autoscaling, those queries would not run to completion. In Dremio 24.3 on Kubernetes, queries are not interrupted by scaling events because Dremio drains each executor prior to termination.
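
Dremio’s drain behavior is handled by the product itself, but the generic Kubernetes primitives that graceful scale-down builds on look like the sketch below: a generous termination grace period combined with a preStop hook gives a pod time to finish in-flight work before the kubelet force-kills it. The hook command and image tag are placeholders, not actual Dremio artifacts.

```yaml
# Illustrative pod-spec fragment showing the generic Kubernetes pattern for
# draining a pod before termination. The preStop command is a placeholder;
# Dremio implements its own executor drain mechanism.
spec:
  terminationGracePeriodSeconds: 3600   # allow up to an hour for queries to finish
  containers:
    - name: dremio-executor
      image: dremio/dremio-ee:24.3      # image reference assumed for illustration
      lifecycle:
        preStop:
          exec:
            # Placeholder: tell the executor to stop accepting new work and
            # wait for running queries to complete before exiting.
            command: ["/bin/sh", "-c", "/opt/dremio/bin/drain-and-wait.sh"]
```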

Kubernetes autoscaling enables dynamic management of memory and CPU allocation for Dremio workloads, and scaling behavior can be customized by leveraging Dremio’s Prometheus metrics. Scaling can be applied to individual engines or to the entire cluster, so if there are workloads whose resources you want to limit, you retain that control. As a specific workload grows, Dremio automatically grows the associated engine(s); after the peak period has passed, Dremio reverts engines to their original size.
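
As an illustration of what metric-driven customization can look like, the fragment below drives an autoscaler from a Prometheus-derived pods metric, surfaced to Kubernetes through an adapter such as prometheus-adapter. The metric name is hypothetical rather than a documented Dremio metric; substitute one exposed by Dremio’s Prometheus endpoint.

```yaml
# Illustrative HPA metrics block driven by a Prometheus metric exposed via a
# metrics adapter (e.g., prometheus-adapter). The metric name is hypothetical;
# replace it with a real metric from Dremio's Prometheus endpoint.
metrics:
  - type: Pods
    pods:
      metric:
        name: dremio_active_queries    # hypothetical metric name
      target:
        type: AverageValue
        averageValue: "20"             # scale out above 20 queries per executor
```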

Getting Started With Autoscaling

The Dremio Helm chart has been updated for v24.3 with all of the necessary configuration enabled out of the box. A getting started guide has been added to the docs folder covering the three simple steps needed to get up and running; it also walks you through deploying Prometheus if you want to customize scaling metrics. For most users, this amounts to a simple five-minute change to their Helm chart configuration.
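
To give a rough sense of that change, here is a hypothetical values override. The key names are illustrative assumptions rather than the chart’s actual schema, so follow the getting started guide in the docs folder for the real settings.

```yaml
# Hypothetical values.yaml override; key names are illustrative assumptions,
# not the Dremio Helm chart's actual schema.
executor:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 75

# Applied with something like:
#   helm upgrade dremio <chart-path> -f values.yaml
```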

A Deeper Dive into Practical Benefits

Kubernetes autoscaling with Dremio delivers several benefits:

  1. Resource Allocation Efficiency: Kubernetes autoscaling simplifies the allocation of memory and CPU in Dremio workloads, saving time and reducing the complexity of resource management.
  2. Cost-Efficiency: Metric-driven autoscaling leads to more efficient resource allocation, resulting in cost savings as customers avoid over-provisioning resources they don’t need.
  3. Tailored Scalability: Dremio's user-customizable scaling metrics enable you to adapt precisely to changing requirements. This level of control ensures that data analysis remains smooth and responsive.

In Summary

Dremio 24.3, featuring Kubernetes autoscaling with Prometheus metrics, is about making sure Dremio’s executors adapt as user workloads change. With executor scaling enabled, Dremio scales as required without manual intervention, which means less time spent in administration and forensically sizing the cluster.

Kubernetes autoscaling is available now! If you’re already a Dremio Self-Managed customer, it’s easy to upgrade. Visit our Support Portal to download the latest version. Not yet a Dremio user? Visit the Get Started page to find offerings for Dremio Cloud or Dremio Self-Managed.
