4 minute read · October 21, 2021

Is Your Data on S3? 21 Reasons Why Dremio Is the Best SQL Engine for You!


Preeti Kodikal · Product Marketing Director, Dremio

Hey, it’s 2021!

So, here are 21 reasons why Dremio is the best SQL engine for your data residing on Amazon S3!

  1. Lightning-fast queries directly on data on S3.
  2. No need to copy data into data warehouses, cubes or extracts. Keep your data on Amazon S3 and retain control over your data.
  3. A service-like experience in your AWS account — Dremio lives in your AWS account, but feels like a service.
  4. Up and running in a few clicks.
  5. No configuration or tuning.
  6. Backups and upgrades — automated and seamless.
  7. 4-100x faster than other SQL engines.
  8. 10x lower infrastructure costs than other SQL engines.
  9. No need to size the environment based on your peak workload.
  10. Elastic engines that start and stop automatically based on your queries.
  11. The shared semantic layer provides consistent semantics across all users, applications and tools, empowers self-service data access for analysts and data scientists, and centralizes security and governance. This eliminates data sprawl and inconsistent insights across your company.
  12. Centralized security and governance rules and policies over your data lake, defined in the semantic layer, can be reused by any downstream tools or applications. The semantic layer enables you to provide different views of data to different users and roles, including row and column-level security.
  13. Deliver consistent business logic and KPIs to all data consumers across your organization.
  14. Continue to use the best-in-class BI tools of your choice — Tableau, Power BI, Looker, Cognos, Superset, etc.
  15. Join across S3 and other cloud and on-prem databases — Dremio can perform joins across S3, ADLS and other datasets to support multi-cloud strategies.
  16. Empower your data analysts and data scientists to discover, curate, analyze and share data directly from S3.
  17. Leverage Amazon SageMaker for machine learning. Dremio's support for Apache Arrow Flight delivers roughly 10x the data-transfer performance of ODBC/JDBC, which matters for data science, especially when working with large datasets (see the sketch after this list).
  18. Bring your data to life and access previously inaccessible data with interactive dashboards that run directly on your data lake via native Dremio connectors.
  19. Eliminate the time, effort, resources and pain associated with costly, complex and rigid data pipelines required to move and copy data into proprietary data warehouses.
  20. Enable your data engineers to focus their time on strategic projects instead of responding to continuous non-value adding data access and ETL/ELT requests.
  21. Empower your data engineers to eliminate data sprawl and inconsistent reports.
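
To make item 17 concrete, here is a minimal sketch of pulling a query result from Dremio over Apache Arrow Flight using the pyarrow Flight client in Python. The hostname, credentials and dataset path are illustrative placeholders (Dremio typically serves Flight on port 32010), so treat this as a sketch to adapt to your own environment rather than a copy-paste recipe.

```python
# Illustrative sketch: query Dremio over Apache Arrow Flight with pyarrow.
# The host, credentials, and dataset path below are placeholder assumptions.
from pyarrow import flight

# Connect to the Dremio coordinator's Arrow Flight endpoint (port 32010 by default).
client = flight.FlightClient("grpc://dremio-coordinator:32010")

# Authenticate and capture the bearer token header returned by the server.
token = client.authenticate_basic_token("my_user", "my_password")
options = flight.FlightCallOptions(headers=[token])

# Any Dremio SQL works here; this assumes a dataset promoted from an S3 source.
sql = 'SELECT trip_id, fare_amount FROM "s3_source"."trips" LIMIT 100'

# Ask Dremio where to fetch the results, then stream them back as Arrow data.
info = client.get_flight_info(flight.FlightDescriptor.for_command(sql), options)
reader = client.do_get(info.endpoints[0].ticket, options)

# The whole result arrives as an Arrow Table, ready for pandas or ML tooling.
table = reader.read_all()
print(table.to_pandas().head())
```

Because results stream back as columnar Arrow record batches rather than being marshalled row by row, this path avoids the serialization overhead that makes ODBC/JDBC the bottleneck on large result sets.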

Too bad it’s not 2035! I could have given you many more reasons why Dremio is your best choice for querying your data directly from Amazon S3!

If you are curious and want to know more about Dremio for AWS, check out the Dremio for AWS datasheet for more details!

Get started today with Dremio AWS Edition or Dremio Cloud.
