Scaling Python Data Science with Dask

Session Abstract

As datasets for data science and machine learning grow beyond what a single machine can comfortably handle, scaling data work is more important than ever.

Dask is an open-source library for parallel computing in Python. It provides a complete framework for distributed computing, making it easy for data professionals and DevOps engineers to scale their workflows. Dask is used in a wide range of domains, from finance and retail to academia and the life sciences. It is also leveraged internally by numerous special-purpose tools, including XGBoost, RAPIDS, PyTorch, Prefect, Airflow, and more.
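As a brief illustration of the style of parallelism described above, here is a minimal sketch using Dask's `delayed` interface, which builds a lazy task graph that Dask can then execute in parallel (this assumes Dask is installed, e.g. via `pip install dask`; it is not code from the session itself):

```python
import dask


@dask.delayed
def add(x, y):
    # An ordinary Python function, wrapped so calls are recorded
    # in a task graph instead of executed immediately.
    return x + y


# Compose delayed calls: this builds a graph of three tasks.
# Nothing has actually run yet.
total = add(add(1, 2), add(3, 4))

# .compute() walks the graph and runs independent tasks in parallel
# (here add(1, 2) and add(3, 4) can execute concurrently).
result = total.compute()
print(result)  # 10
```

The same lazy-graph idea underlies Dask's higher-level collections such as `dask.dataframe` and `dask.array`, which mimic the pandas and NumPy APIs while partitioning the data into chunks that are processed in parallel.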

In this session, you will:
• Learn about Dask, what it can and can’t do, how it works and who uses it
• See how Dask augments traditional database query engines with more advanced machine learning capabilities, and how these technologies can be leveraged to work with data lakes
• See real-world examples, including data science pipelines at Capital One and ML workflows at Walmart
• Understand both the power and simplicity of using Dask for your own projects

Deploying distributed systems in the cloud is hard, however. We'll finish by discussing the design behind Coiled, a cloud service for Dask built to provide scalable Python with minimal fuss.