Tracking & Triggering Pattern with Spark Stateful Streaming

Inside Adobe Experience Platform, we found we needed to track actions happening at the control-plane level and act on them at lower levels, such as the data lake and ingestion processes. Using Apache Spark Stateful Streaming, we have built services that respond by starting processes such as compacting, consolidating, and cleaning data, minimizing processing time while keeping everything within defined SLAs. This talk presents a pattern we have run in production across multiple Adobe Experience Platform services for the last two to three years, with no high-severity on-call interventions and minimal operational cost on high-throughput ingestion flows.
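The core of such a tracking-and-triggering pattern is a per-key state-update function, the kind Spark Structured Streaming invokes via `mapGroupsWithState`/`flatMapGroupsWithState`. As a minimal sketch (in plain Python, with no Spark dependency), the logic might fold a micro-batch of control-plane events into tracked state and emit a trigger once a threshold is crossed. The state shape, thresholds, and event fields below are illustrative assumptions, not Adobe's actual configuration:

```python
from dataclasses import dataclass

@dataclass
class DatasetState:
    """Hypothetical per-dataset state tracked across micro-batches."""
    pending_files: int = 0
    pending_bytes: int = 0

# Assumed thresholds for illustration only.
FILE_THRESHOLD = 100                  # trigger after 100 small files...
BYTE_THRESHOLD = 512 * 1024 * 1024    # ...or ~512 MB of pending data

def update_state(state, events):
    """Fold one micro-batch of events (dicts with an optional 'size')
    into the dataset's state; return the new state plus any actions
    to trigger downstream (e.g. a compaction job)."""
    triggers = []
    for ev in events:
        state.pending_files += 1
        state.pending_bytes += ev.get("size", 0)
    if state.pending_files >= FILE_THRESHOLD or state.pending_bytes >= BYTE_THRESHOLD:
        triggers.append("compact")
        state = DatasetState()  # reset after triggering, keeping state bounded
    return state, triggers
```

Resetting the state after each trigger is what keeps the streaming job's memory footprint bounded on high-throughput flows; in real Spark code the same effect would come from updating or removing the `GroupState` inside the update function.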

Topics Covered

Data Lake Engines
Dremio Subsurface for Apache Spark
