In data engineering we often default to processing data in nightly or hourly batches, but that pattern no longer meets expectations. Our customers know information is created continuously and expect it to be available much sooner. While the move to stream processing adds complexity, today's tools make it achievable for teams of any size.
This session introduces both beginners and experienced tech professionals to techniques for implementing real-time stream processing using Azure Event Hubs or Apache Kafka. We’ll dive into three options for building streaming data pipelines: Azure Databricks, Azure Stream Analytics, and Confluent Cloud.