This one-day training introduces data engineering concepts for enterprise data warehousing using Microsoft Fabric Data Factory. Designed for those new to data engineering and self-taught professionals looking to expand their knowledge, this course balances theoretical concepts with practical demonstrations.

The day begins with a lecture-based overview covering the essentials of enterprise data warehousing and Microsoft Fabric's role in modern data architectures. We'll explain key terminology, fundamental concepts, and how Fabric Data Factory fits into the broader data engineering landscape. This foundation ensures all participants share a common understanding before moving into more technical content.

The second part transitions to demonstration-focused learning, where you'll observe practical implementation of pipeline-driven staging and loading processes. Rather than marketing promises, we'll show you real-world patterns that address common challenges in data warehousing projects. Each demonstration illustrates pragmatic solutions that balance theoretical best practices with practical constraints.

The final section continues with demonstrations of lifecycle management, showing straightforward approaches to monitoring data pipelines, implementing maintenance routines, and establishing sensible governance. We focus on practical techniques that can be implemented immediately, regardless of your organization's maturity level.

This training emphasizes practical knowledge, providing a realistic view of data engineering with Microsoft Fabric that you can apply regardless of your current experience level.
The pressure on data engineers to deliver data faster and more reliably continues to grow, especially with increasing demands for data governance, security, and compliance.
This workshop provides a practical, hands-on approach to accelerating database and data warehouse changes while maintaining security and reliability. Whether working with Fabric Data Warehouse or Azure SQL Database, you'll learn best practices for version-controlling database changes and implementing CI/CD workflows using GitHub, Azure DevOps, and SQL Projects across all variants of SQL Server.
We will also showcase how to supercharge development with Copilot, enabling faster SQL and Fabric coding while ensuring best practices are followed.
By the end of this session, attendees will:
Understand how to automate database change management efficiently.
Learn how to apply DevOps and CI/CD principles to database deployments.
Gain insights into leveraging Copilot for AI-assisted development in SQL and Fabric.
See practical demos and real-world implementation strategies.
The session will be a hands-on workshop combining interactive demos, guided exercises, and real-world case studies, where attendees will learn to automate database deployments, leverage Copilot for SQL development, and implement CI/CD best practices using GitHub, Azure DevOps, and SQL Projects.
If you're a data engineer, database administrator, or DevOps engineer looking to modernize your database delivery pipeline, this session is for you!
Unlock the power of automation and enhance your database management skills in this full-day PowerShell workshop designed for DBAs. This session will equip you with essential skills to streamline your daily operations and reduce manual errors through scripting and automation. We will start with PowerShell fundamentals before moving into advanced scripting best practices tailored to the needs of SQL Server management. A key highlight of the workshop is our in-depth exploration of the **dbatools module**, a powerful, community-built toolkit that simplifies SQL Server management tasks. With over 700 cmdlets and functions, it is a treasure trove of greatness (in my opinion). In this mostly demo-driven session, packed with real-world examples, you'll learn how to leverage dbatools to automate tasks such as backups, restores, performance monitoring, and migrations, dramatically enhancing your operational efficiency. Whether you're looking to learn PowerShell and dbatools or to explore automation strategies, this workshop offers a rich blend of theoretical insights and interactive scenarios. Join me for a day of learning, collaboration, and fun that will allow you to take full control of your database operations with the combined power of PowerShell and dbatools.
Are your users frustrated by slow reports? Do your SQL Server instances—on-premises or in Azure—struggle under high demand? Whether you manage a single server or a large-scale environment, performance tuning is essential, and it doesn’t have to be overwhelming.

In this full-day session, learn how to identify and resolve performance bottlenecks using a wide range of tools, scripts, and best practices. We’ll start with practical techniques for analyzing your environment, reading execution plans, and tuning for performance. You'll gain a clear understanding of how everyday maintenance tasks—and even infrastructure—can impact your server’s responsiveness. This session focuses on SQL Server 2019 and newer, including Azure SQL Database and SQL Server 2025, covering the latest performance enhancements and cloud-specific considerations. We’ll walk through real-world examples of common performance problems and how to fix them using straightforward, repeatable methods.

You’ll leave with:
- A checklist of key performance areas to evaluate in your environment—whether in the cloud or on-prem
- Strategies for addressing both query-level and server-level issues
- Insights into how SQL Server and Azure features can work for—or against—you
- Confidence to apply what you’ve learned, regardless of your current skill level

Designed for DBAs, developers, and anyone responsible for SQL Server performance, this session emphasizes practical, real-world solutions you can use right away—on any platform.
Let’s be honest: even experienced developers write T-SQL that starts to smell over time. Maybe it works, but it’s tangled, hard to maintain, and full of traps for future you. In this full-day, demo-packed session, Erik Darling and Kendra Little will walk you through the real-world query problems that quietly haunt OLTP systems—and how to fix them without rewriting the whole app. We’ll dissect the subtle stuff that tanks performance: implicit conversions, sneaky NULL logic, non-sargable filters, and joins that don’t do what you think they do. You’ll compare EXISTS to JOINs, untangle OR conditions, and learn when EXCEPT and INTERSECT save you from disaster. You'll see where views go off the rails, when temp tables and table variables shine, and how to modify data in a way that won’t make your DBA cry. Along the way, you’ll learn to leverage window functions, CROSS APPLY, and patterns for parameterization that hold up under pressure. You’ll know exactly how to refactor messy code into queries that are easier to understand, debug, and evolve—without sacrificing intent or introducing subtle bugs. If you’ve ever looked at a query and thought, “I have no idea what this does, and I’m afraid to touch it,” this session is for you. You already know how to write T-SQL that works. Now it’s time to write T-SQL you’re proud of.
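As a small taste of one anti-pattern mentioned above, here is a hedged sketch (table and column names are hypothetical, not from the session materials) of a non-sargable filter and its sargable rewrite:

```sql
-- Hypothetical dbo.Orders table with an index on OrderDate.
-- Non-sargable: wrapping the column in a function prevents
-- an index seek and forces a scan of every row.
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2024;

-- Sargable rewrite: compare the bare column against a range,
-- so the optimizer can seek directly on the index.
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20240101'
  AND OrderDate <  '20250101';
```

The two queries return the same rows, but only the second lets the optimizer use the index on OrderDate efficiently.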
A well-designed Power BI report should engage, inform, and be accessible to all users. Yet many reports suffer from poor usability, cognitive overload, and accessibility barriers, making insights harder to interpret and act upon. In this interactive workshop, you’ll explore UX best practices and digital accessibility principles to create reports that are intuitive, clear, and inclusive. Through hands-on exercises, case studies, and live critiques, you’ll gain practical strategies to enhance usability and accessibility in your Power BI reports.

What You’ll Learn:
- Identify audience needs and design for different personas
- Apply UX best practices to improve clarity and reduce cognitive fatigue
- Recognize and fix common accessibility challenges in Power BI reports
- Integrate accessibility checks and automation into your reporting workflows

The workshop includes two key segments:
- UX-Driven Report Design – Learn about different audiences and layout strategies, and improve usability through a hands-on redesign challenge.
- Accessibility in Power BI – Experience digital barriers firsthand, apply real-time fixes, and explore Power BI’s accessibility features to enhance inclusivity.

By the end, you’ll have actionable techniques, tools, and best practices to build user-friendly, effective, and inclusive Power BI reports.
You've been working with Microsoft SQL Server for a few years. You're comfortable reading wait stats, analyzing execution plans, and designing indexes, but now you're starting to look at other databases. You want to leverage what you already know, but have it translated into Postgres terms to understand what that database platform has available. Postgres is just similar enough to fool you into thinking it'll be easy - but so dramatically different that you'll need help making the jump. Let's get you started in one fun day packed with side-by-side examples comparing SQL Server versus Postgres.
Advance your DAX skills in this workshop designed to bridge the gap between basic and advanced concepts. Through real-world scenarios, you’ll learn to write efficient expressions, solve complex business challenges, and create dynamic, impactful reports with DAX. Here are a few examples of what you can learn in this workshop:
- Using OR conditions between slicers in DAX.
- Creating a slicer that filters multiple columns in Power BI.
- Using REMOVEFILTERS / VALUES for “natural” hierarchical calculations.
- Showing updated year-to-date actuals and forecasts in the same chart.
- When and how to use visual calculations in DAX.
- Optimizing cumulative totals using variables and window functions.
- Implementing different types of ranking calculations.
- Aggregating relative periods (like each new customer's first 30 days of purchases) efficiently.

Good experience writing DAX measures in Power BI or Analysis Services is a prerequisite for attending this training. You must know row context, filter context, and context transition. You are comfortable using CALCULATE and are not afraid to learn something new.
You’ve seen it before: the procedure that looks like it was generated by an AI trained on Stack Overflow and despair. It’s got MERGE. It’s got RIGHT JOINs. It’s got logic so tangled you’d need a flowchart, a flashlight, and a therapist to debug it. And now… it’s your problem. In this full-day festival of query-fixing, Erik Darling and Kendra Little lead you through the real-world mysteries of advanced T-SQL: the strange, the slow, and the occasionally cursed. You’ll tackle tangled paging logic, rescue window functions and indexed views from spools and spills, and finally learn when to keep a CTE—and when to yeet it. We’ll refactor data modifications that block like linebackers, decode procedural patterns, and write dynamic SQL that’s powerful and polite. You’ll learn when to CROSS APPLY, dig into views vs. inline TVFs, and discover why RIGHT JOIN is not simply LEFT JOIN’s syntactic twin. We’ll uncover when user-defined functions wreck your query execution plans—and how to rewrite them with flair. If you’ve ever been curious about why that query sometimes takes SO long and how to best rewrite it without just guessing, this is your playground. Expect fast demos, big laughs, and a glorious cheat sheet to take home. Because refactoring SQL isn’t just necessary—it’s super fun when you're in the right party.
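To give a flavor of the RIGHT JOIN point above: in the simple two-table case, a RIGHT JOIN really is a LEFT JOIN with the table order flipped; it's once more tables and filters pile up that the reading order starts to mislead. A hypothetical two-table sketch (table names invented for illustration):

```sql
-- Equivalent queries: the preserved (all-rows) side is
-- dbo.Customers in both, so customers with no orders
-- still appear with a NULL OrderId.
SELECT c.CustomerId, o.OrderId
FROM dbo.Orders AS o
RIGHT JOIN dbo.Customers AS c
    ON c.CustomerId = o.CustomerId;

SELECT c.CustomerId, o.OrderId
FROM dbo.Customers AS c
LEFT JOIN dbo.Orders AS o
    ON c.CustomerId = o.CustomerId;
```

The LEFT JOIN form reads top-to-bottom in the same order rows are preserved, which is one reason RIGHT JOINs in longer queries so often hide surprises.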
Wondering how to get started with enterprise Fabric development? This workshop will provide attendees with the knowledge and tools needed to create an enterprise-ready Microsoft Fabric environment. The workshop covers everything from gathering data from different sources and transforming it into an analytical model to securing access and monitoring its performance. As Fabric provides many different methods for performing these tasks, we will cover a variety of development tools, including shortcuts, copying data, pipelines, Dataflow Gen2, and notebooks, and explain which is the best choice in a given situation. Participants will practice the steps in hands-on exercises, using a medallion architecture to transform the data into an analytical lakehouse that can be used for ad-hoc querying and, of course, as a source for Power BI reports. Participants will learn how to provide ongoing maintenance, security, and monitoring of the lakehouse to ensure it is an enterprise-level solution. The workshop experience and examples will provide participants with the knowledge needed to create their own lakehouse. By the end of the session, participants will not only understand the technical steps involved but also when and why to choose a lakehouse architecture for their organizational data needs.
All the Azure data offerings are great. But they are also confusing. Which one is right for you? Which size do you need? The answer is, of course: it depends. Join us for a day of demystifying the jungle of offerings! We will walk you through the different service offerings, from SQL Server running in a VM, through Azure SQL DB, up to Fabric. To make sure this day is applicable and actionable, we will structure it clearly by use cases, covering both how to land your data in Azure and how to make it accessible for consumption:
- HA/DR – Are you intending to use Azure only as your backup datacenter?
- Migration – Is Azure going to be your new home?
- ETL, replication, mirroring, and links – Are you only intending to run some of your workloads, like analytics, in the cloud, and need to build a landing zone for your data from other sources?
- Streaming – Are you getting data from sensors or other devices?
- Analytics – Is Fabric really your only choice to run reports in the future?

This demo-packed day will be your fast track to figuring out which of the countless offerings is right for you and what it will take to get there. We’ll focus on the technical aspects but also take a look at implications like security, governance, and, of course, cost.
In today's data-driven world, SQL Server continues to be a powerhouse for organizations looking to leverage their data effectively. This all-day training session offers practical, actionable insights for optimizing SQL Server environments and ensuring operational efficiency, whether on-premises or in the cloud.

We’ll start by breaking down the basics of hardware and performance. You’ll learn how SQL Server uses system resources like CPU, memory, and storage, and how to choose the right setup for your environment. We’ll showcase both on-prem and cloud-based options so you can make smart choices that fit your organization’s needs. From there, we’ll walk through essential day-to-day administration tasks. You’ll learn how to configure your SQL Server environment, set up backups, manage routine maintenance, and build simple disaster recovery plans. We’ll use real-world examples to help you understand what to do, why it matters, and how to handle common challenges that come up in a DBA’s world.

In this session, you will:
- Learn about critical facets of SQL Server architecture
- Examine common configurations and administrative practices
- Review high availability and disaster recovery options for SQL Server

By the end of the day, you’ll walk away with the confidence and knowledge to start managing SQL Server environments effectively—and a solid foundation to grow from as your experience builds.
SQL Server performance tuning can be overwhelming, but it doesn’t have to be. In this session, we will break down key strategies that help improve query performance, optimize indexes, and enhance database efficiency. If you already know SQL and want to step into performance tuning, this session is for you. We will cover essential tuning techniques, such as indexing strategies, query execution plans, and common performance bottlenecks. Along the way, we’ll also explore how Generative AI (GenAI) tools can assist in performance analysis, query optimization, and database monitoring. Additionally, we will touch on how Python can be used for database performance analysis and automation.

Expect a practical discussion with real-world examples, covering:
- How to analyze and optimize slow queries.
- The role of indexing and execution plans in performance tuning.
- A brief look at how GenAI can provide performance insights.

By the end of this session, you will have a strong foundation to start your journey in SQL Server performance tuning, along with an understanding of how GenAI can enhance the optimization process.
Modern data professionals increasingly find themselves managing diverse database technologies, often in the same organization. This session is designed for those who want to sharpen their proficiency across Oracle, MySQL, and MongoDB, learning the tips, tools, and techniques that reduce friction and improve efficiency when working across platforms. We'll explore the unique strengths and quirks of each database, focusing on monitoring, administration, and performance tuning. We'll see how tools can streamline your multiplatform development and operations. If you're balancing enterprise and open-source databases, along with emerging NoSQL use cases, this session is your practical toolkit.
Achieving peak performance in PostgreSQL databases requires mastering the art of query tuning. Developers and DBAs often grapple with diagnosing and resolving performance bottlenecks, wasting valuable time on trial-and-error approaches. This session introduces a systematic methodology for tuning PostgreSQL queries, leveraging tools like Wait Time analysis, explain plans, and SQL diagramming. Attendees will learn to identify costly operations, select optimal execution plans, and apply proven best practices through real-world case studies. Whether you are a novice or an experienced professional, this presentation will empower you to optimize queries efficiently, streamline database performance, and save countless hours in troubleshooting.
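As a minimal illustration of the explain-plan step described above (table and column names are hypothetical), PostgreSQL's EXPLAIN ANALYZE shows the plan the optimizer chose alongside actual row counts and timings:

```sql
-- Ask PostgreSQL for the execution plan plus real run-time
-- statistics; ANALYZE executes the query, and BUFFERS adds
-- I/O detail that helps spot costly operations.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.order_id, c.name
FROM orders AS o
JOIN customers AS c ON c.customer_id = o.customer_id
WHERE o.created_at >= now() - interval '7 days';
```

Comparing estimated versus actual row counts in the output is a common first step when deciding whether the planner picked a poor join strategy or is working from stale statistics.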
Still Fighting TempDB Contention? Meet Memory-Optimized TempDB! TempDB contention has always been a challenge for DBAs, especially in high-concurrency OLTP environments. To address this, SQL Server 2019 and 2022 introduced several enhancements, including Memory-Optimized TempDB, to reduce bottlenecks and improve performance. In this session, I’ll show you how Memory-Optimized TempDB works, when to use it, and how to implement it. I’ll demonstrate how to resolve contention using this feature. I'll also highlight its limitations and provide strategies to manage them effectively. By the end of this session, you'll have a clear understanding of how and when to leverage Memory-Optimized TempDB to enhance server performance.
Real-Time Intelligence in Microsoft Fabric empowers data professionals to seamlessly process and analyze highly granular, event-driven data. At its core lies the Kusto engine and the Kusto Query Language (KQL), delivering powerful capabilities for real-time data analysis. This session explores how you can leverage KQL to build efficient, event-driven solutions in Fabric, with real-world examples. In this session, we will discuss key features of KQL that make it a game-changer for interactive analysis of data in motion. We'll cover basic syntax before exploring various built-in functions and operators of KQL, demonstrated through live queries on sample real-time data. You may think you have to learn a brand-new language, but you’ll quickly realise that KQL isn’t as alien as it seems, and that you can get going pretty quickly with filtering, aggregating, and joins as you would with T-SQL. We will also discuss the storage architecture of Eventhouse and why it is the optimal store for your event data in Fabric, as well as various use cases for workspace monitoring and log retention in Fabric. Once you have an understanding of how to store and analyze your event-driven data in Fabric with KQL and Eventhouse, you’ll be ready to deliver powerful real-time visuals and actions. Come along if you are a developer, data engineer, or analyst seeking practical examples and best practices for crafting KQL queries to drive real-time data analysis and actions. You will leave this session equipped with the skills to unlock the full potential of real-time data in Fabric.
If you ask a user how often they need data, their initial answer is often "in real time", right? Once you solve getting them that data in real time, how do you direct it where it needs to go? Where can you collect this data in motion from, and what kinds of actions and transformations can you take on it as it moves around? How can you best support the business in deriving insights from this data in motion? The Real-Time Hub is your starting point for building real-time applications in Fabric. It's a kind of action center for bringing real-time events from a variety of sources (and clouds) into Fabric, learning how to analyze - and act upon - that data, storing it, and then visualizing it in real-time dashboards. In this session, we will explore what you can do with Real-Time Hub and what real-world scenarios you can unlock, based on our real-world experiences. Following on from there, we'll use some real-time sample data sources provided within the hub to show you how easy it is to pull in some real-time data. Once we've discussed and demonstrated the various data sources you can connect to (many of which you have in your environment today), we will demonstrate the no-code experience of Eventstreams to show how easy it is to get moving with data in motion in your Fabric environment. If you have data moving around your data estate, it's worth checking out these new and notable ways to work with it.
In this session, you will learn how to evolve your Azure SQL DBA skills in the domain of security, compliance, authentication, and connectivity, from the perspective of an on-premises DBA now supporting databases in Azure. Using the example of a fully managed Azure SQL PaaS service, you will gain a deep understanding of the security and compliance concepts the platform offers. You will understand authentication and best practices related to using Windows Authentication and Microsoft Entra ID with your Azure SQL resources, and how this maps to resources migrated from your on-premises SQL Server. We will review how to use advanced threat protection to automatically detect security vulnerabilities. You will learn about Microsoft Purview, which helps you gain visibility into, safeguard, and manage sensitive data, and govern critical data risks and regulatory requirements in Azure. We will also cover the basics of networking in Azure SQL and what is required to securely connect to, and access, your Azure SQL resources. In each of the areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role, highlighting which responsibilities are new, which stay the same, and which are shared with or fully delegated to Microsoft. You will walk away with an understanding of the relevant DBA skills you need to evolve as an Azure SQL DBA.