PASS Summit Europe (Frankfurt)
All Sessions
General Session
Advanced Access Control Patterns in Microsoft Fabric
Jon Voege
Learn to navigate the complexity of securing enterprise data platforms with sophisticated access control strategies tailored for Microsoft Fabric. This session flips the script and starts with the solution. Through the lens of three different types of organizations, we will look at the access and permission patterns that make each work, and which compromises they need to accept. We will explore access strategies for Workspace-, Item-, and Compute-level access, compare RLS in OneLake Security with RLS in semantic models, and discuss other 'gotchas' and lesser-known nuances of the different permission options. Join this session if you want a look beyond the basics of permission management and be ready to take your own Microsoft Fabric security setup to the next level.
General Session
Advanced Access Control Patterns in Microsoft Fabric
Jon Voege
Inspari
Session Goals: • The differences between the different access paradigms in Microsoft Fabric. • When and why to apply Workspace Roles, Item Permissions and Granular permissions respectively. • Surprising nuances and "gotchas" regarding details of each permission layer.
Session Prerequisites: Knowledge of creating and maintaining semantic models and Power BI reports is required. Experience sharing models and reports with other users is beneficial. Experience sharing Fabric backend artifacts and workspace accesses is beneficial.
Track: Analytics
Level: Level 300
Theme: Security
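The layered permission model the abstract alludes to can be previewed in a few lines. A hypothetical Python sketch (the role names mirror Fabric's workspace roles, but the grant sets and resolution logic here are deliberate simplifications for illustration, not Fabric's actual algorithm):

```python
# Illustrative only: a user's effective access to an item modeled as the
# union of what their workspace role grants and any permission granted
# directly on the item. Fabric's real evaluation is more involved.

WORKSPACE_ROLE_GRANTS = {
    "Admin":       {"read", "write", "reshare", "manage"},
    "Member":      {"read", "write", "reshare"},
    "Contributor": {"read", "write"},
    "Viewer":      {"read"},
}

def effective_permissions(workspace_role, item_permissions):
    """Union of workspace-role grants and direct item-level permissions."""
    granted = set(WORKSPACE_ROLE_GRANTS.get(workspace_role, set()))
    return granted | set(item_permissions)

# A Viewer who was directly granted 'write' on a single item:
print(sorted(effective_permissions("Viewer", {"write"})))  # ['read', 'write']
```

Even this toy version shows why item-level grants can quietly widen access beyond what the workspace role suggests.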
General Session
No Black Box: How Fabric DW Works, Performs, and Gets Monitored Right
Mariya Ali
Most Microsoft Fabric Data Warehouse users know what to do — but not what's actually happening beneath the query they just ran. In this session, you'll get a first-hand architectural walkthrough of Fabric DW from the product team that designed and shipped it. We'll start where most sessions don't — the engine itself. How does Fabric DW separate compute from storage using OneLake? What is V-Order optimization, why does it exist, and when does it hurt you instead of help you? How does the concurrency model work across SQL pools, and what are the real-world capacity implications of the choices you make today? From there, we'll move into performance: the patterns that silently kill query throughput, how to read query execution plans in a distributed columnar engine — not in theory, but in a live demo using real warehouse telemetry. Finally, we'll deep-dive into the monitoring and observability stack: Query Insights, Warehouse Insights, SQL Pool Insights, and Query Plan Visualization. You'll leave knowing exactly which DMV-equivalent views surface the signals that matter, how to build proactive alerting, and how enterprise teams — including large healthcare organizations running production workloads — are using these tools to drive meaningful cost reduction and SLA reliability. Whether you're evaluating Fabric, mid-migration, or already in production and hitting walls you don't fully understand, this session gives you the architecture mental model you've been missing.
General Session
No Black Box: How Fabric DW Works, Performs, and Gets Monitored Right
Mariya Ali
Microsoft
Session Goals: • Understand how Fabric DW's architecture drives query performance decisions. • Navigate the full monitoring stack to diagnose production issues fast. • Leave with an observability playbook you can apply on Monday.
Session Prerequisites: Attendees should have working familiarity with SQL and basic data warehousing concepts (tables, schemas, query execution). Prior exposure to Microsoft Fabric is helpful but not required.
Track: Architecture
Level: Level 300
Theme: AI + Data
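The V-Order question the abstract raises rests on a general columnar-storage idea: sorted data compresses better. A toy illustration, with run-length encoding standing in for the real encodings (this is not the V-Order algorithm itself):

```python
# Why ordering data before encoding helps a columnar engine: run-length
# encoding collapses repeated adjacent values, so a sorted low-cardinality
# column compresses far better than the same values in arrival order.

def run_length_encode(values):
    """Collapse a sequence into [value, run_length] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

unsorted = ["DE", "FR", "DE", "FR", "DE", "FR"]
sorted_col = sorted(unsorted)

print(len(run_length_encode(unsorted)))    # 6 runs - no compression at all
print(len(run_length_encode(sorted_col)))  # 2 runs - ['DE', 3], ['FR', 3]
```

The flip side, which the abstract hints at ("when does it hurt you"), is that maintaining that ordering costs write-time work.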
General Session
Productivity Hacks for the Modern Data Pro
Nick Hape
Kellyn Pot'Vin-Gorman
Between firefighting performance issues, managing schema changes, and supporting development teams, database professionals are stretched thin. This session is your boost of practical tips, tools, and workflows to help you get more done with less stress. We’ll explore how leading teams are simplifying repetitive tasks, improving code quality, and staying productive across hybrid environments. Plus, see how new AI-powered features can automate routine checks, suggest optimizations, and even generate realistic test data to accelerate development. Whether you're a DBA, developer, or somewhere in between, you’ll walk away with actionable ideas and a live demo of tools that make it all possible.
General Session
Productivity Hacks for the Modern Data Pro
Nick Hape
Redgate
Kellyn Pot'Vin-Gorman
Redgate
Track: Database Management
Level: Level 100
General Session
PowerShell Moves Data Around – Fast and Flexible
Andreas Jordan
I use PowerShell very successfully to access various database systems. Even though many people think of Microsoft SQL Server first when they think of PowerShell, the underlying mechanisms also work seamlessly with many other platforms. In my projects, I frequently work with Oracle and PostgreSQL as well. In this session, we will exchange data between different database systems and even Excel files. We'll explore the underlying technologies and learn how to achieve high performance without sacrificing flexibility. Finally, we'll look at how to wrap the entire process in transactions to ensure data integrity and secure, reliable transfers.
General Session
PowerShell Moves Data Around – Fast and Flexible
Andreas Jordan
ORDIX AG
Session Goals: • Understand PowerShell data access via .NET providers. • Apply high‑performance transfer methods like streaming and bulk operations. • Build reliable multi‑DB workflows using transactions and robust error handling.
Session Prerequisites: Basic knowledge of PowerShell is helpful, but not required to understand the presented concepts.
Track: Database Management
Level: Level 200
Theme: Cloud + Multi-DB
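The batching-plus-transaction pattern the abstract describes translates to any data-access API. A minimal Python sketch using the standard library's sqlite3 as a stand-in for the source and target systems (the session itself uses PowerShell and .NET providers):

```python
# Move rows in batches inside a single transaction, so a mid-transfer
# failure leaves the target unchanged and the copy is safe to retry.
import sqlite3

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(id INTEGER, name TEXT)")
src.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, f"row{i}") for i in range(1000)])
dst.execute("CREATE TABLE t(id INTEGER, name TEXT)")

BATCH = 250
cur = src.execute("SELECT id, name FROM t")
try:
    with dst:  # one transaction: commits on success, rolls back on error
        while batch := cur.fetchmany(BATCH):
            dst.executemany("INSERT INTO t VALUES (?, ?)", batch)
except sqlite3.Error:
    pass  # target was rolled back; nothing partial was left behind

print(dst.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1000
```

Batched set-based inserts are the portable analogue of the bulk operations named in the session goals; row-by-row loops are the pattern they replace.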
General Session
Shattering Records: Architecting a 1,000,000 IOPS SQL Server
Christoph Petersen
Chris Madden
Modern SQL Server workloads, from massive parallel processing to high-concurrency OLTP, now demand I/O performance once limited to specialized on-premises hardware. Achieving 1,000,000 IOPS in cloud or virtual environments is no longer theoretical—it is a requirement for scale. This session deconstructs the infrastructure needed to sustain this level of performance. We will analyze the convergence of compute, storage, and networking, identifying where standard configurations fail under extreme load. We’ll dissect technical hurdles including CPU selection, NUMA-aware memory association, and networking throughput limits. Additionally, we map these infrastructure layers back to SQL Server internals to tune the database engine to leverage the performance provided by the underlying hardware. Designed for DBAs and architects pushing physical limits, this session provides a technical blueprint for the fastest modern database environments. Attendees will leave with: • Hardware Blueprint: Comparing CPU families and storage solutions for sub-millisecond latency. • Configuration Checklists: Methodologies for host-level storage offloading and networking. • Tuning Guide: SQL Server changes to prevent engine bottlenecks. • Cost-Performance Analysis: Frameworks for right-sizing resources without over-provisioning.
General Session
Shattering Records: Architecting a 1,000,000 IOPS SQL Server
Christoph Petersen
Google Germany GmbH
Chris Madden
Google
Session Goals: • Performance: Aligning VM specifications with next-gen block storage to eliminate I/O bottlenecks. • Continuity: Evaluate HA/DR patterns that maintain performance parity during failover. • Cost: Balance resource elasticity and SQL Server core licensing to maximize price-performance.
Session Prerequisites: Designed for DBAs and architects who need to push SQL Server to its physical limits, this session provides the technical blueprint for building one of the fastest database environments possible today.
Track: Architecture
Level: Level 300
Theme: Cloud + Multi-DB
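The headline number has simple queueing math behind it (Little's Law): sustained IOPS equals outstanding I/Os divided by average latency. A quick illustration with assumed, not measured, figures:

```python
# Little's Law applied to storage: to sustain N IOPS at an average
# service latency L, you need roughly N * L I/Os in flight at once.
# The latency figure below is an illustrative assumption.

def required_queue_depth(target_iops, avg_latency_s):
    """Outstanding I/Os needed to sustain target_iops at avg_latency_s."""
    return target_iops * avg_latency_s

# 1,000,000 IOPS at 0.5 ms average latency:
depth = required_queue_depth(1_000_000, 0.0005)
print(depth)  # 500.0 outstanding I/Os, spread across devices and queues
```

This is why the session pairs storage selection with networking and CPU topics: hundreds of concurrent in-flight I/Os have to be generated, queued, and completed somewhere.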
General Session
When SQL Server Slows Down: Triage for Infrastructure Bottlenecks
Bjoern Peters
When users report that SQL Server is slow, the investigation often jumps straight to indexes, plans, or code. Sometimes that is correct. But in many real-world environments, the first bottleneck is elsewhere: poor memory allocation across instances, storage that cannot sustain the workload, weak tempdb decisions, inconsistent configuration, or infrastructure choices that quietly constrain the engine long before query tuning begins. This session presents a practical triage approach for SQL Server performance investigations with a strong focus on infrastructure and configuration realities. We will look at how to separate query-level symptoms from platform-level causes, how to use wait statistics and core server signals to narrow the problem space quickly, and how to spot situations where hardware, layout, or operational decisions are doing more damage than the workload itself. Attendees will leave with a more disciplined troubleshooting sequence and a stronger understanding of where infrastructure and configuration issues often hide.
General Session
When SQL Server Slows Down: Triage for Infrastructure Bottlenecks
Bjoern Peters
Kramer&Crew
Session Goals: • Distinguish query-level symptoms from infrastructure and configuration bottlenecks. • Use wait statistics and core server indicators to narrow the problem space faster. • Apply a practical triage sequence that avoids wasting time in the wrong troubleshooting layer.
Session Prerequisites: Day-to-day SQL Server experience is recommended. Familiarity with performance troubleshooting, wait statistics, memory pressure, tempdb, and storage basics is helpful, but no advanced internals knowledge is required.
Track: Database Management
Level: Level 300
Theme: AI + Data
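The "narrow the problem space with wait statistics" step can be pictured as a classification pass. A toy Python sketch (the wait-type-to-layer mapping is a small illustrative subset, not an authoritative taxonomy):

```python
# Bucket observed wait time by the layer each wait type usually
# implicates, then start triage in the dominant layer rather than
# defaulting to query tuning. Mapping is deliberately incomplete.

WAIT_LAYER = {
    "PAGEIOLATCH_SH":      "storage",
    "PAGEIOLATCH_EX":      "storage",
    "WRITELOG":            "storage (log)",
    "RESOURCE_SEMAPHORE":  "memory",
    "SOS_SCHEDULER_YIELD": "cpu",
    "CXPACKET":            "cpu (parallelism)",
    "LCK_M_X":             "workload (blocking)",
}

def triage(wait_stats):
    """Return the layer with the most accumulated wait time."""
    totals = {}
    for wait_type, ms in wait_stats.items():
        layer = WAIT_LAYER.get(wait_type, "other")
        totals[layer] = totals.get(layer, 0) + ms
    return max(totals, key=totals.get)

sample = {"PAGEIOLATCH_SH": 90_000, "CXPACKET": 12_000, "LCK_M_X": 5_000}
print(triage(sample))  # storage - start with the I/O path, not the query
```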
Pre-Conference
Adding PostgreSQL to your SQL Server Skill Set
Grant Fritchey
Pat Wright
More organizations are adding PostgreSQL to their technology stack than ever before. The challenge is that they are not immediately replacing their existing technology, which means more and more people need to understand both SQL Server and PostgreSQL. This session is explicitly designed to support people who already know SQL Server in their journey to add PostgreSQL to their skill set. The session covers the areas of overlap between the two platforms, as well as all the differences that can make learning PostgreSQL a challenge. Not only does this all-day session teach PostgreSQL, but it also explores tooling, documentation, the cloud, and other resources to help on the journey of adding PostgreSQL to an existing SQL Server skill set.
Pre-Conference
Adding PostgreSQL to your SQL Server Skill Set
Grant Fritchey
Redgate
Pat Wright
Redgate
Session Goals: • Learn about the differences in language between SQL Server and PostgreSQL. • Understand what's different and unique in how PostgreSQL operates.
Session Prerequisites: General understanding of SQL Server in order to understand the mapping.
Track: Database Management
Level: Level 200
Theme: Cloud + Multi-DB
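A taste of the language differences the class covers, captured as a lookup table. These pairs are standard, documented equivalents; the table is illustrative, not exhaustive:

```python
# A few T-SQL constructs and their everyday PostgreSQL counterparts -
# the kind of mapping a SQL Server practitioner builds up early on.

TSQL_TO_POSTGRES = {
    "SELECT TOP 10 ...": "SELECT ... LIMIT 10",
    "GETDATE()":         "now()",
    "ISNULL(x, 0)":      "COALESCE(x, 0)",
    "NVARCHAR(100)":     "varchar(100)  -- Postgres text types are Unicode",
    "IDENTITY(1,1)":     "GENERATED ALWAYS AS IDENTITY",
    "SELECT @@VERSION":  "SELECT version()",
}

for tsql, pg in TSQL_TO_POSTGRES.items():
    print(f"{tsql:20} -> {pg}")
```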
General Session
Postgres for non-Postgres DBAs
Michael Banck
PostgreSQL is the most advanced open-source relational database. Over the last decade, it has established itself as the go-to data store for various applications (like Vector/AI, GIS, JSON/key-value/document or full-text search). "Just use Postgres" is a well-known term in the database industry to indicate that for almost all workloads and usages PostgreSQL can be the go-to solution. PostgreSQL offers a wide variety of features, while being relatively low-maintenance for most workloads, in particular if one of the managed services is used. However, if no dedicated PostgreSQL DBA is on staff, somebody still needs to look after Postgres servers or services. Otherwise, the performance might deteriorate or some hard limits/errors could be hit at some point with increasing database size or user count. This talk will give a quick overview of Postgres, what minimal initial tuning is necessary and what the current limitations are for a mostly hands-off operation. It will also provide some best practices for installation and configuration and what pitfalls to look out for. It is intended for DBAs, data engineers or people whose primary role is not database management but who have to look after PostgreSQL for one reason or another.
General Session
Postgres for non-Postgres DBAs
Michael Banck
credativ GmbH
Session Goals: • An overview of PostgreSQL and its current position in the data industry. • Best practices for PostgreSQL setup and administration. • Knowing which pitfalls to avoid.
Session Prerequisites: DBA experience with a non-PostgreSQL relational database would be useful.
Track: Database Management
Level: Level 200
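The "minimal initial tuning" the talk mentions usually starts with a handful of memory settings. Commonly cited rules of thumb, expressed as a Python sketch for illustration; the values depend on RAM and workload and are assumptions to validate, not recommendations from the session:

```python
# Widely repeated starting points for a dedicated Postgres host.
# Treat every value here as a hypothesis to test against your workload.

def starting_config(ram_gb):
    return {
        "shared_buffers":       f"{ram_gb // 4}GB",      # ~25% of RAM is a common rule of thumb
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",  # ~75% of RAM: planner hint, not an allocation
        "work_mem":             "64MB",  # per sort/hash node, per query - raise with care
        "maintenance_work_mem": "1GB",   # speeds up VACUUM and index builds
        "wal_compression":      "on",
    }

print(starting_config(64))
```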
General Session
Pokemon, Choose Your Index: You Can't Have Them All
Brent Ozar
Index tuning often sounds abstract: selectivity, key order, includes, trade-offs. In this session, let's turn it into a game! Delivered entirely through live demos and audience interaction, Brent Ozar turns index design into a series of Pokémon-style battles. Using the Users table from the Stack Overflow database, Brent Ozar presents real-world query patterns and invites the audience to “play” index cards—each representing a different index design—to see which ones win, which ones struggle, and which ones backfire. As each query battle unfolds, you’ll see why some indexes are super-effective against certain workloads but weak against others, why covering everything is tempting (and dangerous), and why you can’t catch—or create—every index without paying a price. Brent will explain the results in plain English, tying each outcome back to how SQL Server actually uses indexes under the covers. By the end of the session, you’ll walk away with a much stronger intuition for choosing the *right* index for a workload, understanding trade-offs, and explaining index decisions to developers and managers alike. No slides full of theory—just hands-on demos, real queries, and a room full of data professionals playing along.
General Session
Pokemon, Choose Your Index: You Can't Have Them All
Brent Ozar
Brent Ozar Unlimited
Session Goals: • Understand how index column order affects query processing. • Learn why you shouldn't start indexes with non-selective columns. • Discover how to craft indexes that are useful for all kinds of queries.
Session Prerequisites: You should be comfortable reading T-SQL queries.
Track: Database Management
Level: Level 300
Theme: AI + Data
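The core intuition the session builds can be previewed with Python's bisect: a composite index behaves like a sorted list of tuples, so it supports a seek on a leading-column prefix but not on a trailing column alone. The rows below are hypothetical, loosely modeled on the Stack Overflow Users table:

```python
# A composite "index" on (DisplayName, Location) as a sorted tuple list.
import bisect

index = sorted([
    ("Alice", "Oslo"), ("Alice", "Paris"),
    ("Brent", "Vegas"), ("Carol", "Oslo"),
])

# Seek on the leading column: binary search jumps straight to the range.
lo = bisect.bisect_left(index, ("Brent",))
hi = bisect.bisect_right(index, ("Brent", chr(0x10FFFF)))
print(index[lo:hi])  # [('Brent', 'Vegas')]

# Filter on the trailing column alone: the sort order is useless here,
# so every entry must be examined - the equivalent of a full index scan.
print([row for row in index if row[1] == "Oslo"])
```

Key order, in other words, decides which queries get the seek and which get the scan.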
General Session
POURing Accessibility into Power BI: Steps for Inclusive Analytics
Juliana Smith
Many organisations focus on accessibility for websites, but overlook Power BI, even though reports used for decisions, risk, and performance are digital products too. If people can’t access them, they’re excluded. My perspective is shaped by experience: after a work injury caused nerve damage, a simple hardware change transformed my ability to work and reframed accessibility as essential, not optional. This session translates WCAG principles into practical Power BI development techniques, showing how layout, colour, navigation, tab order, screen‑reader behaviour and text choices affect usability. We also explore how AI/DAX/UDF can generate meaningful alternative text for visuals, improving accuracy and reducing manual effort. Attendees will leave with clear, repeatable methods to make Power BI reports more inclusive, usable, and compliant by design.
General Session
POURing Accessibility into Power BI: Steps for Inclusive Analytics
Juliana Smith
Turner & Townsend
Session Goals: • Provide practical techniques to apply accessibility design principles through layout, navigation, color, and screen‑reader‑aware choices. • Show how DAX/UDFs generate alternative text with less manual effort. • Translate WCAG into Power BI design practice.
Session Prerequisites: Experience designing Power BI reports, especially visuals, navigation, and complex layouts. Attendees should understand core UI principles and be ready to apply them in advanced scenarios focused on accessibility and inclusive design.
Track: Analytics
Level: Level 300
Theme: AI + Data
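The generated-alt-text idea can be sketched in plain Python. The session uses AI, DAX and UDFs; this function and its summary format are hypothetical illustrations of the concept, not a Power BI API:

```python
# Derive screen-reader alt text from the data a visual shows, instead
# of writing and maintaining it by hand.

def alt_text(measure, by, data):
    """Build alt text from a category -> value mapping."""
    top = max(data, key=data.get)
    total = sum(data.values())
    return (f"Bar chart of {measure} by {by}. "
            f"{top} is highest at {data[top]:,} "
            f"({data[top] / total:.0%} of the total {total:,}).")

print(alt_text("Sales", "Region", {"EMEA": 1200, "Americas": 800, "APAC": 400}))
```

Because the text is derived from the data, it stays accurate when the numbers refresh, which is exactly the manual-effort problem the abstract calls out.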
Pre-Conference
Dev-Prod Demon Hunters: Finding the Real Cause of Production Slowness
Brent Ozar
Production is slow. Development is fast. The same query runs in both. Somewhere between the two, a performance demon is hiding—and this session is about hunting it down. Inspired by Brent Ozar's love of the K-Pop Demon Hunters theme song, this class is delivered almost entirely as live demos, not slides. Brent Ozar will run real queries against two environments labeled “dev” and “prod,” then work through them exactly the way an experienced DBA would in the real world: comparing server settings, analyzing execution plans, and uncovering the subtle differences that led SQL Server to make different decisions. Each “hunt” reveals another demon—statistics, configuration, data distribution, or plan choice—and shows how easily a test environment can lie. Along the way, Brent will demonstrate practical techniques you can use immediately: running sp_Blitz to surface meaningful environment differences, comparing execution plans to understand *why* SQL Server behaved differently, and making targeted changes to development so it better reflects production reality. By the end, you’ll understand how to stop guessing, stop blaming the engine, and follow the clues that lead to the truth—because when dev and prod finally move in sync, that’s when performance goes golden.
Pre-Conference
Dev-Prod Demon Hunters: Finding the Real Cause of Production Slowness
Brent Ozar
Brent Ozar Unlimited
Session Goals: • Discover what caused query plans to vary from production. • Learn how to quickly assess environment differences that would cause query plan changes. • Understand how to change dev to more closely match prod.
Session Prerequisites: You should already be comfortable writing queries, reading execution plans, and using the First Responder Kit to gather data about your server's wait stats and health.
Track: Database Management
Level: Level 300
Theme: Cloud + Multi-DB
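The "compare server settings" step generalizes to a simple dict diff. sp_Blitz surfaces far more than this; the settings and values below are hypothetical examples of the kind of drift that changes plan choice:

```python
# Report every setting whose value differs between two environments.

def diff_settings(prod, dev):
    """Return {setting: (prod_value, dev_value)} where the two differ."""
    return {k: (prod.get(k), dev.get(k))
            for k in sorted(set(prod) | set(dev))
            if prod.get(k) != dev.get(k)}

prod = {"max degree of parallelism": 8, "cost threshold for parallelism": 50}
dev  = {"max degree of parallelism": 0, "cost threshold for parallelism": 5}

for setting, (p, d) in diff_settings(prod, dev).items():
    print(f"{setting}: prod={p} dev={d}")
```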
General Session
Intelligent Data Experiences with Fabric Data Agents and Foundry
Alpa Buddhabhatti
Generative AI is changing how users interact with analytics data. New capabilities such as Fabric Data Agents, Copilot, and Microsoft Foundry are helping organizations move beyond traditional dashboards to more natural and intelligent ways of working with data. In this session, you’ll learn how to use features in Microsoft Fabric including Fabric Data Agents, Copilot, and OneLake to enable AI-powered analytics experiences. Through practical scenarios and a live demo, you’ll see how users can interact conversationally with trusted datasets and explore insights more efficiently. We’ll also discuss practical ways to introduce these capabilities into your existing analytics environment while maintaining governance and control. By the end of the session, you’ll understand how Fabric Data Agents and Microsoft Foundry help teams deliver modern, AI-powered analytics experiences across their organization.
General Session
Intelligent Data Experiences with Fabric Data Agents and Foundry
Alpa Buddhabhatti
Freelance Consultant
Session Goals: • Understand how Fabric Data Agents enable conversational interaction with trusted datasets in Microsoft Fabric. • Learn how Copilot and Microsoft Foundry support AI-powered analytics experiences. • Explore how to introduce these capabilities while maintaining governance and control.
Session Prerequisites: None.
Track: Analytics
Level: Level 100
Theme: AI + Data
General Session
Migration at Scale: Accelerating MSSQL to PostgreSQL Using Automation & AI
InduTeja Aligeti
Amazon
Session Goals: • Understand the challenges of migrating large workloads from SQL Server to PostgreSQL. • Learn how automation accelerates migration. • Discover how AI-assisted techniques identify risks, recommend fixes, and improve performance, enabling faster migrations and a repeatable framework for scalable database modernization.
Session Prerequisites: Basic understanding of SQL Server or PostgreSQL administration is helpful. Familiarity with database migration concepts.
Track: Database Management
Level: Level 300
Theme: Cloud + Multi-DB
Organizations are rapidly moving from SQL Server to PostgreSQL to reduce licensing costs and embrace cloud-native architectures. However, large-scale migrations introduce challenges such as schema conversion, code compatibility, performance tuning, and cutover risk. This session provides a practical roadmap for accelerating SQL Server to PostgreSQL migrations using automation and AI. We will demonstrate how AI can assist in code conversion, identify migration risks, generate compatibility fixes, and optimize performance post-migration. You will learn how to automate assessment, schema conversion, testing, and deployment pipelines to reduce manual effort and improve migration reliability. Through real-world examples and demos, attendees will leave with a repeatable migration framework designed for enterprise-scale workloads.
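As a taste of the kind of automated assessment the abstract mentions, here is a minimal, hypothetical sketch in Python. The rules and rewrite advice below are illustrative examples only, not the session's actual tooling and not a complete SQL Server-to-PostgreSQL compatibility list:

```python
import re

# Hypothetical rule set: a few SQL Server constructs that typically need
# rewriting for PostgreSQL. Patterns and advice are illustrative, not an
# exhaustive or official compatibility list.
RULES = [
    (re.compile(r"\bGETDATE\s*\(\s*\)", re.I), "GETDATE() -> use now() / CURRENT_TIMESTAMP"),
    (re.compile(r"\bISNULL\s*\(", re.I), "ISNULL() -> use COALESCE()"),
    (re.compile(r"\bTOP\s+\d+", re.I), "TOP n -> use LIMIT n"),
    (re.compile(r"\[(\w+)\]"), '[bracketed] identifiers -> use "double quotes"'),
]

def assess(sql_text: str) -> list[str]:
    """Return human-readable findings for one T-SQL snippet."""
    return [advice for pattern, advice in RULES if pattern.search(sql_text)]

sample = "SELECT TOP 10 [name], ISNULL(email, '') FROM dbo.users WHERE created < GETDATE()"
for finding in assess(sample):
    print(finding)
```

Running rules like these across an entire codebase is what turns a manual review into a repeatable assessment step; a real pipeline would use a SQL parser rather than regular expressions.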
General Session
Don't Let Your Permissions be Hijacked!
Erland Sommarskog
Erland Sommarskog SQL-Konsult AB
Session Goals: Attendees will learn that co-workers with malicious intent (or who just want to work around corporate red tape) can lure you to execute code you should not execute, but which is hidden somewhere in the database.
Session Prerequisites: This is a level 300 session. That is, you should have experience of working with database security as DBA or similar role.
Track: Database Management
Level: Level 300
Theme: Security
This session is about a security threat you may or may not have considered. You are sysadmin on a production server, and an attacker could lure you into unknowingly running malicious code that (ab)uses your sysadmin permissions, performing operations advantageous to the attacker: data theft, data manipulation, installation of backdoor logins, and so on. The attacker could be someone in a database who has the power to install code, for instance a member of the db_owner or db_ddladmin roles. But it could also be a developer with access to the deployment pipeline. Sysadmin is not the only role threatened by attacks like this; I will also discuss how a user in the db_owner role could be attacked by developers with fewer permissions. In this session, you will see examples of how such attacks can be conducted, and how you can defend yourself against them, which turns out to be relatively easy once you are aware of the danger.
Pre-Conference
Building an AI-Ready Analytics Platform with Microsoft Fabric
Alpa Buddhabhatti
Freelance Consultant
Session Goals: • Design an end-to-end analytics platform using Microsoft Fabric with Medallion architecture. • Implement metadata-driven pipelines and CI/CD for scalable deployments. • Prepare trusted datasets for AI scenarios and consumption through Microsoft Foundry using Copilot and Data Agents.
Session Prerequisites: Attendees should have a basic understanding of data engineering or analytics concepts such as ingestion, transformation, and SQL. Prior Microsoft Fabric experience is not required. An Azure or Fabric subscription is helpful for demos.
Track: Analytics
Level: Level 200
Theme: AI + Data
Many organizations want to deliver AI-driven insights—but struggle with fragmented pipelines, inconsistent governance, and data platforms that aren’t designed for modern analytics at scale. This hands-on full-day workshop shows how to design and implement a practical, real-world analytics platform using Microsoft Fabric that prepares your data for trusted reporting, automation, and AI scenarios. Working through guided exercises and architecture-driven examples, attendees will explore the Fabric environment, set up repository-integrated development workflows, and implement scalable ingestion and transformation patterns using Data Factory, Notebooks, SQL, etc. You’ll learn how to apply Medallion architecture and metadata-driven design to build structured Lakehouse solutions that support reliable analytics and enterprise-ready data engineering. The session also demonstrates how CI/CD practices improve deployment confidence and how trusted datasets can be prepared for intelligent consumption through Microsoft Foundry. Participants will explore how Data Agents and Copilot can accelerate development and enable natural interaction with enterprise data. By the end of the workshop, you’ll leave with reusable architecture patterns, practical implementation techniques, and a clear blueprint to build secure, scalable, AI-ready analytics platforms you can apply immediately in your own environment.
Keynote
Redgate Keynote: The Data Professional of the Future
Kellyn Pot'Vin-Gorman
Redgate
Steve Jones
Redgate
Laura Copeland
Redgate
Track: Professional Development
Level: Level 100
The data professional of 2026 might be a career database expert…or simply the closest thing your organization has to a data professional. The database landscape has never been more complex, and the modern data professional is tasked with balancing shifting platform trends and emerging technology like AI with the ever-present need to keep databases and the data they contain secure – in an era when organizational pressure to deliver value from data is stronger and more persistent than it’s ever been. In this session you’ll learn more about the pressures and challenges faced by the data professional of today, as well as trusted advice on how to navigate today’s and tomorrow’s database landscape, no matter where you are on your professional journey.
General Session
Forget Visual Query Plans, Parse the XML
Richard Douglas
Redgate
Session Goals: • Identify what SSMS hides in the graphical plan and where to find it in the XML • Diagnose parallel skew, memory grant issues, index column gaps, and more from the XML • Leave with Plan Investigator — a free public beta that highlights these issues instantly
Session Prerequisites: Working knowledge of SQL Server execution plans. You should be able to read a basic graphical plan in SSMS. Familiarity with index seeks, sorts, & parallelism is helpful but not required. Suitable for DBAs & developers who tune queries regularly.
Track: Development
Level: Level 300
Theme: AI + Data
The SSMS graphical execution plan is a simplification. It hides thread skew in parallel operators, buries memory grant wait times, shows only the first missing index when there are several, silently misorders index column recommendations, and renders scalar UDFs as zero-cost operations. The XML underneath shows it all — and most people never read it. In this session, Richard demonstrates what the graphical plan isn't showing you, using a live demo of Plan Investigator — his free, public beta built specifically to surface these hidden details. We'll parse real execution plans and expose what SSMS misses: per-thread row distribution in parallel plans, memory queuing time before a query could even start executing, composite index column gaps, and much, much more! Every finding is traced back to the specific XML attribute that contains it, so attendees will leave understanding not just what the tool found, but how to find it themselves using nothing but native SQL Server capabilities.
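To illustrate the session's premise that the interesting counters live in the XML, here is a small Python sketch that pulls per-thread row counts out of a showplan-shaped fragment. The fragment itself is synthetic, though the element and attribute names (RelOp, RunTimeInformation, RunTimeCountersPerThread, Thread, ActualRows) follow SQL Server's showplan schema:

```python
import xml.etree.ElementTree as ET

# Synthetic fragment in the shape of SQL Server showplan XML; a real plan
# comes from SET STATISTICS XML ON or sys.dm_exec_query_plan.
NS = {"sp": "http://schemas.microsoft.com/sqlserver/2004/07/showplan"}
plan = """
<RelOp xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"
       PhysicalOp="Index Scan" NodeId="3">
  <RunTimeInformation>
    <RunTimeCountersPerThread Thread="1" ActualRows="980000"/>
    <RunTimeCountersPerThread Thread="2" ActualRows="12000"/>
    <RunTimeCountersPerThread Thread="3" ActualRows="8000"/>
  </RunTimeInformation>
</RelOp>
"""

root = ET.fromstring(plan)
rows = {
    c.get("Thread"): int(c.get("ActualRows"))
    for c in root.iterfind(".//sp:RunTimeCountersPerThread", NS)
}
# A simple skew measure: the busiest thread's share of all rows.
skew = max(rows.values()) / sum(rows.values())
print(rows)           # per-thread row counts the graphical plan sums away
print(f"{skew:.0%}")  # here one thread handled 98% of the rows
```

The graphical plan would show only the total of one million rows; the per-thread breakdown, and therefore the skew, is visible only in the XML.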
General Session
Reinventing Yourself in Tech: Skills, Transitions & Mindset for the AI Era
Shubhangi Goyal
Admiral Group Plc
Session Goals: • Know which skills to prioritize and how to shift from traditional roles into modern, AI-era data roles. • Understand the skills required to step into new opportunities confidently. • Learn how to demonstrate expertise, avoid common transition traps, and position yourself for a future-proof tech career.
Session Prerequisites: This session does not require any prior knowledge.
Track: Professional Development
Level: Level 100
Theme: AI + Data
With AI reshaping every role, reinvention is no longer optional; it’s a core career skill. This session dives into how to evolve your career in a fast-changing industry: shifting from traditional BI to modern data roles, adding AI literacy, continuous learning, and building credibility. We’ll discuss the mindset shifts required for reinvention, the skills to prioritize, and the common traps people fall into when transitioning. Through real stories and actionable guidance, this talk empowers attendees to embrace change, step into new opportunities, and build a future-proof tech career.
General Session
Data and AI Governance in Microsoft Fabric under the EU AI Act
Dr. Andre Ebert
inovex GmbH
Session Goals: • Understand AI and Data Governance and their relationship to the EU AI Act. • Learn how to ensure technical compliance with Microsoft Fabric.
Session Prerequisites: Familiarity with challenges of AI & Data governance as well as with the EU AI Act is useful but not a necessity.
Track: Architecture
Level: Level 200
Theme: Security
Artificial Intelligence enables companies to unlock unprecedented potential and accelerate business processes exponentially. However, we simultaneously face a dual challenge: How can we scale our AI innovations and remain globally competitive while complying with strict new regulations? The trustworthiness and security of powerful AI-driven products rely heavily on the seamless interplay of Data Governance and AI Governance. Furthermore, the EU AI Act now provides a legally binding framework for these standards. This raises the question: How can we sustainably implement these concepts in our company and actively profit from them? In the first part of this session, we will address these exact questions. We aim to establish a shared understanding of AI Governance, Data Governance, and their complex relationship with the EU AI Act within the context of modern Cloud Data Platforms. Following this, we will provide a hands-on demonstration using Microsoft Fabric combined with Microsoft Purview to showcase how the technical requirements of the EU AI Act can already be fully addressed today. Finally, we will highlight that technology alone cannot mitigate every risk. Crucial aspects, such as the context-specific classification of ethical risks or the Fundamental Rights Impact Assessment (FRIA), fundamentally rely on organizational maturity and human judgment.
General Session
Vector Search in SQL Server 2025 – Basics and Beyond
Ben Weissman
Solisyon
Session Goals: • Understand what vector embeddings are. • Understand how to create and manage vector indexes. • Understand how to build a semantic search layer in SQL.
Session Prerequisites: Good understanding of T-SQL and tables in SQL Server
Track: Database Management
Level: Level 200
Theme: AI + Data
Vector search is one of the most exciting additions in SQL Server 2025, bringing powerful AI capabilities directly into the database engine. In this session, we will explore how the new vector data type enables you to store and query embeddings, allowing semantic search scenarios that go far beyond traditional keyword-based approaches. You will learn how embeddings represent meaning, how they can be generated, and how SQL Server makes it possible to work with them natively. We will start with the core concepts, building a solid understanding of vectors, similarity metrics, and how semantic search differs from classical techniques. From there, we will dive into practical implementation details, including how to design tables for vector data, execute similarity searches, and integrate these features into existing workloads. A major focus of this session will be indexing and performance. We will look at how vector indexes work in SQL Server 2025, how to create and maintain them, and what kind of performance improvements you can expect. Through live demos, you will see how different approaches impact query speed and accuracy. By the end of this session, you will have a clear understanding of how to bring AI-driven search capabilities into your own solutions using SQL Server 2025.
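For readers new to the topic, the similarity metric at the heart of vector search can be sketched in a few lines of plain Python. The tiny three-dimensional "embeddings" below are made up for illustration; real embeddings are model-generated and much higher-dimensional, and this is a conceptual sketch rather than SQL Server's implementation:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction
    (similar meaning), values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" (made up for illustration).
query = [0.9, 0.1, 0.0]   # e.g. "inexpensive laptop"
doc_a = [0.8, 0.2, 0.1]   # e.g. "cheap notebook computer" -> semantically close
doc_b = [0.0, 0.1, 0.9]   # e.g. "garden furniture"        -> unrelated

print(round(cosine_similarity(query, doc_a), 3))  # high score, similar meaning
print(round(cosine_similarity(query, doc_b), 3))  # low score, different meaning
```

Note that none of the words overlap between "inexpensive laptop" and "cheap notebook computer"; a keyword search would miss the match, which is exactly the gap semantic search closes.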
General Session
From Disarray to Hooray: SQL Server Functions That Simplify T-SQL
Stephanie Reis
AAA Washington
Peter Kruis
Kruis Database Consultancy
Session Goals: •Discover lesser-known SQL Server functions you may not have encountered before. • Identify built-in SQL Server functions that replace common overengineered T-SQL patterns. • Apply practical function patterns to clean up data and reporting logic immediately.
Session Prerequisites: A working knowledge of basic SQL (SELECT, JOINs, WHERE, GROUP BY). Familiarity with T-SQL syntax in SQL Server.
Track: Development
Level: Level 200
Theme: Cloud + Multi-DB
Ever run into one of those SQL Server moments where you think: "There has to be an easier way to do this"? The kind where you've got a chain of string functions, a CASE expression that reads like a crime scene, a comment that basically says "don't touch this", and code that has been copied since 2005. In a lot of cases, we don't need those messy custom-made functions anymore, as SQL Server already has something built in that would make our lives easier. In this session, Stephanie and Peter take you through the lesser-known dungeons of SQL Server functions. Not the ones everyone memorizes, but the ones you only discover after you've done it the hard way and later realize there was a cleaner option all along. We'll show how these functions can simplify data cleanup, make reporting logic easier to read, and replace bits of custom code you've been dragging around for years. To keep it fun, we'll do this in acts. Each act is a short story: a problem we keep seeing, the typical over-engineered solution, and then the "wait, that exists?" moment. Expect quick demos, a few SQL memes that will feel painfully familiar, and practical takeaways you can use the next time you're stuck in query spaghetti. If you write T-SQL and you like solutions that are simple, predictable, and easy to maintain, you'll leave with a list of functions and patterns you'll actually use.
