Previous Speakers
Aaron West
Sales Engineer, SIOS Technology Corp
Aaron serves as one of the Sales Engineers at SIOS Technology Corp. With more than 20 years in the IT industry and more than 10 of them working specifically in high availability and load balancing, he brings a wealth of knowledge to enterprises aiming to establish unbreakable systems. His consultancy and implementation work has spanned diverse industry verticals, including healthcare, finance, and government, and he has represented specialised vendors proficient in high availability, disaster recovery, and fault-tolerant computing.
-
Building a Faster, Cheaper, and More Resilient SQL Server in the Cloud
Running SQL Server in the cloud brings both opportunities and challenges. This session explores how to build high-performing, highly available SQL Server environments optimized for cloud platforms. We will cover proven strategies for performance tuning, cost control, and licensing optimization across both Microsoft Windows and Linux. Attendees will gain a clear understanding of how licensing impacts costs on each platform and how choosing Linux in certain scenarios can deliver meaningful savings. We will introduce SIOS Technology as a powerful solution for achieving true OS-agnostic high availability in the cloud. You will see how SIOS enables robust failover protection for both Windows and Linux while helping you reduce licensing expenses by using SQL Server Standard Edition instead of the Enterprise Edition often required by native solutions like Always On Availability Groups. We will also address the often-overlooked challenge of patching SQL Server and its underlying operating systems in the cloud. Learn best practices and automation techniques that strengthen security, ensure stability, and streamline administration.
Abhi Jayanty
Data Engineer, Quorum
I’m a Data Engineer and data analytics enthusiast experienced in working with the Azure data platform and Microsoft Fabric. With multiple Azure and Fabric certifications, I’m passionate about sharing knowledge and love presenting on topics like Kusto, Fabric and how they work with the wider Azure ecosystem. I'm also a part of Redgate's Community Ambassadors programme, helping Redgate to support data community events around the world through speaking and content creation. Outside of data, I’m an avid fan of football and Formula 1, and I enjoy cooking, whisky, and making plans to travel the world.
-
Getting Started with Real-Time Intelligence in Microsoft Fabric
Real-Time Intelligence in Fabric is designed to enable organisations to bring their streaming, high-granularity, time-sensitive event data into Fabric and build various analytical, visual and action-oriented data applications and experiences. In this session, you will learn about the various components of Fabric Real-Time Intelligence through a full end-to-end solution which will show you how to ingest, analyze and visualize real-time streaming data in Fabric. We will discuss the various streaming sources (and clouds) that you can connect to with the Real-Time Hub, and explore best practices when it comes to cleaning and preparing this data for superior query times. You’ll be introduced to the Kusto Query Language, with live demonstrations of queries on sample real-time data, covering syntax, built-in functions, and operators, and all the key features of KQL that make it a game-changer for analysis of data in motion. We’ll also explore Real-Time Dashboards by building powerful live visuals, as well as covering how to connect Power BI to our streaming data in Fabric and best optimise our reports. Lastly, we’ll demonstrate how you can take your visuals from your reports and dashboards to the next level by building alerts and rules with Activator to efficiently and proactively manage your data and Fabric estate. You will leave this session equipped with practical examples and best practices for real-time data analysis to deliver instant value on your streaming data.
-
Change Data Capture Made Easy With Microsoft Fabric
-
The Data Community Needs YOU: Why Your Voice Matters
-
Build A Fabric Real-time Intelligence & Power BI Solution in One Day
-
Build A Fabric Real-time Intelligence Solution in One Day
-
Analyze Your Real-Time Data With Fabric Eventhouse & KQL
Real-time Intelligence in Microsoft Fabric empowers data professionals to seamlessly process and analyze highly granular, event-driven data. At its core lies the Kusto engine and the Kusto Query Language (KQL), delivering powerful capabilities for real-time data analysis. This session explores how you can leverage KQL to build efficient, event-driven solutions in Fabric with real-world examples. In this session, we will discuss key features of KQL that make it a game-changer for interactive analysis of data in motion. We'll cover basic syntax, before exploring various built-in functions and operators of KQL demonstrated through live queries on sample real-time data. You may think you have to learn a brand new language, but you’ll quickly realise that KQL isn’t as alien as it seems, and that you can get going pretty quickly with filtering, aggregating and joins as you would with T-SQL. We will also discuss the storage architecture of Eventhouse and why it is the optimal store for your event data in Fabric, as well as various use cases for workspace monitoring and log retention in Fabric. Once you have an understanding of how to store and analyze your event-driven data in Fabric with KQL and Eventhouse, you’ll be ready to deliver powerful real-time visuals and actions. Come along if you are a developer, data engineer, or analyst seeking practical examples and best practices for crafting KQL queries to drive real-time data analysis and actions. You will leave this session equipped with the skills to unlock the full potential of real-time data in Fabric.
-
NoSQL Data Stores – What, Why, How
Relational SQL databases remain the winner when it comes to optimal and structured data storage and management. However, as data storage needs become more complex, especially with the rise of AI, NoSQL databases are making waves. NoSQL (Not-Only-SQL) is a powerful alternative, excelling at handling large volumes of unstructured, non-relational data and offering flexible data structures for ever-evolving schemas. In this session, we will explore the basics and core concepts of different types of NoSQL databases, such as document stores, key-value stores, and graph databases. We’ll discuss the ideal use cases and best practices for each store, their unique strengths, and how they compare to traditional relational databases in various scenarios. We will also cover practical examples, such as using graph databases for more efficient retrieval in AI applications, or leveraging document stores for storing and querying raw logs. By the end of the session, attendees will be equipped with the knowledge required to implement NoSQL data storage in Azure, as well as being able to make informed decisions about when and where to implement NoSQL databases. While NoSQL is not always the answer, choosing the right data store will always ensure scalability and performance for your data solutions.
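To make the document-store idea concrete, here is a minimal sketch in plain Python. The log records and the `find` helper are invented for illustration (they mimic the equality-filter style of document databases rather than any particular product's API), showing how schema-flexible documents such as raw logs can be stored and queried:

```python
# A toy "document store" in plain Python: each document is a JSON-like
# dict that can carry different fields, with no schema change required.
import json

log_store = [
    {"_id": 1, "level": "ERROR", "service": "auth", "msg": "token expired"},
    {"_id": 2, "level": "INFO", "service": "auth", "msg": "login ok",
     "user": "alice"},  # extra field appears only on this document
    {"_id": 3, "level": "ERROR", "service": "billing", "msg": "card declined"},
]

def find(store, **criteria):
    """Equality filter over documents, in the style of a document DB query."""
    return [doc for doc in store
            if all(doc.get(k) == v for k, v in criteria.items())]

errors = find(log_store, level="ERROR")
print(json.dumps(errors[0]))
```

The point of the sketch is the flexibility: document 2 gained a `user` field without any migration, and queries simply filter on whatever keys are present.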
-
Using Log Analytics to Efficiently Monitor Your Azure Environment
-
Change Data Capture Made Easy With Microsoft Fabric
-
Kusto in Action: Powering Real-Time Intelligence in Fabric
-
The Data Community Needs YOU: Why Your Voice Matters
-
Community Conversation: Beyond the Data – Building Meaningful Connections in the Global Community
The global data community is more than just a network, it’s a vibrant space to learn, share, and grow together. Our guests will share how they make the most of this amazing community by building genuine relationships, attending events, and embracing the power of asking for help. Whether you’re looking to expand your knowledge, connect with peers, or simply find support when you need it, this discussion will show you how to turn community engagement into a rewarding experience.
Abhishek Shukla
Sr. Technical Account Manager, Amazon
Technology enthusiast and cloud operations specialist with deep expertise in data analytics and object storage solutions. Currently driving innovation at Amazon Web Services, where I leverage cutting-edge cloud technologies to solve complex data challenges and optimize storage architectures. Passionate about transforming raw data into actionable insights and helping organizations scale their cloud infrastructure efficiently. Committed to advancing the field of cloud computing through hands-on experience with enterprise-grade AWS services and data-driven solutions.
-
Oracle to Amazon Aurora PostgreSQL Migration: Challenges, Strategies
Organizations face complex challenges when migrating from Oracle to Amazon Aurora/RDS PostgreSQL. In this presentation, we'll explore key migration hurdles, including SQL syntax variations and proprietary Oracle features. A significant focus will be on developing the right target architecture. We'll discuss target platform strategies that satisfy requirements and address identified pain points, along with proper capacity planning and performance benchmarking. While automated tools like AWS DMS and SCT facilitate aspects of the migration, we'll highlight areas requiring manual intervention and strategic planning. Post-migration challenges will be discussed, including adapting to PostgreSQL's MVCC architecture, vacuuming processes, query optimization techniques, statistics management, and indexing strategies. We'll emphasize the importance of developing comprehensive testing suites and validation routines. We'll discuss strategies for ensuring data integrity, functional equivalence, and performance parity between the source Oracle system and the target PostgreSQL environment. We'll explore the operational shift required when moving to a managed database service, including adapting to the shared responsibility model. Real-world migration examples and lessons learned will be shared, highlighting challenges and successful strategies implemented by our customers. We'll also discuss leveraging AI/ML and generative AI to streamline the migration process and help solve migration issues.
Adam Jorgensen
CEO & Founder, Everest Partners
Adam knows what it means to lead through transformation—because he’s lived it. A nationally recognized data strategist, growth advisor, and executive coach, Adam has built and exited companies worth over $50 million, led high-performance teams at scale, and helped private equity–backed firms unlock massive growth. He has coached leaders at Microsoft, Amazon, Intuit, the NFL, and on Wall Street, authored over a dozen bestselling books on data and analytics, and served as a Microsoft MVP, Regional Director, and PASS community leader. But Adam’s impact goes far beyond boardrooms and balance sheets. After his own personal transformation, Adam has become a voice for what’s possible—professionally and personally—when you stop surviving and start leading with clarity, courage, and conviction. Adam brings unmatched energy, hard-won wisdom, and deep empathy to every stage he steps on. His talks blend tactical insight with human storytelling, earning standing ovations and sparking real, Monday-ready transformation. He’s here to equip you to lead differently in a faster world.
-
Community Keynote: The Data Leader's Playbook for a Faster World
The world of data is moving faster than ever, but your inbox, your dashboards, and your burnout don’t care. You weren’t trained for this pace. You were trained to be brilliant. But in today’s AI-fueled, always-on environment, brilliance isn’t enough. To truly lead, you need to rise above the noise and learn how to shape the signal others follow. In this fast-paced and deeply practical closing keynote, longtime data leader and growth strategist Adam Jorgensen shares five powerful leadership moves designed for today’s reality. These aren’t buzzwords or high-level theories—they’re habits, mindset shifts, and human skills that separate data professionals who get overlooked from those who get trusted, promoted, and heard. You’ll learn how to move from solo contributor to strategic partner… from “being right” to being trusted… from surviving speed to leading through it. You’ll leave inspired—but more importantly, you’ll leave equipped. This is your new playbook. The next version of your leadership starts here.
-
PASSport to Networking
Kick off your PASS Summit experience with an energizing session designed to spark real, lasting connections. Guided by Adam Jorgensen, this interactive networking event invites you to meet fellow attendees in a relaxed setting, where conversation prompts lead to meaningful exchanges, new friendships, and maybe even your next big opportunity. Come curious, leave connected.
Adam Machanic
Head of Data, …
Adam Machanic is a technology leader with over 20 years of experience in data architecture, data systems design, and data-focused software engineering. An established database industry expert, Adam has a keen interest in scalable and high performance SQL implementations — both proprietary and open-source. He is the original author of sp_whoisactive, an award-winning monitoring stored procedure used by tens of thousands of SQL Server DBAs.
-
Query Performance Rewrites: What Works, What Doesn't, and Why
For many database professionals, crafting an optimal query is half guesswork and half black art. If you switch the order of the WHERE predicates while burning sandalwood incense, will the query suddenly move a bit quicker? Or maybe you need to use a CTE—that will help, right? While speculative methods like these can eventually yield success, it doesn't come without excessive hand wringing and time wasting. But it doesn't have to be this way! In this session we'll start by revisiting the foundations. You’ll learn about the key logical guarantees on which query optimization is based, and why a rewrite that doesn't bend a guarantee is almost never going to help. From there we'll investigate various query patterns and take a hard look at some commonly suggested tuning advice. Are subqueries a problem? Do you need to worry about join order in your query? What about the oft-maligned IN predicate? All of this will be illustrated with various examples in both SQL Server and PostgreSQL. By the end of the session, you'll understand which rewrites might work, in which scenarios, and—most importantly—which you can totally ignore. And as a bonus, you might even be able to retire your incense collection.
Akshata Revankar
Sr Data & Applied Scientist, Microsoft
20+ years of experience in the data engineering and data reporting space. I have worked with Oracle Database, SQL Server, SSIS, Informatica PowerCenter, Hadoop systems, Qlik, and Power BI. I enjoy being in the data space and learning new things.
-
Smart AI : Unlocking LLMs with RAG and Microsoft Fabric
-
Power BI CI/CD Simplified: GitHub & Azure DevOps Automation
This session will provide a comprehensive exploration of leveraging Azure DevOps/GitHub repositories and pipelines to implement Continuous Integration and Continuous Deployment (CI/CD) for Power BI assets. Attendees will gain insights into the end-to-end lifecycle of versioning Power BI items and seamlessly deploying them to a production environment through a fully automated process. Key topics covered: understanding the PBIP format for Power BI files; integrating Fabric workspaces with Azure DevOps/GitHub repositories for efficient version control; using deployment pipelines to move Fabric workspace assets seamlessly across environments; securing deployments with Service Principals; and setting up Azure DevOps pipelines that leverage the Deployment Pipeline APIs and GitHub Actions to automate moving Power BI assets from development to production. Light on slides, heavy on demo.
-
Optimizing Power BI Monitoring with Log Analytics
-
Decoding JSON for Data Insights with Power BI
-
Unlocking the Power of Real-Time Data: A Kusto Journey
-
Fortify your Insights: Row-Level Security in Power BI
Alain Hunkins
CEO & Founder, Hunkins Leadership Group
Alain Hunkins is a globally recognized leadership strategist, bestselling author of Cracking the Leadership Code: Three Secrets to Building Strong Leaders, and a regular Forbes columnist on leadership strategy. With over 25 years of experience, Alain empowers professionals and organizations to cultivate stronger leaders, sharper teams, and smarter workplaces. He has delivered impactful keynotes and leadership development workshops for a diverse range of clients, including Fortune 500 companies like Pfizer, Microsoft, GE, Lockheed Martin, and Walmart, impacting over 3,000 groups across 30 countries. Alain's approach is distinctly practical and research-backed, helping organizations tackle today’s most pressing people challenges. Whether it's navigating the complexities of leading hybrid teams, fostering a robust culture of accountability, or closing the critical gap between strategic vision and flawless execution, Alain provides tangible solutions. As both a dynamic keynote speaker and a hands-on facilitator, he brings energy, insight, and humor to every session. He consistently delivers actionable tools that enable leaders to drive greater alignment, boost engagement, and build thriving cultures where both people and performance excel.
-
Cracking the Communication Code: How Teams can Collaborate & Work Smarter
Communication is the most undertrained skill in tech—and yet, it’s the one that determines whether your ideas get heard, your team stays aligned, and your influence grows. In this dynamic and experiential workshop, attendees will work through the most common communication challenges facing tech teams—from difficult conversations and feedback delivery to stakeholder buy-in and executive updates. Participants will leave with proven communication tools they can apply immediately, plus scripts, structures, and practice opportunities that bring the tools to life.
-
Crack the Leadership Code: Lead with Clarity, Connection, and Confidence
At the heart of every successful organization is a leader who knows how to inspire trust, drive clarity, and foster collaboration. But in today’s rapidly changing, post-pandemic workplace, the old leadership playbook no longer works—and many leaders are left struggling to keep up. This highly interactive workshop offers a practical, research-backed framework to help you become the kind of leader people want to follow. Built around three essential pillars—Connection, Communication, and Collaboration—this session will help you shorten your leadership learning curve and thrive in complexity. You’ll explore the hidden reasons why leading today is more difficult than ever and learn how to overcome the most common challenges leaders face: unclear expectations, poor team dynamics, and low engagement. Whether you’re managing a team, leading cross-functional projects, or aspiring to step into leadership, you’ll leave with tools you can apply immediately. Through hands-on exercises and guided reflection, you’ll gain new insight into your leadership strengths and growth areas—and build a roadmap to lead with more confidence, empathy, and impact.
Alex Chianuri
CEO, Plexifact
Alex is a technology and data professional with 25+ years of experience in financial services and fintech. He became one of the youngest VPs at JPMorgan and served as lead data warehouse architect at Bridgewater Associates and Head of Data Management at The Rohatyn Group. As Founder and CEO of Plexifact since 2012, Alex has built data platforms for dozens of clients including hedge funds, private equity firms, and fintech companies managing assets from startups to $130+ billion. His work spans operational and analytical data warehouses serving risk management, data science, and portfolio management teams. Alex and his team built PLEXI, a low-code data engineering platform that automates creation and management of data infrastructure including physical models, metadata layers, and quality monitoring pipelines. He holds a B.A. in Computer Science from Queens College, CUNY.
-
Converged Data Management: A Practical Approach to Enterprise Data Management
In today's AI-driven landscape, organizations face a critical challenge that precedes any successful analytics initiative: robust data engineering. While many enterprises struggle with fragmented, costly approaches to data management, a more systematic methodology is emerging. This talk introduces the Data Supply Chain framework – a standardized, modular approach that transforms how organizations acquire, process, and deliver data assets. By decomposing complex data engineering workflows into consistent, repeatable patterns, this methodology significantly reduces both implementation complexity and long-term maintenance costs. Drawing from real-world implementations, we'll explore:
• Core architectural components of the Data Supply Chain
• Standardized patterns for data ingestion, transformation, and delivery
• Technology selection criteria for each pipeline stage
• Metrics demonstrating reduced implementation time and operational costs
For data leaders seeking to scale analytics capabilities while controlling technical debt, this presentation offers a blueprint for building data infrastructure that enables rather than hinders innovation. Attendees will leave with actionable strategies to implement these patterns within their existing data ecosystems, regardless of industry or organizational maturity.
Alexander Arvidsson
Chief Technical Officer, Analytics Masterminds
Alexander is the chief technology officer at Analytics Masterminds where he spends his days helping clients of all shapes and sizes to take better care of – and make more sense of – their data. He has spent the last 25 years poking around with data, databases and related infrastructure services such as storage, networking and virtualization, occasionally emerging from the technical darkness to attend a Star Wars convention somewhere in the world. He is a long time Data Platform MVP, frequent international speaker, podcaster, Pluralsight author, blogger and a Microsoft Certified Trainer, focusing on the Microsoft data platform stack.
-
Fabric FinOps – Cost Optimization for Microsoft Fabric
A software-as-a-service offering such as Microsoft Fabric offers a fixed cost per month, making budgeting much easier. That should be the end of the conversation, but it's actually only the beginning – how do you make the most of the capacity units you get with the fixed price? How can you ensure that you are getting the most out of your monthly investment? The different Fabric workloads all have different ways of using capacity units – some cost CUs per minute of runtime, some cost per query, some are straightforward to calculate, some are not. Take Dataflow Gen2 as an example – by far the most expensive way of doing data integration compared to pipelines or copy jobs, but what does "expensive" mean in the grand scheme of things? Depending on your organization's previous experience and skill level, dataflows might instead end up being the cheapest option! There is much more to optimizing the bang for your buck with Fabric than only looking at the relative costs of capacity units, as it is easy to get lost in the technology instead of looking at the big picture. This session will look at capacity units as a concept, compare how the different workloads drive capacity unit usage, explore ways to optimize workload usage to decrease compute costs, and discuss other factors that impact the overall cost conversation around Fabric.
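The capacity-unit comparison above can be sketched as simple arithmetic. The CU rates and runtimes below are invented purely for illustration – they are not Microsoft's published meter rates – but they show the shape of the calculation the session walks through:

```python
# Hypothetical comparison of Fabric capacity-unit (CU) spend for two
# data-integration approaches. All rates and durations are invented
# for illustration, not actual Fabric meter rates.

def cu_consumed(cu_per_second: float, runtime_seconds: float) -> float:
    """Total CU-seconds consumed by one run of a workload."""
    return cu_per_second * runtime_seconds

# Invented numbers: a Dataflow Gen2 refresh vs. a pipeline copy activity,
# both running for the same ten minutes.
dataflow_cus = cu_consumed(cu_per_second=12.0, runtime_seconds=600)
pipeline_cus = cu_consumed(cu_per_second=1.5, runtime_seconds=600)

# Relative CU consumption is only half the story: build and
# maintenance effort can flip which option is "cheapest" overall.
print(dataflow_cus / pipeline_cus)
```

Under these made-up rates the dataflow burns 8x the capacity units, which illustrates the session's caveat: the raw CU ratio says nothing about the people-cost of building and maintaining each option.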
-
Fabric in Perspective – Interfacing Microsoft Fabric With the World Beyond
-
From SQL to Spark and Back Again – Spark and Python for SQL Users
-
Migrating to Microsoft Fabric – Notes From the Field
-
Podcasting Explained – Everything You Need for Starting Your Own Podcast
Allison Harris
Data Engineer, NFP
I am a new Data Engineer at NFP
-
A Data Engineer Teaches A DBA Python: Mother/Daughter Duo Collaborates
When a DBA needed to learn basic Python when moving to the cloud, she turned to a data engineer for help. DBAs and data engineers are often at odds—not this mother/daughter duo. Join them as they share insights and experiences of working together to expand their skill sets. You will leave with an understanding of why a DBA would need to learn a new language, complete with practical examples focusing on the PySpark library, a Python API for Apache Spark, and the popular Python libraries pandas and NumPy.
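As a taste of the SQL-to-Python translation this kind of session covers, here is a small hypothetical pandas sketch (the toy table and column names are invented for illustration) showing a familiar WHERE/GROUP BY query rewritten the pandas way:

```python
# SQL: SELECT region, SUM(amount) FROM orders
#      WHERE amount > 75 GROUP BY region
# rewritten with pandas, using toy data invented for the example.
import pandas as pd

orders = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "amount": [100, 250, 300, 50],
})

result = (
    orders[orders["amount"] > 75]               # WHERE amount > 75
    .groupby("region", as_index=False)["amount"]
    .sum()                                      # SUM(amount) ... GROUP BY region
)
print(result)
```

The same filter/aggregate pattern carries over almost unchanged to PySpark DataFrames, which is part of why the jump from T-SQL is smaller than it first appears.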
Alpa Buddhabhatti
Consultant, Freelance / Independent Consultant
Alpa Buddhabhatti is a Microsoft MVP, Microsoft Certified Trainer (MCT), and a Lead Data Engineer and Data Architect, passionate about building modern data platforms using Azure technologies. She leverages services like Data Factory, Azure SQL, Microsoft Fabric, Azure AI, Logic Apps, and serverless tools to deliver scalable, automated solutions. Alpa uses Generative AI and Azure OpenAI to unlock insights and drive innovation. Alpa is a frequent international speaker at events like SQLBits, Microsoft Ignite, PASS Summit, and more—known for translating complex data and AI concepts into practical value and empowering others to grow their skills.
-
Real-World End-to-End ETL with Microsoft Fabric
In this session, we'll walk through the process of designing and building a real-world end-to-end ETL pipeline using Microsoft Fabric. From data ingestion to transformation and loading into a Lakehouse, you'll learn how to orchestrate the entire flow using Fabric Data Pipelines, Lakehouse tables, and SQL-based transformations. We’ll also explore how to visualize processed data in Power BI and extend the pipeline by integrating insights into Azure AI Foundry for intelligent applications. Along the way, we’ll cover best practices for structuring ETL stages, monitoring pipeline execution, and optimizing performance. The live demo will cover:
• Microsoft Fabric
• OneLake
• Azure SQL
• Power BI
• Azure AI Foundry
You'll see how to connect to data sources, transform data at scale, visualize results, and enrich outcomes using AI — all within the Fabric ecosystem.
-
Multi-Modal Azure AI with Microsoft Fabric and Azure AI Foundry
In this session, discover how to build intelligent, multi-modal AI solutions by combining the data orchestration power of Microsoft Fabric with the model-building capabilities of Azure AI Foundry. Learn how to prepare and manage structured and unstructured data using Lakehouses, then connect that data to vision, text, and tabular AI models for rich, context-aware outcomes. We’ll walk through real-world use cases and show how Fabric pipelines feed into multi-modal AI workflows—enabling solutions like document summarization, image classification, and chat-based analytics. The live demo will cover:
• Data preparation with Microsoft Fabric and OneLake
• Connecting structured and unstructured data
• Training and deploying multi-modal models in Azure AI Foundry
• Using outputs in business applications or Power BI
Alvaro Costa-Neto
Sr. Database Specialist Solutions Architect, Amazon Web Services
Alvaro Costa-Neto is a Senior Database Specialist Solutions Architect at AWS who specializes in helping customers implement cloud-based database solutions. With over 20 years of experience in database technologies, he has extensive expertise in both Microsoft SQL Server and various open-source database engines. His role focuses on designing and architecting database solutions for AWS customers.
-
SQL Server Hosting on AWS: Options and Considerations
-
Seamless Transition: Migrating from Commercial to Open Source Databases
Migrating from legacy SQL Server databases is time-consuming and resource-intensive. Babelfish extends your Aurora PostgreSQL-Compatible Edition database with the ability to accept database connections from SQL Server clients. Join this session to learn how SQL Server applications can work directly with Aurora PostgreSQL with few to no code changes compared to a traditional migration, and without changing database drivers.
Amanda Denning
DBA, Jasper Engines and Transmissions
-
Communicating Through Leadership Friction
A spontaneous session that takes a deep dive into the real communication struggles technical experts face when engaging with non-technical stakeholders, where the goal isn't just clarity, but building trust and strategic influence. We explore common pitfalls—from jargon overload to the 'invisible cost' of unarticulated technical decisions—and share practical strategies for translation and engagement. You'll walk away with actionable insights and the confidence needed to convert technical expertise into recognized business value across the organization.
Amar Digamber Patil
Microsoft
I'm a builder at heart, an entrepreneurial engineer turned product leader with a proven track record of turning complex business challenges into scalable, high-impact technical solutions. I thrive at the intersection of engineering, product strategy, and AI, always focused on delivering real value through innovation. Throughout my career, I’ve led and contributed to high-stakes initiatives across software, AI, manufacturing systems, and cloud platforms, navigating ambiguity, driving cross-functional collaboration, and launching products that matter. I believe in empowering teams with a shared vision while creating space for experimentation, learning, and ownership.
-
SQL Database in Fabric: The Unified Database for AI Apps and Analytics
Discover how SQL in Fabric brings transactional and analytical workloads together in one cloud-native database. In this session, we’ll show how developers and data teams can simplify AI-driven application development with near real-time insights and built-in AI, seamless OneLake integration, and end-to-end analytics—all in a single, unified experience.
Amit Khandelwal
Principal Product Manager, Microsoft
I am a Principal Product Manager at Microsoft with over 15 years of experience. I have played a key role in the development of SQL Server on Linux, contributing significantly to Microsoft's cross-platform solutions. My career at Microsoft began as an intern, and I have since held various roles including Support Engineer, Technical Lead, Support Escalation Engineer, and Premier Field Engineer. In these roles, I provided critical support and resolved complex technical issues for enterprise customers using SQL Server. Currently, I oversee SQL Server on Linux and Containers. With over a decade of database experience, I have designed SQL Server-based data platforms for Tier 1 customers across diverse business segments. I am also the author of "Ultimate SQL Server and Azure SQL for Data Management and Modernization." As a frequent speaker at industry conferences, I share insights on technology trends and gather customer feedback. Notable presentations include sessions at DevDays Asia (Taipei), SQL BITS (London), PASS Data Community Summit (Seattle), Red Hat Summit, SUSECON, Internal Microsoft Events, Bangalore UG group, Data Exposed, and DataMinds.
-
Best of Both Worlds: SQL Server on Red Hat Enterprise Linux and OpenShift
Join us for an exciting, demo-filled session where we explore how customers can leverage the powerful combination of SQL Server on Red Hat Enterprise Linux and OpenShift to achieve unparalleled performance and flexibility. We will delve into the latest Ansible enhancements for deployment automation, including the new functionality for enabling AES encryption for user accounts using adutil and support for the latest mssql-tools18 package. Discover how SQL Server container images can be utilized with WSL2, and learn about the custom password policy introduced with SQL Server 2025. We'll also showcase AI-related features in SQL Server 2025 with some impressive RAG-based demos. Additionally, we'll highlight improvements in SELinux support, ensuring robust security and compliance. This session is designed to provide valuable insights and practical demonstrations, helping you make the most of SQL Server on Red Hat Enterprise Linux and OpenShift. Don't miss this opportunity to enhance your database management capabilities and stay ahead in the ever-evolving tech landscape.
-
SQL Server 2025 on Ubuntu Pro: Deployment Strategies and Features
-
How to Build a Secure & Resilient Data Estate for SQL Server-Backed AI Apps
-
How to Unify a SQL Server Availability Group Across Windows and Linux
-
How to Build Your Database as a Service with SQL Server Containers & DH2i
-
How to Migrate SQL Server Workloads to Red Hat OpenShift with DxEnterprise
-
How to Provision a SQL Server Availability Group Cluster in AKS/EKS
The path to true high availability for critical SQL Server workloads in the cloud has never been for the faint of heart. For organizations pursuing further modernization by deploying containers in the cloud, the complexity is dialed up even further. Until now… Join this presentation for a step-by-step demonstration showing you two different approaches your organization can employ to drastically simplify the deployment of secure and highly available SQL Server containers in the cloud: APPROACH 1: Use a DxEnterprise Helm chart and StatefulSets to deploy a 3-replica AG in AKS/EKS. APPROACH 2: Use DxEnterprise's SQL Server Operator to automate the deployment of a customized Availability Group (AG) containing three replicas in AKS/EKS. Both approaches to SQL Server container deployment in AKS/EKS are executable in minutes, and they integrate powerful proprietary benefits like: – SQL Server sidecar containers to avoid custom image/support headaches – Fully automatic failover for SQL Server Availability Groups in Kubernetes – Zero trust network access tunnels to securely connect any replica, anywhere A clear path has been paved to peak SQL Server scalability and cost-efficiency with containers in the cloud. Join this session to see how you can get there without sacrificing network security and high availability.
-
How to Build a Secure & Resilient Data Estate for SQL Server-Backed AI Apps
The impending release of SQL Server 2025 and its support for vector data unlocks a brand-new pathway into the ‘Age of AI’ for organizations across countless verticals. It likewise provides a robust and reliable database alternative for organizations that have already ventured into creating their own AI applications. Regardless of the chosen technology, only AI databases architected with a keen focus on scalability, security, and resilience will meet the dynamic needs of modern enterprises. Join this demo-centric presentation to be shown step-by-step how your organization can leverage Azure AI, Microsoft SQL Server 2025, and DH2i to build a comprehensive solution for deploying enterprise AI at scale. We'll show you how you can use a SQL Server Operator to automate the deployment of an Availability Group in Kubernetes, providing an optimally scalable, secure, and highly available database backbone for your AI applications. Additionally, we'll demonstrate fully automatic failover of an AI workload between Kubernetes replicas – a non-negotiable capability for achieving maximum resiliency. Attendees will leave with a full, actionable framework for building highly available, production AI apps with Azure AI, Microsoft SQL Server 2025, and DH2i.
-
How to Migrate SQL Server Workloads to Red Hat OpenShift with DxEnterprise
As organizations seek to modernize their infrastructure and improve SQL Server scalability, many are turning to containerization and orchestration platforms like Red Hat OpenShift. Migrating existing SQL Server workloads to these new environments can be complex and daunting, especially when the task at hand involves migrating cross-platform from Windows to Linux for the first time. In this step-by-step demonstration, we'll show you how you can deploy a secure, cross-platform SQL Server Availability Group (AG) that seamlessly spans from an on-premises Windows Server node to a newly created OpenShift cluster in Azure. We'll automate the deployment of this unique AG using DxEnterprise's SQL Server Operator for Kubernetes, demonstrating: – AG customization – the ability to control the number of replicas, async or sync replication, etc. – Speedy workload migration from Windows to OpenShift using the AG – Fully automatic, database-level HA for the new OpenShift workload with DxEnterprise If your organization has any SQL Server modernization ambitions at all and is eyeing OpenShift as a potential hub for virtualization and container orchestration, make this session a priority. You'll leave with an actionable understanding of an easy, secure, and highly available approach to OpenShift migration.
Amit Parikh
Field CTO – Data Solutions, Quest Software
Amit Parikh
Quest Software, Field CTO – Data Solutions
With nearly four decades in IT, I've evolved from hands-on systems administration and software development to leading strategic engagements as Field CTO at Quest Software. I bring a unique blend of technical depth, business acumen, and market vision, enabling organizations to modernize their data platforms, embrace AI and observability, and drive sustainable innovation. My career spans roles in pre/post-sales consulting, product and project management, and IT delivery – equipping me to advise both technical and business leaders with credibility and clarity. Today, I work closely with customers, industry peers, and internal stakeholders to align data strategy with AI-readiness, improve governance and performance, and help companies navigate complex hybrid/cloud ecosystems. I regularly speak at global conferences, podcasts, and user groups, with a focus on: Data Observability & Governance; AI/ML Workload Readiness; Cloud-Native Architectures & RDBMS Strategy; and Metadata Intelligence & Data Lineage. Outside of tech, I live and breathe cricket – an ICC Certified Coach and Umpire who finds as much joy in the middle of the pitch as in the middle of a data architecture debate. “Cricket is Life. Life is Cricket!” Let’s connect – I'd love to talk data, AI, or cricket!
-
AI-Ready Data Architecture: Navigating Trade-offs for Real-World Success
As AI projects move from proof-of-concept to production, organizations face a critical challenge: modernizing data architectures in ways that enable innovation without compromising governance, trust, or performance. This session presents a practical framework for evaluating today’s most common architectural patterns—data lakehouse, data mesh, and data fabric—through the lens of AI readiness. We’ll explore the key trade-offs each approach brings and examine how metadata, lineage, and data modeling play a pivotal role in reducing risk and accelerating AI adoption. You’ll also gain insight into hybrid data movement strategies—including replication, streaming, and virtualization—and how observability can help you detect performance bottlenecks before they derail your initiatives. Drawing on real-world modernization scenarios that include integrating legacy platforms with cloud-native systems like Snowflake, this session equips you with actionable strategies to build a future-ready architecture that scales with your AI ambitions.
Amol Shanbhag
Product Manager/ Customer Success Architect, Tableau Salesforce
Amol Shanbhag
Tableau Salesforce, Product Manager/ Customer Success Architect
I have worked in the tech industry for the better part of the last two decades at organizations such as Microsoft, Expedia, Tableau, and Salesforce. I also have experience at startups that got acquired (Assurestart -> Homesite -> AmFam). Most recently, with over four years of experience at Salesforce, I lead the strategic customer success management for Tableau and its integration with AI features like Einstein Copilot and Agents. My role involves incorporating the voice of the field into product backlogs for both on-premises and cloud solutions, ensuring alignment with customer needs. I also focus on security compliance for integrations and enhancing workflows to optimize user adoption. Alongside my leadership role, I guide onboarding and learning programs for team members and clients, enhancing their Tableau experience. Previously, I collaborated with Apple as an embedded consultant to enable Tableau adoption and performance improvements across teams, working closely with cross-functional teams and Professional Services to ensure successful implementations.
-
Designing Actionable KPI Dashboards That Truly Impress Your Executives
-
Developing Agentic Analytics Skills: The Data Pro, Concierge & Inspector
-
Try something new: SQL DBA/Dev to Management (Without losing your tech)
-
Developing Agentic Analytics Skills: The Data Pro, Concierge & Inspector
-
Designing Actionable KPI Dashboards That Truly Impress Your Executives
-
3 Ways to Visualize Outliers
-
Smarter Requirement Gathering for Data Projects: Ask, Align, Deliver
-
The Art and Science Behind Mastering Consistent Data Governance At Scale
Achieving robust data governance at scale is both an art and a science. In this engaging session, we reveal how to implement practices that ensure data consistency, security, and compliance throughout sprawling digital ecosystems. Learn how to create governance frameworks that adapt to complex environments while driving agility and innovation. Through real-world case studies and strategic blueprints, attendees will explore proactive methodologies to standardize data management, enforce accountability, and foster a culture of continuous improvement—even as data volumes explode.
-
In-House Innovators: Driving Impact with Highly Visible Side-Projects
-
Lightning Talks-02: A Rapid-Fire Exploration of Key Tech Topics
Andreas Wolter
CEO, Data Architect, Sarpedon Quality Lab LLC
Andreas Wolter
Sarpedon Quality Lab LLC, CEO, Data Architect
Andreas Wolter is a former Program Manager for Access Control in Azure SQL and SQL Server at Microsoft. In this role, he spearheaded the revamp of SQL Server's permission system and the design of the external authorization system used by Purview policies and by Azure databases and data warehouses in Fabric. He has over 20 years of experience with SQL Server, is one of only 7 Microsoft Certified Solutions Masters for Data Platform (MCSM), and has been a regular speaker at conferences worldwide for over a decade. Andreas is the founder of Sarpedon Quality Lab LLC, a consulting company specializing in SQL Server performance, high availability, and security, which he manages in cooperation with Sarpedon Quality Lab Germany.
-
Quickstart into Performance Monitoring & Troubleshooting for SQL
A consistent performance experience is crucial for a successful business. If you are developing and testing SQL databases, you need to understand where to look and what to look for. Depending on whether your SQL database is hosted on-prem, in Azure SQL, or in Fabric, there are some differences in which tools are available. This session will give you an overview of the available tools, explain where they overlap, and show where limitations require a different approach using built-in SQL functionalities. You will be introduced to the database watcher, extended events, wait stats, and DMVs, among other things. So next time someone asks you to take a look at a badly performing database application, you'll know where to look.
-
Practical Insights on SQL Server Consolidation and Migration
-
Contained Availability Groups – Best Practices from Real-World Projects
-
Practical Insights on SQL Server Consolidation and Migration
-
Enhancing Data Security for SQL Server and Azure SQL: A Strategic Approach
SQL Server and Azure SQL offer a variety of functionalities and services designed to safeguard your most valuable asset: your data. But features alone do not protect you if they are not carefully thought through and instead operate in silos. Without an overall security strategy, it is too easy to miss gaps between security controls and find yourself exposed when a serious attack occurs. In today's environment of "hacking as a service" and state-funded, orchestrated hacking groups, being properly prepared for all scenarios can become vital to a company's survival. Led by a former program manager for SQL security at Microsoft, this session will reflect on the current threat landscape and explain the most common breach patterns as they occurred over the last year, as well as how to stop them from occurring. We will look at various attack vectors and common security risks, discuss what ransomware and data exfiltration attacks have in common, and show how that can help us detect attacks or limit the blast radius. This session is tailored for security managers and architects looking for a strategic perspective on security concepts, rather than concentrating on individual controls.
-
Data protection next level: what comes after access control
-
SQL Server under attack: SQL Injection
-
Lock down your SQL Server: essential steps to securing your data
-
Contained Availability Groups – why should you use them?
-
Quickstart into Extended Events
Andres Bolanos
Escalation Engineer, Microsoft
Andres Bolanos
Microsoft, Escalation Engineer
I’ve spent the past 18 years working with data technologies—starting with development and SQL Server, then moving into Azure Synapse, and now focusing on Microsoft Fabric Real-Time Intelligence. I currently work at Microsoft as an Escalation Engineer, focusing on Fabric Real-Time Intelligence, Fabric Data Warehouse, Azure Synapse, and Data Explorer (Kusto). I’m a graduate of MIT’s Product Management program, and I enjoy identifying recurring customer challenges to drive meaningful product improvements. My focus is on making data platforms both powerful and practical for real-world use.
-
Navigating the Future: SQL Server to Fabric Real-Time Intelligence
In today's data ecosystem, professionals face a critical decision: stick with familiar SQL Server technology or venture into Microsoft Fabric's Real-Time Intelligence (RTI) databases. This choice can significantly impact performance, scalability, and overall business intelligence capabilities. As a former SQL DBA and Microsoft Escalation Engineer who's worked extensively with both Azure Synapse and Fabric RTI, I'll guide you through this decision-making process with clarity and practical insights. We'll explore when SQL Server remains the optimal choice and when RTI databases offer compelling advantages. You'll discover the architecture differences that matter, performance considerations, and cost implications of each approach. I'll demonstrate how your existing SQL skills transfer to Kusto Query Language (KQL), showing familiar patterns and highlighting key differences. Through real-world scenarios and demonstrations, we'll examine migration paths, hybrid approaches, and integration strategies between these technologies. You'll see firsthand how these systems handle time-series data, complex analytics, and large-scale workloads differently. By the end of this session, you'll have a clear framework for database selection decisions and practical knowledge to implement or migrate to Fabric RTI when appropriate for your organization's needs.
Andrew Guard
Solutions Architect, Redgate
Andrew Guard
Redgate, Solutions Architect
I've been at Redgate for five years now, focusing primarily on our DevOps tooling and enabling clients with digital transformations. My previous background is eight years as a software developer, so I'm very comfortable talking about leveraging CI/CD tooling in the database space, informed by that experience.
-
Deploy with Confidence – Scaling Database Change Without the Risk
-
Modern Database Development: Real-World Lessons from the Front Lines
Join a panel of seasoned database professionals and industry experts as they dive into the toughest challenges facing modern development and operations teams. From navigating monolithic legacy systems, to wrangling with the data layer in the age of AI, this session explores the real-world roadblocks teams encounter when deploying databases at scale. You'll hear firsthand from organizations about their strategies for reducing downtime risk, managing inconsistent processes across diverse environments, and improving code quality. Whether you’re a developer, DBA, or DevOps leader, you’ll leave with practical insights and proven approaches to modernize your database deployment practices – no matter how complex your estate.
Andy Leonard
Chief Data Engineer, Enterprise Data & Analytics
Andy Leonard
Enterprise Data & Analytics, Chief Data Engineer
Andy Leonard is a husband, dad, and grandfather; founder and Chief Data Engineer at Enterprise Data & Analytics; an Azure Data Factory and Fabric Data Factory trainer, consultant, and developer; a SQL Server database and data warehouse developer; an author, engineer, and farmer.
-
Data Engineering Fundamentals With Fabric Data Factory
Join Andy Leonard and Stephen Leonard for a comprehensive one-day training at PASS Data Community Summit 2025, introducing data engineering concepts for enterprise data warehousing using Microsoft Fabric Data Factory. Tailored for newcomers to data engineering and self-taught professionals seeking to deepen their expertise, this course blends theoretical foundations with practical demonstrations, enhanced by AI-driven techniques to accelerate development. The day begins with a lecture-based overview, covering the essentials of enterprise data warehousing, Microsoft Fabric's role in modern data architectures, and key terminology. We'll explore how Fabric Data Factory integrates into the broader data engineering landscape, with a focus on leveraging AI to streamline design and implementation processes. This foundation ensures all participants share a common understanding before diving into technical content. The second part shifts to demonstration-focused learning, showcasing practical implementations of pipeline-driven staging and loading processes. You'll observe real-world patterns that tackle common data warehousing challenges, incorporating AI tools to optimize workflows and enhance efficiency. Each demonstration balances theoretical best practices with pragmatic solutions tailored to enterprise constraints. The final section delves into lifecycle management, demonstrating straightforward approaches to monitoring data pipelines, implementing maintenance routines, and establishing effective governance. We'll highlight how AI can accelerate these processes, offering techniques to improve scalability and adaptability. This training emphasizes actionable knowledge, equipping you with practical skills and AI-enhanced strategies to apply immediately, regardless of your experience level or organization's maturity.
-
Bridging Cloud and On-Premises: Data Integration with Fabric Data Factory
This session demonstrates practical techniques for creating reliable data pipelines between cloud sources and on-premises SQL Server databases using Microsoft Fabric Data Factory. Designed for data professionals seeking to implement hybrid data solutions, this session addresses the common challenges of cross-environment data integration. We begin with a brief overview of hybrid data movement concepts and the architectural components that enable secure, efficient transfer between cloud and on-premises environments. The core of the session features a comprehensive Fabric Data Factory pipeline demonstration, showing the complete implementation process from configuring cloud source connections to writing data into on-premises SQL Server tables. We'll configure a Fabric Data Factory Copy Activity, explaining configuration options and best practices for reliable data movement across environments. The demonstration will highlight practical solutions to common challenges, including network security considerations, authentication methods, error handling, and performance optimization techniques specific to Fabric Data Factory pipelines. You'll see firsthand how to configure, test, and deploy a working pipeline that bridges the cloud-to-on-premises gap. By the end, you'll understand the key components of Fabric Data Factory pipelines required for reliable cloud-to-on-premises data integration and have observed a working implementation that you can adapt to your specific requirements.
Andy Levy
Senior Data Platform Engineer, SS&C Advent
Andy Levy
SS&C Advent, Senior Data Platform Engineer
Andy is a database administrator, PowerShell fan, former developer, Open Source contributor, RVer, and connoisseur of dad jokes (not in that order). He’s worn a number of IT hats since 1999 before landing in database administration, including web server administration and development, systems integration, and database development. When he isn’t picking queries apart and wrangling unruly herds of databases, he can be found planning next summer's family camping trips or mentoring the TAN[X] FIRST Robotics Competition team.
-
Vacation-Proof Your Environment: Documentation That Saves the Day
-
Answering the Auditor's Call with Automation
As DBAs, we're called on regularly to produce documentation for security & compliance audits. Being able to show who has what level of access to an instance is the minimum, but we're often asked for more. Collecting this information and compiling it into something usable by auditors could take you hours or even days. But with automation, you can pull it all together in a matter of minutes while you're getting that second cup of coffee from the kitchen. Through the PowerShell demos presented in this session, you'll learn how to build documentation of your backup regimen, who has access to your databases, and show that you're staying current with SQL Server patches from Microsoft. Whether you have one SQL Server instance or one hundred, you'll be able to create a script to automatically format this data so that it's usable for your auditors – and hopefully be so complete that you don't receive follow-up questions.
-
Creating a Self-Serve System Status Page
-
Hobby Huddle: Mentoring a High School Robotics Team with Andy Levy
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
Andy Yun
Consulting Field Solutions Architect, Pure Storage
Andy Yun
Pure Storage, Consulting Field Solutions Architect
Andy Yun is a Consulting Field Solutions Architect at Pure Storage who has worked with SQL Server for over 20 years as both a Database Developer and Administrator. He focuses on performance tuning, with expertise in T-SQL, storage engine internals, and monitoring. Andy strongly believes in passing knowledge on to others, regularly speaking at conferences and user groups and mentoring industry colleagues. Andy is a former Microsoft MVP, co-founder of the Chicago SQL Association, and former co-leader of the Chicago Suburban User Group and Chicago SQL Saturday Organizing Committee.
-
A Beginner's Guide to Becoming a Performance Tuner – T-SQL Edition
-
Debating Two Viewpoints on Indexing – Development vs Operations
-
A Practical Introduction to Vector Search in SQL Server 2025
Are you uncertain if SQL Server 2025 Vector Search has useful applications? Concerned that it is just more AI hype? Then join me in this session, where I’ll break down how Vector Search works, where it fits into the AI landscape, and how you can use it today—on-prem and without changing your core architecture. We’ll cover: * The foundations of vector embeddings and similarity search * How SQL Server implements Vector Search, including indexing and querying * Code demos and practical use case exploration You’ll leave armed with the knowledge to look beyond the hype and understand how Vector Search can work for you.
-
Deep-Dive Workshop: Accelerating AI, Performance and Resilience with SQL Server 2025
SQL Server 2025 delivers powerful new capabilities for AI, developer productivity, and performance, all ready to run at scale on modern infrastructure. In this half-day deep-dive workshop, you'll get a first look at what's new and how to put it to work immediately. Learn directly from Microsoft and Pure Storage experts how to harness SQL Server's vector capabilities, and walk through real-world demos covering semantic search, change event streaming, and using external tables to manage vector embeddings. You'll also see how new REST APIs securely expose SQL Server's internals for automation and observability, including snapshots, performance monitoring, and backup management. The workshop wraps up with insights into core engine enhancements: optimized locking and faster backups using ZSTD compression, all running on a modern Pure Storage foundation that brings scale and resilience to your data platform. Whether you're a DBA, developer, or architect, this session will equip you with practical strategies for harnessing Microsoft SQL Server 2025 and Pure Storage to accelerate your organization's AI and data goals.
-
Hidden Pathways to Achieving Peak SQL Server Performance
-
Debating Two Viewpoints on Indexing – Development vs Operations
-
Unlock the Power of Analogies to Simplify Complex Topics
As data professionals, we often face the challenge of explaining complex data concepts to audiences who do not share our expertise. Analogies are a powerful tool to bridge that gap, transforming intricate ideas into relatable, digestible tidbits. If you’ve joined me before, you know how much I love using "my silly analogies" to demystify technology. In this lightning talk, I’ll share my key techniques for crafting relatable and impactful analogies. By the end, you’ll have new tools to connect with your audience, turn complexity into clarity, and leave a lasting impression.
-
How Mapping Your Confidence Will Empower You
-
Two Keys to Help You Crush Your Next In-Person Presentation
-
Mitigating Your Data Bloat with Partitioning & Data Virtualization
Are you managing a VLDB where cold data must be retained, but it’s dragging down maintenance, backups, and query performance? What if you could shrink your database footprint while still letting users query that data exactly as they always have? In this session, we’ll start by reviewing the classic approach using partitioning and filegroup strategies. Then we’ll dive deep into a modern, more flexible solution: Data Virtualization. You’ll learn how external tables/CETAS can enable seamless access to archived data stored in object storage, with zero changes to your users’ T-SQL code. This isn’t just about saving space — it’s about redefining what “archiving” can mean in today’s SQL Server environments. You’ll leave with practical techniques — both classic and modern — for reducing database bloat, improving manageability, lowering storage needs, and preserving fast, transparent access to historical data.
-
Lightning Talks-01: A Rapid-Fire Exploration of Key Tech Topics
Anthony Nocentino
Senior Principal Field Solutions Architect, Pure Storage
Anthony Nocentino
Pure Storage, Senior Principal Field Solutions Architect
Anthony is a Senior Principal Field Solutions Architect at Pure Storage, a Pluralsight author, and a Microsoft Data Platform MVP. Anthony designs solutions, deploys the technology, and provides expertise on business system performance, architecture, and security. Anthony holds a Bachelor's and a Master's in Computer Science, with research publications in high-performance/low-latency data access algorithms and spatial database systems.
-
Building an LLM Chatbot on Your Laptop – No Cloud Required!
Want to build a chatbot that can answer questions using your own data? This session shows you how, and no cloud is required! In this session, you will learn how to build a Retrieval-Augmented Generation (RAG) chatbot using open-source tools combined with SQL Server for vector storage – all from your laptop. We will cover topics such as LLM fundamentals, embeddings, similarity search, and how to integrate LLMs with your own data. By the end of the session, you will have a working chatbot, practical knowledge of how AI can enhance your data platform, and a new way to elevate your SQL Server skills with AI.
-
Deep-Dive Workshop: Accelerating AI, Performance and Resilience with SQL Server 2025
SQL Server 2025 delivers powerful new capabilities for AI, developer productivity, and performance, all ready to run at scale on modern infrastructure. In this half-day deep-dive workshop, you'll get a first look at what's new and how to put it to work immediately. Learn directly from Microsoft and Pure Storage experts how to harness SQL Server's vector capabilities, and walk through real-world demos covering semantic search, change event streaming, and using external tables to manage vector embeddings. You'll also see how new REST APIs securely expose SQL Server's internals for automation and observability, including snapshots, performance monitoring, and backup management. The workshop wraps up with insights into core engine enhancements: optimized locking and faster backups using ZSTD compression, all running on a modern Pure Storage foundation that brings scale and resilience to your data platform. Whether you're a DBA, developer, or architect, this session will equip you with practical strategies for harnessing Microsoft SQL Server 2025 and Pure Storage to accelerate your organization's AI and data goals.
-
Designing AI-Ready Databases: Architecture, Performance, and Operations
-
Bringing Enterprise AI to Your Operational Databases
-
Discovering Real-World Performance Insights with DTrace
Anupama Natarajan
Data and AI Consultant, Pearl Innovations Limited
Anupama Natarajan
Pearl Innovations Limited, Data and AI Consultant
I am a Cloud, Data and AI Consultant with 25+ years of experience in the design and development of data warehouses, business intelligence, AI-enabled applications, and SaaS-integrated solutions. I am a Microsoft MVP (Most Valuable Professional) for Data Platform and Artificial Intelligence and an MCT (Microsoft Certified Trainer), and I am passionate about sharing knowledge. I enjoy solving complex business problems with innovative solutions using Microsoft technologies and bring that experience into my trainings. I speak at conferences (PASS Summit, SQL Saturdays, Data Platform Summit, Difinity), organise local user group meetups, and am a SQLSaturday organiser in Wellington, New Zealand.
-
Break the ETL Chains: Mirroring for Snowflake Databases in Microsoft Fabric
With the General Availability (GA) of Mirroring for Snowflake Databases in Microsoft Fabric, the wait is over for real-time, no-copy access to Snowflake data — directly within your Fabric workspace. This session explores how organizations can eliminate data duplication, streamline analytics, and unlock insights without ever moving data out of Snowflake. Join us for an in-depth session that walks through the architecture, setup, and use cases of Snowflake Mirroring in Fabric. We’ll demonstrate how Microsoft Fabric now allows you to connect and query Snowflake data using Direct Lake mode, enabling low-latency insights, Power BI integration, and data governance — all while keeping your data within its original platform. Through live demos and practical scenarios, you'll learn how to: set up mirroring for your Snowflake databases in just a few steps; ensure secure, governed access via OneLake and Fabric permissions; use Power BI to visualize mirrored Snowflake data without impacting performance; and understand scenarios where mirroring adds value (e.g., cross-cloud insights, cost efficiency, and regulatory compliance). Whether you're a Fabric enthusiast, a Snowflake pro, or simply looking to simplify your data estate, this session is your launchpad to take full advantage of the new mirroring capabilities.
-
Smarter Together: Agentic Capabilities + Fabric = Intelligent Automation
-
Smarter Together: Agentic Capabilities + Fabric = Intelligent Automation
As enterprises strive to harness the full potential of their data assets, the future lies not just in data integration but in intelligent, autonomous systems that can act on behalf of users. With the introduction of Agentic Capabilities, Microsoft Fabric takes a bold leap toward self-driven analytics and decision-making. In this 75-minute session, we’ll explore what “agentic” really means in the context of Microsoft Fabric and how this next-generation paradigm is revolutionizing data workflows, automating tasks, and enabling AI-powered decision-making — all with minimal human intervention. You’ll discover how agents in Fabric can now understand goals, trigger data flows, monitor outcomes, and proactively take action using real-time data across the Fabric ecosystem. This session covers both the technical capabilities and practical use cases of agentic intelligence, showing you how to create, manage, and collaborate with intelligent agents inside Fabric using tools like Copilot, Data Factory, Synapse Pipelines, and Power BI.
-
Break the ETL Chains: Mirroring for Snowflake Databases in Microsoft Fabric
-
Share Smarter: Fabric Open Mirroring for Federated Analytics
-
Build Real-Time Dashboards with Fabric’s Eventstream and KQL
Real-time insights don’t have to be rocket science. In this session, we show how Microsoft Fabric’s Real-Time Analytics—powered by Eventstream and Kusto Query Language (KQL)—lets you stream, transform, and visualize data in seconds. Whether you're working with IoT sensors or social media feeds, see how easy it is to create interactive dashboards that update in near real-time.
-
Governance in Action: Microsoft Purview Meets Fabric
-
Lightning Talks-02: A Rapid-Fire Exploration of Key Tech Topics
Arthi Ramasubramanian Iyer
Microsoft Corporation
Arthi Ramasubramanian Iyer
Microsoft Corporation
-
Breakfast with the Microsoft Data Leadership Team
Get your day started early at PASS Data Community Summit with a free breakfast and a Q&A session with a panel of leaders across Microsoft hosted by Bob Ward. Tell us what is top of mind for you across SQL Server, Azure SQL, Microsoft Fabric and topics like AI. This is always one of the most popular sessions at the PASS Data Community Summit, so you won’t want to miss it!
Arun Vijayraghavan
Principal Product Manager, Microsoft Azure Data & AI, Microsoft
Arun Vijayraghavan
Microsoft, Principal Product Manager, Microsoft Azure Data & AI
Arun Vijayraghavan is a Product Manager in the Microsoft Azure SQL Database product group, focusing on the Cloud, Data, and AI space. With over 25 years of experience, Arun started his career as a developer and architect before transitioning into product management. Still a developer at heart, Arun is passionate about solving developer problems, which has led him to specialize in developer platforms and new product launches within the Data and AI domain. Arun has an established track record of providing strategic guidance for companies and is recognized for innovative work in the AI and Data space, holding a pending patent in the field. An avid blogger, Arun is also a passionate teacher and mentor, having guided numerous students. He volunteers his time teaching youth how to use AI and serves as an AI usage policy advisor for non-profit organizations. You can follow him at https://www.linkedin.com/in/avijayraghavan/
-
The Enterprise AI-Ready Database
-
AI Ready Apps with SQL Database in Microsoft Fabric
Explore how to build enterprise-grade Retrieval-Augmented Generation (RAG) systems by harnessing the power of SQL AI features, vector-based search, and Microsoft Fabric. This session delves into modern architectures that integrate structured data with large language model (LLM) capabilities to enable real-time, intelligent, and secure applications. Participants will learn how to optimize performance, enhance data security, and manage complex enterprise deployments while advancing the boundaries of data-driven decision-making. Attendees will leave with a practical playbook for deploying robust, secure, and scalable RAG pipelines ready for real-world applications.
Arvind Shyamsundar
Principal Product Manager, Microsoft
Arvind Shyamsundar
Microsoft, Principal Product Manager
Arvind Shyamsundar is a Principal Product Manager at Microsoft, working on innovations in Azure SQL Database, such as Hyperscale. Arvind taps into his rich experience, previously working with Microsoft’s customers as part of the Customer Advisory Team (CAT) and Microsoft Services. Arvind is well known in the SQL community and has presented at conferences like PASS, SQL Saturday, DPS, Ignite and various user groups.
-
Building Scalable Secure AI-Ready Apps with Azure SQL Hyperscale
Build AI apps that run securely and scale with your needs on Azure SQL Database Hyperscale. We’ll cover native vector indexes for semantic search, read scale-out for low-latency RAG, and calling the model of your choice directly from T-SQL. We will also show how to build modern AI agents, with all the tools you need, using databases and MCP servers. Modernize your AI application with the power of Azure SQL Database Hyperscale.
Aswin Manmadhan
Technical Solutions Lead, KingswaySoft
Aswin Manmadhan
KingswaySoft, Technical Solutions Lead
Aswin Manmadhan is a Technical Solutions Lead at KingswaySoft, specializing in data integration and automation solutions across the Microsoft and Salesforce ecosystems. With years of experience helping organizations streamline complex data flows, Aswin has deep expertise in designing and optimizing ETL processes that drive efficiency and reliability. He works closely with clients to implement scalable integration architectures using KingswaySoft’s powerful SSIS-based toolkits.
-
Mastering Data Sync with KingswaySoft: Strategies for Reliable Integration
In today’s data-driven world, organizations rely on seamless integration across platforms to ensure data consistency, reliability, and accuracy. KingswaySoft offers a suite of powerful integration toolkits that enable businesses to synchronize data between systems like Microsoft Dynamics 365, Salesforce, SharePoint, and SQL Server with ease. In this session, we'll explore how to design and implement robust, efficient, and scalable synchronization processes between various data sources and destinations. You’ll learn best practices for handling incremental data updates, managing complex data transformations, and ensuring data integrity. Whether you’re synchronizing cloud systems, on-premises databases, or hybrid environments, this session will provide you with the knowledge to build reliable and maintainable ETL solutions for your data synchronization needs.
Atte Sukari
Senior Data Engineer, Norrin
Atte Sukari
Norrin, Senior Data Engineer
Data enthusiast working with Azure and a broad range of other technologies. Passionate about turning complex data into real business value.
-
Scaling Pete's Plumbing Data Pipelines: From Efficient to Excessive
Join us as we follow Pete’s Plumbing, an imaginary company, from humble beginnings to a data-driven powerhouse (with a few bumps along the way). What starts as a simple system for tracking work hours soon grows into a full-blown data transformation adventure. As Pete’s team expands, so do the challenges of managing data: manual mistakes, ERP struggles, and scaling bottlenecks become harder to ignore. As the business grows, the pressure to make data-driven decisions increases, and Pete realizes that more data means more complexity. But where’s the sweet spot? Come and join us as Pete’s consultants, exploring the tricky balance between efficiency and overengineering. How should Pete proceed? Let’s dive in and find out!
Bachar Rifai
Solution Architect, AWS
Bachar Rifai
AWS, Solution Architect
Bachar Rifai is a Partner Solution Architect at AWS with over 20 years of experience in database technologies and data management. During his 4+ years at AWS, he has developed specialized expertise in helping automotive and manufacturing organizations implement and scale generative AI solutions that drive operational transformation. With deep technical knowledge spanning traditional database systems and modern AI architectures, Bachar excels at translating complex GenAI concepts into practical, actionable strategies for automotive and manufacturing executives. He has successfully guided numerous manufacturers through their AWS cloud journey, enabling them to deploy cutting-edge AI-driven solutions that deliver measurable business impact. Bachar's unique approach combines technical depth with real-world application, making him a trusted advisor for both engineering teams and business leaders. His work focuses on helping automotive and manufacturing companies leverage generative AI to gain competitive advantage in an increasingly data-driven industry landscape. Known for his ability to bridge the gap between technical complexity and business value, Bachar continues to help organizations unlock the transformative potential of AI while building scalable, enterprise-ready solutions on AWS.
-
RDS SQL Server with AI: Building Intelligent Self-Healing Database Systems
Traditional database administration often relies on reactive approaches to performance issues, security threats, and maintenance challenges. This session reimagines RDS SQL Server management through the lens of generative AI to create truly intelligent, self-healing database environments. Unlike current solutions that use GenAI merely for optimization or monitoring, we'll explore building a comprehensive autonomous database ecosystem that learns, adapts, and proactively resolves issues. Attendees will learn how to integrate Amazon Bedrock, SageMaker, and custom LLMs with RDS SQL Server to enable advanced capabilities: predictive index recommendations based on query patterns, automatic schema evolution suggestions, anomaly detection with root cause analysis, and intelligent backup strategies tailored to application workloads. Through practical demonstrations, you'll see how to implement a feedback loop where your RDS environment continuously improves through learning from operational data. We'll cover architecting the GenAI integration pipeline, training custom models on SQL Server-specific patterns, implementing secure API gateways between RDS and AI services, and building dashboards that provide explainable AI insights for your database operations team. Real-world case studies will demonstrate how these techniques have reduced administration overhead by 70% while improving performance and security posture.
Bala Narasimhan
Group Product Manager, Google
Bala Narasimhan
Google, Group Product Manager
Bala Narasimhan is Group Product Manager for Cloud SQL. Bala has spent his entire career in enterprise software both as a developer and a product manager. He started dabbling in databases at Oracle and then became a founding engineer and product manager at ParAccel building a massively parallel data warehouse with a PostgreSQL interface. Bala has also spent time at Salesforce.com and Nutanix.
-
Achieve Peak SQL Server Performance with Gemini
Discover the power and efficiency of Google Cloud's managed database offering for SQL Server. This session will delve into how Cloud SQL for SQL Server delivers near-zero downtime for maintenance, showcasing its robust high-availability features. Learn how assistive agents like Gemini CLI provide unparalleled visibility and optimization for your SQL Server deployments. We'll also explore the benefits of a unified database center for fleet management, highlighting the superior price-performance you can achieve with Cloud SQL for SQL Server.
Barney Lawrence
Senior Consultant, Simpson Associates
Barney Lawrence
Simpson Associates, Senior Consultant
Barney Lawrence has over a decade's experience helping people make the most of their data on the Microsoft data platform. Having moved from the guy who knows Excel, to Data Analyst, to Accidental BI Developer and finally to Actual BI Developer, he currently works as a Senior Consultant for Simpson Associates focusing on local government and health data.
-
What's What and Who's Who? Cross System Record Linkage with Fabric
-
(Almost) Everything You Wanted to Know About Purview Data Governance
Microsoft's Purview Data Governance delivers a broad set of tools to allow an organisation to ensure its data assets are discoverable, well understood, and trusted. This session is aimed at anyone looking to understand Purview Data Governance or get started using it. We will take a tour of Purview's Data Map and Unified Catalog, visiting all the key features and sharing lessons learned from real-world projects. We will also look at how to successfully deliver Purview within your organisation: how to sequence your rollout, how to effectively deliver a pilot project, and how much it will cost you. Finally, we'll take a peek at the Purview road map, focusing on future features you need to be aware of and prepare for. By the end of the session, you will be ready to take what you have learned and begin preparing for a successful rollout of Purview within your organisation.
-
What's What and Who's Who? Cross System Record Linkage with Fabric
-
(Almost) Everything You Wanted to Know About Purview Data Governance
Microsoft's Purview Data Governance delivers a broad set of tools to allow an organisation to ensure its data assets are discoverable, well understood, and trusted. This session is aimed at anyone looking to understand Purview Data Governance or get started using it. We will take a tour of Purview's Data Map and Unified Catalog, visiting all the key features and sharing lessons learned from real-world projects. We will also look at how to successfully deliver Purview within your organisation: how to sequence your rollout, how to effectively deliver a pilot project, and how much it will cost you. Finally, we'll take a peek at the Purview road map, focusing on future features you need to be aware of and prepare for. By the end of the session, you will be ready to take what you have learned and begin preparing for a successful rollout of Purview within your organisation.
Bart Vernaillen
MSSQL Process Automation Expert, D-Bart
Bart Vernaillen
D-Bart, MSSQL Process Automation Expert
Bart is a SQL Server performance specialist with a focus on automation. He created the D-BART method: Detect, Blueprint, Automate, Report, Thrive; a practical approach that helps DBA teams standardize and scale operations without scaling effort. He’s the author of PlanInspector, a tool for smarter query plan analysis, and regularly shares hands-on insights through talks, tools, and blog posts.
-
Analyzing A Bunch Of Query Plans With PowerShell
In SQL Server, we can capture query plans using Extended Events—but then what? Analyzing a single query plan can be a complex task, but what if you have hundreds or even thousands of them? To simplify this, I wrote PowerShell code that parses the query plan XML and stores the extracted data in a database. This approach makes it much easier to identify performance issues and gain insights into a large set of query plans. Session content: First, we'll take a closer look at the query optimization process. Then we’ll examine the XML structure of a query plan and identify key elements within the plan. Next, we'll walk through the PowerShell script that parses the XML and writes the extracted data to a database. Finally, I'll demonstrate how to retrieve and analyze valuable insights from the database.
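The parse-and-flatten step this abstract describes can be sketched in a few lines. Below is an illustrative Python version (the session itself uses PowerShell); the plan fragment is hand-made for the example, but the showplan XML namespace and the RelOp attributes (PhysicalOp, EstimateRows, EstimatedTotalSubtreeCost) are the real ones SQL Server emits.

```python
import xml.etree.ElementTree as ET

# Hand-made showplan fragment for illustration; real plans are captured
# via Extended Events and are far larger, but share this structure.
SAMPLE_PLAN = """<ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan">
  <RelOp PhysicalOp="Clustered Index Scan" EstimateRows="1500" EstimatedTotalSubtreeCost="0.42">
    <RelOp PhysicalOp="Filter" EstimateRows="120" EstimatedTotalSubtreeCost="0.05"/>
  </RelOp>
</ShowPlanXML>"""

SP_NS = "{http://schemas.microsoft.com/sqlserver/2004/07/showplan}"

def extract_operators(plan_xml: str):
    """Flatten every RelOp node into a dict, ready to bulk-insert into a table."""
    root = ET.fromstring(plan_xml)
    rows = []
    for op in root.iter(SP_NS + "RelOp"):
        rows.append({
            "physical_op": op.get("PhysicalOp"),
            "estimate_rows": float(op.get("EstimateRows", 0)),
            "subtree_cost": float(op.get("EstimatedTotalSubtreeCost", 0)),
        })
    return rows

rows = extract_operators(SAMPLE_PLAN)
print(rows)  # two operators: the scan and the nested filter
```

Once every operator of every captured plan is a row in a table, finding the most expensive scans across thousands of plans becomes a simple aggregate query.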
Basudeb Sarkar
Principal Software Engineering Manager, Microsoft
Basudeb Sarkar
Microsoft, Principal Software Engineering Manager
As a Principal Software Engineering Manager at Microsoft, I lead the integration of SQL Database within Microsoft Fabric, shaping a unified data platform that brings together operational workloads with advanced analytics and AI. My team builds translytical capabilities—combining transactional performance with analytical scale—to ensure SQL services are enterprise-ready and deliver real-time insights through seamless operational-to-analytics data sync. I drive product readiness and customer enablement for core data integration experiences. Partnering across engineering, PM, and field teams, I focus on delivering solutions that simplify ingestion into Fabric SQL, optimize performance, and provide an intuitive user experience. Beyond technical execution, I’m passionate about technical strategy, AI-driven innovation, and mentoring. I’ve helped advance the integration of Copilot and intelligent performance features into SQL tools and collaborate across engineering, data science, UX, and product to shape our roadmap. I foster a culture of learning, code quality, and innovation, and I’m known for collaborative leadership that empowers teams to build impactful, customer-centric data solutions.
-
SQL Database in Fabric: The Unified Database for AI Apps and Analytics
Discover how SQL in Fabric brings transactional and analytical workloads together in one cloud-native database. In this session, we’ll show how developers and data teams can simplify AI-driven application development with near real-time insights and built-in AI, seamless OneLake integration, and end-to-end analytics—all in a single, unified experience.
Ben DeBow
Founder & CEO, Fortified Data
Ben DeBow
Fortified Data, Founder & CEO
Ben DeBow is a business leader, author, and innovator on a mission to move the technology industry from the era of abundance to the era of efficiency. As the founder and CEO of Fortified, a next-generation database consultancy twice named to the Inc. 5000 list of fastest-growing private companies, Ben has been implementing mission-critical data platforms around the globe for the past two decades.
-
Legacy Meets AI: Sustainable Data Strategies for the Modern Enterprise
As companies rush to adopt AI and other modern technologies, they often overlook the financial and operational inefficiencies of their existing systems. This session explores how businesses can reduce costs by modernizing inefficient processes, optimizing data workflows, and refactoring outdated code. Attendees will learn to strike a balance between supporting legacy systems and driving innovation with Microsoft technologies, focusing on strategies to manage technical debt while enabling efficient AI and application development. Join CIO advisor and author Ben DeBow along with business communication expert Blythe Morrow to learn how you can navigate the tension between legacy system costs and AI/cloud sustainability goals while working cross-functionally to align your business around new technologies and processes.
-
What do business leaders think when they look at your code
-
FinOps for Data: Reclaiming Control Through Cost Transparency
In today's technology landscape, every organization is developing a FinOps practice as all technology decisions carry financial implications. As data professionals, we face a striking reality: enterprise data footprints consume 20-40% of technology budgets, yet DBAs and data engineers often struggle to maintain ownership over code, data models, and architectures. The time has come to leverage our unique data expertise to change this dynamic. FinOps for data provides a framework that transforms technical objections into financial conversations everyone understands. Rather than simply stating "this is bad code," imagine the impact of declaring "this code will cost $48,999 over the next three years." Suddenly, stakeholders are listening. This session explores how data professionals can align with FinOps teams to drive better decisions around data lifecycle management and architectural choices. You'll learn strategies to quantify the financial impact of data design decisions and build a data-centric FinOps practice. We'll discuss developing metrics that communicate technical debt effectively and creating governance structures that balance innovation with cost. Join us to discover how FinOps for data can help you regain influence, drive better conversations about data architecture, and establish yourself as a strategic partner in technology investment decisions.
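As a back-of-the-envelope illustration of turning "bad code" into a dollar figure, here is a tiny Python sketch. Every number in it is hypothetical: the execution count, the per-execution CPU overhead, and the per-vCore-hour rate would all come from your own monitoring and your cloud provider's pricing.

```python
# Hypothetical inputs, purely illustrative.
executions_per_day = 50_000          # from query store / monitoring
extra_cpu_seconds_per_exec = 0.12    # untuned query vs. tuned baseline
vcore_hourly_rate = 0.55             # assumed cloud $/vCore-hour
years = 3

# Convert the overhead into billable compute hours, then dollars.
extra_cpu_hours_per_year = executions_per_day * extra_cpu_seconds_per_exec * 365 / 3600
total_cost = extra_cpu_hours_per_year * vcore_hourly_rate * years

print(f"${total_cost:,.2f} over {years} years")  # -> $1,003.75 over 3 years
```

The point is not the specific figure but the translation: once a query's overhead is stated in dollars per year, the conversation with stakeholders changes.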
-
Congratulations, You're Now a DBA!
Ben Johnston
Principal Architect, I-Tech Solutions, Inc.
Ben Johnston
I-Tech Solutions, Inc., Principal Architect
Ben is a data architect from Iowa and has been working with SQL Server since version 6.5 in the late '90s. Ben focuses on performance tuning, warehouse implementations and optimizations, database security, and system integrations.
-
Unmasking SQL Server Masking and RLS – Side Channel Attacks
Expanded data access and data democratization have introduced the need for advanced security mechanisms. Row-level security (RLS) and dynamic data masking (Masking) are both available in SQL Server to allow enterprises and architects to manage data security at a more granular level. While these patterns are helpful in some situations, they are vulnerable to side-channel attacks. These are mentioned briefly in the Microsoft documentation, but without extensive examples. This session is a demonstration of implementing data masking and row-level security, along with a discussion of use cases for each and how they work together. After showing how they work, this will be an active demonstration of each pattern being broken using various methods, primarily side-channel attacks. Finally, mitigations to these attacks will be discussed and demonstrated. The session will end with an assessment of how each strategy fits into a project and its suitability for various types of projects.
-
SQL Performance Tips for Development Teams
-
Managed Instance Migration – Avoiding the Pain
-
SQL Performance Tips for Development Teams
Application developers usually have a different focus in the database compared to administrators or even database developers. Developers are the first line of defense for a high-performing database system. This session is focused on helping developers, and even administrators, understand how database design has a direct impact on performance. It focuses on actionable items to help alleviate current problems. An even more important aspect is preventing problems before they happen. This session focuses on finding trouble spots, like blocking and deadlocks and your worst-performing queries, and how you can avoid these issues. Maintenance of statistics and indexes is covered, as well as methods to add new indexes without creating redundancy.
-
Unmasking SQL Server Masking and RLS – Side Channel Attacks
Ben Miller
Architect, DBADuck Consulting
Ben Miller
DBADuck Consulting, Architect
Ben Miller, aka @dbaduck, is a seasoned database professional with over 25 years of experience in SQL Server, PowerShell, and automation. As a passionate advocate for efficiency and innovation, Ben specializes in empowering IT professionals and DBAs to harness the power of PowerShell for database and server management. A Microsoft Data Platform and PowerShell MVP, speaker, and community contributor, Ben has a knack for breaking down complex topics into practical, actionable insights. When he’s not automating workflows or sharing his knowledge, you’ll find him engaging with the #SQLFamily or exploring new tech to help others work smarter. Follow Ben on all Social Media (@dbaduck) and dive into the world of automation and database magic with him!
-
Database Migrations Made Easy with PowerShell and dbatools
Database migrations can be complex, time-consuming, and risky—unless you have the right tools. Enter dbatools, a community-driven PowerShell module designed to make SQL Server migrations easy, reliable, and repeatable. In this session, you'll learn how to use PowerShell and dbatools to handle migrations of databases, logins, jobs, credentials, and more—with just a few lines of code. We'll walk through real-world scenarios, highlight key commands, and demonstrate how to validate and monitor your migrations for success. Whether you're moving a single database or an entire instance, dbatools takes the pain out of the process and replaces it with automation, consistency, and confidence. Perfect for DBAs, developers, and sysadmins alike—if you manage SQL Server environments, this is a session you don’t want to miss.
-
Getting Started as a PowerShell DBA
-
Getting Started as a PowerShell DBA
New to PowerShell and wondering how it fits into the world of database administration? This session is for you! Getting Started as a PowerShell DBA is designed to help SQL Server DBAs take their first steps with PowerShell — no scripting experience required. We'll cover what PowerShell is, why it's useful for DBAs, and how you can start using it right away to save time and reduce manual effort. You'll see simple, real-world examples like checking server status, running SQL queries, and automating backups — all done through easy-to-understand PowerShell scripts. By the end, you'll have the confidence to start exploring PowerShell on your own and a set of starter scripts to take back to the office.
-
SQL Development through Behavior Analysis
T-SQL isn’t just about writing code that works — it’s about writing code that behaves well. In this session, we’ll explore how understanding the behavior of your SQL code can lead to better performance, reliability, and maintainability. It is important to understand how SQL behaves behind the scenes. Features and statements are easy to use, but it is easy to misuse them as well. This session takes a practical look at developing T-SQL with intention: how the optimizer interprets your logic, and how small changes in syntax can lead to big differences in execution. We’ll walk through real examples of “working” queries that don’t behave optimally, and uncover what’s really going on under the hood. Whether you’re tuning stored procedures, writing reports, or building new logic, this session will help you think like the engine — and write better SQL because of it.
-
PowerShell DBA Dream dbatools Workshop
Unlock the power of automation and enhance your database management skills in this full-day PowerShell workshop designed for DBAs. This session will equip you with essential skills to streamline your daily operations and reduce manual errors through scripting and automation. We will start with PowerShell fundamentals before moving into advanced scripting best practices tailored to the needs of SQL Server management. A key highlight of the workshop is our in-depth exploration of the dbatools module, a powerful, community-built toolkit that simplifies SQL Server management tasks. With over 700 cmdlets/functions, it is a treasure trove of greatness (in my opinion). In this mostly-demo session with many real-world examples, you’ll learn how to leverage dbatools to automate tasks such as backups, restores, performance monitoring, and migrations, dramatically enhancing your operational efficiency. Whether you’re looking to learn PowerShell and dbatools or to explore automation strategies, this workshop offers a rich blend of theoretical insights and interactive scenarios. Join me for a day of learning, collaboration, and fun that will allow you to take full control of your database operations with the combined power of PowerShell and dbatools.
-
The Art of Receiving Feedback
-
Interesting Insights from the MSDB Database
-
Backups in the Modern SQL Server World including SQL 2025
-
RESTORE in the Modern SQL Server World including SQL Server 2025
-
Creating and Managing a Recovery Strategy for your Backup Strategy
-
Bad Bosses are NOT the reason you cannot advance in your career
-
The Ultimate Guide to Ola Hallengren's Maintenance Solution
-
Why PowerShell Should Be In Your DBA Toolbox
PowerShell is a must-have tool for modern DBAs looking to automate, streamline, and scale their work. With modules like dbatools, complex SQL Server tasks—such as migrations, backups, and audits—can be reduced to a single line of code. Combine that with SMO for fine-grained database control and WMI for system-level insights, and you have a powerful scripting environment that bridges infrastructure and database administration. This session will showcase practical examples of how PowerShell can simplify daily DBA tasks, improve consistency, and boost productivity. Whether you're maintaining one server or a hundred, you'll learn why PowerShell deserves a permanent spot in your DBA toolbox.
Ben Weissman
Data Passionist, Solisyon
Ben Weissman
Solisyon, Data Passionist
Ben has been working with SQL Server since SQL Server 6.5, mainly in the BI/data warehousing field. He is a Microsoft Data Platform MVP, a (co-)author of multiple tech books on topics such as Biml, SQL Server, Kubernetes and Fabric, as well as a regular speaker at national and international events. He has also published multiple video courses on Pluralsight and other platforms. Ben is also a co-organizer of DataGrillen, New Stars of Data, SQL Konferenz and others, as well as a volunteer and mentor for many other data platform events.
-
Power Hour
Power Hour. The session. The myth. The legendary waste of your valuable PASS Summit time. The one session that must be experienced – live and in person – to be believed. If you want to learn critical skills and cutting-edge technologies… please check the schedule for a different session to attend. But if you want to see Power BI, PowerApps, Flow, Power Singalong, and other Power Platform tools used in ways in which their makers never intended, there is no place you would rather be. Power Hour delivers wacky, frequently unseemly, and generally funny demos, all with a serious point about the flexibility and power of Microsoft Power Platform services, tools, and technologies. Nearly endless swag, probably. All are welcome. Must not miss. Professional driver on a closed course. Do not attempt without appropriate safety measures in place.
-
The what, how and why of SQL Database in Fabric
-
REST APIs, AI and Vectors in SQL Server 2025
Vector search is at the core of modern AI applications, powering recommendation engines, semantic search, and image recognition. Traditionally, implementing vector-based queries required adding a dedicated vector database, introducing complexity and additional infrastructure. But with SQL Server 2025, you can now store, query, and optimize vector embeddings natively – all within your existing environment, running entirely on-premises. Why does this matter? For organizations needing full control over their data, SQL Server 2025 enables AI-powered search without relying on external cloud services or third-party databases. In this demo-driven session, we'll start with how REST APIs are integrated into SQL Server 2025, as they are not only a cool feature on their own but also a prerequisite to integrating AI into your workloads. We’ll keep going by looking at what vectors are, why they’re essential for AI applications, and how SQL Server 2025 simplifies vector search. You'll learn how to store and query vector embeddings efficiently, explore real-world scenarios like AI-powered recommendations and intelligent search, and discover best practices for indexing and optimizing performance. Of course, we’ll also take a look at what you’ll need to get going right away in your own environment! If you're looking for a way to integrate AI-powered search within your on-premises SQL Server – without external infrastructure – this is a session you won’t want to miss!
-
SQL Server 2025: REST API Access and JSON Processing
-
Building Bridges: A Path to Active Community Engagement
-
Getting Data In and Out of Azure
All the Azure Data offerings are great. But they are also confusing. Which one is right for you? Which size do you need? The answer is of course: it depends. Join us for a day of demystifying the jungle of offerings! We will walk you through the different service offerings, from SQL Server running in a VM over Azure SQL DB up to Fabric. To make sure this is applicable and actionable, we will clearly structure the day by use case, covering both how to land your data in Azure and how to make it accessible for consumption:
– HA/DR – are you intending to use Azure only as your backup datacenter?
– Migration – is Azure going to be your new home?
– ETL, Replication, Mirroring and Links – are you only intending to run some of your workloads, like analytics, in the cloud and need to build a landing zone for your data from other sources?
– Streaming – are you getting data from sensors or other devices?
– Analytics – is Fabric really your only choice to run reports in the future?
This demo-packed day will be your fast track to figuring out which of the countless offerings is right for you and what it will take to get there. We’ll focus on the technical aspects but also take a look at implications like security, governance and, of course, cost.
-
BFFs are forever
-
Community Conversation: From "What If" to "Why Not" – Real Stories of Becoming a Public Speaker
Everyone starts somewhere – and for our panelists, that somewhere was their very first talk in the New Stars of Data program, an initiative helping new voices shine in the data community. Now, a few months or years later, they’re back to share what happened next. In this fast-paced, story-driven session, 10+ speakers will each take just a few minutes to reflect on their journey, from nervous debuts to confident presenters, community contributors, and even MVPs. You’ll hear honest lessons, funny behind-the-scenes moments, and advice for anyone who’s ever thought, "I could never do that." Whether you’re a future speaker, a mentor, or just here for inspiration, this is your chance to see what happens when you take the leap – and keep going.
-
Community Meet & Greet: New Stars of Data
Join us for a relaxed and inspiring meet & greet with some of the amazing former and new New Stars of Data! Whether you’re curious about what it’s like to take your first steps as a speaker, want to reconnect with alumni, or just enjoy great conversations with people who share your passion for the data community – this is the place to be. Hear stories, exchange experiences, and get inspired by those who’ve taken the stage (and those who soon will). Come say hi, make connections, and celebrate how far our New Stars have come!
Blythe Morrow
Principal Consultant, PaperSword B2B
Blythe Morrow
PaperSword B2B, Principal Consultant
Blythe is a product expert focused on SQL Server and database product development. She has more than two decades of success delivering multi-million-dollar revenue growth through innovative marketing, product positioning, messaging, and competitive differentiation. She spends her days developing products that Data Pros use every day. She has a background in Microsoft SQL Server, Power BI and Data Warehousing. She speaks at many conferences and events around the world, helping improve the work lives of DBAs everywhere.
-
Legacy Meets AI: Sustainable Data Strategies for the Modern Enterprise
As companies rush to adopt AI and other modern technologies, they often overlook the financial and operational inefficiencies of their existing systems. This session explores how businesses can reduce costs by modernizing inefficient processes, optimizing data workflows, and refactoring outdated code. Attendees will learn to strike a balance between supporting legacy systems and driving innovation with Microsoft technologies, focusing on strategies to manage technical debt while enabling efficient AI and application development. Join CIO advisor and author Ben DeBow along with business communication expert Blythe Morrow to learn how you can navigate the tension between legacy system costs and AI/cloud sustainability goals while working cross-functionally to align your business around new technologies and processes.
-
Critical Communication – Managing Yourself and Others
Want to manage people? Lead teams? Influence business decision-making? Senior professionals should know how to communicate with clarity, frame conversations, and influence the decisions of others. And your ability to have difficult conversations and engage people affects your chances of moving into a team leadership role. We often attempt or avoid difficult conversations every day – whether dealing with an underperforming employee, disagreeing with a spouse, or negotiating with a client. There are a few key frameworks that can help us see a situation from many sides, control our reactions, and communicate more persuasively. Join Blythe in a session where we tackle the art of communication and what it really means to manage others.
-
What do business leaders think when they look at your code
-
Pitching your ideas: influencing at work
-
Legacy Meets AI: Sustainable Data Strategies for the Modern Enterprise
-
Managing Yourself and Others
Want to manage people? Lead teams? Influence business decision-making? Senior professionals should know how to communicate with clarity, frame conversations, and influence the decisions of others. And your ability to have difficult conversations and engage people affects your chances of moving into a team leadership role. We often attempt or avoid difficult conversations every day – whether dealing with an underperforming employee, disagreeing with a spouse, or negotiating with a client. There are a few key frameworks that can help us see a situation from many sides, control our reactions, and communicate more persuasively. We'll tackle the art of communication and what it really means to manage others.
-
Translating Application Efficiency into Business Value
-
Pitching your ideas: influencing at work
Bob Ward
Principal Architect, Microsoft
Bob Ward
Microsoft, Principal Architect
Bob Ward is a Principal Architect for the Microsoft Azure Data team, which owns the development of Microsoft SQL from ground to cloud to Fabric. Bob has worked at Microsoft for 31+ years on every version of SQL Server shipped, from OS/2 1.1 to SQL Server 2025, including Azure SQL and SQL Database in Fabric. Bob is a well-known speaker on SQL Server, Azure SQL, AI, and Microsoft Fabric, often presenting talks on new releases, internals, and specialized topics at events such as SQLBits, Microsoft Build, Microsoft Ignite, PASS Summit, Fabric Community Conference, and Redgate Summit. You can also learn Azure SQL from him on the popular series https://aka.ms/azuresql4beginners. You can follow him at @bobwardms or linkedin.com/in/bobwardms. Bob is the author of the books Pro SQL Server on Linux, SQL Server 2019 Revealed, Azure SQL Revealed, SQL Server 2022 Revealed, and Azure SQL Revealed 2nd Edition, available from Apress Media.
-
Secure, Scalable AI Solutions with Microsoft SQL, Ground to Cloud to Fabric
In an age where data drives innovation and artificial intelligence reshapes industries, organizations find themselves navigating the complexities of scalability, security, and advanced analytics. This keynote explores how Microsoft SQL empowers developers, IT Professionals, and administrators to leverage AI to unlock actionable insights while ensuring data security and seamless scalability across diverse environments—from SQL Server 2025 to Azure SQL to Microsoft Fabric. In our closing keynote you will hear about all the latest innovations for using AI with your data, powered by the industry-proven SQL Server platform that is trusted by organizations all over the world. You will hear about how developers can build intelligent AI agent applications with their data using a secure and scalable platform with familiar languages and AI-assisted tools such as GitHub Copilot. The advanced vector capabilities available in SQL Server 2025, Azure SQL, and SQL Database in Fabric will provide significantly smarter search techniques for your data connected to AI models in a secure, isolated fashion, letting you pick the model of your choice, ground or cloud. You will also see how Copilot experiences in tools like SQL Server Management Studio (SSMS) can improve productivity for developers, users, and administrators of SQL Server. Whether you are managing data on-premises, migrating to the cloud, or operating within hybrid models, this keynote will provide a comprehensive roadmap to scale your use of AI while safeguarding data integrity at every stage. Join us to explore how Microsoft SQL connects ground, cloud, and fabric into a cohesive and secure AI-powered data platform.
-
Experience SQL Server 2025: The AI-Ready Enterprise Database
Come learn the latest about SQL Server 2025. You will learn how to bring AI to your data with AI applications using built-in vector capabilities, ground or cloud. In addition, you will learn about the most significant SQL Server release for developers in a decade, including features like JSON, RegEx, REST API, GraphQL, and Change Streaming. You will also learn about all the new engine features for security, performance, and availability. All of this can be cloud connected to Azure and Fabric. This is your best session to learn everything you need to know about the latest release of SQL Server.
-
Deep-Dive Workshop: Accelerating AI, Performance and Resilience with SQL Server 2025
SQL Server 2025 delivers powerful new capabilities for AI, developer productivity, and performance, all ready to run at scale on modern infrastructure. In this half-day deep-dive workshop, you'll get a first look at what's new and how to put it to work immediately. Learn directly from Microsoft and Pure Storage experts how to harness SQL Server’s vector capabilities and walk through real-world demos covering semantic search, change event streaming, and using external tables to manage vector embeddings. You'll also see how new REST APIs securely expose SQL Server's internals for automation and observability, including snapshots, performance monitoring, and backup management. The workshop wraps up with insights into core engine enhancements: optimized locking and faster backups using ZSTD compression, all running on a modern Pure Storage foundation that brings scale and resilience to your data platform. Whether you're a DBA, developer, or architect, this session will equip you with practical strategies for harnessing Microsoft SQL Server 2025 and Pure Storage to accelerate your organization's AI and data goals.
-
Becoming Azure SQL DBA – New Opportunities in Azure – Panel Discussion
As an Azure SQL DBA, your role expands beyond its traditional scope. This shift in responsibilities creates exciting opportunities to develop new cloud skills – from integrating with diverse Azure services to adopting modern coding practices with AI – empowering you to grow and innovate. This wrap-up session of the learning pathway includes a 10-minute introduction followed by a dynamic 50-minute interactive panel discussion. Bring your questions and engage directly with experts ready to share insights and practical guidance. Moderated by Bob Ward, this session is your opportunity to clarify concepts, explore best practices, and connect with industry leaders.
-
Breakfast with the Microsoft Data Leadership Team
Get your day started early at PASS Data Community Summit with a free breakfast and a Q&A session with a panel of leaders across Microsoft hosted by Bob Ward. Tell us what is top of mind for you across SQL Server, Azure SQL, Microsoft Fabric and topics like AI. This is always one of the most popular sessions at the PASS Data Community Summit, so you won’t want to miss it!
-
Lunch and Learn: All things SQL with Bob Ward
Join us for a lunch and learn with Bob Ward as he covers all things SQL and dives into his latest release: SQL Server 2025 Unveiled: The AI-Ready Enterprise Database with Microsoft Fabric Integration. If you know SQL Server, you will love this session. Come learn about SQL Server 2025's advanced, scalable AI integrations and integration with Microsoft’s leading analytics solution: Microsoft Fabric. This will be an exclusive event sure to fill up quickly! Add it to your schedule now!
-
Microsoft Keynote: What's New in Azure Databases: Real-World Solutions for Developers and Enterprises
Whether you're modernizing for peak performance and AI readiness or building the next generation of intelligent apps and agents, enterprises require robust, scalable databases that not only meet today’s demanding requirements but also unlock tomorrow’s possibilities. Join CVP of Azure Databases Shireesh Thota to discover how Microsoft empowers your vision with breakthrough innovations across SQL Server, Azure SQL, Azure Cosmos DB, and Azure Database for PostgreSQL. Learn how to accelerate AI-powered experiences with Copilot and Fabric, and see dynamic demos that showcase how Microsoft’s unified, enterprise-ready data platform, together with AMD, can help you achieve your transformation goals.
-
Inside SQL Server 2025
Join Bob Ward and friends to go deep into the next major release of SQL Server, SQL Server 2025, the Enterprise AI-ready database. You will learn the fundamentals of what capabilities are in the release so you can plan and make key decisions on when and how to upgrade. This session will then go deep into all the major features including but not limited to: AI built-in, JSON, RegEx, REST APIs, Change Event Streaming, Fabric Mirroring, new concurrency enhancements, performance improvements, HA enhancements, and security. You will learn all the latest innovations of SQL Server 2025 including plenty of demonstrations and sample code you can take home to try on your own. Come see all the excitement of the modern database platform reimagined with SQL Server 2025.
-
SQL Server 2025 and AMD in the era of AI
Join us as we talk about how SQL Server 2025 and AMD technologies are utilizing AI, while showing you how to do the same across your applications. We’ll recap the key features of SQL Server 2025 and talk about how advanced vector extensions in AMD's processors boost the performance of your database. This session will showcase how SQL Server 2025 provides the services you need to get started quickly building AI applications securely with your own data, delivered on a foundation of AMD-powered innovation.
Brian Hibberd
Senior Database Developer, Scentsy
Brian Hibberd
Scentsy, Senior Database Developer
I love data. I find it to be the most exhilarating technical challenge I’ve known in my career. There’s the challenge of capturing it at scale, the challenge of getting answers quickly, the challenge of moving it in near real-time, the challenge of keeping it consistent (and clean!), the challenge of transforming it into its own analytical value-add to the business. The challenge of keeping it safe. I get excited about these things. I'm a 25-year veteran of IT and a recovering full-stack developer, mostly on the Microsoft stack. I've always gravitated towards the back end, which was formalized in the last decade by becoming a database developer. I've worked with SQL Server, PostgreSQL, and DynamoDB. They're fascinating technologies, always providing opportunities to learn and grow, and so right up my alley.
-
Hobby Huddle: Better Portraits With Your Smartphone Camera with Brian Hibberd
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
Brian Stauber
Coach + Consultant, Stauber Coaching and Consulting
Brian Stauber
Stauber Coaching and Consulting, Coach + Consultant
Brian Stauber is a coach and consultant specializing in improving self-knowledge, interpersonal communication, and creating sustainable and effective work environments. He is a trained facilitator and prides himself on crafting creative and thought-provoking experiences tailored to the unique personalities of teams. A former IT professional, he’s seen the massive impact a focus on understanding and listening can have on the success of projects. He's also a trained improv actor, professional Dungeon Master, and loves getting into theoretical discussions about increasing collaboration at the tabletop. Check out more information about him at https://www.staubercoaching.com/
-
Communication Skills for Better User Engagement: A Hands-On Workshop!
-
Style your BI Mullet: Getting Managed Self Service BI right for your Org.
-
Beyond the Tech: Improving Collaboration in Data Governance
Most data problems are really people problems in a trench coat. Sure, data governance tools can make it easy to implement data quality & governance (DQ&G) policies, but implementation is the easy part. The hard part? Defining – and getting your people to actually agree on – the details in those policies. No technology can decide for you who should be the data steward for which data points, nor can any DQ&G software resolve internal disputes over who should be able to access what sensitive data. Organizations often rush to implement new data systems before they resolve existing data issues. While it’s certainly more fun to explore the latest tech, the greatest risk to any data project is not choosing the wrong tool, but failing to resolve ongoing challenges related to security, data quality, and consistent data categorization and usage. Because conversations around DQ&G can become so contentious, we believe that collaborative communication strategies are an essential part of the DQ&G toolkit, one that could potentially save your organization from spending thousands in project overruns and months of delays down the road. Join self-identified Data Plumber Lenore Flower and collaboration expert Brian Stauber to learn practical strategies for tackling the most stubborn roadblocks in your data governance plan. This deep dive will provide both practical communications skills and methodologies and core items to incorporate as you build (or grow) your organization’s DQ&G practice.
Bruno Adam
DBA, DELOITTE
Bruno Adam
DELOITTE, DBA
-
Business Brews & Breakthroughs with Redgate
Camilo Leon
Principal Database Specialist Solutions Architect, Amazon
Camilo Leon
Amazon, Principal Database Specialist Solutions Architect
Camilo Leon is a Principal Solutions Architect at AWS specialized in databases and based in San Francisco, California. He works with AWS customers to provide architectural guidance and technical support for the design, deployment and management of their AWS relational database workloads and business applications. In his spare time, he enjoys mountain biking, photography, and movies.
-
SageMaker Lakehouse and RDS SQL Server, a Gen-AI Data Integration Use Case
Generative AI solutions continue to rapidly gain traction worldwide, capturing the imagination of users and customers across various industries. It has now become paramount for businesses to integrate Generative AI capabilities into their customer-facing services and applications. The challenge they often face is the need to leverage massive amounts of relational data hosted on on-premises SQL Server databases to contextualize these new Generative AI solutions. This session showcases how Amazon Relational Database Service (Amazon RDS) for SQL Server and Amazon SageMaker Lakehouse can work together to address this challenge. By leveraging the native integration points between these managed services, you can develop integrated solutions that use Retrieval Augmented Generation (RAG) to contextualize responses generated by generic foundation models and add more relevant and accurate Generative AI capabilities to existing services and applications.
Carlos Lopez
Cloud Data Architect, GBM
Carlos Lopez
GBM, Cloud Data Architect
Carlos López is a Cloud Data Architect at GBM, a Microsoft Data Platform MVP, leader of the Azure Data Tech & Fabric community group, a Redgate SQL Saturday board member, and a blogger and speaker.
-
Enhancing AI-Powered Applications with Vector Search in Azure SQL
With the rise of generative AI, integrating efficient vector search into applications is crucial for enabling intelligent retrieval-augmented generation (RAG) workflows. Azure SQL and SQL Database in Microsoft Fabric now offer native vector search capabilities, empowering developers to build AI-driven solutions with minimal effort. In this session, we’ll explore how vectorized searches enhance data retrieval and AI interactions, leveraging the latest Microsoft SQL advancements. We’ll also introduce langchain-sqlserver, a new package that allows SQL Server to function as a Vectorstore in LangChain, making AI integration seamless. Attendees will gain insights into use cases of vector search, learn how RAG-based intelligent applications are built, and discover how Azure SQL DB, LangChain, and LLMs work together to enhance AI-driven solutions. Whether you're a developer, data engineer, or AI enthusiast, this session will equip you with the knowledge to integrate vector search into your applications efficiently.
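As background for the session, here is a minimal sketch of what a vector search does conceptually: rank stored embeddings by cosine similarity to a query vector, which is the retrieval step in a RAG workflow. This is plain Python for illustration only; it is not the Azure SQL or langchain-sqlserver API, and the document names and embedding values are made up:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" for stored documents; real ones come from an embedding model.
documents = {
    "returns policy": [0.9, 0.1, 0.0],
    "shipping times": [0.2, 0.8, 0.1],
    "warranty terms": [0.7, 0.3, 0.2],
}

def vector_search(query_embedding, top_k=2):
    """Return the top_k document names most similar to the query vector."""
    ranked = sorted(documents.items(),
                    key=lambda kv: cosine_similarity(query_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

print(vector_search([0.85, 0.15, 0.05]))  # ['returns policy', 'warranty terms']
```

A real vector store does the same ranking over millions of rows with approximate-nearest-neighbor indexes instead of a full scan; the retrieved documents are then fed to the LLM as context.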
Chandan Shukla
Synechron
Chandan Shukla
Synechron
-
Redgate Masking + Math = Zero-Trust Data Analytics Without Losing Accuracy
Data masking is essential, but what happens when your application needs to compute on masked data — without revealing the underlying values? In this session, we go beyond standard Dynamic Data Masking or Redgate’s static masking, and explore how basic principles from Multi-Party Computation (MPC) and zero-trust design can be applied in SQL Server environments using Redgate tools. You’ll learn how to build a zero-trust data architecture that:
– Masks sensitive data using Redgate SQL Data Masker
– Preserves computability (e.g., masked salaries still summable)
– Leverages lightweight MPC-style math (XOR splitting, obfuscated joins)
– Allows safe analytics without exposing sensitive values
We’ll simulate examples with masked but computationally valid values (like income brackets, counts, aggregates), showing how functional security can exist without decrypting anything. This is for DBAs, security architects, and data engineers who want to protect data and still get value from it — because security shouldn't kill analytics.
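As a toy illustration of the MPC-style math this abstract gestures at, here is a minimal additive secret-sharing sketch (a close cousin of the XOR splitting mentioned above): each salary is split into two random-looking shares so that no single party sees a real value, yet aggregates still reconstruct exactly. The names and numbers are illustrative and not part of any Redgate tool:

```python
import secrets

P = 2**61 - 1  # large prime modulus; individual shares look uniformly random

def split(value):
    """Split value into two additive shares: (share_a + share_b) mod P == value."""
    share_a = secrets.randbelow(P)
    share_b = (value - share_a) % P
    return share_a, share_b

def reconstruct(share_a, share_b):
    return (share_a + share_b) % P

salaries = [52_000, 71_500, 98_250]

# Each salary is split; party A and party B each hold one share per row.
shares = [split(s) for s in salaries]
party_a = [a for a, _ in shares]
party_b = [b for _, b in shares]

# Each party sums its own shares locally; neither ever sees a real salary.
sum_a = sum(party_a) % P
sum_b = sum(party_b) % P

# Combining only the two aggregate shares reveals just the total.
total = reconstruct(sum_a, sum_b)
print(total)  # 221750, i.e. sum(salaries), without exposing any single row
```

This is the "masked but still summable" property in its simplest form; production MPC systems add protocols for multiplication, comparisons, and malicious-party resistance on top of the same idea.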
Chris Hawkins
Solutions Engineer, Redgate
Chris Hawkins
Redgate, Solutions Engineer
Chris Hawkins is a Solutions Engineer at Redgate Software, based in the UK. With over seven years of experience in the role, Chris works closely with database professionals around the world, helping them tackle challenges and achieve their goals throughout their end-to-end Database DevOps journey. This year marks his third time speaking at PASS, and he's thrilled to be back in Seattle. If you spot him around the event, don’t be a stranger—say hi!
-
Guardrails and Good Data – How Ops Teams Stay Secure at Scale
-
Guardrails and Good Data – How Ops Teams Stay Secure at Scale
-
Break the Compliance Bottleneck – Automate Secure Test Data in 60 Minutes
Rising data privacy regulations, coupled with the need for speed, mean delivering high-quality software quickly and securely is non-negotiable. This session will show you how to automate the creation and delivery of secure test data in just 60 minutes using Redgate. We’ll show you how to:
– Automatically classify sensitive data (PII, PCI, PHI, and more)
– Apply robust, policy-driven masking to meet compliance standards
– Use data subsetting to accelerate masking and reduce test data volume
– Integrate the delivery of compliant test data into your CI/CD pipelines
With a live demonstration and practical guidance, this session is perfect for DevOps engineers, DBAs, and developers looking to simplify test data provisioning while staying secure and compliant. Bring your challenges and your curiosity – this session is interactive, practical, and designed to deliver value fast.
Chris Madden
Solution Lead, Storage, Google
Chris Madden
Google, Solution Lead, Storage
Chris hails from California and is a problem-solving engineer at heart. Over the last 25 years he's taken part in technology trends from proprietary systems through to cloud-native designs, and the ever-increasing importance of IT as a competitive advantage. These days Chris is a Solution Lead at Google Cloud, focusing on compute and storage infrastructure solutions to support the most demanding applications.
-
BigQuery: Free Your Data (and Your Weekends!) – A DBA's Guide
-
SQL Server at Hyperscale: Hitting a Half Million IOPS in the Cloud
Can you really achieve a half million IOPS for SQL Server in the public cloud without breaking the bank or sacrificing availability? Yes – and this session shows you how on Google Cloud. Dive deep with us through presentations and live demonstrations as we architect a solution for maximum performance, resilience, and cost-efficiency. We'll explore:
* The 500k IOPS Formula: Configuring optimal Compute Engine instances combined with the massive throughput of Hyperdisk storage.
* Advanced HA/DR Designs: Implementing robust availability using Google Cloud's synchronous cross-zone and asynchronous cross-region mirroring capabilities.
* Cloud-Native Cost Optimization: Utilizing elasticity to precisely scale resources (up/down, in/out) and manage SQL Server license costs effectively.
Gain practical, actionable insights into building and managing hyperspeed SQL Server environments on Google Cloud. Leave ready to implement these strategies immediately.
-
Click, Deploy, Relax: Zero to SQL Server with Workload Manager
-
Train your Pokémon card recognition model, with AI and Postgres on steroids
-
Is Your Next DBA an AI ?!? Bridging AI Agents & Databases
Chris Randvere
Solutions Engineer, Redgate
Chris Randvere
Redgate, Solutions Engineer
I am a Technology Sales and Solutions Engineer with an emphasis on helping customers improve their IT environments through better technology. Nearly 30 years of experience in sales and support of enterprise products found in the data center: servers, storage, SQL Server, rack infrastructure. DevOps evangelist.
-
The Morning After – What Your Database Was Doing While You Slept
Ever start your day with a flood of alerts and no idea what happened overnight? This session is for you. We’ll show how to use monitoring history, baselines, and alert patterns to quickly understand what went wrong (and what didn’t). Learn how to prioritize issues, communicate with dev teams, and get ahead of recurring problems. Join the database experts as they demonstrate how modern diagnostic and alerting capabilities can help you sleep better at night.
Chris Sorensen
President, Iteration Insights Ltd.
Chris Sorensen
Iteration Insights Ltd., President
Chris Sorensen CPA, CGA, is the Founder and President of Iteration Insights, a Calgary-based Microsoft Partner in the Data and AI space. With several Microsoft Certifications and more than 20 years of consulting and teaching experience, Chris brings a practical lens to his training, going beyond theory to everyday technology use. Chris has overseen hundreds of data projects across diverse industries, delivering solutions and empowering organizations with analytics. He is a published author for Microsoft Press, covering topics in data analytics, Power BI, and Microsoft data certification exam prep.
-
Unlocking the Power of Microsoft Fabric: A Beginner's Guide
-
Building A Data Competency Around The Microsoft Suite Of Data Products
This session explores the strategic implementation of a robust data competency within organizations that are looking to leverage, or already leverage, the Microsoft suite of data and analytics products such as Excel, Power BI, Fabric, and Azure. We will describe the many roles needed to support a well-functioning analytics competency across an organization, not just in IT. Most importantly, we emphasize the importance of empathy across a practice as you progress on the path to excellence. Attendees will gain an understanding of how to build a cohesive data framework that integrates these tools from the perspective of people and process. We will delve into methodologies for training teams to effectively utilize Excel for complex data analysis, create impactful visualizations with Power BI, and use Fabric for data integration. Join us to discover best practices, overcome common challenges, and unlock the full potential of Microsoft’s data ecosystem in fostering a culture of data excellence within your organization.
Chris Woods
DBA Manager, Consilio
Chris Woods
Consilio, DBA Manager
Chris Woods manages the DBA team at Consilio, overseeing hundreds of thousands of eDiscovery databases, many of which are exceptionally large. He has spent more than 26 years architecting, automating, and performance tuning MSSQL environments. A self-proclaimed scripting enthusiast, he loves finding creative solutions for managing and deploying databases at scale. Outside of the data world, he enjoys traveling with the family and has a passion for science fiction books, movies, and shows. You can find him at sqlcwby.com and on LinkedIn. And yes, he is that SQL guru's doppelgänger.
-
Modern Database Development: Real-World Lessons from the Front Lines
Join a panel of seasoned database professionals and industry experts as they dive into the toughest challenges facing modern development and operations teams. From navigating monolithic legacy systems, to wrangling with the data layer in the age of AI, this session explores the real-world roadblocks teams encounter when deploying databases at scale. You'll hear firsthand from organizations about their strategies for reducing downtime risk, managing inconsistent processes across diverse environments, and improving code quality. Whether you’re a developer, DBA, or DevOps leader, you’ll leave with practical insights and proven approaches to modernize your database deployment practices – no matter how complex your estate.
Christian Henrik Reich
Principal Architect, twoday Data & AI Denmark
Christian Henrik Reich
twoday Data & AI Denmark, Principal Architect
Christian currently works in twoday Data & AI Denmark's department for Technologies and Architecture, and is part of Mugato as a senior developer and AI developer. He started programming as a kid and still does, having built everything from embedded systems to data warehouses. Over the last decade his focus has mainly been on data, from optimization and infrastructure to designing and building data solutions in the cloud and on-premises.
-
An Apache Spark query's journey through the layers of Microsoft Fabric
-
Outperform Spark with Python Notebooks in Fabric
-
Beyond Chatbots: Leveraging AI for Unstructured Data Processing in Fabric
-
ML and AI Capabilities in Microsoft Fabric
Microsoft Fabric is becoming the one-stop shop for data in Azure, including machine learning and AI. Fabric's maturity is starting to enable real projects with its machine learning and AI capabilities. As with many other aspects of Fabric, there are also new libraries and tools for machine learning and AI. These might be different, especially for those coming from Azure ML. We’ll also discuss how Azure ML remains relevant and how to navigate Fabric’s workspace, capacities, and associated costs to maximize your project's efficiency and potential.
-
Transforming Data into Gold: A Live Demonstration of MS Fabric Lakehouse
Christoph Petersen
EMEA Solution Lead, Google
Christoph Petersen
Google, EMEA Solution Lead
Christoph is a Solution Lead in the EMEA Infrastructure Solutions Practice, helping customers migrate workloads to Google Cloud at scale. He specializes in Microsoft workloads running on Google Cloud, addressing both the technical and commercial aspects. Before joining Google, Christoph held various roles at Microsoft in consulting and in technical pre-sales for Azure. Christoph has spoken at conferences such as Microsoft TechCon, Google Tech Summit, and PASS Data Community Summit, as well as at user groups like Azure Meetups and Google Cloud community events.
-
SQL Server at Hyperscale: Hitting a Half Million IOPS in the Cloud
Can you really achieve a half million IOPS for SQL Server in the public cloud without breaking the bank or sacrificing availability? Yes – and this session shows you how on Google Cloud. Dive deep with us through presentations and live demonstrations as we architect a solution for maximum performance, resilience, and cost-efficiency. We'll explore:
* The 500k IOPS Formula: Configuring optimal Compute Engine instances combined with the massive throughput of Hyperdisk storage.
* Advanced HA/DR Designs: Implementing robust availability using Google Cloud's synchronous cross-zone and asynchronous cross-region mirroring capabilities.
* Cloud-Native Cost Optimization: Utilizing elasticity to precisely scale resources (up/down, in/out) and manage SQL Server license costs effectively.
Gain practical, actionable insights into building and managing hyperspeed SQL Server environments on Google Cloud. Leave ready to implement these strategies immediately.
-
Click, Deploy, Relax: Zero to SQL Server with Workload Manager
-
BigQuery: Free Your Data (and Your Weekends!) – A DBA's Guide
-
Train your Pokémon card recognition model, with AI and Postgres on steroids
-
Is Your Next DBA an AI ?!? Bridging AI Agents & Databases
Christopher Crow
Technical Marketing Engineer, Pure Storage
Christopher Crow
Pure Storage, Technical Marketing Engineer
Chris has been in IT since 1999 and specializes in cloud-native applications, cloud infrastructure, Kubernetes, and open-source technologies. He currently works in technical marketing, providing automation and training materials for students to learn Kubernetes and related technologies. Chris resides in Tacoma, Washington, where he enjoys playing bass, camping with his family, and nerding out on craft beer.
-
Database DevOps: CD for Stateful Applications
Running stateful applications on Kubernetes can provide many of the same advantages as stateless applications. In this talk, Stephen and Chris will share some thoughts on managing stateful applications as part of a CD Pipeline so that applications – and the application's data – can be versioned and deployed safely and repeatedly. This talk will discuss managing persistent data and databases within Kubernetes, as well as managing structural changes to a database as part of a CD process. The talk will dive into automation approaches and tooling for managing data migrations between environments and running database migrations within a CI/CD pipeline. The talk will feature real-world examples where we discuss specific schema migrations and their performance impacts. We will also discuss how to leverage governance to ensure compliance while empowering your developers through automation. With Kubernetes and Liquibase, we can provide something better than before: A more testable, repeatable, and open way to deploy stateful applications. This talk features a practical demo of how CD tooling can empower users to automate data migrations within Kubernetes.
Christopher Schmidt
Microsoft
Christopher Schmidt
Microsoft
-
Effortless Data Transformation with Microsoft Fabric
The ability to transform and analyze data as it arrives is a game-changer for businesses seeking timely and actionable insights. This session will introduce you to some of the strengths of Real Time Intelligence in Microsoft Fabric, focusing on its capabilities for effortlessly transforming data. Say goodbye to complex orchestration paths and brittle pipelines, and let the database automatically handle the transformation for you. We'll delve into the core concepts, practical applications, and best practices that make KQL an essential tool for modern data analytics. Join us to learn how you can unlock the potential of event driven architectures and revolutionize the way you approach data transformation and analytics.
Cindy Gross
CEO, Befriending Dragons
Cindy Gross
Befriending Dragons, CEO
Cindy Gross is a leadership navigation coach who helps women and non-binary leaders in tech lead with courage, clarity, and confidence. A former Microsoft data and cloud architect with 25+ years in enterprise tech, she knows firsthand what it means to thrive in high-pressure, male-dominated environments—while carrying the emotional labor of being “the only one.” Now, as the founder of Befriending Dragons, Cindy supports underestimated leaders who are ready to stop shape-shifting to survive and start leading from their full selves. She blends systems thinking, equity-centered coaching, and lived experience to help clients navigate burnout, bias, and big decisions with grounded insight and practical strategy. Her work invites leaders to reclaim their voice, set powerful boundaries, and lead in ways that are both effective and deeply aligned. Cindy’s keynotes and workshops have sparked reflection and change in tech teams, leadership cohorts, coaching communities, and public sector organizations. She speaks with sharp insight, mythic perspective, and deep compassion for what it really takes to lead when the system wasn’t built for you. Her message is simple and powerful: you’re not broken. You get to lead as your full self—and when you do, you change everything.
-
Ignite Your Inner Leader: Build Boundaries & Influence While Thriving
-
I Got the Role…. Now What?
You fought for it. You worked long and hard, grew in ways you didn’t know you could, and sacrificed other parts of your life to land that dream leadership role. And now… what? Like the dog who finally caught the big red car, you’re staring at this new reality, feeling off-balance and unsure of what comes next. Welcome to the Dragon Toolkit—not just checklists and how-to guides, but real conversations about what happens inside you when everything shifts. The relationships that evolve or disappear. The discomfort of no longer being the expert at everything. The changed power dynamics that leave you questioning where you stand. I get it. I went from a hyper-logical, high-stakes role at Microsoft to deeply empathetic, one-on-one coaching, asking people to talk about what makes them uncomfortable. That was a culture shock. I had to redefine everything—how I saw myself, how I communicated, and how I led in an unfamiliar space. In this session, we’ll talk about navigating uncertainty, embracing growth, and becoming the leader you already know you can be. You’ll leave with a Dragon Toolkit packed with curious questions, awareness exercises, and evidence-based strategies to help you step into leadership with clarity and confidence.
-
Communicating Through Leadership Friction
A spontaneous session that takes a deep dive into the real communication struggles technical experts face when engaging with non-technical stakeholders, where the goal isn't just clarity, but building trust and strategic influence. We explore common pitfalls, from jargon overload to the 'invisible cost' of unarticulated technical decisions, and share practical strategies for translation and engagement. You'll walk away with actionable insights and the confidence needed to convert technical expertise into recognized business value across the organization.
Conor Cunningham
Partner Architect/Engineering Manager, Azure Data, Microsoft
Conor Cunningham
Microsoft, Partner Architect/Engineering Manager, Azure Data
Conor has worked at Microsoft for 27 years building database engines including Jet Red (the original engine inside Microsoft Access), SQL Server, Azure SQL Database, Synapse Data Warehouse, and Fabric Data Warehouse. He is the author of several peer-reviewed papers, numerous patents, and a co-author of SQL Server 2008/2012 Internals, where he wrote the chapter on how the query optimizer in SQL works. Conor led the releases of SQL Server 2016, 2017, and 2019, and more recently he has been building and running an engineering team in Austin, Texas. His research interests include query optimization, scalability, distributed systems, query execution performance on modern hardware architectures, and designing future architectures for databases and data processing. Conor has a B.S. in Computer Science from The University of Texas at Austin and an M.S. in Computer Science from the University of Washington, Seattle.
-
An Inside Look at Building SQL Server/Azure and Fabric Data Warehouse
Have you ever wondered how Microsoft builds and releases its database engines (SQL Server, SQL Azure, Fabric Data Warehouse)? This talk goes through some of the behind-the-scenes details about how the engineering team works, how features come together, and how the engineers learn how to build the software you use to manage your data. Having the Regional Summit in Dallas gives us a unique opportunity to talk about our engineering office in Austin and how it contributed features and enhancements to recent releases. Come and learn about which SQL features are built right here in Texas!
Damir Bulic
CEO, Spectral Core
Damir Bulic
Spectral Core, CEO
Damir Bulic is the CEO of Spectral Core, an ISV with customers in more than 100 countries. With over 30 years of professional experience, Damir has deep and broad expertise in multiple software fields. He is the original author and architect of Spectral Core’s software but now leaves (most of) development to the highly capable Spectral Core team. Some of his work:
– Emparsen: a state-of-the-art parser generator built from the ground up. It produces strongly typed SQL parsers that run up to 500x faster than those generated by ANTLR.
– Full Convert: a data migration tool capable of seamlessly migrating tables and data between approximately 50 database engines. On the market since 2004, it has thousands of happy customers in over 100 countries. It supports both creating the target from scratch and comparison/on-demand replication.
– Omni Loader: a sophisticated data migration tool for the largest databases, capable of migrating over 50 TB per day on a single machine. Comes with full CDC support.
– SQL Tran: a SQL code translator with unparalleled accuracy and capabilities. It runs 1000x faster than other tools on the market and includes static analysis (name binding, broken reference tracking), data lineage, target database creation, synthetic data generation, and a sophisticated correctness and performance testing framework. SQL Tran can fully translate SQL codebases of multiple millions of lines of code in under a minute, on a single machine.
– SQL App: a database management tool.
– SQL Format: an autocomplete and refactoring plugin for SSMS.
– Documenter: a database schema documentation tool.
– Luceed: co-author of a highly successful ERP system on the market for over 25 years. Luceed is used by thousands of businesses in 10+ countries.
– Wrote a Windows system loader from scratch by reverse-engineering the PE file format, enabling software renting long before SaaS was a thing.
– Embryo: co-author of a 3D game for Amiga written in MC680x0 assembler. Also wrote both the 3D and 2D modelers for it.
-
Super-fast migration to Fabric
Lightning-Fast Migrations to Fabric, by Damir Bulić, CEO of Spectral Core, explores the technical and strategic challenges of migrating databases to Microsoft Fabric and how Spectral Core’s tools address them. Spectral Core, an independent software vendor with over 20 years in business and customers in 100+ countries, specializes in fully self-tuning database migration solutions. As an official Microsoft partner for Fabric migrations, the company handles both data and code migration.
Fabric, a rapidly evolving and massively distributed platform, differs fundamentally from SQL Server and Azure Synapse. Its Parquet-based storage engine impacts all aspects of migration, creating challenges in data types, ingestion, and code compatibility. These include collation and UTF-8 differences, size mismatches in NCHAR/NVARCHAR, BLOB size limits, and lack of CLR/XML support. Optimal data ingestion requires Parquet conversion, but extraction methods often yield suboptimal files. Code migration hurdles include case sensitivity, unsupported statements, and missing features like computed columns, default values, and identity columns.
Spectral Core’s approach involves building emulations to bridge feature gaps, covering aspects such as casing, cursor handling, default values from ALTER TABLE, named parameters, sequence handling, merge operations, identity inserts, SET ROWCOUNT, table variables, sys view access, and user-defined types.
Two flagship products power these migrations: Omni Loader, which supports over 30 database engines, achieves up to 70 TB/day migration on a single machine, and provides schema creation, replication, and change data capture; and SQL Tran, a bespoke SQL code translation IDE capable of translating millions of lines of SQL in seconds, building full data lineage models, catching errors, and auto-generating tests for correctness and performance.
Damu Venkatesan
Data Consultant, SQL Data BI
Damu Venkatesan
SQL Data BI, Data Consultant
Damu Venkatesan is a seasoned Data Consultant in the IT industry. Specializing in architecting and implementing data solutions, he has extensive expertise with Microsoft Fabric, SQL Server, Excel, and SharePoint. Throughout his career, Damu has successfully delivered numerous Business Intelligence (BI) and Data Warehouse (DW) solutions, as well as managed complex data migrations, for clients across diverse sectors including healthcare, financial services, and government. Damu is an active leader in the data community, serving as the co-leader of the Microsoft Fabric User Group in Atlanta and as a board member of the DAMA Georgia chapter. He is also a certified Data Management Professional (CDMP), Fabric Analytics Engineer, and Power BI Data Analyst. His passion for leveraging cutting-edge technologies to solve real-world challenges makes him a sought-after expert and speaker at industry events.
-
Data Modeling for Analytics
Do you want to improve your ability to model data for analytics and reporting? Join this session to learn the foundational techniques for building data models that will meet your organization’s needs—regardless of industry. In this beginner-friendly session, you will gain a solid understanding of why good data modeling is crucial, how to create and maintain effective data models, and the key principles to follow in the process. We’ll start by exploring how to identify essential data domains and elements and organize them in a way that sets the foundation for accurate modeling. From there, we’ll dive into dimensional modeling techniques, specifically focusing on the star schema. This type of model is highly effective for enabling easy and insightful reporting, dashboard creation, and data visualization across your organization. By the end of the session, you’ll walk away with a clear understanding of core data modeling concepts and techniques that you can apply immediately when working with tools like Power BI in Fabric or Microsoft Excel for data analysis. Whether you're just starting out or looking to refresh your skills, this session will equip you with the knowledge to model data that powers insightful decisions.
-
DAX 101
Daniel Cai
Managing Director, KingswaySoft
Daniel Cai
KingswaySoft, Managing Director
Daniel Cai is the Managing Director and Founder of KingswaySoft, a company specializing in high-quality software solutions for data connectivity, data integration, and data synchronization. With a strong background in designing and developing large-scale enterprise applications, Daniel's expertise lies in leveraging the latest technologies to solve complex data challenges. He is best known for his work on the KingswaySoft Integration Toolkit, which has become the go-to solution for thousands of organizations worldwide. Daniel is a frequent speaker at industry conferences and a trusted voice in the data integration community, known for blending deep technical insight with real-world applications.
-
Mastering Data Sync with KingswaySoft: Strategies for Reliable Integration
In today’s data-driven world, organizations rely on seamless integration across platforms to ensure data consistency, reliability, and accuracy. KingswaySoft offers a suite of powerful integration toolkits that enable businesses to synchronize data between systems like Microsoft Dynamics 365, Salesforce, SharePoint, and SQL Server with ease. In this session, we'll explore how to design and implement robust, efficient, and scalable synchronization processes between various data sources and destinations. You’ll learn best practices for handling incremental data updates, managing complex data transformations, and ensuring data integrity. Whether you’re synchronizing cloud systems, on-premises databases, or hybrid environments, this session will provide you with the knowledge to build reliable and maintainable ETL solutions for your data synchronization needs.
-
The Enduring Power of SSIS – Supercharging the Proven ETL Platform with Modern Connectivity
Ever wonder if SSIS is "legacy"? Or is it a battle-tested, high-performance ETL engine that's being underestimated? This session challenges the "legacy" label and dives deep into advanced SSIS development for today's hybrid cloud world. We'll explore how to leverage the SSIS Integration Runtime in Azure Data Factory, but more importantly, how to solve one of today's pressing challenges: modern data connectivity. In this session, we will learn how to supercharge your ETL development on the SSIS platform, including the use of third-party components like KingswaySoft to connect to modern applications (Salesforce, Dynamics 365, REST APIs, SharePoint Online) without writing a single line of code. We will walk through concrete, real-world use cases, comparing the "hard way" (custom development) vs. the "smart way" (connector-based development). You'll leave with practical strategies to make SSIS a fast, powerful, and maintainable core of your modern data platform.
Danny Kruge
Mission Critical Engineer, Schuberg Philis
Danny Kruge
Schuberg Philis, Mission Critical Engineer
Danny Kruge is a Microsoft Data Platform MVP with a robust background in SQL Server and over a decade of experience in the field. Originally from Leicester, England, and now residing in the Netherlands, he has dedicated years to honing his skills and expanding his expertise into areas like Azure and Windows Engineering. Danny is particularly passionate about Azure Data Factory, where he leverages his deep knowledge to design efficient data workflows. As an active community leader, he organizes the Azure Heroes Netherlands Community Day and the Azure user group NL, helping to foster knowledge-sharing and collaboration among tech enthusiasts. Committed to continuous learning and community support, Danny enjoys sharing his insights to help others navigate data challenges and avoid common mistakes.
-
Automating SQL Permissions Scripting
-
Chat With Azure SQL
-
Migrating 8TB of Data
-
Cross-Tenant Azure SQL Migrations
Discover how to perform seamless, no-downtime migrations of Azure SQL databases across tenants using geo-replication and the Microsoft backbone. This session will delve into the technical steps to set up secure cross-tenant geo-replication without exposing public endpoints. Learn how to maintain read-only replicas in the current tenant to keep applications running during the migration process, ensuring business continuity. Whether you’re planning a tenant-to-tenant migration or exploring strategies for hybrid operational setups, this session will equip you with practical insights and best practices to achieve secure, efficient, and downtime-free transitions.
Danny de Haan
Solutions Engineer, Redgate Software
Danny de Haan
Redgate Software, Solutions Engineer
Danny has been working in IT for nearly 20 years in various roles. With over 15 years of experience with SQL Server, he has always placed a strong emphasis on Security, Risk Management, and Compliance for the SQL Server Data Platform, ensuring industry best practices and regulatory requirements are followed from an infrastructural point of view. Alongside speaking at public events, Danny works as a Solutions Engineer for Redgate Software, helping customers on their Database DevOps journey.
-
Kerberos: A Dive into Delegation & SQL Server
In this session, we will explore the inner workings of Kerberos authentication in Active Directory and its critical role in modern network security, with a special focus on delegation and its application to SQL Server environments. Kerberos is the gold standard for secure authentication in distributed systems, but its complexities – particularly around delegation – can be challenging to navigate. This talk will break down the key concepts of Kerberos, explaining what it is and how it works, from the login process up to the use of services like SQL Server. We'll look into delegation and its technical aspects, how the various delegation types work, including their weaknesses, and how this relates to our SQL Data Platform. Last but not least, we'll also take a look at attack scenarios hackers use and what you can do to prevent certain attacks.
-
Securing the SQL Server Data Platform: Defining a Security Strategy
In today’s world of escalating cyber threats, securing your SQL Server environment has become a critical priority. The phrase "It's not a matter of if, but when you're going to be hacked" has never been more true. With external attacks becoming more sophisticated, organizations must prepare for the inevitable and safeguard their SQL Server data platform against data breaches, unauthorized access, and data loss. This talk will focus on the essential steps to secure SQL Server environments against external threats and prevent the loss of critical data. We'll explore how to design a security strategy that reduces the attack surface and ensures your data is protected – even when facing advanced, targeted attacks. From securing network connections and enforcing strong authentication mechanisms to implementing encryption, we'll discuss a range of practical, actionable measures that will help you safeguard your SQL Server Platform from malicious external actors. We'll also cover how to monitor and audit these settings, not only for yourself, but also for compliance purposes.
-
Database Development with a Security-First Mentality
-
Kerberos: A dive into delegation & SQL Server
-
Securing the SQL Server Data Platform: Defining a Security Strategy
In today’s world of escalating cyber threats, securing your SQL Server environment has become a critical priority. The phrase "It's not a matter of if, but when you're going to be hacked" has never been more true. With external attacks becoming more sophisticated, organizations must prepare for the inevitable and safeguard their SQL Server data platform against data breaches, unauthorized access, and data loss. This talk will focus on the essential steps to secure SQL Server environments against external threats and prevent the loss of critical data. We'll explore how to design a security strategy that reduces the attack surface and ensures your data is protected – even when facing advanced, targeted attacks. From securing network connections and enforcing strong authentication mechanisms to implementing encryption, we'll discuss a range of practical, actionable measures that will help you safeguard your SQL Server Platform from malicious external actors. We'll also cover how to monitor and audit these settings, not only for yourself, but also for compliance purposes.
-
Database Development with a Security-First Mentality
-
Future-Proofing Your Database Estate: Smarter Monitoring for Strategic Growth
As database environments evolve, spanning hybrid infrastructures, diverse platforms, and growing performance demands, it's no longer enough to monitor what's happening now. Strategic estate planning requires a forward-looking approach. In this session, you'll discover how you can use Redgate Monitor to plan for the future with confidence. Learn how to leverage historical performance data, disk usage trends, and alert patterns to inform decisions around patching, capacity planning, and workload optimization. We'll explore how to identify risks before they become issues, and how to adapt your monitoring strategy to support long-term goals. You’ll also get a first look at new deployment options designed for scalability and cost-efficiency, including running Redgate Monitor’s data repository on PostgreSQL with TimescaleDB. Whether you're managing SQL Server, exploring PostgreSQL, or preparing for a cloud-first future, this session will equip you with practical insights and tools to evolve your monitoring strategy and manage your estate with agility and foresight.
-
Kerberos: A dive into delegation & SQL Server
-
Securing the SQL Server Data Platform: Defining a security strategy
-
Database Development with a Security-First Mentality
In today’s increasingly digital world, ensuring the security of data is paramount. This 75-minute session will dive into the essential practices and principles of secure SQL Server database development. Attendees will learn how to adopt a proactive approach to database security from the initial stages of development, incorporating best practices to safeguard sensitive information against unauthorized access, data breaches, and malicious attacks. The session will explore critical security features within SQL Server, such as encryption, authentication, role-based access control, and auditing. Furthermore, we will address the importance of regular security assessments and continuous monitoring to stay ahead of evolving threats. By the end of the presentation, developers will have the knowledge and tools to integrate security seamlessly into the database development lifecycle, ensuring their SQL Server databases are resilient and compliant with industry standards and regulations.
David Bermingham
Senior Technical Evangelist, SIOS Technology
David Bermingham
SIOS Technology, Senior Technical Evangelist
David Bermingham is a recognized expert in Windows Server Failover Clustering and SQL Server high availability, with over 20 years of experience helping organizations design and implement resilient infrastructure solutions across on-prem, hybrid, and cloud environments. A former Microsoft Cloud and Cluster MVP, David currently serves as the Director of Customer Success at SIOS Technology, where he works closely with enterprises deploying mission-critical applications in Azure, AWS, and Google Cloud. David is a frequent speaker at global conferences including SQL Saturday, Microsoft Ignite, and cloud vendor technical summits. His expertise spans SQL Server Availability Groups, Distributed Availability Groups, and SANless Failover Cluster Instances, with a practical focus on simplifying HA/DR architectures across multi-cloud environments.
-
Building Resilient SQL Server HA/DR in a Multi-Cloud World
Running SQL Server in the cloud is common. Running it reliably across multiple cloud providers? That’s where things get interesting. In this session, we’ll dive into how to architect SQL Server high availability and disaster recovery solutions that span Azure, AWS, and Google Cloud—giving you resilience that goes beyond a single provider’s boundaries. As a former Microsoft Clustering MVP and Cloud & Datacenter MVP, I’ve worked with organizations that need to reduce risk, avoid lock-in, and meet aggressive RTO/RPO goals—all while keeping SQL Server highly available. We'll look at real-world, multi-cloud designs using familiar SQL Server technologies like Always On Availability Groups and Failover Cluster Instances (FCIs), and explore how to overcome the networking, storage, and domain challenges that make multi-cloud tough. Whether you’re designing an active/passive deployment with a secondary in a different cloud, or trying to build cross-cloud failover into your business continuity plan, you’ll get practical guidance and architecture patterns you can use. This isn’t theoretical—these are real solutions for real-world problems. Bring your cloud curiosity and your SQL Server uptime expectations. You’ll leave knowing how to take them both to the next level.
Davide Mauri
Principal Product Manager, Microsoft
Davide Mauri
Microsoft, Principal Product Manager
Experienced Principal Product Manager with a 20+ year track record of innovation across AI, cloud databases, and enterprise-scale data platforms. Former Microsoft MVP (12 years), speaker, and author. Adept at leading cross-functional teams, driving developer-first product strategies, and launching industry-defining features. Passionate about empowering developers and customers through scalable, intelligent, API-first, SQL-enabled solutions.
-
Becoming Azure SQL DBA – Copilot and AI
In this session you will learn how to unlock the future of data productivity with a hands-on exploration of AI-powered Copilots! We’ll dive into Copilot in SQL Server Management Studio (SSMS), Microsoft Copilot in Azure with NL2SQL and SQL, and Copilot for SQL Databases in Microsoft Fabric. Discover how each Copilot meets you where you are, and how each introduces efficiency into your SQL workflow. With real-world demos to showcase their power in action, and many best practice tips along the way, you’ll leave inspired and ready to transform how you interact with SQL, anywhere. In each of the areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role – highlighting what responsibilities are new, which ones stay the same, and what is shared or fully delegated to Microsoft. You will walk away with an understanding of the relevant DBA skills you need to evolve as an Azure SQL DBA.
-
Be a SQL Python Hero with VS Code, GitHub Copilot & MSSQL-Python Driver
Ready to supercharge your SQL development workflow? This 60-minute session shows how Python is becoming an essential companion for SQL developers. Discover how the enhanced MSSQL extension for Visual Studio Code, combined with GitHub Copilot, accelerates everything from schema design to data generation, import/export, and query writing. We’ll dive into real-world demos that showcase how Python scripts can seamlessly integrate with SQL Server, Azure SQL, and Fabric SQL databases using the new mssql-python driver—bringing security, performance, and flexibility to your projects. Whether you’re building apps, automating tasks, or exploring advanced analytics, this session will help you understand the full potential of SQL + Python and take advantage of the latest tools to stay ahead of the curve.
-
Building Scalable Secure AI-Ready Apps with Azure SQL Hyperscale
Build AI apps that run securely and scale with your needs with Azure SQL Database Hyperscale. We’ll cover native vector indexes for semantic search, read scale-out for low-latency RAG, and calling the model of your choice directly from T-SQL. We will show how to build modern AI Agents using all the tools you need with databases and MCP Servers. Modernize your AI application using the power of Azure SQL Database Hyperscale.
-
Becoming Azure SQL DBA – New Opportunities in Azure – Panel Discussion
As an Azure SQL DBA, your role expands beyond its traditional scope. This shift in responsibilities creates exciting opportunities to develop new cloud skills – from integrating with diverse Azure services to adopting modern coding practices with AI – empowering you to grow and innovate. This wrap-up session of the learning pathway includes a 10-minute introduction followed by a dynamic 50-minute interactive panel discussion. Bring your questions and engage directly with experts ready to share insights and practical guidance. Moderated by Bob Ward, this session is your opportunity to clarify concepts, explore best practices, and connect with industry leaders.
Dean Furness
Principal Analytics Consultant, Wells Fargo
Dean Furness
Wells Fargo, Principal Analytics Consultant
Dean became a paraplegic following an accident in 2011, and now speaks to groups in both large keynotes and small settings about grit, determination, and what it takes to keep moving through daily challenges. His story was recognized as a "Top 20 of 2020" TED Talk, garnering more than 4 million views. A significant part of his story details his adventures in wheelchair racing, which have led him to compete in major marathons around the world, including the Chicago, London, and Boston Marathons. Professionally, Dean is a data and analytics specialist, focusing on data visualization and digital dashboard solutions that help executives with decision-making activities. Dean is a married father of three living in Martensdale, IA, where he enjoys woodworking and cooking.
-
Placing Focus Where It Matters Most
We have been measured all of our lives, from when we were young to today in our daily efforts. Most often, these measurements lead us into the trap of being compared with each other. But what if we looked at this from a different angle? One that helps you put focus on you? This talk presents taking a different approach by defining your personal average and placing focus there in an effort to break through the comparison trap. 15 years ago, I became a paraplegic. I'll detail my story of overcoming many challenges, helping me become a better person, spouse, and parent – and leading me back to a leadership position in my career all while competing in major marathons around the world in my wheelchair. I'll present the concept of 'personal average' and describe how it has helped me overcome many challenges since my accident.
Deborah Melkin
Data Engineer, Advisor360
Deborah Melkin
Advisor360, Data Engineer
Deborah Melkin has been working as a database professional with SQL Server for over 20 years. She spends her days helping programmers with all aspects of database design, queries, performance, and deployment. In 2016, she began her blog, Deb the DBA. Soon after that, she began speaking at SQL Saturdays and user groups. Deborah is a co-leader of the Data Platform Women in Tech (WIT) Virtual User Group and co-founder of WITspiration, a WIT mentorship circle. She is a former board member of the New England SQL Server User Group. She was named as One to Watch as part of the #Redgate 100 in 2022 and won Speaker Idol at PASS Summit 2019. Deborah is also a Microsoft MVP for the Data Platform (2020- ). In her spare time, Deborah can usually be found doing something musical or something geeky with her husband, Andy, and their dog, Sebastian.
-
Optimized Locking: Improving SQL Server Transaction Concurrency
Some aspects of the SQL Server engine have not seen much change for a long time – locking is one such component. Until now. Optimized locking is here, and it affects not just the locking mechanisms but the way concurrency is handled. This session is designed to help us understand how optimized locking works. First we will cover the foundation on which it was built, the version store. Then we’ll dig deep into the two components of optimized locking: Transaction ID (TID) locking and lock after qualification (LAQ). Finally, we’ll review some best practices to follow once optimized locking is enabled. By the end of this session, you should have a clear understanding of how it works and why we want to take advantage of this new functionality.
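For a concrete starting point ahead of the session, here is a minimal T-SQL sketch of enabling and verifying optimized locking on a version that supports it (the database name is a placeholder; accelerated database recovery is a prerequisite, and read committed snapshot isolation is what allows LAQ to kick in):

```sql
-- Optimized locking requires Accelerated Database Recovery (ADR);
-- RCSI is needed for lock after qualification (LAQ) to apply.
ALTER DATABASE [YourDatabase] SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT = ON;
ALTER DATABASE [YourDatabase] SET OPTIMIZED_LOCKING = ON;

-- Confirm the settings took effect
SELECT name,
       is_accelerated_database_recovery_on,
       is_read_committed_snapshot_on,
       is_optimized_locking_on
FROM sys.databases
WHERE name = N'YourDatabase';
```

On Azure SQL Database, optimized locking may already be enabled by default, so the verification query alone is often enough.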
-
A Beginner's Guide to Becoming a Performance Tuner – T-SQL Edition
-
Debating Two Viewpoints on Indexing – Development vs Operations
-
The Benefits of Mentoring
-
Community Conversation: Beyond the Data – Building Meaningful Connections in the Global Community
The global data community is more than just a network; it’s a vibrant space to learn, share, and grow together. Our guests will share how they make the most of this amazing community by building genuine relationships, attending events, and embracing the power of asking for help. Whether you’re looking to expand your knowledge, connect with peers, or simply find support when you need it, this discussion will show you how to turn community engagement into a rewarding experience.
-
Hidden Pathways to Achieving Peak SQL Server Performance
-
Debating Two Viewpoints on Indexing – Development vs Operations
Deepthi Goguri
Database Administrator, Database Administrator
Deepthi Goguri
Database Administrator, Database Administrator
Deepthi is a SQL Server Database Administrator with several years of experience administering SQL Server. She is a Microsoft Data Platform MVP, Microsoft Certified Trainer, and Microsoft Certified Professional with Associate- and Expert-level certifications in Data Management and Analytics. Deepthi blogs at DBANuggets.com. She is a co-organizer for the Microsoft Data and AI South Florida user group, Data TGIF, the Cloud Data Driven User Group, the Future Data Driven Summit, the Databash Conference, and the Data Platform Diversity, Equity, and Inclusion Virtual Group. She is also a DEI Steering Committee member for the PASS Data Community Summit and a Redgate Community Ambassador. You can contact her on Twitter @dbanuggets.
-
Performance Tuning: How Query Store Saves the Day (and Night)
Struggling with plan regressions, inconsistent query performance, or those dreaded late-night production calls? Let Query Store be your superhero! In this session, you’ll discover how this powerful, often underused feature in SQL Server can bring visibility, control, and stability to your query performance – day or night. This demo-centric session will explore real-world performance issues and show exactly how Query Store swoops in to save the day. Whether you're dealing with rogue plans, tuning nightmares, or just trying to keep things humming, you'll leave with practical insights and tuning superpowers! Topics covered include: 1) root cause analysis with Query Store; 2) plan forcing and automatic plan correction; 3) SQL Server 2022 Query Store advancements; 4) the top 10 things not to forget while using Query Store; 5) things to avoid while using Query Store; 6) best practices and troubleshooting tips.
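To illustrate the plan-forcing topic above, a brief T-SQL sketch – the database name and the query/plan IDs are placeholders you would look up in your own Query Store:

```sql
-- Enable Query Store (on by default for new databases in SQL Server 2022)
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

-- Find the slowest queries and their plans captured by Query Store
SELECT TOP (10)
       qt.query_sql_text, q.query_id, p.plan_id, rs.avg_duration
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q  ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan  AS p  ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration DESC;

-- Force a known-good plan for a regressed query (IDs from the query above)
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 17;
```

Forcing a plan is a stopgap, not a fix; the session's best-practice material covers when to un-force and let automatic plan correction take over.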
-
Zero Trust in Action: Mastering Azure SQL Security Like a Pro
-
Hobby Huddle: How the Mind Works and Non-Duality learnings for Mental Health with Deepthi Goguri
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
Denny Cherry
Principal Consultant, Denny Cherry & Associates Consulting
Denny Cherry
Denny Cherry & Associates Consulting, Principal Consultant
Denny Cherry is the owner and principal consultant for Denny Cherry & Associates Consulting, with over two decades of experience working with platforms such as Azure, Microsoft SQL Server, Hyper-V, vSphere, and Enterprise Storage solutions. Denny’s areas of technical expertise include system architecture, performance tuning, security, replication, and troubleshooting. Denny currently holds several Microsoft Certifications related to SQL Server, covering versions 2000 through 2022, including the Microsoft Certified Master, as well as being a Microsoft MVP for almost two decades. Denny has written several books and dozens of technical articles on SQL Server management, as well as how SQL Server integrates with various other technologies.
-
Database Administration for the Non Database Administrator
-
Azure Infrastructure
-
Calculating Costs for SQL Server – On-Premises vs. Cloud
In this session, we’ll review the factors that go into a cost calculation, both for on-premises and for cloud-based systems. These include compute (CPU/RAM), storage, SQL Server licensing, and the other factors that go into managing systems long term, such as geographical redundancy, high availability, and disaster recovery options – both for the databases and for the front end. Additionally, we will review what sorts of training people should be looking for from their companies as they make the move from on-premises servers to cloud servers.
-
Getting SQL Service Broker Up and Running
-
SQL Service Broker Advanced Performance Tips and Tricks
-
Maintain the Same Level of Utilities in Azure – Security, Reliability & Scale
-
Index Internals
-
Selecting the Correct Azure Data Solution for your Application
There are several different data platform solutions for use within your application. Selecting the right option can make the difference between a well-performing application and a poorly performing one, not to mention the cost aspect of choosing the wrong solution. In this session, we'll look at the options of Azure SQL Database, Azure SQL Database Managed Instance, and Cosmos DB to see when each is the right choice and when it isn't, from both a price and performance perspective.
Denny Lee
Databricks
Denny Lee
Databricks
Denny Lee is a long-time Apache Spark™ and MLflow contributor, Unity Catalog and Delta Lake maintainer, and a Product Management Director and Principal Developer Advocate at Databricks. He is a hands-on distributed systems and data sciences engineer with extensive experience developing internet-scale data platforms, predictive analytics, and AI systems. He has previously built enterprise DW/BI and big data systems at Microsoft, including Azure Cosmos DB, Project Isotope (HDInsight), and SQL Server. He was also the Senior Director of Data Sciences Engineering at SAP Concur. He holds a Master of Biomedical Informatics from Oregon Health & Science University and has implemented powerful data solutions for enterprise healthcare customers. His current technical focuses include AI, distributed systems, Delta Lake, Apache Spark, deep learning, machine learning, and genomics.
-
Sponsor Luncheon: Simplify Your Data Storytelling with Databricks AI/BI and Apps
Ready to turn your data into compelling stories without wrestling with complex code? Join us for an engaging Lunch and Learn where we'll show you how Databricks AI/BI transforms the way you work with data—making analytics easier and more conversational. We'll start by showing you how to create insightful dashboards in just minutes – not days – with AI-powered tools that handle the complex work for you. Simply ask questions in plain English, let AI recommend visualizations, and watch your data come to life. But sometimes dashboards aren't enough. Your business users might need something more engaging—a custom app that allows them to easily filter, explore, and act on their data in ways that static dashboards can't quite match. This session is perfect for anyone who wants to spend less time coding and more time discovering insights. You'll leave with practical knowledge of modern data storytelling tools and a clear understanding of when to build a dashboard versus when to level up to an interactive app. This session will cover: – AI/BI Dashboards: Create interactive visualizations with minimal code using Databricks AI/BI's intuitive interface – AI/BI Genie: Leverage AI-assisted features to ask questions, generate insights, and spot trends – Databricks Apps: Discover when and how to build interactive apps that give business users superpowers. No PhD in data science required—just bring your curiosity and your toughest data questions!
Dominick Raimato
Manager, Data, AI, and Adoption/Change Services, SHI International, Inc.
Dominick Raimato
SHI International, Inc., Manager, Data, AI, and Adoption/Change Services
A Microsoft Data Platform MVP and self-identified data nerd, Dominick Raimato has a huge passion for identifying insights and building solutions. You never know what kind of data project he might be working on – from tracking temperature and humidity trends in his house to building a database of property assessments to make sure his house is in line with other properties and he isn't overpaying taxes. Passionate about people, Dominick strives to create solutions that not only help achieve business objectives, but also make people’s lives better. If he can help someone get home earlier to eat dinner with their family or eliminate time-consuming tasks so they can focus on more important things, he is in his element. Dominick currently lives in Bergen County, NJ, where he can be found around the house working on projects or cooking in the kitchen. His specialty dishes include his homemade meat sauce in the winter and pulled pork bar-b-que on his smoker during the summer. He also ventures into New York City frequently to see his wife perform as a flutist with small ensembles, orchestras, and theaters.
-
Hey Real-Time Intelligence – Should I Go Outside Today?
I love it when people tell me they need real-time data. More often than not, this is not really true. However, many of us work with processes and systems that could truly benefit from the value of having data shared in real time. Enter Real-Time Intelligence – the solution that will help you harness the value of the real-time data being produced within your organization and make it actionable! If you have people pestering you for data that must be received in real time, then this session is for you! We will walk through the basics of building out a weather prediction solution that leverages the Real-Time Intelligence solution within Microsoft Fabric. We will go through the process to collect weather data on a Raspberry Pi, send it to Fabric via Azure IoT Hub, and connect to the data in Microsoft Fabric. From there, we will transform, predict, and alert you on whether or not you should go outside today! While sample connectors are cool, it will be fun to see us manipulate our sensors in the room and see predictions in real time! At the end of this session, you will feel comfortable understanding the end-to-end process of integrating your IoT-connected devices into Fabric’s Real-Time Intelligence hub, seeing a prediction, and taking an action as a result!
-
Ingesting REST API data with Microsoft Fabric
-
Simulating Scenarios with Power BI
-
From the Audience to a Speaker on the Stage
Dr. Dani Ljepava
Product Manager, Microsoft
Dr. Dani Ljepava
Microsoft, Product Manager
Dani is a Senior Product Manager at Microsoft, responsible for product development of the flagship Azure SQL Managed Instance PaaS service. His areas of expertise in the Azure SQL domain include high availability, hybrid links, backup and restore, intelligence, monitoring, automatic tuning, data mobility, migration experiences, and Copilot AI. He brings more than 15 years of product innovation experience worldwide – from Silicon Valley start-ups innovating Internet technologies to enterprise innovation in the intelligent cloud space.
-
Becoming Azure SQL DBA Advancing the Role of the On-Premises SQL Server DBA
-
Becoming Azure SQL DBA workshop: Graduate your on-prem DBA skills
-
A day with Azure SQL Managed Instance – the hands-on master class
-
Azure SQL Managed Instance – from ground to cloud
-
Migrating SQL Server to Azure SQL and Microsoft Fabric – The Master Class
-
Accelerate your Heterogenous Migrations using Code conversion Copilot
-
Unlocking the Power of Azure Arc for SQL Database Hybrid Data Management
-
The Ultimate Guide to High Availability and BCDR for all Azure SQL Services
-
Azure SQL Managed Instance Next-gen General Purpose
-
The ultimate guide to modernizing SQL Server databases to Azure SQL
-
5 common challenges in managing Azure SQL Managed Instance
-
Azure SQL Observability
-
Azure SQL Managed Instance: Improving Flexibility in Platform and Hardware
-
10 Cool things about SQL Managed Instance
-
Azure SQL Managed Instance Demo Party
-
Data mobility with Azure SQL
-
Becoming Azure SQL DBA – High Availability and BCDR
In this session you will learn how to evolve your Azure SQL DBA skills in the domain of High Availability (HA), Business Continuity and Disaster Recovery (BCDR) from the perspective of an on-premises DBA. With examples of SQL Server hosted in Azure VMs and the fully managed PaaS services Azure SQL Database and Azure SQL Managed Instance, you will gain a deep understanding of HA and BCDR architectures in Azure, and any new responsibilities you might have as an Azure SQL DBA. You will understand how HA for the General Purpose and Business Critical service tiers works, and how automated patching and maintenance windows work in Azure SQL. You will also gain a deep understanding of how automated short- and long-term backups work in Azure SQL. Furthermore, you'll understand advanced concepts of geo disaster recovery with Failover Groups, and disaster recovery between SQL Server and Azure SQL Managed Instance. In each of these areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role – which responsibilities are new, same as ever, shared, or fully delegated to Microsoft. You will walk away with a deep understanding of how your on-prem DBA skills evolve to Azure SQL DBA in these areas.
-
Accelerate SQL Server Migration with Azure Arc to Next-Gen Azure SQL Managed Instance
Discover how Azure Arc accelerates SQL Server modernization and migration to Azure SQL Managed Instance with a seamless, Microsoft Copilot-assisted experience. Explore benefits of the next-generation General Purpose Azure SQL Managed Instance, a fully managed database service that delivers a free performance upgrade, five times more databases, and ultimate flexibility in resource configuration compared to the previous generation, significantly improving your total cost of ownership. Explore how Azure Arc provides a unified migration experience in the Azure portal: from automated assessments and at-scale views of your SQL Server data estate to provisioning SQL Managed Instance, real-time database replication, and cutover – enabling near-zero downtime migration. What once required weeks can now be completed in days. Confidently migrate, optimize, and unlock the full potential of your data estate. This session is delivered by the Microsoft SQL Server product group, offering an opportunity to connect and network with the team behind these products.
-
From SQL to Insights – Azure SQL to the power of Fabric Mirroring
-
Becoming Azure SQL DBA – New Opportunities in Azure – Panel Discussion
As an Azure SQL DBA, your role expands beyond its traditional scope. This shift in responsibilities creates exciting opportunities to develop new cloud skills – from integrating with diverse Azure services to adopting modern coding practices with AI – empowering you to grow and innovate. This wrap-up session of the learning pathway includes a 10-minute introduction followed by a dynamic 50-minute interactive panel discussion. Bring your questions and engage directly with experts ready to share insights and practical guidance. Moderated by Bob Ward, this session is your opportunity to clarify concepts, explore best practices, and connect with industry leaders.
Duan Uys
Solutions Architect, WhereScape
Duan Uys
WhereScape, Solutions Architect
Duan Uys is a seasoned data professional with over a decade of experience architecting and implementing data warehouses and software solutions for clients in diverse sectors, including finance, healthcare, and retail. He has dedicated his career to streamlining manual processes, empowering teams to navigate operational challenges, and accelerating the delivery of high-impact results. Through these efforts, Duan has gained invaluable expertise in the best practices, and common pitfalls, of developing solid data foundations.
-
The Need for Speed: Agile Prototyping in Microsoft Fabric
In this session, discover how agile principles can revolutionize data warehouse development through rapid prototyping. We'll examine automated modeling techniques, iterative schema design, and efficient workflows that dramatically shorten cycles from concept to deployment. Attendees will gain real-world insights into integrating varied data sources, validating prototypes early to minimize risks, and scaling solutions to production, all supported by practical examples and best practices for adapting to changing business requirements.
Dustin Vannoy
Lead Specialist Solutions Architect, Databricks
Dustin Vannoy
Databricks, Lead Specialist Solutions Architect
Dustin Vannoy is a data engineer and solutions architect experienced in solving business problems with analytics and big data solutions. He is passionate about all aspects of data engineering, especially building data platforms and streaming data pipelines. He is experienced in using cloud technologies to transition legacy ETL jobs into streaming pipelines and building out a modern lakehouse architecture. He currently focuses on building data platforms and pipelines in Apache Spark / Databricks, Kafka, Python, and Scala. Dustin is a technical leader in San Diego and the co-founder of the San Diego Data Engineering Group. He now encourages others to grow their data skills by creating tutorials and speaking at user groups and conferences.
-
Azure Databricks for Data Warehousing: Make the Right Choices
You've seen the feature announcements—Lakeflow, AI/BI Dashboards, SQL Warehouses—but how do they actually work together? More importantly, when should you use each one? With rapid evolution and even some name changes along the way, Databricks has transformed from a big data and AI/ML platform into a data warehousing powerhouse. This session cuts through the marketing buzz to deliver a practitioner's guide to the features, options, and best practices that are key to success in building your data warehouse on Databricks. Whether you're migrating from a traditional warehouse or building greenfield, you'll learn exactly which tools to reach for and when. What we'll cover: introducing the Data Intelligence Platform; intelligent data warehousing; ingesting with Lakeflow Connect; transforming with Lakeflow Declarative Pipelines; querying with Databricks SQL Warehouse; visualizing with AI/BI Dashboards; and best practices and migration tips. By the end of this talk, you'll have a clear decision framework for when to use each component and how your team can leverage Databricks SQL Warehouse, Lakeflow, and AI/BI capabilities to build a performant, cost-effective data warehouse for your organization!
-
Sponsor Luncheon: Simplify Your Data Storytelling with Databricks AI/BI and Apps
Ready to turn your data into compelling stories without wrestling with complex code? Join us for an engaging Lunch and Learn where we'll show you how Databricks AI/BI transforms the way you work with data—making analytics easier and more conversational. We'll start by showing you how to create insightful dashboards in just minutes – not days – with AI-powered tools that handle the complex work for you. Simply ask questions in plain English, let AI recommend visualizations, and watch your data come to life. But sometimes dashboards aren't enough. Your business users might need something more engaging—a custom app that allows them to easily filter, explore, and act on their data in ways that static dashboards can't quite match. This session is perfect for anyone who wants to spend less time coding and more time discovering insights. You'll leave with practical knowledge of modern data storytelling tools and a clear understanding of when to build a dashboard versus when to level up to an interactive app. This session will cover: – AI/BI Dashboards: Create interactive visualizations with minimal code using Databricks AI/BI's intuitive interface – AI/BI Genie: Leverage AI-assisted features to ask questions, generate insights, and spot trends – Databricks Apps: Discover when and how to build interactive apps that give business users superpowers. No PhD in data science required—just bring your curiosity and your toughest data questions!
Edward Pollack
Sr. Data Architect, Transfinder
Edward Pollack
Transfinder, Sr. Data Architect
Edward Pollack has over 20 years of experience in database and systems administration, architecture, and development, becoming an advocate for designing efficient data structures that can withstand the test of time. He has spoken at many events, such as SQL Saturdays, PASS Community Summit, Dativerse, and at many user groups and is the organizer of SQL Saturday Albany. Edward has authored many articles, as well as the book "Dynamic SQL: Applications, Performance, and Security", and a chapter in "Expert T-SQL Window Functions in SQL Server". His first patent was issued in 2021, focused on the compression of geographical data for use by analytic systems. In his free time, Ed enjoys video games, sci-fi & fantasy, traveling and baking. He lives in the sometimes-frozen icescape of Albany, NY, with his wife Theresa and sons Nolan and Oliver, and a mountain of (his) video game plushies that help break the fall when tripping on (their) toys.
-
Advanced Optimization with Columnstore Indexes
Columnstore indexes provide an excellent solution to storing large analytic data in SQL Server. They can tame tables with millions or billions of rows with ease. Understanding their architecture, compression, and best practices can allow for significantly faster workloads while saving computing resources. This session dives into the details of how columnstore indexes work. Topics discussed will include: columnstore encoding, optimization, and compression algorithms; segment and rowgroup elimination; optimization of secondary indexes; ordered columnstore indexes; and best practices for data load processes. This is an advanced dive into columnstore indexes that will allow data professionals interested in this technology to make the most of its features in SQL Server 2022+. Learn the biggest secrets and details of how analytic data is stored in SQL Server – no whitepapers required!
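As a small preview of the ordered-columnstore topic, a hedged T-SQL sketch (the table `dbo.Sales`, the `SaleDate` column, and the index name are hypothetical examples):

```sql
-- Ordered clustered columnstore index (SQL Server 2022+ / Azure SQL).
-- Sorting on a frequently filtered column improves segment elimination;
-- MAXDOP = 1 trades build time for a cleaner global sort.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales
ON dbo.Sales
ORDER (SaleDate)
WITH (DROP_EXISTING = ON, MAXDOP = 1);

-- Inspect segment metadata: tight min/max ranges per segment mean
-- more segments can be skipped at query time
SELECT s.segment_id, s.min_data_id, s.max_data_id, s.row_count
FROM sys.column_store_segments AS s
JOIN sys.partitions AS p ON s.hobt_id = p.hobt_id
WHERE p.object_id = OBJECT_ID(N'dbo.Sales');
</imports-stripped>
```

The session covers when this sort is worth the extra load cost, which is exactly what the segment metadata above helps you judge.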
-
Performance Optimization on Modern Data Platforms
-
Hobby Huddle: Baking with Ed Pollack
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
Edwin M Sarmiento
Managing Director, 15C Inc
Edwin M Sarmiento
15C Inc, Managing Director
Edwin M Sarmiento is the Managing Director of 15C, a consulting and training company that specializes in designing, implementing and supporting SQL Server infrastructures. He is a 12-year former Microsoft Data Platform MVP and Microsoft Certified Master from Ottawa, Canada (but he’s originally from the Philippines) specializing in high availability, disaster recovery and system infrastructures running on the Microsoft server technology stack. He is very passionate about technology but has interests in music, neuroscience, social psychology, professional and organizational development, leadership and management matters when not working with databases. Edwin lives up to his primary mission statement: "To help people and organizations grow and develop their full potential."
-
Upgrade Strategies for Highly Available SQL Server Environments
-
Proactively Identifying and Dealing with SQL Server Database Corruption
-
High Availability Basics for the DBA: What SQL Server Cannot Do For You
-
Getting Real-World Experience in Tech as Fast as Possible
-
Always On Availability Groups and Failover Clustering Internals
Windows Server Failover Clustering (WSFC) is the platform that makes SQL Server Always On Availability Groups highly available. There are external interactions between WSFC and SQL Server that you as a DBA cannot monitor from within SQL Server. You need to know the different components that make up the WSFC and how they work to properly deploy and manage Always On Availability Groups for high availability. In this session, you will dive into the internals of the WSFC, the SQL Server cluster resource DLL, and how they work together to make your SQL Server databases highly available. You will analyze the cluster error log for identifying the root cause of an unexpected or planned failover. Finally, you will learn administrative tasks that you're not supposed to do outside of SQL Server.
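While much of the WSFC state lives outside the engine, some of it is surfaced through the HADR dynamic management views; a short T-SQL sketch of the kind of visibility you do have from within SQL Server (run on a replica that is part of an availability group):

```sql
-- Cluster membership and quorum votes as SQL Server sees them
SELECT member_name, member_state_desc, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;

-- Availability group replica roles and synchronization health
SELECT ag.name AS ag_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_groups AS ag
JOIN sys.dm_hadr_availability_replica_states AS ars
  ON ag.group_id = ars.group_id;
```

Anything these views cannot answer – such as why a failover fired – is where the cluster log analysis covered in this session comes in.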
Elayne Jones
Solutions Architect, Coca-Cola Bottlers' Sales & Services
Elayne Jones
Coca-Cola Bottlers' Sales & Services, Solutions Architect
Elayne Jones is a Solutions Architect at Coca-Cola Bottlers Sales and Services. She specializes in data visualization and data modeling using Power BI. She has expertise in developing Power Apps and creating Power Platform solutions that drive efficiency within organizations. Elayne is skilled at querying data using the DAX and SQL languages. Elayne is an experienced Microsoft technology trainer and has authored numerous blog posts expressing her passion for Business Intelligence and Analytics.
-
Reign in the Chaos: Organizing Fabric Workspaces
Between Lakehouses, Semantic Models, Dataflows, Reports, and Paginated Reports, Fabric Workspaces can become a tangled mess. In this session, learn best practices for both organizing and documenting your Fabric Workspaces. By utilizing Fabric folders to group related content, content creators can speed up development. By leveraging the Lineage View, developers can analyze downstream dependencies to mitigate negative impacts. Finally, learn how to empower a Center of Excellence to document and audit item permissions.
-
Practical Power BI Version Control
-
Lay Down the Law as a Fabric Administrator
-
Become a Data Wizard Using Generative AI with Power BI
-
Integrating Power Apps with Power BI Reports
Elena Drakulevska
Dataviz & UX Consultant, MoonStory
Elena Drakulevska
MoonStory, Dataviz & UX Consultant
Hi there! I’m Elena, a Microsoft Data Platform MVP and BI & UX Consultant passionate about turning data into meaningful, user-friendly stories. My goal? To help businesses create reports that are not just visually stunning but also intuitive and accessible. Through my blog MoonStory and speaking at international conferences, I share my love for combining dataviz and UX Design to empower teams to see their data in a whole new way. When I’m not working magic with data, you’ll find me exploring the world, picking up new languages, doing yoga, or getting lost in a great book.
-
User-Centered Power BI Report Development: Enhancing UX and Accessibility
A well-designed Power BI report should engage, inform, and be accessible to all users. Yet many reports suffer from poor usability, cognitive overload, and accessibility barriers, making insights harder to interpret and act upon. In this interactive workshop, you’ll explore UX best practices and digital accessibility principles to create reports that are intuitive, clear, and inclusive. Through hands-on exercises, case studies, and live critiques, you’ll gain practical strategies to enhance usability and accessibility in your Power BI reports.
What You’ll Learn:
– Identify audience needs and design for different personas
– Apply UX best practices to improve clarity and reduce cognitive fatigue
– Recognize and fix common accessibility challenges in Power BI reports
– Integrate accessibility checks and automation into your reporting workflows
The workshop includes two key segments:
– UX-Driven Report Design: Learn about different audiences and layout strategies, and improve usability through a hands-on redesign challenge.
– Accessibility in Power BI: Experience digital barriers firsthand, apply real-time fixes, and explore Power BI’s accessibility features to enhance inclusivity.
By the end, you’ll have actionable techniques, tools, and best practices to build user-friendly, effective, and inclusive Power BI reports.
-
Power BI Meets UX/UI Design: Creating Insightful, User-Centered Reports
-
Through the Lens: Applying Photography Principles to Power BI Reports
-
Designing for Everyone: Power BI Through the Eyes of Your Users
We all want to build reports that work. But what if your “user-friendly” report is only friendly to users like YOU? In this session, we’ll explore how your Power BI design decisions impact people with different ways of seeing, thinking, and interacting with data. Using real-world scenarios, we’ll uncover the subtle ways our reports can either empower—or unintentionally exclude—the people they’re meant to help. This isn’t another lecture on accessibility standards. It’s a mindset shift. A people-first approach to designing Power BI reports that are not only functional, but fundamentally inclusive and impactful for everyone who relies on your data.
You’ll walk away with:
• A new lens for understanding your audience
• Practical tips to improve clarity, usability, and reach
• Inspiration to design with empathy and impact
Great design isn’t about perfection—it’s about people. And when we design for “edge cases,” we’re actually designing better experiences for everyone.
Elena Marquetti-Ali
President, Marquetti Consulting
Elena Marquetti-Ali
Marquetti Consulting, President
Elena is a highly rated professional speaker with 20+ years of experience as a leadership coach, group facilitator, and collegiate faculty member. Elena works with groups, individuals, and organizations to amplify their authenticity and empower them to become effective leaders. Elena is the President and Lead Consultant of Marquetti Consulting.
-
Code to Clarity: Communicating Technical Ideas to a Non-Technical Audience
In an era where technological advancements shape our world, the ability to communicate complex technical concepts to non-technical stakeholders is paramount. "Code to Clarity" is a dynamic session designed to help bridge the communication gap. Uncover the power of storytelling, visual aids, and analogies to demystify intricate technical jargon, fostering a shared understanding and facilitating more effective collaboration. Whether you're a developer, engineer, or tech leader, this session empowers you with the skills to articulate your ideas, innovations, and solutions in a way that captivates and enlightens diverse stakeholders. Elevate your communication prowess and leave equipped to transform complexity into clarity, fostering a stronger connection between the tech realm and the broader world.
-
Alone Together: Team-Building for Remote Teams
-
EI Leadership: The Human Skill Behind High-Performing Teams
Eric Peterson
Database Architect, Ingo Money
Eric Peterson
Ingo Money, Database Architect
Over 30 years of SQL Server experience with expertise in all phases of database design, implementation and maintenance. I regularly speak at SQL Saturdays and other technical conferences.
-
Encryption Strategies for SQL Server
-
How to Establish an Air-Gapped SQL Server for DR
-
Auditing Your SQL Server For Security, PCI, SOX And Data Access
Auditing has become the standard for determining whether a company has the security controls in place to process credit cards, obey data protection laws, and review who accessed what data. To pass an audit, you need to have several processes in place before you are asked to provide a review of your SQL Server system. This presentation will cover:
• Quick ways to identify who has access to what data in your SQL Server system
• Things that PCI and SOX auditors look for when it comes to your SQL Server system
• How to set up and keep audit data of when and how a user accesses confidential data
• What to do when a user requests that their data be removed from your system under data privacy laws
All of the examples will be available on GitHub for downloading after the presentation.
Erik Darling
Consultant, Darling Data
Erik Darling
Darling Data, Consultant
Erik has been working with SQL Server just about forever, in really challenging DBA, developer, and architect roles. He started consulting for Brent Ozar Unlimited in 2015 and founded Darling Data in 2019, where he has worked with over 600 very happy clients. He produces lots of free community content, including blogging, training, and open-source projects, and speaks about SQL Server at conferences all over the world.
-
SQL Server Performance Engineering: Techniques that Actually Work
Every SQL Server professional needs to understand not just what makes queries fast, but what makes them slow. This knowledge is essential whether you're aiming to become a performance wizard or just trying to make those problematic queries stop holding up the business. As a consultant who resolves complex SQL Server performance headaches for clients worldwide, I'll share the exact analysis and troubleshooting techniques that deliver real results. No theoretical fluff—this comprehensive session reveals my battle-tested approach to identifying problematic queries, designing indexes that actually help, and implementing changes that make a measurable difference. You'll gain practical insights into hardware considerations, query rewrites that actually fix problems, indexing patterns that survive in production, and more. I focus on techniques that work in the real world, not just in perfect lab conditions. By the end of this session, you'll have the tools and confidence to diagnose and resolve SQL Server performance issues without resorting to guesswork or prayer.
-
SQL Server Performance Therapy: Relationship Tips for Queries and Indexes
-
For a Few Rows More: Mastering Row Goals in SQL Server
In this performance tuning deep dive, Erik Darling exposes the strange and often overlooked impact of row goals on SQL Server execution plans. Through practical demonstrations, you'll see how these optimizer shortcuts, designed to retrieve just enough rows to satisfy a query, can silently steer execution plans in ways that drastically affect performance—sometimes for better, sometimes for worse. You'll walk away with immediately applicable techniques for recognizing, troubleshooting, and strategically implementing row goals in your environment. Learn how to finesse these mechanisms to your advantage, using them deliberately to push SQL Server toward faster, more efficient plans. If you want to understand why SQL Server sometimes makes bewildering decisions, and how to take back control of your queries, saddle up—it's time to go looking for a few rows more. These essential techniques are vital for anyone serious about advanced SQL Server performance optimization.
-
The Metric Mindset: What Modern Performance Tuners Need To Care About
-
A Fistful of Parameters: Take Care and Control of Unstable Plans
In this high-octane session, Erik Darling tackles the notorious parameter sniffing problems that plague SQL Server performance. You'll learn to identify when the optimizer's plan caching decisions turn lightning-fast queries into server-crushing disasters due to shifting parameter values. Through real execution plans, you'll see how SQL Server makes critical optimization choices based on first-seen parameter values, often creating performance catastrophes for subsequent executions. Erik demonstrates practical techniques to identify skewed distribution statistics and reveals battle-tested methods to take care and control of your execution plans. Whether dealing with simple procedures that run fast for most users but tank for others, or complex multi-parameter queries where optimal plans vary wildly, you'll leave armed with concrete strategies to maintain stable performance regardless of parameter values. Essential knowledge for any SQL professional tired of watching carefully tuned queries fall apart when parameters change.
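To make the mechanism the abstract describes concrete, here is a toy sketch (in Python, purely as an illustration, not SQL Server's actual optimizer) of how a plan compiled for the first-seen parameter value gets reused for every later value:

```python
# A toy model of parameter sniffing: the "optimizer" picks a strategy
# based on the first parameter value it sees, then reuses that cached
# plan for every later call, even when it's a poor fit.
plan_cache = {}

def pick_plan(table, param):
    # Selective predicate -> index seek; unselective -> full scan.
    matching = sum(1 for row in table if row == param)
    return "seek" if matching / len(table) < 0.1 else "scan"

def run_query(table, param, query_id="q1"):
    if query_id not in plan_cache:           # first call compiles a plan...
        plan_cache[query_id] = pick_plan(table, param)
    return plan_cache[query_id]              # ...every later call reuses it

table = ["rare"] + ["common"] * 99
print(run_query(table, "rare"))    # seek: great for the rare value
print(run_query(table, "common"))  # seek again: reused, bad for 99 rows
plan_cache.clear()                 # plays the role of OPTION (RECOMPILE)
print(run_query(table, "common"))  # scan: the right plan for this value
```

Clearing the cache entry stands in for OPTION (RECOMPILE): the next execution gets a plan chosen for its own parameter value, at the cost of compiling again.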
-
Advanced T-SQL Triage: The Art of Fixing Terrible Code
You’ve seen it before: the procedure that looks like it was generated by an AI trained on Stack Overflow and despair. It’s got MERGE. It’s got RIGHT JOINs. It’s got logic so tangled you’d need a flowchart, a flashlight, and a therapist to debug it. And now… it’s your problem. In this full-day festival of query-fixing, Erik Darling and Kendra Little lead you through the real-world mysteries of advanced T-SQL: the strange, the slow, and the occasionally cursed. You’ll tackle tangled paging logic, rescue window functions and indexed views from spools and spills, and finally learn when to keep a CTE—and when to yeet it. We’ll refactor data modifications that block like linebackers, decode procedural patterns, and write dynamic SQL that’s powerful and polite. You’ll learn when to CROSS APPLY, dig into views vs. inline TVFs, and discover why RIGHT JOIN is not simply LEFT JOIN’s syntactic twin. We’ll uncover when user-defined functions wreck your query execution plans—and how to rewrite them with flair. If you’ve ever been curious about why that query sometimes takes SO long and how to best rewrite it without just guessing, this is your playground. Expect fast demos, big laughs, and a glorious cheat sheet to take home. Because refactoring SQL isn’t just necessary—it’s super fun when you're in the right party.
-
T-SQL That Doesn’t Suck: Real-World Patterns for Faster, Smarter Queries
Let’s be honest: even experienced developers write T-SQL that starts to smell over time. Maybe it works, but it’s tangled, hard to maintain, and full of traps for future you. In this full-day, demo-packed session, Erik Darling and Kendra Little will walk you through the real-world query problems that quietly haunt OLTP systems—and how to fix them without rewriting the whole app. We’ll dissect the subtle stuff that tanks performance: implicit conversions, sneaky NULL logic, non-sargable filters, and joins that don’t do what you think they do. You’ll compare EXISTS to JOINs, untangle OR conditions, and learn when EXCEPT and INTERSECT save you from disaster. You'll see where views go off the rails, when temp tables and table variables shine, and how to modify data in a way that won’t make your DBA cry. Along the way, you’ll learn to leverage window functions, cross apply, and patterns for parameterization that hold up under pressure. You’ll know exactly how to refactor messy code into queries that are easier to understand, debug, and evolve—without sacrificing intent or introducing subtle bugs. If you’ve ever looked at a query and thought, “I have no idea what this does, and I’m afraid to touch it,” this session is for you. You already know how to write T-SQL that works. Now it’s time to write T-SQL you’re proud of.
Erin Stellato
Principal Program Manager, Microsoft
Erin Stellato
Microsoft, Principal Program Manager
Erin Stellato is a Principal Program Manager on the SQL Experiences team, helping advance tools that customers use daily with Azure SQL. She is passionate about data and chocolate, but not always in that order. She previously worked as a consultant and was a Data Platform MVP, and has been an active member of the SQL Server community as both a volunteer and speaker. Her areas of interest within the engine include Query Store, Extended Events, statistics, and performance tuning. Erin also enjoys helping accidental/involuntary DBAs…or anyone who's interested…understand how SQL Server works.
-
Becoming Azure SQL DBA – Performance Monitoring, Tuning, and Alerting
In this session you will learn how to extend your Azure SQL DBA skills in the domain of performance monitoring, tuning, and alerting from the perspective of an on-premises DBA. While there are similarities between a fully managed Azure SQL PaaS service and SQL Server, in this session you will gain a deeper understanding of performance monitoring, troubleshooting, tuning, and alerting specific to Azure. Learn how to use the cloud-native database watcher monitoring solution and Query Performance Insight to monitor and identify database performance issues in Azure. We'll step through automatic tuning and automated plan regression correction (APRC), and demonstrate how to use Resource Health to understand the health of the environment, as well as how to set up alerts to quickly identify performance issues. Finally, we'll discuss how to optimize performance with resource right-sizing, choosing the appropriate storage type, and configuring file structures. In each of these areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role, highlighting which responsibilities are new, which stay the same, and what is shared or fully delegated to Microsoft. You will walk away with an understanding of the relevant DBA skills you need to evolve as an Azure SQL DBA.
-
Smarter GitHub Copilot + SSMS 22
Discover how GitHub Copilot is transforming the way you write T-SQL and optimize your SQL databases inside SQL Server Management Studio (SSMS) 22. In this session, we’ll showcase the newest SSMS 22 features alongside real-world demos of GitHub Copilot, highlighting how AI assistance can speed up query writing, reduce errors, and boost productivity. You’ll learn best practices for getting the most out of Copilot in your daily workflow and see firsthand how SSMS 22 + GitHub Copilot can take your efficiency to the next level.
-
Breakfast with the Microsoft Data Leadership Team
Get your day started early at PASS Data Community Summit with a free breakfast and a Q&A session with a panel of leaders across Microsoft hosted by Bob Ward. Tell us what is top of mind for you across SQL Server, Azure SQL, Microsoft Fabric and topics like AI. This is always one of the most popular sessions at the PASS Data Community Summit, so you won’t want to miss it!
-
Becoming Azure SQL DBA – New Opportunities in Azure – Panel Discussion
As an Azure SQL DBA, your role expands beyond its traditional scope. This shift in responsibilities creates exciting opportunities to develop new cloud skills – from integrating with diverse Azure services to adopting modern coding practices with AI – empowering you to grow and innovate. This wrap-up session of the learning pathway includes a 10-minute introduction followed by a dynamic 50-minute interactive panel discussion. Bring your questions and engage directly with experts ready to share insights and practical guidance. Moderated by Bob Ward, this session is your opportunity to clarify concepts, explore best practices, and connect with industry leaders.
-
Becoming Azure SQL DBA – Copilot and AI
In this session you will learn how to unlock the future of data productivity with a hands-on exploration of AI-powered Copilots! We’ll dive into Copilot in SQL Server Management Studio (SSMS), Microsoft Copilot in Azure with NL2SQL and SQL, and Copilot for SQL Databases in Microsoft Fabric. Discover how each Copilot meets you where you are, and how each introduces efficiency into your SQL workflow. With real-world demos to showcase their power in action, and many best practice tips along the way, you’ll leave inspired and ready to transform how you interact with SQL, anywhere. In each of the areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role – highlighting what responsibilities are new, which ones stay the same, and what is shared or fully delegated to Microsoft. You will walk away with an understanding of the relevant DBA skills you need to evolve as an Azure SQL DBA.
-
Inside SQL Server 2025
Join Bob Ward and friends to go deep into the next major release of SQL Server, SQL Server 2025, the Enterprise AI-ready database. You will learn the fundamentals of what capabilities are in the release so you can plan and make key decisions on when and how to upgrade. This session will then go deep into all the major features including but not limited to: AI built-in, JSON, RegEx, REST APIs, Change Event Streaming, Fabric Mirroring, new concurrency enhancements, performance improvements, HA enhancements, and security. You will learn all the latest innovations of SQL Server 2025 including plenty of demonstrations and sample code you can take home to try on your own. Come see all the excitement of the modern database platform reimagined with SQL Server 2025.
Erland Sommarskog
Consultant, Erland Sommarskog SQL-Konsult AB
Erland Sommarskog
Erland Sommarskog SQL-Konsult AB, Consultant
Erland Sommarskog is an independent consultant based in Stockholm, working with SQL Server since 1991. He was first awarded SQL Server MVP in 2001, and has been re-awarded every year since. His focus is on systems development with the SQL Server Database Engine and his passion is to help people to write better SQL Server applications.
-
Collations – All You Wanted to Know and More!
-
Introduction to Regular Expressions in SQL Server
Regular expressions have been used in computing environments for decades for advanced search-and-replace operations. In SQL Server we have had the LIKE operator forever, but anyone who has tried to do advanced pattern matching has realised that its capabilities are quite limited. And if you want to perform replace operations, you have been even more limited. This is changing! Regular expressions are finally coming to SQL Server. Support for regular expressions was recently announced as being available in public preview in Azure SQL Database and Fabric SQL Database, and it has also been announced for inclusion in SQL Server 2025. In this session we will learn how regular expressions work in SQL Server, starting with the very basic operations. Through the session we will move to more advanced features, and I will show some examples of complex find-and-replace operations we can do with regular expressions that previously were difficult or impossible to implement in T-SQL. To benefit from this session, you don't need any previous knowledge or experience of regular expressions, but as the title says: this is an introduction. SQL-wise, I will assume that you have used the LIKE operator and maybe some of the other built-in string functions.
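For a taste of what LIKE cannot do, here is a small illustration using Python's re module; the pattern syntax shown is broadly the flavour that the announced SQL Server functions (such as REGEXP_LIKE and REGEXP_REPLACE) accept, and Python is used here only because the idea is language-agnostic:

```python
import re

# A replace that reorders text via capture groups: turn DD/MM/YYYY
# dates into ISO YYYY-MM-DD. LIKE and plain REPLACE cannot express this.
text = "Invoices due 25/12/2024 and 03/01/2025."
iso = re.sub(r"\b(\d{2})/(\d{2})/(\d{4})\b", r"\3-\2-\1", text)
print(iso)  # Invoices due 2024-12-25 and 2025-01-03.

# A predicate-style match, the way a REGEXP_LIKE-style function is used
# in a WHERE clause: exactly three uppercase letters, a dash, four digits.
assert re.fullmatch(r"[A-Z]{3}-\d{4}", "ABC-1234") is not None
```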
Ezat Karimi
Amazon
Ezat Karimi
Amazon
-
Full Text Search and Semantic Search in PostgreSQL and OpenSearch
-
Multi-Tenant Healthcare Systems with OpenSearch
-
Oracle to Amazon Aurora PostgreSQL Migration: Challenges, Strategies
Organizations face complex challenges when migrating from Oracle to Amazon Aurora/RDS PostgreSQL. In this presentation, we'll explore key migration hurdles, including SQL syntax variations and proprietary Oracle features. A significant focus will be on developing the right target architecture. We'll discuss target platform strategies that satisfy requirements and address identified pain points, along with proper capacity planning and performance benchmarking. While automated tools like AWS DMS and SCT facilitate aspects of the migration, we'll highlight areas requiring manual intervention and strategic planning. Post-migration challenges will be discussed, including adapting to PostgreSQL's MVCC architecture, vacuuming processes, query optimization techniques, statistics management, and indexing strategies. We'll emphasize the importance of developing comprehensive testing suites and validation routines, and discuss strategies for ensuring data integrity, functional equivalence, and performance parity between the source Oracle system and the target PostgreSQL environment. We'll explore the operational shift required when moving to a managed database service, including adapting to the shared responsibility model. Real-world migration examples and lessons learned will be shared, highlighting challenges and successful strategies implemented by our customers. Finally, we'll discuss leveraging AI/ML and generative AI to streamline the migration process and help solve migration issues.
-
JSON Document Processing in Different Amazon Database Services
-
Dynamic data masking in Amazon RDS/Aurora PostgreSQL, and Babelfish
-
Building a Job Search Platform using PostgreSQL
Building an effective job search platform requires a database system that can handle complex queries, full-text search, and vector similarity search for semantic matching. Geospatial search enhances a job search platform by allowing recruiters and candidates to factor location into their searches. A popular option for providing these three search techniques is PostgreSQL, a powerful relational database with robust search capabilities. This session explores how PostgreSQL can be used to implement a job search platform, detailing the features, data models, indexing strategies, and query capabilities.
Key topics:
• Anatomy of a job search platform
  – Full-text search and its limitations
  – Semantic search and its limitations
  – Geospatial search
  – Hybrid search
• Performance and security considerations
• Conclusion
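As a sketch of the "hybrid search" idea from the outline above, the following Python snippet blends a keyword score with cosine similarity over toy embeddings (the two-dimensional vectors and weights are made up purely for illustration; in PostgreSQL the vector half would typically be served by the pgvector extension):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def hybrid_score(keyword_hits, vec, query_vec, w_text=0.4, w_vec=0.6):
    # Blend a keyword-match score with semantic similarity.
    return w_text * keyword_hits + w_vec * cosine(vec, query_vec)

# Hypothetical 2-d "embeddings" standing in for real model output.
jobs = {
    "data engineer": ([0.9, 0.1], 1),   # (embedding, keyword hits)
    "pastry chef":   ([0.1, 0.9], 0),
}
query_vec = [0.8, 0.2]
ranked = sorted(jobs,
                key=lambda j: hybrid_score(jobs[j][1], jobs[j][0], query_vec),
                reverse=True)
print(ranked[0])  # data engineer
```

In PostgreSQL itself, a comparable blend can be computed in a single query by combining a ts_rank full-text score with a pgvector distance in one ORDER BY expression.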
Fabiano Amorim
Principal Consultant, Pythian
Fabiano Amorim
Pythian, Principal Consultant
Fabiano Amorim has been a Microsoft Data Platform Most Valuable Professional (MVP) since 2011 and loves to conquer complex, challenging problems—especially ones that others aren’t able to solve. With over two decades of experience, Fabiano is well known in the database community for his performance tuning abilities and his many conference speaking engagements around the world.
-
Future-Proofing SQL Server: Performance, HADR and Innovations on 22 and 25
SQL Server continues to evolve into a hybrid data platform—and this session will show you how to stay ahead. We'll dive into the most impactful features introduced in SQL Server 2022 for DBAs, including Contained Availability Groups, enhanced Distributed AGs for hybrid deployments, and Intelligent Query Processing updates that improve performance in complex workloads. Looking forward, we’ll explore what SQL Server 2025 is set to bring, including AI-assisted query optimization, deeper integration with Microsoft Fabric and more. You'll learn how to enable seamless, near real-time analytics across on-premises data and the cloud—without the need for traditional ETL. Whether you're optimizing for uptime, planning hybrid deployments, or exploring Fabric-powered analytics, this session delivers practical insights to help you future-proof your SQL Server environment.
Frank Geisler
CEO, GDS Business Intelligence GmbH
Frank Geisler
GDS Business Intelligence GmbH, CEO
Frank Geisler is the owner and CEO of GDS Business Intelligence GmbH, a leading Microsoft Solution Provider specializing in Data and AI. He holds numerous prestigious certifications, including Data Platform MVP, MCT, Azure Solutions Architect Expert, Azure Security Engineer Associate, Azure Data Engineer Associate, and DevOps Engineer Expert. In his role, Frank excels in building robust Business Intelligence systems leveraging Microsoft technologies such as SQL Server, Azure Data Platform, Microsoft Fabric, and Power BI. He is also proficient in constructing Azure infrastructures and architectures using PowerShell and Bicep. Frank is a prolific author, having written several bestselling books including "Power BI für Dummies," "Azure für Dummies," "Docker für Dummies," and "Pro Serverless Data Handling with Microsoft Azure." As a frequent speaker, he has delivered insightful presentations at major national and international conferences like the PASS Community Summit, SQL BITS, and SQL Server Konferenz. In addition to his professional achievements, Frank co-founded PASS Deutschland e.V. in 2004 and has served on its board of directors for many years. He also leads the Microsoft Data Community Regional Chapter Münsterland, contributing significantly to the community's growth and development.
-
Activator and Further Functionality within Fabric Real-Time Intelligence
Real-Time Intelligence (RTI) in Microsoft Fabric is revolutionizing how we handle streaming data and actionable insights. In this session, we take a deep dive into the Activator, a powerful new component that bridges real-time events with automated responses. You'll learn how to configure and leverage the Activator to trigger actions across Microsoft services and beyond. But that’s not all—this session also explores advanced features, including custom event patterns, integrations, and performance tuning. Whether you're already working with RTI or just starting out, you'll leave with practical knowledge and inspiration to elevate your real-time analytics projects to the next level. There may also be some NDA material I cannot talk about just yet.
-
Automagical database documentation and diagrams with dbml
-
Mastering the Activator in Microsoft Fabric RTI: A Deep Dive
-
Hobby Huddle: BBQ Cooking with Frank Geisler
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
-
Build A Fabric Real-time Intelligence & Power BI Solution in One Day
-
Build A Fabric Real-time Intelligence Solution in One Day
-
Professional DP-700 exam guide for the Fabric Data Engineering cert
-
A full day of CI/CD options for Microsoft Fabric
-
Racing to Real-Time Intelligence: Live Insights from Live Data
Frank Gill
Senior Consultant, Fortified Data
Frank Gill
Fortified Data, Senior Consultant
Frank Gill is a Senior Consultant at Fortified Data. With 25 years of IT experience, the first 8 as a mainframe programmer, he has developed a love of all things internal. He has worked extensively with SQL Server solutions in Azure, including Managed Instance, and has a passion for providing clients with HADR solutions. When not administering databases or geeking out on internals, Frank volunteers at the Art Institute of Chicago and reads voraciously.
-
Simplify DR, Migration, and Upgrade with Distributed Availability Groups
Introduced in SQL Server 2016, Distributed Availability Groups extend the traditional Availability Group (AG) architecture by allowing you to group together multiple Availability Groups. Each participating AG maintains its own clustering infrastructure, offering greater flexibility in deployment and configuration. In this session, we’ll explore how Distributed Availability Groups simplify disaster recovery by allowing replicas in different geographic locations without relying on multi-subnet clustering. We’ll also cover how this feature supports cross-version and cross-platform configurations, including Windows and Linux. Whether you're planning a disaster recovery solution, migrating environments, or performing version upgrades with minimal downtime, Distributed Availability Groups provide a practical approach. Join this session to learn how to implement and manage them effectively.
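To give a flavor of the configuration the abstract describes, a distributed availability group is declared on top of two existing AGs. The sketch below is illustrative only; all AG names, listener URLs, and ports are hypothetical.

```sql
-- Hypothetical sketch: span two existing AGs (AG1 on the primary cluster,
-- AG2 on the DR cluster) with a distributed availability group.
-- Run on the primary replica of AG1.
CREATE AVAILABILITY GROUP [DistAG]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG1' WITH (
         LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'AG2' WITH (
         LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );
-- Then, on the primary replica of AG2, join it with
-- ALTER AVAILABILITY GROUP [DistAG] JOIN AVAILABILITY GROUP ON ...
```

Because each participating AG keeps its own cluster, the two sides can run different Windows or Linux clusters, or even different SQL Server versions, which is what makes the pattern useful for migration and upgrade.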
G-Su Paek
Solutions Engineer, Redgate
G-Su Paek
Redgate, Solutions Engineer
G-Su Paek has been working in pre-sales for more than six years, and has been with Redgate for almost a year. He has a background in Cybersecurity, Sales, and Solution Architecture.
-
Guardrails and Good Data – How Ops Teams Stay Secure at Scale
Ginger Grant
Consultant, Desert Isle Group
Ginger Grant
Desert Isle Group, Consultant
Ginger Grant is a distinguished Microsoft Data Platform MVP and Microsoft Certified Trainer (MCT), renowned for her deep expertise in advanced analytics, machine learning, AI, data warehousing, and the evolving landscape of Microsoft Fabric. As a sought-after consultant, Ginger empowers organizations to harness the full potential of their data ecosystems. Beyond consulting, Ginger is a prolific thought leader and speaker for both keynotes and technical training. She contributes regularly as a columnist for Pure AI, authors insightful books, and shares practical knowledge on her blog, DesertIsleSQL.com. Her educational impact spans a wide range of technologies, including Azure Synapse Analytics, Python, and Azure Machine Learning, making her a trusted voice in the data community. Whether on stage, in print, or in the classroom, Ginger’s passion for data and commitment to knowledge-sharing make her a standout figure in the world of data and AI.
-
Incorporating AI in Data Analysis
Jump-start your understanding of AI with this crash course in everything you need to know to start using AI every day. Microsoft and many other companies are incorporating AI into a wide range of scenarios, and data professionals need this information: this session was designed for you, not for an app developer. You will receive explanations of the important parts of AI so you can apply it to many aspects of your job, from writing emails to developing PySpark, T-SQL, ETL, or Power BI reports. You will learn about generative AI, prompt engineering, Retrieval Augmented Generation (RAG), and vector databases. The demonstrations and hands-on exercises will show you how to put these skills into practice, so be sure to bring your laptop. Because we are targeting skills for the data professional, this course will focus on Copilot and AI agents in Fabric and on Copilot usage in Power BI. You will leave with practical experience and techniques for implementing AI without coding or spending money, so you can take advantage of this advanced technology.
-
Unlock the Power of T-SQL in Microsoft Fabric
-
Optimizing SQL Database Management in Microsoft Fabric
Cloud architects face critical decisions about where to create and manage SQL databases, now that you can create one in Azure or in Microsoft Fabric. This session will delve into the differences and similarities between Azure SQL DB and SQL DB within Microsoft Fabric. You'll learn how these databases are managed and monitored, uncovering the benefits and potential drawbacks of using Fabric, including the ability to easily incorporate AI. We will compare the costs of Azure SQL DB with Fabric's SQL DB, exploring various cost models to demonstrate how Fabric can minimize expenses and reduce management efforts through automated maintenance and centralized resource monitoring. Practical use cases will illustrate when and why SQL DB in Fabric can enhance your data environment.
-
Harnessing Fabric Data Agents for Intelligent Data Interaction
-
Effective Data Modeling for Analytics in Warehousing, Fabric and Power BI
Mastering data modeling is necessary in order to analyze data. Whether the data is stored in Power BI, a lakehouse, or a data warehouse, all require the same fundamental structure to build powerful analytics that answer complex business questions, whether for user-friendly, high-performing Power BI reports or ad hoc queries. This session explores industry best practices and techniques for designing robust data models that prioritize security, usability, performance, adaptability, and scalability. Participants will learn how to translate business requirements into conceptual models, structure data using star schemas, and develop conformed dimensions that serve organization-wide reporting needs. The session provides a comprehensive walkthrough from understanding user requirements to delivering the final report. Real-world examples and case studies will highlight common challenges and demonstrate practical solutions. By the end of the session, attendees will be equipped with the knowledge, tools, and actionable strategies to design data models that drive insightful, scalable, and high-impact analytics and Power BI reports across complex business environments.
-
Developing a Fabric Environment From Start to End
Wondering how to get started with enterprise Fabric development? This workshop will provide attendees with the knowledge and tools needed to create an enterprise-ready Microsoft Fabric environment. The workshop covers everything from gathering data from different sources, transforming the data into an analytical model, securing access, and monitoring its performance. As Fabric provides many different methods for performing these tasks, we will cover a variety of development tools, including shortcuts, copying data, pipelines, Data Flow Gen 2, and notebooks, and explain which is the best choice in a given situation. Participants will practice the steps in hands-on exercises, using medallion architecture to transform the data into an analytical lakehouse, which can be used for ad hoc querying and, of course, as a source for Power BI reports. Participants will learn how to provide ongoing maintenance, security, and monitoring of the lakehouse to ensure it is an enterprise-level solution. The workshop experience and examples will give participants the knowledge needed to create their own lakehouse. By the end of the session, participants will not only understand the technical steps involved but also when and why to choose a lakehouse architecture for their organizational data needs.
Gleb Otochkin
Cloud Advocate, Databases, Google
Gleb Otochkin
Google, Cloud Advocate, Databases
Gleb is a Cloud Advocate at Google who specializes in Cloud Database technologies in the Google Cloud. He has over 20 years of experience in data-related technologies, and has expertise in a variety of areas, including relational databases, Big Data, application development, data replication and integration solutions, including products from Google, Oracle, AWS, Cloudera, and other vendors. He has been honored to be a presenter at various conferences in the USA, Europe, and APAC for many years.
-
From Prompt to Production: Live-Coding with Gemini-CLI & AlloyDB
Join me for a live coding session where we'll build and deploy a real-world application from scratch. As a runner, I wanted a simple way to find and organize our group runs. In this session, we'll use Google Cloud's Gemini CLI to rapidly scaffold a "Running Meetup" web app, powered by an AlloyDB for PostgreSQL backend. Then we'll add advanced features like vector search for events based on natural language descriptions (e.g., "easy morning trail run"). Finally, we'll show how to deploy the application to serverless platforms like Cloud Run.
Glen Kendell
President & CEO, Concourse
Glen Kendell
Concourse, President & CEO
Glen Kendell is a cybersecurity professional, infrastructure architect, and community builder who loves exploring the connection points between systems and humans. As the founder of Concourse and host of the podcast Data & Confused, he crafts secure environments and soulful spaces where people can thrive together in an age of accelerating change.
-
From Agent to Automation: Building Your SQL Orchestration Control Plane
SQL Server Agent got us this far—but it was never designed for the realities of modern data environments. Static schedules, hidden job steps, limited observability, and scattered logs leave teams blind when they need clarity most. It’s time to move from isolated jobs to orchestrated control, and build your own SQL automation platform. In this session, you’ll learn how to combine Windmill, an open-source workflow engine, with an active, API-connected database that doesn’t just log job history—it drives automation across multiple SQL Servers. Integrated with Authentik for secure identity and approvals, this becomes your operational control plane: jobs are versioned, actions are auditable, and workflows are triggered and tracked in real time. We’ll start by building your own internal SQL portal: a centralized, API-driven platform to manage, monitor, and approve jobs across your entire environment. Then, we’ll show how to migrate your existing SQL Agent jobs—like index maintenance, backups, blocking detection, and dynamic performance tuning—into fully observable, controlled workflows. The result is an automation platform that scales with your operations and gives you real-time visibility and control over every SQL server.
Grant Fritchey
Advocate, Redgate
Grant Fritchey
Redgate, Advocate
Grant Fritchey is a Data Platform MVP with over 30 years' experience in IT, including time spent in support and development. He has worked with SQL Server since 6.0 back in 1995. He has also developed in VB, VB.NET, C#, and Java. Grant has written books for Apress and Simple-Talk.
-
Answering Questions Using Extended Events
-
Leveraging AI as a DBA
-
Future-Proofing Your Database Estate: Smarter Monitoring for Strategic Growth
As database environments evolve, spanning hybrid infrastructures, diverse platforms, and growing performance demands, it's no longer enough to monitor what's happening now. Strategic estate planning requires a forward-looking approach. In this session, you'll discover how you can use Redgate Monitor to plan for the future with confidence. Learn how to leverage historical performance data, disk usage trends, and alert patterns to inform decisions around patching, capacity planning, and workload optimization. We'll explore how to identify risks before they become issues, and how to adapt your monitoring strategy to support long-term goals. You’ll also get a first look at new deployment options designed for scalability and cost-efficiency, including running Redgate Monitor’s data repository on PostgreSQL with TimescaleDB. Whether you're managing SQL Server, exploring PostgreSQL, or preparing for a cloud-first future, this session will equip you with practical insights and tools to evolve your monitoring strategy, and manage your estate with agility and foresight.
-
Expanding Your SQL Server Expertise with PostgreSQL
As PostgreSQL continues to gain traction across modern tech stacks, professionals are increasingly expected to navigate both SQL Server and PostgreSQL environments. Rather than replacing existing systems, organizations are integrating PostgreSQL alongside their current solutions – creating a growing need for dual expertise. This session is crafted specifically for SQL Server practitioners looking to broaden their capabilities. We'll explore the common ground between the two platforms, dive into the key differences that often trip up newcomers, and provide practical guidance for mastering PostgreSQL. From tooling and documentation to cloud integration and community resources, this deep-dive session will equip you with the knowledge and confidence to expand your database skill set effectively.
-
Extended Events Live Data Window
In this session, I will explain how you can use the Live Data Window within SQL Server Management Studio to work with Extended Events data. Extended Events data is output as XML, which can make it hard to deal with. However, through the use of the Live Data Window, you can avoid the XML entirely. In fact, with the Live Data Window, you can do a lot of things with your data that just aren't possible without a lot of time spent programming. Come to this session and I'll show you how to view, sort, and aggregate your data, all on the fly, in this demo-heavy presentation. You can save time, avoid the XML, and gain new functionality, all through better knowledge of the Live Data Window in SSMS.
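As a flavor of the kind of event session you might watch in the Live Data Window, here is a minimal hedged sketch; the session name and filter threshold are illustrative only.

```sql
-- Illustrative only: a minimal event session capturing slow batches.
-- Once started, right-click the session in SSMS Object Explorer and
-- choose "Watch Live Data" to view events without touching the XML.
CREATE EVENT SESSION [QueryWatch] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
    WHERE duration > 1000000  -- duration is in microseconds: batches over 1s
)
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [QueryWatch] ON SERVER STATE = START;
```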
-
Adding PostgreSQL to Your SQL Server Skill Set
More organizations are adding PostgreSQL to their technology stack than ever before. The challenge here is that they are not immediately replacing their existing technology. This means that more and more people need to understand both SQL Server and PostgreSQL. This session is explicitly designed to support the people who already know SQL Server in their journey to add PostgreSQL to their skill set. We'll cover the areas of overlap between the two tool sets. We'll also get into all the differences that can make learning PostgreSQL a challenge. Not only will this all-day session teach you about PostgreSQL, but we'll explore tooling, documentation, the cloud, and other resources that can help you on your journey as you add PostgreSQL to your existing skill set.
-
Fixing What Slows You Down: Practical Performance Tuning for SQL Server and PostgreSQL
Performance issues can cripple applications, but diagnosing them doesn’t have to be a guessing game. In this session, we’ll explore how Redgate Monitor helps you pinpoint and resolve the most impactful performance problems across both SQL Server and PostgreSQL environments. You’ll learn how to:
• For SQL Server: Analyze real execution plans, track wait stats, and detect blocking and deadlocks before they escalate.
• For PostgreSQL: Monitor vacuum lag, bloat, and idle-in-transaction sessions, and identify slow queries using execution metrics.
• Cross-platform: Correlate query behavior with system-level metrics like CPU, memory, and disk I/O to troubleshoot faster and optimize proactively.
Whether you're managing one platform or both, you’ll walk away with a practical, unified approach to performance tuning that reduces firefighting and boosts efficiency.
-
Redgate Keynote: The Data Professional of the Future: How You Can Thrive in the Age of Machines
The data professional of 2025 might be a career database expert…or simply the closest thing your organization has to a data professional. The database landscape has never been more complex, and the modern data professional is tasked with balancing shifting platform trends and emerging technology like AI with the ever-present need to keep databases and the data they contain secure – in an era when organizational pressure to deliver value from data is stronger and more persistent than it’s ever been. In this session you’ll learn more about the pressures and challenges faced by the data professional of today, as well as trusted advice on how to navigate today’s and tomorrow’s database landscape, no matter where you are on your professional journey.
-
Maximize your Productivity: 10 Ways Redgate Makes Database Management Easier
Managing databases can be complex, time-consuming, and—let’s be honest—a bit painful at times. In this session, we’ll show you practical ways Redgate simplifies everyday database management tasks to save you time and effort. You’ll also discover how Redgate’s free educational resources can help you sharpen your skills and work more efficiently.
-
Hobby Huddle: Ham Radio with Grant Fritchey
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
Haider Raza
Senior Cloud Solution Architect, Microsoft
Haider Raza
Microsoft, Senior Cloud Solution Architect
Haider is a Data and AI leader with over 25 years of experience in data architecture, analytics, artificial intelligence, and enterprise intelligence. He has led the design and delivery of greenfield business intelligence platforms, data lakes, and data warehouses across diverse industries and global markets. Currently at Microsoft, Haider advises organizations on modernizing their data ecosystems using cloud-native, open-source, and PaaS technologies. He is a frequent speaker at industry events, sharing insights on data strategy, governance, architecture, and modernization. His work bridges technical depth with strategic vision, helping enterprises unlock value from data at scale.
-
Mastering Azure SQL PaaS: From Novice to Expert
Welcome to a comprehensive deep dive into the world of Azure SQL Platform as a Service (PaaS). In this session tailored for beginners to intermediate SQL Server professionals, we will embark on a transformative journey, transitioning you from a novice to a hero in Azure SQL. Whether you have limited understanding or are new to Azure SQL, this session is your gateway to mastering key concepts, deployment options, and advanced techniques in a condensed, informative, and engaging format. Session highlights:
1. Introduction to Azure SQL
2. High Availability and Business Continuity
3. Migration of SQL Server to Azure SQL
4. Admin Tasks and Performance Optimization
5. Security and Compliance
Hamish Watson
DevOps and AI Consultant, Morph iT Limited
Hamish Watson
Morph iT Limited, DevOps and AI Consultant
Hamish Watson is a globally recognised leader in DevOps for data and applications. His career has focused on empowering teams to deliver value faster, more reliably, and confidently. His expertise in implementing modern DevOps practices has helped organizations standardize deployments, streamline delivery pipelines, and unlock innovation through automation and collaboration. Passionate about education and knowledge sharing, Hamish regularly speaks at international conferences on the intersection of DevOps, data, with a focus on business delivery. He is also a university lecturer and graduate mentor, helping to shape the next generation of technology professionals. Known for his mantra #makestuffgo, Hamish believes that systems should not only work — they should work better, faster, and together. He brings a unique mix of hands-on experience, strategic thinking, and a collaborative spirit to everything he does.
-
A Guide to migrate SQL Server databases to PostgreSQL on Amazon Aurora
-
DevOps for Databases & Data Warehouses: Deliver Changes Faster, Safely and Reliably
The pressure on data engineers to deliver data faster and reliably continues to grow, especially with increasing demands for data governance, security, and compliance. This workshop provides a practical, hands-on approach to accelerating database and data warehouse changes while maintaining security and reliability. Whether working with Fabric Data Warehouse or Azure SQL Database, you'll learn best practices for version-controlling database changes and implementing CI/CD workflows using GitHub, Azure DevOps, and SQL Projects across all variants of SQL Server. We will also showcase how to supercharge development with Copilot, enabling faster SQL and Fabric coding while ensuring best practices are followed. By the end of this session, attendees will:
✅ Understand how to automate database change management efficiently.
✅ Learn how to apply DevOps and CI/CD principles to database deployments.
✅ Gain insights into leveraging Copilot for AI-assisted development in SQL and Fabric.
✅ See practical demos and real-world implementation strategies.
The session will be a hands-on workshop combining interactive demos, guided exercises, and real-world case studies, where attendees will learn to automate database deployments, leverage Copilot for SQL development, and implement CI/CD best practices using GitHub, Azure DevOps, and SQL Projects. If you're a data engineer, database administrator, or DevOps engineer looking to modernize your database delivery pipeline, this session is for you!
-
Level Up Your Career: The Strategic Value of Community Engagement
-
Strategies for Seamless On-Premises to Cloud Migrations
Migrating enterprise databases to the cloud is a complex but necessary evolution for data-driven organizations. This session is designed for data architects, DBAs, and IT leaders looking to build a robust, secure, and cost-effective cloud architecture while maintaining business continuity. We'll explore the architectural decisions, patterns, and governance models that support successful migrations of SQL Server and PostgreSQL from on-premises environments to cloud platforms like Azure, AWS, or hybrid infrastructures. From selecting the right PaaS vs. IaaS options to implementing secure multi-platform strategies, this session delivers practical guidance for every stage of the journey. Key topics include:
– Cloud migration frameworks for SQL Server and PostgreSQL with minimal downtime
– Building hybrid and multi-cloud architectures for diverse workloads
– Securing data with role-based access, encryption, and network isolation
– Managing cloud costs effectively using scaling strategies and usage monitoring
– Implementing data governance policies with tools that span both database platforms
– Integrating DevOps practices to streamline database deployments and maintenance
Whether you're modernizing legacy SQL Server workloads or planning a greenfield PostgreSQL deployment in the cloud, this session will provide the architecture and tools to move confidently and securely.
-
Migrating from SQL Server to PostgreSQL: open-source can save you money
-
A New Paradigm for Data Engineering: Empowering DevOps processes with AI
Discover how Artificial Intelligence is reshaping DevOps and Data Engineering. This lightning talk explores how AI can automate workflows, optimize resource usage, and boost system reliability and security in modern data platform environments. We’ll touch on real-world challenges—like managing scalable resources, CI/CD pipelines, and real-time incident response—and how AI-driven solutions enable predictive maintenance, intelligent automation, and anomaly detection. Learn what it takes to embed AI in your DevOps workflow, from robust data platforms to cross-team collaboration. Walk away with actionable insights on how AI can not only automate but also innovate your data operations—paving the way for the future of data engineering.
-
Lightning Talks-01: A Rapid-Fire Exploration of Key Tech Topics
-
Accessories Included: Database DevOps with SQL projects
-
Accelerating Cloud Database Migrations with DevOps
-
Empowering Data Engineers with Copilot, DevOps, and Fabric
-
Redgate Luncheon: Harnessing AI: Insights and Innovations from the Community
Join us for a dynamic luncheon session where Community Experts will explore the transformative power of AI in the world of databases. This hybrid panel-networking session promises to blend insightful dialogue with interactive discussion, offering attendees a unique opportunity to engage directly with their peers and industry experts. After an enlightening panel discussion of each topic, you'll have the opportunity to delve deeper into these topics at your table, exchanging views and strategies on overcoming these hurdles. This year's session will delve into the practical applications and innovative use cases of AI. Our panelists, who are at the forefront of AI integration, will share their experiences, challenges, and successes. Attendees will then have the opportunity to ask questions, discuss with their peers, and share their own stories. Whether you're an AI enthusiast or just curious about its potential, this session promises to be both informative and inspiring.
Haripriya Naidu
Database Administrator, S&P Global
Haripriya Naidu
S&P Global, Database Administrator
Haripriya is a Microsoft Data Platform MVP and Lead SQL Server Database Administrator with 12 years of experience specializing in performance tuning, high availability, and process automation. She is passionate about SQL Server internals and solving complex performance issues. She has spoken at PASS Summit and multiple SQL Saturdays, sharing practical insights on improving SQL Server performance. She also enjoys writing technical deep-dives on her Substack blog, which has been featured multiple times in Brent Ozar’s newsletter. Blog: https://gohigh.substack.com/ LinkedIn: https://www.linkedin.com/in/haripriya-naidu1215/
-
Maximizing SQL Server Performance with Read Committed Snapshot Isolation
Are your read operations frequently blocked by write operations? Do you want to retrieve clean data without relying on NOLOCK? Then it's time to switch to Read Committed Snapshot Isolation (RCSI). In this session, I’ll explain what RCSI is, how it works, and how it differs from the default Read Committed isolation level. I’ll demonstrate how the version store comes into play with this isolation level and when it can overwhelm TempDB if not managed properly. Finally, I’ll discuss how to manage the implications of enabling RCSI. By the end of this session, you'll have a clear understanding of RCSI and when to implement it in your environment for improved concurrency and performance.
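For context on the switch the abstract describes, enabling RCSI is a database-level setting; the database name below is hypothetical, and the second query shows one way to keep an eye on the version store in TempDB.

```sql
-- Illustrative only: enable RCSI on a hypothetical database.
-- This needs a brief moment with no other active connections;
-- WITH ROLLBACK IMMEDIATE forcibly rolls back open transactions.
ALTER DATABASE [SalesDB] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Verify the setting:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'SalesDB';

-- Watch version store usage in TempDB (pages are 8 KB):
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM sys.dm_db_file_space_usage;
```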
-
Deep Dive into Memory Optimized TempDB
Still Fighting TempDB Contention? Meet Memory-Optimized TempDB! TempDB contention has always been a challenge for DBAs, especially in high-concurrency OLTP environments. To address this, SQL Server 2019 and 2022 introduced several enhancements, including Memory-Optimized TempDB, to reduce bottlenecks and improve performance. In this session, I’ll show you how Memory-Optimized TempDB works, when to use it, and how to implement it. I’ll demonstrate how to resolve contention using this feature. I'll also highlight its limitations, and provide strategies to manage them effectively. By the end of this session, you'll have a clear understanding of how and when to leverage Memory-Optimized TempDB to enhance server performance.
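As a flavor of the feature the abstract covers, turning on memory-optimized TempDB metadata is a single server-level setting (SQL Server 2019 and later); the sketch below is illustrative only.

```sql
-- Illustrative: enable memory-optimized TempDB metadata.
-- The change takes effect only after the SQL Server service restarts.
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- Verify after restart (returns 1 when enabled):
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS tempdb_metadata_memopt;
```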
-
Hidden Pathways to Achieving Peak SQL Server Performance
Harsha Yadlapudi
EMEA Data Management Lead, Google
Harsha Yadlapudi
Google, EMEA Data Management Lead
Harsha is a GCP Managed Databases Lead in the EMEA Data Management practice, helping businesses of all sizes unlock the full potential of Google Cloud databases across 100+ countries in EMEA. He specializes in managed Cloud SQL Server on GCP, addressing field strategies and supporting regional Customer Engineering teams. He is an active speaker at multiple Microsoft and Google community events.
-
Train your Pokémon card recognition model, with AI and Postgres on steroids
-
Is Your Next DBA an AI?!? Bridging AI Agents & Databases
-
BigQuery: Free Your Data (and Your Weekends!) – A DBA's Guide
-
SQL Server at Hyperscale: Hitting a Half Million IOPS in the Cloud
Can you really achieve a half million IOPS for SQL Server in the public cloud without breaking the bank or sacrificing availability? Yes – and this session shows you how on Google Cloud. Dive deep with us through presentations and live demonstrations as we architect a solution for maximum performance, resilience, and cost-efficiency. We'll explore: * The 500k IOPS Formula: Configuring optimal Compute Engine instances combined with the massive throughput of Hyperdisk storage. * Advanced HA/DR Designs: Implementing robust availability using Google Cloud's synchronous cross-zone and asynchronous cross-region mirroring capabilities. * Cloud-Native Cost Optimization: Utilizing elasticity to precisely scale resources (up/down, in/out) and manage SQL Server license costs effectively. Gain practical, actionable insights into building and managing hyperspeed SQL Server environments on Google Cloud. Leave ready to implement these strategies immediately.
-
Click, Deploy, Relax: Zero to SQL Server with Workload Manager
Heidi Hasting
Mgr. Data Engineering, –
Heidi Hasting
-, Mgr. Data Engineering
Heidi Hasting is a Business Intelligence professional and former software developer with over ten years of experience in Microsoft products. She is an ALM/DLM enthusiast, an Azure DevOps fan, and co-founder and organiser of the Adelaide Power BI User Group. Heidi is a regular attendee at tech events including Azure Bootcamps, DevOps Days, SQLSaturdays, Difinity and PASS Summit.
-
CI/CD in Database Development
-
Mastering Sensitivity Labels: Enhancing Data Governance Across Platforms
-
Level Up Your Career: Prof Development, Networking, and Certifications
-
Empowering Data Engineers with Copilot, DevOps, and Fabric
-
Fabric-CLI Deep Dive: From Zero to Full Deployment with Live Demos
The Fabric-CLI is a game-changer for data engineers and developers looking to streamline deployment in Microsoft Fabric. In this session, we’ll take you from zero to fully deployed using the Fabric Command Line Interface. You’ll learn what the Fabric-CLI is, how it works under the hood, and how to use it to automate complex deployment workflows. We’ll start with the basics — installing and configuring the Fabric-CLI — before diving into real-world demos showcasing how to export artifacts, manage workspaces, automate deployments, and integrate with your DevOps pipelines. Along the way, we’ll highlight tips, best practices, and common pitfalls to avoid. By the end of this session, you’ll have the skills to confidently use Fabric-CLI for faster, more reliable deployments across development, testing, and production environments. Whether you’re a beginner or looking to optimize your deployment strategy, this session will give you the hands-on knowledge you need to succeed.
Heini Ilmarinen
Co-founder & architect, Bruvo
Heini Ilmarinen
Bruvo, Co-founder & architect
Heini is a math teacher turned Azure expert. With her background in mathematics and teaching, she has a great passion for problem solving, simplifying complex things, and a unique approach to architectures. She is a Co-founder at Bruvo and a Data Platform MVP, specializing in Azure infrastructure and the data platform. She has been working with customers large and small for the past several years to help them sort out their path to the cloud and make sense of their architecture – both hybrid and cloud. She has worked extensively with both Azure infrastructure and analytics solutions, and makes the most of this skillset in her projects, as well as while speaking. In her free time, Heini spends time in nature, either snowboarding or mountain biking, depending on the season.
-
Streaming Head-to-Head: Microsoft Fabric vs. Databricks
Databricks is widely used in lakehouse solutions, whichever cloud platform you are on. It has developed into a mature product with stable capabilities that enable the creation of modern data platform solutions for both batch and streaming scenarios. With Microsoft Fabric now offering Real-Time Intelligence, is Fabric leagues behind, or can it challenge Databricks neck and neck, especially when it comes to streaming? In this session we will begin by looking at the challenges streaming brings into the picture and the capabilities that are needed. We'll then go through the options Microsoft Fabric and Databricks provide for streaming, highlighting both the similarities and the differences. We will then dive deeper to look at how the streaming capabilities differ, as well as how streaming can be part of a full lakehouse solution. After this session you'll understand the strengths and weaknesses of both of these services for streaming, as well as how they stack up head to head!
-
Riding The Wave of Everchanging Technologies
-
To Bicep or to Terraform – That is the Question
-
Deep Dive to Version Control for Microsoft Fabric
-
A Picture is Worth a Thousand Words
-
No-Nonsense Guide to Data Engineering in Microsoft Fabric
Data engineering doesn’t have to be a headache, especially with Microsoft Fabric in your toolkit. Fabric might still be quite young as a service, but its data engineering experience leverages the tried and tested capabilities of Delta Lake and Spark, as well as Pipelines adapted from Data Factory. This session will get you up to speed with the core capabilities for data engineering in Fabric. We will begin with a walkthrough of the core concepts of data engineering in Fabric: OneLake, Delta Lake, Lakehouses, Pipelines, Spark compute, and Notebooks. At the core is understanding how these different pieces work together. After that we will go more deeply into working with notebooks and how you can manage your compute usage efficiently. We will also focus on how to leverage Lakehouses and structure your data. Lastly, we will dive into an end-to-end solution that showcases all of these components and how they work together in practice. Through this sample we will highlight how other Fabric items can enhance your solution and in which cases you should consider them. Whether you're just starting out in data engineering or want to deepen your understanding, this session will get you up to speed in no time!
-
Streaming head to head – Microsoft Fabric vs. Databricks
-
Getting Data In and Out of Azure
All the Azure Data offerings are great. But they are also confusing. Which one is right for you? Which size do you need? The answer is, of course: it depends. Join us for a day of demystifying the jungle of offerings! We will walk you through the different service offerings, from SQL Server running in a VM through Azure SQL DB up to Fabric. To make sure this is applicable and actionable, we will clearly structure the day by use case, covering both how to get your data to land in Azure and how to make it accessible for consumption: – HA/DR – are you intending to use Azure only as your backup datacenter? – Migration – is Azure going to be your new home? – ETL, Replication, Mirroring and Links – are you only intending to run some of your workloads, like analytics, in the cloud and need to build a landing zone for your data from other sources? – Streaming – are you getting data from sensors or other devices? – Analytics – is Fabric really your only choice to run reports in the future? This demo-packed day will be your fast track to figuring out which of the countless offerings is right for you and what it will take to get there. We’ll focus on the technical aspects but also take a look at implications like security, governance and, of course, cost.
Hugo Kornelis
Database consultant, perFact B.V.
Hugo Kornelis
perFact B.V., Database consultant
Hugo Kornelis is an established SQL Server community expert who spends a lot of time at various conferences. He is also a blogger, technical editor of a variety of books, and Pluralsight author. He was awarded SQL Server MVP and Data Platform MVP 18 times (2006 – 2016 / 2019 – now). When not working for the community, he is busy at his day job: freelance database developer/consultant. Hugo has over 25 years of SQL Server experience in various roles. Starting from a strong database design background, he has spent the last ten years specializing in execution plans and query performance tuning.
-
Performance and execution plan improvements in SQL Server 2025
-
Approximate functions: How do they work?
-
Debugging without debugger: investigating SQL Server’s internal structures
-
Execution plans … Where do I start?
-
Five stages of grief – internals of a hash spill
-
Hash Match, the operator
-
Here’s The Execution Plan … Now What?
You have learned the relevance of execution plans. You know where to find them, and you’ve been taught the basics of how to read them. You’ve looked at some of the clean, simple execution plans that presenters used in classroom training, or at conferences, and you feel confident that you can work with them. And then you get your first problem query at your workplace, you look at its execution plan, and you just want to crawl under a rock and cry. Real code is much more complicated than demo code. Real code translates to large, complex, and often messy execution plans. The principles of reading execution plans still apply, but the plan is large and messy and you struggle with where to even begin. If your query uses a lot of I/O, then which operators are to blame? If your query uses a lot of memory, then what area is responsible? What are some things you should always look at? Knowing the root cause of a problem can help find a cure. Knowing where to look in a large execution plan can help you find that root cause faster!
-
Normalization beyond Third Normal Form
-
Performance and execution plan improvements in SQL Server 2025
-
Execution plans explained
-
Execution plans in depth
-
Approximate functions: How do they work?
-
Debugging without debugger: investigating SQL Server’s internal structures
-
Execution plans … Where do I start?
-
Five stages of grief – internals of a hash spill
-
Hash Match, The Operator
SQL Server has a lot of different execution plan operators. By far the most interesting, and the most versatile, has to be the Hash Match operator. Hash Match is the only operator that can have either one or two inputs. It is the only operator that can either block, stream, or block partially. And it is one of just a few operators that contribute to the total memory grant of an execution plan. If you have ever looked at execution plans, you will have seen this operator. And you probably have a rough idea of what it does. But do you know EXACTLY what happens when this operator is used? In this extended 500-level session, we will dive deep into the bowels of the operator to learn how it performs. It is going to be a wild ride, so keep your hands, arms, and legs inside the conference room at all times; and please remain seated until the presenter has come to a full stop.
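For readers who want a concrete mental model before the session, the build/probe pattern at the heart of a hash join can be sketched in a few lines of Python (a generic textbook sketch, not SQL Server's actual Hash Match implementation; the function name, sample tables, and column names are invented for illustration):

```python
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Minimal two-input hash join: hash one input into a table (build
    phase), then stream the other input through it (probe phase)."""
    table = defaultdict(list)
    for row in build_rows:          # build phase: must consume all input
        table[row[build_key]].append(row)
    for row in probe_rows:          # probe phase: streams row by row
        for match in table.get(row[probe_key], []):
            yield {**match, **row}  # merge matching rows

customers = [{"cust_id": 1, "name": "Ada"}, {"cust_id": 2, "name": "Grace"}]
orders = [{"cust_id": 1, "total": 50}, {"cust_id": 2, "total": 75}]

joined = list(hash_join(customers, orders, "cust_id", "cust_id"))
print(joined[0]["name"], joined[0]["total"])  # Ada 50
```

Note that the build input must be consumed in full before any row is returned, while the probe input streams: that asymmetry is why Hash Match blocks on one input but not the other, and why it needs a memory grant to hold the hash table.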
-
Normalization beyond Third Normal Form
-
Here’s the execution plan … now what?
Huxley Kendell
Solution Engineer, Redgate
Huxley Kendell
Redgate, Solution Engineer
As a Solution Engineer at Redgate, I help customers of various sizes and industries to understand and implement Database DevOps. I provide technical support, training, and guidance to ensure customer satisfaction and success when implementing new cutting edge solutions. I travel the world, helping companies of all sizes implement the best solutions to help sustain and accelerate their growth!
-
Deploy with Confidence – Scaling Database Change Without the Risk
-
Deploy with Confidence – Scaling Database Change Without the Risk
-
Maximize your Productivity: 10 Ways Redgate Makes Database Management Easier
Managing databases can be complex, time-consuming, and—let’s be honest—a bit painful at times. In this session, we’ll show you practical ways Redgate simplifies everyday database management tasks to save you time and effort. You’ll also discover how Redgate’s free educational resources can help you sharpen your skills and work more efficiently.
InduTeja Aligeti
Senior Delivery Consultant, Amazon Web Services
InduTeja Aligeti
Amazon Web Services, Senior Delivery Consultant
With over 20 years of experience in the IT industry, I’m a database enthusiast with deep expertise across multi-database and multi-cloud platforms. I’ve worked with global enterprises like Microsoft, Wells Fargo, and UHG. Currently, I’m at AWS in Hyderabad, India, helping enterprise customers migrate and modernize their database workloads in the cloud and playing a key role in their digital transformation journeys.
-
Accelerate MSSQL to Postgres using AI-driven tools and techniques
-
Mastering Database Migration: A DBA’s Roadmap from SQL Server to PostgreSQL
As organizations move from SQL Server to PostgreSQL to reduce costs and embrace open-source databases, a well-planned migration is crucial. This session covers different migration pathways, schema conversion, stored procedure translation, performance tuning, and workload optimization while highlighting key differences in indexing, partitioning, and concurrency control. Attendees will explore migration tools like AWS DMS, Babelfish, and pgLoader, learn best practices for high availability and disaster recovery, and tackle challenges like T-SQL to PL/pgSQL conversion. Through real-world case studies, DBAs will gain a clear roadmap for a seamless transition and for maximizing PostgreSQL’s capabilities.
-
Migrate SQL Server to AWS: From Strategy to Success
-
SQL Server on AWS: Architecting for High Availability and Resilience
Itay Maoz
Itay Maoz
Itay Maoz leads the Cloud SQL engineering organization at Google. Prior to joining Google, he was the General Manager of Amazon ElastiCache and Amazon MemoryDB, two in-memory database services at Amazon Web Services. Throughout his 25+ years of experience in software development, he has mainly focused on building and scaling high-performance datastores, including storage systems, databases, and caching. Itay earned his bachelor's degree in software engineering from the Israel Institute of Technology and his master's degree in computer science from Tel Aviv University.
-
Unlock Your Data's Potential: Breakfast with Google Cloud Leadership
Join us for an exclusive, sponsored breakfast designed for data and IT professionals seeking a strategic overview of the entire Google Cloud database ecosystem. This is your opportunity to move beyond individual products and understand the comprehensive strategy that supports your most critical workloads. Connect directly with Google Cloud's database leadership to discuss our commitment to providing choice, performance, and flexibility across all major database requirements. What you will learn: The Full Portfolio: Discover how our range of offerings, from hyperscale solutions like Cloud SQL and AlloyDB to industry-leading services like Spanner and Firestore, fits together to meet every modern data need. Strategic Partnerships: Get an executive briefing on how our crucial partnerships, including the groundbreaking collaboration with Oracle, enable you to maintain existing investments while accelerating cloud adoption. The Innovation Roadmap: Engage in a strategic discussion about the future of data infrastructure and how Google Cloud is pioneering AI-driven operations and open-source innovation. Google Cloud speakers include Raj Pai, Vice President, Product Management, Cloud AI, and Itay Maoz, Senior Director of Engineering. We will also feature a customer panel with our product experts, discussing how they've successfully leveraged this comprehensive strategy (integrating both Google Cloud native services and partner technologies) to transform their business.
JP Chen
Senior Director and Practice Leader, SQL Server Services, Datavail
JP Chen
Datavail, Senior Director and Practice Leader, SQL Server Services
JP Chen is the Senior Director of SQL at Datavail, where he leads a high-performing team of DBAs across the U.S., India, Canada, and Colombia. With over 20 years of experience in IT, JP specializes in managing and optimizing SQL Server environments across on-premises, AWS, and Azure platforms. He is recognized for his strengths in customer support, problem-solving, team leadership, and technical writing. JP is the author of Delivering Technical Presentations: A Quick-Start Guide for Effective Technical Communication and Migrating to Azure SQL: A Practical Hands-On Guide for Cloud Migration Success, as well as several whitepapers including SQL on Linux, Keeping Pace with Change, and The 5 Hats Worn By Database Administrators, available at www.datavail.com. He has presented at major industry events such as SQL PASS, SQL Saturday, RMOUG, and the AWS Workshop Series. JP holds multiple certifications, including Project Management Professional (PMP®), AMA Certified Professional in Management (AMA-CPM®), FinOps Certified Professional, and AWS Certified Database – Specialty. Outside of work, he enjoys traveling and discovering diverse culinary traditions around the world.
-
Migrating to Azure SQL – A Practical Guide for Cloud Migration Success
As cloud adoption accelerates, data professionals are increasingly called upon to support or lead database migrations—often with limited cloud experience. This session provides a clear, practical roadmap for migrating SQL Server workloads to Azure SQL, tailored for those new to the platform.
-
FinOps for Data Professionals: Optimizing Cloud Costs
-
Delivering Technical Presentations
-
Migrating to Azure SQL
-
Migrating to Azure SQL: A Practical Guide for Cloud Migration Success
Cloud computing has become essential for modern IT infrastructure, offering scalability, flexibility, and efficiency. This session explores the fundamentals of cloud computing and Microsoft Azure's role as a leading provider. In this session, you’ll learn about planning and executing your migration to Azure SQL. Get a comprehensive strategy, including discovery, assessment, migration, and optimization phases. The session will compare various migration methods and tools, with practical guidance on preparing your on-premises environment for testing. Key takeaways: • Understanding Azure SQL infrastructure • Developing a migration strategy • Comparing migration tools and methods • Optimizing costs in Azure • Best practices for successful migration By the end, you'll have a thorough understanding of Azure SQL Databases and Azure SQL Managed Instance, enabling you to transform your SQL Server environments effectively.
James Phillips
Senior Platform Architect, O'Neil Digital Solutions
James Phillips
O'Neil Digital Solutions, Senior Platform Architect
James Phillips is an Enterprise Architect Manager with O'Neil Digital Solutions, a former Microsoft CSA, and former Chief Information Officer at Rev.io, with over 25 years of experience in both large, complex organizations and fast-growing SMB environments. He has spent the majority of that time responsible for overall systems architecture, performance, database management, data migration, analytics, and long-term technology strategy, ensuring systems perform optimally to meet evolving client needs. In total, James has managed and led teams in database administration and application development at over a dozen companies, while also taking on independent consulting engagements on several occasions.
-
Microsoft Fabric for Data Governance
-
SQL, PostgreSQL, CosmosDB – Which One Is Right For Your Application?
Choosing the right database for your application can be a daunting task. You can spend months researching, creating pros and cons lists, and talking to your team, or you can come to this session and learn a shortcut to figuring it out. There isn't a magic-wand solution, but there are key criteria, such as Features, Audience, Application Purpose, Staff Expertise, and Time to Market, that will drive which decision is right for your organization. We will go over each of these pillars and then break down which solution is ideal for the various combinations of answers. You will walk away with a bit of a cheat sheet on how to make your decisions in the future.
-
SQL Database for Microsoft Fabric in the Real World
-
How the heck do you know what Fabric SKU is right for you?
-
Business Brews & Breakthroughs with Redgate
-
Redgate Luncheon: Harnessing AI: Insights and Innovations from the Community
Join us for a dynamic luncheon session where Community Experts will explore the transformative power of AI in the world of databases. This hybrid panel-networking session promises to blend insightful dialogue with interactive discussion, offering attendees a unique opportunity to engage directly with their peers and industry experts. After an enlightening panel discussion of each topic, you'll have the opportunity to delve deeper into these topics at your table, exchanging views and strategies on overcoming these hurdles. This year's session will delve into the practical applications and innovative use cases of AI. Our panelists, who are at the forefront of AI integration, will share their experiences, challenges, and successes. Attendees will then have the opportunity to ask questions, discuss with their peers, and share their own stories. Whether you're an AI enthusiast or just curious about its potential, this session promises to be both informative and inspiring.
James Serra
Data & AI Solution Architect, Microsoft
James Serra
Microsoft, Data & AI Solution Architect
I work at Microsoft as a big data and data warehousing solution architect where I have been for most of the last ten years. I am a thought leader in the use and application of Big Data and advanced analytics, including data architectures such as the modern data warehouse, data lakehouse, data fabric, and data mesh. Previously I was an independent consultant working as a Data Warehouse/Business Intelligence architect and developer. I am a prior SQL Server MVP with nearly 40 years of IT experience. I started my career as a software developer, then was a DBA for 12 years, and for the last twelve years I have been working extensively with business intelligence and data warehousing using numerous Microsoft technologies and tools. I have been at times a permanent employee, consultant, contractor, and owner of my own business. All these experiences along with continuous learning have helped me to develop many successful data warehouse and BI projects. I am a popular blogger and speaker, having presented at dozens of major events including PASS Summit, SQLBits, Data Summit, SQLDay, Enterprise Data World conference, Big Data Conference Europe, SQL Saturdays, and Informatica World. I am the author of the book “Deciphering Data Architectures: Choosing Between a Modern Data Warehouse, Data Fabric, Data Lakehouse, and Data Mesh“.
-
Using Generative AI on Structured Data
Generative AI, traditionally used for processing unstructured text, is rapidly advancing to handle structured data like relational databases, spreadsheets, and CSV files. New tools now enable AI to extract meaningful insights, identify patterns, and generate predictions from structured datasets. This presentation will explore how AI transforms our interaction with structured data, providing practical applications for enhanced automation, decision-making, and efficiency in data analysis. I will discuss ChatGPT, Copilot, and Microsoft Fabric AI Skill and provide a level-set on GenAI definitions, RAG, fine-tuning, and cover industry use cases for using both unstructured and structured data to make better business decisions.
-
Moving to Azure SQL from VM-based SQL
-
Enhancing your career: Building your personal brand
-
Learning To Present and Becoming Good At It
Have you been thinking about presenting at a user group? Are you being asked to present at your work? Is learning to present one of the keys to advancing your career? Or do you just think it would be fun to present, but you are too nervous to try it? Well, take the first step to becoming a presenter by attending this session, and I will guide you through the process of learning to present and becoming good at it. It’s easier than you think! I am an introvert and was deathly afraid to speak in public. Now I love to present, and it’s actually my main function in my job at Microsoft. I’ll share with you the journey that led me to speak at major conferences and the skills I learned along the way to become a good presenter and get rid of the fear. You can do it!
Janis Griffin
Senior Systems Consultant, Quest Software
Janis Griffin
Quest Software, Senior Systems Consultant
Janis Griffin has more than 35 years of DBA/database experience, including design, development, and implementation of many critical database applications. Before coming to Quest Software, Janis primarily worked in the telecom/network industry, working with both real-time network routing databases and OLTP business-to-business applications. Janis has also held positions as Principal Architect and Senior Manager, mentoring other DBAs on best practices in database performance tuning and management.
-
Mastering PostgreSQL Performance: A Systematic Approach to Query Tuning
Achieving peak performance in PostgreSQL databases requires mastering the art of query tuning. Developers and DBAs often grapple with diagnosing and resolving performance bottlenecks, wasting valuable time on trial-and-error approaches. This session introduces a systematic methodology for tuning PostgreSQL queries, leveraging tools like Wait Time analysis, explain plans, and SQL diagramming. Attendees will learn to identify costly operations, select optimal execution plans, and apply proven best practices through real-world case studies. Whether you are a novice or an experienced professional, this presentation will empower you to optimize queries efficiently, streamline database performance, and save countless hours in troubleshooting.
-
SQL Server 2025: The Evolution into an AI-Powered Vector Database
Jaouad Amine
Microsoft Technical Specialist, Dell
Jaouad Amine
Dell, Microsoft Technical Specialist
Over the past few years, I've focused on two things: delivering cutting-edge technical training and designing practical, high-value solutions for customers. My unique background—combining engineering, sales, and marketing—gives me a holistic view of the entire IT and Infrastructure value chain. My mission is to share this perspective and help you connect the dots. Join me for one of the first deep dives into how you can finally bring powerful AI directly to your data using the combined strength of Dell AI infrastructure and the groundbreaking capabilities of SQL Server 2025.
-
Bring AI to your Data with Dell Technologies
As organizations accelerate their AI journeys, the convergence of data, infrastructure, and intelligent automation becomes critical. In this session, Dell Technologies explores how SQL Server 2025 and the Dell AI Factory are transforming the way enterprises bring AI to their data—securely, efficiently, and at scale. Building on our four decades of joint innovation, we will revisit the shared SQL Server 2025 REST API capabilities and highlight how they enable modern, agentic AI workflows and seamless integration with enterprise applications. Using the Dell AI Factory architecture, we will showcase how on-premises deployments can leverage expert AI services, open ecosystems, and AI-optimized infrastructure to drive real-world outcomes. We examine the shift from traditional 3-tier architectures to disaggregated models—combining the flexibility of HCI with the simplicity of legacy systems—to meet the demands of AI-driven workloads. Attendees will gain insight into the latest announcements from Dell Technologies World, including the Dell Private Cloud and Dell Automation Platform. These innovations enable fully transferrable infrastructure, lifecycle automation, and streamlined support. Finally, we explore the future of SQL Server 2025 on these disaggregated Dell architectures, referencing the recent blog on agentic AI by Microsoft's Arun Vijayraghavan and showcasing blueprints for deploying intelligent, scalable solutions. This session is ideal for technical leaders seeking to modernize their data platforms and unlock the full potential of AI—on their terms, in their environment.
Jared Kuehn
Data Architect Consultant, Baker Tilly
Jared Kuehn
Baker Tilly, Data Architect Consultant
For over a decade, DataBard (aka Jared) has consulted for many organizations to implement data solutions using Microsoft products. For more than three decades, he has been honing his skills in theater and other performing arts. As a speaker, Jared marries these two disciplines together, creating dynamic presentations that add entertainment to education. With this symbiotic relationship, he seeks to improve presentation engagement, support attendees in knowledge retention, and foster a culture of passion for the data industry. Technically speaking, Jared has multiple Microsoft certifications in both on-prem and cloud technologies (most recently Microsoft Fabric). While currently focused on analytical environment architecture and design, his specialties include: • Data Modeling • SQL Query Development and Performance Tuning • ETL Design Patterns • Azure Data solutions • Data Management In research, he is expanding his knowledge set into Microsoft Fabric, as well as the world of AI and Machine Learning. He strives to be an expert not only in technology, but in all things data, from process to culture. In his spare time, Jared continues to perform in community theater productions, in church music, and on his YouTube channel, where he creates educational videos on the data industry.
-
Intro to PySpark in Microsoft Fabric
With all of the engineering features in Microsoft Fabric, which medium should you use to move and transform data? Low-code data flows and pipelines? Good old relational SQL? What about this newfangled PySpark everyone is buzzing about? If the last option piques your curiosity and you haven't tried it, this is the session for you. I'll cover basic Python principles that will make even complicated Python easy to read. Then I will showcase how to interact with and manage your Spark environment, boosting performance while simplifying interactions with data storage in Microsoft Fabric. Finally, I'll showcase Fabric features and community content that can support your next steps in learning to implement PySpark.
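One of those basic Python principles can be previewed with plain generators, which mirror the lazy transformation/action split that PySpark DataFrames use (an illustrative pure-Python analogy only; the sample data is invented and no Spark cluster is involved):

```python
# Pure-Python sketch of the lazy pipeline style PySpark encourages.
# Illustrative analogy only -- real PySpark uses DataFrames on Spark.

data = [" alice,34 ", "bob,29", " carol,41"]

# "Transformations": nothing executes yet, just like Spark's lazy
# DataFrame operations -- each step only describes work to be done.
cleaned = (line.strip() for line in data)
parsed = ((name, int(age)) for name, age in (l.split(",") for l in cleaned))
adults = ((name, age) for name, age in parsed if age >= 30)

# "Action": materializing the result finally forces the whole
# pipeline to run end to end, like .collect() or .show() in Spark.
result = list(adults)
print(result)  # [('alice', 34), ('carol', 41)]
```

The design payoff is the same in both worlds: because each step is declared rather than executed, the engine (or here, the generator chain) can process data one element at a time instead of building intermediate lists.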
-
Building a Data Culture
-
Theater Techniques to Improve your Presentation
-
Intro to Machine Learning as a Data Engineer
-
The Power of Storytelling in a Data-Driven World
Jared Lander
Lead Software Engineer, Relativity ODA LLC
Jared Lander
Relativity ODA LLC, Lead Software Engineer
Jared Lander is a Lead Software Engineer at Relativity with nearly 15 years of experience working with SQL Server and database technologies. He launched his career as an intern in Customer Support, where he quickly developed a passion for SQL Server that shaped his future path. Jared became an Infrastructure Support Specialist and subject matter expert for SQL Server environments before transitioning to a Database Administrator role, helping support RelativityOne, the company’s cloud-based SaaS product. For the past two and a half years, he has specialized in development and automation for RelativityOne’s SQL Platform-as-a-Service (PaaS) offering. Jared is passionate about driving efficiency, resiliency, and performance in large-scale SQL Server environments. Outside of work, he enjoys spending time with his wife, Johnnie, his stepdaughter, Hanna, and their two Miniature Cockapoos, Mr. Chips and Lumiere.
-
Managing SQL Server RAM When You Have No More Gigs to Give
-
DbOwner Trigger Attack Vulnerability: A Solution
In a landscape of thousands of SQL instances, hundreds of thousands of databases, and legacy application privileges, how do you prevent the malicious privilege escalation described by Erland Sommarskog in a session at Data PASS 2024: a privileged user creating a DDL trigger that silently escalates access at the server level the moment a sysadmin runs an innocuous command? In this session, we’ll walk through how our team neutralized this threat. We’ll show how an application login that required the DBOwner role (a least-privileged but highly trusted user in many systems) could create triggers that outlive their creators and silently weaponize even well-intentioned sysadmins, and we will show how to prevent the attack. You’ll learn: • Why database-level DDL triggers are a risk surface few teams monitor • How we built lightweight detection to automatically destroy dangerous triggers before they can be “detonated” • The solution to the problem, so obvious it started as a joke. We’ll share our audit triggers, our denylist strategy, and our method for crawling for existing traps. You’ll walk away with an understanding of a response scaffold, and a new respect for the silent weapons hidden in your DDL layer. Intended audience: • SQL Server engineers and DBAs • Security-conscious development teams • Application platform architects • DevOps and SREs with shared responsibility for database layers Demo: a demonstration of how the solution effectively defeats the attack surface.
-
The S.O.L.I.D. DBA: Engineering Principles for a SQL-Centered World
-
Scaling SQL: Not by Accident, but by Architecture
Jason Romans
Senior Business Intelligence Engineer, The DAX Shepherd
Jason Romans
The DAX Shepherd, Senior Business Intelligence Engineer
Jason Romans is a Business Intelligence engineer in Nashville, TN working with the Microsoft Business Intelligence stack. Jason started his career as a DBA and over the years moved to working in his passion of Business Intelligence and data modeling. As a Microsoft MVP he is always learning and eager to share what he has learned. His first computer was a Commodore 64 and he's been hooked ever since.
-
Unlocking the Power of TMDL: Enhancing Power BI Development
Tabular Model Definition Language (TMDL) is a game-changer for efficient Power BI development using code. With the new TMDL view in Power BI Desktop, developers can quickly create and modify semantic model objects, surpassing the capabilities of the Power BI Desktop UI. This allows for the generation and modification of semantic objects that previously required third-party tools, enabling standardized reusable scripts that benefit individual developers and their teams. This presentation will begin with an introduction to TMDL, highlighting its unique features and differences from other scripting languages. Attendees will learn the syntax and common pitfalls to avoid. We will explore how the TMDL view integrates with Power BI projects and provide guidance on when to use it. Additionally, we will examine the support for TMDL in other tools such as Tabular Editor and Visual Studio Code. One of the key advantages of TMDL is its text-based nature, allowing developers to use their preferred editors to develop and apply changes. We will also touch on source control and how it can be utilized to detect changes between models efficiently. Finally, we will discuss common use cases and demonstrate how to leverage TMDL to become more effective and sought-after Power BI developers.
-
How to Diagnose a Slow Power BI Report
-
Sharing Knowledge: The Journey of a Blogger
-
Embracing Change – Transitioning to a New Job After 14 Years
-
Utilizing Semantic Link Labs To Identify Issues In Models And Reports
It is too late when a user raises the issue that a report is broken. Credibility has already been lost with the very people you are trying to increase report adoption with. Using Semantic Link Labs in a Microsoft Fabric notebook unlocks a powerful toolkit that allows you to identify issues proactively and can be used to fix many common problems. We will examine the various options for installing Semantic Link Labs and their benefits. All of this is done within the familiar Microsoft Fabric environment. Not only can we identify broken reports, but we can also automate the process with notebooks and pipelines. Even if a report is not technically broken, it may be broken from a usability point of view because it does not follow best practices. We can identify those reports so that they can be corrected. We will cover various tasks that, if done manually, would take more time and aren’t reproducible the way code in a notebook is. A report’s performance depends on the semantic model to which it is connected, so we will cover methods for reporting on the semantic model's best practices. By the end of the session, we will have built a semantic model and a report, following best practices, to track the semantic models in our environment.
-
How to Diagnose a Slow Power BI Report
Jaysukh Ramani
Senior SQL Database Administrator, Datavail Corporation
Jaysukh Ramani
Datavail Corporation, Senior SQL Database Administrator
Jaysukh Ramani is a proficient SQL Server developer and administrator with over 18 years of experience in designing and optimizing database systems, performance tuning, automation, and cloud-based solutions. His work spans a wide range of industries, including manufacturing, insurance, finance, retail, restaurant, and healthcare. As a certified professional in both Microsoft and AWS technologies, he is driven by a passion for innovation and delivering exceptional value to clients.
-
Surviving a Ransomware Attack: SQL Database Administrator Strategies
Ransomware attacks pose a significant threat to SQL Server databases, potentially leading to severe data loss and operational disruptions. This presentation explores the impact of ransomware on SQL Server, detailing effective recovery strategies and best practices for database backups. Attendees will gain practical insights into techniques for restoring system databases, common issues encountered during the restoration process, and the benefits of a well-planned recovery strategy. Additionally, the session will cover essential backup strategies, including best practices for securing database backups, managing keys and certificates, maintaining an inventory, and automating database restores to ensure backup integrity including RPO and RTO information. This session will highlight the signs of a ransomware attack, common encryption tactics used by attackers, and the challenges faced when rebuilding servers after an attack. Furthermore, we will discuss quick response steps to mitigate the impact of a ransomware incident. By the end of this presentation, participants will be equipped with knowledge to enhance their preparedness against ransomware threats and implement robust data protection and recovery mechanisms.
Jeff Foushee
Principal {Various}, Humana, Inc.
Jeff Foushee
Humana, Inc., Principal {Various}
Jeff Foushee has blurred the line between developer and DBA since 1996. He collects knowledge in different programming languages, collects attractions on RoadsideAmerica.com, and volunteers as a Blackjack dealer for Louisville-area church picnics.
-
The T-SQL JSON Operators
Officer Reese, someone got semi-structured data in my relational database! No worries; let's look at all the various T-SQL functions we can use to query and create JSON data within SQL Server!
-
Advanced T-SQL Pattern Matching: Leveraging LIKE and Regular Expressions
Jeff Iannucci
Senior Consultant, Straight Path Solutions
Jeff Iannucci
Straight Path Solutions, Senior Consultant
Jeff Iannucci is a Senior Consultant with Straight Path Solutions and the author of "Learn SQL in a Month of Lunches." For over 20 years, he has worked extensively with databases and SQL development in sectors such as healthcare, finance, retail sales, and government. He appreciates any opportunity to share his knowledge, including blogging, creating video content, and presenting at user groups and conferences.
-
Defending Your SQL Server: Practical Strategies against Ransomware
-
Revealing Hidden Performance Issues with Extended Events
Extended Events may be the most underutilized SQL Server tool for query performance tuning. Did you know there are now Extended Events that can tell you about spills to tempdb, aborted queries, and even queries that execute with common antipatterns? It’s true! With newer Extended Events released in SQL Server 2022, you can now diagnose these query problems and more. You can even measure the impact of these Extended Events with other Extended Events. In this demo-filled session you will see how you can create these powerful Extended Events, how to review the collected data to determine which queries you can improve, and even how to rewrite the captured queries for improved performance. You’ll leave this session with a new set of tools to find and fix hidden performance issues in your SQL Server queries.
-
Surviving SQL Server: Guidance for new or accidental DBAs
-
Performance Tuning Your Transactions
-
Hidden Pathways to Achieving Peak SQL Server Performance
Jeff Levy
Senior Manager, Protiviti
Jeff Levy
Protiviti, Senior Manager
Jeff is an experienced Data Architect, adept at understanding client needs and translating business requirements into technical solutions. Jeff has been working in IT for over 14 years and in the SQL / Data Warehouse space since 2012. He specializes in the Azure / Microsoft technology stack and has designed and developed many data platforms from scratch.
-
Choosing the Right Tool: Microsoft Fabric Warehouse vs. Lakehouse
The modern data landscape demands scalable, flexible, and efficient architectures to support diverse business needs. With Microsoft Fabric, two leading paradigms have emerged to address these challenges: the Fabric Warehouse and the Fabric Lakehouse. While both tools aim to provide robust solutions for data storage, processing, and analytics, their approaches, strengths, and trade-offs differ significantly. This presentation explores the core concepts, architectures, and use cases of the Fabric Warehouse and Fabric Lakehouse. I will compare their performance in areas such as data integration, scalability, and cost-efficiency. Attendees will gain insights into how these approaches align with specific business objectives and workloads, enabling informed decisions about which model best suits their organization’s data strategy.
Jeff Taylor
Principal Data Consultant, Database Consulting, LLC
Jeff Taylor
Database Consulting, LLC, Principal Data Consultant
Jeff Taylor is a Principal Data Consultant, Microsoft MVP, Redgate Community Ambassador, and Octopus Insider with over 27 years of experience specializing in performance tuning and critical data issues. He currently organizes the Jax Data and SQL Saturday Jacksonville and is the Jacksonville Development Users Group co-organizer.
-
Optimize Your Database To Run Like A Ferrari, Not A Mini-Van
-
Azure Data Factory (ADF) – Your New SSIS
As data volumes grow, so does the complexity of managing them. With every increase in data, there's more to extract, transform, and load. This often requires connectors for both extraction and import, yet legacy tools like SSIS may fall short, lacking the modern connectivity and scalability that today’s data demands. Azure Data Factory (ADF) offers a powerful, cloud-native alternative. With over 120 built-in connectors, ADF enables seamless data integration from virtually any source, while scaling effortlessly to handle large datasets. In this session, we’ll explore how ADF modernizes ETL/ELT workflows, replacing SSIS with a flexible and scalable platform. We’ll dive into its core components (pipelines, activities, datasets, linked services, data flows, and integration runtimes) and discuss best practices for deployment, monitoring, scheduling, and cost optimization. To bring it all together, we’ll walk through a live demo of a real-world pipeline originally built for SQL Saturday Jacksonville, showcasing how ADF can tackle practical data integration challenges with minimal coding.
-
What's New In SQL Server For The Developer
Jeremy Schneider
Postgres Engineer, GEICO
Jeremy Schneider
GEICO, Postgres Engineer
Jeremy Schneider has been programming for 30 years and working with databases for 20 years, first focused on Oracle and later focused on Postgres. He is currently an organizer of the Seattle Postgres User Group. He is also a Postgres Engineer at GEICO tech. Bringing his background with large-scale data processing and enterprise relational databases to the table, he is helping build a next-generation hybrid-cloud database platform – enabling developers to architect and operate applications that are fast and reliable while meeting business requirements such as integration, compliance and security.
-
Quorum-Based Consistency for Cluster Changes with the CloudNativePG Operator
Most people don’t think of Postgres in the context of quorum or distributed systems theory, but vanilla, open-source Postgres has supported quorum commits across multiple replicas for almost 10 years now. Technologies like Cassandra and Dynamo popularized quorum consistency in the hot path of distributed writes and reads, but the theory also applies to cluster reconfigurations in a single-writer database like Postgres. Stateful Kubernetes Operators at level V of the capabilities framework require very careful end-to-end coordination between control plane and data plane algorithms to avoid data loss when providing auto-healing under circumstances like network partitions or compounded failures. This session will explore how quorum consistency can be applied in the CloudNativePG operator, offering insights to users of Postgres on Kubernetes about trusting Postgres to keep our data safe.
Jess Pomfret
Data platform engineer, Data Masterminds
Jess Pomfret
Data Masterminds, Data platform engineer
Jess Pomfret is a Data Platform Engineer and a dual Microsoft MVP. She started working with SQL Server in 2011, and enjoys problem-solving and automating processes with PowerShell. She also enjoys contributing to dbatools and dbachecks, two open source PowerShell modules that aid DBAs with automating the management of SQL Server instances. She has also contributed to the SqlServerDsc module, adding several new resources to use when configuring your SQL Servers. She grew up in the South West of England and outside of her DBA life enjoys Crossfit, cycling and watching proper football.
-
Automating SQL Server Management with GitHub Actions and PowerShell
This talk demonstrates how GitHub Actions can be leveraged with PowerShell and SQL Server to streamline database operations and implement DevOps practices for your database environment. What you'll learn: • GitHub Actions – this powerful CI/CD platform enables automation triggered by code commits, issue creation, and more. • PowerShell and dbatools – we'll then add PowerShell and dbatools into the mix to extend that automation to managing SQL Server. • Demos and ideas – I'll show multiple demos, including adding articles to SQL Server replication and deploying database changes with sqlpackage. This session aims to give you a practical understanding of how to combine these technologies to reduce manual effort, minimize human error, and build more reliable and repeatable processes for your SQL Server environments.
-
Data Infrastructure as Code and SQL Server Deployments from Zero to Hero
Jim Hill
Business Intelligence Practice Manager, JourneyTeam
Jim Hill
JourneyTeam, Business Intelligence Practice Manager
With over 30 years of experience turning raw data into actionable insights, Jim is a seasoned expert in data warehousing and business intelligence. He has designed and implemented innovative solutions for leading organizations including Motorola, On Semiconductor, DHL, and 1-800 Contacts, as well as many others across diverse industries. A passionate educator, Jim regularly teaches courses on data visualization and Power BI, helping professionals communicate data with clarity and impact. His practical, engaging approach empowers teams to unlock the true value of their data. Jim is also a dynamic conference speaker, known for delivering insightful, high-energy sessions that inspire audiences to rethink how they work with data. He has presented at top industry events such as TDWI, Oracle OpenWorld, PASS Summit, and FabricCon 2025, earning high praise for both content and delivery.
-
How Much is Fabric | Strategies for Estimating Fabric Spend
In today's rapidly evolving digital landscape, accurately estimating fabric capacity expenses is crucial for optimizing resource allocation and managing costs effectively. This session will delve into the various strategies and best practices for estimating fabric capacity expenses within the Microsoft Fabric ecosystem. Attendees will gain insights into the key factors influencing fabric costs, including workload patterns, resource utilization, and scaling considerations. Through real-world examples and expert guidance, participants will learn how to develop robust cost estimation models that align with their organization's budgetary goals and operational requirements. Whether you're a business intelligence developer, IT manager, or financial analyst, this session will equip you with the knowledge and tools needed to make informed decisions about fabric capacity planning and expense management.
Jitendra Kumar
Delivery Consultant, Amazon Web Services
Jitendra Kumar
Amazon Web Services, Delivery Consultant
Jitendra Kumar is a Senior Delivery Consultant at AWS with over two decades of enterprise database expertise, specializing in large-scale distributed Data Warehouse systems using MPP architecture. A 5X AWS Certified Professional and passionate knowledge sharer, he has authored numerous technical publications and regularly speaks at conferences, bringing practical insights from his experience managing complex database migrations and modernization projects. At AWS, Jitendra helps enterprise customers transform their self-managed database infrastructure to managed services, ensuring cost-effectiveness, scalability, and resilience across on-premises, AWS, and Azure environments. Jitendra holds a Master of Computer Application (MCA) and Bachelor of Science (BSc) degree. You can connect with him on LinkedIn at https://www.linkedin.com/in/jitendrkumar/
-
Cross-Engine Query Optimization: Systematic Approach to Database Migration
Abstract: As organizations modernize their data infrastructure, migrating between database engines has become increasingly common. However, the varying optimization strategies across different database engines present significant challenges for maintaining query performance. This session presents a comprehensive framework for understanding and translating query optimization techniques across Oracle, PostgreSQL, and SQL Server. Key topics: – Query Optimizer Architecture: a comparative analysis – Execution Plan Migration Patterns – Cross-Engine Optimizer Hint Mapping – Performance Validation Methodology – Automated Hint Translation Tools. Target audience: database architects, performance engineers, and technical leaders involved in large-scale database migrations.
-
Automating 2TB Azure SQL Database Migration to RDS in 3.5 Hours using CI/CD
Joe Fleming
Founder/Owner, SQL Tailor Consulting
Joe Fleming
SQL Tailor Consulting, Founder/Owner
With over 25 years of data experience, I've been through the wringer, solving some of the weirdest problems you can imagine. I've done Performance Tuning, Disaster Recovery, High Availability, Replication, RCA, and general troubleshooting in industries including health care, finance, software, manufacturing, logistics, and more, both on-premises and on the AWS and Azure cloud platforms. I've got a passion for helping others and was a PASS Chapter Leader and Regional Mentor for several years. Always looking for new challenges, I started consulting in 2015 and became an independent consultant in 2022.
-
Transactional Replication: From Expletives to Excellence
Getting transactional replication set up and running can be a challenge, and troubleshooting it can send you into a rabbit hole of frustration full of twists, turns, and perhaps even a Jabberwocky. This session will help you get a handle on the various components, how they interact, and how to translate the sometimes confusing error messages into practical information for troubleshooting.
Joey D'Antoni
Principal Cloud Architect, 3Cloud
Joey D'Antoni
3Cloud, Principal Cloud Architect
Joey D'Antoni is a Principal Cloud Architect, and a Microsoft Data Platform MVP and VMware vExpert with over 20 years of experience working in both Fortune 500 and smaller firms. He is a frequent speaker at major tech events like Microsoft Ignite, PASS Summit, and Enterprise Data World. He blogs about all things technology at joeydantoni.com. He believes that no single platform is the answer to all technology problems. He lives in Malvern, PA, holds a BS in Computer Information Systems from Louisiana Tech University and an MBA from North Carolina State University, and is the co-author of the Microsoft book "Introducing SQL Server 2016".
-
How to Manage Your Azure Infrastructure Like an Expert: Top Best Practices
-
Cloud Strategy in a Turbulent World
Between regulatory, geo-political, and capacity concerns, having an exit strategy from a cloud provider is a decision many organizations face. While having an exit strategy from the cloud isn’t a pleasant thought, in this session, you will learn how to plan, the tradeoffs of choosing various services, and your options for where to relocate. You’ll learn about co-location facilities, national clouds, and the challenges of moving back to an on-premises data center. We will discuss the complex services to move, like email or chat, and those that move more quickly. You will learn about how sticky various cloud services are, and how you best plan for a turbulent future.
-
Azure: From Admin to Architect
-
Azure: From Admin to Architect
-
PaaS vs. IaaS: Navigating Trade-offs for Smarter IT Decisions
-
Getting Started with Data Governance in Microsoft Fabric and Purview
-
How to Manage Your Azure Infrastructure Like an Expert: Top Best Practices
Are you struggling to understand your cloud resources? Or are you trying to reduce your costs or optimize workloads? In this webinar, you will get an overview of Azure infrastructure: networks, VMs, databases, and storage. You will learn strategies for optimizing your workloads and how to take advantage of options to lower your costs, improve security, and increase performance. You will also learn strategies for maintaining your infrastructure over time to stay on top of the latest cloud features.
-
Building SQL DBs in Fabric for Dashboards and Datamarts: Design Patterns
-
Becoming Azure SQL DBA – Security, Compliance, Threats, Connectivity
In this session you will learn how to evolve your Azure SQL DBA skills in the domains of security, compliance, authentication, and connectivity, from the perspective of an on-premises DBA now supporting databases in Azure. Using the example of a fully managed Azure SQL PaaS service, you will gain a deep understanding of the security and compliance concepts the platform offers. You will understand authentication and best practices related to using Windows Authentication and Microsoft Entra ID with your Azure SQL resources, and how they map to resources migrated from your on-premises SQL Server. We will review how to use advanced threat protection to automatically detect security vulnerabilities. You will learn about Microsoft Purview, which helps you gain visibility into, safeguard, and manage sensitive data, and govern critical data risks and regulatory requirements in Azure. We will also cover the basics of networking in Azure SQL and what is required to securely connect to, and access, your Azure SQL resources. In each of these areas, and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role, highlighting which responsibilities are new, which stay the same, and which are shared or fully delegated to Microsoft. You will walk away with an understanding of the DBA skills you need to evolve as an Azure SQL DBA.
-
Becoming Azure SQL DBA – New Opportunities in Azure – Panel Discussion
As an Azure SQL DBA, your role expands beyond its traditional scope. This shift in responsibilities creates exciting opportunities to develop new cloud skills – from integrating with diverse Azure services to adopting modern coding practices with AI – empowering you to grow and innovate. This wrap-up session of the learning pathway includes a 10-minute introduction followed by a dynamic 50-minute interactive panel discussion. Bring your questions and engage directly with experts ready to share insights and practical guidance. Moderated by Bob Ward, this session is your opportunity to clarify concepts, explore best practices, and connect with industry leaders.
Johan Ludvig Brattås
Director, Brattås Konsult
Johan Ludvig Brattås
Brattås Konsult, Director
Johan Ludvig Brattås is a director, and a dedicated community guy. He has worked with MS SQL Server since late 1999, mostly with BI in one form or another. Since 2015, most of his work has been in the cloud, working on data platform services such as Snowflake, Databricks, and Fabric. Combining his passion for MS SQL Server with his passion for sharing knowledge, he started speaking at various events in the SQL community. This is also a way to give back to the community for all the things he has learned over the years. When not working, Johan Ludvig spends his time with his kids, playing with new technology, or teaching coeliacs how to bake gluten-free food.
-
Building Event-Driven Architectures in Azure and Fabric
Event-driven architecture (EDA) is a design pattern in which the flow of data through the layers of the data platform (such as ETL pipelines) is determined by events such as user actions, sensor outputs, or messages from other programs. Event-driven architecture can enable real-time data processing and decision-making by reacting to events as they occur, facilitating timely insights and actions, but it can also be applied to a more batch-oriented platform where the flow of data is orchestrated by events instead of a fixed schedule. This session will cover the basics of event sourcing and messaging, as well as best practices for designing and building event-driven architectures. We’ll discuss the platforms and tools available in the Azure stack for streaming data, as well as the various components of Fabric Real-Time Intelligence, including Real-Time Hub and its features for subscribing to various event sources. You will learn how to build enterprise-scale event-driven solutions, with a demonstration of a full end-to-end streaming data solution built in Azure as well as Microsoft Fabric. We will cover how to optimize cost and scalability when building these solutions, ensuring that your solution is also resilient and performant when handling data in motion. And last but not least: even if your organization does not have much in the way of data in motion at the moment, you will learn how to build event-driven actions on a variety of event types and deliver value and insights quicker, without relying on traditional batch and scheduled data integration solutions.
-
Build A Fabric Real-time Intelligence & Power BI Solution in One Day
-
Build A Fabric Real-time Intelligence Solution in One Day
John Martin
Technology Partner and Alliances Manager, Redgate
John Martin
Redgate, Technology Partner and Alliances Manager
John is an experienced data platform professional who has spent over fifteen years working with Microsoft SQL Server, Azure, and AWS technologies. He works closely with clients to solve their data platform problems with innovative solutions, focusing on simplicity, maintainability, and automation.
-
Modernizing workloads from SQL Server to PostgreSQL with Amazon Aurora
There are many reasons to look at using PostgreSQL for your database systems, but what if you're already using SQL Server? In this session we will walk through the process needed to migrate a SQL Server database to PostgreSQL running on Amazon Aurora. We'll then put it into practice: converting the database code, moving data, and automating deployments with AWS services and free community tools. You'll walk away with the knowledge needed to start moving workloads from SQL Server to PostgreSQL on Amazon Aurora.
-
Database Security is not the DBA's responsibility, change my mind!
-
Of course I want to let Azure SQL manage my indexes, why wouldn't I?
-
The importance of being a visible ally
-
Automation for the people!
-
Create and Manage Masked Databases in the Cloud for Development and Testing
The cloud is a game changer when it comes to increasing developer agility and the ability to deliver new features and value to customers. However, doing this in a way that minimizes the risks of data leakage or of introducing bugs is a challenge when working with simple datasets which don’t reflect what is in production. But what if there was a way to easily take a production database and create a library of masked images for development and test to use? In this session we will look at how we can make use of cloud-native functionality in PaaS database services to create and manage copies of production. We will then look at how we can mask the data before creating multiple images with different volumes of data in them. We will discuss the key challenges that need to be overcome and some ways to do it, before looking at how we can add Redgate Test Data Manager into the mix to simplify and speed up the process.
-
Break the Compliance Bottleneck – Automate Secure Test Data in 60 Minutes
Rising data privacy regulations, coupled with the need for speed, mean delivering high-quality software quickly and securely is non-negotiable. This session will show you how to automate the creation and delivery of secure test data in just 60 minutes using Redgate. We’ll show you how to: – Automatically classify sensitive data (PII, PCI, PHI, and more) – Apply robust, policy-driven masking to meet compliance standards – Use data subsetting to accelerate masking and reduce test data volume – Integrate the delivery of compliant test data into your CI/CD pipelines. With a live demonstration and practical guidance, this session is perfect for DevOps engineers, DBAs, and developers looking to simplify test data provisioning while staying secure and compliant. Bring your challenges and your curiosity – this session is interactive, practical, and designed to deliver value fast.
-
Why TimescaleDB is Great for Storing Monitoring Data
Monitoring is important: being able to see performance history, generate baselines, and get meaningful insights quickly is essential. But is a relational database the best place to keep this data? We will look at why a time series database like TimescaleDB is a better option for storing monitoring data, and the benefits of using the right engine for the right application.
-
A Guide to Migrating SQL Server Databases to PostgreSQL on Amazon Aurora
-
Shopaholics: How to pick the right software to make your life EASIER
-
Picking the Right Cloud Platform for Your Project
Have you been told you need to move your data estate to the cloud? Do you know if you should go IaaS or PaaS? Which database engines are you going to use? In this session we’re looking to answer these questions and more. The cloud is a compelling option to host your data estate. With the proliferation of different options for cloud hosting however, how do you choose which one? Is there an emphasis in your project on performance? Cost? Security? We will show you how to identify your objectives, select your success criteria and decide which cloud platform will fulfil your company’s needs.
-
Adding PostgreSQL to Your SQL Server Skill Set
More organizations are adding PostgreSQL to their technology stack than ever before. The challenge here is that they are not immediately replacing their existing technology. This means that more and more people need to understand both SQL Server and PostgreSQL. This session is explicitly designed to support the people who already know SQL Server in their journey to add PostgreSQL to their skill set. We'll cover the areas of overlap between the two tool sets. We'll also get into all the differences that can make learning PostgreSQL a challenge. Not only will this all-day session teach you about PostgreSQL, but we'll explore tooling, documentation, the cloud, and other resources that can help you on your journey as you add PostgreSQL to your existing skill set.
John Morehouse
Principal Consultant, Denny Cherry & Associates
John Morehouse
Denny Cherry & Associates, Principal Consultant
John Morehouse is a Principal Consultant with Denny Cherry & Associates Consulting living in Chesapeake, Virginia. He is honored to be a Microsoft Data Platform MVP and former VMware vExpert. With over two decades of technical experience, John now focuses on solving crucial business problems with Microsoft SQL Server-oriented solutions. John has a passion for speaking, teaching technical topics, and giving back to his community whenever possible. He is a blogger, avid tweeter, and frequent speaker at conferences and user groups. If you want to find John, you can find him on LinkedIn (https://www.linkedin.com/in/johnmorehouse) or on his blog, http://sqlrus.com.
-
Exploring Optimized Locking in Azure SQL Database
-
Fortifying SQL Server on Azure VMs: Disaster Recovery Techniques
Failing to plan is planning to fail. Azure offers a robust ecosystem where you can run SQL Server pretty effortlessly. Yet, disasters affect data centers, even ones belonging to Microsoft. Natural events, like tornadoes or hurricanes, can cause entire data centers to be lost. Even human error, like a heavy equipment operator cutting the power to a data center by mistake, can cause an entire outage if you do not take steps to prevent it. Several options are available to ensure disaster recovery when running SQL Server on an Azure Virtual Machine. Sometimes it can be challenging to determine which option is the best for your particular use case and fit into your organization's grand scheme. In this session, we'll walk through some available options so that you can see first-hand the benefits and pitfalls of each one. You will: • Learn about the various options available for your Azure VM running SQL Server • See where shortcomings may exist in each solution • Leave with a solid idea of what would work best for your environment Disaster recovery is vital in Azure. Let's ensure you've got everything in place to recover successfully, regardless of what comes your way.
-
Azure SQL Database vs. Fabric SQL DB: Key Differences for Developers & DBAs
-
Vectors in Azure SQL Database: Bringing AI to Your Data
-
SQL Server Administration Basics: Laying the Foundation for Your DBA Path
In today's data-driven world, SQL Server continues to be a powerhouse for organizations looking to leverage their data effectively. This all-day training session offers practical, actionable insights for optimizing SQL Server environments and ensuring operational efficiency, whether on-premises or in the cloud. We’ll start by breaking down the basics of hardware and performance. You’ll learn how SQL Server uses system resources like CPU, memory, and storage, and how to choose the right setup for your environment. We’ll showcase both on-prem and cloud-based options so you can make smart choices that fit your organization’s needs. From there, we’ll walk through essential day-to-day administration tasks. You’ll learn how to configure your SQL Server environment, set up backups, manage routine maintenance, and build simple disaster recovery plans. We’ll use real-world examples to help you understand what to do, why it matters, and how to handle common challenges that come up in a DBA’s world. In this session, you will: • Learn about critical facets of SQL Server architecture • Examine common configurations and administrative practices • Review high availability and disaster recovery options for SQL Server By the end of the day, you’ll walk away with the confidence and knowledge to start managing SQL Server environments effectively—and a solid foundation to grow from as your experience builds.
-
Becoming Azure SQL DBA – High Availability and BCDR
In this session you will learn how to evolve your Azure SQL DBA skills in the domain of High Availability (HA), Business Continuity and Disaster Recovery (BCDR) from the perspective of on-premises DBAs. With the examples of SQL Server hosted in Azure VMs and the fully-managed PaaS services Azure SQL Database and Azure SQL Managed Instance, you will gain a deep understanding of HA and BCDR architectures in Azure, and any new responsibilities you might have as an Azure SQL DBA. You will understand how HA for the General Purpose and Business Critical service tiers works, and how automated patching and maintenance windows work in Azure SQL. You will also gain a deep understanding of how automated short- and long-term backups work in Azure SQL. Furthermore, you'll understand advanced concepts of geo disaster recovery with Failover Groups, and disaster recovery between SQL Server and Azure SQL Managed Instance. In each of these areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to those of the Azure SQL DBA – which responsibilities are new, same as ever, shared, or fully delegated to Microsoft. You will walk away with a deep understanding of how your on-prem DBA skills evolve to Azure SQL DBA in these areas.
-
Becoming Azure SQL DBA – New Opportunities in Azure – Panel Discussion
As an Azure SQL DBA, your role expands beyond its traditional scope. This shift in responsibilities creates exciting opportunities to develop new cloud skills – from integrating with diverse Azure services to adopting modern coding practices with AI – empowering you to grow and innovate. This wrap-up session of the learning pathway includes a 10-minute introduction followed by a dynamic 50-minute interactive panel discussion. Bring your questions and engage directly with experts ready to share insights and practical guidance. Moderated by Bob Ward, this session is your opportunity to clarify concepts, explore best practices, and connect with industry leaders.
-
Solving Real-World SQL Server Performance Problems
Are your users frustrated by slow reports? Do your SQL Server instances—on-premises or in Azure—struggle under high demand? Whether you manage a single server or a large-scale environment, performance tuning is essential, and it doesn’t have to be overwhelming. In this full-day session, learn how to identify and resolve performance bottlenecks using a wide range of tools, scripts, and best practices. We’ll start with practical techniques for analyzing your environment, reading execution plans, and tuning for performance. You'll gain a clear understanding of how everyday maintenance tasks—and even infrastructure—can impact your server’s responsiveness. This session focuses on SQL Server 2019 and newer, including Azure SQL Database and SQL Server 2025, covering the latest performance enhancements and cloud-specific considerations. We’ll walk through real-world examples of common performance problems and how to fix them using straightforward, repeatable methods. You’ll leave with: • A checklist of key performance areas to evaluate in your environment—whether in the cloud or on-prem • Strategies for addressing both query-level and server-level issues • Insights into how SQL Server and Azure features can work for—or against—you • Confidence to apply what you’ve learned, regardless of your current skill level. Designed for DBAs, developers, and anyone responsible for SQL Server performance, this session emphasizes practical, real-world solutions you can use right away—on any platform.
Juliana Smith
Reporting Lead, Costain
Juliana Smith
Costain, Reporting Lead
Hi there! I’m Juliana Smith, a multi award-winning IT Chartered Professional. For over 15 years, I’ve been driven by a passion for uncovering the stories hidden within data. I specialise in turning complex project information into clear, actionable insights that help stakeholders confidently track progress and tackle challenges head-on. Through Power BI, I don't just create visualisations; I also ensure they're accessible to everyone, following WCAG guidelines to remove barriers and make informed decision-making truly inclusive.
-
Designing for Everyone: Power BI Through the Eyes of Your Users
We all want to build reports that work. But what if your “user-friendly” report is only friendly to users like YOU? In this session, we’ll explore how your Power BI design decisions impact people with different ways of seeing, thinking, and interacting with data. Using real-world scenarios, we’ll uncover the subtle ways our reports can either empower—or unintentionally exclude—the people they’re meant to help. This isn’t another lecture on accessibility standards. It’s a mindset shift. A people-first approach to designing Power BI reports that are not only functional, but fundamentally inclusive and impactful for everyone who relies on your data. You’ll walk away with: • A new lens for understanding your audience • Practical tips to improve clarity, usability, and reach • Inspiration to design with empathy and impact Great design isn’t about perfection—it’s about people. And when we design for “edge cases,” we’re actually designing better experiences for everyone.
-
User-Centered Power BI Report Development: Enhancing UX and Accessibility
A well-designed Power BI report should engage, inform, and be accessible to all users. Yet, many reports suffer from poor usability, cognitive overload, and accessibility barriers, making insights harder to interpret and act upon. In this interactive workshop, you’ll explore UX best practices and digital accessibility principles to create reports that are intuitive, clear, and inclusive. Through hands-on exercises, case studies, and live critiques, you’ll gain practical strategies to enhance usability and accessibility in your Power BI reports. What You’ll Learn: – Identify audience needs and design for different personas – Apply UX best practices to improve clarity and reduce cognitive fatigue – Recognize and fix common accessibility challenges in Power BI reports – Integrate accessibility checks and automation into your reporting workflows The workshop includes two key segments: UX-Driven Report Design – Learn about different audiences, layout strategies, and improve usability through a hands-on redesign challenge. Accessibility in Power BI – Experience digital barriers firsthand, apply real-time fixes, and explore Power BI’s accessibility features to enhance inclusivity. By the end, you’ll have actionable techniques, tools, and best practices to build user-friendly, effective, and inclusive Power BI reports.
K. Brian Kelley
Enterprise Architect, AgFirst Farm Credit Bank
K. Brian Kelley
AgFirst Farm Credit Bank, Enterprise Architect
Brian Kelley is an author, columnist, Certified Information Systems Auditor (CISA), Certified Data Privacy Solutions Engineer (CDPSE), accredited CISA trainer, TOGAF 9 certified architect, and former Microsoft Data Platform (SQL Server) MVP (2009-2016) focusing on Microsoft Azure, SQL Server and Windows security. Brian currently serves as an enterprise architect covering cloud, data, infrastructure, and security. He has served in a myriad of other positions including senior database administrator/architect, data warehouse architect, web developer, incident response team lead, and project manager. He is a regular columnist for the ISACA Journal writing on Digital Trust, as well as an author at MSSQLTips.com. Brian is a frequent speaker on Microsoft SQL Server, security, and audit topics and has presented at the PASS Data Summit, IT/Dev Connections, SQL Connections, and the Techno Security and Digital Forensics Conference.
-
Quantum Computing and Its Impact on Data
Currently, AI is the new "big thing" in technology, while quantum computing has been advancing quickly, mostly out of the public view. However, quantum computing will have a direct and significant impact on data science and search, due to quantum principles such as entanglement and superposition and quantum algorithms such as Grover's algorithm. That said, quantum computing is not a replacement for "classical computing," but another type of "compute" which promises solutions to particular problems in reasonable time frames that are practically impossible in classical computing. In this session, we'll discuss what quantum computing is, why it offers significant improvement over traditional computing in specific use cases, what those cases are, where we'd likely see hybrid setups of both classical and quantum computing, and what you and your organization should be doing to prepare for it. We will also look at proposed timelines for when particular milestones in quantum computing are predicted to be reached.
Karthik Ganapathy
Solution Specialist, Quest Software Inc
Karthik Ganapathy
Quest Software Inc, Solution Specialist
I have 15 years of experience in data governance and data analytics, helping customers implement data discovery and AI governance. My expertise spans automation and integration for data lineage, data observability, semantic augmentation, and data modernization. My focus revolves around metadata management, data discovery and search, data lineage and impact analysis, data quality, data literacy, and data classification and tagging.
-
Autonomous Data Products: From Business Requirement to Data Product with Trust & Speed
This framework establishes a scalable "intelligence factory" where trusted data products are not just manually curated but are systematically and autonomously generated, cataloged, and published to an internal data marketplace. This empowers organizations to rapidly monetize their unstructured data assets, accelerate innovation, and foster a true data-driven culture built on a foundation of automated governance and trust.
Karunakar Kotha
Sr Customer Engineer / Data Architect, Microsoft
Karunakar Kotha
Microsoft, Sr Customer Engineer / Data Architect
I am an IT professional with over 18 years of experience, specializing in database technologies. My background includes database architecture, modernization, design, and performance optimization. I have worked with a wide range of clients, successfully delivering database solutions that address real business needs. I have led modernization projects, helping organizations move their databases to the cloud to improve scalability, performance, and cost-effectiveness. I am detail-oriented and passionate about database technologies. My goal is always to deliver solutions that meet and exceed customer and business expectations.
-
Navigating the Future: SQL Server to Fabric Real-Time Intelligence
In today's data ecosystem, professionals face a critical decision: stick with familiar SQL Server technology or venture into Microsoft Fabric's Real-Time Intelligence (RTI) databases. This choice can significantly impact performance, scalability, and overall business intelligence capabilities. As a former SQL DBA and Microsoft Escalation Engineer who's worked extensively with both Azure Synapse and Fabric RTI, I'll guide you through this decision-making process with clarity and practical insights. We'll explore when SQL Server remains the optimal choice and when RTI databases offer compelling advantages. You'll discover the architecture differences that matter, performance considerations, and cost implications of each approach. I'll demonstrate how your existing SQL skills transfer to Kusto Query Language (KQL), showing familiar patterns and highlighting key differences. Through real-world scenarios and demonstrations, we'll examine migration paths, hybrid approaches, and integration strategies between these technologies. You'll see firsthand how these systems handle time-series data, complex analytics, and large-scale workloads differently. By the end of this session, you'll have a clear framework for database selection decisions and practical knowledge to implement or migrate to Fabric RTI when appropriate for your organization's needs.
Kasper de Jonge
Microsoft
Kasper de Jonge
Microsoft
-
Microsoft Fabric, Lakehouses and Power BI: A guide for BI developers
-
Unlocking AI Potential: Leveraging Your Data with Microsoft Fabric
-
Fabric Security: Everything you Need to Know!
Microsoft Fabric is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. Microsoft Fabric is a SaaS (Software as a Service) platform, which works differently than a PaaS (Platform as a Service) like Azure. With the Fabric SaaS service, you get a lot of security features out of the box that you might not be aware of, allowing you to secure your data estate. In this session we will look at how Fabric secures your data, covering all aspects of security on the Fabric platform: – Understand how users authenticate – Understand inbound security options – How you can access your secure data – Where and how your data is stored, and where it goes when used – How to make sure your data is only accessible to certain users – Finally, how to govern your data with Purview integration. This will make sure you understand what you get out of the box with Fabric, so you can have an informed discussion with your security team!
Kellyn Gorman
Multiplatform and AI Advocate, Redgate
Kellyn Gorman
Redgate, Multiplatform and AI Advocate
Kellyn Gorman is a Database and AI Advocate and Engineer at Redgate. She was previously the Director of Data and AI at Silk, and the Oracle SME on Azure at Microsoft. With a robust background in cloud technology and a passion for promoting its merits and potential, she is thrilled to spearhead conversations and actions that help shape the future of this industry. Kellyn has authored numerous technical books, white papers, and solution repositories on GitHub covering database, AI, and engineering topics.
-
Zero to Understanding with Oracle as a Microsoft Professional
Are you an MS data professional who's always been curious about Oracle but unsure where to start? In this beginner-friendly session, we'll break down the fundamentals of Oracle databases, exploring key architectural differences, core concepts (instances, datafiles, etc.), and how they translate to familiar SQL Server constructs. We'll walk through how to connect to Oracle, run basic SQL, and navigate Oracle's tools as a Microsoft-savvy pro. We'll go over critical differences and translations and cover common gotchas. Whether you're facing a multiplatform environment or expanding your data skillset, this session is your launchpad.
-
Multi-Platform Databases in the Cloud – How Workloads Impact Decisions
Choosing the right database for the cloud isn't just about features, it's about aligning workloads, performance characteristics, and operational tradeoffs. In this strategic session, we'll examine how real-world workloads influence database decisions for cloud solutions, and when the cloud may not be the right choice. We'll focus on Oracle, MySQL, and MongoDB: three database platforms that our attendees may not be as familiar with, and which offer a unique view into the database world. This session will inspect workload patterns for various types of database usage (OLTP, analytics, hybrid) and how scaling, latency, and cost behaviors differ. You'll leave with a better understanding of how to evaluate platforms and solutions based on workload type and organizational needs, not hype.
-
Toolbox Treasures: 10 Productivity Hacks to Level Up Your Database Work
-
Redgate Keynote: The Data Professional of the Future: How You Can Thrive in the Age of Machines
The data professional of 2025 might be a career database expert…or simply the closest thing your organization has to a data professional. The database landscape has never been more complex, and the modern data professional is tasked with balancing shifting platform trends and emerging technology like AI with the ever-present need to keep databases and the data they contain secure – in an era when organizational pressure to deliver value from data is stronger and more persistent than it’s ever been. In this session you’ll learn more about the pressures and challenges faced by the data professional of today, as well as trusted advice on how to navigate today’s and tomorrow’s database landscape, no matter where you are on your professional journey.
-
Redgate Luncheon: Harnessing AI: Insights and Innovations from the Community
Join us for a dynamic luncheon session where Community Experts will explore the transformative power of AI in the world of databases. This hybrid panel-networking session promises to blend insightful dialogue with interactive discussion, offering attendees a unique opportunity to engage directly with their peers and industry experts. After an enlightening panel discussion of each topic, you'll have the opportunity to delve deeper into these topics at your table, exchanging views and strategies on overcoming these hurdles. This year's session will delve into the practical applications and innovative use cases of AI. Our panelists, who are at the forefront of AI integration, will share their experiences, challenges, and successes. Attendees will then have the opportunity to ask questions, discuss with their peers, and share their own stories. Whether you're an AI enthusiast or just curious about its potential, this session promises to be both informative and inspiring.
-
Becoming Multi-Platform Proficient and the Tools Which Can Help You (Oracle, some MySQL and MongoDB)
Modern data professionals increasingly find themselves managing diverse database technologies, often in the same organization. This session is designed for those who want to sharpen their proficiency across Oracle, MySQL, and MongoDB, learning the tips, tools, and techniques that reduce friction and improve efficiency when working across platforms. We'll explore the unique strengths and quirks of each database, focusing on monitoring, administration, and performance tuning. We'll see how tools can streamline your multiplatform development and operations. If you're balancing enterprise and open-source databases, along with emerging NoSQL use cases, this session is your practical toolkit.
Kendra Little
Staff Database Reliability Engineer, Dutchie
Kendra Little
Dutchie, Staff Database Reliability Engineer
Kendra Little (she/her) helps DBAs and developers squeeze every drop of performance out of SQL Server without losing their minds (or their data). She’s a Staff Database Reliability Engineer at Dutchie, a Microsoft Certified Master in SQL Server, and a seasoned speaker with deep experience in consulting, DevOps, and production firefighting. Kendra specializes in teaching real-world humans how to troubleshoot tough database problems, build smarter automation, and keep systems stable and snappy.
-
Win Friends & Influence Queries with RECOMPILE Hints
-
SQL Server RDS Starter Kit
Thinking about moving to RDS but not sure where the gotchas are hiding? You’re not alone. DBAs face a whole new mix of trade-offs, limitations, and config surprises when stepping into the managed SQL Server world. This session is your crash course on Amazon RDS for SQL Server—built specifically for folks who know how to run an instance, but want the inside scoop on what changes in RDS-land. We’ll talk about what instance types pay off for different workloads and whether RDS Custom is worth the extra baggage it brings. You’ll learn what Multi-AZ really protects you from (and what it doesn’t), how read replicas behave in practice, and how to make the most of them without over-engineering. We’ll also cover the odd little world of parameter groups, backups, and a few MSDB quirks that could leave you scratching your head. By the end, you’ll know what RDS is good at, what it isn’t, and how to navigate it all without waking hidden dragons.
-
T-SQL That Doesn’t Suck: Real-World Patterns for Faster, Smarter Queries
Let’s be honest: even experienced developers write T-SQL that starts to smell over time. Maybe it works, but it’s tangled, hard to maintain, and full of traps for future you. In this full day, demo-packed session, Erik Darling and Kendra Little will walk you through the real world query problems that quietly haunt OLTP systems—and how to fix them without rewriting the whole app. We’ll dissect the subtle stuff that tanks performance: implicit conversions, sneaky NULL logic, non-sargable filters, and joins that don’t do what you think they do. You’ll compare EXISTS to JOINs, untangle OR conditions, and learn when EXCEPT and INTERSECT save you from disaster. You'll see where views go off the rails, when temp tables and table variables shine, and how to modify data in a way that won’t make your DBA cry. Along the way, you’ll learn to leverage window functions, cross apply, and patterns for parameterization that hold up under pressure. You’ll know exactly how to refactor messy code into queries that are easier to understand, debug, and evolve—without sacrificing intent or introducing subtle bugs. If you’ve ever looked at a query and thought, “I have no idea what this does, and I’m afraid to touch it,” this session is for you. You already know how to write T-SQL that works. Now it’s time to write T-SQL you’re proud of.
-
SQL Server Performance Engineering: Techniques That Actually Work
-
Advanced T-SQL Triage: The Art of Fixing Terrible Code
You’ve seen it before: the procedure that looks like it was generated by an AI trained on Stack Overflow and despair. It’s got MERGE. It’s got RIGHT JOINs. It’s got logic so tangled you’d need a flowchart, a flashlight, and a therapist to debug it. And now… it’s your problem. In this full-day festival of query-fixing, Erik Darling and Kendra Little lead you through the real world mysteries of advanced T-SQL: the strange, the slow, and the occasionally cursed. You’ll tackle tangled paging logic, rescue window functions and indexed views from spools and spills, and finally learn when to keep a CTE—and when to yeet it. We’ll refactor data modifications that block like linebackers, decode procedural patterns, and write dynamic SQL that’s powerful and polite. You’ll learn when to CROSS APPLY, dig into views vs. inline TVFs, and discover why RIGHT JOIN is not simply LEFT JOIN’s syntactic twin. We’ll uncover when user-defined functions wreck your query execution plans—and how to rewrite them with flair. If you’ve ever been curious about why that query sometimes takes SO long and how to best rewrite it without just guessing, this is your playground. Expect fast demos, big laughs, and a glorious cheat sheet to take home. Because refactoring SQL isn’t just necessary—it’s super fun when you're in the right party.
Kevin Chant
Lead Technology Advocate, Macaw
Kevin Chant
Macaw, Lead Technology Advocate
Lead Technology Advocate, originally from the UK and now living in the Netherlands. Microsoft Certified Trainer and Data Platform MVP. Kevin has many years of experience in the IT sector, including supporting companies in the top 10 of the Fortune 500 list. In addition to extensive experience with the Microsoft Data Platform, he holds various Microsoft certifications and has real-life experience with various Microsoft Data Platform offerings and Azure DevOps. He has held various roles, including Team Leader, SQL Server Product Owner, certification coach, and Solution Architect. In addition, he is involved with the Data Platform community in various ways, including blogs, MVP videos, organizing the Dutch Fabric user group, and sharing various repositories on GitHub.
-
Professional DP-700 exam guide for the Fabric Data Engineering cert
-
Deep Dive into CI/CD Options for Microsoft Fabric
When looking to implement CI/CD within Microsoft Fabric, one of the biggest questions is where to start, due to the various options available. In this session I will explore the various CI/CD (Continuous Integration and Continuous Deployment) options available in Microsoft Fabric. Participants will gain insights into different CI/CD workflow options, including: – Deployment Pipelines – Git-based deployments – Azure Pipeline deployments – Alternative options for Data Warehouses and SQL Databases in Fabric. In addition, I will cover Microsoft Fabric's variable libraries to manage and reuse variables across different stages, plus testing options for some Microsoft Fabric items. By the end of the session, participants will have a comprehensive understanding of the various CI/CD options available so that they can decide on the best option(s) for themselves.
-
Targeted DP-600 exam guide for the Fabric Analytics Engineer certification
Kevin Liu
Cloud Platform Architect, ID.me
Kevin Liu
ID.me, Cloud Platform Architect
Kevin is a Cloud Platform Architect at ID.me, a next-generation digital identity wallet. He is responsible for leading the platform infrastructure team where he manages container runtimes, databases, network infrastructure, and ML/AI ops.
-
PostgreSQL on Google Cloud: Unveiling Innovation in Cloud SQL and AlloyDB
Join us to explore why Google Cloud is the ideal home for your PostgreSQL databases. This session will unveil the latest innovations and powerful capabilities within Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL, Google Cloud's leading managed database services. We’ll dive into the compelling advantages of each platform, showing how they provide unmatched scalability, performance, and enterprise-grade features to meet a wide range of application needs. You'll learn how these solutions empower developers to build, deploy, and manage applications with confidence. The session will conclude with a real-world case study from a Google Cloud customer, ID.me, sharing their journey and the benefits they realized by moving their PostgreSQL workloads to Google Cloud. Discover how to unlock the full potential of PostgreSQL with Google Cloud.
-
Unlock Your Data's Potential: Breakfast with Google Cloud Leadership
Join us for an exclusive, sponsored breakfast designed for data and IT professionals seeking a strategic overview of the entire Google Cloud database ecosystem. This is your opportunity to move beyond individual products and understand the comprehensive strategy that supports your most critical workloads. Connect directly with Google Cloud's database leadership to discuss our commitment to providing choice, performance, and flexibility across all major database requirements. What you will learn: The Full Portfolio – discover how our range of offerings, from hyperscale solutions like Cloud SQL and AlloyDB to industry-leading services like Spanner and Firestore, fits together to meet every modern data need. Strategic Partnerships – get an executive briefing on how our crucial partnerships, including the groundbreaking collaboration with Oracle, enable you to maintain existing investments while accelerating cloud adoption. The Innovation Roadmap – engage in a strategic discussion about the future of data infrastructure and how Google Cloud is pioneering AI-driven operations and open-source innovation. Google Cloud speakers include Raj Pai, Vice President, Product Management, Cloud AI, and Itay Maoz, Senior Director of Engineering. We will also feature a customer panel with our product experts who will discuss how they've successfully leveraged this comprehensive strategy, integrating both Google Cloud native services and partner technologies, to transform their business.
Kevin Wilkie
Lead Data Architect, Build Technology Group
Kevin Wilkie
Build Technology Group, Lead Data Architect
Kevin has been wrangling data and dodging dirty reads for over 15 years — back when SQL Server 6.5 was considered "modern" and query tuning meant praying to the optimizer gods. His journey began as an "Accidental DBA" — one misfired report and suddenly he was deep in sys.objects, pretending sp_who2 was an actual plan. Since then, he's played nearly every role in the SQL Server ecosystem: SQL Developer, BI Associate, Production DBA, and now a Data Architect with a mission. These days, Kevin spends his time normalizing chaos, denormalizing silos, and preaching the gospel of clean, well-modeled data. He’s been known to pivot on the fly, avoid Cartesian relationships, and bring deadlocks back to life — all while gently reminding teams that SELECT * is a cry for help. If it involves data, Kevin's in — tuning queries, architecting solutions, or just nerding out about why surrogate keys are misunderstood. He’s not just passionate about data — he’s got a clustered index on it.
-
Finding the Right Data Types
-
Seamlessly Integrating On-Prem SQL Server with Snowflake & Microsoft Fabric
Join us for an in-depth technical session on transforming your data architecture with the seamless integration of On-Premises SQL Server to Snowflake and Microsoft Fabric. Discover the strategies and best practices for setting up efficient data pipelines, ensuring data consistency, and overcoming common challenges. We will explore a real-world scenario and showcase advanced techniques for optimizing performance and scalability. We will guide you through the end-to-end process, from initial setup to execution. Whether you want to enhance your data analytics capabilities or streamline your data management, this session offers invaluable insights and practical solutions.
-
Best Cooperative Practices – Development and QA
Kranthi Kiran Burada
Sr DB Migration Specialist, Amazon Web Services
Kranthi Kiran Burada
Amazon Web Services, Sr DB Migration Specialist
I’m Kranthi Kiran Burada, and I've been serving as a Database Migration Specialist at AWS for the past 8 years, accumulating a total of 12 years of experience in the field. My primary focus lies in assisting customers with migrating from commercial databases to open-source databases like PostgreSQL. Over the last 9 years, I've been deeply involved with PostgreSQL, aiding clients in performance optimization, database design, troubleshooting, and offering best practices during migrations from Oracle/SQL Server to PostgreSQL. I had the privilege of speaking at SwissPGDay 2024, PGConf Belgium 2024, and SQLBits 2025. Additionally, I serve as an AWS Certification Subject Matter Expert (SME), contributing to the development of all AWS associate certifications and the AWS Database Specialty Certification. Beyond my professional endeavours, I'm passionate about exploring new destinations and indulging in games like badminton and cricket during my leisure time.
-
The Role of PostgreSQL in the AI World
As artificial intelligence (AI) continues to revolutionize industries, the need for robust, scalable, and efficient data management solutions has never been greater. PostgreSQL, with its extensibility, performance, and advanced data capabilities, plays a crucial role in enabling AI-driven applications. This talk explores how PostgreSQL supports AI workloads, from managing vast datasets and integrating with machine learning frameworks to leveraging extensions and features like PostgresML, TimescaleDB, and JSONB for AI-ready storage and querying.
-
Things to consider before migrating legacy SQL Server databases to AWS
Kristyna Ferris
Solution Architect, P3 Adaptive
Kristyna Ferris
P3 Adaptive, Solution Architect
Kristyna Ferris is a solution architect at P3 Adaptive. Her experience includes implementing and managing enterprise-level Power BI instances, training teams on reporting best practices, and building templates for scalable analytics. Passionate about participating in and growing the data community, she enjoys co-writing on Data on Wheels (dataonwheels.com). She is also a co-organizer for the Lexington Data Technology Group.
-
Unlock Real-Time Insights with Fabric Real-Time Dashboards and Kusto-Powered Power BI Reports
In this session, discover how to harness the full potential of the Real-Time Dashboard component in Microsoft Fabric’s Real-Time Intelligence. You’ll learn how to visualize streaming data instantly and take action with live insights. We’ll build real-time dashboards on the fly using streaming data, demonstrating how the Kusto Query Language (KQL) makes it easy to create high-performance visuals—even for complex analytical scenarios. You’ll see how to bring this data into Power BI for deeper, interactive exploration, enabling you to go beyond the basics with advanced visuals tailored for real-time decision-making. With traditional streaming datasets being phased out, we’ll show you how this new approach offers greater flexibility, scalability, and visualization power through the combination of Fabric and Kusto, with no more reliance on legacy streaming datasets. By the end of this session, you'll walk away with practical techniques and a clear roadmap for creating rich, real-time reporting experiences that blend the speed of Fabric streaming data with the power of Kusto and the flexibility of Power BI.
Lance Wright
Microsoft
Lance Wright
Microsoft
Lance is an IT professional with over 15 years of experience, including 10 as a product leader. At Microsoft, he's a Senior Product Manager in Azure Databases working on performance monitoring solutions across Azure SQL and Fabric SQL. He also works on Azure Arc, helping to bring Azure cloud management capabilities to on-prem and multi-cloud environments. Prior to Microsoft, Lance spent over 10 years building data-centric B2B SaaS products both as a software engineer and as a product leader.
-
Accelerate SQL Server Migration with Azure Arc to Next-Gen Azure SQL Managed Instance
Discover how Azure Arc accelerates SQL Server modernization and migration to Azure SQL Managed Instance with a seamless, Microsoft Copilot-assisted experience. Explore benefits of the next-generation General Purpose Azure SQL Managed Instance, a fully managed database service that delivers a free performance upgrade, five times more databases, and ultimate flexibility in resource configuration compared to the previous generation, significantly improving your total cost of ownership. Explore how Azure Arc provides a unified migration experience in the Azure portal: from automated assessments and at-scale views of your SQL Server data estate to provisioning SQL Managed Instance, real-time database replication, and cutover – enabling near-zero downtime migration. What once required weeks can now be completed in days. Confidently migrate, optimize, and unlock the full potential of your data estate. This session is delivered by the Microsoft SQL Server product group, offering an opportunity to connect and network with the team behind these products.
Laura Copeland
Solutions Engineer, Redgate
Laura Copeland
Redgate, Solutions Engineer
Passionate about bringing technology and people together, Laura works as a solutions engineer to help people adopt DevOps practices into their database strategy.
-
Shopaholics: How to pick the right software to make your life EASIER
-
Picking the Right Cloud Platform for Your Project
Have you been told you need to move your data estate to the cloud? Do you know if you should go IaaS or PaaS? Which database engines are you going to use? In this session we’re looking to answer these questions and more. The cloud is a compelling option to host your data estate. With the proliferation of different options for cloud hosting however, how do you choose which one? Is there an emphasis in your project on performance? Cost? Security? We will show you how to identify your objectives, select your success criteria and decide which cloud platform will fulfil your company’s needs.
-
Redgate Keynote: The Data Professional of the Future: How You Can Thrive in the Age of Machines
The data professional of 2025 might be a career database expert…or simply the closest thing your organization has to a data professional. The database landscape has never been more complex, and the modern data professional is tasked with balancing shifting platform trends and emerging technology like AI with the ever-present need to keep databases and the data they contain secure – in an era when organizational pressure to deliver value from data is stronger and more persistent than it’s ever been. In this session you’ll learn more about the pressures and challenges faced by the data professional of today, as well as trusted advice on how to navigate today’s and tomorrow’s database landscape, no matter where you are on your professional journey.
-
Create and Manage Masked Databases in the Cloud for Development and Testing
The cloud is a game changer when it comes to increasing developer agility and their ability to deliver new features and value to customers. However, doing this in a way that minimizes the risks of data leakage or introducing bugs is a challenge when working with simple datasets which don’t reflect what is in production. But what if there was a way to easily take a production database and create a library of masked images for development and test to use? In this session we will look at how we can make use of cloud-native functionality in PaaS database services to create and manage copies of production. We will then look at how we can mask the data before creating multiple images with different volumes of data in them. We will discuss the key challenges that need to be overcome and some ways to do it, before looking at how we can add Redgate Test Data Manager into the mix to simplify and speed up the process.
Lee Coates
Senior Systems Analyst, City of Gresham, OR
Lee Coates
City of Gresham, OR, Senior Systems Analyst
With a diverse background in Data Science, Software Development, and Service Delivery, Lee Coates spends his time architecting solutions that digitally transform Enterprise Data Estates. From cloud migrations and data pipelines to governance strategies and AI-powered apps, Lee enjoys empowering analysts, developers, and business leaders to embrace and realize data-driven decision making in their organizations. He's been a member of the Microsoft Data Community for 13 years and loves to speak with others about Azure, Fabric, SQL, .NET, Git, cloud-native, civic tech, and particle physics!
-
Transform your Data Governance Initiatives with Purview Unified Catalog
Delivering on the promises of data & AI can be complicated for organizations that don't have a clear understanding of the scope and quality of their Data Estate. Effective Data Governance makes this journey easier by clearly identifying where data is, who is managing it, and what type of value it provides. But without a robust tool to help facilitate these activities, Data Governance initiatives can struggle to gain acceptance and provide meaningful impact across the business. In this session, learn how Microsoft’s Purview Unified Catalog can transform your data governance efforts into a flourishing governance program that empowers the use of data for Business Intelligence and Agentic Automation. Join to discover how the Purview Unified Catalog allows enterprises to seamlessly track data sets, distribute governance responsibilities, and enhance the quality and accessibility of data for developers, analysts, and decision-makers working in Azure, Microsoft Fabric, and beyond.
Lenore Flower
Data Trainer & Consultant, Data Plumber, LLC
Lenore Flower
Data Plumber, LLC, Data Trainer & Consultant
Lenore Flower (MBA, MCT) is the owner of Data Plumber, LLC, a training and consulting business dedicated to helping companies build up their staff’s capacity in tandem with their Microsoft-based data infrastructure. An unapologetic generalist, Lenore loves helping others overcome intimidating concepts in data engineering and business intelligence, so they may maintain their organization’s systems independently and with confidence. While her technical “home base” is Power BI, she has experience working with SQL, Azure, Fabric, and D365 F&O—and a soft spot for paginated reports. As co-organizer of the Power BI Washington, D.C. user group, Lenore has been instrumental in helping the group nearly double over the past two years, including facilitating Power BI Days DC, a free two-day conference. Lenore holds an MBA from the University of Maryland, where she prioritized finance and tech coursework that aligned with her background in FP&A (Financial Planning & Analysis), accounting, and DQ&G (Data Quality and Governance). When not building data systems in the cloud, Lenore can be found building things in real life—to varying levels of success.
-
Brag Better with Power BI: A Hands-On Portfolio Session
-
Managing the BI Mullet; Managed Self Service BI
-
Style your BI Mullet: Getting Managed Self Service BI right for your Org.
-
Building Community in Tech: Lessons from the Power BI DC User Group
-
Beyond the Tech: Improving Collaboration in Data Governance
Most data problems are really people problems in a trench coat. Sure, data governance tools can make it easy to implement data quality & governance (DQ&G) policies, but implementation is the easy part. The hard part? Defining–and getting your people to actually agree on–the details in those policies. No technology can decide for you who should be the data steward for which data points, nor can any DQ&G software resolve internal disputes over who should be able to access what sensitive data. Organizations often rush to implement new data systems before they resolve existing data issues. While it’s certainly more fun to explore the latest tech, the greatest risk to any data project is not choosing the wrong tool, but failing to resolve ongoing challenges related to security, data quality, and consistent data categorization and usage. Because conversations around DQ&G can become so contentious, we believe that collaborative communication strategies are an essential part of the DQ&G toolkit, one that could potentially save your organization from spending thousands in project overruns and months of delays down the road. Join self-identified Data Plumber Lenore Flower and collaboration expert Brian Stauber to learn practical strategies for tackling the most stubborn roadblocks in your data governance plan. This deep dive will provide both practical communications skills and methodologies and core items to incorporate as you build (or grow) your organization’s DQ&G practice.
-
Hobby Huddle: "Garden Math" for mortals with Lenore Flower
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
-
From the Audience to a Speaker on the Stage
Leslie Welch
Principal Architect, Data & AI, SHI International
Leslie Welch
SHI International, Principal Architect, Data & AI
Leslie Welch is the Principal Architect of Data & AI at SHI. She is a Microsoft MVP with over 18 years of experience in automation, data analysis and visualization, and strengthening organizations’ knowledge of data. Leslie has worked end to end in the data arena from capture and ETL to database design/engineering to analysis and dashboard design to ML use cases with a focus on improving productivity, performance, and data quality. She has extensive experience in enterprise deployment of Power BI, both as an admin and Power BI architect and a trainer and technical community organizer. She is passionate about data governance, truth telling in data, and she creates collaborative environments across data practitioners and the numerous data stakeholders. Leslie is co-organizer of Power BI DC, and frequently attends and speaks at data community meetups in the DC metro area including but not limited to Data Viz DC, Data Science DC, Women Who Code DC, Washington DC Power Platform User Group, and Generative AI DC. Leslie was recognized as a 2023 New Power Woman in Tech by DCA Live, and on the RealLIST Engineers 2024 as one of 15 technologists helping the DMV ecosystem grow.
-
Data Model Ideation for Power BI
-
The Data Must Flow: Level up your Power BI Dataflow Architecture
-
Challenging the Norm: An Agile Approach to Data Visualization
-
It's Only a Model: Demystifying Row Level Security in Power BI
And now for something completely secure! In a world where data leaks are more dangerous than a run-in with the Spanish Inquisition, this Monty Python themed workshop arms attendees with the tools to lock down Power BI reports like a well-fortified castle, whether in the cloud or under the scrutiny of regulatory oversight. This workshop is designed to provide Power BI developers, ranging from novices to experienced developers, with a solid foundation to implement RLS in Power BI. Workshop attendees will learn how to set up RLS start to finish using two approaches, covering setup in both Power BI Desktop and the Service. We will tackle some common challenges, discuss model considerations and how they impact RLS, and untangle the differences between access and security in Power BI. The workshop will also address governance considerations and some of the unique challenges present within government cloud infrastructures.
-
From the Audience to a Speaker on the Stage
-
Building Community in Tech: Lessons from the Power BI DC User Group
Louise Domeisen
Redgate
Louise Domeisen
Redgate
Louise is Customer Advocacy Manager at Redgate Software. Among many things, she manages the Redgate Ambassador programs, creates customer case studies and testimonials, and drives online peer reviews. Her favorite part of the role is working closely with customers and the wider data community to build relationships and understand the human side of software, enabling them to share their stories to help others develop and succeed.
-
Community Meet & Greet: First-Timers Mingle
New to PASS Data Community Summit? Join us for the First Timers Mingle, a relaxed and welcoming opportunity to help you make the most of your experience from the very start. Whether you're attending solo or with colleagues, this is your chance to meet fellow first-time participants, connect with friendly faces and get insider tips on navigating the event in a casual, low-pressure setting. Come curious, leave connected!
-
Community Meet & Greet: Community Leaders Assemble
This is your space to connect with fellow community leaders and share what makes your group thrive. Community Leaders Assemble is an informal networking session designed for open conversation—no presentations, just people talking about their communities. Bring your stories, challenges, and successes, and discover fresh ideas from others who share your passion for building engaged, vibrant groups.
-
Level Up Your Career: The Strategic Value of Community Engagement
Lukas Fittl
Founder & CEO, pganalyze
Lukas Fittl
pganalyze, Founder & CEO
Lukas is the Founder of pganalyze, a PostgreSQL performance monitoring and optimization tool, author of pg_query (a library to parse Postgres queries), and a contributor to the Postgres project. Lukas has worked on Postgres performance for over two decades and was previously a Principal Program Manager at Microsoft, leading the launch of Azure Database for Postgres Flexible Server.
-
Indexes, Wait Events, and EXPLAIN—Oh My! Porting Your Tuning Skills to Postgres
After investing years honing your SQL Server knowledge and performance tuning expertise, you're now facing requests to support production workloads on Postgres. How can your SQL Server skills in reading query metrics, analyzing wait stats, developing indexes, and interpreting query plans help you in this new environment? With the right guidance, you can quickly transfer your hard-earned query tuning knowledge from SQL Server to Postgres. However, once you've mastered the basics, there are numerous advanced techniques that only come through extensive experience and sometimes even diving into the source code. Join us for this full-day precon to explore PostgreSQL query and performance tuning in depth, with SQL Server comparisons that will help you make connections faster.
Manish Kumar
Sr Cloud Solution Architect, Microsoft
Manish Kumar
Microsoft, Sr Cloud Solution Architect
Manish Kumar is a seasoned Senior Cloud Solution Architect at Microsoft with over 24 years of diverse experience in the IT industry, specializing in modern data platforms and cloud-native solutions. He is a recognized expert in architecting end-to-end data solutions leveraging core Azure services, including Azure SQL, Microsoft Fabric, Azure Synapse Analytics, Microsoft Purview, and a broad range of PaaS and hybrid data technologies. His work focuses on delivering scalable, secure, and performance-optimized architectures across structured, unstructured, and streaming data workloads. Throughout his career, Manish has held a variety of roles—Software Developer, SQL Server DBA, Data Architect, and now Senior Cloud Solution Architect—which give him a unique, full-spectrum perspective on enterprise data challenges. Manish is also a passionate advocate for knowledge sharing. He frequently presents at industry conferences and community events, bringing deep technical insight, practical experience, and thought leadership to global audiences.
-
Interoperability between Microsoft Fabric and Snowflake using Iceberg table
-
Azure Purview – Evolving as a complete Data Governance solution
-
Mastering Azure SQL PaaS: From Novice to Expert
Welcome to a comprehensive deep dive into the world of Azure SQL Platform as a Service (PaaS). In this session tailored for beginners to intermediate SQL Server professionals, we will embark on a transformative journey, transitioning you from a novice to a hero in Azure SQL. Whether you have limited understanding or are new to Azure SQL, this session is your gateway to mastering key concepts, deployment options, and advanced techniques in a condensed, informative, and engaging format. Session Highlights: 1. Introduction to Azure SQL 2. High Availability and Business Continuity 3. Migration of SQL Server to Azure SQL 4. Admin Tasks and Performance Optimization 5. Security and Compliance
-
Autonomous Tuning in Azure Database for PostgreSQL
Manny Doporto
Commercial Account Executive, Redgate
Manny Doporto
Redgate, Commercial Account Executive
I've been at Redgate for 3 years now. Working my way up through various roles in the sales organization, and interacting with customers at every level. I enjoy problem solving and helping others find solutions to complex challenges, especially in the database space.
-
Business Brews & Breakthroughs with Redgate
Marco Russo
Consultant, SQLBI
Marco Russo
SQLBI, Consultant
Marco is a business intelligence consultant and mentor. He wrote several books about Power BI, Analysis Services, and Power Pivot. He also regularly writes articles and white papers that are available on sqlbi.com. Marco is a Microsoft MVP. Today, Marco focuses his time with SQLBI customers, traveling extensively to train and consult on DAX and data modeling for Power BI, Fabric, and Analysis Services. Marco also teaches public classes worldwide. Marco is a regular speaker at international conferences. During his trips, he also enjoys delivering evening sessions at local user groups.
-
Advanced DAX Techniques
Advance your DAX skills in this workshop designed to bridge the gap between basic and advanced concepts. Through real-world scenarios, you’ll learn to write efficient expressions, solve complex business challenges, and create dynamic, impactful reports with DAX. Here are a few examples of what you can learn in this workshop: • Using OR conditions between slicers in DAX. • Creating a slicer that filters multiple columns in Power BI. • Learn how to use REMOVEFILTERS / VALUES for “natural” hierarchical calculations. • Show updated year-to-date actuals and forecasts in the same chart. • When and how to use visual calculations in DAX. • Optimize cumulative totals using variables and windows. • Implement different types of ranking calculations. • Aggregate relative periods (like each new customer's first 30 days of purchase) efficiently. Good experience writing DAX measures in Power BI or Analysis Services is a prerequisite for attending this training. You must know row context, filter context, and context transition. You are comfortable using CALCULATE and are not afraid to learn something new.
-
Introducing DAX User Defined Functions (UDF)
-
Compare different storage engines for semantic models
-
Understanding visual calculations in Power BI
-
Time Intelligence with New Calendar Feature in DAX and Power BI
Every Power BI model has dates, and calculations over dates are needed to aggregate and compare data, such as Year-To-Date, Same-Period-Last-Year, Moving Average, and so on. Quick measures and DAX functions can help, but how do you manage holidays, working days, weeks-based fiscal calendars, and any non-standard calculations? This session shows how to implement time intelligence calculations using the new Calendar feature in DAX. After introducing how it works and configuring an existing date table, we show several examples of correctly managing more complex situations. Finally, we will provide best practices for the use of this feature with reduced maintenance over time as the semantic model evolves.
Marsha Pierce
Global Lead of Field Engineering NDB, Nutanix
Marsha Pierce
Nutanix, Global Lead of Field Engineering NDB
Marsha is a database expert across multiple database platforms, specializing in DevOps, on-premises automation, and virtualization. Prior to joining Nutanix, she led the specialist org at Pure Storage.
-
Deploy, Restore, and Patch Your Databases Anywhere, Anytime in Minutes
Managing hundreds or thousands of databases—Microsoft SQL Server, Oracle, PostgreSQL, MongoDB, MySQL, vector databases—across on-prem and cloud? Struggling with slow database deployments, painful patching, and long backups? Discover how Nutanix Database Service (NDB) simplifies database lifecycle management: deploy databases in minutes, patch hundreds of servers effortlessly, and snapshot and restore 40TB databases in ~10 minutes. Say goodbye to chaos and hello to speed, control, and consistency for databases running in a hybrid multicloud world.
-
Rapid Cloning & Data Privacy: Accelerating Dev/Test with Nutanix + Redgate
Storage-based clones let you copy data at the speed of light. Nutanix gives you the granularity of access to put those clones into your developers' hands, both on-premises and in the cloud. In this session, we’ll show you how to integrate this tool with Redgate’s Data Masker to enforce data privacy while providing developers with the data they need at scale.
-
What I Wish I Knew: A Candid Guide to Becoming a Leader
This session is not just another leadership guide. It is a collection of real-life experiences and practical wisdom from those who have made the transition to leadership roles in the industry. So, whether you're managing a team for the first time or looking to refine your approach, our panel is here to help you navigate, grow, and lead with confidence.
Martynas Jočys
Data & Analytics Consultant, Macaw
Martynas Jočys
Macaw, Data & Analytics Consultant
Martynas is a BI professional specializing in UX design and visual communication of data. His quest is to make data-human interaction a thing data humans care about by raising awareness and promoting best practices. He delivers Visual Storytelling training for clients worldwide, co-runs the Data Visualization and Fabric User Group meetups in Lithuania, and occasionally shares his knowledge at conferences, meetups, and on his blog.
-
Design Thinking for Data or How to elevate your Dashboard development
-
Fabric for Power BI developer – great power never walks alone
-
Export to Excel will still be a request. What can you do to avoid it?
-
Getting More Efficient with Time Planning – It's Not About Time
Developers use many planning techniques for working on one project, including Scrum and Kanban. But what about a broader landscape – juggling multiple projects, managerial work and even personal life? Lack of time for everything and conflicting priorities introduce a lot of stress, which isn't helpful. In this session, I will share several ways you could efficiently organize your work and life using some of the best time-planning techniques I discovered as a Business Analyst and Power BI developer. Also, I will explain a few counter-intuitive principles that will change the way you approach planning forever. First, time planning is never about time; second, you will never complete all the tasks for today – and that's not even the goal; third – come and hear for yourself! At the end of this session, you will have a planning system to try or at least a collection of tips on boosting your efficiency at work and in life.
Matt Gordon
Practice Director, Data & Analytics, Apps Associates
Matt Gordon
Apps Associates, Practice Director, Data & Analytics
Matt is a Microsoft Data Platform MVP and has worked with SQL Server since 2000. He is the leader of the Lexington, KY Data Technology Group and a frequent domestic and international community speaker. He's an IDERA ACE alumnus and Redgate Ambassador. His original data professional role was in database development, which quickly evolved into query tuning work that further evolved into being a DBA in the healthcare realm. He has supported several critical systems utilizing SQL Server and Azure SQL and managed dozens of 24/7/365 SQL Server implementations. As a consultant, he works with customers large, medium, and small to migrate to the cloud, make their data estate operate efficiently, and find the right tools and solutions within the Microsoft Data Platform.
-
Racing to Real-Time Intelligence: Live Insights from Live Data
-
Personal Career Tips for an Impersonal World
-
Azure-ish SQL and Azure AI 101
You've heard way too much about AI – but you're not sure what you need it to do or what you want it to do. You're used to working with Azure SQL or SQL Server in some form or fashion, but you don't know how to make them work with AI. Or, you're brand new to data and AI and want to know what you need to know so you can take that next step in your career. If any of this resonates with you, join me for this one-hour session that takes you through what you can do with Azure AI and Azure-ish SQL, whether it's for fun or for work. You'll walk out conversant in AI in the Microsoft data world and able to have that next conversation about data and AI and what you can do with it – either with your current boss or your next one.
-
Racing to Real-Time Intelligence: Live Insights from Live Data
-
Personal Career Tips for an Impersonal World
-
Azure-ish SQL and Azure AI 101
-
Catalog Your Data In Motion With Fabric Real-Time Hub & Eventstreams
If you ask a user how often they need data, their initial answer is often "in real time", right? Once you solve for getting them that data in real-time, how do you direct it where it needs to go? Where can you collect this data in motion from, and what kinds of actions and transformations can you take on it as it moves around? How can you best support the business deriving insights from this data in motion? The Real-Time Hub is your starting point for building real-time applications in Fabric. It's a kind of action center for bringing real-time events from a variety of sources (and clouds) into Fabric, learning how to analyze – and act upon – that data, storing it, and then visualizing it in real-time dashboards. In this session, we will explore what you can do with Real-Time Hub and what real-world scenarios you can unlock based on our real-world experiences. Following on from there, we'll be using some real-time sample data sources provided within the hub to show you how easy it is to pull in some real-time data. Once we've discussed and demonstrated the various data sources you can connect to (many of which you have in your environment today), we will demonstrate the no-code experience of Eventstreams to show how easy it is to get started working with data in motion in your Fabric environment. If you have data moving around your data estate, it's worth checking out these new and notable ways to work with it.
Matt Sharkey
Senior Manager Database Administration, Buildertrend
Matt Sharkey
Buildertrend, Senior Manager Database Administration
Matt is the Senior Manager of Database Administration at Buildertrend, where he focuses on cloud database infrastructure and building resilient, high-availability SQL Server environments. Outside of work, he enjoys spending time outdoors, training in martial arts, and spending time with his wife and their 2-year-old son.
-
Migrate To and From SQL Server on Google Cloud
This session will provide a comprehensive guide to migrating and integrating your SQL Server databases with Google Cloud. We'll explore various migration options, including native tools and third-party solutions, to ensure a smooth transition with minimal downtime. Whether you're considering a lift-and-shift or a more refactored approach, this session will equip you with the knowledge and strategies for successful SQL Server deployments on Google Cloud. Additionally, if you are looking at migrating from SQL Server to PostgreSQL and don’t know where to start, we have you covered with innovations in Gemini-based database conversion and migration from SQL Server to PostgreSQL. Come and learn more. Finally, learn from the team at Buildertrend about how they migrated to Cloud SQL for SQL Server to benefit from enterprise features such as High Availability and Disaster Recovery.
-
Unlock Your Data's Potential: Breakfast with Google Cloud Leadership
Join us for an exclusive, sponsored breakfast designed for data and IT professionals seeking a strategic overview of the entire Google Cloud database ecosystem. This is your opportunity to move beyond individual products and understand the comprehensive strategy that supports your most critical workloads. Connect directly with Google Cloud's database leadership to discuss our commitment to providing choice, performance, and flexibility across all major database requirements. What you will learn: • The Full Portfolio: Discover how our range of offerings—from hyperscale solutions like Cloud SQL and AlloyDB to industry-leading services like Spanner and Firestore—fits together to meet every modern data need. • Strategic Partnerships: Get an executive briefing on how our crucial partnerships, including the groundbreaking collaboration with Oracle, enable you to maintain existing investments while accelerating cloud adoption. • The Innovation Roadmap: Engage in a strategic discussion about the future of data infrastructure and how Google Cloud is pioneering AI-driven operations and open-source innovation. Google Cloud speakers include Raj Pai, Vice President, Product Management, Cloud AI, and Itay Maoz, Senior Director of Engineering. We will also feature a customer panel with our product experts who will discuss how they've successfully leveraged this comprehensive strategy—integrating both Google Cloud native services and partner technologies—to transform their business.
Mehul Joshi
Global Practice Head – SQL Server, Datavail
Mehul Joshi
Datavail, Global Practice Head – SQL Server
Global Practice Head – SQL Server at Datavail, with 18+ years of IT experience. An excellent communicator with strong relationship-building skills at all levels. Proven track record in serving global clientele and leading cross-cultural teams through a combination of strategic thinking, interpersonal and analytical skills.
-
AI Impacts on the DBA: SQL Server Monitoring with Datavail TechBoost™
The DBA role is transforming rapidly, driven by advances in AI and automation. Tasks that once required hours of manual effort—like index tuning, query optimization, and system monitoring—are increasingly handled by intelligent systems. Initiatives like Datavail TechBoost™ are accelerating this shift, empowering DBAs to focus more on strategic, high-value work. This change doesn’t make DBAs obsolete; it elevates their role to focus on more strategic work: designing modern data architectures, leading cloud migrations, ensuring compliance, and managing costs with FinOps practices. At Datavail, we've embedded AI capabilities into our Datavail TechBoost™ platform, which manages over 100,000 databases for more than 400 customers. This isn't theoretical implementation—it's production-proven technology supporting Fortune 100 companies and mid-market clients across diverse environments. Our approach to AI focuses on practical automation rather than abstract innovation, with input from our hundreds of DBAs. In this session, you’ll learn about: • The real-world impacts of AI for DBAs and what that means for SQL Server and other data platform monitoring. • Tasks that are ideal for AI automation, and those that remain more strategic. • What the DBA role of the future looks like
Michael McKinley
President, McKinley Consulting
Michael McKinley
McKinley Consulting, President
Michael McKinley is a data analytics consultant, instructor, and Power BI evangelist with a passion for turning complexity into clarity. At AllianceBernstein, he leads data intelligence efforts that empower senior leaders with actionable insights. Outside the office, Michael runs the Nashville Power BI User Group, teaches live training courses, and speaks at conferences to help others harness the full potential of Power BI, the Power Platform, and Microsoft Fabric.
-
AI in Action: Supercharging Power BI Development and Troubleshooting
Unlock the power of AI to streamline your Power BI workflows, from design to delivery. This session explores how tools like ChatGPT, Copilot, and other AI assistants can support every stage of Power BI development: designing reports, optimizing data models, troubleshooting issues, and even analyzing insights. Discover how AI tools like ChatGPT, Copilot, and others can: • Simplify DAX creation and debugging, offering tailored solutions for complex calculations. • Clarify data modeling challenges, including filter context and relationship issues, to deliver accurate insights. • Provide actionable guidance for creating effective visuals that meet specific user needs. • Summarize complex reports and datasets, empowering data consumers to generate insights independently. • Streamline workflows for expert developers, reducing repetitive tasks and accelerating project delivery. This session is designed to demonstrate how AI can elevate Power BI development at all skill levels, helping you work smarter and faster while delivering impactful results.
-
Application Lifecycle Management for Business Intelligence: ALM for BI
-
Creating Influence Through Analytics: Lessons From 20 Years in BI
-
AI in Action: Supercharging Power BI Development and Troubleshooting
-
Application Lifecycle Management for Business Intelligence: ALM for BI
Managing Power BI development in a team setting can be challenging—avoiding code conflicts, ensuring seamless collaboration, and deploying updates effectively require a strategic approach. In this session, I’ll share how principles from application development can be applied to Power BI, introducing the state-of-the-art practices for managing the BI development lifecycle. You’ll learn how to: • Enable multiple developers to work on the same dataset simultaneously. • Store your Power BI models as code in your source repository. • Use Git for version control, change tracking, and collaboration. • Integrate with ticketing systems and implement deployment pipelines for continuous integration and delivery. Whether you manage a team of BI developers or work on one, this session will equip you with tools and strategies to tackle some of the most pressing challenges in BI development and scale your Power BI projects effectively.
-
Creating Influence Through Analytics: Lessons From 20 Years in BI
Mihail Mateev
CEO / Founder, Soft Project Ltd
Mihail Mateev
Soft Project Ltd, CEO / Founder
I am a Microsoft Regional Director currently living in Sofia, Bulgaria. My interests range from technology to entrepreneurship. I am also interested in programming, web development, and education. Technical Consultant, community enthusiast, PASS Regional Mentor for Central Eastern Europe, chapter lead, Microsoft MVP – AI Platform, Data Platform. Organizer of SQLSaturday, Azure Bootcamp, IoT, and JavaScript conferences. My experience spans various areas related to Microsoft technologies, including the Windows Platform, ASP.NET MVC, MS SQL Server, and Microsoft Azure. I have a PhD in cloud computing and am a university lecturer on Smart Homes and Smart Energy IoT Solutions.
-
Extending the Fabric UI: Custom Visuals, Interactions, and Integrations
Microsoft Fabric offers powerful capabilities out of the box, but many organizations need custom UI elements to tailor the user experience. In this session, you'll learn how to extend the Microsoft Fabric frontend using approved extensibility points—such as custom visuals, embedded experiences, and third-party app integrations. We’ll explore how to build seamless UI components that integrate with Fabric’s layout and navigation model while maintaining performance, security, and governance. Through real-world examples and demos, you'll gain practical knowledge on enhancing interactivity, embedding external tools, and creating richer data experiences within the Fabric ecosystem.
-
Empower Microsoft Fabric solutions using GenAI on top of data in Fabric.
-
Building Next-Gen Digital Twins with Microsoft Fabric
-
Implementing Modern Automation using Power Automate and Microsoft Fabric
-
Create Generative Actions in Cloud Flows with Power Automate
-
Unlocking the Power of Fabric: When and Why SMBs Should Start or Migrate
-
Empowering agentic AI by integrating Fabric with Azure AI Foundry
Mike Jewett
Database Administrator, United Wholesale Mortgage
Mike Jewett
United Wholesale Mortgage, Database Administrator
Mike is a Database Administrator at United Wholesale Mortgage (UWM). He has 9 years of experience as a Database Admin, primarily focusing on Microsoft SQL Server and PowerShell automation when possible. He also has experience with other DBMSs, including MongoDB, Redis, and PostgreSQL.
-
Scaling for Success: How UWM Transformed Database Operations
United Wholesale Mortgage (UWM), the #1 mortgage lender in America, is on a mission to help more Americans achieve the dream of homeownership. In 2024 alone, UWM originated $1.39 billion in mortgages. In this session, discover how UWM leveraged Redgate solutions to drive an 80% increase in deployment efficiency and elevate performance monitoring across their database estate. Learn how these tools empowered developers to take ownership of deployments, freeing DBAs to focus on strategic initiatives, and how real-time anomaly detection and historical insights have enabled faster, more proactive issue resolution. Join us to explore how Redgate’s solutions have not only enhanced UWM’s scalability and security posture but also reshaped their engineering culture—positioning them to confidently adopt new technologies and continue helping people and businesses thrive.
Mike Walsh
Founder & CEO, Straight Path IT Solutions
Mike Walsh
Straight Path IT Solutions, Founder & CEO
Mike has been working with SQL Server since 1999. In 2011, he founded Straight Path Solutions as a side project. He eventually hired his first full-time employee, and fast-forward to today: Straight Path is a 17-employee SQL Server consultancy that performs SQL Server tuning, cloud migrations, and managed DBA services for small and large businesses. When he's not working with his team, he's keeping busy with his family of 6, listening to music, and trying to slow down and enjoy each day as it happens.
-
Becoming a SQL Consultant: Lessons for Every Stage of the Journey
Whether you're dreaming of going out on your own, just signed your first client, or are thinking about building a team—you’re on a journey I know well. Fifteen years ago, I negotiated with my employer for them to become my first anchor client. From there, I started networking, doing side gigs, and slowly growing—with all the trial and error that comes with it. This session is a candid look at the stages of building a consulting practice in the data world. We'll cover lessons for: • The aspiring freelancer—how to prep while still employed • The brand-new solo consultant—how to protect your time and build trust • The solo-to-team leap—how to delegate, hire, and preserve your culture. I’ll share what worked (and what didn’t): navigating partnerships, choosing a business structure, setting pricing, and managing client relationships. We'll touch on contracts, taxes, and the real-life decisions that can make or break your momentum. Along the way, I’ve leaned on books, mentors, coaches, and a lot of late-night second-guessing. I’m still learning—but I’ll give you a head start on the "straight path" I took. This isn’t a business school lecture or MBA program – it’s a real-world field guide. If you're considering the leap—or want to be more intentional about the path you're already on—this session will offer insights you can use Monday morning to start down the road.
-
Avoiding DBA Regrets: Tales from the Trenches
Mikey Bronowski
Chief Popkorn Officer, DataPopkorn.com
Mikey Bronowski
DataPopkorn.com, Chief Popkorn Officer
Husband | Dad | DBA | MCT | Data Platform MVP | UG Leader. Chief Popkorn Officer at DataPopkorn.com. Data enthusiast with a mathematics background, working with SQL Server for over 15 years. Microsoft Data Platform MVP and Microsoft Certified Trainer with a couple of certifications, always sharing knowledge.
-
Mastering Temporal Tables: Time Travel in Your Database
Temporal tables, introduced in SQL Server 2016, have revolutionized the way we track and manage historical data in databases. Acting like a time machine for your data, these tables allow for automatic history tracking, making it easier to analyze data over time, audit changes, and restore information to any point in the past. This session is designed for database administrators and developers who want to leverage the power of temporal tables to enhance data integrity, facilitate complex queries, and simplify the process of data recovery and analysis. Attendees will learn the concepts behind temporal tables, how to set them up, query historical data, and integrate them into their data management practices. Through practical examples and best practices, this workshop will provide the attendees with the tools they need to implement temporal tables in their database environments effectively.
-
DBCC CHECKDB: The Never-Ending Story – Why It Takes Forever?
-
dbatools: make DBA's life easier with PowerShell
-
dbachecks: An Introduction to Database Health Checks with PowerShell
-
Vector functions 101
Explore the capabilities of vector functions, a new feature in Azure SQL Database, designed to enhance how you handle and analyze vectors stored in binary format directly in the SQL Engine. This session will provide an overview of how these functions allow for the storage and manipulation of vectors, offering powerful computational possibilities. We'll dive into the specifics of key functions like VECTOR_DISTANCE, VECTOR_NORM, and VECTOR_NORMALIZE.
-
Regular expressions in SQL Server
-
The Good, the Bad, and the Ugly aka Vectors, Regex, and JSON
In this session, we’re taking a fun look at three of the newest tools in SQL Server 2025 — Vectors, Regex, and JSON — or as we like to call them: the good, the bad, and the ugly. Whether it’s finding patterns, organizing tricky data, or speeding things up, these features can save you time and hassle.
-
dbatools – in 10 minutes
-
Lightning Talks-02: A Rapid-Fire Exploration of Key Tech Topics
Minesh Chande
Sr. Solutions Architect, AWS
Minesh Chande
AWS, Sr. Solutions Architect
I am a Senior Database Specialist Solutions Architect at AWS, focused on helping customers across diverse industries optimize their database solutions. I specialize in designing, migrating, and enhancing SQL Server workloads on managed database platforms such as Amazon RDS, Amazon RDS Custom, and Babelfish for Aurora PostgreSQL.
-
Mastering Database Migration: A DBA’s Roadmap from SQL Server to PostgreSQL
As organizations move from SQL Server to PostgreSQL to reduce costs and embrace open-source databases, a well-planned migration is crucial. This session covers different migration pathways, schema conversion, stored procedure translation, performance tuning, and workload optimization while highlighting key differences in indexing, partitioning, and concurrency control. Attendees will explore migration tools such as AWS DMS, Babelfish, and pgLoader, learn best practices for high availability and disaster recovery, and tackle challenges like T-SQL to pgSQL conversion. Through real-world case studies, DBAs will gain a clear roadmap for a seamless transition and for maximizing PostgreSQL’s capabilities.
-
T-SQL to PostgreSQL: Leveraging Amazon Q Developer and AWS DMS SC
This technical session showcases how Amazon Q Developer streamlines the conversion of embedded T-SQL code to PostgreSQL through IDE integration. We'll demonstrate how effective prompt engineering combined with AWS DMS Schema Conversion's GenAI capabilities can dramatically reduce conversion time and effort. The session highlights practical examples of code conversion using Visual Studio and VS Code, emphasizing accurate results with minimal post-conversion adjustments.
Mohamed Kabiruddin
Product Manager, Google
Mohamed Kabiruddin
Google, Product Manager
Mohamed is a Senior Product Manager working on Cloud SQL for SQL Server. He is passionate about data and has spent several years working with databases, data warehouses, analytics products, and related technologies. Prior to joining Google Cloud, Mohamed worked in Google’s Core Information Retrieval team focusing on Search and vector embeddings. He also worked on the Microsoft Azure SQL product team before Google.
-
Migrate To and From SQL Server on Google Cloud
This session will provide a comprehensive guide to migrating and integrating your SQL Server databases with Google Cloud. We'll explore various migration options, including native tools and third-party solutions, to ensure a smooth transition with minimal downtime. Whether you're considering a lift-and-shift or a more refactored approach, this session will equip you with the knowledge and strategies for successful SQL Server deployments on Google Cloud. Additionally, if you are looking at migrating from SQL Server to PostgreSQL and don’t know where to start, we have you covered with innovations in Gemini-based database conversion and migration from SQL Server to PostgreSQL. Come and learn more. Finally, learn from the team at Buildertrend about how they migrated to Cloud SQL for SQL Server to benefit from enterprise features such as High Availability and Disaster Recovery.
Monica Morehouse (Rathbun)
Consultant, Denny Cherry and Associates Consulting
Monica Morehouse (Rathbun)
Denny Cherry and Associates Consulting, Consultant
Monica Morehouse (Rathbun), a Microsoft MVP for Data Platform, resides in Virginia and brings two decades of experience across various database platforms, with a particular focus on SQL Server and the Microsoft Data Platform. She is a frequent speaker at IT industry conferences, where she shares her expertise on performance tuning and configuration management for both on-premises and cloud environments. Monica leads the Hampton Roads SQL Server User Group. Passionate about SQL Server and its community, she is dedicated to giving back in any way she can. You can often find her online (@sqlespresso) offering helpful tips or blogging at sqlespresso.com.
-
Solving Real-World SQL Server Performance Problems
Are your users frustrated by slow reports? Do your SQL Server instances—on-premises or in Azure—struggle under high demand? Whether you manage a single server or a large-scale environment, performance tuning is essential, and it doesn’t have to be overwhelming. In this full-day session, learn how to identify and resolve performance bottlenecks using a wide range of tools, scripts, and best practices. We’ll start with practical techniques for analyzing your environment, reading execution plans, and tuning for performance. You'll gain a clear understanding of how everyday maintenance tasks—and even infrastructure—can impact your server’s responsiveness. This session focuses on SQL Server 2019 and newer, including Azure SQL Database and SQL Server 2025, covering the latest performance enhancements and cloud-specific considerations. We’ll walk through real-world examples of common performance problems and how to fix them using straightforward, repeatable methods. You’ll leave with: • A checklist of key performance areas to evaluate in your environment—whether in the cloud or on-prem • Strategies for addressing both query-level and server-level issues • Insights into how SQL Server and Azure features can work for—or against—you • Confidence to apply what you’ve learned, regardless of your current skill level. Designed for DBAs, developers, and anyone responsible for SQL Server performance, this session emphasizes practical, real-world solutions you can use right away—on any platform.
-
Query Store: The simplest way to Troubleshooting Performance
-
Automatic Performance Tuning
Starting with SQL Server 2017, Microsoft introduced automatic tuning features that provide insight into potential query performance problems, recommend solutions, and automatically fix those identified problems. With each new version of SQL Server, they are building in more and more of these offerings. It's important to understand where automatic tuning helps, what problems it still doesn't solve, and how DBAs who have been query tuning for years should approach it and incorporate it into their toolbox. In this session, we dive into what is offered with the traditional on-prem installs as well as what Azure offers in the cloud. Throughout the session we will examine each of these features, touching on what they are, how they are used, and how to implement them. We will conclude with diving into Intelligent Query Processing and how it can help you obtain the maximum performance from your queries with minimal effort. Join me as we take performance tuning to the next level.
-
Top 5 Performance Considerations for Database Cloud Environments
As organizations move to cloud platforms, optimizing performance is crucial for scalability, efficiency, and cost savings. In this session, we’ll cover the top five performance considerations for cloud deployments: • Cloud hardware and its impact • Getting the best performance from your infrastructure • Managing TempDB usage • Key database configuration best practices • Avoiding cloud throttling These factors can greatly affect your workload's performance. We'll look at how cloud resource choices like CPU, memory, and storage impact performance and costs. We'll also discuss infrastructure configuration, including network design and resource allocation, that influences performance and availability. We’ll explore TempDB settings, which are vital for database performance but often limited in cloud environments. Finally, we’ll cover cloud throttling, how it works, and its potential effects on database performance. By understanding these considerations, you can better optimize performance and reliability in the cloud.
-
Becoming Azure SQL DBA – Performance Monitoring, Tuning, and Alerting
In this session you will learn how to extend your Azure SQL DBA skills in the domain of performance monitoring, tuning, and alerting from the perspective of on-premises DBA. While there are similarities between a fully-managed Azure SQL PaaS service and SQL Server, in this session you will gain a deeper understanding of performance monitoring, troubleshooting, tuning and alerting specific to Azure. Learn how to use the cloud-native DB watcher monitoring solution and Query Performance Insights to monitor and identify database performance issues in Azure. We'll step through automated tuning and automated plan regression correction (APRC) and demonstrate, using Resource Health, to understand the health of the environment as well as setting up alerts to quickly identify performance issues. Finally, we'll discuss how to optimize performance with resource right-sizing, choosing the appropriate storage type, and configuring file structures. In each of the areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role – highlighting what responsibilities are new, which ones stay the same, and what is shared or fully delegated to Microsoft. You will walk away with an understanding of the relevant DBA skills you need to evolve as an Azure SQL DBA.
-
Becoming Azure SQL DBA – New Opportunities in Azure – Panel Discussion
As an Azure SQL DBA, your role expands beyond its traditional scope. This shift in responsibilities creates exciting opportunities to develop new cloud skills – from integrating with diverse Azure services to adopting modern coding practices with AI – empowering you to grow and innovate. This wrap-up session of the learning pathway includes a 10-minute introduction followed by a dynamic 50-minute interactive panel discussion. Bring your questions and engage directly with experts ready to share insights and practical guidance. Moderated by Bob Ward, this session is your opportunity to clarify concepts, explore best practices, and connect with industry leaders.
-
SQL Server Administration Basics: Laying the Foundation for Your DBA Path
In today's data-driven world, SQL Server continues to be a powerhouse for organizations looking to leverage their data effectively. This all-day training session offers practical, actionable insights for optimizing SQL Server environments and ensuring operational efficiency, whether on-premises or in the cloud. We’ll start by breaking down the basics of hardware and performance. You’ll learn how SQL Server uses system resources like CPU, memory, and storage, and how to choose the right setup for your environment. We’ll showcase both on-prem and cloud-based options so you can make smart choices that fit your organization’s needs. From there, we’ll walk through essential day-to-day administration tasks. You’ll learn how to configure your SQL Server environment, set up backups, manage routine maintenance, and build simple disaster recovery plans. We’ll use real-world examples to help you understand what to do, why it matters, and how to handle common challenges that come up in a DBA’s world. In this session, you will: • Learn about critical facets of SQL Server architecture • Examine common configurations & administrative practices • Review high availability & disaster recovery options for SQL Server By the end of the day, you’ll walk away with the confidence and knowledge to start managing SQL Server environments effectively—and a solid foundation to grow from as your experience builds.
Mridula Pandit
Sr. Manager, Navy Federal Credit Union
Mridula Pandit
Navy Federal Credit Union, Sr. Manager
Mri has a career rooted in fintech and has worked with Fortune 100 companies, turning complex data challenges into practical, impactful solutions. With deep expertise in database administration, data architecture, and platform engineering, she has driven data modernization and real-time analytics while supporting AI and machine learning initiatives that enable digital transformation. Currently, she works as a Senior Data Engineering Manager at Navy Federal Credit Union, leading with a people-first mindset and a commitment to accountability. A proud woman in tech, Mri is passionate about empowering others and fostering inclusive spaces where diverse voices thrive. She is a Redgate Ambassador and has previously spoken at the PASS Summit, sharing her experiences to help others grow in their data journeys. Beyond the data world, Mri is a mom of two, a devoted dog lover, and a curious traveler, always seeking new stories and perspectives.
-
Unpolished, Unpopular, Unapologetic: The Superpower of Being ‘Not Special’
-
WIT Luncheon: Unpolished, Unpopular, Unapologetic: The Superpower of Being ‘Not Special'
I’m not a celebrity or MVP. I don’t have fancy titles or a fan following. I’m just a woman in tech—often the only woman, the only person of color, the only mom in the room. In spaces where feedback is loud but recognition is silent, I’ve learned to lead without a spotlight while navigating the unique pressures that come with being different. This keynote is for everyone in technology—new grads, professionals, leaders—anyone who’s ever felt overlooked, underestimated, or “not special.” I’ll share real stories and lessons learned. Together, we’ll explore overcoming imposter syndrome, honoring our layered identities, and finding empowerment through community, authenticity, and self-worth. You don’t need to be “special” to be powerful. Just be real. Just be you.
-
Decoding Software Purchases: A Vendor-Neutral Guide to Achieving Organizational Buy-In
Choosing the right software solution can be a daunting task, from identifying business needs to navigating internal approvals. In this session, Redgate’s Stephen Fontanella speaks with a Data Professional (Mridula Pandit) about their own experience discovering potential solutions, evaluating their effectiveness, and ensuring the right people are involved in the decision-making process. You’ll learn how to conduct a thorough assessment, engage stakeholders at the right stages, and effectively communicate software needs to decision-makers.
Nathan Giullian
Solution Architect, Enterbridge
Nathan Giullian
Enterbridge, Solution Architect
Nathan is an Engagement Manager at EnterBridge and previously worked as a Solution Architect for Business Intelligence using Microsoft tools. His passion for business intelligence began during his studies at Brigham Young University, where he majored in Economics. Nathan holds multiple Microsoft certifications, including Power Platform Functional Consultant, Power Platform App Maker, Power BI Data Analyst, and Fabric Analyst. He is also certified in DAX, Six Sigma, Power Platform, Azure, and Market Research.
-
How Much is Fabric | Strategies for Estimating Fabric Spend
In today's rapidly evolving digital landscape, accurately estimating Fabric capacity expenses is crucial for optimizing resource allocation and managing costs effectively. This session will delve into the various strategies and best practices for estimating capacity expenses within the Microsoft Fabric ecosystem. Attendees will gain insights into the key factors influencing Fabric costs, including workload patterns, resource utilization, and scaling considerations. Through real-world examples and expert guidance, participants will learn how to develop robust cost estimation models that align with their organization's budgetary goals and operational requirements. Whether you're a business intelligence developer, IT manager, or financial analyst, this session will equip you with the knowledge and tools needed to make informed decisions about Fabric capacity planning and expense management.
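As a back-of-envelope illustration of the kind of estimation model the session describes, the sketch below computes monthly spend for a pay-as-you-go Fabric capacity from its Capacity Unit (CU) count and active hours. The per-CU-hour rate is an assumed placeholder, not official pricing; check the Azure pricing page for your region.

```python
# Rough monthly cost model for a pay-as-you-go Fabric F SKU.
# The per-CU-hour rate used below is an ILLUSTRATIVE ASSUMPTION,
# not official Microsoft pricing.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_capacity_cost(capacity_units: int,
                          rate_per_cu_hour: float,
                          active_hours: float = HOURS_PER_MONTH) -> float:
    """Estimate monthly spend for a capacity running `active_hours`.

    Pausing a capacity stops the compute meter, which is why the
    active-hours knob matters so much for cost estimates.
    """
    return capacity_units * rate_per_cu_hour * active_hours

# An F64 capacity (64 CUs) running 24x7 at an assumed $0.18/CU-hour:
always_on = monthly_capacity_cost(64, 0.18)

# The same SKU paused outside business hours (~220 active hours/month):
business_hours = monthly_capacity_cost(64, 0.18, active_hours=220)
```

Varying the active-hours input against observed workload patterns is one simple way to bracket best- and worst-case spend before committing to a SKU.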
Naveen Gupta
Vice President of Growth, Data & AI, Yugabyte, Inc.
Naveen Gupta
Yugabyte, Inc., Vice President of Growth, Data & AI
Naveen Gupta is a Vice President of the Product Growth team at Yugabyte, Inc., primarily focusing on product innovation in data and artificial intelligence. Naveen has worked in engineering and product management roles, developing and launching several software products in enterprise applications and enterprise data management.
-
Building Ultra-Resilient GenAI Applications with Distributed PostgreSQL
As AI applications evolve from proof-of-concept to production scale, standalone vector databases and search solutions face critical limitations around data synchronization, ACID compliance, and enterprise resilience. This session explores how PostgreSQL-compatible distributed databases can address these challenges while maintaining the familiar developer experience. We'll cover: why standalone vector databases create operational friction in production GenAI systems; implementing RAG architectures with pgvector that scale horizontally across regions; multi-agent patterns emerging in modern AI applications and their infrastructure requirements; and ultra-resilience patterns for surviving peak traffic, grey failures, and multi-region disasters.
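As a rough sketch of the retrieval step in such a RAG architecture, the toy example below mimics in plain Python what pgvector's cosine-distance operator (`<=>`) computes server-side; the documents and embeddings are hypothetical stand-ins for model-generated vectors.

```python
# A toy, in-memory illustration of the nearest-neighbor retrieval that
# pgvector performs inside the database (its `<=>` operator returns
# cosine distance). Embeddings here are tiny hand-made vectors; a real
# system would store one model-generated embedding per document.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

documents = {
    "postgres scales horizontally": [0.9, 0.1, 0.0],
    "kubernetes pod scheduling":    [0.1, 0.9, 0.0],
    "multi-region failover":        [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    """Return the k documents closest to the query, as RAG retrieval does."""
    ranked = sorted(documents,
                    key=lambda d: cosine_distance(documents[d], query_embedding))
    return ranked[:k]

top = retrieve([0.8, 0.2, 0.1])  # -> ["postgres scales horizontally"]
```

Keeping this lookup inside a PostgreSQL-compatible database, rather than in a separate vector store, is precisely what avoids the data-synchronization friction the session describes.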
Naveen Samba
Principal Engineering Architect Manager, Microsoft
Naveen Samba
Microsoft, Principal Engineering Architect Manager
Senior IT professional with broad experience in IT leadership, solution architecture and development, and infrastructure design and optimization, with deep skills in the Microsoft Data Platform, both on-prem and in the cloud. Currently helping customers migrate from competing data platforms (Oracle, Exadata, Teradata, Netezza, DB2, PostgreSQL, MySQL, MongoDB, etc.) to Azure.
-
Deep-Dive Workshop: Accelerating AI, Performance and Resilience with SQL Server 2025
SQL Server 2025 delivers powerful new capabilities for AI, developer productivity, and performance, all ready to run at scale on modern infrastructure. In this half-day deep-dive workshop, you'll get a first look at what's new and how to put it to work immediately. Learn directly from Microsoft and Pure Storage experts how to harness SQL Server’s vector capabilities and walk through real-world demos covering semantic search, change event streaming, and using external tables to manage vector embeddings. You'll also see how new REST APIs securely expose SQL Server's internals for automation and observability, including snapshots, performance monitoring, and backup management. The workshop wraps up with insights into core engine enhancements: optimized locking and faster backups using ZSTD compression, all running on a modern Pure Storage foundation that brings scale and resilience to your data platform. Whether you're a DBA, developer, or architect, this session will equip you with practical strategies for harnessing Microsoft SQL Server 2025 and Pure Storage to accelerate your organization's AI and data goals.
Navnit Shukla
Sr. Solution Architect – Data and AI, Amazon Web Services
Navnit Shukla
Amazon Web Services, Sr. Solution Architect – Data and AI
Navnit Shukla is a Senior Specialist Solutions Architect at AWS, author of “Data Wrangling on AWS”, and a leading voice in GenAI, analytics, and modern data architectures. With 14+ years of experience, he has helped Fortune 500 companies implement AI-powered platforms using Amazon Bedrock, OpenSearch, Redshift, and more—driving millions in business impact. Navnit is a prolific educator and content creator, with 100+ technical videos on YouTube, top-ranked AWS blog contributions, and co-authored data engineering curricula for AWS Academy and Coursera. He’s a frequent speaker at AWS re:Invent and global industry events.
-
AI Ready Data Blueprints
In today’s AI-driven world, having the right data strategy is crucial for unlocking the full potential of artificial intelligence. AI Ready Data Blueprints provide a structured approach to transforming raw data into a high-quality, scalable, and accessible asset for AI and machine learning applications. This session explores the key principles of designing AI-ready data architectures, covering best practices for data ingestion, transformation, storage, and governance. We will discuss how organizations can leverage modern data lakes, lakehouses, and data warehouses to ensure their data is optimized for AI workloads. Attendees will gain insights into schema evolution, data versioning, metadata management, and real-time processing—all critical components for building a robust AI data foundation. Additionally, we will showcase how cloud-native technologies like AWS Glue, Amazon S3, Apache Iceberg, and Amazon Athena can enable seamless data preparation for AI-driven insights. Through real-world examples and blueprints, you will learn how to implement incremental data processing, automated data quality checks, and cost-effective storage strategies to enhance AI performance. Whether you are a data engineer, architect, or AI practitioner, this session will equip you with the knowledge and frameworks needed to make your data AI-ready.
Neha Prakash
Business Analyst, KR3 Information Systems Inc.
Neha Prakash
KR3 Information Systems Inc., Business Analyst
Results-driven professional with over 7 years of experience in project and product management, business analysis, data analytics, and technology-driven solutions. Proven track record in leading strategy, optimizing identity and access management (IAM) processes, and delivering impactful analytics to enhance business performance.
-
Secure, Compliant, and Insightful: Data Governance for AI-Driven Analytics
As organizations increasingly integrate AI and machine learning into their analytics pipelines, the need for robust data governance has never been greater. But how can you innovate with AI while ensuring your data practices remain secure, compliant, and trustworthy? In this session, we will explore the intersection of advanced analytics and governance, offering practical guidance on building governance frameworks that support—not stifle—AI initiatives. You will learn how to structure policies that protect sensitive data, ensure regulatory compliance (GDPR, HIPAA, etc.), and promote transparency in AI/ML models. We'll discuss key components such as data lineage, access controls, ethical AI practices, and auditability. We'll examine how modern tools like Microsoft Purview, Power BI, and Azure ML can enable governance at scale. Whether you are a data professional, analyst, or AI practitioner, this session will equip you with strategies to drive insights from your data while upholding the principles of security, compliance, and accountability.
Nick Karpov
Databricks
Nick Karpov
Databricks
Nick Karpov is a Staff Developer Advocate at Databricks, where he helps data and AI practitioners unlock the full potential of the Lakehouse platform. With a career spanning software engineering, data infrastructure, and developer relations, Nick is passionate about bridging the gap between complex technical capabilities and real-world applications.
-
Cooking with Databricks: Pipelines, BI, and AI Apps in the Ghost Kitchen
People often experience Databricks through isolated lenses: a data pipeline here, a BI dashboard there, or a standalone ML demo. But the platform is most powerful when those pieces connect. Enter Caspers Ghost Kitchen—a simulated food-delivery business that serves as a unifying narrative across data engineering, analytics, and AI applications. In this session, we’ll ingest raw streaming orders into curated gold tables with Lakeflow, query those assets in seconds using AI/BI Genie, and close the loop with an AI-powered refund agent hosted in Databricks Apps and Lakebase. This isn’t a feature deep dive—it’s a cohesive, end-to-end journey showing how the Databricks platform supports the entire data lifecycle. You’ll leave with a clear picture of how to connect ingestion, analytics, and AI apps in a way that resonates across roles, from engineers to analysts to executives.
Nitish Reddy Kotha
Engineering Architect – Microsoft Azure Data & AI, Microsoft
Nitish Reddy Kotha
Microsoft, Engineering Architect – Microsoft Azure Data & AI
Nitish Reddy Kotha is an Engineering Architect in Microsoft’s SQL Engineering group, where he focuses on database transformation into Azure Database offerings. After completing his master’s in Computer Science at Ball State University, Indiana, his journey over the last 15 years as a developer and database consultant took him through Cisco, where he worked on enterprise-scale database technologies such as RAC, high availability, Unix scripting, and replication features. He later managed Exadata migrations and data consolidation to SQL databases in various roles before joining Oracle’s Information Integration team, where he worked alongside peers who later became part of the founding team at Striim, gaining deep expertise in cross-database heterogeneous replication, CDC, ETL, and critical systems. Today at Microsoft, he applies his multi-database knowledge to bridge modernization gaps and build IP that supports future-ready architectures. Nitish believes that learning from peers and mentors has been key to his success, driving his commitment to constant improvement. You can connect with Nitish at https://www.linkedin.com/in/nitish-reddy-kotha-2225b5bb/
-
Architecting Database Transformation: From Migration to Modernization and Beyond
Database transformation can take many forms depending on stakeholder needs. We’ll begin with migration patterns that minimize app refactoring while ensuring reliability. We’ll then dive into modernization techniques, where refactoring becomes essential. With this strategy, post-migration performance tuning of the database and application stack takes center stage, covering indexing and partitioning strategies, engine-specific optimizations, and other targeted changes that drive efficiency and scalability at enterprise scale. Finally, we’ll look ahead to transformation patterns that enable new capabilities, including vector indexes and AI copilots. Throughout the session, we’ll connect each phase with real-world tools and platforms (Ora2Pg, ADF, SSMA, Copilot integrations, monitoring solutions), sharing practical lessons learned from enterprise workloads. Attendees will leave with a clear framework and actionable approaches for guiding their organizations from legacy systems to future-ready, cloud-native architectures.
Noah Sommerfeld
Solution architect, Databricks
Noah Sommerfeld
Databricks, Solution architect
As a Delivery Solutions Architect with Databricks Field Engineering, I help make sure initiatives at our biggest and most strategic customers go live on time and on budget. I get a front-row view of the real-world challenges companies are facing as AI transforms the data ecosystem, and explodes both data volumes and stakeholder counts.
-
Sponsor Luncheon: Simplify Your Data Storytelling with Databricks AI/BI and Apps
Ready to turn your data into compelling stories without wrestling with complex code? Join us for an engaging Lunch and Learn where we'll show you how Databricks AI/BI transforms the way you work with data—making analytics easier and more conversational. We'll start by showing you how to create insightful dashboards in just minutes, not days, with AI-powered tools that handle the complex work for you. Simply ask questions in plain English, let AI recommend visualizations, and watch your data come to life. But sometimes dashboards aren't enough. Your business users might need something more engaging—a custom app that allows them to easily filter, explore, and act on their data in ways that static dashboards can't quite match. This session is perfect for anyone who wants to spend less time coding and more time discovering insights. You'll leave with practical knowledge of modern data storytelling tools and a clear understanding of when to build a dashboard versus when to level up to an interactive app. This session will cover: – AI/BI Dashboards: Create interactive visualizations with minimal code using Databricks AI/BI's intuitive interface – AI/BI Genie: Leverage AI-assisted features to ask questions, generate insights, and spot trends – Databricks Apps: Discover when and how to build interactive apps that give business users superpowers No PhD in data science required—just bring your curiosity and your toughest data questions!
OJ Ngo
IT, DH2i
OJ Ngo
DH2i, IT
With over two decades of experience in IT, Thanh "OJ" Ngo is a seasoned technologist and inventor dedicated to streamlining processes and finding creative solutions to everyday technical problems. As co-founder and principal architect of DH2i Company's core technology, OJ brings his unique blend of technical expertise and innovative thinking to the development of groundbreaking solutions that transform the way organizations approach IT challenges.
-
Maximize Your SQL Server Data Estate: Unlock Unified HA/DR at Lower Cost
-
How to Create a SQL Server AG Cluster Between On-Premises and AKS/EKS
-
How to Provision a SQL Server Availability Group Cluster in AKS/EKS
The path to true high availability for critical SQL Server workloads in the cloud has never been for the faint of heart. For organizations pursuing further modernization by deploying containers in the cloud, the complexity is dialed up even further. Until now… Join this presentation for a step-by-step demonstration showing you two different approaches your organization can employ to drastically simplify the deployment of secure and highly available SQL Server containers in the cloud: APPROACH 1: Use a DxEnterprise Helm chart and StatefulSets to deploy a 3-replica AG in AKS/EKS. APPROACH 2: Use DxEnterprise’s SQL Server Operator to automate the deployment of a customized Availability Group (AG) containing three replicas in AKS/EKS. Both approaches to SQL Server container deployment in EKS/AKS are executable in minutes, and they integrate powerful proprietary benefits like: SQL Server sidecar containers to avoid custom image/support headaches; fully automatic failover for SQL Server Availability Groups in Kubernetes; and zero trust network access tunnels to securely connect any replica, anywhere. A clear path has been paved to peak SQL Server scalability and cost-efficiency with containers in the cloud. Join this session to see how you can get there without sacrificing network security and high availability.
-
How to Build a Secure & Resilient Data Estate for SQL Server-Backed AI Apps
The impending release of SQL Server 2025 and its support for vector databases unlocks a brand-new pathway into the ‘Age of AI’ for organizations across countless verticals. In the same way, it provides a robust and reliable database alternative for organizations that have already ventured into creating their own AI applications. Regardless of the chosen technology, only AI databases architected with a keen focus on scalability, security, and resilience will meet the dynamic needs of modern enterprises. Join this demo-centric presentation to be shown step-by-step how your organization can leverage Azure AI, Microsoft SQL Server 2025, and DH2i to build a comprehensive solution for deploying enterprise AI at scale. We’ll show you how you can use a SQL Server Operator to automate the deployment of an Availability Group in Kubernetes, providing an optimally scalable, secure, and highly available database backbone for your AI applications. Additionally, we’ll demonstrate fully automatic failover of an AI workload between Kubernetes replicas—a non-negotiable capability for achieving maximum resiliency. Attendees will leave with a full, actionable framework for building highly available, production AI apps with Azure AI, Microsoft SQL Server 2025, and DH2i.
-
How to Migrate SQL Server Workloads to Red Hat OpenShift with DxEnterprise
As organizations seek to modernize their infrastructure and improve SQL Server scalability, many are turning to containerization and orchestration platforms like Red Hat OpenShift. Migrating existing SQL Server workloads to these new environments can be complex and daunting, especially when the task at hand involves migrating cross-platform from Windows to Linux for the first time. In this step-by-step demonstration, we’ll show you how you can deploy a secure, cross-platform SQL Server Availability Group (AG) that seamlessly spans from an on-premises Windows Server node to a newly created OpenShift cluster in Azure. We'll automate the deployment of this unique AG using DxEnterprise’s SQL Server Operator for Kubernetes, and be sure to demonstrate: – AG customization – The ability to control the number of replicas, async or sync replication, etc. – The speedy workload migration from Windows to OpenShift using the AG – Fully automatic, database-level HA for the new OpenShift workload with DxEnterprise If your organization has any SQL Server modernization ambitions at all and is eyeing OpenShift as a potential hub for virtualization and container orchestration, make this session a priority. You'll leave with an actionable understanding of an easy, secure, and highly available approach to OpenShift migration.
-
How to Build a Secure & Resilient Data Estate for SQL Server-Backed AI Apps
-
How to Unify a SQL Server Availability Group Across Windows and Linux
-
How to Build Your Database as a Service with SQL Server Containers & DH2i
-
How to Migrate SQL Server Workloads to Red Hat OpenShift with DxEnterprise
Ola Hallengren
Chief Data Platforms Engineer, Saxo Bank
Ola Hallengren
Saxo Bank, Chief Data Platforms Engineer
Ola Hallengren is a Data Platform MVP and the creator of the "SQL Server Maintenance Solution".
-
The Latest Performance Improvements in Azure SQL DB, MI and SQL Server 2025
In this cutting-edge session, we will explore the latest performance improvements in Azure SQL Database, Azure SQL Managed Instance, and SQL Server 2025. We’ll dive into advancements in the optimizer (including intelligent query processing), locking, storage engine, availability groups, and more. I will share my experiences of the features from early preview testing and demonstrate these features in action. Finally, we’ll address the questions: What problems do these features solve, how do they work, and in which scenarios will you benefit from upgrading?
-
The Hidden (and Documented) Gems of Ola Hallengren’s Maintenance Solution
Padma Rama Divya Achanta
Sr. SQL & Cloud Database Administrator, CDW
Padma Rama Divya Achanta
CDW, Sr. SQL & Cloud Database Administrator
Padma Rama Divya Achanta is a seasoned Cloud Data Strategist and Senior Consultant with over a decade of experience in cloud infrastructure, data engineering, and automation. She specializes in designing intelligent, scalable data platforms using Microsoft Azure, SQL Server, and Infrastructure as Code tools like Terraform and Ansible. Padma has led large-scale database modernization and migration initiatives, optimizing performance and delivering significant cost savings for enterprise clients. As the founder of DBAVaults.com, a platform dedicated to empowering data professionals with practical insights on cloud databases and automation, Padma actively shares technical articles, scripts, and architectural solutions. She has authored multiple peer-reviewed publications and book chapters on topics such as AI-driven cloud integration, sustainable tech innovation, and hybrid data architecture. Her unique ability to simplify complex technologies and drive innovation through automation makes her a sought-after speaker, mentor, and strategic advisor in the data community.
-
Automate Microsoft Azure SQL Database Creation With Terraform
-
Kickstart Your IaC Journey: Terraform Fundamentals for Azure Automation
-
Mastering Azure SQL Database Essentials for Modern Data Professionals
Azure SQL Database is more than just "SQL in the cloud"—it's a fully managed, intelligent database platform that can transform the way you design, deploy, and manage data systems. Yet many data professionals only scratch the surface, missing out on performance, scalability, security, and cost-optimization features that are built-in. In this session, we’ll deep dive into the essential concepts of Azure SQL Database that every modern data professional should know. You'll learn how to confidently choose the right service tier, leverage built-in high availability, secure your database using Microsoft Defender for SQL, and apply actionable cost-saving techniques like auto-pausing and reserved capacity that can make a real difference in your cloud bill. Whether you’re just moving from on-premises or supporting hybrid architectures, this talk will equip you with practical strategies, real-world examples, and insights from the field to help you unlock the full potential of Azure SQL Database.
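To make the auto-pausing point concrete, here is a minimal, hypothetical cost comparison between provisioned and serverless compute; the per-vCore-hour rate and usage figures are assumptions for illustration, not quoted Azure prices.

```python
# Back-of-envelope comparison of provisioned vs. serverless compute
# for Azure SQL Database. Rates are ILLUSTRATIVE ASSUMPTIONS only;
# real prices vary by region, tier, and reservation discounts.

HOURS_PER_MONTH = 730

def provisioned_cost(vcores, rate_per_vcore_hour):
    # Provisioned compute bills for every hour, used or not.
    return vcores * rate_per_vcore_hour * HOURS_PER_MONTH

def serverless_cost(vcores, rate_per_vcore_hour, active_hours):
    # Serverless bills only while the database is active; with
    # auto-pause enabled, idle periods stop the compute meter
    # (storage is still billed, omitted here for simplicity).
    return vcores * rate_per_vcore_hour * active_hours

# A 4-vCore dev database active ~6 hours per weekday (~130 h/month),
# at an assumed $0.15/vCore-hour for both models:
always_on = provisioned_cost(4, 0.15)
pay_per_use = serverless_cost(4, 0.15, 130)
```

For intermittently used databases the gap can be large, which is why the session calls auto-pausing out as one of the highest-impact cost levers.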
Pam Lahoud
Principal Program Manager, Microsoft
Pam Lahoud
Microsoft, Principal Program Manager
Pam Lahoud has been with Microsoft since 2006 and is currently a Program Manager on the Microsoft Fabric Databases Customer Advisory Team, based in Redmond, WA, USA. She is passionate about Azure SQL and SQL Server performance and has focused on performance tuning and optimization, particularly from the developer perspective, throughout her career. She is a SQL 2008 MCM with over 25 years of experience working with Azure SQL and SQL Server and is co-author of the book "Learn T-SQL Querying: A guide to developing efficient and elegant T-SQL code" (https://aka.ms/LearnTSQLQuerying).
-
Becoming Azure SQL DBA – Security, Compliance, Threats, Connectivity
In this session you will learn how to evolve your Azure SQL DBA skills in the domain of security, compliance, authentication and connectivity, from the perspective of an on-premises DBA now supporting databases in Azure. Using the example of a fully-managed Azure SQL PaaS service, you will gain a deep understanding of the security and compliance concepts the platform offers. You will understand authentication and best practices related to using Windows Authentication and Entra ID with your Azure SQL resources, and how they map to resources migrated from your on-premises SQL Server. We will review how to use advanced threat protection to automatically detect security vulnerabilities. You will learn about Microsoft Purview, which helps you gain visibility into, safeguard, and manage sensitive data, and govern critical data risks and regulatory requirements in Azure. We will also cover the basics of networking in Azure SQL and what is required to securely connect to, and access, your Azure SQL resources. In each of the areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role, highlighting which responsibilities are new, which stay the same, and which are shared or fully delegated to Microsoft. You will walk away with an understanding of the relevant DBA skills you need to evolve as an Azure SQL DBA.
-
Becoming Azure SQL DBA – New Opportunities in Azure – Panel Discussion
As an Azure SQL DBA, your role expands beyond its traditional scope. This shift in responsibilities creates exciting opportunities to develop new cloud skills – from integrating with diverse Azure services to adopting modern coding practices with AI – empowering you to grow and innovate. This wrap-up session of the learning pathway includes a 10-minute introduction followed by a dynamic 50-minute interactive panel discussion. Bring your questions and engage directly with experts ready to share insights and practical guidance. Moderated by Bob Ward, this session is your opportunity to clarify concepts, explore best practices, and connect with industry leaders.
-
Becoming Azure SQL DBA – Performance Monitoring, Tuning, and Alerting
In this session you will learn how to extend your Azure SQL DBA skills in the domain of performance monitoring, tuning, and alerting, from the perspective of an on-premises DBA. While there are similarities between a fully-managed Azure SQL PaaS service and SQL Server, in this session you will gain a deeper understanding of performance monitoring, troubleshooting, tuning, and alerting specific to Azure. Learn how to use the cloud-native DB watcher monitoring solution and Query Performance Insights to monitor and identify database performance issues in Azure. We'll step through automated tuning and automated plan regression correction (APRC), and demonstrate how to use Resource Health to understand the health of the environment, as well as how to set up alerts to quickly identify performance issues. Finally, we'll discuss how to optimize performance with resource right-sizing, choosing the appropriate storage type, and configuring file structures. In each of the areas and throughout the session, we will map on-premises SQL Server DBA responsibilities to the Azure SQL DBA role, highlighting which responsibilities are new, which stay the same, and which are shared or fully delegated to Microsoft. You will walk away with an understanding of the relevant DBA skills you need to evolve as an Azure SQL DBA.
Paresh Motiwala
Developer, Harris County, TX
Paresh Motiwala
Harris County, TX, Developer
Paresh Motiwala, PMP, is an Azure enthusiast and Data Platform Manager (Boston) who has led several large SQL implementations, migrations, and upgrades. He was recently selected as a "Speaker Idol" finalist for the PASS Summit 2018. He has managed multi-terabyte OLTP databases and loves learning and talking about Big Data. He has also been a Senior SQL DBA and a Solutions Architect at Fortune 100 companies, and helps organize, and speaks at, many SQL Saturdays, Azure Bootcamps, and user groups. He is certified in Big Data Analytics and FinTech (MIT). A project manager, public speaker, avid singer, cook, open networker, and stand-up comedian, he teaches public speaking, debating, interviewing, and group discussion skills. He also mentors DBAs in the Boston area, and children around the globe, via www.circlesofgrowth.com
-
Data Loss Prevention Using Digital and Database Forensics
Data loss can devastate businesses and individuals, making prevention critical in today’s digital era. Leveraging digital and database forensics offers tools to identify vulnerabilities, recover lost data, and implement preventive measures. This session explores how forensic techniques proactively safeguard critical assets. Attendees will learn key methodologies such as detecting unauthorized access, tracing data breaches, and analyzing logs or HDD dumps to uncover hidden threats. We'll discuss practical strategies to validate backup integrity, address weak points in database configurations, and enhance system resilience. Along the way, you'll gain insights into concepts like the 4th Amendment, Chain of Custody, Tagging and Bagging Evidence, and Honeypots to strengthen security protocols. With real-world examples, this session emphasizes actionable tips to prevent data loss, preserve evidence for audits or legal needs, and ensure compliance. Whether you’re a DBA, IT professional, or security expert, you'll leave equipped with knowledge to mitigate risks, bolster security, and maintain data integrity effectively.
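As a minimal sketch of the log-analysis idea mentioned above, the example below scans hypothetical authentication log lines for repeated failed logins from a single source, one of the simplest unauthorized-access signals a forensic review looks for; the log format and threshold are assumptions, not a standard.

```python
# A minimal sketch of forensic log triage: scan authentication log
# lines for failed logins and flag source IPs that exceed a
# brute-force threshold. The log format here is HYPOTHETICAL; adapt
# the parsing to your own audit/event log layout.
from collections import Counter

SAMPLE_LOG = [
    "2024-05-01 02:14:03 LOGIN FAILED user=sa ip=203.0.113.9",
    "2024-05-01 02:14:05 LOGIN FAILED user=sa ip=203.0.113.9",
    "2024-05-01 02:14:08 LOGIN FAILED user=admin ip=203.0.113.9",
    "2024-05-01 08:30:11 LOGIN OK user=jdoe ip=198.51.100.4",
    "2024-05-01 09:02:47 LOGIN FAILED user=jdoe ip=198.51.100.4",
]

def suspicious_ips(lines, threshold=3):
    """Return source IPs with `threshold` or more failed logins."""
    failures = Counter(
        line.split("ip=")[1] for line in lines if "LOGIN FAILED" in line
    )
    return {ip for ip, count in failures.items() if count >= threshold}

flagged = suspicious_ips(SAMPLE_LOG)
```

In a real investigation the flagged entries, together with timestamps and the original log files, would be preserved under chain-of-custody procedures rather than analyzed in place.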
-
Brewing Data Excellence: A DBA's Morning Routine
-
Unleashing Community Power: Secrets to Running SQL Saturdays & User Groups
-
Decoding Severity 16 Error in SQL
Severity 16 errors in SQL Server are a common occurrence that can disrupt daily operations. These errors typically stem from user input or permissions issues, signaling problems that must be resolved to maintain system reliability. While they may appear straightforward at first glance, decoding these errors often involves navigating a maze of root causes, ranging from data type mismatches and constraint violations to invalid object references or access permissions. This session will equip attendees with the tools and techniques needed to systematically identify, analyze, and resolve Severity 16 errors. We’ll explore how to interpret error messages effectively, trace their origins, and apply proven troubleshooting strategies to resolve them. Real-world examples and practical scripts will be shared to simplify the error resolution process and ensure your SQL Server environment remains robust. Whether you’re an experienced DBA or a novice, this session will provide actionable insights to empower you to handle Severity 16 errors with confidence. By the end, you’ll have a deeper understanding of these errors and the confidence to decode and address them efficiently.
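As a taste of the kind of practical script the session promises, here is a small, non-exhaustive triage helper that maps a few commonly seen Severity 16 error numbers to likely root causes; the mapping is illustrative and should be cross-checked against the descriptions in sys.messages on your own instance.

```python
# A small triage helper for common SQL Server Severity 16 error
# numbers. The mapping covers a few frequently seen codes; it is a
# starting point for troubleshooting, not an exhaustive catalog.
SEVERITY_16_CAUSES = {
    208:  "Invalid object name: check spelling, schema, and database context",
    245:  "Conversion failed: data type mismatch between value and column",
    515:  "Cannot insert NULL: a NOT NULL column was left unset",
    547:  "Constraint conflict: an FK/CHECK constraint rejected the statement",
    8134: "Divide by zero: guard the denominator, e.g. NULLIF(x, 0)",
}

def triage(error_number: int) -> str:
    """Map an error number to a likely root cause, or defer to sys.messages."""
    return SEVERITY_16_CAUSES.get(
        error_number,
        "Not in the quick list: query sys.messages for the full error text",
    )

hint = triage(547)
```

A helper like this slots naturally into the error-handling path of application code or monitoring scripts, turning a raw error number into an actionable first hypothesis.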
-
Data Loss Prevention Using Digital and Database Forensics
-
Defuse, Don’t Diffuse: Mastering the Gentle Art of Verbal Self-Defense
-
Lightning Talks-02: A Rapid-Fire Exploration of Key Tech Topics
Patrick O'Halloran
Principal Solutions Architect, WhereScape
Patrick O'Halloran
WhereScape, Principal Solutions Architect
Patrick O’Halloran has had a long career in technical and customer-facing roles. He is a self-described ‘math nerd’ who began his programming career at age 13. He’s been involved in large-scale software development throughout his career, and is now at WhereScape, where he has filled a variety of roles and currently serves as a Principal Solutions Architect.
-
The Modern Data Warehouse: Foundation for Scalable, Intelligent Analytics
-
The Need for Speed: Agile Prototyping in Microsoft Fabric
In this session, discover how agile principles can revolutionize data warehouse development through rapid prototyping. We'll examine automated modeling techniques, iterative schema design, and efficient workflows that dramatically shorten cycles from concept to deployment. Attendees will gain real-world insights into integrating varied data sources, validating prototypes early to minimize risks, and scaling solutions to production, all supported by practical examples and best practices for adapting to changing business requirements.
-
The Modern Data Warehouse: Foundation for Scalable, Intelligent Analytics
As organizations push toward real-time insights, AI-driven decisions, and hybrid cloud architectures, the role of the data warehouse is evolving. No longer a passive repository, today’s modern data warehouse is a dynamic, cloud-first foundation for delivering governed, scalable, and high-performance analytics. In this session, we’ll explore the architectural patterns, design principles, and automation strategies that define modern data warehousing. You’ll learn how to accelerate cloud migrations, leverage metadata-driven automation, and orchestrate complex workflows across diverse data sources—without sacrificing data quality or governance. We’ll also examine practical modernization approaches, from replatforming legacy systems to designing new environments built for agility and speed. Real-world examples will show how enterprises are using intelligent automation and orchestration to reduce time-to-insight, support real-time analytics, and enable continuous innovation. Whether you're managing a traditional on-prem solution, moving to the cloud, or architecting from scratch, this session will provide actionable guidance to help you build a future-ready data warehouse that powers advanced analytics, supports AI, and scales with your business.
-
Supercharging Your Move to Microsoft Fabric with Automation
As organizations look to modernize their data platforms, migrating from SQL Server or Azure Data Warehouse to Microsoft Fabric is a natural next step. But without the right approach, these migrations can be complex, risky, and resource-intensive. WhereScape simplifies and accelerates the journey. By automating repetitive tasks, enforcing best practices, and streamlining end-to-end lifecycle management, WhereScape reduces risk and speeds delivery at every stage of migration. In this session, you will learn how to move existing data warehouse assets into Fabric faster and with greater confidence. See how WhereScape enables rapid prototyping, seamless schema and pipeline conversion, and ongoing management while freeing your team from manual coding. Whether you are planning a full scale migration or taking a hybrid approach, this session will show how WhereScape helps unlock Fabric’s full potential faster, with fewer resources, and at greater scale.
Pavlo Golub
Senior/Lead Developer/Engineer, Cybertec
Pavlo Golub
Cybertec, Senior/Lead Developer/Engineer
Pavlo is a PostgreSQL expert and developer at CYBERTEC. He's been working with PostgreSQL since 2002. He has an M.Ed. with an emphasis in Mathematics and Computer Science from the Central Ukrainian State Pedagogical University, and previously worked as a professor at the Kirovograd Medical Professional College. He is the co-founder of PostgreSQL Ukraine: https://www.facebook.com/groups/postgresql.ua. He develops and maintains the PostgreSQL tools PGWatch2 and pg_timetable.
-
Professional PostgreSQL Monitoring Made Easy
This session provides a comprehensive overview of database monitoring, progressing to a focused exploration of PostgreSQL, along with a breakdown of the importance of key metrics. With many community-developed tools available, we'll highlight some of the more popular options and discuss the challenges associated with different approaches. To help overcome some of these challenges, we'll introduce pgwatch, an open-source tool from Cybertec that offers the simplest possible entry into exhaustive Postgres monitoring. We'll also touch on advanced topics such as anomaly detection and alerting, which can be easily implemented on top of the underlying data tier (TimescaleDB) using the TICK stack or Grafana.
Peter Kruis
SQL Consultant, Monin-IT
Peter Kruis
Monin-IT, SQL Consultant
Peter is a SQL Consultant at Monin-IT who has enjoyed working across the full spectrum of SQL Server since 2010, though he likes performance tuning the most. Since stepping into the speaker scene in 2023, he has shared his knowledge at events like SQLBits and SQL/Data Saturday. Peter’s sessions are known for being informative, demo-filled and delivered with an approachable style that is very suitable for newcomers and accidental DBAs. Whether it’s diving into wait stats, tackling tricky query plans, or just having a laugh while learning something new, Peter is always up for making SQL a little more fun.
-
An introduction to Extended Events
-
Query Store Basics: A DBA’s Best Friend
Ever wondered why your queries are running differently today than they did last week? Query Store in SQL Server can help you figure that out! In this session, we’ll break down the basics of Query Store, designed especially for junior or accidental DBAs. We’ll talk about what Query Store does, how it works and how it evolved over time, and why it’s such a great tool for keeping track of query performance over time. With explanations and demos, you’ll learn how to use Query Store to spot performance issues, dig into query execution plans, and even prevent bad plans from ruining your day. If you’re new to SQL Server or just looking for a practical way to optimize query performance, this session will help you to get started.
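To make the "prevent bad plans from ruining your day" idea concrete, here is an illustrative Python toy (not from the session, and not Query Store's actual API) that flags a plan regression by comparing a query's most recent plan against the best plan recorded for it; all rows and numbers are invented:

```python
# Toy stand-ins for rows you might aggregate from Query Store's runtime
# statistics (query_id / plan_id / average duration); numbers are invented.
history = [
    {"query_id": 42, "plan_id": 1, "avg_duration_ms": 12.0},
    {"query_id": 42, "plan_id": 2, "avg_duration_ms": 480.0},
]

def plan_regressed(rows, factor=10.0):
    """Flag when the most recent plan is far slower than the best plan seen."""
    best = min(r["avg_duration_ms"] for r in rows)
    current = rows[-1]["avg_duration_ms"]
    return current > best * factor

print(plan_regressed(history))  # True: plan 2 is 40x slower than plan 1
```

In SQL Server itself, Query Store surfaces this comparison directly (and can force the faster plan); the sketch only shows the shape of the check.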
-
Waitstats for the accidental DBA
-
Two Developers, One Mission: Make a Test Database That Doesn’t Suck
-
We Organized a Community Event and All We Got Is These Lousy Stickers
Peter Shore
Senior Database Administrator, APCO Holdings
Peter Shore
APCO Holdings, Senior Database Administrator
Peter is a seasoned IT professional with over 30 years of experience. He is a frequent speaker at SQL Saturday, User Groups, PASS Data Community Summit, and other conferences. He took the DBA plunge in 2013, quickly embracing the Microsoft Data Platform and associated community. Peter is comfortable working with physical, virtual, and Azure implementations of SQL Server, focusing on improving performance and reliability. He is also adept at bridging the gap between technical and business language to bring technology solutions to business needs. Additionally, Peter is the president of CBusData in Columbus, Ohio and co-organizer of Data Saturday Columbus.
-
All Day PostgreSQL for the SQL Server DBA
-
Data Data Everywhere: A Look At Data Hoarding
Data has exploded over the last several years in both volume and value. The explosion in volume may mean that you, or rather you on behalf of your employer, ingest, process, and store significant amounts of data. However, this process is potentially missing a piece, namely, getting rid of the data. In many cases there is a fear about doing this data clean-up, a mindset of “we might need that someday,” where need means “we are afraid we missed some value in that data.” This leads to data hoarding. Data hoarding brings risk. At the end of this session you will have: a better understanding of what data hoarding is; a better understanding of the risks of data hoarding; ideas for addressing data hoarding in existing data stores; and ideas for preventing data hoarding in future projects.
-
Infrastructure for the Data Professional: An Introduction
-
Wait Wait Do Tell Me: A Look at SQL Server Wait Statistics
-
Changing Perspective: A Different View of Imposter Syndrome
Phil Brammer
Director of Product Security, Buildertrend Solutions, Inc.
Phil Brammer
Buildertrend Solutions, Inc., Director of Product Security
Phil Brammer is the director of product security at Buildertrend, living in Omaha, Nebraska. He leads a team that works to ensure customers and their data are secure, blending his database knowledge with industry security best practices. A former community speaker, he’s happy to be back to share his story. He enjoys spending time with his wife, Erica, and five kids.
-
Unlock Your Data's Potential: Breakfast with Google Cloud Leadership
Join us for an exclusive, sponsored breakfast designed for data and IT professionals seeking a strategic overview of the entire Google Cloud database ecosystem. This is your opportunity to move beyond individual products and understand the comprehensive strategy that supports your most critical workloads. Connect directly with Google Cloud's database leadership to discuss our commitment to providing choice, performance, and flexibility across all major database requirements.
What You Will Learn
The Full Portfolio: Discover how our range of offerings—from hyperscale solutions like Cloud SQL and AlloyDB to industry-leading services like Spanner and Firestore—fits together to meet every modern data need.
Strategic Partnerships: Get an executive briefing on how our crucial partnerships, including the groundbreaking collaboration with Oracle, enable you to maintain existing investments while accelerating cloud adoption.
The Innovation Roadmap: Engage in a strategic discussion about the future of data infrastructure and how Google Cloud is pioneering AI-driven operations and open-source innovation.
Google Cloud speakers include Raj Pai, Vice President, Product Management, Cloud AI, and Itay Maoz, Senior Director of Engineering. We will also feature a customer panel with our product experts who will discuss how they've successfully leveraged this comprehensive strategy—integrating both Google Cloud native services and partner technologies—to transform their business.
Pinal Dave
SQL Performance Expert, SQL Authority
Pinal Dave
SQL Authority, SQL Performance Expert
Pinal Dave is a SQL Server Performance Tuning Expert and independent consultant with over 24 years of hands-on experience. He holds a Master of Science degree and numerous database certifications. Pinal has authored 14 SQL Server database books and 94 Pluralsight courses. To freely share his knowledge and help others build their expertise, Pinal has also written more than 5,800 database tech articles on his blog at https://blog.sqlauthority.com. Pinal is an experienced and dedicated professional with a deep commitment to flawless customer service. If you need help with any SQL Server Performance Tuning Issues, please feel free to reach out at pinal@sqlauthority.com.
-
SQL Server Performance Tuning: A Practical Approach With a Touch of GenAI
SQL Server performance tuning can be overwhelming, but it doesn’t have to be. In this session, we will break down key strategies that help improve query performance, optimize indexes, and enhance database efficiency. If you already know SQL and want to step into performance tuning, this session is for you. We will cover essential tuning techniques, such as indexing strategies, query execution plans, and common performance bottlenecks. Along the way, we’ll also explore how Generative AI (GenAI) tools can assist in performance analysis, query optimization, and database monitoring. Additionally, we will touch on how Python can be used for database performance analysis and automation. Expect a practical discussion with real-world examples, covering:
– How to analyze and optimize slow queries.
– The role of indexing and execution plans in performance tuning.
– A brief look at how GenAI can provide performance insights.
By the end of this session, you will have a strong foundation to start your journey in SQL Server performance tuning, along with an understanding of how GenAI can enhance the optimization process.
-
SQL Performance Monitoring: Building a Real-Time Dashboard with Python
-
Hidden Culprits: CPU, Memory & IO Waits Slowing Down Your SQL Server
You’ve optimized your queries, added indexes, and still—your SQL Server feels sluggish. Why? The answer often lies in wait statistics, the hidden signals that tell you where your database is really struggling—whether it’s CPU overload, memory pressure, or slow disk I/O. In this practical, no-nonsense session, we’ll break down:
– How CPU, memory, and I/O waits impact performance and what they mean in real-world scenarios.
– Why some wait types are warning signs while others are just noise.
– Simple, proven fixes to eliminate slowdowns and make queries run faster.
– How Generative AI (GenAI) can help analyze wait stats, detect performance trends, and suggest optimizations.
This session is designed for SQL developers, DBAs, and anyone who has ever struggled with a slow SQL Server. Whether you're new to performance tuning or looking for a structured way to troubleshoot issues, you'll leave with a clear roadmap to diagnosing and fixing SQL Server slowdowns—with a little help from AI!
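The CPU/memory/I/O triage the abstract describes can be sketched as a small Python toy. The wait-type-to-category mapping below is a tiny illustrative subset (not an authoritative classification), and the snapshot data is invented rather than pulled from a live server:

```python
# Illustrative subset only: real triage uses many more wait types and nuance.
WAIT_CATEGORIES = {
    "SOS_SCHEDULER_YIELD": "cpu",
    "CXPACKET": "cpu",
    "RESOURCE_SEMAPHORE": "memory",
    "PAGEIOLATCH_SH": "io",
    "PAGEIOLATCH_EX": "io",
    "WRITELOG": "io",
}

def wait_pressure(before, after):
    """Sum per-category wait-time deltas (ms) between two cumulative snapshots."""
    totals = {"cpu": 0, "memory": 0, "io": 0, "other": 0}
    for wait_type, after_ms in after.items():
        delta = after_ms - before.get(wait_type, 0)
        if delta > 0:
            totals[WAIT_CATEGORIES.get(wait_type, "other")] += delta
    return totals

# Two invented snapshots of cumulative wait times, taken some interval apart
snapshot_1 = {"PAGEIOLATCH_SH": 1000, "SOS_SCHEDULER_YIELD": 500}
snapshot_2 = {"PAGEIOLATCH_SH": 4000, "SOS_SCHEDULER_YIELD": 700, "WRITELOG": 250}
print(wait_pressure(snapshot_1, snapshot_2))
# {'cpu': 200, 'memory': 0, 'io': 3250, 'other': 0}
```

Because `sys.dm_os_wait_stats` values are cumulative since restart, the delta-between-snapshots step is what isolates the current pressure, which is the point the sketch illustrates.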
Priya Sathy
Microsoft
Priya Sathy
Microsoft
Priya Sathy brings over 25 years of experience in the data and analytics space focusing on building capabilities that meet the needs and complexities of large-scale data platforms at enterprises. She currently is head of product for SQL Databases at Microsoft including Microsoft SQL Server, Azure SQL and Fabric SQL Database. Prior to this, Priya was head of product for Data Warehousing, Enterprise BI in Power BI and various other data and analytics products.
-
Breakfast with the Microsoft Data Leadership Team
Get your day started early at PASS Data Community Summit with a free breakfast and a Q&A session with a panel of leaders across Microsoft hosted by Bob Ward. Tell us what is top of mind for you across SQL Server, Azure SQL, Microsoft Fabric and topics like AI. This is always one of the most popular sessions at the PASS Data Community Summit, so you won’t want to miss it!
-
Microsoft Keynote: What's New in Azure Databases: Real-World Solutions for Developers and Enterprises
Whether you're modernizing for peak performance and AI readiness or building the next generation of intelligent apps and agents, enterprises require robust, scalable databases that not only meet today’s demanding requirements but also unlock tomorrow’s possibilities. Join CVP of Azure Databases Shireesh Thota to discover how Microsoft empowers your vision with breakthrough innovations across SQL Server, Azure SQL, Azure Cosmos DB, and Azure Database for PostgreSQL. Learn how to accelerate AI-powered experiences with Copilot and Fabric, and see dynamic demos that showcase how Microsoft’s unified, enterprise-ready data platform, together with AMD, can help you achieve your transformation goals.
Rafet Ducic
Sr. Solutions Architect, Amazon
Rafet Ducic
Amazon, Sr. Solutions Architect
Rafet Ducic is a Senior Solutions Architect at Amazon Web Services (AWS). He applies his more than 20 years of technical experience to help Global Industrial and Automotive customers transition their workloads to the cloud cost-efficiently and with optimal performance. With domain expertise in Database Technologies and Microsoft licensing, Rafet is adept at guiding companies of all sizes toward reduced operational costs and top performance standards.
-
Mastering Database Migration: A DBA’s Roadmap from SQL Server to PostgreSQL
As organizations move from SQL Server to PostgreSQL to reduce costs and embrace open-source databases, a well-planned migration is crucial. This session covers different migration pathways, schema conversion, stored procedure translation, performance tuning, and workload optimization while highlighting key differences in indexing, partitioning, and concurrency control. Attendees will explore migration tools such as AWS DMS, Babelfish, and pgLoader, learn best practices for high availability and disaster recovery, and tackle challenges like T-SQL to pgSQL conversion. Through real-world case studies, DBAs will gain a clear roadmap for a seamless transition and for maximizing PostgreSQL’s capabilities.
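To illustrate one small slice of the schema conversion mentioned above, here is a hedged Python sketch of the kind of type-translation table conversion tools apply when mapping SQL Server column types to PostgreSQL. The mapping is deliberately incomplete and illustrative only, not the rule set of any specific tool:

```python
# Illustrative (incomplete) mapping of common SQL Server column types to
# PostgreSQL equivalents; real tools handle many more types and edge cases.
TYPE_MAP = {
    "DATETIME": "TIMESTAMP",
    "DATETIME2": "TIMESTAMP",
    "BIT": "BOOLEAN",
    "NVARCHAR": "VARCHAR",
    "UNIQUEIDENTIFIER": "UUID",
    "MONEY": "NUMERIC(19,4)",
    "TINYINT": "SMALLINT",
}

def convert_column(tsql_type):
    """Translate a T-SQL type name, preserving any length suffix like (100)."""
    upper = tsql_type.upper()
    base = upper.split("(")[0].strip()
    suffix = upper[len(base):] if "(" in upper else ""
    return TYPE_MAP.get(base, base) + suffix

print(convert_column("NVARCHAR(100)"))  # VARCHAR(100)
print(convert_column("datetime"))       # TIMESTAMP
```

A table like this covers only the mechanical part of conversion; semantics (e.g. PostgreSQL's lack of a native `MONEY`-style rounding behavior, or `BIT` vs `BOOLEAN` defaults) still need the case-by-case review the session discusses.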
Raghupati Jha
Principal Product Manager, Nutanix
Raghupati Jha
Nutanix, Principal Product Manager
Raghu is part of the NDB Product team, focusing on Backup Recovery, Copy Data Management, and the SQL Server database engine.
-
Deploy, Restore, and Patch Your Databases Anywhere, Anytime in Minutes
Managing hundreds or thousands of databases—Microsoft SQL Server, Oracle, PostgreSQL, MongoDB, MySQL, vector databases—across on-prem and cloud? Struggling with slow database deployments, painful patching, and long backups? Discover how Nutanix Database Service (NDB) is simplifying database lifecycle management:
– Deploy databases in minutes
– Patch hundreds of servers effortlessly
– Snapshot and restore 40TB databases in ~10 minutes
Say goodbye to chaos and hello to speed, control, and consistency for databases running in a hybrid multicloud world.
Raj Pai
Raj Pai
Raj Pai is Vice President of Product Management for Cloud AI.
-
Unlock Your Data's Potential: Breakfast with Google Cloud Leadership
Join us for an exclusive, sponsored breakfast designed for data and IT professionals seeking a strategic overview of the entire Google Cloud database ecosystem. This is your opportunity to move beyond individual products and understand the comprehensive strategy that supports your most critical workloads. Connect directly with Google Cloud's database leadership to discuss our commitment to providing choice, performance, and flexibility across all major database requirements.
What You Will Learn
The Full Portfolio: Discover how our range of offerings—from hyperscale solutions like Cloud SQL and AlloyDB to industry-leading services like Spanner and Firestore—fits together to meet every modern data need.
Strategic Partnerships: Get an executive briefing on how our crucial partnerships, including the groundbreaking collaboration with Oracle, enable you to maintain existing investments while accelerating cloud adoption.
The Innovation Roadmap: Engage in a strategic discussion about the future of data infrastructure and how Google Cloud is pioneering AI-driven operations and open-source innovation.
Google Cloud speakers include Raj Pai, Vice President, Product Management, Cloud AI, and Itay Maoz, Senior Director of Engineering. We will also feature a customer panel with our product experts who will discuss how they've successfully leveraged this comprehensive strategy—integrating both Google Cloud native services and partner technologies—to transform their business.
Raj Pochiraju
Principal Program Manager, Microsoft
Raj Pochiraju
Microsoft, Principal Program Manager
At Microsoft, Raj is primarily responsible for providing a seamless database workload migration experience to Azure data platforms for customers through Microsoft's tools and services.
-
Inside SQL Server 2025
Join Bob Ward and friends to go deep into the next major release of SQL Server, SQL Server 2025, the Enterprise AI-ready database. You will learn the fundamentals of what capabilities are in the release so you can plan and make key decisions on when and how to upgrade. This session will then go deep into all the major features including but not limited to: AI built-in, JSON, RegEx, REST APIs, Change Event Streaming, Fabric Mirroring, new concurrency enhancements, performance improvements, HA enhancements, and security. You will learn all the latest innovations of SQL Server 2025 including plenty of demonstrations and sample code you can take home to try on your own. Come see all the excitement of the modern database platform reimagined with SQL Server 2025.
Ramesh Pathuri
Sr. Delivery Consultant – Data & AI/ML, Amazon
Ramesh Pathuri
Amazon, Sr. Delivery Consultant – Data & AI/ML
Senior Delivery Consultant – Data & AI/ML with AWS Worldwide Public Sector ProServe, Ramesh Pathuri brings extensive expertise in databases, data engineering, and artificial intelligence. He specializes in guiding organizations through cloud transformation journeys, enabling customers to unlock innovation through strategic database migration and modernization. Ramesh's technical leadership helps public sector entities optimize their data and analytics solutions, driving measurable improvements in efficiency and operational impact while navigating their transition to AWS Cloud technologies.
-
Transforming Telecom Data: PostgreSQL Vector Search with Amazon Bedrock
-
Transforming Government Intelligence with Vector Search and Amazon Bedrock
Government agencies face significant challenges managing vast amounts of information across policy documents, regulations, and departmental knowledge bases. Traditional keyword search methods prove inadequate when government officials need to find conceptually related information across different departments and agencies. This session demonstrates how to transform government intelligence by implementing semantic search using PostgreSQL's vector capabilities with Amazon Bedrock's embedding models. We'll build a knowledge discovery system that understands the meaning behind queries rather than just matching keywords. The solution leverages PostgreSQL's pgvector extension to store and query embeddings generated by Amazon Bedrock, enabling natural language search across policy documentation, citizen services, and interagency intelligence domains. Key topics covered:
• Configuring PostgreSQL with vector extensions for secure semantic search
• Generating embeddings from government documents using Amazon Bedrock
• Implementing vector similarity search with HNSW indexing
• Breaking down departmental silos with cross-agency knowledge discovery
• Visualizing search analytics to identify policy and service gaps
You'll see examples of how this technology helps government officials find relevant policies, public service representatives access citizen service information, and policy analysts discover insights across multiple agency databases.
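To make the "similarity" idea behind vector search concrete, the pure-Python sketch below computes cosine distance, the metric pgvector exposes via its `<=>` operator, over toy three-dimensional vectors standing in for Bedrock embeddings. All documents and numbers are invented; real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity), as pgvector's <=> operator uses."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Toy 3-dimensional "embeddings" standing in for Bedrock model output
docs = {
    "passport renewal policy": [0.9, 0.1, 0.0],
    "road maintenance budget": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of e.g. "how do I renew travel documents"
best = min(docs, key=lambda d: cosine_distance(query, docs[d]))
print(best)  # passport renewal policy
```

In PostgreSQL the same nearest-neighbor lookup is a one-line `ORDER BY embedding <=> :query LIMIT k` query, and the HNSW index mentioned in the abstract makes it fast at scale instead of scanning every row as this sketch does.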
Ray Kim
Senior Education Specialist, Professional Development Program, The Research Foundation for SUNY
Ray Kim
The Research Foundation for SUNY, Senior Education Specialist, Professional Development Program
Ray is an advocate for documentation and technical communication. He is a co-founder of the Albany, NY SQL group (CASSUG) and a member of the Society for Technical Communication (STC). He has spoken at PASS Summit and STC Summit, as well as numerous SQL Saturdays and other events. He has worked various positions in technology, including as a developer, webmaster, analyst, technical writer, and instructor. He holds an MS in technical communication from Rensselaer Polytechnic Institute and a BS in computer science from Syracuse University. He currently works as a technical writer and trainer for The Professional Development Program at The Research Foundation for SUNY. A musician in his spare time, Ray plays four different instruments (piano, clarinet, mallet percussion, saxophone). He also enjoys going to ball games and doing CrossFit, and is a two-time SQLServerCentral.com fantasy football champion. He lives in Troy, NY with his wife, Lianne, and their tuxedo cat, Bernard.
-
Networking 101: Building Professional Relationships
Networking. You keep hearing that word throughout your career development, but you don’t know much about it, much less, how to do it. You want to connect with technical and data professionals, so you attend events such as Data Saturday and your local user group. But what about your book club, your gym, your church group, or your kid’s soccer game? Those are prime — and overlooked — opportunities to network! In this interactive session, we will discuss networking — what it is, why it’s important, and where opportunities exist. You will even have an opportunity to practice networking within the confines of our room. You might even leave this session with new networking contacts that you didn’t previously have! Bring business cards if you have them!
-
SQL101 — Learning about SQL: Building a Sandbox
-
I lost my job! Now what?!? A survival guide for the unemployed
-
Disaster Documents: The role of documentation in disaster recovery
Ray Maor
Senior DBA Consultant and VP R&D, Experda
Ray Maor
Experda, Senior DBA Consultant and VP R&D
Maor Ray is a Senior Data & Infrastructure Engineer with over 20 years of experience architecting enterprise-grade data solutions. As a former military IT officer, Maor brings a unique blend of military precision and technical excellence to complex data challenges. Currently serving as Senior Data Engineering Consultant at Experda, he has designed and deployed high-performance, secure SQL Server and Oracle architectures for Fortune 500 companies, startups, and governmental agencies worldwide. Maor's career spans from hands-on database administration to executive-level product ownership. As Chief Architect and Head of R&D at Experda's Software Department, he led the development of proprietary SQL Server performance analysis tools, collaborating directly with Microsoft MVPs and working closely with clients to refine algorithms and user experiences. His expertise extends across the full technology stack. Throughout his career, Maor has managed distributed teams across multiple countries, led complex ETL pipeline implementations, and provided strategic consulting on performance optimization, high availability, and disaster recovery solutions. His deep understanding of both technical architecture and business requirements has made him a trusted advisor for organizations navigating digital transformation challenges. Maor holds a B.A. in Computer Science & Business Administration and maintains expertise in advanced database technologies, cloud platforms, and enterprise security frameworks. He is passionate about bridging the gap between traditional data management and emerging AI technologies.
-
When will DBAs be Replaced by AI? A Humorous Reality Check
Are DBAs Endangered? A Humorous Reality Check takes the perennial “AI-will-take-my-job” anxiety and flips it into an upbeat roadmap for data professionals. Starting with a tongue-in-cheek question—“Are DBAs becoming obsolete due to AI?”—the talk walks through the past 30 years of database work, shows why certain core duties stubbornly resist full automation, and sketches what a “2045 DBA” really looks like: a hybrid architect who curates data ethics, orchestrates AI pipelines, and (yes) enjoys a four-day workweek. The punchline: far from killing the role, AI expands it—provided DBAs embrace new skills and collaborate with intelligent tools.
-
Prepare Your Data For AI: 3 Steps To Teach AI Your Schema
Most organizations assume that providing AI agents access to their databases is enough for intelligent automation. This fundamental misconception leads to costly implementation failures and frustrated teams. The reality is that AI cannot simply examine an Entity Relationship Diagram and understand how to extract meaningful business data. While ERDs show table relationships and field types, they lack the critical context that makes data actionable. AI agents struggle because they don't understand that "Status Code 3" means "Pending Approval" or that certain customer records should be filtered based on complex business rules developed over years of operational experience. Database administrators find themselves caught in this gap—they understand data structure but lack frameworks for translating business logic into AI-comprehensible formats. The result? AI agents that query databases correctly but return meaningless results, missing the nuanced understanding that drives real business decisions. This presentation reveals why traditional data documentation fails AI implementation and provides practical strategies for encoding lookup values, business rules, and operational context into AI-readable formats. Attendees will learn how to transform their existing database knowledge into AI success, moving beyond technical data access to true organizational intelligence that delivers measurable business value.
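One way to picture the "encode lookup values and business rules in an AI-readable format" step is the small Python sketch below. Everything in it (the `SCHEMA_CONTEXT` structure, the `orders.status_code` column, the rule text) is a hypothetical illustration of the general idea, not a framework from the session:

```python
# Hypothetical example: render lookup-value meanings and business rules as
# plain text an AI agent can read alongside the bare schema. All names and
# rules here are invented for illustration.
SCHEMA_CONTEXT = {
    "orders.status_code": {
        "meaning": "Order lifecycle state",
        "values": {1: "Draft", 2: "Submitted", 3: "Pending Approval", 4: "Shipped"},
        "rules": ["Exclude status_code = 1 from revenue reports"],
    },
}

def render_context(context):
    """Flatten the annotations into prompt-ready lines of text."""
    lines = []
    for column, info in context.items():
        lines.append(f"{column}: {info['meaning']}")
        for code, label in info["values"].items():
            lines.append(f"  {code} = {label}")
        for rule in info["rules"]:
            lines.append(f"  Rule: {rule}")
    return "\n".join(lines)

print(render_context(SCHEMA_CONTEXT))
```

The point is the translation itself: once "Status Code 3" is spelled out as "Pending Approval" and the filtering rules are stated explicitly, an agent querying the database has the operational context the ERD alone never carried.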
Reid Havens
Co-Founder | BI Evangelist, Analytic Endeavors
Reid Havens
Analytic Endeavors, Co-Founder | BI Evangelist
As the founder of Havens Consulting Inc. and a Microsoft MVP, Reid is a seasoned professional with a wealth of experience in technology, organizational management, and business analytics. With a Master's Degree in Organizational Development and a background in consulting for Fortune 10, 50, and 500 companies, Reid has the knowledge and expertise to help your organization succeed. In addition to his corporate experience, Reid is also a highly sought-after instructor, teaching Business Intelligence, reporting, and data visualization at the University of Washington and other universities. His passion for education extends beyond the classroom, as he has been involved in the development of the PL-300, DP-500, and DP-600 Microsoft certifications. He has also developed numerous training curricula delivered to companies around the world.
-
Scoped for Success: Why Composite Models are a Report's Best Friend
-
Becoming a Master Builder: Creating the Ultimate BI Toolbox
-
Power BI Feature Overload: Balancing Flexibility with Maintainability
-
Lights, Camera, Data!: Building Your Brand through YouTube and Speaking
Sharing your knowledge can skyrocket your career—and it’s easier to start than you think. This session is a crash course in content creation and public speaking for data professionals ready to step into the spotlight. We’ll explore how creating tutorials or vlogs on platforms like YouTube, and speaking at meetups or conferences, can establish you as a trusted voice in the community. Learn how to get started with the gear you already have—a headset, screen recorder, and your expertise. We’ll also touch on topics like:
– Generating content ideas and avoiding creator’s block
– Picking topics and formats that resonate (e.g. demos, how-tos, live walkthroughs)
– Simple video production tips for beginners
– Strategies to overcome stage fright and imposter syndrome
– Using your online content and speaking to open new doors: job leads, community recognition, MVP awards, and more
Attendees will leave inspired and equipped with a practical toolkit to begin creating content and speaking with confidence—no studio or stage required.
-
Power BI Features I Regret Using (So You Don’t Have To)
We’ve all been there—excited to try a cool Power BI feature that ends up making things worse. In this fast-paced talk, I’ll share a few real-world lessons from features that caused more complexity, confusion, or chaos than they were worth. You’ll leave with a sharper eye for evaluating what features serve your users… and which ones might just be adding noise.
-
Speaker Hacks for Nervous Nerds (From a Nerd Who Talks a Lot)
-
Lightning Talks-01: A Rapid-Fire Exploration of Key Tech Topics
-
Hobby Huddle: Sailing Skills, Adventures, and the Culture of Life on the Water with Reid Havens
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
Reitse Eskens
Technical Consultant, Axians
Reitse Eskens
Axians, Technical Consultant
Reitse began his computer days with GW-BASIC and quickly followed with Windows 3.11. After that many computers followed and, interrupted by human resources and psychology studies, he got into the IT work field around 2007. There he met Oracle 9. With a command line from 1980. At his current job he met SQL Server. And loved it. Now he’s working with SQL 2008 to SQL 2019 on-premises and a number of Azure SQL Databases, supporting customers and tuning databases. New projects focus more on the Azure data platform, where he works as an architect, security advisor and data engineer.
-
SQL Copilot, What the Query is This!?
At Microsoft Build 2024, Microsoft announced that Copilot for Azure SQL was going to public preview. Now the question arises: what can this type of Copilot do for me as a DBA, developer, or any other role working with Azure SQL? To help you get started, this session will introduce Copilot for Azure SQL. Let's try some different experiences, find this Copilot, and let it work a bit for us. Let's dig into general questions, data-related questions, but also database management! Don't expect deep dives into the inner workings, but I will get a bit into the security side, as this is paramount for any data resource. Buck Woody once (often) said: "don't type in demos". Join this session to see me break that rule! And even better, send in your own questions and see what happens! After this session, you'll have a better understanding of how SQL Copilot can assist you in your daily work and even improve your efficiency.
-
What they should have told me about being a DBA
-
How an attendee became a speaker
-
Fabric SQL, can I have some more databases please?
-
Data Literacy: Navigating Your Way to Data-Driven Success!
Data literacy is for everyone—from data enthusiasts to business leaders. It’s about ensuring everyone understands data and has access to the information they need. When everyone’s on board, the whole company benefits from diverse insights and better decisions. Join us to explore: – What data literacy really means and why it’s crucial. – How to make it work for your organization. – Engaging all skill levels in the data journey. – Key reasons why data literacy is essential. This session will be interactive with Mentimeter polls and plenty of Q&A. Come ready to participate and learn together! Are you in our target audience? Absolutely! Will there be stroopwafels? You bet!
Rene Garza
Redgate
Rene Garza
Redgate
My background includes serving as Product Support Lead within an industry dependent on reliability and service. Working on the support side, my responsibilities included diagnosing software issues, application configuration, voicing the technical side of sales calls, and on-site tech support in the continental United States and South America. In an industry where reliability is paramount, I have been able to take my management experience and provide our clients in the automation industry with the best possible support. This includes on-site training, sales and marketing for their own in-house distribution, and product management. Clients include, but are not limited to, Google, Ford, and XTO (a subsidiary of Exxon Mobil).
-
Productivity Hacks for the Modern Data Pro
Between firefighting performance issues, managing schema changes, and supporting development teams, database professionals are stretched thin. This session is your morning boost of practical tips, tools, and workflows to help you get more done with less stress. We’ll explore how leading teams are simplifying repetitive tasks, improving code quality, and staying productive across hybrid environments. Whether you're a DBA, developer, or somewhere in between, you’ll walk away with actionable ideas and a live demo of tools that make it all possible.
René Antúnez
Principal Solutions Architect, Eclipsys
René Antúnez
Eclipsys, Principal Solutions Architect
A Cloud Architect at Eclipsys, I am an Oracle ACE, a speaker at Oracle Open World, Oracle Developer Day, IOUG Collaborate, and the OTN Tour Latin America and APAC. I'm also the Co-President of ORAMEX (Mexico Oracle User Group), the Web Events Chair for IOUG Cloud Computing Special Interest Group (SIG), and the International Chair for the Oracle RAC SIG.
-
Becoming Multi-Platform Proficient and the Tools Which Can Help You (Oracle, some MySQL and MongoDB)
Modern data professionals increasingly find themselves managing diverse database technologies, often in the same organization. This session is designed for those who want to sharpen their proficiency across Oracle, MySQL and MongoDB, learning the tips, tools and techniques that can reduce friction and improve efficiency when working across platforms. We'll explore the unique strengths and quirks of each database, focusing on monitoring, administration and performance tuning. We'll see how tools can streamline your multiplatform development and operations. If you're balancing enterprise and open-source databases, along with emerging NoSQL use cases, this session is your practical toolkit.
-
Zero to Understanding with Oracle as a Microsoft Professional
Are you a MS data professional who's always been curious about Oracle but unsure where to start? In this beginner-friendly session, we'll break down the fundamentals of the Oracle databases, exploring key architectural differences, core concepts, (instances, datafiles, etc.) and how they translate to familiar SQL Server constructs. We'll walk through how to connect to Oracle, run basic SQL and navigate Oracle's tools as a Microsoft-savvy pro. We'll go over critical differences and translations and understand common gotchas. Whether you're facing a multiplatform environment or expanding your data skillset, this session is your launchpad.
-
Multi-Platform Databases in the Cloud – How Workloads Impact Decisions
Choosing the right database for the cloud isn't just about features; it's about aligning workloads, performance characteristics, and operational tradeoffs. In this strategic session, we'll examine how real-world workloads influence database decisions for cloud solutions and when the cloud may not be the right decision. We'll focus on Oracle, MySQL and MongoDB, three database platforms that our attendees may not be as familiar with, offering a unique view into the database world. This session will inspect workload patterns for various kinds of database usage (OLTP, analytics, hybrid) and how scaling, latency and cost behaviors differ. You'll leave with a better understanding of how to evaluate platforms and solutions based on workload type and organizational needs, not hype.
Rodney Kidd
Developer / Engineer, Redacted
Rodney Kidd
Redacted, Developer / Engineer
-
Hobby Huddle: Photography – Depth of Field with Rodney Kidd
Hosted in the Community Zone, these Hobby Huddle sessions are a fun way for people in the community to showcase their passions and hobbies outside of everyday work life. There will be a designated seating area for you to join these highly entertaining and informative back-to-back mini sessions.
Rodrigo Sáez
Business Intelligence Analyst, Exigo LLC
Rodrigo Sáez
Exigo LLC, Business Intelligence Analyst
-
Real-World Power BI DevOps with Git, VS Code, Azure DevOps, and AI
Many teams want DevOps for Power BI—but few have achieved it. In this session, you’ll learn how we implemented a scalable, Git-based Power BI development flow using the PBIP format, Visual Studio Code, and Azure DevOps—without needing full automation or complex pipelines. We’ll walk through how we manage multiple customer-specific repositories, track report changes locally, push updates to Azure DevOps, and sync changes into the Power BI Service using workspace connections. You’ll see what works, what’s still manual, and how we’re planning to evolve. Along the way, we’ll show how ChatGPT supports the workflow—not with automation, but as a practical co-pilot for documentation, naming standards, and task consistency. We’ll wrap with a preview of our research into Microsoft Fabric deployment pipelines and Azure DevOps integration, and how teams like ours can evaluate when and how to adopt automation. Whether you're just getting started or looking to mature your analytics DevOps practices, you’ll leave with actionable takeaways you can use right away.
Rune Ovlien Rakeie
Principal Cloud Architect, Vivicta
Rune Ovlien Rakeie
Vivicta, Principal Cloud Architect
Rune has been working with databases for 30+ years, primarily with Microsoft SQL Server. Throughout the years he has had many roles, from Developer, Database Designer and Solution Architect all the way to Production DBA. Today Rune works for Vivicta as a Principal Cloud Architect “focusing” on the different cloud vendors’ Data Platform Services. Rune is very active in the Microsoft Data Platform community and has held a position on the board of the Norwegian user group. For the last 8 years he has been running the largest Microsoft Data Platform conference on Norwegian soil, SQLSaturday Oslo, now re-branded to Data Saturday Oslo. In his spare time you can find Rune in the woods of his hometown Arendal, enjoying his primary hobby, singletrack bicycling.
-
Microsoft Fabric + Terraform = OneLove
At this year’s Fabric Community Conference (FabCon) in Las Vegas, Microsoft announced the general availability of the Terraform provider for Microsoft Fabric. This milestone has significantly narrowed the gap in automated management of Fabric resources. Terraform, HashiCorp’s industry-leading Infrastructure-as-Code (IaC) tool, now enables streamlined provisioning and management of Microsoft Fabric environments using code. In this session, we’ll explore: * Why Infrastructure-as-Code matters: Discover the value of coding your infrastructure for scalability, repeatability, and efficiency * What’s possible with the Microsoft Fabric Terraform provider (& friends): Learn how to manage capacities, workspaces, connections, gateways, lakehouses, data pipelines, and other Fabric resources programmatically * Current limitations and workarounds: Get a clear view of what’s supported, what isn’t, and how to address gaps * Internal Git integration vs. IaC: What is the role of Microsoft Fabric’s built-in Git integration, where it complements and where it differs from the broader IaC approach Whether you’re new to IaC or looking to optimize your automation with Microsoft Fabric, this talk will provide actionable insights to elevate your automation strategy.
Russel Loski
Senior Data Engineer, SQL Movers
Russel Loski
SQL Movers, Senior Data Engineer
Russ is a data engineer with over 25 years of experience. He has worked with a variety of tools, from DTS to SSIS to Azure Data Factory to Power BI. He has worked in industries as varied as sports and health care. Russ currently lives with his wife in North Texas, very close to his daughter and grandkids.
-
Automate, Optimize, Validate: PowerShell for Power BI & SSAS Success
Manually validating Power BI and SSAS tabular models can be tedious and error-prone. Running reports just to export data is a waste of time. What if you could automate these processes with PowerShell? In this session, you’ll discover how to: * Ensure development changes don’t break production by automating model comparisons. * Export Power BI report data instantly—no manual clicks required. * Retrieve SSAS metadata effortlessly for better documentation and governance. * Trigger alerts and notifications when data conditions change. This session will be packed with hands-on examples, real-world use cases, and easy-to-implement scripts. Whether you're a Power BI professional or a data engineer, you'll leave with practical automation techniques that free up time for more valuable work.
Ryan Adams
Senior Cloud Solution Architect, Microsoft
Ryan Adams
Microsoft, Senior Cloud Solution Architect
Ryan Adams is a Senior Cloud Solution Architect for Microsoft. His focus on Data & AI allows him to work with many technologies like SQL, Azure SQL IaaS and PaaS, Databricks, Synapse Analytics, and Fabric. Previously, Ryan worked on the Fabric CAT team with our Synapse and Fabric product offerings.
-
Data Analytics Ingestion with Fabric Mirroring
-
Data Lake, Data Warehouse, Data Lakehouse: An Architectural Discussion
-
Fabric Tour
Fabric has so many options that it's overwhelming to understand if it is something you should even use. Even if you know it's something you will use, it becomes very complicated to know which facets you need and how to architect them. This session will help you understand all the options so you can decide what applies to your business. We'll also talk about how it fits into various architectures and things you can do to conserve costs.
Ryan Booz
Solutions Engineer, pganalyze
Ryan Booz
pganalyze, Solutions Engineer
Ryan is a Solutions Engineer at pganalyze, focusing on PostgreSQL. Ryan has worked as a PostgreSQL advocate, developer, DBA, and product manager for more than 25 years, primarily working with time-series data on PostgreSQL and the Microsoft Data Platform. Ryan is a long-time DBA and developer, starting with MySQL and Postgres in the late 90s. He spent more than 15 years working with SQL Server before returning to PostgreSQL full-time in 2018. He’s at the top of his game when he's learning something new about the data platform or teaching others about the technology he loves.
-
Indexes, Wait Events, and EXPLAIN—Oh My! Porting Your Tuning Skills to Postgres
After investing years honing your SQL Server knowledge and performance tuning expertise, you're now facing requests to support production workloads on Postgres. How can your SQL Server skills in reading query metrics, analyzing wait stats, developing indexes, and interpreting query plans help you in this new environment? With the right guidance, you can quickly transfer your hard-earned query tuning knowledge from SQL Server to Postgres. However, once you've mastered the basics, there are numerous advanced techniques that only come through extensive experience and sometimes even diving into the source code. Join us for this full-day precon to explore PostgreSQL query and performance tuning in depth, with SQL Server comparisons that will help you make connections faster.
Sakshi Kiran Naik
AI Consultant, Walgreens Boots Alliance
Sakshi Kiran Naik
Walgreens Boots Alliance, AI Consultant
Sakshi Naik is an AI Consultant at Walgreens Boots Alliance, where she leads the development of ethical and impactful AI solutions in healthcare. With a Master’s in Computer Science from Stevens Institute of Technology and a Bachelor’s in IT Engineering from Mumbai University, she brings deep technical expertise across data science, machine learning, and AI infrastructure. Sakshi is also a recognized voice in AI policy. She has advised U.S. Senators and Members of the House of Representatives on the future of responsible AI and serves as Secretary of the IEEE Connecticut Section. As a member of the IEEE AI Policy Committee (AIPC), she actively contributes to shaping national and global frameworks for trustworthy AI.
-
Building Trustworthy AI With Techniques To Reduce Bias In Real-World Use
Traditionally, AI functioned as a data extraction tool. Over time, it evolved into intelligent systems capable of decision-making and predictions, sometimes surpassing human intelligence. However, these systems remain prone to malfunctions, with AI bias emerging as a significant challenge in decision-making applications. In the South Africa credit card case, the AI system was biased against Black individuals; in the Dutch childcare case, it discriminated against parents with dual nationality; and in the LinkedIn hiring process, it favoured male candidates. Various techniques have been applied to reduce these biases and create fairer AI systems. These include the Algorithmic Fairness framework (demographic parity, counterfactual fairness), improved training data (data audits and augmentation), compliance with privacy standards, keeping a human in the loop, and using multiple AI models to cross-validate outcomes. These efforts reduced bias and also built trust. In the LinkedIn case, fairness algorithms led to a threefold rise in search queries with representative results, enhancing gender diversity. In the South Africa case, racial disparities in approval rates dropped by 40%. While progress has been made, full bias mitigation remains ongoing. The industry must adopt standardized best practices to ensure trust in AI systems. This presentation discusses the threat of AI bias, explores how to build ethical AI, and proposes best practices for creating fair-compliant applications.
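The demographic parity check mentioned in the abstract can be made concrete with a minimal sketch (illustrative only; the group names and outcome data below are hypothetical, not from the cases cited in the session):

```python
from collections import defaultdict

def demographic_parity_difference(outcomes):
    """Gap between the highest and lowest positive-outcome rates across groups.
    `outcomes` is a list of (group, approved) pairs, approved in {0, 1}.
    A gap of 0 means all groups receive positive outcomes at the same rate."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        positives[group] += approved
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval outcomes by demographic group
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_difference(data)  # group A: 0.75, group B: 0.25
```

A fairness audit would flag the 0.5 gap here as a signal to inspect training data or apply one of the mitigation techniques the session covers.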
-
From Linear to Hybrid AI: Enhancing Price Prediction Strategies
Samet Coban
Team Lead Databases, Flow Traders
Samet Coban
Flow Traders, Team Lead Databases
I am a computer scientist and database enthusiast. I started my career as a software developer, but pretty early on, I found my passion in databases. I've been working as a DBA for the past 13 years, and for the last 4 years, I’ve been leading the database team. It’s been a great journey balancing hands-on technical work with team leadership. Outside of work, I’m passionate about travel and language learning. I’ve visited 54 countries, speak three languages fluently, and have beginner-level proficiency in three more. I can also read four different alphabets, reflecting my deep interest in global cultures and communication.
-
Login Detective: Tracking Database Access Without Breaking a Sweat
Have you ever needed to determine the last time a specific login accessed your SQL Server or identify which application servers historically used a particular login? Or perhaps you want to detect if an application account is being misused to connect via SQL Server Management Studio? Unfortunately, SQL Server doesn’t provide these insights out of the box, making audits, security monitoring, and database migrations more challenging. In this session, we will explore how to effectively utilize SQL Server Extended Events to track login activity. You’ll learn how to capture essential details—such as account names, host connections, application usage, and last successful login timestamps—and store this data in a table to build a comprehensive inventory. By the end of the session, you’ll have the tools and techniques to implement a robust login monitoring solution using SQL Server Extended Events. This will not only simplify database migrations but also strengthen security, detect unauthorized access, and support compliance efforts.
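The inventory the session builds in T-SQL can be sketched in Python to show the shape of the idea: fold captured login events into a last-seen record per login, host, and application (field names here are hypothetical stand-ins for the Extended Events columns):

```python
from datetime import datetime

def build_login_inventory(events):
    """Fold a stream of captured login events into an inventory keyed by
    (login, host, application), keeping the last successful login time."""
    inventory = {}
    for e in events:
        key = (e["server_principal_name"],
               e["client_hostname"],
               e["client_app_name"])
        if key not in inventory or e["timestamp"] > inventory[key]:
            inventory[key] = e["timestamp"]
    return inventory

events = [
    {"server_principal_name": "app_svc", "client_hostname": "web01",
     "client_app_name": "OrderApp", "timestamp": datetime(2025, 1, 3, 9, 0)},
    {"server_principal_name": "app_svc", "client_hostname": "web01",
     "client_app_name": "OrderApp", "timestamp": datetime(2025, 2, 1, 8, 30)},
    # An application account connecting from SSMS is the misuse the session flags
    {"server_principal_name": "app_svc", "client_hostname": "laptop42",
     "client_app_name": "Microsoft SQL Server Management Studio",
     "timestamp": datetime(2025, 2, 2, 14, 15)},
]
inventory = build_login_inventory(events)
suspicious = [k for k in inventory if "Management Studio" in k[2]]
```

In the session itself this aggregation is done server-side with Extended Events and a target table; the sketch only illustrates the inventory logic.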
Sanjay Mishra
Sanjay Mishra
Sanjay Mishra is Director of Product Management for AlloyDB, responsible for driving innovation and growth for AlloyDB cloud service, AlloyDB Omni and AlloyDB AI.
-
PostgreSQL on Google Cloud: Unveiling Innovation in Cloud SQL and AlloyDB
Join us to explore why Google Cloud is the ideal home for your PostgreSQL databases. This session will unveil the latest innovations and powerful capabilities within Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL, Google Cloud's leading managed database services. We’ll dive into the compelling advantages of each platform, showing how they provide unmatched scalability, performance, and enterprise-grade features to meet a wide range of application needs. You'll learn how these solutions empower developers to build, deploy, and manage applications with confidence. The session will conclude with a real-world case study from a Google Cloud customer, ID.me, sharing their journey and the benefits they realized by moving their PostgreSQL workloads to Google Cloud. Discover how to unlock the full potential of PostgreSQL with Google Cloud.
Sarah Dugan
Database engineering manager, Aya Healthcare
Sarah Dugan
Aya Healthcare, Database engineering manager
Sarah Dugan is the Database Engineering Manager at Aya Healthcare, a healthcare staffing company based out of San Diego. She’s a local Seattle-ite and lives on Bainbridge Island with her family, just a ferry away. She has worked as a Database Engineer for 24 years, mostly doing database development in the healthcare space. She was recently promoted to Database Engineering Manager and works from home managing a group of 9 database engineers. Aya has over four hundred people in its IT organization and approximately 30 Scrum teams. On a given day during the sprint cycle, they may have upwards of twenty database scripts being checked into source control through Flyway by their development community; Aya has been using Flyway for almost three years.
-
Business Brews & Breakthroughs with Redgate
Saverio Lorenzini
Senior Cloud Solution Architect, Microsoft Corp.
Saverio Lorenzini
Microsoft Corp., Senior Cloud Solution Architect
Saverio Lorenzini is a Senior Cloud Solution Architect at Microsoft with over 25 years of expertise in SQL Server. Passionate about data, he specializes in performance tuning, optimization, migrations, and Azure SQL. He is the creator of the SQL Monitoring Dashboard, a Microsoft IP solution for monitoring SQL instances. His current focus includes leveraging AI to enhance SQL productivity. An active member of the SQL community, Saverio regularly speaks at SQL and data conferences.
-
Enhancing T-SQL Performance and Quality in Large SQL Codebases with OpenAI
-
AI-Powered SQL Server Tuning with OpenAI
This session introduces a prototype that showcases an intelligent assistant for SQL Server tuning and optimization, powered by OpenAI. The system analyzes performance-related data from various sources, including Query Store, to identify inefficiencies and generate a list of tuning recommendations. It also proposes a revised version of the database design, with updates to indexes, statistics, and other key components. You’ll learn how to leverage Azure OpenAI to build an application that connects to SQL Server—on-premises or in the cloud. The assistant retrieves critical performance metrics such as high I/O queries, missing indexes, unused or redundant indexes, statistics, wait stats, and fragmentation levels. Combining this data with training on industry best practices, the AI provides actionable tuning suggestions, including CREATE, ALTER, and DROP statements for database objects, along with targeted maintenance strategies. The system detects overlapping or mergeable indexes and statistics—both existing and missing—and evaluates schema-level design metrics. It then generates a new optimized database layout with consolidated indexing for enhanced performance. The ultimate goal is to improve performance by eliminating plan warnings, bottlenecks, and design and configuration inefficiencies. The session features live demos showing how to build and apply this AI-powered assistant to improve DBA workflows, increasing both the speed and quality of DBA tuning tasks.
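One of the checks the abstract describes, detecting overlapping or mergeable indexes, can be sketched in a few lines: an index whose key-column list is a leading prefix of another index on the same table is a classic merge or drop candidate. The index metadata below is hypothetical, as it might be extracted from `sys.indexes` and `sys.index_columns`:

```python
def find_redundant_indexes(indexes):
    """Flag indexes whose key-column list is a leading prefix of another
    index on the same table. `indexes` maps index name to
    (table, [key columns]); returns (covered, covering) pairs."""
    redundant = []
    items = list(indexes.items())
    for name_a, (table_a, cols_a) in items:
        for name_b, (table_b, cols_b) in items:
            if name_a == name_b or table_a != table_b:
                continue
            # a is redundant if b starts with exactly a's key columns
            if len(cols_a) < len(cols_b) and cols_b[:len(cols_a)] == cols_a:
                redundant.append((name_a, name_b))
    return redundant

# Hypothetical index metadata
idx = {
    "IX_Orders_Cust":      ("Orders", ["CustomerID"]),
    "IX_Orders_Cust_Date": ("Orders", ["CustomerID", "OrderDate"]),
    "IX_Orders_Date":      ("Orders", ["OrderDate"]),
}
merge_candidates = find_redundant_indexes(idx)
```

A real implementation would also weigh included columns, uniqueness, and filters before recommending a DROP; the prototype delegates that judgment to the model.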
Scott Ellis
Staff SQL Engineer, Relativity
Scott Ellis
Relativity, Staff SQL Engineer
Scott Ellis is an author and technologist with 30 years experience in software development, digital forensics, big data, training, and business intelligence. He has spent the majority of his adult life writing code, conducting forensic examinations, writing SQL, developing databases, advising on SQL security and development best practices, and fixing deeply broken things. He is a 11 time speaker at Relativity Fest, leading multiple sessions in SQL, infrastructure, and security disciplines. His background is a blend of professional, military, and university experience with focuses in physics, art, and technology. He has a formal education in physics and writing, and professional disciplines include SQL, Azure, automation, DEVOPS, software usability, security, database tuning, computer forensics, performance troubleshooting, and technical publications. He has provided expertise to companies such as Deloitte, Ernst and Young, KPMG, McDermott Will and Emery, the DOJ, Department of Commerce, and Epiq. In his career he has worked on countless system and database performance issues as well as software development projects with technologies that include JSON, XML, PowerShell, MS SQL, some My SQL, and a little PostgreSQL. Projects focused on imaging, forensics, e-discovery, database performance, network, hardware, litigation, software systems, and forensics work with testimony and depositions in Federal and State trials. Extensive database work includes deep, hands-on experience with performance tuning, high availability, data architecture, report writing, security, maintenance plan management, and automation. Specialties: Microsoft SQL server performance tuning, scripting, high availability, failover clustering, TDE, database administration, database design, documentation, electronics, imaging, instruction, JavaScript, Azure, network engineering, operating systems, presentation skills, SQL Server, Query analyzer, and strong familiarity with cloud infrastructure. 
For fun and for his wife, one year he built a five octave, concert grand Marimba. It actually took two years.
-
Managing SQL Server RAM When you have no more Gigs to give
-
DbOwner Trigger Attack Vulnerability: A Solution
In a landscape of thousands of SQL instances, hundreds of thousands of databases, and legacy application privileges, how do you prevent the malicious privilege escalation described by Erland Sommarskog in a session at Data PASS 2024: a privileged user creating a DDL trigger that silently escalates access at the server level the moment a sysadmin runs an innocuous command? In this session, we’ll walk through how our team neutralized this threat. We’ll show how an application login that required the DBOwner role (a least-privileged but highly trusted user in many systems) could create triggers that outlive their creators and silently weaponize even well-intentioned sysadmins, and we will show how to prevent the attack. You’ll learn: • Why database-level DDL triggers are a risk surface few teams monitor • How we built lightweight detection to automatically destroy dangerous triggers before they can be “detonated” • The solution to the problem, so obvious it started as a joke. We’ll share our audit triggers, our denylist strategy, and our method for crawling for existing traps. You’ll walk away with an understanding of a response scaffold, and a new respect for the silent weapons hidden in your DDL layer. Intended audience: • SQL Server engineers and DBAs • Security-conscious development teams • Application platform architects • DevOps and SREs with shared responsibility for database layers. Demo: we’ll demonstrate how the solution effectively defeats this attack surface.
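The denylist idea can be illustrated with a small sketch: scan trigger definitions for statements that a database-scoped DDL trigger should never contain. The patterns and trigger text below are hypothetical examples, not the session's actual denylist:

```python
import re

# Hypothetical denylist: statements no database-level DDL trigger should
# ever issue, since each one escalates access at the server level.
DENYLIST = [
    r"ALTER\s+SERVER\s+ROLE",
    r"sp_addsrvrolemember",
    r"CREATE\s+LOGIN",
    r"GRANT\s+CONTROL\s+SERVER",
]

def scan_trigger(definition):
    """Return the denylist patterns found in a trigger definition."""
    return [p for p in DENYLIST
            if re.search(p, definition, re.IGNORECASE)]

benign = ("CREATE TRIGGER trg_audit ON DATABASE FOR CREATE_TABLE AS "
          "INSERT INTO audit.ddl_log DEFAULT VALUES;")
trap = ("CREATE TRIGGER trg_helper ON DATABASE FOR DDL_DATABASE_LEVEL_EVENTS AS "
        "ALTER SERVER ROLE sysadmin ADD MEMBER [app_login];")
hits_benign = scan_trigger(benign)
hits_trap = scan_trigger(trap)
```

In practice the crawl would read definitions from `sys.sql_modules` across every database and quarantine anything that matches before a sysadmin ever fires it.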
-
The S.O.L.I.D. DBA: Engineering Principles for a SQL-Centered World
-
Scaling SQL: Not by Accident, but by Architecture
Shane Borden
Staff Technical Solutions Consultant, Google
Shane Borden
Google, Staff Technical Solutions Consultant
Experienced technology professional with a successful career in leading highly visible, complex data architecture design, migration and production support tasks across multiple reference architectures for a variety of Fortune 500 and start-up customers alike. Most recently Shane has been focused on code conversion, data migration and performance improvements from database platforms such as Oracle and SQL Server to Google AlloyDB for PostgreSQL and Google Cloud SQL for PostgreSQL.
-
Unlock PostgreSQL Performance: A Guide to Index Types and Their Proper Uses
In the world of relational databases, indexes are crucial for optimizing query performance. PostgreSQL, a powerful open-source database system, offers a variety of index types beyond the standard B-tree. This presentation delves into the different index types available in PostgreSQL, exploring their strengths, weaknesses, and ideal use cases. We will cover and demonstrate use cases for the following index types: B-tree, Hash, GIN, BRIN, BLOOM, and vector indexes.
Sharon Reid
staff database administrator, NISA Investment Advisors
Sharon Reid
NISA Investment Advisors, staff database administrator
Before the commencement of her reign as the "Database Security Czarina of NISA," Sharon Reid was an adjunct English professor. After taking a LaunchCode class on SQL, she was asked to be a teaching assistant and then took over as lead mentor for the SQL cohort for LaunchCode Women+. Currently, she is on the St. Louis SQL Server and BI user group advisory board. Sharon enjoys sharing her passion for database security and is determined to make it fun, practical and implemented.
-
Extending SQL Server security principles to the cloud
-
A Data Engineer Teaches A DBA Python: Mother/Daughter Duo Collaborates
When a DBA needed to learn basic Python when moving to the cloud, she turned to a data engineer for help. DBAs and data engineers are often at odds—not this mother/daughter duo. Join them as they share insights and experiences of working together to expand their skill sets. You will leave with an understanding of why a DBA would need to learn a new language, complete with practical examples focusing on the PySpark library, a Python API for Apache Spark, and the popular Python libraries pandas and NumPy.
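A flavor of the SQL-to-Python mapping the session covers: a GROUP BY a DBA would normally write in T-SQL, expressed with pandas and NumPy (the table and column names are illustrative):

```python
import numpy as np
import pandas as pd

# A result set a DBA might otherwise fetch with:
#   SELECT Region, SUM(Amount) AS total, AVG(Amount) AS average
#   FROM Sales GROUP BY Region;
sales = pd.DataFrame({
    "Region": ["East", "West", "East", "West"],
    "Amount": [100.0, 250.0, 50.0, 150.0],
})
summary = sales.groupby("Region")["Amount"].agg(total="sum", average="mean")
overall = np.round(sales["Amount"].mean(), 2)  # AVG over the whole table
```

The same groupby-aggregate pattern carries over almost unchanged to PySpark DataFrames, which is part of what makes the language transition approachable.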
Shawn Oden
Database Administrator, Peraton
Shawn Oden
Peraton, Database Administrator
Back in the olden days, before that thing we call Y2K, I was a pilot, but then I accidentally became a programmer. And now I'm a data monkey. Nearly all of my career has been spent working in Background Screening, Healthcare, Insurance, Government or Military, where cybersecurity has played a fairly heavy role. I've learned quite a bit about quite a lot, and my interest has been piqued in more topics than I can ever hope to master. On the current step of my adventure, I find myself on the path as a DBA for the U.S. Army. And I'm still more curious than is probably good for me.
-
I'm Just Here For The T-Shirt: Coping With Impostor Syndrome
I’ve been a professional geek for 25 years, but I still feel like I’m out of my zone. Technology moves so fast. There are so many things to know. Everyone is so much smarter than me. I don’t know what I’m doing, and it’s just a matter of time before someone finds me out. Or something like that. Impostor Syndrome is a problem. But it can be dealt with.
-
Schrödinger’s Backup: Is Your Backup Really a Backup?
-
If You Teach A Man To Phish …
Shayon Sanyal
Principal WW Specialist Solutions Architect, Amazon Web Services, Inc.
Shayon Sanyal
Amazon Web Services, Inc., Principal WW Specialist Solutions Architect
Shayon Sanyal is a Principal WW Specialist Solutions Architect for Data and AI and a Subject Matter Expert for Amazon’s flagship relational database, Amazon Aurora. He has over 15 years of experience managing relational databases and analytics workloads. Shayon’s relentless dedication to customer success allows him to help customers design scalable, secure, and robust cloud-based architectures. Shayon also helps service teams with design and delivery of pioneering features, such as generative AI.
-
Advanced Oracle Features in PostgreSQL: HA, DR, Functions & Extensions
-
Best Practices for Building Generative AI Applications with PostgreSQL
PostgreSQL makes it easier to store and query vector data for artificial intelligence and machine learning (AI/ML) use cases with the pgvector extension. Learning best practices for vector search will help you deliver a high-performance experience to your customers. In this session, learn how to store data from Amazon Bedrock in an Amazon Aurora PostgreSQL-Compatible Edition database, and explore SQL queries and tuning parameters that optimize the performance of your application when working with AI/ML data, vector data types, exact and approximate nearest neighbor search algorithms, and vector-optimized indexing.
Shireesh Thota
Corporate Vice President, Azure Databases, Microsoft
Shireesh Thota
Microsoft, Corporate Vice President, Azure Databases
The mission of Azure Data is to help customers build a new class of data-first applications and drive a data culture where every employee can make better decisions based on data. Shireesh leads product management, engineering, and cloud operations for Azure databases. The products in his team’s portfolio include Azure SQL Database (on-premises, hybrid, and cloud), Azure Cosmos DB, Azure Database for PostgreSQL, and MySQL. Previously, as Senior Vice President at SingleStore, Shireesh was responsible for the end-to-end engineering and product vision of the company. Before moving to SingleStore, Shireesh was a founding member of Cosmos DB, where he architected, designed, and directly contributed to multiple key pieces of the service. Shireesh has 15+ years of experience with large-scale, big data, scale-out, relational, and schema-agnostic distributed systems across SQL, Cosmos DB, and PostgreSQL/Citus.
-
Microsoft Keynote: What's New in Azure Databases: Real-World Solutions for Developers and Enterprises
Whether you're modernizing for peak performance and AI readiness or building the next generation of intelligent apps and agents, enterprises require robust, scalable databases that not only meet today’s demanding requirements but also unlock tomorrow’s possibilities. Join CVP of Azure Databases Shireesh Thota to discover how Microsoft empowers your vision with breakthrough innovations across SQL Server, Azure SQL, Azure Cosmos DB, and Azure Database for PostgreSQL. Learn how to accelerate AI-powered experiences with Copilot and Fabric, and see dynamic demos that showcase how Microsoft’s unified, enterprise-ready data platform, together with AMD, can help you achieve your transformation goals.
-
Breakfast with the Microsoft Data Leadership Team
Get your day started early at PASS Data Community Summit with a free breakfast and a Q&A session with a panel of leaders across Microsoft hosted by Bob Ward. Tell us what is top of mind for you across SQL Server, Azure SQL, Microsoft Fabric and topics like AI. This is always one of the most popular sessions at the PASS Data Community Summit, so you won’t want to miss it!
Shiva Gurumurthy
Senior Marketing Manager, AMD
Shiva Gurumurthy
AMD, Senior Marketing Manager
Shiva is a Senior Product Marketing Manager at AMD, working to drive adoption of AMD’s server CPU products (EPYC) in the database and analytics market. Entering his 26th year at AMD, he has held various roles in the GPU and CPU divisions, starting out as a performance engineer for compute-heavy applications like video, image processing, analytics, and machine learning before transitioning to a business/marketing role in 2020. Shiva holds a Master’s degree in Computer Science and Engineering and an MBA from Santa Clara University. Outside of work, he enjoys playing and watching sports (volleyball, badminton) and listening to a lot of music.
-
SQL Server 2025 and AMD in the era of AI
Join us as we talk about how SQL Server 2025 and AMD technologies are utilizing AI, and show you how to do the same across your applications. We’ll recap the key features of SQL Server 2025 and discuss how the advanced vector extensions in AMD’s processors boost the performance of your database. This session will showcase how SQL Server 2025 provides the services you need to get started quickly building secure AI applications with your own data, delivered on a foundation of AMD-powered innovation.
-
AMD+HPE – Practical AI-ready SQL Server Infrastructure for SQL Server 2025
Are you looking to consolidate and save costs on traditional OLTP and OLAP enterprise-class workloads? Are you looking to right-size your enterprise infrastructure for modern AI needs? AMD and HPE bring you highly optimized platforms for your traditional and AI-enhanced data management applications that directly impact ROI and enable speedy adoption. Learn about the AMD EPYC CPUs that power enterprise-class servers from HPE, and the features that elevate your AI applications to new levels of performance.
Shivam Gulati
Senior Data Architect, Amazon Web Services
Shivam Gulati
Amazon Web Services, Senior Data Architect
Shivam Gulati is a Senior Data Architect at Amazon Web Services. He works with customers to design and build highly available and scalable database, analytics, and generative AI solutions in the AWS Cloud. Outside of work, he loves traveling with his photography gear to capture landscapes.
-
Running PostgreSQL databases with serverless technologies on AWS
-
Self-Service Analytics With Your SQL Server And Postgres Databases
Generative Business Intelligence (BI) presents a shift in data analysis by enabling systems to automatically derive insights, patterns, and trends from data. This session explores the integration of your data sitting in databases like SQL Server and Postgres with Amazon QuickSight and its Generative AI capabilities for self-service analytics.
-
Self-service analytics for SQL Server and Postgres databases
-
Accelerating schema conversion for SQL Server to Postgres migration
Stephanie Reis
Analytics Engineer, AAA Washington
Stephanie Reis
AAA Washington, Analytics Engineer
Stephanie Reis is a skilled data professional based in the Greater Seattle Area. With over a decade of experience in data engineering, database administration, SQL development, and statistical analysis, she has a proven track record of designing and implementing data solutions that enhance organizational efficiency and support data-driven decision-making.
-
Automating SQL Server Data Files
-
Stepping Outside Your Tech Stack
-
Automating SQL Server Data Files
-
You Can Do WHAT in SQL Server?!?
Most of us know SQL Server for its bread-and-butter functionality: running queries, building reports, and authoring stored procedures. But beyond the basics lies a powerful toolkit of features and capabilities that often go unnoticed – and unused. This session dives into the advanced, the creative, and the surprising things SQL Server can do. We'll explore how built-in T-SQL functions, optimization tools, and best-practice configuration options can help you solve complex problems more efficiently, streamline your code, and boost performance – all using functionality that’s been right there under the hood. Whether you're writing queries, building data pipelines, or supporting analytics workloads, this session will challenge your assumptions and introduce new approaches to working with SQL Server. If you've ever found yourself wondering, “Can SQL Server do that?” – this session is for you.
Stephen Atwell
Principal Product Manager, Harness.io
Stephen Atwell
Harness.io, Principal Product Manager
Stephen Atwell develops products to improve the life of technologists. Currently, he leads Harness’s Database DevOps product. Stephen was a speaker at KubeCon 2024, PostgresConf 2024, Data on Kubernetes Day in 2023, the Continuous Delivery Summit in 2022, CDCon in 2023, 2022, and 2021, and the TBM Conference in 2015. Stephen started working in IT operations in 1998 and transitioned to developing software in 2006. Since then, he has focused on developing products that solve problems he experienced in his previous roles. Stephen holds a Bachelor of Engineering in Computer Science and has worn hats ranging from network administrator, to database administrator, to software engineer, to product manager. Outside of work, Stephen develops open-source garden planning software (Kitchen Garden Aid 2). He lives in Bellevue, Washington with his wife and his dog.
-
Database DevOps: CD for Stateful Applications
Running stateful applications on Kubernetes can provide many of the same advantages as stateless applications. In this talk, Stephen and Chris will share some thoughts on managing stateful applications as part of a CD Pipeline so that applications – and the application's data – can be versioned and deployed safely and repeatedly. This talk will discuss managing persistent data and databases within Kubernetes, as well as managing structural changes to a database as part of a CD process. The talk will dive into automation approaches and tooling for managing data migrations between environments and running database migrations within a CI/CD pipeline. The talk will feature real-world examples where we discuss specific schema migrations and their performance impacts. We will also discuss how to leverage governance to ensure compliance while empowering your developers through automation. With Kubernetes and Liquibase, we can provide something better than before: A more testable, repeatable, and open way to deploy stateful applications. This talk features a practical demo of how CD tooling can empower users to automate data migrations within Kubernetes.
-
Faster DB Schema Migrations with AI enabled CI/CD & Automated Governance
Modern AI and automation are changing how the world approaches database migrations. In this talk, we will discuss what is possible: how you can accelerate database changes within your organization while ensuring safety, best practices for integrating AI into CI/CD, and how you can automate change governance at scale. You will leave with an understanding of how faster, more automated database migrations can speed up your business, and how to get started adopting cutting-edge practices. We will cover: leveraging LLMs to author database schema migrations; leveraging CI/CD to automate both validation of AI-authored migrations and the schema migration release process; leveraging Open Policy Agent to automatically enforce policy while enabling self-service database changes; and the high-level differences between migration tools.
Stephen Fontanella
Sales Director, Redgate
Stephen Fontanella
Redgate, Sales Director
Stephen has a wealth of experience helping customers navigate complex software purchasing processes to acquire the solutions they need to do their jobs efficiently. This includes building value stories with company-specific metrics, building criteria that resonate with the business, and identifying how to get decision makers on board. His goal for this session is to empower users of all job titles to successfully advocate for the products they need, regardless of vendor. He has worked in sales at Redgate for seven years and currently manages a team of account executives out of Pasadena, CA.
-
Decoding Software Purchases: A Vendor-Neutral Guide to Achieving Organizational Buy-In
Choosing the right software solution can be a daunting task, from identifying business needs to navigating internal approvals. In this session, Redgate’s Stephen Fontanella speaks with a Data Professional (Mridula Pandit) about their own experience discovering potential solutions, evaluating their effectiveness, and ensuring the right people are involved in the decision-making process. You’ll learn how to conduct a thorough assessment, engage stakeholders at the right stages, and effectively communicate software needs to decision-makers.
Stephen Leonard
AI Engineer, Enterprise Data & Analytics
Stephen Leonard
Enterprise Data & Analytics, AI Engineer
With half a decade of experience in IT infrastructure and data management, Stephen specializes in bridging the gap between traditional systems and emerging AI-driven technologies. His expertise spans a broad range of platforms and tools, including Windows and Linux environments, cloud solutions (Azure AZ-900 certified), and enterprise-grade data systems. Stephen brings strong technical proficiency combined with hands-on experience in project management, client communication, and team collaboration.
-
Data Engineering Fundamentals With Fabric Data Factory
Join Andy Leonard and Stephen Leonard for a comprehensive one-day training at PASS Data Community Summit 2025, introducing data engineering concepts for enterprise data warehousing using Microsoft Fabric Data Factory. Tailored for newcomers to data engineering and self-taught professionals seeking to deepen their expertise, this course blends theoretical foundations with practical demonstrations, enhanced by AI-driven techniques to accelerate development. The day begins with a lecture-based overview, covering the essentials of enterprise data warehousing, Microsoft Fabric's role in modern data architectures, and key terminology. We'll explore how Fabric Data Factory integrates into the broader data engineering landscape, with a focus on leveraging AI to streamline design and implementation processes. This foundation ensures all participants share a common understanding before diving into technical content. The second part shifts to demonstration-focused learning, showcasing practical implementations of pipeline-driven staging and loading processes. You'll observe real-world patterns that tackle common data warehousing challenges, incorporating AI tools to optimize workflows and enhance efficiency. Each demonstration balances theoretical best practices with pragmatic solutions tailored to enterprise constraints. The final section delves into lifecycle management, demonstrating straightforward approaches to monitoring data pipelines, implementing maintenance routines, and establishing effective governance. We'll highlight how AI can accelerate these processes, offering techniques to improve scalability and adaptability. This training emphasizes actionable knowledge, equipping you with practical skills and AI-enhanced strategies to apply immediately, regardless of your experience level or organization's maturity.
Stephen McMaster
Technology Strategist, Dell Technologies
Stephen McMaster
Dell Technologies, Technology Strategist
Dynamic and results-oriented professional with deep experience in global sales and business development within the IT industry, and Dell Technologies specifically. Recognized for pioneering innovative data solutions and establishing strategic partnerships that significantly contribute to revenue growth. Adept at leading cross-functional teams, fostering key relationships with global clients and partners, and driving strategic initiatives in alignment with corporate goals.
• Superbly successful in creating and implementing innovative data solutions feeding the AI machine in a highly strategic manner.
• Highly qualified and successful in building complex hybrid cloud enterprise solutions for multiple design variations and customer needs.
• Global thought leader with customer-facing engagements in countries around the world.
• Expert in sophisticated workload sales models, with vast knowledge of the software and hardware marketplace and the capabilities and complexities of products.
• Outstanding success in building and maintaining relationships with key corporate decision-makers, establishing large-volume, high-revenue accounts with excellent levels of retention and loyalty.
• 26 years of Dell knowledge and internal networking around solutioning where the market is going for our customers.
• Exceptionally well organized, with a track record that demonstrates self-motivation, creativity, and initiative to achieve both personal and corporate goals.
-
Bring AI to your Data with Dell Technologies
As organizations accelerate their AI journeys, the convergence of data, infrastructure, and intelligent automation becomes critical. In this session, Dell Technologies explores how SQL Server 2025 and the Dell AI Factory are transforming the way enterprises bring AI to their data—securely, efficiently, and at scale. Building on our four decades of joint innovation, we will revisit the shared SQL Server 2025 REST API capabilities and highlight how they enable modern, agentic AI workflows and seamless integration with enterprise applications. Using the Dell AI Factory architecture, we will showcase how on-premises deployments can leverage expert AI services, open ecosystems, and AI-optimized infrastructure to drive real-world outcomes. We examine the shift from traditional 3-tier architectures to disaggregated models—combining the flexibility of HCI with the simplicity of legacy systems—to meet the demands of AI-driven workloads. Attendees will gain insight into the latest announcements from Dell Technologies World, including the Dell Private Cloud and Dell Automation Platform. These innovations enable fully transferable infrastructure, lifecycle automation, and streamlined support. Finally, we explore the future of SQL Server 2025 on these disaggregated Dell architectures, referencing a recent blog on agentic AI by Microsoft’s Arun Vijayraghavan and showcasing blueprints for deploying intelligent, scalable solutions. This session is ideal for technical leaders seeking to modernize their data platforms and unlock the full potential of AI—on their terms, in their environment.
Steve Jones
Advocate/Architect, Redgate
Steve Jones
Redgate, Advocate/Architect
Steve Jones has been working with databases and computers for over two decades. He has worked with SQL Server since 1991, from v4.2 through SQL Server 2016. He has been a DBA, developer, and manager in a variety of large and small companies across multiple industries. In 2001 Steve founded SQLServerCentral with two partners and has been publishing technical articles and facilitating discussions among SQL Server professionals ever since. He currently is the full-time editor of SQLServerCentral, as well as an evangelist for Redgate Software. Steve is a 10-year Microsoft Data Platform MVP and lives on a horse ranch in Colorado.
-
Blogging for the Tech Professional
When a company posts a job opening, they are often inundated with dozens, if not hundreds, of applicants. How do they decide which ones are worth pursuing with an interview? Standing out from other candidates is important to ensure you receive the consideration that you deserve. Building a blog and including those links in your CV can help. Come learn how to build a blog, choose topics, and share your knowledge in a way that boosts your career.
-
Modern Database Development: Real-World Lessons from the Front Lines
Join a panel of seasoned database professionals and industry experts as they dive into the toughest challenges facing modern development and operations teams. From navigating monolithic legacy systems, to wrangling with the data layer in the age of AI, this session explores the real-world roadblocks teams encounter when deploying databases at scale. You'll hear firsthand from organizations about their strategies for reducing downtime risk, managing inconsistent processes across diverse environments, and improving code quality. Whether you’re a developer, DBA, or DevOps leader, you’ll leave with practical insights and proven approaches to modernize your database deployment practices – no matter how complex your estate.
-
Redgate Keynote: The Data Professional of the Future: How You Can Thrive in the Age of Machines
The data professional of 2025 might be a career database expert…or simply the closest thing your organization has to a data professional. The database landscape has never been more complex, and the modern data professional is tasked with balancing shifting platform trends and emerging technology like AI with the ever-present need to keep databases and the data they contain secure – in an era when organizational pressure to deliver value from data is stronger and more persistent than it’s ever been. In this session you’ll learn more about the pressures and challenges faced by the data professional of today, as well as trusted advice on how to navigate today’s and tomorrow’s database landscape, no matter where you are on your professional journey.
Sudhir Amin
Sr. Solutions Architect, Amazon Web Services
Sudhir Amin
Amazon Web Services, Sr. Solutions Architect
Sudhir Amin is a Database Specialist Solutions Architect at Amazon Web Services. In his role based out of New York, he provides architectural guidance and technical assistance to enterprise customers across different industry verticals, accelerating their cloud adoption.
-
Real-time stream analytics from your transactional workload on Amazon RDS
-
Migrate SQL Server to AWS: From Strategy to Success
When migrating SQL Server workloads to AWS, businesses need migration approaches that maximize reliability while keeping systems running smoothly. Join this session to explore proven strategies for migrating SQL Server databases to AWS. Learn to leverage AWS services, including Migration Hub and Database Migration Service (DMS), alongside native SQL Server features for migration. Discover real-world migration insights and best practices drawn from successful customer implementations.
-
Enhance RDS Observability with Database Activity Stream and DB Insights
-
T-SQL to PostgreSQL: Leveraging Amazon Q Developer and AWS DMS SC
This technical session showcases how Amazon Q Developer streamlines the conversion of embedded T-SQL code to PostgreSQL through IDE integration. We'll demonstrate how effective prompt engineering combined with AWS DMS Schema Conversion's GenAI capabilities can dramatically reduce conversion time and effort. The session highlights practical examples of code conversion using Visual Studio and VS Code, emphasizing accurate results with minimal post-conversion adjustments.
Surbhi Pokharna
Director, Cloud Data Platforms, Charles River Development
Surbhi Pokharna
Charles River Development, Director, Cloud Data Platforms
Surbhi is a data professional with over two decades of architectural and administration experience in the financial domain. Architecting and operationalizing relational and advanced analytics solutions for large-scale operational and data warehouse implementations in client-facing roles, with a focus on learning and innovation, has remained a constant part of her data journey. Currently she is leading enterprise-scale cloud data migrations in SQL Server, Snowflake, Azure SQL, and Managed Instance, and is involved in designing next-generation data analytics solutions using Snowflake. Surbhi is a board member of the New England SQL Server User Group, a member of PASS, and a speaker at PASS Summit, Snowflake User Groups, Azure Data User Groups, Data Saturdays, and SQLBits.
-
Unleashing the Power of Azure SQL Database ‘Hyperscale’
In today’s rapidly evolving data landscape, the need for scalability, performance, and reliability has never been more pressing. Businesses across all industries are generating and managing vast amounts of data, demanding robust solutions to store, access, and analyze this information efficiently. Amidst this explosion of data, one powerful but often underutilized feature within Microsoft’s Azure ecosystem quietly stands out: Azure SQL Database Hyperscale. Designed to support demanding data workloads, Hyperscale delivers a transformative experience for modern database management, combining the agility of cloud infrastructure with the scale and performance enterprises need. In this session, we will explore the depth and breadth of Azure SQL Database Hyperscale. We'll begin with an introduction to its core concepts and architecture, then explore its distinct advantages, and finally walk through practical guidance on how to implement and manage Hyperscale databases. By the end, you will understand why Hyperscale is often referred to as a hidden gem and how it can unlock new possibilities for your large-scale data operations.
-
Data Mirroring: Link Microsoft Fabric & Snowflake for Unified Analytics
-
The Role of AI in Modern Data Management
In today’s data-driven world, managing vast and complex datasets is becoming increasingly challenging. Artificial Intelligence (AI) is stepping in as a powerful ally in modern data management, automating processes and enabling smarter decision-making. In this quick yet insightful session, we’ll explore how AI is transforming key areas of data management such as data quality, integration, and governance. We’ll dive into how AI tools can automatically cleanse, classify, and validate data, ensuring higher accuracy and consistency. AI-powered algorithms are also enhancing real-time data processing, improving predictive analytics, and enabling faster, data-backed decisions. By integrating AI into data platforms, businesses can create more efficient, scalable data management systems that not only improve operational workflows but also provide deeper, actionable insights. In this brief talk, you’ll walk away with a clear understanding of AI’s role in modernizing data management, its real-world applications, and how it can be leveraged to drive efficiency, innovation, and competitive advantage in your organization.
-
Lightning Talks-01: A Rapid-Fire Exploration of Key Tech Topics
-
Redgate Luncheon: Harnessing AI: Insights and Innovations from the Community
Join us for a dynamic luncheon session where Community Experts will explore the transformative power of AI in the world of databases. This hybrid panel-networking session promises to blend insightful dialogue with interactive discussion, offering attendees a unique opportunity to engage directly with their peers and industry experts. After an enlightening panel discussion of each topic, you'll have the opportunity to delve deeper into these topics at your table, exchanging views and strategies on overcoming these hurdles. This year's session will delve into the practical applications and innovative use cases of AI. Our panelists, who are at the forefront of AI integration, will share their experiences, challenges, and successes. Attendees will then have the opportunity to ask questions, discuss with their peers, and share their own stories. Whether you're an AI enthusiast or just curious about its potential, this session promises to be both informative and inspiring.
Taiob Ali
Database Solutions Manager, GMO LLC
Taiob Ali
GMO LLC, Database Solutions Manager
Microsoft Data Platform MVP | Global Data Solutions Leader | Cloud & AI Advocate. Taiob Ali is a Microsoft Data Platform MVP with over 19 years of experience designing and implementing data solutions across finance, e-commerce, and healthcare. His expertise includes the Microsoft Data Platform, MongoDB, Azure AI, and Python for data-driven innovation. A dedicated community advocate, he has presented at over 100 events worldwide, including SQL Saturdays, Data Saturdays, and international conferences. He founded the Database Professionals Virtual Meetup Group, serves on the New England SQL Server User Group and SQL Saturday boards, and contributes regularly to Microsoft Learn, where his work is featured in the Contributor Stories.
-
Magnificent Seven & Beyond: Intelligent Query Processing in SQL Server
Can we enhance query performance without any code changes? Modifying applications can be an expensive endeavor or completely beyond your control. Therefore, developers and DBAs prefer that the query processor adapts to their workload requirements rather than relying on options and trace flags to improve performance. Adaptation is the foundational concept behind Intelligent Query Processing (IQP) in the latest versions of SQL Server. This demo-intensive presentation will explore the fifteen intelligent query processing features introduced in SQL Server 2022, 2019, and 2017. For each of these fifteen features, we will examine the issue it aims to resolve and the algorithm it uses to tackle the problem. We will evaluate the pros and cons of using these features. You will learn how to deploy them at various scopes tailored to your specific needs, such as server, database, session, or query levels. You will also be able to identify the features built on the Query Store. Attending this session will allow you to learn about the new capabilities of intelligent query processing and gain powerful tools to persuade your peers to upgrade SQL Server and databases to the latest build, both on-premises and in the cloud.
-
PostgreSQL vs. SQL Server: Security Model Differences
Security is paramount in database management. If you are a SQL Server expert looking to learn PostgreSQL, it is essential to understand how PostgreSQL's security model differs from that of SQL Server. This talk will compare the security models of both database systems. Aimed at database administrators and developers, the presentation will highlight the key differences in how these systems handle user authentication, roles, and permissions. For example, did you know that:
- SQL Server distinguishes between logins and users, whereas PostgreSQL uses a unified role-based system for authentication and authorization.
- SQL Server offers predefined server and database roles, such as sysadmin, which provide a range of out-of-the-box permissions. Conversely, PostgreSQL includes default roles like pg_read_all_data, designed to simplify standard permission sets.
- SQL Server allows the creation of custom roles with flexible permission assignments. PostgreSQL's roles can inherit permissions from other roles and support complex role hierarchies.
Understanding these differences and others discussed during the session will enhance your grasp of the security model distinctions between SQL Server and PostgreSQL, enabling you to implement security best practices effectively in either environment.
-
Azure PostgreSQL: 15 Essential Standards for Compliance and Security
-
Getting Started with Kusto Query Language (KQL) in Azure
-
Lifting Your Data Skills to the Cloud
-
Magnificent Seven and Beyond: Intelligent Query Processing in SQL Server
-
Azure PostgreSQL: 15 Essential Standards for Compliance and Security
Our team recently inherited multiple Azure Databases for PostgreSQL and discovered a lack of uniform implementation for essential items such as Private Endpoints, Authentication Methods, Backup Retention, Diagnostic Settings, Compute Type, and High Availability. Implementing uniform standards for your PostgreSQL databases (even when hosted in Azure) helps ensure compliance with regulations like GDPR, SOX, PCI DSS, and HIPAA. These regulations require institutions to maintain robust security measures and audit trails to protect sensitive data and comply with legal requirements, especially in industries like finance and healthcare. Failure to comply with industry regulations can lead to monetary fines, civil penalties, operational restrictions, and, most importantly, reputation damage. In this session, you will receive a practical checklist of fifteen standards, an explanation of why each matters, and actionable insights for implementing them across your company.
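As one example of auditing and enforcing such a standard, backup retention on an Azure Database for PostgreSQL flexible server can be checked and corrected from the Azure CLI (resource group and server names below are placeholders):

```shell
# Inspect the current retention window
az postgres flexible-server show \
  --resource-group my-rg --name my-pg \
  --query "backup.backupRetentionDays"

# Raise it to meet a 35-day policy
az postgres flexible-server update \
  --resource-group my-rg --name my-pg \
  --backup-retention 35
```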
-
PostgreSQL vs. SQL Server: Security Model Differences
-
Getting Started with Kusto Query Language (KQL) in Azure
-
Lifting Your Data Skills to the Cloud
Tarun Kumar
DBSync Inc
Tarun Kumar
DBSync Inc
Tarun Kumar leads the Cloud Replication business at DBSync, driving strategy, product direction, and customer success. With over a decade of experience working with data engineers, architects, and enterprise IT teams, Tarun has his finger on the pulse of the challenges organizations face in moving and managing data across systems. He regularly engages with customers and prospects worldwide, helping them design scalable, compliant, and cost-effective replication pipelines. At DBSync, he has been instrumental in shaping the company’s cloud-hosted replication platform, making it faster and simpler for teams to integrate data without heavy infrastructure or complex setup. Tarun is passionate about bridging the gap between evolving business needs and modern data engineering practices.
-
Getting Data Ingestion in Microsoft Fabric Right
As more teams adopt Microsoft Fabric to bring their analytics, BI, and AI together, one challenge keeps coming up again and again: getting the data in. Ingestion sounds simple, but in practice there can be a lot of hidden complexity. From managing schema changes and API limits to keeping pipelines stable and data fresh, “just loading data” often turns into weeks of engineering work. In this session, we’ll take a practical look at what it really takes to get data ingestion right in Fabric. We’ll break down the most common challenges (brittle copy jobs, schema drift, and lag between Lakehouse and Warehouse layers) and walk through cleaner, faster ways to move operational data into Fabric. You’ll see how Fabric handles ingestion across OneLake, Fabric Warehouse, and SQL Database, and why replication-first and CDC-based approaches can make your data flows more reliable with less maintenance. Through real-world examples, we’ll show how to:
• Move operational data from CRM, ERP, and SQL into Fabric without fragile pipelines
• Handle schema changes automatically instead of manually patching pipelines
• Deliver clean, consistent, analytics-ready data directly into Fabric’s query engines
Whether you’re setting up your first Fabric pipeline or optimizing what you already have, this session will help you turn ingestion from a daily headache into a dependable, well-architected part of your data stack.
Tauseef Siddique
Product Manager, Microsoft
Tauseef Siddique
Microsoft, Product Manager
Product Manager at Microsoft on the SQL Tools and Experiences team, focusing on the MSSQL extension for VS Code. Graduated from Rutgers University with a major in Business Analytics and Information Technology.
-
Be a SQL Python Hero with VS Code, GitHub Copilot & MSSQL-Python Driver
Ready to supercharge your SQL development workflow? This 60-minute session shows how Python is becoming an essential companion for SQL developers. Discover how the enhanced MSSQL extension for Visual Studio Code, combined with GitHub Copilot, accelerates everything from schema design to data generation, import and export, and query writing. We’ll dive into real-world demos that showcase how Python scripts can seamlessly integrate with SQL Server, Azure SQL, and Fabric SQL databases using the new mssql-python driver—bringing security, performance, and flexibility to your projects. Whether you’re building apps, automating tasks, or exploring advanced analytics, this session will help you understand the full potential of SQL + Python and take advantage of the latest tools to stay ahead of the curve.
Theodoros Katsimanis
Principal SQL Server Administrator / DBA Team Technology Manager, KaizenGaming
Theodoros Katsimanis
KaizenGaming, Principal SQL Server Administrator / DBA Team Technology Manager
With over two decades of experience in the dynamic world of SQL Server, I have honed my skills to become a Production DBA specializing in high-performance database solutions. My journey began as a full stack developer in 1999, but it was my fascination with SQL Server that shaped my career path. Over the years, I transitioned from software development to focus exclusively on database administration/development, where I found my true passion. As a Production DBA, I am dedicated to ensuring that SQL Server databases operate at peak efficiency. My expertise lies in query and index tuning, database architecture, and comprehensive administration practices. In an industry where milliseconds are not fast enough and microseconds matter, I employ advanced techniques to achieve unparalleled performance. One of my key areas of specialization is the implementation of in-memory technology in production systems. This has enabled me to manage and optimize environments that handle tens of thousands of concurrent transactions per second. My daily responsibilities include managing clusters, availability groups, listeners, failovers, and continuous tuning to maintain optimal performance and reliability. I am driven by the challenge of pushing SQL Server to its limits and beyond, ensuring seamless and swift data processing. My goal is to leverage my extensive experience and deep knowledge of SQL Server to contribute to innovative and high-performing database solutions.
-
AI Magic for SQL Server: Find the Culprit, Not the Clues
Tired of staring at wait stats, CPU charts, and blocked sessions, only to still wonder what really caused the issue? What if your SQL Server could tell you exactly who the culprit was—before you even knew there was a problem? In this session, we’ll show how AI and machine learning can become your smartest DBA assistant, automatically analyzing patterns in real time to uncover the true root cause of performance issues. No more chasing clues or blaming the wrong query! We’ll go beyond traditional monitoring and walk through a full end-to-end solution that captures Extended Events, processes data with Python, and uses an explainable ML model to pinpoint the top offender—whether it's a stored procedure, a blocking session, or a deployment gone wrong. You'll see real-world examples where the model correctly predicts root causes that humans often miss, complete with SHAP-based explanations so you can trust the results. Plus, we’ll explore how the model adapts over time to new patterns and learns to distinguish between normal load and true anomalies (like that nasty post-deployment Monday morning spike). Whether you're a production DBA, a performance tuner, or a developer tired of hearing “it works on my machine,” this session will equip you with tools to turn AI magic into operational insight—and finally give you the answer to the question: "Who broke the database?"
Thodoris Katsimanis
Principal SQL Server Administrator / DBA Team Technology Manager, Kaizengaming.com
Thodoris Katsimanis
Kaizengaming.com, Principal SQL Server Administrator / DBA Team Technology Manager
With over two decades of experience in the dynamic world of SQL Server, I have honed my skills to become a Production DBA specializing in high-performance database solutions. My journey began as a full stack developer in 1999, but it was my fascination with SQL Server that shaped my career path. Over the years, I transitioned from software development to focus exclusively on database administration/development, where I found my true passion. As a Production DBA, I am dedicated to ensuring that SQL Server databases operate at peak efficiency. My expertise lies in query and index tuning, database architecture, and comprehensive administration practices. In an industry where milliseconds are not fast enough and microseconds matter, I employ advanced techniques to achieve unparalleled performance. One of my key areas of specialization is the implementation of in-memory technology in production systems. This has enabled me to manage and optimize environments that handle tens of thousands of concurrent transactions per second. My daily responsibilities include managing clusters, availability groups, listeners, failovers, and continuous tuning to maintain optimal performance and reliability. I am driven by the challenge of pushing SQL Server to its limits and beyond, ensuring seamless and swift data processing. My goal is to leverage my extensive experience and deep knowledge of SQL Server to contribute to innovative and high-performing database solutions.
-
Revolutionizing Database Performance: Deep Dive into SQL InMemory Technology
Are your queries fully optimized, yet your application still suffers from high latency under heavy load? Do you struggle to meet SLAs despite tuning indexes and procedures? You might be hitting the architectural limits of disk-based storage. It's time to go beyond traditional optimization. In this advanced session, we’ll dive deep into the internals of SQL Server’s In-Memory OLTP engine (Hekaton) and reveal how you can achieve microsecond-level response times and tens of thousands of transactions per second on a modern multi-core server. We’ll break down the components that make this possible, from memory-optimized tables and hash indexes to natively compiled stored procedures. Through real-world demos and performance comparisons, you’ll learn how to:
- Identify workloads that benefit most from In-Memory OLTP
- Design efficient memory-optimized schemas
- Avoid common performance pitfalls (e.g., bad bucket count, latch contention)
- Monitor and troubleshoot In-Memory internals like row versions, checkpoint files, and garbage collection
By the end of the session, you’ll walk away with practical strategies to modernize your data architecture, boost concurrency, and eliminate I/O bottlenecks without rewriting your entire application. If you're ready to go beyond “making things faster” and engineer for extreme throughput, this session is for you.
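As a taste of the building blocks the session covers, here is a hedged sketch of a memory-optimized table with a hash index and a natively compiled procedure (object names and bucket count are illustrative, not the session's demo code):

```sql
-- Memory-optimized table with a hash index; BUCKET_COUNT is typically sized
-- to roughly 1-2x the expected number of distinct keys
CREATE TABLE dbo.SessionState (
    SessionId  BIGINT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Payload    VARBINARY(4000) NULL,
    LastAccess DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procedure: compiled to machine code at create time
CREATE PROCEDURE dbo.TouchSession @SessionId BIGINT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
    SET LastAccess = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
```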
-
AI Magic for SQL Server: Find the Culprit, Not the Clues
Thomas LeBlanc
Principal Consultant, SHI International
Thomas LeBlanc
SHI International, Principal Consultant
Thomas LeBlanc is a business intelligence architect, SQL Server expert, trainer, and author. He has been building performant data warehouse solutions and visualizations for the past 20 years and has certifications in Power BI, Fabric, Dimensional Modeling, and Data Vault 2.0. He graduated from LSU in Quantitative Business Analysis with a concentration in Management Information Systems. Microsoft awarded him the Data Platform MVP in 2017, and he has been renewed every year since. For the past 20 years, he has focused on data warehouse solutions (on-premises and Azure) using Microsoft tools like Integration Services (SSIS), Analysis Services (SSAS), Power BI (on-premises and cloud), Synapse, Microsoft Fabric, and Azure Data Factory. He recently published the second edition of Power BI Performance Best Practices. Involved in the data community since 2012, he helps the Baton Rouge user groups with meetings every month and hosts a SQLSaturday every year. He has spoken at conferences like PASS Summit, SQLSaturday, Live360, VSLive, Dynamics Community Summit NA, and Power Platform events since 2012. You can visit his website at Thomas-LeBlanc.com, read articles on LinkedIn (https://www.linkedin.com/in/thomaslleblanc/), or follow him on Bluesky (@PowerBIDude).
-
Speeding Up Power BI Reports
-
Fabric Monitoring: Tracking Usage and Optimizing Resources
-
Mastering the Lakehouse: Evolving Data Workflows in Microsoft Fabric
-
Power BI Development with Copilot
The integration of Copilot into Power BI development marks a significant step forward for data professionals. Copilot streamlines the data model and report-building process through intelligent suggestions, natural language query generation, and context-aware insights. Developers can now describe desired outcomes in plain English, and Copilot will generate DAX formulas, visuals, and even entire report structures. This enables developers to focus less on manual data wrangling and more on advancing their Power BI skills. Copilot fosters a more collaborative and intuitive development experience within Power BI. It assists in data modeling, ensures best practices in design, and provides real-time feedback that enhances the overall quality of models and reports. With the ability to learn from organizational semantic models and adapt to specific business contexts, Copilot becomes a powerful assistant in delivering data-driven insights. As organizations continue to embrace self-service analytics, Copilot acts as both a tutor and co-creator—bridging skill gaps and accelerating the journey from raw data to impactful storytelling.
Tim Frazee
Data Architect, Parallon
Tim Frazee
Parallon, Data Architect
Database professional with experience in Oracle, SQL Server, and PostgreSQL. A driven innovator who channels a passion for change into every project. Known for challenging norms and reimagining possibilities, he consistently creates forward-thinking solutions that inspire progress and impact.
-
Modern Database Development: Real-World Lessons from the Front Lines
Join a panel of seasoned database professionals and industry experts as they dive into the toughest challenges facing modern development and operations teams. From navigating monolithic legacy systems to wrangling with the data layer in the age of AI, this session explores the real-world roadblocks teams encounter when deploying databases at scale. You'll hear firsthand from organizations about their strategies for reducing downtime risk, managing inconsistent processes across diverse environments, and improving code quality. Whether you’re a developer, DBA, or DevOps leader, you’ll leave with practical insights and proven approaches to modernize your database deployment practices – no matter how complex your estate.
Tim Steward
Principal Data Enterprise Architect, Fujitsu
Tim Steward
Fujitsu, Principal Data Enterprise Architect
Tim Steward is a Principal Enterprise Data Architect at Fujitsu Enterprise Postgres with over 30 years of database experience. During this time, he has developed a true passion for databases and the evolution of technology. Tim enjoys the excitement of knowing he helped a customer solve complex IT challenges using his knowledge of database technology. Fresh out of college, Tim became an Oracle DBA within the fundraising industry, which is where he learned to pay attention to details and be an effective listener. As his career evolved, he became a consultant within the healthcare industry, where he expanded his database skills to Sybase. During his consulting years, Tim added SQL Server and MySQL to his skill set. For the past nine years, Tim has helped customers with PostgreSQL and their journey to open source.
-
Data Sovereignty with PostgreSQL
Today, as global regulations around data privacy and sovereignty tighten by the minute, organizations are increasingly required to control not just how data is handled, but where it resides. This presentation explores the critical concepts of data sovereignty and how they impact database architecture and compliance, with a focus on PostgreSQL.
Tiphfennie Gray
Systems Database Administrator, UW Medicine
Tiphfennie Gray
UW Medicine, Systems Database Administrator
Tiphfennie Gray is a Systems Database Administrator with a decade of experience supporting large-scale healthcare, education, and research environments. They specialize in performance tuning, automation, and high availability, and enjoy bridging the gap between technical complexity and real-world business needs. Tiphfennie thrives on creating practical solutions – whether it’s streamlining backup strategies for hundreds of terabytes of data, improving database deployment pipelines, or leading migration projects to modern infrastructure. Their approach blends precision with empathy, ensuring that technology empowers the teams and communities it ultimately serves. Outside of work, Tiphfennie mentors aspiring data professionals, volunteers in their local community, and advocates for increasing representation and inclusivity in the data platform world. They believe learning is lifelong and find joy in sharing knowledge that demystifies SQL Server for everyone.
-
Mastering the Modern Job Search
-
Inclusive Strategies for Stress Management and Work-Life Balance
In tech, professionals often face high-pressure deadlines, constant connectivity, and blurred boundaries between work and personal life. For neurodiverse individuals and those from underrepresented groups, these challenges can be even more pronounced. In this lightning talk, we’ll explore how to balance the demands of a career while fostering a diverse, inclusive work environment. This session will provide practical strategies for managing stress, setting boundaries, and incorporating self-care into a busy schedule, with a focus on how these strategies can be adapted for neurodiverse individuals and those from diverse backgrounds. We’ll also discuss how inclusive leadership and workplace practices can create a supportive culture that promotes work-life balance for everyone.
-
Lightning Talks-02: A Rapid-Fire Exploration of Key Tech Topics
Tobin Thankachen
Lead Architect, Datavail
Tobin Thankachen
Datavail, Lead Architect
Lead Architect, Data Management, at Datavail. A proficient cloud and data analytics lead with expertise in cloud, big data, and traditional data warehouses. A cross-functional project leader who uses advanced data modeling and analysis techniques, improving organizational performance by evaluating best practices for database servers and addressing data quality issues in ETL and analytics systems.
-
Microsoft Fabric Modernization Pathways in Action
Microsoft Fabric has many organizations reconsidering their current data estates. Understanding how to modernize your existing data platforms into Microsoft Fabric's all-in-one platform is essential for getting the most out of your data and analytics investment. Learn about your modernization pathways and get real-world examples of organizations preparing for a Fabric future.
Tom Austin
Redgate
Tom Austin
Redgate
A globally recognised, award-winning, and experienced strategic, commercial, and technical leader, with over 20 years spent working with sales/success/GTM teams and hundreds of clients across the globe, from start-ups to FTSE/Fortune 100 companies and everything in between. Commercially, I've led teams spread across the globe with office locations in Cambridge, Los Angeles, Austin, New York, Brisbane, and Berlin. In my current position, I lead a team of managers covering renewals, customer success management, and product support that is responsible for over $60m in revenue per year globally. Technically, I previously specialised in helping clients improve their approach to Database DevOps while gaining maximum ROI, and I spent time writing and delivering DevOps training courses in Europe, Africa, Asia, and the US.
-
What I Wish I Knew: A Candid Guide to Becoming a Leader
This session is not just another leadership guide. It is a collection of real-life experiences and practical wisdom from those who have made the transition to leadership roles in the industry. So, whether you're managing a team for the first time or looking to refine your approach, our panel is here to help you navigate, grow, and lead with confidence.
Tom Freedman
Director, Trinity Life Sciences
Tom Freedman
Trinity Life Sciences, Director
Tom Freedman (MCSE) began working with Microsoft data platforms over 25 years ago and has developed data pipelines on almost every version of SQL Server since 2000. In his current role, he leads a team of data engineers and BI developers supporting commercial operations for Trinity Life Sciences. He has a passion for SQL fundamentals, and enjoys sharing that knowledge with others. Outside of data, Tom enjoys indie rock, tabletop games, barbecue, and traveling with his family.
-
Working with NULL: Much Ado About Nothing
For data practitioners, working with NULL is inevitable. Its odd nature can lead to bugs and unexpected results for even the most experienced data pros. We'll discuss what NULL is and isn't, why it needs special handling and consideration, and some best practices in SQL Server.
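As a quick, runnable illustration of that odd nature, the sketch below uses SQLite via Python's standard library; the three-valued logic it demonstrates matches SQL Server's behavior for these cases (table and column names are made up for the example):

```python
import sqlite3

# In-memory table with one NULL value
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

# Any comparison with NULL yields UNKNOWN, which filters the row out:
# NULL = NULL is not TRUE, so this matches nothing
rows = con.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone()[0]
print(rows)  # 0

# The correct way to test for NULL is IS NULL
print(con.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0])  # 1

# Aggregates silently skip NULLs, so COUNT(x) differs from COUNT(*)
print(con.execute("SELECT COUNT(x), COUNT(*) FROM t").fetchone())  # (2, 3)

# COALESCE substitutes a default for NULL
print(con.execute("SELECT COALESCE(x, -1) FROM t").fetchall())
```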
Tomislav Hlupić
Principal consultant, Solvership
Tomislav Hlupić
Solvership, Principal consultant
I have worked as a consultant at Solvership since 2016, with experience in various industries (banking, retail/wholesale, insurance, and more). My primary technology stack is Microsoft SQL Server and Azure, with some experience in Big Data tools, primarily Cloudera-related. Besides consulting, I have been a senior lecturer at Algebra University College in Zagreb since 2018 and an external associate at the Faculty of Electrical Engineering and Computing in Zagreb since 2020, where I defended my PhD thesis in 2022. I am currently finishing an MBA degree, and in the free time I have left I try to run some races and occasionally play retro video games.
-
Degrees, Diplomas or Digital Badges – ways to formalize your education
-
From local to cloud ninja: helping data teams touch the sky
-
So… my client just bought Fabric capacity. Now what?!
-
Building Smarter Pipelines With Metadata And A Dash Of Fabric
Remember the good ol' days of SSIS and execution frameworks? That kind of orchestration magic deserves a fresh spin in the cloud era — and no, it's not just a lift-and-shift. In this session, we'll explore how to reimagine metadata-driven execution in a truly modern, cloud-native, asynchronous way. We'll cover the quirks of async runs, how to wrangle logs, and the cloud-specific gotchas you will encounter. But it’s not all challenges — we’ve got some new toys, too! Microsoft Fabric brings smarter, more flexible framework possibilities with less headache and way more automation. We'll build a solution from the ground up, walking through architecture, dynamic execution strategies, and the essential components to keep your pipelines humming. Whether you are using Azure Data Factory, Synapse Pipelines, Fabric Data Factory, or even eyeing tools like dbt, this session will give you a framework that adapts across tools and keeps pace with the modern data stack.
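The metadata-driven, asynchronous execution described above can be sketched in a few lines. This is a minimal, self-contained Python illustration with invented step names, not the session's actual framework: a metadata map of steps and their dependencies, executed concurrently with each step waiting only on its own upstreams.

```python
import asyncio

# Hypothetical metadata: each step and the steps it depends on, the way an
# SSIS-style execution framework would store them in a control table
PIPELINE = {
    "extract_orders":    [],
    "extract_customers": [],
    "transform_sales":   ["extract_orders", "extract_customers"],
    "load_warehouse":    ["transform_sales"],
}

async def run_step(name, done, log):
    # Wait for every upstream dependency before starting
    for dep in PIPELINE[name]:
        await done[dep].wait()
    await asyncio.sleep(0)   # stand-in for the real copy/notebook activity
    log.append(name)         # centralized log of completion order
    done[name].set()

async def run_pipeline():
    done = {name: asyncio.Event() for name in PIPELINE}
    log = []
    # Launch everything at once; dependencies self-resolve via the events
    await asyncio.gather(*(run_step(n, done, log) for n in PIPELINE))
    return log

order = asyncio.run(run_pipeline())
print(order)  # dependents always finish after their upstreams
```

Independent extracts run concurrently, and the completion log doubles as the audit trail you would persist back to the metadata store.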
-
Airflow & dbt: The hipsters of data integration meet Azure
-
Maps, Data, Action! Ways to work with geospatial data in Azure
Tonie Huizer
Consultant, Promicro
Tonie Huizer
Promicro, Consultant
Tonie Huizer is a Software, Data, and DevOps Consultant who believes good tech starts with understanding your people. A regular on international stages and a Microsoft Certified Azure Developer, Tonie loves helping teams simplify complex challenges; whether that’s modernizing database deployments, designing scalable systems, or just getting Dev and Ops to finally talk to each other. Tonie is known for his clear, energetic presentation style and ability to make complex topics accessible and engaging. As a Redgate Community Ambassador, he advocates for database DevOps and contributes to open knowledge sharing through blogs, talks, and events across the globe. He also co-leads SeaQL Saturday, a community-driven event that blends education, networking, and Sammie, his Kooikerhondje mascot. (Because yes, even events are better with a dog.) Whether he's discussing Azure DevOps workflows, database release automation, or the cultural shifts required to implement DevOps successfully, Tonie brings real-world experience, humor, and actionable insights. His goal is to inspire professionals not just to adopt new technologies, but to transform how teams collaborate and innovate. And when he’s not speaking about tech, he might just be speaking about whisky. Tonie is a passionate whisky enthusiast who loves sharing stories, flavors, and the occasional tasting session with his fellow connoisseur of TasteWhisky. If you're into DevOps, data, and a well-aged dram on the side, Tonie’s your man!
-
Code First, Review Later: Making EF Core Work for DBAs
Developers love EF Core for its speed and automation. DBAs fear it for the exact same reason. Automatically generated migrations can bypass crucial checks, leading to friction, surprises—and sometimes, outages. Database teams need transparency and control. Changes should be reviewable, testable, and deployable through trusted pipelines. EF Core’s migrations often feel like black boxes—and the confidence of the database team begins to slip. But what if you didn’t have to choose? With the right approach, you can let developers work in EF Core and give DBAs the control they need—by transforming EF migrations into Flyway-compatible scripts. In this session, you’ll see how EF Core and Flyway can work together. The result? Shorter deployment cycles, fewer surprises, and better collaboration between teams that traditionally don’t speak the same language. Join this session to explore real-world examples, automation with PowerShell and YAML, and the recipe for getting devs and DBAs on the same page.
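One way the hand-off can work (file and directory names below are illustrative): EF Core can emit its pending migrations as an idempotent SQL script, which is then committed as a Flyway versioned migration and deployed through the usual pipeline.

```shell
# Generate an idempotent SQL script from EF Core migrations and name it
# as a Flyway versioned migration
dotnet ef migrations script --idempotent \
  --output sql/V2__add_customer_index.sql

# Flyway now reviews, records, and applies it like any other migration
flyway -locations=filesystem:sql migrate
```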
-
Learn How to Build Workflow-Driven Database Provisioning in Azure DevOps
You’ve convinced your team to use version control. You’re provisioning dedicated databases. But the process still feels clunky. It’s manual, script-heavy, and often tied to specific developer machines. Sure, PowerShell helps, but it’s not consistent across the team. And when it breaks, not everyone can fix it (or wants to). In this deep dive, you’ll explore a more scalable approach: self-service database provisioning that happens automatically when someone starts new work. By integrating this process into Azure DevOps, you’ll see how to make database environments part of your team’s natural workflow. We’ll cover how to stash and resume work-in-progress environments, manage long-lived branches, and clean up unused resources. Throughout the session, we’ll look at how these ideas fit into the bigger picture of building an Internal Developer Platform (IDP) specifically for database development. Expect a few slides, but mostly hands-on demos using real-world PowerShell and YAML. You'll leave with patterns, scripts, and a practical plan to make database delivery faster, easier, and fully self-served.
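A minimal Azure Pipelines sketch of the idea, triggering provisioning when a feature branch appears (the script path, branch convention, and arguments are assumptions, not the session's actual code):

```yaml
trigger:
  branches:
    include:
      - feature/*

pool:
  vmImage: ubuntu-latest

steps:
  - task: PowerShell@2
    displayName: Provision per-branch dev database
    inputs:
      targetType: filePath
      filePath: scripts/New-DevDatabase.ps1
      arguments: '-BranchName $(Build.SourceBranchName)'
```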
-
Database DevOps…CJ/CD: Continuous Journey Continuous Disaster?
-
Two Developers, One Mission: Make a Test Database That Doesn’t Suck
-
Amplify your Experience: How to Start Writing and Speaking
-
Amplify Your Experience: A Hands-On Workshop from Concept to Publication
-
Code First, Review Later: Making EF Core Work for DBAs
-
Learn How to Build Workflow-Driven Database Provisioning in Azure DevOps
-
Database DevOps…CJ/CD: Continuous Journey Continuous Disaster?
-
Database DevOps Power Play: State vs Migration
-
Two Developers, One Mission: Make a Test Database That Doesn’t Suck
-
The Database DevOps (CI/CD) Showdown: Team Migration vs Team State
-
Learn How to Build Workflow-Driven Database Provisioning in Azure DevOps
-
Code First, Review Later: Making EF Core Work for DBAs
-
Git Your Database Under Control: A Workshop to Version Control and CI/CD
-
We Organized a Community Event and All We Got Is These Lousy Stickers
-
Community Conversation: Beyond the Data – Building Meaningful Connections in the Global Community
The global data community is more than just a network, it’s a vibrant space to learn, share, and grow together. Our guests will share how they make the most of this amazing community by building genuine relationships, attending events, and embracing the power of asking for help. Whether you’re looking to expand your knowledge, connect with peers, or simply find support when you need it, this discussion will show you how to turn community engagement into a rewarding experience.
Tummala Aswini Kumar
Delivery Consultant, Amazon
Tummala Aswini Kumar
Amazon, Delivery Consultant
Aswini Kumar is a Lead Cloud Consultant specializing in enterprise database solutions and cloud migration. He is an expert in both RDBMS (SQL Server, PostgreSQL, Babelfish) and NoSQL (DynamoDB) technologies, guiding clients through comprehensive database modernization on AWS. He is proficient in designing scalable architectures, optimizing performance, and implementing high-availability solutions for Amazon Aurora and DynamoDB, and skilled in data modeling, migration strategies, and query optimization for both relational and NoSQL systems.
-
Supercharge Aurora PostgreSQL with ML: SageMaker and Comprehend Integration
Discover how to transform your Aurora PostgreSQL database into a powerful machine learning engine by seamlessly integrating AWS SageMaker and Amazon Comprehend. This session demonstrates how to leverage Aurora Machine Learning to perform real-time predictions and natural language processing directly from your SQL queries. We'll explore practical implementations of ML functions within your database, eliminating the need for complex data movement or separate processing pipelines. Through hands-on examples, learn to deploy SageMaker models for predictive analytics and utilize Comprehend for sentiment analysis, entity recognition, and text classification – all while maintaining high performance and scalability. We'll cover the entire workflow from ML model deployment to database integration, including best practices for security, performance optimization, and cost management. Whether you're analyzing customer feedback, predicting business metrics, or automating decision-making processes, discover how Aurora ML can enhance your applications with sophisticated machine learning capabilities while keeping your data within the Aurora ecosystem.
-
Modernize SQL Server Workloads with PostgreSQL Babelfish Zero-Code Migration
Umachandar (UC) Jayachandran
Principal Program Manager, Microsoft
Umachandar (UC) Jayachandran
Microsoft, Principal Program Manager
Umachandar Jayachandran is a Principal Program Manager in the SQL Server team. He is currently working on T-SQL improvements for cloud native developers with specific focus on JSON, Full-Text, ML Services & Unicode. Previously, he worked on SQL Server big data clusters, PolyBase, and Azure Arc enabled data services. He has also worked on many features including Window functions, T-SQL Compiler Services in Visual Studio, Performance, Benchmarks, Premium Databases and Capacity Management in Azure SQL DB. He has more than 30 years of experience in the database industry building applications using SQL Server, Oracle, and Sybase.
-
A Deep Dive into SQL Server's Complete String-matching Capabilities!
Modern applications, including those powered by AI, demand versatile and efficient search capabilities to handle diverse data needs, and SQL Server provides a comprehensive suite of string matching and data retrieval capabilities. In this session, we'll explore the extensive search functionalities that SQL Server offers, ranging from comparison and relational operators to built-in functions. From basic comparison operators and advanced pattern matching including Regex, to full-text search, semantic search with vector capabilities, and fuzzy string matching, we'll explore how SQL Server empowers you to perform comprehensive searches across string data, as well as structured and unstructured documents. We'll also uncover emerging features that promise to enhance search performance and flexibility in future releases. Whether you are building an app with AI capabilities, optimizing complex queries, or simply looking to improve everyday data retrieval, this session will enable you to leverage the full potential of SQL Server's search capabilities. Join us to stay ahead in the world of data-driven decision-making!
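As a flavor of what "fuzzy string matching" means in practice, here is a minimal, language-agnostic sketch of the idea using Python's difflib (illustrative only—it is not SQL Server's implementation, which offers functions such as SOUNDEX and DIFFERENCE):

```python
from difflib import SequenceMatcher

# Fuzzy matching ranks candidates by similarity rather than exact
# equality, so near-misses like misspelled names still resolve.
def best_match(term, candidates):
    # SequenceMatcher.ratio() returns a similarity score in [0, 1];
    # pick the candidate with the highest score.
    return max(
        candidates,
        key=lambda c: SequenceMatcher(None, term.lower(), c.lower()).ratio(),
    )

names = ["Jonathan", "Jon", "Joanna", "Nathan"]
print(best_match("Johnathan", names))  # the misspelling resolves to "Jonathan"
```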
Uwe Ricken
Managing Director, db Berater GmbH
Uwe Ricken
db Berater GmbH, Managing Director
Uwe Ricken has been working with IT systems since the early 1990s, with a deep focus on Microsoft SQL Server starting from his work on Membership Software for the American Chamber of Commerce in Germany – later rolled out across five European countries. His passion for SQL Server took off in 2007 when he joined Deutsche Bank AG as a database administrator, gaining hands-on experience in high-performance, enterprise-scale environments. With over three decades of experience in database development and operations, Uwe earned the prestigious "Microsoft Certified Master – SQL Server 2008", the highest technical certification at the time. In 2013, he was honored with his first "Microsoft MVP Award" for his contributions to the SQL Server community across Germany and Europe. Since 2010, Uwe has been sharing his expertise through his blog https://sqlmaster.de, offering deep dives into SQL Server internals, performance tuning, and real-world troubleshooting. A regular speaker at international conferences and user groups, Uwe’s sessions focus on Database Internals, Indexing Strategies and Advanced SQL Development – always with a practical, performance-driven mindset.
-
Accelerating bad T-SQL code
-
Partitioning in Microsoft SQL Server: A Beginner's Guide
-
Resolving Deadlocks in SQL Server: A Practical Demo
Deadlocks in SQL Server can pose significant challenges for database administrators, leading to performance bottlenecks and potential disruptions in critical business operations. This session aims to provide a hands-on demonstration of effective deadlock resolution strategies within SQL Server. During the demo, we will explore two common scenarios that lead to deadlocks, highlighting the intricacies of transaction management and resource contention. Attendees will gain insights into the underlying causes of deadlocks and learn how to identify them using system views and Extended Events. The session will delve into practical techniques for deadlock prevention, such as proper indexing, optimizing queries, and implementing isolation levels. Additionally, we will showcase real-time deadlock detection and resolution using tools like SQL Server Management Studio (SSMS) and system views.
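One prevention technique in the spirit of the abstract—acquiring locks in a consistent global order so a circular wait can never form—is language-agnostic. A minimal Python sketch with hypothetical "transactions" (not T-SQL) illustrates the idea:

```python
import threading

# Deadlocks arise when two sessions acquire the same resources in
# opposite order. The classic prevention: every session takes its
# locks in one fixed global order, so no circular wait can form.
lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transaction(name, first, second):
    # Both workers take the locks in the same (global) order.
    with first:
        with second:
            results.append(name)

t1 = threading.Thread(target=transaction, args=("txn1", lock_a, lock_b))
t2 = threading.Thread(target=transaction, args=("txn2", lock_a, lock_b))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # both transactions complete, no deadlock
```

If the second thread instead took lock_b before lock_a, the two could block each other forever—the same circular-wait pattern SQL Server's deadlock monitor resolves by killing a victim.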
-
SQL Server Concurrency Challenges: Identifying and Resolving Wait States
Valerie Junk
BI Consultant, Wortell
Valerie Junk
Wortell, BI Consultant
Hi there. I am Valerie Junk. I am passionate about creating insights from data and designing dashboards using Power BI. With my expertise in data analytics and visualization, I support companies in developing effective data-driven strategies that yield results and take the business to the next level. With my background in cognitive psychology, I bring a unique perspective to the table. I balance my technical and visualization expertise with a keen focus on the human aspect of data.
-
Get Creative with Power BI: Make these Core Visuals Shine!
Creating reports with Power BI may seem easy at first. However, many people eventually discover that their reports are not being viewed, end-users are confused about navigating, and the process is more complicated than anticipated. This session will address these challenges and provide you with the skills to create visually appealing, intuitive, and actionable data visualizations that serve a clear purpose and effectively guide end-users. The techniques you will learn are practical and can be immediately applied to your work. You will learn how to:
– Use DAX to Enhance Visuals: Learn how to modify your visuals using DAX, such as changing colors and highlighting important information, to make critical insights stand out.
– Focus on Key Information: Learn techniques to refine your visuals so they display the most critical data, ensuring users can quickly grasp what matters most.
– Facilitate User Navigation: Explore different methods to enable smooth user navigation, allowing end-users to interact easily with your reports and find the necessary information.
Join me as we explore how to create purposeful, user-friendly visualizations using only Power BI's core features, no SVGs or external tools required. Let’s make your reports not just seen but truly understood!
-
Power BI – This is Not an Art Project
-
Visualizing Data for Non-Data Experts: Making Reports Accessible to All
-
From data to action: Driving decision-making with Power BI
-
Data Literacy: Navigating Your Way to Data-Driven Success!
Data literacy is for everyone—from data enthusiasts to business leaders. It’s about ensuring everyone understands data and has access to the information they need. When everyone’s on board, the whole company benefits from diverse insights and better decisions. Join us to explore:
– What data literacy really means and why it’s crucial.
– How to make it work for your organization.
– Engaging all skill levels in the data journey.
– Key reasons why data literacy is essential.
This session will be interactive with Mentimeter polls and plenty of Q&A. Come ready to participate and learn together! Are you in our target audience? Absolutely! Will there be stroopwafels? You bet!
-
Power BI Tables and Matrix Visuals: The Next Step
Varsha Rammohan
Product and Marketing Strategist, ManageEngine
Varsha Rammohan
ManageEngine, Product and Marketing Strategist
Varsha is a Product and Marketing Strategist at ManageEngine with over eight years of experience in IT operations and product marketing. She specializes in database monitoring and observability, helping DBAs move from reactive troubleshooting to predictive intelligence through practical frameworks. She develops product positioning, technical content, and thought leadership that empower DBAs to optimize performance and build proactive monitoring strategies. Her unique blend of product consulting and marketing expertise enables her to translate complex technical solutions into actionable insights for IT professionals.
-
Building a Proactive Database Monitoring Playbook That Actually Works
Database monitoring shouldn't feel like guesswork. Yet many DBAs spend hours troubleshooting performance issues that could have been caught early with the right monitoring signals. This session cuts through the complexity and teaches you a straightforward, repeatable approach to database performance monitoring. Whether you manage SQL Server, PostgreSQL, or both, you'll learn techniques that work across platforms and deployment models. We'll cover the complete monitoring lifecycle:
– Foundation: Building monitoring baselines and identifying critical daily checkpoints—query response times, blocking chains, connection pool health, and Always On availability.
– Execution Intelligence: Monitoring plan cache health, detecting slow queries through execution metrics, and correlating index usage with performance.
– Contention Management: Tracking wait statistics, detecting blocking patterns, and monitoring for deadlocks before they cascade.
– Maintenance Health: Watching vacuum operations, index fragmentation, statistics freshness, and backup success rates.
– Infrastructure Alignment: Correlating database metrics with CPU, memory, and I/O to determine tune-versus-scale decisions.
Each technique includes specific alert thresholds and monitoring queries you can deploy. We'll share decision frameworks for prioritizing fixes and eliminating alert fatigue. You'll walk away with a complete implementation roadmap, proven in production environments handling millions of transactions daily.
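The baseline-plus-threshold approach described above can be sketched generically. A minimal Python illustration follows; the sample values and the three-standard-deviation cutoff are hypothetical starting points, not thresholds from the session:

```python
import statistics

# Baseline a metric (e.g. query response time in ms) from a history
# window, then flag samples that exceed mean + 3 standard deviations,
# a common starting point before tuning thresholds per workload.
history = [12.0, 14.5, 13.2, 12.8, 15.1, 13.9, 14.2, 12.5]

baseline = statistics.mean(history)
threshold = baseline + 3 * statistics.stdev(history)

def should_alert(sample_ms):
    # Alert only when the sample clears the statistical threshold.
    return sample_ms > threshold

print(should_alert(14.0))  # within the normal band
print(should_alert(45.0))  # well above baseline: alert
```

In practice the history window would be refreshed continuously and the multiplier tuned per metric to balance sensitivity against alert fatigue.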
Veeranjaneyulu Grandhi
Delivery Consultant, Amazon
Veeranjaneyulu Grandhi
Amazon, Delivery Consultant
Veeranjaneyulu Grandhi is a Database Consultant with Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud. His focus area is homogeneous and heterogeneous database migrations.
-
Supercharge Aurora PostgreSQL with ML: SageMaker and Comprehend Integration
Discover how to transform your Aurora PostgreSQL database into a powerful machine learning engine by seamlessly integrating AWS SageMaker and Amazon Comprehend. This session demonstrates how to leverage Aurora Machine Learning to perform real-time predictions and natural language processing directly from your SQL queries. We'll explore practical implementations of ML functions within your database, eliminating the need for complex data movement or separate processing pipelines. Through hands-on examples, learn to deploy SageMaker models for predictive analytics and utilize Comprehend for sentiment analysis, entity recognition, and text classification – all while maintaining high performance and scalability. We'll cover the entire workflow from ML model deployment to database integration, including best practices for security, performance optimization, and cost management. Whether you're analyzing customer feedback, predicting business metrics, or automating decision-making processes, discover how Aurora ML can enhance your applications with sophisticated machine learning capabilities while keeping your data within the Aurora ecosystem.
Vikas Babu Gali
Senior Solutions Architect, Amazon Web Services
Vikas Babu Gali
Amazon Web Services, Senior Solutions Architect
Vikas Babu Gali is a Senior Specialist Solutions Architect, focusing on Microsoft Workloads at Amazon Web Services. Vikas provides architectural guidance and technical assistance to customers across different industry verticals accelerating their cloud adoption.
-
Migrate SQL Server to AWS: From Strategy to Success
When migrating SQL Server workloads to AWS, businesses need migration approaches that maximize reliability while keeping systems running smoothly. Join this session to explore proven strategies for migrating SQL Server databases to AWS. Learn to leverage AWS services including Migration Hub, Database Migration Service (DMS), alongside native SQL Server features for migration. Discover real-world migration insights and best practices drawn from successful customer implementations.
Vinay Balasubramaniam
Director of Product, BigQuery, Google
Vinay Balasubramaniam
Google, Director of Product, BigQuery
Vinay Balasubramaniam is Director of Product Management for BigQuery. He is responsible for BigQuery Core and Advanced Analytics, Security and Workload Management. Prior to joining Google, he was the VP of Product at Salesforce leading Einstein for Sales Cloud and Agentic AI platform (Agentforce). Prior to that he spent over a decade at Microsoft working on SQL Server, HDInsight and BizTalk Server.
-
Agents in the Data Cloud: Bringing Autonomous Intelligence for Data & AI
The growing volume and complexity of data often overwhelm teams, leading to missed opportunities as valuable insights remain hidden behind manual processes and repetitive work. Imagine a data platform that could proactively assist, automate workflows, and deliver precise intelligence at scale. In this agentic era of data and AI, purpose-built agents are fundamentally transforming how data analysts, scientists, engineers, and business users operate. Join us to discover how these autonomous systems, deeply grounded in your enterprise data, streamline operations for every role, boosting speed and productivity. We will explore how Google Cloud's autonomous data cloud, powered by cutting-edge AI, empowers organizations to shift from simply understanding the past to actively shaping the future. You'll gain practical strategies to integrate agentic AI into your work, unlocking your data's full potential for smarter decision-making.
Vineeth Joe Pradeep J
ManageEngine, Zoho Corporation
Vineeth Joe Pradeep J
ManageEngine, Zoho Corporation
Product Specialist at ManageEngine, helping enterprises design end-to-end observability strategies that reduce Mean Time to Identify (MTTI) and accelerate incident resolution across complex, distributed environments.
-
Breaking Siloed DB Ops: Correlation, Context, and Control
As cloud-native architectures grow more distributed, databases have become the invisible performance backbone of every digital service—but also the most fragile point of failure. This session explores where modern database operations fit inside the broader full-stack observability landscape, spanning five key pillars: workload observability, workplace observability, digital experience monitoring, AIOps/agentic ops, and FinOps. We begin with database management systems (DBMSs)—from traditional relational engines like Oracle, MySQL, and Microsoft SQL Server to managed cloud databases on AWS, Azure, and GCP. Despite their maturity, DB performance issues remain notoriously hard to detect early, often surfacing only after an outage or a customer-facing slowdown. The second part of the session dissects the most common pitfalls in database monitoring—blind spots in query performance visibility, delayed root cause analysis, siloed logs and metrics, and the false sense of “healthy” infrastructure in spite of degraded performance. This is where Site24x7’s database observability comes in—bridging metrics, logs, and traces to provide deep query-level insights, real-time anomaly detection, and context-aware AI-powered event correlation. Instead of troubleshooting in isolation, teams gain a unified, topology-aware view of how the application tier and database tier impact each other, enabling faster RCA and self-healing automation.
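The anomaly-detection idea mentioned above can be illustrated with a toy rolling z-score detector. This is only a sketch of the general concept, not Site24x7's implementation; the window size and z-limit are arbitrary assumptions:

```python
from collections import deque
import statistics

# Keep a rolling window per metric and flag points far from the
# recent mean -- the simplest form of statistical anomaly detection.
class RollingDetector:
    def __init__(self, window=20, z_limit=3.0):
        self.window = deque(maxlen=window)
        self.z_limit = z_limit

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 5:  # need some history before judging
            mean = statistics.mean(self.window)
            stdev = statistics.stdev(self.window) or 1e-9  # avoid /0
            anomalous = abs(value - mean) / stdev > self.z_limit
        self.window.append(value)
        return anomalous

det = RollingDetector()
flags = [det.observe(v) for v in [10, 11, 10, 12, 11, 10, 11, 80]]
print(flags[-1])  # the spike at 80 stands out from the window
```

Production systems layer topology-aware correlation on top of per-metric detection like this, so a flagged database metric can be tied back to the application tier that caused it.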
William Mentaze
Database Administrator (Lead), Latham & Watkins
William Mentaze
Latham & Watkins, Database Administrator (Lead)
William Mentaze is a SQL Server Database Administrator with almost 30 years of experience. He holds a Master of Science in Computer Science from the University of Toulouse Capitole (France). He is also a Microsoft Certified Professional (SQL Server Database Administration). He was involved in the SQL Server beta program and the initial launch of the SQL Server hotfix installer, and he continues to help the SQL Server community online. He also supports Azure SQL Managed Instance, Azure Database for PostgreSQL, PostgreSQL, MySQL, and MariaDB databases on Linux (CentOS, RHEL - Red Hat Enterprise Linux, and Ubuntu).
-
The Accidental PostgreSQL DBA: How I'm Doing it Everyday
As a production SQL Server Database Administrator for more than two decades, I have had to deal with other relational database management systems, including PostgreSQL. In this session, I will talk about the main tasks I’m handling on the PostgreSQL platform: installing PostgreSQL on Red Hat or Ubuntu Linux, creating databases, granting database access, scheduling database backups, implementing a disaster recovery solution using streaming replication, and upgrading to new versions of PostgreSQL. At the end of this session, you will have enough information on the different tasks handled by a PostgreSQL DBA, and the resources to help you complete these tasks.
-
Rolling Upgrade of a SQL Server Availability Group to Windows Server 2022/SQL Server 2025
Yo-Lei Chen
Product Manager, Microsoft
Yo-Lei Chen
Microsoft, Product Manager
Yo-Lei is a Product Manager on the Microsoft SQL team, where she leads query editor, Copilot, and developer experience efforts. With a background in Human-Computer Interaction, she combines her expertise in UX research and design to drive intuitive and intelligent user experiences.
-
SQL Database in Fabric: The Unified Database for AI Apps and Analytics
Discover how SQL in Fabric brings transactional and analytical workloads together in one cloud-native database. In this session, we’ll show how developers and data teams can simplify AI-driven application development with near real-time insights and built-in AI, seamless OneLake integration, and end-to-end analytics—all in a single, unified experience.
Yvonne Foo
Technical Product Management, Fidelity Investments
Yvonne Foo
Fidelity Investments, Technical Product Management
Yvonne is an accomplished technology leader with over a decade of experience, currently driving innovation at a top financial services firm. She is instrumental in delivering cutting-edge tech solutions that improve efficiency and impact. A recognized thought leader and public speaker, Yvonne has shared her expertise on national stages, including TEDx St. Louis, where she discussed career relaunches and the value of second chances in tech. Yvonne is passionate about mentorship, dedicating time to guide emerging professionals in a fast-paced industry. She is a committed advocate for diversity, focused on building an inclusive environment within technology. Her strong commitment to representation and empowering communities earned her the Unsung Hero Award at the 2022 OCA National Convention. Known for her authenticity, Yvonne uses her platform to inspire action, elevate underrepresented voices, and champion meaningful changes in the tech sector. Her compelling voice inspires others to embrace inclusivity and foster an environment where everyone's voices are heard and celebrated.
-
From Pause to Play: Navigating Career Comebacks
Imagine life threw you a curve ball and you stepped off the career ladder, but you returned with fresh insights and valuable experiences – only to face a hesitant job market. Life often takes us on unexpected detours, and career breaks can be both daunting and rejuvenating. My talk highlights “returnships”: structured programs designed to help individuals relaunch their careers. By offering training, mentorship, and support, returnships close skill gaps and rebuild confidence, empowering participants to thrive in demanding sectors like cybersecurity. Drawing from my personal journey, I’ll showcase how these programs successfully facilitate career reentry and growth. We’ll also explore how employment gaps impact our careers, the untapped potential they represent, and why companies should adopt returnships to enhance diversity and spur innovation. Attendees will gain actionable strategies to leverage returnship programs effectively, maximizing their benefits and unlocking the capabilities of seasoned professionals ready to bring resilience and fresh insights. This session will challenge the stigma around career breaks and demonstrate that, with the right support, returnees can excel and make transformative contributions, especially in securing our digital world.
