This course builds your foundational skills in data engineering on Microsoft Fabric, focusing on the lakehouse concept. It explores the capabilities of Apache Spark for distributed data processing, along with the techniques for efficient data management, versioning, and reliability that come from working with Delta Lake tables, and it covers data ingestion and orchestration using Dataflows Gen2 and Data Factory pipelines. A combination of lectures and hands-on exercises prepares you to work with lakehouses in Microsoft Fabric. This course may earn a Credly badge.
* Actual course outline may vary depending on offering center. Contact your sales representative for more information.
Learning Objectives
Describe end-to-end analytics in Microsoft Fabric
Describe core features and capabilities of lakehouses in Microsoft Fabric
Create a lakehouse
Ingest data into files and tables in a lakehouse
Query lakehouse tables with SQL
Configure Spark in a Microsoft Fabric workspace
Identify suitable scenarios for Spark notebooks and Spark jobs
Use Spark dataframes to analyze and transform data
Use Spark SQL to query data in tables and views
Visualize data in a Spark notebook
Understand Delta Lake and delta tables in Microsoft Fabric
Create and manage delta tables using Spark
Use Spark to query and transform data in delta tables (see the brief sketch after this list)
Use delta tables with Spark structured streaming
Describe Dataflow (Gen2) capabilities in Microsoft Fabric
Create Dataflow (Gen2) solutions to ingest and transform data
Include a Dataflow (Gen2) in a pipeline
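To give a flavor of the hands-on exercises, here is a minimal PySpark sketch of the kind of task the objectives above describe: saving a dataframe as a delta table in a lakehouse and querying it with Spark SQL. The table name, column names, and sample rows are illustrative assumptions, not course materials; in a Microsoft Fabric notebook attached to a lakehouse, a SparkSession is already provided and delta is the default table format.

# Minimal sketch (PySpark) -- names and data are illustrative assumptions only.
from pyspark.sql import SparkSession

# In a Fabric notebook, `spark` already exists; this call just makes the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

# Create a small dataframe and save it as a managed delta table in the lakehouse.
sales = spark.createDataFrame(
    [("2024-01-01", "Widget", 3, 19.99), ("2024-01-02", "Gadget", 1, 49.50)],
    ["order_date", "product", "quantity", "unit_price"],
)
sales.write.format("delta").mode("overwrite").saveAsTable("sales_orders")

# Query the delta table with Spark SQL.
spark.sql("""
    SELECT product, SUM(quantity * unit_price) AS revenue
    FROM sales_orders
    GROUP BY product
""").show()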
Price: $695
Length: 1.0 day (8 hours)
Course Schedule:
4:00 PM PT
5:00 PM ET