Apache Spark in Microsoft Fabric

If you have used Spark in Azure Synapse, prepare to be pleasantly surprised by the compute experience in Microsoft Fabric, as Spark compute starts a lot faster because the underlying technology has changed. The Data Engineering and Data Science experiences in Fabric include a managed Spark compute which, like previous Spark compute, charges you while it is in use. The difference is that the nodes are reserved for you rather than allocated when you start the compute, which results in compute starting in 30 seconds or less, versus the roughly 4 minutes of waiting it takes for Azure Synapse compute to start. If you have capacity needs that the default managed Spark compute will not provide, you can always create a custom pool. Custom pools are created in a specific workspace, so you will need Administrator permissions on the workspace to create them. You can also choose to make the new pool your default pool, so it will be what starts in the workspace.

Writing Spark Code in Fabric

If you are writing code in Spark, the two languages you will most likely use are Python and Scala, but you can also choose Java, R, or ANSI SQL. Notice that unlike Azure Synapse, Fabric does not include .NET as a language you can use, which is an interesting development. The other thing to keep in mind when writing SQL code in Spark is that you will be writing ANSI SQL, not the T-SQL you use in lakehouses and SQL endpoints within Fabric. While T-SQL is largely ANSI compliant, I realized the extent of the differences when trying to use some of the DATEPART T-SQL functions, as the equivalent date parts have underscores in Spark, and you use instr instead of T-SQL's CHARINDEX. The differences are minor, and Stack Overflow or Copilot can help you with them. Just remember that you may not be able to use the exact same code as in the SQL endpoint and you will be fine.
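As a sketch of one such difference: Spark SQL's instr(str, substr) plays the role of T-SQL's CHARINDEX(substr, str), with the arguments reversed. Both are 1-based and return 0 when the substring is not found. The small Python function below models the Spark behavior; it is illustrative only, not part of any library:

```python
def instr(s: str, sub: str) -> int:
    """Mimics Spark SQL instr(str, substr): 1-based position of the
    first occurrence of sub in s, or 0 if it is not found."""
    return s.find(sub) + 1  # str.find returns -1 when absent, so -1 + 1 = 0

# The T-SQL equivalent of instr('holiday', 'day') is CHARINDEX('day', 'holiday')
print(instr("holiday", "day"))  # → 5
print(instr("holiday", "xyz"))  # → 0
```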

Lakehouse Integration and Autogenerated Code

Like all Fabric experiences, the lakehouse is an integral part of the coding experience. When you create a new notebook, the first step is to add a lakehouse. Once it has been added, you can drag and drop elements into the notebook and Fabric will write the code for you. The code block shown below was created when I dragged the table publicholidays into the notebook.

Autogenerated Spark dataframe using the clicky-draggy method
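For reference, the autogenerated code looks something like the following. This is a sketch of what Fabric produces; the exact query and row limit may differ, and it assumes the spark session and display function that are built into a Fabric notebook:

```python
# Autogenerated when a table is dragged into the notebook (approximate)
df = spark.sql("SELECT * FROM publicholidays LIMIT 1000")
display(df)
```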

Generating code with Copilot in Spark

Spark in Fabric includes a library called chat-magics, which provides AI features you can incorporate in your code if you have Copilot enabled in your tenant. There are a few administrative steps you need to complete to make that work. To enable Copilot, the first step is to see whether it is supported in your Fabric tenant, as it is not available everywhere. Check the list to make sure it is possible. You will also need to pay for the feature, as Copilot is not part of the free trial; you will need a Fabric F64 SKU or a P1 capacity to use it. Once you have validated that you can use Copilot, go to the administrative settings and enable Copilot in your tenant, as shown below.

Fabric Copilot Admin settings

Once Copilot is enabled and active, you can open it by clicking on the Copilot icon on the far right of the screen. If you don't see it, click on the ellipsis, the three-dot menu where Microsoft hides all the cool stuff, and you will find the icon in a dropdown menu.

Chat-magics: Copilot spark help

Here are five chat-magics commands designed to help you with your code.

%%chat – Answers questions about items in your code, such as variables
%%describe – Provides a summary of the contents of a dataframe
%%code – Describe the code you want written and Copilot will generate it for you
%%add_comments – Most people forget to comment their code; if this is you, you can have AI generate meaningful comments for you
%%fix_errors – Copilot will try to fix dependency, configuration, and resource allocation errors for you
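In a notebook cell, a chat-magics command goes on the first line, with your instruction underneath it. A sketch of the pattern, assuming chat-magics is available in the notebook; the prompt text is illustrative:

```
%%code
Load the publicholidays table into a dataframe and show the ten most recent holidays
```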

In my next post I will provide examples of how to use chat magic commands in Fabric.

Yours Always,

Ginger Grant

Data aficionado et Data Raconteur


Fabric Features not found in Azure Synapse or Power BI – Data Wrangler


The second element in my series on new Fabric features is Data Wrangler. Data Wrangler is an entirely new feature found inside the Data Engineering and Machine Learning experiences of Fabric. It was created to help analyze data in a lakehouse using Spark and generated code. You may find that there is a lot of data in the data lake that you need to evaluate to determine how you might incorporate it into a data model. It is important to examine the data to see what it contains. Is anything missing? Incorrectly typed? Bad data? Data Wrangler provides an easy way to discover these issues, using techniques commonly employed by data scientists. Data Wrangler is used inside notebooks in the Data Engineering or Machine Learning experiences, as the functionality does not exist within the Power BI experience.

Step-by-step Data Wrangling

The example I am going to walk through here uses the public holiday sample dataset, which you can load by clicking on the Use a sample button within Data Engineering and selecting Public Holidays. Once you create a lakehouse with the sample data, you can follow along with the steps in this blog for doing your own data wrangling.

From within the Data Engineering experience, select Notebook from the + New dropdown menu to create a new notebook. You will need to add some data to the lakehouse inside the new notebook. To do this, click on the green Add button in the lakehouse section of the screen to add your Public Holiday lakehouse. After the lakehouse is loaded, you will have two folders; if you click on the one labelled Files, you will see two more folders. Click on the folder called sample_datasets, which turns green, and you will see a list of files in the sample_datasets pane which opens up. If you click on the ellipsis menu next to public_holidays.parquet, you will see a popup window. In that window, select Load data and you will see two options, Spark and Pandas. The screen image shows what it looks like.
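For reference, the two Load data options generate code along these lines. This is a sketch; the exact paths and variable names may differ in your notebook, and the pandas path assumes the default lakehouse mount point:

```python
# Spark option (approximate)
df = spark.read.parquet("Files/sample_datasets/public_holidays.parquet")
display(df)

# Pandas option (approximate)
import pandas as pd
df = pd.read_parquet("/lakehouse/default/Files/sample_datasets/public_holidays.parquet")
display(df)
```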

When Fabric was first released, the only option available for Data Wrangler was pandas, which does not run on Spark. For more information regarding pandas, check out my blog on Koalas, where I cover why you don't want to use pandas for big data. I am very happy to see the Spark dataframes option, as the resulting code is Spark compliant and can take advantage of Spark's superior memory and data management. Note, however, that Data Wrangler itself works in pandas no matter which option you select; the generated code is pandas, which is then converted to Spark. We now need to run the code which was generated for us so the data is loaded into a Spark DataFrame, by going to the Run menu and selecting Run all. This will start up the Spark cluster and run the code. You can look in the bottom left corner to see when it is complete.

Once you see the data you can start wrangling it. Go to the Data tab and select Transform DataFrame in Data Wrangler, then select the dataframe which was created for you, df.

Using Data Wrangler


In the Data Wrangler screen, which is pictured here, you will see several interesting items. The first is at the top of the screen, where it lets you know that the code here is pandas, but it will be converted to PySpark when you leave Data Wrangler. This is important because if you end up with code you want to use, you can use the generated code knowing it will be Spark compliant. The top of the screen looks a lot like something you might see within Power Query if you have all the view items checked. Data Wrangler also shows missing and distinct counts, as well as the top 3 values.

On the left side of the screen there is a list of operations which will generate code for you. I am going to select the first two of them, starting with Drop duplicate rows, which you can see in the screenshot. This immediately generates a line of code to drop duplicate rows across all columns, but nothing happens until the Apply button is clicked, which I will do so the code runs. The applied steps are called cleaning steps, and they appear on the bottom left side of the screen. Even after a step is applied, it is not removed from the list of operations, which seems odd. Next I select the option to drop missing values, and I am prompted with a dropdown box to select which columns to check for missing values. I will pick all, then apply the change. At this point, since I completed the first two items, there is nothing missing, as I deleted everything, so the remaining steps will do nothing. Clicking the option Preview code for all steps allows me to see the generated code.
If I want to run this code every time I get this dataset, I will want to add it to a notebook, so I select Add code to notebook. In my notebook I want PySpark code, which is different from the pandas code Data Wrangler generated. Optionally I could add the pandas code instead, but I see no reason to do that. A popup window appears showing me the code, at which point I can just click on the Add button. The generated code is added to my notebook, saving me the tedium of writing common code like this.
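The generated code takes the shape of a cleaning function applied to the dataframe. The sketch below mimics that pattern in plain pandas with a small made-up dataframe; the sample data and function name follow Data Wrangler's style but are illustrative:

```python
import pandas as pd

# Made-up sample standing in for the public holidays data
df = pd.DataFrame({
    "countryOrRegion": ["Norway", "Norway", "Sweden", None],
    "holidayName": ["Søndag", "Søndag", "Söndag", "Juldagen"],
})

def clean_data(df):
    # Step 1: drop duplicate rows across all columns
    df = df.drop_duplicates()
    # Step 2: drop rows with missing values in any column
    df = df.dropna()
    return df

df_clean = clean_data(df.copy())
print(len(df_clean))  # → 2 (one duplicate and one row with a missing value removed)
```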

Data Wrangler is another automated tool which writes code for me, similar to what I might expect from Copilot. The difference is that Data Wrangler presents decision points, and the code is generated based on my choices about which option is best in a given scenario. I can always return to Data Wrangler to add other steps or make different selections from the operations I chose. This tool can be really useful, and I plan on using it to clean up my data in notebooks.


Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

Exploring Azure Synapse

In the past few months, I have been examining Azure Synapse and what it can do. When it was first released in November of 2019, the first functionality rolled out was an update of Azure SQL DW. For this reason, many people think that Synapse is just an improved version of a cloud data warehouse. Microsoft did improve SQL DW when it moved it to Synapse. The biggest architectural design change is the separation of storage from compute, a theme with many web projects, which allows the compute power to be increased when need dictates and scaled down when computing needs change. Within Synapse, resources are allocated as pools, and you can define a SQL pool to run a data warehouse and later change the compute to a different resource. You will still need to partition your DW, as large datasets require partitioning to perform well. Subsequently, Microsoft released Azure Synapse Studio to be a container for a larger environment of tools and the notebooks to interact with them.

Non-Data Warehouse Elements of Azure Synapse

To me, the more interesting parts of Azure Synapse have nothing to do with data warehouses. Azure Synapse also contains the ability to query files stored in Azure Data Lake Storage Gen2 as if they were SQL tables. This is a great way to analyze large data without first cleaning it up and putting it into a relational environment. Within Synapse you can formulate a query using syntax for selecting parts of files, providing the ability to look at many files as if they were one. You can also create processes which bring data into your Synapse environment using orchestration, a process that people who are familiar with Azure Data Factory will find very familiar. Synapse also contains the ability to analyze data in Cosmos DB without doing ETL or moving the data at all, using a scalable architecture which will not impact the transactions being processed simultaneously on the same Cosmos DB.
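The file-querying syntax uses OPENROWSET, and a wildcard in the path is what lets many files be read as if they were one table. A sketch, with a made-up storage account and path:

```sql
-- Query parquet files in the data lake as if they were a table (illustrative path)
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/mycontainer/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS sales;
```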

Azure Synapse and Spark

Check out the promotional video here

By far the most interesting component of Azure Synapse is the Spark connection. Microsoft has added the ability to create Spark pools within Azure Synapse. To be honest, I was somewhat surprised that this functionality was included here first and not in Azure Machine Learning, where to use Spark you need to access clusters created in Databricks. Spark provides the ability to dynamically scale resources when running processes. This is very handy when writing machine learning code, which can really use the performance improvements Spark brings. Because this is Microsoft's Spark, you can also write your code in .NET if you like, in addition to the more common Spark languages: Scala, R, and Python. You can also incorporate the AutoML API created for Azure Machine Learning in R and Python so that you can use the power of Azure to select your algorithm and hyperparameters instead of spending time doing it yourself.

Getting up to Speed with Synapse

There is a lot to learn when it comes to Synapse, as it combines a lot of different components into one environment. As more and more data is migrated to the cloud, Synapse is uniquely designed to handle big data components containing raw data and managed data lakes, as well as more traditional data warehouse needs. It can also be the location where all of the data is processed, secured, cleaned, and analyzed using machine learning. There is a lot to cover, and since it is new, there are not a lot of places yet where you can learn more about it. If you are interested in a deep dive on Azure Synapse and how to use it in a Modern Data Warehouse, sign up for my precon at PASS Summit 2020, where I will cover the topic in depth.


Yours Always

Ginger Grant

Data aficionado et SQL Raconteur