Balanced Data Distributor and Other Features in the Pack

Sometimes the basics aren't enough.  Car manufacturers know this; that's why they have a base model and one with all the options.  Let's face it, most of the time you want all the options.  Well, with SQL Server you can get the options, and you don't have to spend hours debating the cost of clear coat. But I digress. If you want all the options, you can have them right after you install SQL Server, and I am not talking about the cool CodePlex stuff, which I will have to bring up another time. The genuine Microsoft list of add-ons is available to you at no additional cost. Just because you installed SQL Server doesn't mean you have everything Microsoft released.  Versions 2008 R2, 2012, 2012 SP1, and 2014 all have Feature Packs. Looking at the SQL Server 2014 Feature Pack list, there are a number of things which are about as useful as heated seats in the desert, like the DB2 or SAP drivers, which is probably why they are not automatically included as part of the release. Still, there is a very good chance many SSIS developers will be interested in some of them.  For example, wouldn't you want the option of a more powerful and fuel-efficient motor?  I know I would. There is an item in the Feature Pack which provides exactly that: the Balanced Data Distributor.

Threading the Memory since 2011

Balanced Data Distributor (or BDD for short) was first released as a new SSIS transform for 2008 and 2008 R2. It was designed to take advantage of multi-threaded hardware by spreading the data load, so that the data can be broken into chunks and processed at the same time rather than serially loading a single stream from beginning to end.  This means you can go much faster on a single tank of gas, which is awesome. As you might imagine, processing more data at one time decreases the time needed to process it.  This is really useful if you are using a slow row-by-row script component, or blocking transforms like Sort or Aggregate, which require SSIS to look at every single row before producing output.  Another situation where you might find it useful is if you have to write to a really slow output.

DIY of BDD – Divide and Conquer

What the BDD does is split the data stream across multiple outputs, each of which can run on its own thread. How many is determined by the developer: the component itself needs no configuration, but you do need to copy the downstream transforms once for each way you want the work divided. Let me show you what I mean through a series of SSDT screen shots.

BBDfull

This screen shows a Data Flow task where I am reading in 121,317 rows and then doing an Aggregate and a Sort, both of which are blocking transforms. This is just meant to be an example; with only 121,317 records you probably won't see that much of a performance improvement, but you get the idea.

In this screen shot you can see that I have added the BDD transform, which I've highlighted in the SSIS toolbox so you can see where it shows up once the component is installed. You will also see that I copied the downstream transforms so that everything from the Aggregate onward appears twice. What happens when this version is run?

BBDRun2

Check out the outputs underneath the BDD component. The number of records was split, but it isn't an even split.  There are 62,337 records on one side and 58,980 records on the other.  The record counts in each output are determined by the component as the optimal split based on the available threads.  You configure nothing; just drag it onto the design surface. Pretty cool, isn't it?

This example shows how easy it is to speed up SSIS processing without a huge amount of effort, allowing you to drive laps of code at speeds you may not have thought possible, with the help of a free download. Have fun and let me know what kind of performance gains you see.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

From the clouds, data – DIY Guide to using SSIS with Azure

My apologies to Jack Johnson for lifting his song title for this blog, but I couldn't help it. I thought that it might be useful to discuss the how-tos of data migration to the cloud, which reminded me of all the cloud songs I know. I always thought the cloud metaphor was sort of an odd name for a remote data center, but that is probably why I am not in the business of naming things. But I digress. Back to the topic at hand. Moving data from SSIS to databases hosted on Microsoft's Azure cloud requires some different steps than moving data around on premises. Since things on the cloud change quite quickly, I thought I would demonstrate what this currently looks like using SSDT in Visual Studio 2012.

Breaking through the Wall

Before getting started with SSDT, there are some configuration steps to complete on Azure first. The Windows Azure SQL Database firewall must be set up to allow connections from your IP address, or your data will never get to the cloud. There is an option on the database screen to “Connect to your database”. If the firewall is not open for the IP address and port in use, you will get this message.

AzureDBScreenConfigFirewall-message

Selecting Yes is needed to be able to connect to the Azure database via SSIS.
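If you would rather script the firewall rule than click through the portal, Azure SQL Database also provides a stored procedure for it. Here is a minimal sketch, run against the master database of the Azure server; the rule name and IP addresses are placeholders to replace with your own.

-- Run in the master database of the Azure SQL Database server.
-- The rule name and IP range below are placeholders; use your own values.
EXECUTE sp_set_firewall_rule
    @name = N'SSISClientRule',
    @start_ip_address = '203.0.113.42',
    @end_ip_address = '203.0.113.42';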

One might also want to validate that the right drivers are loaded for SSDT. If you are running Visual Studio 2012 or 2013, no worries, as the drivers are already there, but for earlier versions new drivers may be required. If you are planning on loading data to an Azure SQL database, ODBC or ADO.Net are the connection types needed. The old data connection standby, OLE DB, is going to be left in the toolbox like last year’s dress, as it won’t work for the cloud. Much like fashion, everything old is new again, so ODBC is once again the “It” connection. You can use ADO.Net too, but I won’t be using it here.

The next step in the process is getting the connection information needed to connect to the Azure database. Microsoft made this step quite easy. Look on the Azure database screen where I’ve pasted a pink arrow, and click there.

AzureDBScreenFull-EnabledCropped


This makes it so easy: a screen pops up with all the connection information you need. Just copy the ODBC section, and remember your password, as you will need to enter it yourself.

AzureODBCConnectionInfo
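For reference, the ODBC section you copy looks something like this; the server, database, and user names below are placeholders, and the password has to be filled in by you.

Driver={SQL Server Native Client 11.0};Server=tcp:yourserver.database.windows.net,1433;Database=yourdatabase;Uid=youruser@yourserver;Pwd={your_password_here};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;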

While we are still in Azure, I thought it would be a good idea to show where the SSIS package we will be creating will put the data. Here’s the table structure where the data will be placed.

AzureDBScreenTable


Here’s the query screen showing that right now the table is empty.

AzureDBScreenTable-NoData
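Checking this yourself takes nothing more than a count; the table name below is a placeholder for whatever destination table you created. Running the same count again after the package executes is an easy way to confirm the transfer.

-- dbo.MyDestinationTable is a placeholder for your destination table.
SELECT COUNT(*) FROM dbo.MyDestinationTable;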

SSIS Package Transferring Data to Azure

After you have all of the information you need from Azure, it is a relatively simple thing to create an SSIS package with an OLE DB connection for the on-premises database and an ODBC connection to Azure, using the information copied from the Azure database connection screen, to transfer the data to the Azure database.

AzureDBSSISRunning

Going back to Azure, you can see 19,972 rows were added.

AzureDBScreenTable-Data

One word of caution: as you can see here in the progress log, adding data can be a very slow process.

AzureDBSSISProgress

I highlighted the Elapsed time in red so that it would be easy to see that a simple file transfer took over two minutes.

Location, Location

One thing which is important to consider is where you are going to be moving your data. I demonstrated what I think may be the more common scenario, where the data is not in the cloud and you want to move it to the cloud. Microsoft refers to this as Hybrid Data Movement. Of course this may not be your case. If you are running SQL Server on a virtual machine in the cloud, it may make a lot more sense to run SSIS on that virtual machine. If that is the case, for optimal performance, locate the SSIS VM in the same data center as the database; otherwise, bandwidth and network latency will slow it down.

When transmitting data around the cloud, whether from on premises to the cloud or from one server to another within the cloud, consider compressing the data prior to sending it, if at all possible, to decrease the size of the data being transmitted. It may be faster to extract the data you want on premises and transmit a compressed file to be applied on the cloud server. This can get more complicated, as it requires setting up a Secure FTP server to receive the files, which then have to be applied. Natively, SSIS doesn’t have a compression tool, but there are third-party products, such as Task Factory, which will allow you to not only compress the output but also send it to your VM via Secure FTP from within the SSIS package.


Yours Always

Ginger Grant

Data aficionado et SQL Raconteur


SSIS Tuning – What Size Is My Row? Your DIY Buffer Sizing Guide

When looking to improve the performance of an SSIS package, one common recommendation is to modify your buffers so that you load as much data into a single buffer as possible. According to Microsoft, for optimal performance SSIS should be configured for maximum memory utilization by having buffers with as many rows as possible without exceeding the internal 100 MB limit. OK, now that you know what Microsoft says about the topic, how does one put it into practice? I thought it might be nice to post the how-to, as it is a little complicated.

Buffer Tuning Parameters

There are a couple of default settings used for loading data into memory, and they all need to line up correctly to keep data from being spilled from memory to disk. In order to set the sizes appropriately, it is best to apply the Price Is Right axiom: get as close as you can without going over.

DefaultBufferSize – the default is 10 MB. The maximum size is 100 MB, which SSIS stores internally as MaxBufferSize. This value can never be bigger than 100 MB, so size it appropriately for its environment.

DefaultBufferMaxRows – the default is 10,000 rows. This setting needs to be set the same way that you win the Price Is Right: DefaultBufferMaxRows times the row size needs to be as close to DefaultBufferSize as possible without going over.
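Put as a formula, the relationship the rest of this post works through is:

DefaultBufferMaxRows * (row size in bytes) <= DefaultBufferSize, where DefaultBufferSize <= 100 MB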

Size Matters

To figure out what to put in DefaultBufferMaxRows, one needs to know how big the rows are. Remember, to win, the value must be as close to DefaultBufferSize as possible without going over. If the buffer is sized too large, the package will be slower, as rows will be spilled to disk, so make sure you don’t miss by even a little. How big is the row? There are a number of ways of figuring this out. You can look at each field in the query SSIS is loading and, based on the data type, add up the sizes of all the fields. Fortunately, that isn’t the only way to figure out the row size. If your data source is SQL Server, you are in luck, as the system tables can help determine the size. Here is a sample query, assuming your table is Person.Address, which you can run on the AdventureWorks database.

EXEC sp_spaceused 'Person.Address';

The results are:

name       rows     reserved   data      index_size   unused
Address    19614    5960 KB    2784 KB   2688 KB      488 KB

To calculate the row size from these results, take the data value, convert it to bytes, and divide by the number of rows: (2784 * 1024) / 19614 = 145.35, which rounds up to 146 bytes per row.  If you set DefaultBufferSize to 100 MB, which is the maximum and what I recommend in most cases, the buffer size is 104,857,600 bytes.  Buffer size / row size = DefaultBufferMaxRows: 104,857,600 / 146 = 718,202.7, so set DefaultBufferMaxRows to 718,202, rounding down, because going over means losing.  If you prefer to work from the columns, you can get the same information from sys.columns.  By adding up the column lengths, it is relatively easy to figure out the appropriate row size. One word of caution: I do not wish to imply that because the information is available on a per-table basis, one should pick Table or View in the SSIS source. Au contraire. Always access the data by using a Select statement, as it performs better.
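Here is a minimal sketch of the sys.columns approach, again using Person.Address as the example. Keep in mind that max_length is the declared maximum for each column, so this yields a worst-case row size, and that (n)varchar(max) or varbinary(max) columns report -1 and need separate handling.

-- Sum the declared column lengths to estimate the maximum row size in bytes.
-- max_length is -1 for (max) data types, which must be handled separately.
SELECT SUM(c.max_length) AS max_row_bytes
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('Person.Address');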

Validate

To ensure that you have improved performance with the buffer settings, check it. After you are done with the settings, enable logging on the Data Flow task and select the BufferSizeTuning event to see how many rows are contained in each buffer.

Please feel free to drop me a line if you find this helpful.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur