SQL Server 2017 Machine Learning Services – Installation

With the release of SQL Server 2017, you now have the capability to incorporate both R and Python into SQL Server. As there is a lot of material on the topic, this is the first post in a series, and it covers installation. Future posts will cover the internal components, monitoring R and Python code to determine its performance impact on SQL Server, creating and maintaining R code, creating and maintaining Python code, and other related topics.

Installing Machine Learning Services

R was first introduced in SQL Server 2016, where it was called R Services.  For SQL Server 2017, the service was renamed Machine Learning Services and expanded to include Python.  The install presents three related options: installing Machine Learning Services, then selecting R and/or Python, as you can see in the attached picture.

Why you want to select Machine Learning Services (In-Database)

There are two installation options: In-Database or Standalone.  If you are evaluating Machine Learning Services and you have no idea what the load may be, start by selecting Machine Learning Services (In-Database).  There are several reasons why the In-Database option is the right default.  One of the problems Microsoft was looking to solve by incorporating advanced data analytics was to improve performance by greatly reducing data latency.  If you are analyzing a lot of data which is stored within SQL Server, performance improves when the data does not need to be moved across a network.  The licensing costs of installing R Server standalone also need to be evaluated with a Microsoft representative.  An evaluation of the resource load on the network, as well as an analysis of the code running on SQL Server, should be performed before deciding to install the Machine Learning Server Standalone.

Internet Access Requirements for installing R and Python

The Machine Learning Service is an optional part of the SQL Server install.  Because R and Python are both open source applications, Microsoft cannot include the R or Python executables within the SQL Server install.  The executables must be downloaded from their respective locations on the internet, and the installation process is a little different if there is no internet access.  Each language has two installs, one for the open source executable and one for the server components.  If you do not have access to the internet on the server where you are installing SQL Server 2017, you can download the files needed for the install ahead of time.

Here are the links for SQL Server 2017 CU2.

Microsoft R Open

Microsoft R Server

Microsoft Python Open

Microsoft Python Server

If you are installing a different version, use the links provided on the Offline Installation screen.  Each of these links will download a .cab file.  You will need to copy the cab files to a location the server can access and provide that path in the Offline Installation window.

 

What is Installed with Machine Learning Services

Machine Learning Services runs as an external process, and SQL Server communicates with it through launchpad.exe to access either R or Python.  For a quick check to see if Machine Learning Services was installed, look for the SQL Server Launchpad service in the list of running services.  The install also creates, by default, 20 external users which are used to call R or Python.  A subdirectory is created for each user, named after the SQL Server instance name, which is by default MSSQLSERVER, followed by a number 00-20.  These subdirectories have nothing in them; they are used temporarily when R or Python needs them, and the information in them is eventually deleted by SQL Server.  The default location is C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\ExtensibilityData.
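
If you prefer to check from a query window rather than the list of services, the small sketch below should confirm both that the feature is installed and whether external scripts have been enabled yet (the configuration step is covered later in this post).  The column aliases are my own.

-- Returns 1 if Machine Learning Services (In-Database) was installed during setup
SELECT SERVERPROPERTY('IsAdvancedAnalyticsInstalled') AS IsMachineLearningInstalled;
-- Shows whether the 'external scripts enabled' option has been turned on yet
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'external scripts enabled';
GO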

The R tools are located in the C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\R_SERVICES directory and include RLauncher.dll, which loads R.  If you want to run R on the server, which can be handy when updating R libraries, run RGUI.exe, which loads an interface where you can run R code.  In SQL Server 2017 CU2, Microsoft R Open 3.3.3 is installed along with R Server 9.2.0.  Microsoft R Open is a version that is 100% open source and completely compatible with the standard open source version of R, which is commonly referred to as Comprehensive R Archive Network [CRAN] R.  Microsoft rewrote some of the underlying functions so that they would be multi-threaded, which base R is not, and incorporated the Intel Math Kernel libraries to improve performance.  R Server is the version of R which contains the proprietary functions Microsoft created for SQL Server, including the ability to load code in and out of memory, which will be discussed in a future post.
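
Once the configuration steps later in this post are complete, a quick way to confirm which R version and packages the instance is actually using, without leaving SSMS, is a sketch along these lines; the column names are just illustrative.

EXEC sp_execute_external_script @language = N'R',
@script = N'
# Build a data frame with the R version and every installed package
pkgs <- installed.packages()
OutputDataSet <- data.frame(RVersion = R.version.string,
                            PackageName = rownames(pkgs),
                            PackageVersion = pkgs[, "Version"],
                            stringsAsFactors = FALSE)'
WITH RESULT SETS ((RVersion VARCHAR(100), PackageName VARCHAR(200), PackageVersion VARCHAR(50)));
GO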

The Python tools are located in the C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES directory and include Pythonlauncher.dll, which loads Python 3.5.  The installation includes the Anaconda distribution for Python, which provides not only the data science components of Anaconda but also scikit-learn and pandas.  Microsoft has also included machine learning algorithms created for Python in the microsoftml and revoscalepy libraries.  There is a lot of interesting content in these libraries, and I will be discussing more about them in a later post.
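
As with R, once external scripts are enabled (see the next section), a small sketch like this one can confirm the Python and pandas versions SQL Server is using.  The column names are my own and only for illustration.

EXEC sp_execute_external_script @language = N'Python',
@script = N'
# Report the Python and pandas versions bundled with this instance
import sys
import pandas
OutputDataSet = pandas.DataFrame([[sys.version, pandas.__version__]],
                                 columns=["PythonVersion", "PandasVersion"])'
WITH RESULT SETS ((PythonVersion VARCHAR(200), PandasVersion VARCHAR(50)));
GO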

Configuring SQL Server to Run R or Python

Once the Machine Learning components are installed, there are some configuration steps which must be completed to permit R or Python to run on SQL Server.  If this is a new server, make sure to install SQL Server Management Studio, since it is not included in the SQL Server install.  From within an SSMS query window, the following script needs to be run to enable external scripts:

SP_CONFIGURE 'external scripts enabled', 1
GO
RECONFIGURE
GO
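
If you want to confirm the change was accepted before restarting, an optional check against sys.configurations can be run.  This is just a sketch, not a required step; the scripts still will not run until the restart described below is complete.

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'external scripts enabled';
GO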

After this step completes successfully, a restart of the SQL Server service is required.  When stopping the service, you will be notified that SQL Server Launchpad will also need to be restarted.  I have noticed that, for some reason, Launchpad does not always restart when SQL Server is restarted, so you might want to check to make sure it is running, as you cannot run R or Python unless the SQL Server Launchpad service is running.  After the restart, to check whether R is working properly, run the following code from within an SSMS query window.

EXEC sp_execute_external_script @language =N'R',
@script=N'OutputDataSet <-InputDataSet',
@input_data_1 =N'SELECT 1 as CheckToSeeIfRIsWorking'
WITH RESULT SETS (([CheckToSeeIfRIsWorking] int not null));
GO

When run successfully, this script will return a 1. SQL Server is now ready to run R.  To check to see if Python can be run successfully, run this script.

EXEC sp_execute_external_script  @language =N'Python',
@script=N'OutputDataSet = InputDataSet',
@input_data_1 = N'SELECT 1 AS CheckToSeeIfPythonIsWorking'
WITH RESULT SETS ((CheckToSeeIfPythonIsWorking int not null));
GO

In my next post I will cover the SQL Server internal components which run when R or Python code is executed.  Please subscribe to my blog to be notified when the next installment is available. If you have any questions, comments or ideas for future post topics, please leave a comment, as I would really enjoy the feedback.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

 

 

Applying Data Science to SQL Server

Data has been getting a lot of attention in the business world for a while now.  First there was big data, which was another way to store data so that it could be analyzed later.  Recently the talk has been all about analyzing the data with newer tools such as R and Python.  The reality is that people who have been working with databases and doing business intelligence work have been analyzing data for a while.  Learning a different toolset for analyzing data is not such a big leap, but an expansion of what they already know.  With the field expanding rapidly and demand huge, now is a great time to learn the tools.

Traditional Data Science Development

Data scientists have created analysis solutions with data for a number of years.  The data is analyzed, cleaned, processed with various algorithms, and results are created.  When the process is complete, code has been created to provide meaning from a portion of the data and is ready to be migrated to production. Traditionally there has been a big gap between creating a solution and implementing it to run against data on a regular basis.  Data scientists are traditionally not part of the IT organization; they are actuaries or analysts, not the people who have anything to do with system processing. Recently I did some work for a company where, after the data scientists were done creating a solution, they turned over all of their code to the Java team.  Six weeks later the code was released into production. This solution made no one happy.  Management thought it took too long.  The data scientists didn’t believe that the code they created was what was implemented into production, and the Java developers were tired of people blaming them for wrong code which took a long time to implement.

SQL Server Implementation of Data Science

Since SQL Server 2016 incorporates R, and SQL Server 2017 adds the ability to include Python code as well, data science solutions can now be incorporated as part of a scheduled process within SQL Server.  There is now a DevOps path for incorporating R and Python into SQL Server.  One way of learning about the technology is through blogs and other online training which can help you get up to speed.  Many times, though, there is no substitute for hands-on learning.  If you are attending PASS Summit 2017 and want to learn not only about data science but how to incorporate it into SQL Server, I hope you can sign up for my all-day training session on Applied Data Science for the SQL Server Professional.  I hope to see you there.

I have recently created a YouTube channel where I plan on sharing more data-related content, and I have included my first video about this conference.

If you are at PASS Summit, please introduce yourself as I would love to meet people who read my blog personally.

 

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

What to do when SQL Server 2016 R Installation Does Not Work

I was recently helping a customer with their R installation.  After R was installed successfully, I went to SQL Server Management Studio and ran the script needed to enable external scripts:

EXEC sp_configure  'external scripts enabled', 1
RECONFIGURE WITH OVERRIDE
GO
RECONFIGURE;
GO

After this script ran successfully, I closed SSMS and restarted the SQL Server service, noting when I did so that it told me SQL Server Launchpad, which is installed as part of R Server, was also going to be restarted.  Everything looked good until I tried to run my first R command.  I went for the super generic script:

EXEC sp_execute_external_script @language=N'R'
,@script=N'OutputDataSet <-InputDataSet'
,@input_data_1=N'SELECT 1 as CheckToSeeIfRIsWorking'
WITH RESULT SETS
(
(
[CheckToSeeIfRIsWorking] INT NOT NULL
)
);

Much to my surprise, after this I received an error:

Msg 39019, Level 16, State 1, Line 1
An external script error occurred:
Unable to launch the runtime. ErrorCode 0x80070490: 1168(Element not found.).
Msg 11536, Level 16, State 1, Line 1
EXECUTE statement failed because its WITH RESULT SETS clause specified 1 result set(s), but the statement only sent 0 result set(s) at run time.

I looked in the SQL log files and didn’t find any errors.  I checked the configuration manager to ensure that I had some user ids configured; the default 20 users were there as expected.  Nothing seemed to make any difference.  Looking online, the only error I saw which might possibly be close was a different error message about 8.3 naming and the working directory.

Resolving Error Code 0x80070490 Unable to launch the runtime for R in SQL Server

The 8.3 naming clue reminded me that I had run into a similar problem with R and Power BI.  R natively doesn’t understand directory names longer than 8 characters; if there are more letters than this in a directory, which is always the case with the default SQL Server directories, internally the directory is read as the first 8 characters and a tilde (~). This particular server also had an installation of SQL Server 2014 on it, and if you looked at the short file names it was impossible to tell which instance was which, as the directory read like this: C:\PROGRA~1\MICROS~2\MSSQL1~1.MSS\MSSQL\EXTENS~1.

I created a new directory with a short name, D:\Rwork, and changed the working directory to the new subdirectory Rwork.  I was really hopeful that this would work, but when I ran the external script listed above, it still did not work.  What had I done wrong?  I had not deleted the previous working directory, which was in the default location C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\ExtensibilityData.  When SQL Server creates this directory, it also includes subdirectories for all of the users.  The new directory I created didn’t have those subdirectories in it.  I copied the subdirectories from the old working directory to the new one and tried again.  This time, success!

I hope you find this post helpful, as I know I could have used it when I was trying to figure this out. If it saved you time too, drop me a line and let me know.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

Database Table Design

http://michaeljswart.com/2016/06/t-sql-tuesday-079-its-2016/comment-page-1/#comment-186750

There are a number of different ways that you could decide to organize your data in a database. If you are creating a database to be used in a transactional system, your table design should follow a normalized design as much as possible.  Data should be grouped in logical groups, such as customers, products, sales, orders, quotes, tickets, etc.  Redundantly repeating data in multiple places will cause problems in the future.  Your design may include hundreds of tables, and that is perfectly fine.

If the purpose of the database is for Power BI or for a data warehouse, dimensional modeling techniques should be used instead.  In this type of database design, generally speaking, there are a number of tables containing descriptive data, such as product and customer, and a few fact tables which contain the actions which happened, such as sales.  The database design will look much like a star, with the fact tables in the center and the dimension tables connected to them like satellites.  If you have one dimension table connected to another dimension, that design is called a snowflake, and some applications, like Analysis Services Multidimensional, will not process it well.  Power BI and Analysis Services Tabular work very well with snowflake dimensions.
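
For readers who think in T-SQL, here is a minimal sketch of what a star schema might look like; the table and column names are purely illustrative and not taken from any particular source system.

-- Dimension tables hold the descriptive attributes
CREATE TABLE DimCustomer (CustomerKey INT PRIMARY KEY, CustomerName VARCHAR(100), City VARCHAR(50));
CREATE TABLE DimProduct  (ProductKey  INT PRIMARY KEY, ProductName  VARCHAR(100), Category VARCHAR(50));
CREATE TABLE DimDate     (DateKey     INT PRIMARY KEY, FullDate DATE, CalendarYear INT, MonthNumber INT);
-- The fact table records the actions (sales) and points back to each dimension
CREATE TABLE FactSales (
    SalesKey    INT IDENTITY PRIMARY KEY,
    DateKey     INT REFERENCES DimDate(DateKey),
    CustomerKey INT REFERENCES DimCustomer(CustomerKey),
    ProductKey  INT REFERENCES DimProduct(ProductKey),
    SalesAmount DECIMAL(18,2),
    Quantity    INT);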


Snowflake Data Model from Power BI

 

Table Design Gone Wrong

To paraphrase Ron White, the reason I described database modeling is so that even people who know nothing about database design can appreciate my interview story. When I was working at a previous location, I assisted in providing technical reviews for database developer jobs.  We asked a number of typical questions about indexing and stored procedures, but I always tried to come up with at least one question which the candidate could not readily answer by cramming interview questions found on the internet.  I decided to ask one candidate, who had correctly answered the previous stock questions, something that would let us know what kind of work he had really done.  I asked him, “What do you do to determine how to design a table?” I was interested in his thought process, to see if he would mention normal form or describe something he had done in the past.  I was completely surprised by his answer.

“Well, you can only have 256 columns in a table. After that you have to create a new one.”  This answer was a complete surprise.  I was really curious to find out where he had developed this completely warped view of how to determine which fields should go in a table.  It turned out that he learned all of his database skills from a co-worker who had recently retired.  His co-worker had worked at the same location for a very long time and had started out on mainframes without any databases.  He had migrated some of the applications to databases, and they wrote them this way because it “made sense”.   After that the interview was over, and we hired someone else.

I challenge anyone who is learning databases to look up what people tell you to do.  This is useful for two reasons: the first is that reading about a concept another way will probably help you learn it better.   The other reason is that you can find out whether the person teaching you really knows what they are doing, so you will learn the correct way to do something.

 

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

 

 

What to use to Dynamically Write a Date Dimension in Power BI: M or DAX?

Recently I needed to create a date dimension for a Power BI model, as there was not one in the source database. There are two different ways to do this: using DAX from the Modeling tab within the Data View, or using M via the Query Editor window.  As a general rule, when possible, data manipulation should be done in M, as it offers a greater level of compression.  In this case, though, I am using a function in DAX, which is not the same as creating a calculated column.

Create a Date Table in DAX

To create a date table in DAX, go to the Data View in Power BI and select the Modeling tab, then click on the New Table button on the ribbon.  For those who are wondering how you would go about writing either one, here is the source code for the DAX version.

 

DimDate = ADDCOLUMNS( CALENDAR(DATE(2017,1,1), DATE(2020,12,31)) ,
"Date Key", FORMAT ( [Date], "YYYYMMDD" ), //NumericDate
"Year", YEAR([Date]),
"Qtr Number", "Qtr " & FORMAT( [Date], "Q"),
"Q Number", "Q " & FORMAT( [Date], "Q"),
"Month Name" , FORMAT ( [Date], "mmmm" ) ,
"Month Short Name" , FORMAT ( [Date], "mmm" ) ,
"Month Number", MONTH([Date]),
"Month Year", FORMAT ( [Date], "mmm " ) & YEAR([Date]),
"Day Name", FORMAT ( [Date], "dddd" ), //Name for Each day of the week
"Day Short Name", FORMAT ( [Date],  "ddd" ),
"Day Number" , WEEKDAY ( [Date] ) //Sunday is 1
)  

This code uses the DAX CALENDAR function to create a contiguous set of dates between January 1, 2017 and December 31, 2020, with a field named “Date”.  ADDCOLUMNS creates the remaining fields, each of which needs a name; comments were added for clarity.

Create a Date Table in M

In the Power BI Query Editor, click on the New Source button and select Blank Query. Select the View tab and click on the Advanced Editor button.  The Advanced Editor is where the M query is stored.  Paste the following code in to create a new table called Query1, which of course you can rename.

let
Source = List.Dates( #date(2017,1,1), Number.From( #date(2020, 12,31) - Date.From( #date(2017,1,1) ))+1, #duration(1,0,0,0)),
#"Converted to Table" = Table.FromList(Source, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Changed Type" = Table.TransformColumnTypes(#"Converted to Table",{{"Column1", type date}}),
#"Renamed Columns" = Table.RenameColumns(#"Changed Type",{{"Column1", "Date"}}),
#"AddYear" = Table.AddColumn(#"Renamed Columns", "Year", each Text.End(Text.From([#"Date"], "en-US"), 4), type text),
#"AddMonth" = Table.AddColumn(AddYear, "Month Number",   each Date.ToText([Date], "MM")),
#"AddQuarter" = Table.AddColumn(AddMonth, "Quarter Number", each "Qtr " & Number.ToText(Date.QuarterOfYear([Date]))),
#"AddMonthName" = Table.AddColumn(#"AddQuarter", "Month Name", each Date.ToText([Date], "MMMM")),
#"AddMonthShortName" = Table.AddColumn(#"AddMonthName", "Month Short Name", each Date.ToText([Date], "MMM")),
#"AddShortMonthYear" = Table.AddColumn(#"AddMonthShortName", "Short Month Year", each [Month Short Name] &" " &  [Year]),
#"AddDayOfWeek" = Table.AddColumn(#"AddShortMonthYear", "Day of Week", each Date.ToText([Date], "dddd")),
#"AddDay" = Table.AddColumn(#"AddDayOfWeek", "Day", each Date.ToText([Date], "dd")),
#"AddDateKey" = Table.AddColumn(AddDay, "DateKey", each ([Year]&[Month Number]&[Day]))
in
#"AddDateKey"

To Create A Date Table Use Either M or DAX

Now the question is which one should you use?  To be honest, it doesn’t matter much; I couldn’t see any difference when I tested it.  To validate this answer I consulted Twitter, which sparked some very interesting comments and analysis. Marco Russo b | t is planning on writing a blog post on the details, but Jason Thomas b | t gave me this summary: “[The] Dictionary expands as needed as values are inserted–designed to reduce cost of re-alloc and re-org of hash buckets (at cost of memory waste).” The full explanation needs a post of its own, which I am looking forward to reading when Marco writes it.   I’ll quote Kasper De Jonge b | t, who summed it up best: “I don’t believe it will matter much, the date table will be so small regardless”. Whichever way you choose to add a date table, which is needed for time intelligence, you now have the code to do it in either DAX or M.

 

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

 

Missing Custom Power BI Visuals

One of the great new features of the July release was the ability to get all of the Power BI custom visuals from within Power BI.  I had a bookmark to get the visuals from the Office Store, but it always seemed like kind of a kludgy solution.  Personally, I liked the visuals better when they were on the Power BI website, prior to March of 2017.  The filters worked better and they also included a sample file.  Now I have a different and more technical reason to not like the visuals in the Office Store: some of the Power BI custom visuals are not there.

Some Power BI Visuals Are Not in the Office Store

This week I decided to do a demo using the Aquarium custom visual.  As readers of my blog know, I have used this custom visual before, but it has been a while and I have changed PCs since then.  No worries, I can always go download the visual from the store, right? Wrong. The Aquarium visual is not available on the new store. Neither is Image Viewer; if you are looking to add that to your latest Power BI report, it is not available. What happened?

So Long and Thanks for All of the Fish

I found out from Adam Saxton b | t that moving Power BI custom visuals was not the simple cut-and-paste process that I had always assumed it was.  The people who write custom visuals had to re-write them.  What’s more, unlike when the custom visuals were housed on the Power BI website, custom visual creators now have to pay $25 to register, or $99 for their company.  This means that some custom visuals may never appear in the store, as the people who created them aren’t willing to pay money to give them away.

If you have the custom visuals, or as in my case you know someone who can give you a copy of a Power BI custom visual which was published prior to the move to the Office Store, the visual will still work when you upload it to the service.  I have also been told that Microsoft is working on adding the Aquarium visual to the Office Store, so at some point it will again be available for download.  Those who have noticed that the Box and Whisker custom visual is not the same as the previous version probably will not be able to download the old one, but if they can find someone to give it to them, it will still work.

If I do find out when the Aquarium visual will be available from the store again, I will update this post. Until then, if there is a visual you want, I would try asking on Twitter, as that worked for me.

****UPDATE: As of August 4, 2017, the Enlighten Aquarium is available again! Here’s a link to the Office Store.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

Getting Started with Integrated Python and SQL Server 2017

As part of the effort Microsoft is making to incorporate analytics, Python is being added to SQL Server 2017.   This means SQL Server will support the two primary languages of data science, R and Python.  As I have previously reviewed using R in SQL Server, I wanted to also review using Python with SQL Server. Since Python is near the top of the most popular programming language charts, many people are interested in learning more about it.  As many data professionals are unfamiliar with Python, I wanted to introduce the topic not just here, but in my upcoming webinar for 24 Hours of PASS on Implementing Advanced Analytics with SQL Server 2017 and Python.

Installing Python in SQL Server

SQL Server 2017 Install Window

The process for using Python in SQL Server is very similar to the previous process for installing R.  Microsoft renamed R Services to Machine Learning Services, and now allows both R and Python to be installed, as shown in the screen above. Microsoft’s version of Python uses Anaconda, an open source analytics platform created by Continuum. This is where Python differs from other open source languages: the distribution matters, as Anaconda contains data science components which are not included in the standard distribution of Python. Continuum also sells an enterprise version of Anaconda, which of course has more features than the free version. It is important to remember the Python environment, as you will need to select the same distribution when running Python code outside of SQL Server.

Configuration Changes for Python

The last thing needed to run Python is to configure and restart the SQL Server services. In a new query window, run the following command:

sp_configure 'external scripts enabled', 1
GO
Reconfigure
GO

After restarting the SQL Server Service, SQL Server will now run Python code, or if you installed SQL Server with both R and Python as I did, both languages can be used.
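
To confirm that Python is actually working after the restart, a quick sketch like the one below can be run from an SSMS query window; it mirrors the generic R test from my earlier installation post, and the column name is just illustrative.  When it works, it returns a single row with a 1 and prints the Python version to the Messages tab.

EXEC sp_execute_external_script @language = N'Python',
@script = N'
# Print the Python version to the Messages tab and return the input row unchanged
import sys
print(sys.version)
OutputDataSet = InputDataSet',
@input_data_1 = N'SELECT 1 AS PythonIsWorking'
WITH RESULT SETS ((PythonIsWorking INT NOT NULL));
GO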

Python Development Environments

SQL Server Management Studio is designed for writing T-SQL code, not Python.  The process for implementing Python code in SQL Server is to first create and test the code in Python, and once the code is working, deploy it in SQL Server.  There are a number of different user interfaces that you might want to consider when writing Python.  Python comes with IDLE, but as it is a rather feature-bereft application, chances are that if one is coding Python, they will want to use some other user interface.  Some of the more common ones are JetBrains’ PyCharm, Atom Python tools, or the UI Windows developers use the most, Visual Studio with Python language support.  Selecting and setting up the environments is a surprisingly complex process.  Python is a very flexible language and is widely used beyond the realm of data science to do things like create web applications.  For this reason, the environments selected matter, as they create different ecosystems.

Incorporating Python to solve Data Science Solutions

In my upcoming session for 24 Hours of PASS, I will review the pros and cons of several development environments, and let you know which one I selected and the steps needed to make it work.  We will also take a look at implementing some Python code in SQL Server so that we can perform advanced analysis with Python.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

 

Power BI – Beyond the Basics

When helping clients recently with their Power BI implementations, I have noticed that when talking to people about Power BI there seem to be some areas where there continue to be a lot of questions.  While it is easy to find a plethora of information about getting started with Power BI, when it comes to implementing a solution, the information is scarce.  How do you handle releases? Should an implementation contain only one data model? Is Power BI’s data secured in the cloud? Is Office 365 required to use Power BI? Do you have to have Power BI Premium to run Power BI locally?

Advanced Power BI Techniques in Norway

While I have discussed some best practice techniques on my blog, as usual, new features released in Power BI have a tendency to change some of the available options.  For example, App Workspaces, the updated take on Content Packs released a few months ago, now offer a new method for releasing not only dashboards but the reports behind them, along with the ability to easily migrate sources. I am excited that I will have the opportunity to discuss the answers to the questions received by doing a full day of training at SQL Saturday Oslo. I am looking forward to visiting Oslo, which is home to the best preserved Viking ship, an opera house designed to be walked on, and the home of the guy who painted The Scream.  If you happen to reside somewhere where it is possible to make the journey to Norway, please register to attend this full day of interactive training.  We will cover all of these items and go into detail about Power BI administration, security, and new features and design techniques which will improve Power BI implementations.

Norway Parliament Building in Oslo

For those of you who are unable to attend, I feel obliged to answer some of the questions I posed earlier.  Implementations generally require more than one data model.  Power BI data is encrypted both in transit and at rest. You do not need to have Office 365 to run Power BI.  Power BI can be run locally with Power BI Report Server, which is part of SQL Server 2016 Enterprise with Software Assurance, and you do not need to sign up for Power BI Premium to install it.

I hope to see you in Norway.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

 

On Premises Power BI with Power BI Report Server

On June 12, Microsoft officially released Power BI Report Server.  The version that was released had a different set of features than what was promised when the product was announced earlier, which I discussed in a previous post. Some of the features, and which versions of SQL Server are eligible to receive the Power BI Report Server upgrade, were clarified at the MS Data Summit.  This post contains everything you need to know to determine if you can upgrade from a current SQL Server Reporting Services instance, what features are included in Power BI Report Server, and what time frame those who want to use it should follow.

Power BI Report Server Only Connects to Analysis Services Data Sources

The most glaring change from what was announced earlier is that Power BI Report Server can only connect to Analysis Services data sources, both tabular and multidimensional.  If you want to connect to SQL Server, Oracle or Excel, or all three, use the Power BI web service.  Only in the cloud version can users create a data mashup or connect to anything other than Analysis Services.

Connecting to only one kind of data source is not what was promised when Power BI Report Server was announced in May.  Various Power BI product team members held a session at the Microsoft Data Summit where attendees were able to ask questions.  I asked, “When are we going to be able to use Power BI Report Server with data sources other than Analysis Services?”  In a room full of people, I was assured that it was a top priority of the team to release the same data connectivity functionality for Power BI Report Server that currently exists for the Power BI Service, and that the current plan was to release this functionality in the next release.

Power BI Report Server Releases are Planned for Three Times a Year

Power BI Desktop currently has a monthly release schedule.  The Power BI Service is often updated even more frequently than that, as Microsoft tends to make changes when they are complete, rather than hold them for a given date.  In a corporate environment, it is sometimes difficult to accommodate such frequent releases.  Power BI Report Server has a planned release cycle of three times a year, with exceptions for hot fixes or security patches.  The next release of Power BI Report Server is planned for the fall.

To ensure that the version of Power BI Desktop matches Power BI Report Server, there is now a version of Power BI Desktop for Power BI Report Server. The icon is exactly the same, but when you start the program the splash screen is different, showing in the top left corner that you are running the Report Server version.  When running this Power BI Desktop, the title also clearly says Report Server.  It is possible to run both, as I am presently doing on my PC.  One of the pitfalls of doing this is that when you click on a PBIX file, the Desktop version which loads is the last one you installed.  The Power BI Desktop Report Server version contains functionality which is not supported in Power BI Report Server, as it allows you to connect to other data sources and run R, neither of which will work in Power BI Report Server.  Since the next release of Power BI Report Server, the one which should support connectivity to more than Analysis Services, is planned for the fall, that release should also bring the data mashup capabilities to the future Power BI Report Server Desktop version.

No Dashboards for Power BI Report Server

As I talked about in a previous post, there is no dashboard capability in Power BI Report Server, which supports reports and other desktop features.  Power BI Service features, like dashboards and workspaces, are not available in the desktop or in Power BI Report Server. In the meeting that the product team held, someone else in the room asked a question which I promised in a previous post to answer: “Are there plans to add dashboards in a future release of Power BI Report Server?” The answer was no. Microsoft does not consider dashboards a Power BI Report Server feature and does not have them in the product road map for the desktop.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur

Power BI Data Insights

 

2,500 people sat in the semi-darkness of the MS Data Insight Summit, joined by who knows how many watching the live stream, listening to the upcoming changes to Power BI.  Some of the announcements were expected, like the General Availability [GA] release of Power BI Premium and Power BI Report Server on June 12.  Although there is a lot of documentation on both products, there was still more information to be learned now that they are released.  Microsoft also announced they were creating a new product offering, Power BI Embedded.  As part of the product realignment, the ability to embed Power BI into applications was moved to being a Premium-only feature.  This move caused an uproar in the marketplace, as many companies wanted to continue embedding Power BI but could not justify paying Power BI Premium pricing.  Power BI Embedded was created to address the sticker shock. This new Power BI product has two different pricing levels, EM1 and EM2, starting at $625 per month.  Not a whole lot of information has been publicly released regarding Power BI Embedded, but it is designed to have a limited feature set, focused on just embedding Power BI.

Power BI Upcoming Features

Microsoft demonstrated some upcoming features of the desktop which were, predictably, very impressive.  They created an amazing timeline custom visual which I really hope to use soon.  Another neat feature demonstrated in the keynote was drill down pages.  This feature allows users to create pages which will be displayed when a field is selected on the previous screen, with the data reflecting the selection.  As there can be a lot of different filters created for a Power BI report, a new bookmark feature will also be coming soon which will allow users to save the context of the report, including all of the selections made with the slicers. With this feature, the next time the report is viewed, only the selections people find important will be shown.  These new features are scheduled for release in the next three months.

Power BI Community


Art credit to Josh Sivey who was kind enough to send this

One of the last things Microsoft did was thank the user community for their involvement with Power BI. Since many of the new features added are based upon feedback from the user community, Microsoft works hard to engage the larger community to help share information regarding the product, as well as to mine ideas from https://ideas.powerbi.com . It was nice of Microsoft to recognize people in the community, and even though the slide was not up for very long, lots of people noticed who was recognized.

There is material for a number of other posts from this conference, so please subscribe to hear more information about Power BI very soon.

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur