The Scoop on Sqoop

In the weeks following my talks on Big Data at Desert Code Camp and SQL Saturday in Detroit, I have been receiving questions on my blog about Sqoop, so I thought I would get more specific about how it works. Sqoop is the part of the Apache borg-like collective of tools that was created to move data between Hadoop and relational databases, any relational databases. Lots of people have databases and like them; databases are really good ways to store data. Just think: if Oracle had been cheaper and faster, Hadoop might never have been created, since Hadoop was built to solve exactly those problems. I guess at least in this situation resistance was far from futile, but I digress. Let's say you have some data you would like to load into your SQL database. Since you are the one picking the data to load into SQL Server, I am assuming you are picking data which is already structured.

A while ago I worked on a GPS tracking application. We collected data on trucks every 10 seconds, which means we were collecting a lot of data. To keep the database size down, the data was archived off after 30 days. If I were working there now, I would recommend that the data be archived to HDFS instead. You could store it very cheaply that way and, using Sqoop, load it back again if someone threatened to sue or something worse…
Here's how you make an archive like that work using Sqoop and HDFS (a sketch of the commands follows the list):
1. Create a directory in HDFS to act as the archive datastore
2. Load the JDBC driver for SQL Server into Sqoop's lib directory, because out of the box they only give you MySQL
3. Run the Sqoop import command
4. Sqoop extracts the data from SQL Server and writes it into HDFS
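
Here is a rough sketch of what those steps might look like from the command line. The driver jar location, HDFS paths, table name, and date column are placeholders of my own (your truck table is probably not called TruckPositions), so swap in whatever matches your environment:

# 1. Make a home for the archive in HDFS (Sqoop creates the dated subdirectory itself)
hadoop fs -mkdir /archive

# 2. Drop the Microsoft JDBC driver jar into Sqoop's lib directory (location is an assumption)
cp sqljdbc4.jar /usr/lib/sqoop/lib/

# 3. Pull the rows older than 30 days out of SQL Server and write them into HDFS
sqoop import \
  --connect "jdbc:sqlserver://192.168.138.1:1433;database=AdventureWorks;username=hadoop;password=bigdata" \
  --table TruckPositions \
  --where "PositionDate < DATEADD(day, -30, GETDATE())" \
  --target-dir /archive/truckpositions-2013-06 \
  --num-mappers 1

One run per month keeps each batch in its own directory, which makes selective restores later much less painful; bump up --num-mappers if the table has a primary key Sqoop can split on.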
OK, let's say you want the data back. The trickiest part is getting back only the data you are interested in, not everything you have. You can run out of space in SQL Server by loading all of this data back up, so be careful. First you need to know some information about SQL Server, so run this query on your destination server:
Select CONNECTIONPROPERTY('net_transport') as net_transport
, CONNECTIONPROPERTY('local_tcp_port') as tcp_port
, CONNECTIONPROPERTY('client_net_address') as client_net_address

If it comes back showing something other than TCP for the transport, go into SQL Server Configuration Manager and enable TCP/IP. You will need that port and address information to build the connection string in the command below. I am of course assuming that you have already created a SQL user id called hadoop with a password of bigdata. One note on vocabulary: in Sqoop-speak, pulling data out of SQL Server into HDFS is an import, and pushing it from HDFS back into SQL Server is an export, so bringing the archive back is an export.
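
If that hadoop user does not exist yet, here is a quick sketch of creating it from the command line with sqlcmd. Granting it datareader and datawriter in AdventureWorks is my guess at the minimum it needs for the round trip, so adjust the roles to fit your shop:

sqlcmd -S 192.168.138.1 -Q "CREATE LOGIN hadoop WITH PASSWORD = 'bigdata', CHECK_POLICY = OFF;"
sqlcmd -S 192.168.138.1 -d AdventureWorks -Q "CREATE USER hadoop FOR LOGIN hadoop; EXEC sp_addrolemember 'db_datareader', 'hadoop'; EXEC sp_addrolemember 'db_datawriter', 'hadoop';"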

sqoop export --connect "jdbc:sqlserver://192.168.138.1:1433;database=AdventureWorks;username=hadoop;password=bigdata" --table <TableName> --export-dir <ArchiveDirectory>
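
Two things worth calling out about that command. Sqoop export expects the destination table to already exist in SQL Server, and the way you avoid dragging the entire archive back is to point --export-dir at only the slice you want. Continuing the made-up names from the archive sketch above, restoring a single month into a staging table might look like this:

# Restore one month of the archive into a staging table (table and path are placeholders from the sketch above)
sqoop export \
  --connect "jdbc:sqlserver://192.168.138.1:1433;database=AdventureWorks;username=hadoop;password=bigdata" \
  --table TruckPositionsRestore \
  --export-dir /archive/truckpositions-2013-06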

Assuming you kicked this off from the right path and everything connected, congratulations, you have just used Sqoop!

Yours Always

Ginger Grant

Data aficionado et SQL Raconteur
