Wednesday, September 19, 2012

Create SQL Column with Variable Default

I wanted to create an integer column in a metadata table to prioritize its rows (this is column [execution_order] below), and for ease of use I wanted inserts into the table to default it to a unique value. Although this sounds a lot like an identity column, I wanted it to be nullable and not necessarily unique.

This is what I came up with:

CREATE TABLE [dbo].[metadata] (
[metadata_id] int NOT NULL identity(1,1)
,[project_id] int NOT NULL
,[descr] varchar(100) NOT NULL
,[delete_stmt] varchar(max) NOT NULL
,[execution_order] int NULL
,[insert_date] datetime NOT NULL
,[insert_userid] varchar(50) NOT NULL
,[is_active] bit NOT NULL
,CONSTRAINT pk_dbo_metadata PRIMARY KEY ([metadata_id])
)
GO

ALTER 
TABLE [dbo].[metadata]
ADD CONSTRAINT df_dbo_metadata__insert_date
DEFAULT(GETDATE()) FOR [insert_date]
GO

ALTER 
TABLE [dbo].[metadata]
ADD CONSTRAINT df_dbo_metadata__insert_userid
DEFAULT(SUSER_NAME()) FOR [insert_userid]
GO

ALTER
TABLE [dbo].[metadata]
ADD CONSTRAINT df_dbo_metadata__is_active
DEFAULT(1) FOR [is_active]
GO

ALTER
TABLE [dbo].[metadata]
ADD CONSTRAINT df_dbo_metadata__execution_order
DEFAULT((IDENT_CURRENT('dbo.metadata')) * 100) FOR [execution_order]
GO
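A quick sanity check (with made-up values - the other defaults fill themselves in, and [execution_order] should land on a multiple of 100 driven by the table's current identity value):

INSERT INTO [dbo].[metadata] ([project_id], [descr], [delete_stmt])
VALUES (1, 'purge staging orders', 'DELETE FROM dbo.stage_orders')

SELECT [metadata_id], [execution_order] FROM [dbo].[metadata]

One caveat: IDENT_CURRENT is not transaction-scoped, so under concurrent inserts two rows could pick up the same default value - acceptable here, since the column was never meant to be unique.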

Wednesday, December 28, 2011

Parsing Stored Procedure Result Set

One thing I've always wanted to be able to do is to parse the result set(s) of a stored procedure, so that I can easily create a structure to contain its output. This thread on sqlteam.com details a way to capture the output without knowing the structure ahead of time. This is done by using a linked server that loops back to itself, and then running this statement:

select * into #t from openquery(loopback, 'exec yourSproc')

to select the results into a temp table, the metadata of which can then be parsed as needed. This leads to the 'pie in the sky' idea of a stored proc that, given the name of another stored proc, will output a CREATE TABLE script that mirrors the result set of the second stored proc.
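For reference, the loopback server itself can be set up along these lines (the server name 'loopback' and the provider choice are assumptions - adjust for your environment):

DECLARE @srv sysname
SET @srv = @@SERVERNAME

EXEC sp_addlinkedserver @server = 'loopback', @srvproduct = '', @provider = 'SQLNCLI', @datasrc = @srv
EXEC sp_serveroption 'loopback', 'data access', 'true'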

Removing Trailing Zeroes

I recently encountered the classic problem of removing trailing zeroes from formatted numeric output. After mocking up some complicated code and doing some research, I came across a thread that says to convert the value to float before converting to varchar. It's a simple solution that works correctly.
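For example:

SELECT Formatted = CONVERT(varchar(30), CONVERT(float, 123.45000))
-- '123.45', not '123.45000'

The one thing to watch is precision: float carries roughly 15 significant digits, so extremely long decimals can round during the trip.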

Wednesday, July 13, 2011

View EPs Neatly

Here's some code to view Extended Properties of a table in a neat denormalized report:

-- replace 'TABLENAME' with the name of your table:
DECLARE @ObjId int; SET @ObjId = OBJECT_ID('TABLENAME')

DECLARE @sql varchar(max)
SET @sql =
'SELECT
TableName = T.Name
,ColumnName = C.Name
'

; WITH TableNameBase AS (
SELECT *
FROM SYS.TABLES
WHERE OBJECT_ID = @ObjId
), PropNames0 AS (
SELECT DISTINCT Name
FROM SYS.EXTENDED_PROPERTIES
WHERE Major_Id = (SELECT [OBJECT_ID] FROM TableNameBase)
AND Minor_Id > 0
), PropNames AS (
SELECT Name, NameOrder = ROW_NUMBER() OVER (ORDER BY Name)
FROM PropNames0
)
SELECT * INTO #PropNames FROM PropNames

SELECT @sql = @sql +
' ,[' + P.Name + '] = ISNULL(P' + LTRIM(STR(P.NameOrder)) + '.Value, '''')
'
FROM #PropNames P

SET @sql = @sql +
'FROM SYS.TABLES T
JOIN SYS.COLUMNS C
ON C.OBJECT_ID = T.OBJECT_ID
'

SELECT @sql = @sql +
'LEFT JOIN SYS.EXTENDED_PROPERTIES P' + LTRIM(STR(P.NameOrder)) + '
ON P' + LTRIM(STR(P.NameOrder)) + '.Major_Id = T.OBJECT_ID
AND P' + LTRIM(STR(P.NameOrder)) + '.Minor_Id = C.Column_Id
AND P' + LTRIM(STR(P.NameOrder)) + '.Name = ''' + P.Name + '''
'
FROM #PropNames P

SET @sql = @sql +
'WHERE T.OBJECT_ID = ' + LTRIM(STR(@ObjId))

PRINT @sql
EXEC(@sql)

DROP TABLE #PropNames

Thursday, June 9, 2011

Query to List Tables and their Primary Keys

This query produces a resultset of two columns: every table in the current database, and the corresponding primary key expression if one exists:

; WITH Base AS (
SELECT
TABLE_NAME = QUOTENAME(T.TABLE_SCHEMA) + '.' + QUOTENAME(T.TABLE_NAME)
,C.Column_Name
,C.ORDINAL_POSITION
FROM INFORMATION_SCHEMA.TABLES T
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK
ON PK.TABLE_NAME = T.TABLE_NAME
AND PK.TABLE_SCHEMA = T.TABLE_SCHEMA
AND PK.CONSTRAINT_TYPE = 'PRIMARY KEY'
LEFT JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE C
ON C.CONSTRAINT_NAME = PK.CONSTRAINT_NAME
AND C.TABLE_NAME = PK.TABLE_NAME
AND C.TABLE_SCHEMA = PK.TABLE_SCHEMA
), OrderIt AS (
SELECT
TABLE_NAME
,Column_Name = CONVERT(varchar(max), Column_Name)
,ORDINAL_POSITION
,RecurseVar = ROW_NUMBER() OVER (PARTITION BY TABLE_NAME ORDER BY ORDINAL_POSITION)
FROM Base
), Recursed AS (
SELECT
Table_Name
,Column_Name
,RecurseVar
FROM OrderIt
WHERE RecurseVar = 1
UNION ALL
SELECT
B.Table_Name
,Column_Name = RTRIM(R.Column_Name) + ', ' + B.Column_Name
,B.RecurseVar
FROM Recursed R
JOIN OrderIt B
ON B.Table_Name = R.Table_Name
AND B.RecurseVar = R.RecurseVar + 1
), GetMax AS (
SELECT
Table_Name
,MAX_RecurseVar = MAX(RecurseVar)
FROM Recursed
GROUP BY Table_Name
), Results AS (
SELECT
R.Table_Name
,Primary_Key = R.Column_Name
FROM Recursed R
JOIN GetMax G
ON G.Table_Name = R.Table_Name
AND G.MAX_RecurseVar = R.RecurseVar
)
SELECT * FROM Results
ORDER BY Table_Name

Tuesday, May 17, 2011

XML SQL - Probing Depth of Document

Now let's say that you want to know how many levels deep this relationship goes. Without knowing the answer ahead of time, we have to use a recursive query to determine this:

; WITH XML_Doc AS (
SELECT
id
,parentid
,[level] = 1
,nodetype
,localname
,prev
,text
FROM #tmp2
WHERE LocalName = 'RootNode'
UNION ALL
SELECT
T.id
,T.parentid
,[level] = R.[level] + 1
,T.nodetype
,T.localname
,T.prev
,T.text
FROM XML_Doc R
JOIN #tmp2 T
ON R.Id = T.ParentId
)
SELECT *
INTO #XML_Doc
FROM XML_Doc
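With #XML_Doc populated, the depth of the document is just an aggregate over the [level] column:

SELECT MaxDepth = MAX([level]) FROM #XML_Doc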

XML Parsing with Recursive Table Structure

After you load the XML data into the temp table using the code from yesterday's post, you end up with a hierarchical structure representing the XML data with simple id/parentid columns. So a node at the top level with an id=1 would have nodes at the next level with parentid=1, where id is unique and parentid always references id. There is also a column called 'nodetype' that seems to determine what this node represents. Now just on inspection, I deduced that nodetype=1 are root nodes, 3's are leaf nodes, and 2's are a combination (they have attribute info and children). This code will condense type 3's into their parent:

SELECT
X1.Id
,X1.ParentId
,X1.NodeType
,X1.LocalName
,X1.prev
,text = COALESCE(X1.text, X3.text)
INTO #tmp2
FROM #tmp X1
LEFT JOIN #tmp X3
ON X3.ParentId = X1.Id
AND X3.NodeType = 3
WHERE X1.NodeType <> 3
ORDER BY Id

Monday, May 16, 2011

XML into SQL

I'm working on loading an XML file into SQL right now, and struggling a bit with it. Not so much the coding itself, but figuring out how to represent the hierarchical XML data in a relational SQL table.

I did discover this tidbit for taking advantage of SQL's built-in tools for handling XML:

DECLARE @hdoc int
DECLARE @doc varchar(max)

SELECT @doc = CONVERT(varchar(max), XML_Column) FROM dbo.XML_Table

EXEC sp_xml_preparedocument @hdoc OUTPUT, @doc
PRINT @hdoc

SELECT * INTO #tmp FROM OPENXML (@hdoc, '/RootNodeName', 2)

EXEC sp_xml_removedocument @hdoc

Works well.

Thursday, May 12, 2011

Getting the Real Name of a Temp Table

If you've ever wanted to retrieve the schema for a temp table, you probably noticed that in the TempDb database, the name of your temp table is not quite the same (unless you're using a global temp table - this entry does not apply to those). Say that you create a temp table named "#tmp" in your development database. If you go into TempDb, for example in the Sys.Tables view, you will find your table name, but padded on the right with underscores and a string of digits, to a length of 128 characters. So if you try to look up the schema in TempDb.Information_Schema.Columns using "#tmp", it will fail. To alleviate this problem, I wrote a simple SELECT that returns the true name of that temp table as it is stored in TempDb:

SELECT TempTableName = OBJECT_NAME(OBJECT_ID('TempDb..#tmp'), (SELECT Database_Id FROM SYS.DATABASES WHERE Name = 'TempDb'))
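Equivalently, you can pull the padded name straight from TempDb's catalog:

SELECT TempTableName = Name
FROM TempDb.Sys.Tables
WHERE Object_Id = OBJECT_ID('TempDb..#tmp')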

Monday, August 23, 2010

Getting Row Counts

Have you ever tried to get the row count of a table by a simple "SELECT COUNT(*)" statement, and been stupefied by how long it took to return the result? This article demonstrates how to use sysindexes to speed up that row count:

SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
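On SQL Server 2005 and later, the supported equivalent uses sys.partitions (index_id 0 is the heap, 1 the clustered index); like sysindexes, these counts are maintained by the engine and can be slightly stale:

SELECT [rows] = SUM(P.rows)
FROM sys.partitions P
WHERE P.object_id = OBJECT_ID('table_name')
AND P.index_id IN (0, 1)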

Params for Pivoting

So let's add a second parameter to indicate the format of the return set. The default value will return it as

AttribName AttribValue Row Number
---------- ----------- ----------

and the other value will return the fully-pivoted data set:

AttribName AttribValue_Row1 ... AttribValue_RowN
---------- ---------------- ... ----------------

Thursday, August 19, 2010

How to Pivot

How about we construct a CTE using dynamic SQL, and pivot the attribute values by constructing a series of SELECT...UNION SELECT statements, so that we have one SELECT per attribute?

PivotTable SQL Code

I want to write TSQL code that will pivot any given table. The way I imagine it, is that I implement this code as a stored proc, which accepts a table name as parameter. It returns a result set comprising the perfect pivot of that table. The result set need not be named, but could be inserted into a table. It will have to convert all values to varchar (or char for easier printing; perhaps can have a switch as a parameter).

Structure of return set:

AttribName AttribValue_Row1 ... AttribValue_RowN
---------- ---------------- ... ----------------

So, how do we do this? We can easily pivot the column names into values for 'AttribName' by pulling from Sys.Columns view. The problem then becomes how do we pivot the attribute values into the genericized column?

We need to first pivot the data into this structure:

AttribName AttribValue Row Number
---------- ----------- ----------

Then we can self join the data set as many times as there are rows in the original data. Or not.
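A rough sketch of that first step, generating one SELECT per column from Sys.Columns with dynamic SQL (the table name 'dbo.Test4' and the varchar(100) conversion are placeholders):

DECLARE @sql varchar(max)

SELECT @sql = ISNULL(@sql + ' UNION ALL ', '')
    + 'SELECT AttribName = ''' + Name + ''''
    + ', AttribValue = CONVERT(varchar(100), [' + Name + '])'
    + ', RowNumber = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))'
    + ' FROM dbo.Test4'
FROM Sys.Columns
WHERE Object_Id = OBJECT_ID('dbo.Test4')

PRINT @sql
EXEC(@sql)

Note that without an explicit ORDER BY inside each branch, the row numbers are not guaranteed to line up across attributes - a real implementation would order by a key column.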

Stairway to Database Design

"Designing a database is very easy to do, but hard to do well. There are many databases that have been built in the world that do not meet the needs of the applications they support.

The fundamentals of design are often not taught to developers or DBAs when they are given the task of building tables. Many people do not understand the logic used to decide what columns to use, the data types for each one, constraints between columns, and the keys that should be defined.

Joe Celko, widely viewed as an expert in the SQL language, brings us a series that looks to help you understand the fundamentals of the design process. The articles in the series are linked below..."

Monday, August 9, 2010

Random Rows

Here's an easy way to select a random row from a table:

SELECT TOP 1 * FROM Test4 ORDER BY NEWID()

Tuesday, August 3, 2010

Friday, July 30, 2010

SQL Date Functions Syntax

One thing that drives me a little crazy with T-SQL is the syntax of the date functions:

DATEADD (datepart , number, date )
DATEDIFF ( datepart , startdate , enddate )
DATEPART ( datepart , date )

The designers were consistent in putting the 'datepart' as the first parameter, but for DATEADD, why did they make the base date the last parameter? I think it should be consistent with DATEPART and DATEDIFF, so that the second parameter is always a DATETIME.

Tips to optimize your SQL statements

Interesting article on SQLServerCentral.com by Brian Ellul, 2010/07/29:

"There is a huge difference between writing an SQL statement which works and one which works well and performs well. Sometimes developers are too focused on just writing their SQL to perform the required task, without taking into consideration its performance and most importantly its impact on the SQL Server Instance, i.e. the amount of CPU, IO, and memory resources their SQL is consuming. Thus, they starve other SQL Server processes during the SQL statement execution bringing the whole instance to its knees. This article is intended to provide the SQL developer with a set of easy checks to perform to try and optimize the SQL Statements."

Tuesday, July 27, 2010

Level of measurement

Interesting article in Wikipedia about metrics, and the different sorts:

The "levels of measurement", or scales of measure are expressions that typically refer to the theory of scale types developed by the psychologist Stanley Smith Stevens. Stevens proposed his theory in a 1946 Science article titled "On the theory of scales of measurement"[1]. In this article Stevens claimed that all measurement in science was conducted using four different types of scales that he called "nominal", "ordinal", "interval" and "ratio".

Monday, July 26, 2010

NK Search

I just tested the NK search algorithm on an unindexed table of 9 columns and over 10 million rows, and it ran for 4 minutes before returning the result that no natural key of 4 columns or fewer exists. It ran 255 queries in that time. The biggest performance boost was the use of the sample check, whereby the duplicate count is run on the TOP 1000 rows first, then the full data set.

I'm polishing off the code and should post it by the end of the week.

Thursday, July 15, 2010

New and Improved Natural Key Discovery Algorithm

While preparing for my presentation on code I wrote that discovers natural keys, for the Baltimore SQL Server Users Group meeting on Monday, I discovered a blind spot in the design. I hit the panic button a few days ago, but today I think I've got a new solution that solves a lot of problems.

Wednesday, July 14, 2010

Table Variables and Query Optimizer

In yesterday's post I mentioned that I replaced the temp tables in a stored procedure with table variables. This worked perfectly functionally, but doomed the performance by orders of magnitude. The problem was that the query optimizer was choosing MERGE JOINs instead of HASH JOINs for the table variables. Forcing the hash joins with query hints fixed the problem.

[Ed. note 7/26]Here's another interesting discussion about this. Note the post that reads "And also, because table variables do not have statistics, the query optimizer will often take dubious choices because it believes the row count to be 1 all the time - leading to inappropriate join methods etc..."

Tuesday, July 13, 2010

VS2008 SSRS and Temp Tables in Spds

Ran into a problem using VS2008 to report the results of a SQL Server 2005 stored procedure. Apparently there is an issue with VS2008 and stored procedures that make use of temp tables. Following this article, I replaced all of the temp tables with table variables, and the problem went away.

Code Review, Years Later

I'm reviewing code I wrote a couple of years ago to determine natural keys of raw data sets, in order to do a talk on it at the local SQL Server users' group meeting next week. After an initial reading, I thought that the code worked a certain way (doing breadth-first searches), but it actually works in a hybrid breadth/depth first fashion.

The first thought one might have would be something along the lines of "comment much?", but truthfully, I do, or at least I think I do. The code is found in a SQL Server Central article, explaining much of the background of the problem I was trying to solve, and the code itself has an entire page of comments, and yet I find now that there simply isn't one concise explanation of the resultant design of my algorithm.

There must be a magic number of weeks after code is finished and polished off, when the original intent of the developer is still in his or her brain, but beginning to fade. That is the perfect time to finish commenting the code, because at that time the developer will have to approach the code as an outsider, but still be able to summon those fuzzy thoughts about why something was done a certain way. Does anyone know what that magic number of weeks is?

Sunday, July 11, 2010

Recursive Stored Procedures and Temp Tables

If a stored procedure creates a temp table and then calls itself recursively, will the newly instantiated stored procedure have access to the temp table?
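The answer turns out to be yes: a temp table created by an outer invocation is visible to any procedure it calls, including itself. A quick experiment (the proc name is hypothetical):

CREATE PROCEDURE dbo.usp_Recurse @Depth int
AS
BEGIN
    IF @Depth = 1
        CREATE TABLE #Scratch (Depth int)
    INSERT INTO #Scratch VALUES (@Depth) -- visible even when created by an outer call
    IF @Depth < 3
        EXEC dbo.usp_Recurse @Depth + 1
    ELSE
        SELECT * FROM #Scratch -- one row per recursion level
END
GO
EXEC dbo.usp_Recurse 1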

Friday, July 2, 2010

Temp Table Scope

One of the gotchas of TSQL is temp table scope. The fact that I can create a temp table within a stored procedure, then call a second stored procedure and reference that temp table, scares me a little. It reminds me of the 3G languages, with their global variables and modified private scope and such.

A big enhancement in SQL in this department: table variables. The scope of a table variable is the same as that of a "regular" variable - local to the scope of the procedure. So replacing the use of a temp table in a stored procedure eliminates the concern that a called inner procedure will unexpectedly change the data. However, there are major performance declines when using table variables to hold large amounts of data, as this article explains.

Another link: http://www.mssqltips.com/tip.asp?tip=1556

Wednesday, June 30, 2010

Installing Second Hard Drive

DISCLAIMER: I am not a hardware guy, nor am I a sysadmin type. But I'm trying to do things on the cheap in the DIY spirit. A friend gave me some old hardware, and I'm using it to try and upgrade my old desktop.

I had some misadventures last night trying to install a second IDE drive on my Dell Optiplex 270 (yes, I know it's old - it's just a lab computer). The drive is a 40gb Seagate that I'm planning on using for TempDb. When I first put it in, I didn't change the jumper settings, so the pc thought I had two master IDE drives. This caused BIG problems in BIOS: I could no longer boot, and my computer no longer recognized my original master drive. I changed the jumper settings of the second drive and tried to reboot - no luck. Now the pc was reporting TWO unknown drives (which I suppose is progress). I played around with the BIOS settings, but again, no luck.

Then today, after googling the task, I came across this how-to article that explains that the IDE cable connecting the IDE drives to the motherboard must be plugged in to the drives in a very specific way. The instructions worked perfectly. When I booted up in Windows, I noticed that the drive had two partitions of 20gb each. Wanting to use the entire drive for my new TempDb, I researched how to delete the partitions and reformat.

Tuesday, June 29, 2010

T-SQL Challenge #33

Just finished T-SQL Challenge #33. It was less challenging than some of the others (I finished it in under 20 minutes), but the problem was interesting enough and still required the declarative thinking that is the aim of these challenges.

Also last week R. Barry Young published articles on how to gain this mode of thinking: "There Must Be 15 Ways To Lose Your Cursors... part 1, Introduction".

Thursday, June 24, 2010

SQL Search Tool

I'm not big on 3rd party plug-in tools, and am not much for shilling for companies, but I LOVE Red Gate's SQL Search product. It makes searching your database for text fragments very easy (like when you analyze the impact of changing a column name, for example), and displays the results in an easy-to-use interface that enables you to click and edit the affected objects.

Friday, June 11, 2010

Needle in a Haystack

While working on loading 13m rows into a staging table, a name-parsing routine came across unexpected data, and the load failed. One of the greatest drawbacks to set-based ETL such as T-SQL, versus row- or batch-based ETL tools such as Informatica or SSIS, is that data discrepancies cause failures that are difficult to diagnose. In this case, the load failed with the error:

"Msg 537, Level 16, State 5, Procedure spd_Load_Stg_Names, Line 31
Invalid length parameter passed to the LEFT or SUBSTRING function.
The statement has been terminated."

This is almost useless as far as finding the problem data, as it does not tell me what value, or which row, caused the failure.

Sunday, May 23, 2010

Bulk Insert CSV with Text Qualifiers

One of the biggest shortcomings of SQL Server's BCP/Bulk Insert tool is the inability to specify a text qualifier for a comma-delimited ("CSV") text file on import. Let's take a look at an example of a record from a CSV file that we wish to import into a table:

"John Smith", "123 Main St, Apt 3", "Anytown", "XX", "12345"

So the format of the record is Name, Address, City, State, ZIP. The fields are separated by commas, and encapsulated with double-quotes. The "Address" field demonstrates the need for the text qualifier: the embedded comma that separates the first address line from the second. Without text qualifiers each line of address would appear as a separate field, rather than one discrete value. That in itself would not be a problem, but because some addresses only have one line, the number of delimiters becomes variable, and the import is then faulty.

The problem crystallizes when we try to import a file of that format into SQL Server: the BCP/Bulk Insert utilities do not have an option to specify the text qualifier. That leaves us a couple of kludgy options. We can specify the delimiter as ',', but then every field encapsulated with double quotes will retain those characters in the database, leaving us to update the table afterward to remove them. Or we can specify the delimiter as '","' instead of ','; this is a step in the right direction, but it still leaves a leading double quote on the first field and a trailing one on the last - less cleanup work than the first case, but cleanup nonetheless.

Given the maturity of the SQL Server product, I'm surprised that Microsoft hasn't added this feature. I suppose that is their way of moving developers towards SSIS, which of course does have it.

If we really want to properly import this file using BCP or BULK INSERT without any weird cleanup kludges, we have to use a format file. Here's a good article on MSDN about how to create a format file from scratch using a BCP option. To complete my task, I will take the resultant format file and modify it to account for my comma delimiters and text qualifiers.
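To give an idea of the target, here's a sketch of what the modified non-XML format file might look like for the five-field record above (column names and lengths are my guesses; the extra first field exists only to consume the record's leading double quote):

9.0
6
1  SQLCHAR  0  0    "\""        0  Dummy    ""
2  SQLCHAR  0  50   "\", \""    1  Name     SQL_Latin1_General_CP1_CI_AS
3  SQLCHAR  0  100  "\", \""    2  Address  SQL_Latin1_General_CP1_CI_AS
4  SQLCHAR  0  50   "\", \""    3  City     SQL_Latin1_General_CP1_CI_AS
5  SQLCHAR  0  2    "\", \""    4  State    SQL_Latin1_General_CP1_CI_AS
6  SQLCHAR  0  10   "\"\r\n"    5  ZIP      SQL_Latin1_General_CP1_CI_AS

The terminators match the sample record exactly (quote, comma, space, quote), and field 1 maps to server column 0, which tells BCP to discard it.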

Tuesday, December 8, 2009

TSQL Challenge #18 Blues

I'm working on TSQL Challenge #18, and hitting up against a brick wall. The challenge involves building a calendar for given months of interest (month/year, that is). So I build a CTE that first figures out the first and last day of the month, then the week numbers for those days. Matching the week numbers against a tally table, I can then tell which weeks the days of a particular month will span - those weeks then become rows in the results. I then build a CTE to hold all of my days of interest, with the week number and day-of-month as columns. Using my 'Weeks' CTE as a base, I LEFT JOIN the 'Days' dataset seven times - one for each day of the week.

This is the error I encounter: "The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value." If I limit my day-related tally table to a max of 28 (rather than 31), the error disappears, but of course I don't get any days in my calendar past the 28th. The weird thing is, if I set the tally to a max of 31 and select from the 'Days2' CTE, I don't get an error, and all the days I expect to see are present. My best guess is that the optimizer is free to evaluate the char-to-datetime conversion before the filter that removes invalid day numbers (like day 31 of a 30-day month), so the conversion can blow up on rows that would never appear in the final result.

Thursday, December 3, 2009

Kimball Design Tip #119 Updating the Date Dimension

I've always thought of the Time or Date dimension to be a static one, rather than slowly-changing, until I read this tip from Kimball U: "...there are some attributes you may add to the basic date dimension that will change over time. These include indicators such as IsCurrentDay, IsCurrentMonth, IsPriorDay, IsPriorMonth, and so on. IsCurrentDay obviously must be updated each day."

Creating and automatically updating these attributes of the Date dimension will save time (sorry, no pun intended) for report writers and data mart developers, and also standardize such calculations across the enterprise.

For example, let's say you are responsible for the data warehouse of a catalog company, and you want to calculate the lag time between ordering and shipping, a common metric of customer satisfaction. Furthermore, management decides to give everyone an extra holiday as a bonus for last year's profitability, and so lets everyone take off Groundhog Day. To account for this in your metric, you will want an attribute like "BusinessDayOfYear" as an incrementing integer, so that the first business day in January gets BusinessDayOfYear=1, the second day is 2, and so forth, until the highest number falls on the last business day in December (most likely Dec 30th). If the company now observes Groundhog Day as a holiday, then IsHoliday for Feb 2nd changes from 0 to 1, and the BusinessDayOfYear attribute is recalculated.

Calculating how many business days between ordering and shipping is then trivial. Of course, this does not account for orders placed in December that ship in January, so you might want to have an attribute BusinessDayLifetime, which starts at 1 the first day the company was in business, and increments freely forever.
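With those attributes in place, the lag metric reduces to a subtraction (table and column names here are illustrative):

SELECT
    O.Order_Id
   ,Ship_Lag_Business_Days = DS.BusinessDayLifetime - DD.BusinessDayLifetime
FROM dbo.FactOrders O
JOIN dbo.DimDate DD
    ON DD.DateKey = O.OrderDateKey
JOIN dbo.DimDate DS
    ON DS.DateKey = O.ShipDateKey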

Monday, November 30, 2009

Building a Yahoo Finance input adapter for SQL Server StreamInsight

Here's an excellent SSC article by Johan Åhlén on building CEP (Complex Event Processing) solutions in SQL 2008. "What is StreamInsight and what is it good for? StreamInsight is a platform for developing and deploying applications that handle high-speed streaming data. It could be used for near real-time processing of data from production environments, structured data such as financial information and unstructured data such as Facebook, Twitter and blogs. Multiple sources can be combined and refined before they are being output. This technology is called CEP - complex event processing.

"Combine StreamInsight with data mining, and you can have real-time fraud detection, stock market prediction, you name it... Imagine a real-time Business Intelligence-application where the management can actually see the KPI gauges moving on the screen. I would say that monitoring real-time flow of information is one of the key success factors for tomorrow's Business Intelligence. This is also supported by Ralph Kimball, saying in an interview that Business Intelligence is moving from the strategic level towards operational level processes such as customer support, logistics and sales. At the operational level data must be continuously updated, not just once per day. I would add also that the new generation, that has grown up with Facebook and Twitter, will make it necessary to monitor new sources for successful Business Intelligence."

Monday, November 23, 2009

Data Vault Institute

While reading the comments on an SSC article about fault-tolerant ETL loading, I came across a link to the Data Vault Institute, and what sparked my attention was the comment by Dan Linstedt that "today's data warehouse has become a system of record. Due in part for the need of compliance." This struck me as an important point, given that many of the databases on which I've worked in the past were marketing databases, where bad data was generally disposed of. But in financial, health care, and governmental data warehouses, this approach would be completely unsatisfactory.

Monday, November 9, 2009

Intelligent Keys

I'm engaged in an internal debate about the use of intelligent versus surrogate keys. Typically when this issue arises, the debate centers around whether we want to use a key that is already present in the data (such as SSN in an employee table - this is an intelligent key, also known as a natural key), or if it's better to generate a new meaningless key (such as an auto-incrementing integer - this is a surrogate key).

Now the internal debate isn't over that issue per se - I fall on the side that favors the surrogate key creation. The real debate I'm in is whether it's okay to create an intelligent surrogate key. The most typical surrogate as mentioned previously is an auto-incrementing integer identity - every time a row is inserted into the table, a new key is created by adding one to the max value. These keys have zero business meaning - that's their advantage, that they are decoupled from the business data. However, there are situations where it makes sense to create this value intelligently. One example is creating a time dimension in a data warehouse, whereby the primary key consists of an integer in the form "YYYYMMDD". Microsoft favors this method (as I discovered in their training kit for exam 70-448). A big advantage to this approach is that if you create a clustered index on that intelligent surrogate key, all of your data will be sorted in date order (of course, if you insert into that table by earliest date first, it will also be in that order - unless you add an earlier time period at a later date).
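Generating that style of key is a one-liner, since convert style 112 produces yyyymmdd:

SELECT DateKey = CONVERT(int, CONVERT(char(8), GETDATE(), 112))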

TSQL Challenge #16 Out

I missed it (was out a week ago): http://beyondrelational.com/blogs/tc/archive/2009/11/02/tsql-challenge-16-find-intersections-in-date-ranges-and-concatenate-aggregated-labels.aspx

Tuesday, October 27, 2009

SQL/ETL/BI 101

I'm going to compile a list of articles that provide introductions to topics in SQL and BI.

Introduction to Indexes, by Gail Shaw

Index of MS-SQL Articles on DatabaseJournal.com, a gold mine of introductory articles.

"What a Data Warehouse is Not" by Bill Inmon.

Friday, October 23, 2009

SSIS Data Cleansing Webinar

I watched a good webinar on SSIS Data Cleansing, by Brian Knight and the good folks of Pragmatic Works of Jacksonville FL. "In this session with SQL Server MVP, Brian Knight, you'll learn how to cleanse your data and apply business rules to your data in SSIS. Learn how to solve complex data problems quickly in SSIS using simple techniques in the data flow. Brian will start by showing you the Data Profiling Task. Then he'll show how to use transforms like Fuzzy Grouping to de-duplicate your data and SSIS scripts to satisfy common scenarios he sees in the industry."

Help, my database is corrupt. Now what?

Found a good article on SSC about database corruption by Gail Shaw:

"What to do when the database is corrupt.

  1. Don't panic
  2. Don't detach the database
  3. Don't restart SQL
  4. Don't just run repair.
  5. Run an integrity check
  6. Afterwards, do a root-cause analysis"

Don't have a corrupt database, but still want to play in the sandbox? Click here and here for a couple of ways to corrupt your database. (Don't do this to production data!)

And if all else fails, there's a product called Recovery for SQL Server to help fix your files.

Thursday, October 22, 2009

SQL Server Execution Plans

Here's a link on SSC to a free PDF download of Grant Fritchey's "SQL Server Execution Plans", which is, funnily enough, about how to interpret and act upon an execution plan in SQL Server. "Every day, out in the various SQL Server forums, the same types of questions come up again and again: why is this query running slow? Why isn't my index getting used? And on and on. In order to arrive at the answer you have to ask the same return question in each case: have you looked at the execution plan?" - Grant Fritchey

Tuesday, October 20, 2009

Kimball University: Six Key Decisions for ETL Architectures

SSC posted a link to a good article titled "Kimball University: Six Key Decisions for ETL Architectures", by Bob Becker. Although written for directors/managers, I think developers will also find many of the points useful to understanding the context of an ETL project.

Monday, October 19, 2009

The ‘Subscription List’ SQL Problem

SQL ServerCentral now has a "Stack-Overflow"-type forum for SQL questions, ask.sqlservercentral.com. Phil Factor posted an interesting problem here that asks developers to post solutions to a report on a subscription list.

Celko on Stats

Click here for a really good article by Joe Celko on statistics. He discusses the difference between causation and correlation, and how to compute such things in SQL.

T-SQL Challenge #15

T-SQL Challenge #15 is out. It requires the use of PIVOT; an example can be found on MSDN at the bottom of this entry.
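For anyone who hasn't used PIVOT yet, here's a minimal sketch of the syntax (table and column names are made up for illustration):

```sql
-- Pivot monthly sales rows into one column per month
SELECT Product, [1] AS Jan, [2] AS Feb, [3] AS Mar
FROM (
    SELECT Product, SaleMonth, Amount
    FROM dbo.Sales
) AS Src
PIVOT (
    SUM(Amount) FOR SaleMonth IN ([1], [2], [3])
) AS Pvt;
```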

Tuesday, October 13, 2009

Database Space Capacity Planning

Good article on "Database Space Capacity Planning" by Chad Miller, that uses Powershell to query the space capacity of every SQL Server on your network. From his summary: "The need to monitor and forecast database and volume space is a critical task for database administrators. You can use the process described in this article to create a consolidated space forecasting report, which focuses on a "days remaining" metric. In addition, the use of PowerShell to collect data and load into a SQL table as demonstrated in this article, provides a solution you can easily adapt to many database administration problems."

Saturday, October 3, 2009

Are Row Based DBs the Problem in BI?

"Using a traditional, row-based database to run critical reporting and analytics systems is like entering a delivery truck in a Grand Prix race. It's just not what it was designed to do." - Dan Lahl, director of analytics for Sybase. Interesting article by charlesb2k on the pros and cons of transactional, star-schema, and columnar databases for BI and analytics: "Are Row Based DBs the Problem in BI?"

Wednesday, September 30, 2009

Setting up Change Data Capture in SQL Server 2008

Good article on "Setting up Change Data Capture in SQL Server 2008" by Tim Chapman, TechRepublic. As some of the commentators noted, the article explains how to easily setup CDC in SQL 2008, but provides no help for querying the CDC tables.

How to Calculate Data Warehouse Reliability

Very good article on "How to Calculate Data Warehouse Reliability" by Ashok Nayak. He makes the point that even if every stage in your data warehouse processing has 90%+ reliability, your DW as a whole might have only 70 or 80% reliability. Conventional thinking treats every stage as a link in a chain (where the chain is only as strong as its weakest link), but he concludes that it's much worse than that: the overall reliability of processes P1, P2, and P3, with independent reliabilities R1, R2, and R3, respectively, is not MIN(R1, R2, R3) but R1 * R2 * R3.
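To make the difference concrete: three stages at 90% each give 0.9 * 0.9 * 0.9 = 72.9% overall, not the 90% the weakest-link intuition suggests. A quick illustration:

```sql
DECLARE @R1 float, @R2 float, @R3 float;
SELECT @R1 = 0.90, @R2 = 0.90, @R3 = 0.90;

SELECT
    ChainIntuition    = @R1,               -- "weakest link" thinking says 0.9
    ActualReliability = @R1 * @R2 * @R3;   -- the product says 0.729
```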

Monday, September 28, 2009

T-SQL Challenge #14

T-SQL Challenge #14 is out. I've completed the challenge functionally, now I will tweak my solution to improve performance.

Friday, September 18, 2009

Books by Joe Celko

Excellent article by Wesley Brown that summarizes a bunch of books by the legendary Joe Celko. Don't know Joe? "He has participated on the ANSI X3H2 Database Standards Committee, and helped write the SQL-89 and SQL-92 standards. He is the author of seven books on SQL, and over 800 published articles on SQL and other database topics."

Wednesday, September 9, 2009

SQL Formatting

Here is a good article on the need to develop formatting standards for T-SQL code within an organization.

Thursday, September 3, 2009

TSQL Challenge #12

Click here to view my solution for TSQL Challenge #12, which involves identifying missing dates in a range, and propagating values for those missing dates.

Tuesday, September 1, 2009

TSQL Challenge #13 - Set-based solution

The set-based solution took me 3 or 4 times as long to develop as the cursor-based solution yesterday. I'm not sure if that's the nature of set-based development, my own learning curve, or both (like going from algebra to calculus). The set-based solution took a couple of interesting tricks, which I will hold off publishing until after the challenge deadline.

Monday, August 31, 2009

TSQL Challenge #13

Looking at TSQL Challenge #13, I created a grouping by batch and invoice number:



The most straightforward approach from here is to insert this into a temp table, open a cursor on it, run through the data, and modify the "Set" column so that the batch/invoice numbers are appropriately grouped together. Then join the results of that temp table back to the original data set via batch/invoice number, so that the modified "Set" column is appended. This cursor-based solution is here (the rules require a set-based solution, so I did the cursor-based one just as a baseline to compare against).

Tuesday, July 7, 2009

Avoid Logging When Populating a Table

I recently ran into a brick wall while trying to populate a skinny table with just over 130 million rows: the log filled up way before the table did (I was down to about 40gb free on my local machine). This is a scenario where multiple recursive CTEs precede an INSERT ... SELECT used to create those rows. To get around the problem, I created a stored procedure that outputs the results of those CTEs as a straightforward SELECT, then I redirect that output to a text file via a BCP batch file. I then BCP that file back into my destination table, thereby bypassing the extraneous logging that, in this case, is just a waste of space and time.
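The pattern, sketched with placeholder names (the bcp switches shown are the standard ones: -c character format, -T trusted connection, -b batch size):

```sql
-- Step 1: a proc that emits the rows as a plain SELECT
-- (the CTE here is a trivial stand-in for the real row-building logic)
CREATE PROCEDURE dbo.usp_BuildRows
AS
SET NOCOUNT ON;
WITH Numbers AS (
    SELECT n = 1
    UNION ALL
    SELECT n + 1 FROM Numbers WHERE n < 100000
)
SELECT n FROM Numbers OPTION (MAXRECURSION 0);
GO
-- Step 2 (from a .bat file): dump to disk, then load the file back in:
--   bcp "EXEC MyDb.dbo.usp_BuildRows" queryout rows.dat -c -T -S MyServer
--   bcp MyDb.dbo.SkinnyTable in rows.dat -c -T -S MyServer -b 100000
```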

Thursday, June 18, 2009

Cool Use of CTEs to create number table

Many developers have needed to create a numbers table for matching data to ranges, etc., and this article shows various ways to do it, including multiple uses of CTEs: http://www.projectdmx.com/tsql/tblnumbers.aspx
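One popular variant from that family of techniques, sketched here: stacked (non-recursive) CTEs that cross-join to square the row count at each step:

```sql
;WITH
T1 AS (SELECT n = 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(n)), -- 10 rows
T2 AS (SELECT n = 1 FROM T1 a CROSS JOIN T1 b),   -- 100 rows
T3 AS (SELECT n = 1 FROM T2 a CROSS JOIN T2 b),   -- 10,000 rows
Nums AS (SELECT n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM T3)
SELECT n FROM Nums WHERE n <= 1000;  -- first 1,000 numbers
```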

Poker DW: Stacking the Deck Part 2

Earlier we showed how to construct a list of all possible 5 card poker hands, and verified the result in Wikipedia. What I want to create is a table of those hands, all 2,598,960 of them, with their high- and low-hand ranks also. I want to be able to analyze the odds of one hand winning over another with n cards to come, which means that I'll need to create a temp copy of the deck, pull out the known cards from it, and make a hand with every card left and evaluate those hands against my table of all possible.

Now how will I represent the cards in the table of poker hands? The most obvious way is to create a five-column key consisting of the 5 CardId values that make the hand. Question: does the order of those CardIds matter? Remember what we talked about last entry: the order should not matter. But we have to put them in some sort of order - if we have five columns, say CardId1, CardId2, CardId3, CardId4, and CardId5, each card has to land in one of them. Let's say that we arbitrarily enter the CardIds into the columns in no particular order - how will we now query them? Consider a trivial example of querying for two cards. The WHERE clause of such a query would look like:

WHERE (CardId1 = @CardId1 AND CardId2 = @CardId2)
OR (CardId1 = @CardId2 AND CardId2 = @CardId1)



We have to match every permutation of variables to columns. With three cards:

WHERE (CardId1 = @CardId1 AND CardId2 = @CardId2 AND CardId3 = @CardId3)
OR (CardId1 = @CardId1 AND CardId2 = @CardId3 AND CardId3 = @CardId2)
OR (CardId1 = @CardId2 AND CardId2 = @CardId1 AND CardId3 = @CardId3)
OR (CardId1 = @CardId2 AND CardId2 = @CardId3 AND CardId3 = @CardId1)
OR (CardId1 = @CardId3 AND CardId2 = @CardId1 AND CardId3 = @CardId2)
OR (CardId1 = @CardId3 AND CardId2 = @CardId2 AND CardId3 = @CardId1)


Going back to our research on permutations, the number of permutations of n elements is n!. With three cards that gives 3! = 6 lines, as above; with five cards we're looking at a WHERE clause that is 5! = 120 lines long. Generating that isn't so bad (try not to make a mistake - you'll be matching 5 variable/column pairs per line for 120 lines, for a total of 600 equality checks), but think of how slowly it would perform! And that's just to evaluate one hand - imagine the gears grinding away to find all possible 5 card hands with two cards to come - if you're on the flop, and you want to evaluate your chances to the river, you have "47 choose 2" = 1081 possible outcomes.

What I came up with is a solution using prime numbers that I learned while studying Gödel's incompleteness theorems. We assign every card in the deck a unique prime number; the first card gets 2, the second card 3, all the way up to the last card, which gets prime number 239. Now what happens if we want to look at a two-card hand and match it to a table of all possible two-card hands? If we multiply the prime numbers corresponding to those cards, we will get a number that is unique to those two cards (the primes of any other two cards will result in a different number when multiplied). Obviously it doesn't matter which order the primes are multiplied, so we have just found the perfect primary key for our poker hands table. When we want to evaluate a hand, we multiply the primes corresponding to the cards and match the result to the primary key.

We have an updated Dim_Deck creation script that adds a "PrimeFactor" column. Now I'm working on creating the table of all possible hands.
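A sketch of the idea (the PokerHands table and its HandKey, HighRank, and LowRank columns are placeholder names for the all-possible-hands table described above). Note the key needs a bigint: the largest possible product, 223 * 227 * 229 * 233 * 239, is about 6.5 * 10^11, well past int's range.

```sql
-- Evaluate a 5-card hand: multiply its primes and probe the hands table
DECLARE @Card1 int, @Card2 int, @Card3 int, @Card4 int, @Card5 int,
        @HandKey bigint;
-- ...assume @Card1..@Card5 hold the five known CardIds...

SELECT @HandKey = CONVERT(bigint, d1.PrimeFactor) * d2.PrimeFactor
                * d3.PrimeFactor * d4.PrimeFactor * d5.PrimeFactor
FROM Dim_Deck d1, Dim_Deck d2, Dim_Deck d3, Dim_Deck d4, Dim_Deck d5
WHERE d1.CardId = @Card1 AND d2.CardId = @Card2 AND d3.CardId = @Card3
  AND d4.CardId = @Card4 AND d5.CardId = @Card5;

-- Any permutation of the five cards yields the same key
SELECT HighRank, LowRank
FROM dbo.PokerHands
WHERE HandKey = @HandKey;
```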

Poker DW: Stacking the Deck

Returning to the Poker Data Warehouse, I dove into the text-parsing process after setting up a sketch of an initial database design, with the idea of polishing out the design at a later date. I recently completed the parsing of a sample hand history, and man was it more work than I expected! Fortunately I love writing text-parsing routines (perhaps from early career work in merge-purge duplicate removal), so the 750 line stored procedure was more a labor of love than a tedious chore. Getting into the dirty details of the source data made me think more about the 30,000 ft overview also.


I created a representation of a 52 card deck of cards in the Poker DW, and I started thinking about how to evaluate 5 card poker hands (i.e., determining what a player had at the end of the hand). What I really want is to be able to evaluate the odds of making the best hand on the next card or the river, which would ultimately allow me to judge whether a player made the right decision. This result would be similar to the "% to win" stats that you see on TV.


After I created my deck of cards, I started playing around with representing a hand of 5 cards. How many possible 5 card hands are there? Easy - think of it like this. Take 1 card from the deck, there's 52 cards to choose from. Take another card, there's 51 to choose from. Keep picking until you have 5 cards in your hand, that leaves 52 * 51 * 50 * 49 * 48 = 311,875,200 possible 5 card hands.


The problem with this method is that I'm picking permutations of 5 card hands, rather than combinations. Let's reduce my example above to picking two cards rather than five. According to that math, there are 52 * 51 = 2,652 possible two card hands. Using the card deck created above, this query will return that count, 2652 rows:


;WITH Draw1 AS (
SELECT Card1 = CardId
FROM Dim_Deck
),
Draw2 AS (
SELECT
Card1,
Card2 = CardId
FROM Dim_Deck D2
JOIN Draw1 D1
ON D1.Card1 <> D2.CardId
)
SELECT COUNT(*) FROM Draw2



Note how the second CTE, Draw2, builds on the first, Draw1 (stacked CTEs, not a recursive one). So let's say that I picked the five of clubs first, and the four of hearts second. That is one of the 2,652 possible events. But the reversal of that order is also one of the possible events (picking the four of hearts first, and the five of clubs second). But I really don't care which order the two cards come in (the permutation), I only care about the set of cards that results.


Looking at an even simpler example of a deck of 5 cards, ace to five, how many ways are there to pick two? Here's a simple matrix:



The code above will pick everything except the diagonal that shows pairs:




but what we really want is this:



And in order to get it, we change the "<>" operators to ">":


;WITH Draw1 AS (
SELECT Card1 = CardId
FROM Dim_Deck
),
Draw2 AS (
SELECT
Card1,
Card2 = CardId
FROM Dim_Deck D2
JOIN Draw1 D1
ON D1.Card1 > D2.CardId
)
SELECT COUNT(*) FROM Draw2

and we obtain the correct result, 1326 rows.

Monday, June 15, 2009

StackOverflow

I started answering questions on StackOverflow, and came across a couple of interesting sites. The first is Vyas code page, a small library of useful SQL code. The other is an add-on toolkit for SSMS.

Friday, June 12, 2009

T-SQL to Export Table Structure to a script

Good script for scripting out SQL tables via T-SQL rather than Enterprise Manager/Dev Studio: http://www.sqlservercentral.com/scripts/Miscellaneous/30730/

Wednesday, May 27, 2009

CRM Import - Importing Into Drop-Down Combobox Targets

During the initial try to load accounts into MS-CRM 4.0, my source file failed for columns that had drop-down combobox columns as targets. A little research led to this forum post, which advises using the GUIDs of the target system. I was a little misled by this post, since it actually applies to lookups, not drop-down combobox values. The solution I needed involved converting the source values to their AttributeValue equivalents in the StringMap table of the MS-CRM database.
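The mapping query looked roughly like this (StringMap is the real MS-CRM table; the source table, attribute name, and columns here are placeholders for illustration):

```sql
-- Translate a source picklist label into the AttributeValue CRM expects
SELECT s.AccountName,
       sm.AttributeValue AS IndustryCode
FROM dbo.SourceAccounts s
JOIN dbo.StringMap sm
  ON sm.AttributeName = 'industrycode'
 AND sm.Value = s.IndustryLabel;
```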

Thursday, May 21, 2009

BCP out Temp Tables

Hit a little snag today trying to output, via BCP, the results of a stored procedure that created and dropped a local temp table. The spd would run fine in Query Analyzer, but when run from the DOS prompt I got an error saying the temp table didn't exist. Perhaps the problem is caused by BCP compiling and running the spd on different connections? Anyway, a little googling turned up the workaround of keeping the table in TempDb rather than creating it as a true temp table. That works as long as the table is not created and dropped within the spd that BCP calls.
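In other words, the workaround is a permanent table that just happens to live in TempDb (names here are placeholders):

```sql
-- A 'real' table in tempdb instead of a #temp table, so it is visible
-- across the connections BCP uses (but gone after a server restart)
IF OBJECT_ID('tempdb.dbo.BcpStage') IS NOT NULL
    DROP TABLE tempdb.dbo.BcpStage;
CREATE TABLE tempdb.dbo.BcpStage (Id int, Descr varchar(100));

-- ...populate it, leave it in place, and let BCP select from it:
--   bcp "SELECT Id, Descr FROM tempdb.dbo.BcpStage" queryout stage.txt -c -T
```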

Monday, May 18, 2009

Case Study: Poker DW: Reporting Questions

After reviewing the entities and their relationships, we next want to look at some of the questions we might want to answer from the data in our DW. Some possible questions:
  • Which players will call a big checkraise on the flop with an overpair?
  • What is the actual expected outcome of reraising on the button with a suited connector and bluffing the flop?

Next: ???

Case Study: Poker DW: Entities

Continuing from the introduction, one of the first things we'll have to do in designing our Poker DW (Data Warehouse), is to identify all of the entities of interest. Taking a look at our sample hand history file (ignore the last two lines of HTML code, the hosting company stamped them when I uploaded the file), the first line starts with a sequence number, the format of the game and the datetime stamp of when a particular hand took place. The second line indicates the name of the table, whether it is real or play money, and which seat is the dealer. The next ten or so lines tell us who is in which seat, and what their stack size is (at the beginning of the hand), followed by a line for each player who posts a blind (small, big, and other). So up to now, we've seen such entities as Time, Table, Player, Seat, and Money (stack and pot size).

The next section begins with the header line "*** POCKET CARDS ***". Here we have such information as the hole cards dealt to the player, and all of the preflop action (fold, check, call, or raise). We can identify three more entities here: Betting Stage, Cards and Actions. The next section, "*** FLOP *** [10s 9d 3h]", contains the same entities, but this time we have community cards. At each step in these sections, we can calculate the pot size and stack sizes for each player. Two more sections, "Turn" and "River", provide similar info.

Special consideration should be given to the next section, "*** SHOW DOWN ***", as it will show us exactly what cards other players held during the hand, allowing us to "backfill" that info for earlier rounds of betting. This will help us answer some important questions in the hand histories. The final section, "*** SUMMARY ***", provides info such as the rake, the Hi hand (and Low if this is a hi/lo split game), and the final pot size (which we can use to verify our "running" pot size throughout the hand).

So let's summarize our entities and their relationships. Central to this is Hands. Hands occur at certain Times at a particular Table, which have Seats. Players make Actions with Money based on Cards appearing at a Betting Stage.

Friday, May 15, 2009

Case Study: Data Warehouse for Poker (Intro)

A friend of mine plays a good deal of online poker, and wants to improve his game by studying the hands he has played. I suggested creating a data warehouse from the hand histories held in the text files that the app saves, and using that data to identify winning and losing trends in his game. This entry will serve as the first in a series of the steps we will take to develop this.

Guide to Entries:

Thursday, May 14, 2009

Grouping Datetimes to Identify Sessions

Let's say that I have a record of events that are unique by a datetime "timestamp" marker, and that these events can be grouped into sessions: every event that occurs within a certain period of time of the preceding and/or subsequent events is considered part of the same session. For example, suppose we are examining cars driving past a traffic counter, where each passing car is recorded as an event, and we want to organize the events into sessions so that any two cars passing within 5 seconds of one another belong to the same session (the events "chain" together: if four cars each pass within 2 seconds of the one before, all four car events are part of the same session, even though the first and last cars may be well more than 5 seconds apart).

Now, if I only have the events, how do I create sessions around them?
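One way to sketch it, assuming SQL Server 2012+ for LAG (the table name and the 5-second gap are from the example above): an event starts a new session when it arrives more than 5 seconds after the previous event, and a running sum of those start flags numbers the sessions.

```sql
;WITH Flagged AS (
    SELECT EventTime,
           IsSessionStart = CASE
               WHEN DATEDIFF(second,
                    LAG(EventTime) OVER (ORDER BY EventTime),
                    EventTime) <= 5 THEN 0
               ELSE 1 END   -- first row (LAG is NULL) also starts a session
    FROM dbo.TrafficEvents
)
SELECT EventTime,
       SessionId = SUM(IsSessionStart) OVER (ORDER BY EventTime
                       ROWS UNBOUNDED PRECEDING)
FROM Flagged;
```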

Wednesday, May 6, 2009

Another Version for Calculating Median

Joe Celko published a "history" of calculating the median in SQL, along with a final version that seems to work similarly to mine: http://www.simple-talk.com/sql/t-sql-programming/median-workbench/.

Tuesday, May 5, 2009

Querying Sys.Columns & Sys.Types

If you want to query the structure of a table, including column names and data types, you have to join the catalog views Sys.Columns and Sys.Types. There are some caveats. If any of your columns are defined as nvarchar or as user-defined data types, you must qualify the data coming from Sys.Types. When you add a user-defined data type, it is entered into Sys.Types with a reference to its native data type. In the AdventureWorks database, for example, the data type "AccountNumber" is defined as nvarchar, and shows up in Sys.Types with system_type_id = 231 (which points to "nvarchar", where system_type_id = 231 and user_type_id = 231).


Running the query:


SELECT
Tb.Name,
C.Name,
Tp.Name
FROM Sys.Tables Tb
JOIN Sys.Schemas Sch
ON Sch.Schema_Id = Tb.Schema_Id
JOIN Sys.Columns C
ON C.Object_Id = Tb.Object_Id
JOIN Sys.Types Tp
ON Tp.System_Type_Id = C.System_Type_Id
WHERE Tb.Name = 'Address'
ORDER BY Tb.Name, C.Name, Tp.Name



produces these results:



Weird, huh? Why did 'AddressLine1' show up six times with six different data types? 'AddressLine1' is defined as nvarchar(60), and the join on System_Type_Id matches every type based on nvarchar, including "sysname" (think of "sysname" as Microsoft's built-in user-defined data type).


Take a look at the results of the query below. It shows that, including itself, six different data types are based on nvarchar! That's why 'AddressLine1' showed up six times in the query above.

SELECT Name FROM Sys.Types Tp
WHERE System_Type_Id = 231


Name
-------------------
nvarchar
sysname
AccountNumber
Name
OrderNumber
Phone
(6 row(s) affected)


So let's change our query to use this 'User_Type_Id' column instead:

SELECT
Tb.Name,
C.Name,
Tp.Name
FROM Sys.Tables Tb
JOIN Sys.Schemas Sch
ON Sch.Schema_Id = Tb.Schema_Id
JOIN Sys.Columns C
ON C.Object_Id = Tb.Object_Id
JOIN Sys.Types Tp
ON Tp.User_Type_Id = C.User_Type_Id
WHERE Tb.Name = 'Address'
ORDER BY Tb.Name, C.Name, Tp.Name

This produces the results we want:


Tuesday, April 21, 2009

Data Patterns and the LIKE Clause

The "LIKE" clause of the SQL SELECT statement is one of the more interesting features of SQL's character processing. Most everyone is familiar with the "%" wildcard, which allows queries such as "SELECT LastName FROM Customers WHERE LastName LIKE 'Mc%'". This returns all the customer last names beginning with "Mc" (such as yours truly). But I suspect that many developers are unaware of some of the deeper uses of data pattern expressions.

The list of other wildcard characters related to LIKE includes "_", "[", "-", "]", and "^". The first, "_", is the 'any single character' expression. The "[]" characters act as a single character wildcard, but allow us to specify which characters will match. The WHERE clause above is equivalent to "WHERE LastName LIKE '[M][c]%'". When multiple characters reside within the brackets, the filter acts like an "or" expression. So changing the filter to "WHERE LastName LIKE '[M][c][aeiou]%'" would produce last names beginning with "Mc", then followed by a vowel, then any terminating string.

If you use the "-" with the brackets, you can specify ranges of characters (ranges defined by ASCII order). For example, let's say we want to search for user names that begin with 'jmclain' and are then followed by a single digit number. We would execute "SELECT * FROM Users WHERE UserName LIKE 'jmclain[0-9]'".

Where it gets complicated is when you want to search a column for wildcard literals. For example, let's say that you have a column called 'SalesDescription', and you want to count the rows where the SalesDescription column contains the string "50% off". If you were to execute "SELECT COUNT(*) FROM Sales WHERE SalesDescription LIKE '50% off'", you would mistakenly pull in rows with SalesDescription values such as '50 cents off', since the "%" wildcard represents "any string". To correct this, you have two options. The simplest is to enclose the "%" wildcard with brackets, so that the filter changes to "WHERE SalesDescription LIKE '50[%] off'".

The second option is to make use of the ESCAPE clause of the LIKE operator. What this method lacks in simplicity, it makes up for in robustness (and it isn't really that complicated anyway). To solve the above problem this way, the filter changes to "WHERE SalesDescription LIKE '50!% off' ESCAPE '!'". I prefer the first method because (1) it is simpler, and (2) in order to use the ESCAPE clause, you must be certain that your target expression doesn't contain the escape character. So if a given SalesDescription value in the table were, unbeknownst to you, something like '50% off!!!', the results would become unreliable. Best practice for using ESCAPE is to start with an uncommon character such as "~", and then query your column to make sure it isn't present.

The best use of ESCAPE is when you want to find brackets in your target. Let's say that you wanted to find the SalesDescription value "[50% off]". After checking to ensure that the column values don't contain the tilde ("~") character, you would use the filter "WHERE SalesDescription LIKE '~[50~% off~]' ESCAPE '~'".
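A quick self-contained demo of the two approaches (the sample values are made up):

```sql
DECLARE @t TABLE (SalesDescription varchar(50));
INSERT INTO @t VALUES ('50% off'), ('50 cents off'), ('[50% off]');

-- Bracket method: matches '50% off' only
SELECT SalesDescription FROM @t
WHERE SalesDescription LIKE '50[%] off';

-- ESCAPE method: matches '[50% off]' only
SELECT SalesDescription FROM @t
WHERE SalesDescription LIKE '~[50~% off~]' ESCAPE '~';
```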

Friday, April 17, 2009

Converting Datetime Values to Varchar

Back when I used to code in FoxPro, I had to write custom routines to convert date/time values to various formats. T-SQL provides a great number of formats via the CONVERT function. An article on MSSQLTips lists many (but not all) of them. This code (which can easily be wrapped into a stored procedure) will list all valid format codes with an example of how each appears:


SET NOCOUNT ON
CREATE TABLE #Fmts (FmtNo tinyint, Example varchar(max))
DECLARE @fmt int; SET @fmt = 0
DECLARE @dt datetime; SET @dt = GETDATE()
WHILE @fmt < 132
BEGIN
BEGIN TRY
INSERT INTO #Fmts (FmtNo, Example)
VALUES (@fmt, CONVERT(varchar, @dt, @fmt))
END TRY
BEGIN CATCH
PRINT '@fmt = ' + LTRIM(STR(@fmt)) + ' is not valid.'
END CATCH
SET @fmt = @fmt + 1
END
SELECT FmtNo, Example = LEFT(Example, 30) FROM #Fmts
DROP TABLE #Fmts
SET NOCOUNT OFF



And sample output:





Wednesday, April 15, 2009

Question of the Day

I stumped the gurus on SQLServerCentral.com with another challenging Question of the Day on April 6th, 2009:

Given this code,

DECLARE @val int;
SET @val = -1
CREATE TABLE #empty (val int)

which statement(s) will result in @val being NULL? (select all that apply)

  1. SET @val = NULL
  2. SELECT @val = NULL FROM #empty
  3. SELECT @val = val FROM #empty
  4. SELECT @val = (SELECT val FROM #empty)
As of today, only about 30% of respondents have answered correctly, and judging from the comments in the discussion section, a lot of them gained a deeper understanding of the concept of NULL in SQL. I try to make my questions (and answers) tricky enough not to be obvious, but my goal isn't to trip people up with arcane technicalities - I want to make them aware of certain subtleties of the database engine. This question arose after some unexpected results made me delve into a bit of code and test scenarios just like those in the question.

Monday, April 13, 2009

Collation Sequences

Being a database developer rather than a DBA, I rarely deal with collation types, but I recently came across a situation where I had to dig into the issue. My objective was to produce a breakdown of how many rows contained each ASCII character. I considered two approaches: slice the values into one-character chunks, or loop through all 256 ASCII values and count the number of rows containing each character. The former approach has the advantage of counting not only the rows but the frequency of characters (e.g., "100 rows contain 250 instances of the character 'A'"), but I opted for the second approach since it intuitively seemed faster. If your database was created with a case-insensitive collation (such as "SQL_Latin1_General_CP1_CI_AS"), checking for the character 'A' will also pull in values containing 'a':

USE NorthWind
GO

SELECT DISTINCT City
FROM dbo.Customers
WHERE CHARINDEX('A', City) > 0




To fix, simply add the "COLLATE" clause to the query:

SELECT DISTINCT City
FROM dbo.Customers
WHERE CHARINDEX('A' COLLATE Latin1_General_BIN, City) > 0


Friday, April 3, 2009

"Average" Date

Have you ever tried to calculate the average of a datetime column, and gotten this error:

"Msg 8117, Level 16, State 1, Line 1
Operand data type datetime is invalid for avg operator."

When you first think about it, the error makes sense - are you trying to determine the average month, year, hour, or second? But shouldn't there be such a thing as an "average" date? If we have a bunch of sales orders in a given month, doesn't the "average" date of sale actually mean something?

What if I calculated the MIN value, then calc'd the DATEDIFF between the MIN and all other values? At that point I'd essentially have an integer value, which of course I could average, and then derive the "average" date:

;WITH
CvrtToDate AS (
SELECT
/* "DataValue" is assumed to be varchar(max) */
DataValue = CONVERT(datetime, DataValue)
FROM DataSet
WHERE ISDATE(DataValue) = 1
)
,MinAndMax AS (
SELECT
ValueMin = MIN(DataValue)
,ValueMax = MAX(DataValue)
FROM CvrtToDate
)
,DateDiffs AS (
SELECT
DaysFromMin = DATEDIFF(d, MinAndMax.ValueMin, DataValue)
FROM CvrtToDate, MinAndMax
)
,AvgDaysFromMin AS (
SELECT DataValue = AVG(DaysFromMin)
FROM DateDiffs
)
SELECT
AvgDate = DATEADD(d, AvgDaysFromMin.DataValue, MinAndMax.ValueMin)
FROM MinAndMax, AvgDaysFromMin


This query bears a result that makes sense - we have a date that is between the oldest, and most recent, that is somewhere near the midway point.

A little Google research bears fruit for a much simpler calculation. From "Ask Ben: Averaging Date/Time Stamps In SQL": "The secret to date/time averaging is that date/time stamps can be represented as a floating point number. I have covered this a number of times on this blog so I won't go into too much detail, but the idea is that as a floating point number, the integer part represents the number of days since the beginning of time (as the SQL server defines it) and the decimal part represents the time or rather, the fraction of days. SQL does not make this conversion for you; you have to CAST the date/time stamp as a FLOAT value."

This leads to the revised calculation:

SELECT
ValueMin = MIN(CONVERT(datetime, DataValue))
,ValueMax = MAX(CONVERT(datetime, DataValue))
,ValueAvg = CONVERT(datetime, AVG(CONVERT(float,
CONVERT(datetime, DataValue))))
FROM DataSet
WHERE ISDATE(DataValue) = 1


Not only is this calc far simpler, but it is slightly more precise, as it includes time-of-day in the result.

Calculating Median

I found an article on calculating the median in SQL (http://www.sqlservercentral.com/scripts/Miscellaneous/31775/), and after reading a Wikipedia article on it, I realized that it was incorrect for sample sets of even size. I left a comment with a version of the calc that accounts for this, with a copy of this code:

;WITH
TopHalf AS (
SELECT TOP 50 PERCENT DataValue
FROM DataSet
ORDER BY DataValue ASC
)
,BottomHalf AS (
SELECT TOP 50 PERCENT DataValue
FROM DataSet
ORDER BY DataValue DESC
)
,BottomOfTopHalf AS (
SELECT TOP 1 DataValue
FROM TopHalf
ORDER BY DataValue DESC
)
,TopOfBottomHalf AS (
SELECT TOP 1 DataValue
FROM BottomHalf
ORDER BY DataValue ASC
)
SELECT
Median = (BottomOfTopHalf.DataValue
+ TopOfBottomHalf.DataValue) / 2.0
FROM BottomOfTopHalf, TopOfBottomHalf

Wednesday, April 1, 2009

Function to Insert Commas Into Number String

While looking into some data quality issues, I considered the case of source data arriving as a string of numbers with embedded commas (for example, "1,526,734.56"). Since I didn't have any such data to test with, I decided to create some, and wrote this function in the process: fn_AddCommasToNumberString.

(Followup on 4/23/09): A forum post on SQLServerCentral.com explained a very easy way to do this using varchar conversion:

declare @test float
set @test = 7265342.12
select @test, convert(varchar(20),cast(@test as money),1)