Wednesday, September 19, 2012
Create SQL Column with Variable Default
This is what I came up with:
CREATE TABLE [dbo].[metadata] (
[metadata_id] int NOT NULL identity(1,1)
,[project_id] int NOT NULL
,[descr] varchar(100) NOT NULL
,[delete_stmt] varchar(max) NOT NULL
,[execution_order] int NULL
,[insert_date] datetime NOT NULL
,[insert_userid] varchar(50) NOT NULL
,[is_active] bit NOT NULL
,CONSTRAINT pk_dbo_metadata PRIMARY KEY ([metadata_id])
)
GO
ALTER TABLE [dbo].[metadata]
ADD CONSTRAINT df_dbo_metadata__insert_date
DEFAULT(GETDATE()) FOR [insert_date]
GO
ALTER TABLE [dbo].[metadata]
ADD CONSTRAINT df_dbo_metadata__insert_userid
DEFAULT(SUSER_NAME()) FOR [insert_userid]
GO
ALTER TABLE [dbo].[metadata]
ADD CONSTRAINT df_dbo_metadata__is_active
DEFAULT(1) FOR [is_active]
GO
ALTER TABLE [dbo].[metadata]
ADD CONSTRAINT df_dbo_metadata__execution_order
DEFAULT((IDENT_CURRENT('[db].[dbo].[metadata]')) * 100) FOR [execution_order]
GO
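As a quick sanity check, an insert that supplies only the non-defaulted columns shows the constraints doing their work (the values below are just placeholders):
INSERT INTO [dbo].[metadata] (project_id, descr, delete_stmt)
VALUES (1, 'sample row', 'DELETE FROM dbo.some_table WHERE project_id = 1')
GO
SELECT * FROM [dbo].[metadata]
GO
The insert_date, insert_userid, is_active, and execution_order columns are all filled in by the defaults.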
Wednesday, December 28, 2011
Parsing Stored Procedure Result Set
select * into #t from openquery(loopback, 'exec yourSproc')
This selects the results into a temp table, whose metadata can then be parsed as needed. It leads to the 'pie in the sky' idea of a stored proc that, given the name of another stored proc, will output a CREATE TABLE script that mirrors the result set of the second stored proc.
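For the openquery trick to work, you need a linked server that points back at the local instance. Here's a rough sketch of the setup (the server name 'loopback' and the sproc name are placeholders, and the provider name may differ on your version):
DECLARE @me sysname; SET @me = @@SERVERNAME
EXEC sp_addlinkedserver @server = 'loopback', @srvproduct = '', @provider = 'SQLNCLI', @datasrc = @me
EXEC sp_serveroption 'loopback', 'data access', 'true'
GO
SELECT * INTO #t FROM OPENQUERY(loopback, 'EXEC dbo.yourSproc')
-- now the result set's "shape" is sitting in tempdb's catalog views:
SELECT c.name, TypeName = t.name, c.max_length
FROM tempdb.sys.columns c
JOIN tempdb.sys.types t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('tempdb..#t')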
Removing Trailing Zeroes
Wednesday, July 13, 2011
View EPs Neatly
-- replace 'TABLENAME' with the name of your table:
DECLARE @ObjId int; SET @ObjId = OBJECT_ID('TABLENAME')
DECLARE @sql varchar(max)
SET @sql =
'SELECT
TableName = T.Name
,ColumnName = C.Name
'
; WITH TableNameBase AS (
SELECT *
FROM SYS.TABLES
WHERE OBJECT_ID = @ObjId
), PropNames0 AS (
SELECT DISTINCT Name
FROM SYS.EXTENDED_PROPERTIES
WHERE Major_Id = (SELECT [OBJECT_ID] FROM TableNameBase)
AND Minor_Id > 0
), PropNames AS (
SELECT Name, NameOrder = ROW_NUMBER() OVER (ORDER BY Name)
FROM PropNames0
)
SELECT * INTO #PropNames FROM PropNames
SELECT @sql = @sql +
' ,[' + P.Name + '] = ISNULL(P' + LTRIM(STR(P.NameOrder)) + '.Value, '''')
'
FROM #PropNames P
SET @sql = @sql +
'FROM SYS.TABLES T
JOIN SYS.COLUMNS C
ON C.OBJECT_ID = T.OBJECT_ID
'
SELECT @sql = @sql +
'LEFT JOIN SYS.EXTENDED_PROPERTIES P' + LTRIM(STR(P.NameOrder)) + '
ON P' + LTRIM(STR(P.NameOrder)) + '.Major_Id = T.OBJECT_ID
AND P' + LTRIM(STR(P.NameOrder)) + '.Minor_Id = C.Column_Id
AND P' + LTRIM(STR(P.NameOrder)) + '.Name = ''' + P.Name + '''
'
FROM #PropNames P
SET @sql = @sql +
'WHERE T.OBJECT_ID = ' + LTRIM(STR(@ObjId))
PRINT @sql
EXEC(@sql)
DROP TABLE #PropNames
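If the table doesn't have any extended properties on its columns yet, something like this gives the script above something to display (the property value and column name are just for illustration):
EXEC sp_addextendedproperty
     @name = 'MS_Description'
    ,@value = 'Customer last name'
    ,@level0type = 'SCHEMA', @level0name = 'dbo'
    ,@level1type = 'TABLE',  @level1name = 'TABLENAME'
    ,@level2type = 'COLUMN', @level2name = 'LastName'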
Thursday, June 9, 2011
Query to List Tables and their Primary Keys
; WITH Base AS (
SELECT
TABLE_NAME = QUOTENAME(T.TABLE_SCHEMA) + '.' + QUOTENAME(T.TABLE_NAME)
,C.Column_Name
,C.ORDINAL_POSITION
FROM INFORMATION_SCHEMA.TABLES T
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK
ON PK.TABLE_NAME = T.TABLE_NAME
AND PK.TABLE_SCHEMA = T.TABLE_SCHEMA
AND PK.CONSTRAINT_TYPE = 'PRIMARY KEY'
LEFT JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE C
ON C.CONSTRAINT_NAME = PK.CONSTRAINT_NAME
AND C.TABLE_NAME = PK.TABLE_NAME
AND C.TABLE_SCHEMA = PK.TABLE_SCHEMA
), OrderIt AS (
SELECT
TABLE_NAME
,Column_Name = CONVERT(varchar(max), Column_Name)
,ORDINAL_POSITION
,RecurseVar = ROW_NUMBER() OVER (PARTITION BY TABLE_NAME ORDER BY ORDINAL_POSITION)
FROM Base
), Recursed AS (
SELECT
Table_Name
,Column_Name
,RecurseVar
FROM OrderIt
WHERE RecurseVar = 1
UNION ALL
SELECT
B.Table_Name
,Column_Name = RTRIM(R.Column_Name) + ', ' + B.Column_Name
,B.RecurseVar
FROM Recursed R
JOIN OrderIt B
ON B.Table_Name = R.Table_Name
AND B.RecurseVar = R.RecurseVar + 1
), GetMax AS (
SELECT
Table_Name
,MAX_RecurseVar = MAX(RecurseVar)
FROM Recursed
GROUP BY Table_Name
), Results AS (
SELECT
R.Table_Name
,Primary_Key = R.Column_Name
FROM Recursed R
JOIN GetMax G
ON G.Table_Name = R.Table_Name
AND G.MAX_RecurseVar = R.RecurseVar
)
SELECT * FROM Results
ORDER BY Table_Name
Tuesday, May 17, 2011
XML SQL - Probing Depth of Document
; WITH XML_Doc AS (
SELECT
id
,parentid
,[level] = 1
,nodetype
,localname
,prev
,text
FROM #tmp2
WHERE LocalName = 'RootNode'
UNION ALL
SELECT
T.id
,T.parentid
,[level] = R.[level] + 1
,T.nodetype
,T.localname
,T.prev
,T.text
FROM XML_Doc R
JOIN #tmp2 T
ON R.Id = T.ParentId
)
SELECT *
INTO #XML_Doc
FROM XML_Doc
XML Parsing with Recursive Table Structure
SELECT
X1.Id
,X1.ParentId
,X1.NodeType
,X1.LocalName
,X1.prev
,text = COALESCE(X1.text, X3.text)
INTO #tmp2
FROM #tmp X1
LEFT JOIN #tmp X3
ON X3.ParentId = X1.Id
AND X3.NodeType = 3
WHERE X1.NodeType <> 3
ORDER BY Id
Monday, May 16, 2011
XML into SQL
I did discover this tidbit for taking advantage of SQL's built-in tools for handling XML:
DECLARE @hdoc int
DECLARE @doc varchar(max)
SELECT @doc = CONVERT(varchar(max), XML_Column) FROM dbo.XML_Table
EXEC sp_xml_preparedocument @hdoc OUTPUT, @doc
PRINT @hdoc
SELECT * INTO #tmp FROM OPENXML (@hdoc, '/RootNodeName', 2)
EXEC sp_xml_removedocument @hdoc
Works well.
Thursday, May 12, 2011
Getting the Real Name of a Temp Table
SELECT TempTableName = OBJECT_NAME(OBJECT_ID('TempDb..#tmp'), (SELECT Database_Id FROM SYS.DATABASES WHERE Name = 'TempDb'))
Monday, August 23, 2010
Getting Row Counts
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
Params for Pivoting
AttribName AttribValue Row Number
---------- ----------- ----------
and the other value will return the fully-pivoted data set:
AttribName AttribValue_Row1 ... AttribValue_RowN
---------- ---------------- ... ----------------
Thursday, August 19, 2010
How to Pivot
PivotTable SQL Code
Structure of return set:
AttribName AttribValue_Row1 ... AttribValue_RowN
---------- ---------------- ... ----------------
So, how do we do this? We can easily pivot the column names into values for 'AttribName' by pulling from the Sys.Columns view. The problem then becomes: how do we pivot the attribute values into the genericized columns?
We need to first pivot the data into this structure:
AttribName AttribValue Row Number
---------- ----------- ----------
Then we can self join the data set as many times as there are rows in the original data. Or not.
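Here's a minimal sketch of that first pivot step (the table dbo.SomeTable and its columns Col1/Col2 are placeholders, and the CROSS APPLY ... VALUES construct needs SQL Server 2008 or later):
;WITH Numbered AS (
    SELECT *, RowNum = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM dbo.SomeTable
)
SELECT
     AttribName = CA.AttribName
    ,AttribValue = CA.AttribValue
    ,[Row Number] = N.RowNum
FROM Numbered N
CROSS APPLY (VALUES
     ('Col1', CONVERT(varchar(max), N.Col1))
    ,('Col2', CONVERT(varchar(max), N.Col2))
) CA (AttribName, AttribValue)
From there, the self-join (or a dynamic PIVOT) turns each RowNum into its own AttribValue_RowN column.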
Stairway to Database Design
The fundamentals of design are often not taught to developers or DBAs when they are given the task of building tables. Many people do not understand the logic used to decide what columns to use, the data types for each one, constraints between columns, and the keys that should be defined.
Joe Celko, widely viewed as an expert in the SQL language, brings us a series that looks to help you understand the fundamentals of the design process. The articles in the series are linked below..."
Monday, August 9, 2010
Random Rows
SELECT TOP 1 * FROM Test4 ORDER BY NEWID()
Tuesday, August 3, 2010
Data Profiling, Data Quality
Friday, July 30, 2010
SQL Date Functions Syntax
DATEADD ( datepart , number , date )
DATEDIFF ( datepart , startdate , enddate )
DATEPART ( datepart , date )
The designers were consistent about putting 'datepart' as the first parameter, but for DATEADD, why did they make the base date the last parameter? I think it should be consistent with DATEPART and DATEDIFF, so that the second parameter is always a DATETIME.
Tips to optimize your SQL statements
There is a huge difference between writing an SQL statement which works and one which works well and performs well. Sometimes developers are too focused on just writing their SQL to perform the required task, without taking into consideration its performance and most importantly its impact on the SQL Server Instance, i.e. the amount of CPU, IO, and memory resources their SQL is consuming. Thus, they starve other SQL Server processes during the SQL statement execution bringing the whole instance to its knees. This article is intended to provide the SQL developer with a set of easy checks to perform to try and optimize the SQL Statements."
Tuesday, July 27, 2010
Level of measurement
The "levels of measurement", or scales of measure are expressions that typically refer to the theory of scale types developed by the psychologist Stanley Smith Stevens. Stevens proposed his theory in a 1946 Science article titled "On the theory of scales of measurement"[1]. In this article Stevens claimed that all measurement in science was conducted using four different types of scales that he called "nominal", "ordinal", "interval" and "ratio".
Monday, July 26, 2010
NK Search
I'm polishing off the code and should post it by the end of the week.
Thursday, July 15, 2010
New and Improved Natural Key Discovery Algorithm
Wednesday, July 14, 2010
Table Variables and Query Optimizer
[Ed. note 7/26]Here's another interesting discussion about this. Note the post that reads "And also, because table variables do not have statistics, the query optimizer will often take dubious choices because it believes the row count to be 1 all the time - leading to inappropriate join methods etc..."
Tuesday, July 13, 2010
VS2008 SSRS and Temp Tables in Spds
Code Review, Years Later
The first thought one might have would be something along the lines of "comment much?", but truthfully, I do, or at least I think I do. The code is found in a SQL Server Central article, explaining much of the background of the problem I was trying to solve, and the code itself has an entire page of comments, and yet I find now that there simply isn't one concise explanation of the resultant design of my algorithm.
There must be a magic number of weeks after code is finished and polished off, when the original intent of the developer is still in his or her brain, but beginning to fade. That is the perfect time to finish commenting the code, because at that time the developer will have to approach the code as an outsider, but still be able to summon those fuzzy thoughts about why something was done a certain way. Does anyone know what that magic number of weeks is?
Sunday, July 11, 2010
Recursive Stored Procedures and Temp Tables
Friday, July 2, 2010
Temp Table Scope
A big enhancement in SQL in this department: table variables. The scope of a table variable is the same as that of a "regular" variable - local to the scope of the procedure. So replacing the use of a temp table in a stored procedure eliminates the concern that a called inner procedure will unexpectedly change the data. However, there are major performance declines when using table variables to hold large amounts of data, as this article explains.
Another link: http://www.mssqltips.com/tip.asp?tip=1556
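A bare-bones illustration of the scope difference (procedure and table names are hypothetical):
CREATE PROCEDURE dbo.InnerProc AS
BEGIN
    -- the caller's temp table is visible here and can be changed...
    UPDATE #work SET val = 0
    -- ...but the caller's table variable is completely out of reach
END
GO
CREATE PROCEDURE dbo.OuterProc AS
BEGIN
    CREATE TABLE #work (val int)
    INSERT INTO #work VALUES (1)
    DECLARE @work TABLE (val int)
    INSERT INTO @work VALUES (1)
    EXEC dbo.InnerProc
    SELECT TempTableVal = (SELECT val FROM #work)   -- 0: changed by the inner proc
          ,TableVarVal  = (SELECT val FROM @work)   -- 1: untouched
END
GO
EXEC dbo.OuterProc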
Wednesday, June 30, 2010
Installing Second Hard Drive
Tuesday, June 29, 2010
T-SQL Challenge #33
Also last week R. Barry Young published articles on how to gain this mode of thinking: "There Must Be 15 Ways To Lose Your Cursors... part 1, Introduction".
Thursday, June 24, 2010
SQL Search Tool
Friday, June 11, 2010
Needle in a Haystack
"Msg 537, Level 16, State 5, Procedure spd_Load_Stg_Names, Line 31
Invalid length parameter passed to the LEFT or SUBSTRING function.
The statement has been terminated."
This is almost useless as far as finding the problem data, as it does not tell me what value, or which row, caused the failure.
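One way to hunt for the culprits is to re-run just the length logic of the failing expression as a filter. For example, if the offending line was doing something like LEFT(name, CHARINDEX(',', name) - 1), then this (hypothetical staging table and column) flags the rows that would feed a negative length to LEFT:
SELECT *
FROM dbo.Stg_Names              -- hypothetical staging table
WHERE CHARINDEX(',', name) = 0  -- no comma means CHARINDEX returns 0, so the length parameter becomes -1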
Sunday, May 23, 2010
Bulk Insert CSV with Text Qualifiers
"John Smith", "123 Main St, Apt 3", "Anytown", "XX", "12345"
So the format of the record is Name, Address, City, State, ZIP. The fields are separated by commas, and encapsulated with double-quotes. The "Address" field demonstrates the need for the text qualifier: the embedded comma that separates the first address line from the second. Without text qualifiers each line of address would appear as a separate field, rather than one discrete value. That in itself would not be a problem, but because some addresses only have one line, the number of delimiters becomes variable, and the import is then faulty.
The problem crystallizes when we try to import a file of that format into SQL Server. The BCP/BULK INSERT utilities do not have an option to specify the text qualifier. We have a couple of kludgy options before us. We can specify the delimiter as ','; the problem here is that every field encapsulated with double quotes will retain that character in the database, leaving us to update the table afterward to remove those characters. Or we can specify that the delimiter is '","' instead of ','; this is a step in the right direction, but it will still leave a leading double quote on the first field and a trailing one on the last, leaving us with less cleanup work than in the first case.
Given the maturity of the SQL Server product, I'm surprised that Microsoft hasn't added this feature. I suppose that is their way of moving developers towards SSIS, which of course does have it.
If we really want to properly import this file using BCP or BULK INSERT without any weird cleanup kludges, we have to use a format file. Here's a good article on MSDN about how to create a format file from scratch using a BCP option. To complete my task, I will take the resultant format file and modify it to account for my comma delimiters and text qualifiers.
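For the record layout above, the modified format file would look something along these lines. This is an untested sketch - the field lengths, exact terminators (note the comma-space between the quoted fields in the sample record), and version number all depend on your file and SQL Server version - but the idea is a dummy field that consumes the leading quote, with the quote-comma-quote sequences as terminators:
/* contents of C:\data\names.fmt (hypothetical path) - non-XML format file:
9.0
6
1  SQLCHAR  0  0    "\""       0  Leading_Quote  ""
2  SQLCHAR  0  100  "\", \""   1  Name           SQL_Latin1_General_CP1_CI_AS
3  SQLCHAR  0  200  "\", \""   2  Address        SQL_Latin1_General_CP1_CI_AS
4  SQLCHAR  0  100  "\", \""   3  City           SQL_Latin1_General_CP1_CI_AS
5  SQLCHAR  0  10   "\", \""   4  State          SQL_Latin1_General_CP1_CI_AS
6  SQLCHAR  0  20   "\"\r\n"   5  Zip            SQL_Latin1_General_CP1_CI_AS
*/
BULK INSERT dbo.ImportedNames            -- hypothetical target table
FROM 'C:\data\names.csv'                 -- hypothetical source file
WITH (FORMATFILE = 'C:\data\names.fmt')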
Tuesday, December 8, 2009
TSQL Challenge #18 Blues
This is the error I encounter: "The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value." If I limit my day-related tally table to a max of 28 (rather than 31), the error disappears, but I don't get any days in my calendar past the 28th of course. The weird thing is, if I set the tally to max of 31, and I select from the 'Days2' CTE, I don't get an error, and all the days I expect to see are present. Something strange is going on under the hood of SQL.
Thursday, December 3, 2009
Kimball Design Tip #119 Updating the Date Dimension
Creating and automatically updating these attributes of the Date dimension will save time (sorry, no pun intended), for report writers and data mart developers, and also standardize such calculations across the enterprise. For example, let's say you are responsible for the data warehouse of a catalog company, and you want to calculate lag time between ordering and shipping, a common metric of customer satisfaction. Furthermore, management decides to give everyone an extra holiday as a bonus for company profitability from last year, and so lets everyone take off Groundhog Day. To account for this in your metric, you are going to want to have an attribute like "BusinessDayOfYear" as an incrementing integer, so that the first business day in January gets assigned BusinessDayOfYear=1, the second day is 2, and so forth, until you have the highest number as the last business day in December (which is most likely Dec 30th). If the company now has Groundhog Day as a holiday, then the IsHoliday for Feb 2nd is changed from 0 to 1, and then the BusinessDayOfYear attribute is recalculated.
Calculating how many business days between ordering and shipping is then trivial. Of course, this does not account for orders placed in December that ship in January, so you might want to have an attribute BusinessDayLifetime, which starts at 1 the first day the company was in business, and increments freely forever.
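A sketch of that recalculation, assuming a Dim_Date table with Date_Key, Full_Date, IsWeekday, IsHoliday, and BusinessDayOfYear columns (the names are made up):
;WITH BusinessDays AS (
    SELECT
         Date_Key
        ,NewBusinessDayOfYear = ROW_NUMBER() OVER (PARTITION BY YEAR(Full_Date) ORDER BY Full_Date)
    FROM Dim_Date
    WHERE IsWeekday = 1 AND IsHoliday = 0
)
UPDATE D
SET BusinessDayOfYear = B.NewBusinessDayOfYear
FROM Dim_Date D
JOIN BusinessDays B
    ON B.Date_Key = D.Date_Key
Dropping the PARTITION BY gives you the BusinessDayLifetime flavor that just keeps counting.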
Monday, November 30, 2009
Building a Yahoo Finance input adapter for SQL Server StreamInsight
"Combine StreamInsight with data mining, and you can have real-time fraud detection, stock market prediction, you name it... Imagine a real-time Business Intelligence-application where the management can actually see the KPI gauges moving on the screen. I would say that monitoring real-time flow of information is one of the key success factors for tomorrow's Business Intelligence. This is also supported by Ralph Kimball, saying in an interview that Business Intelligence is moving from the strategic level towards operational level processes such as customer support, logistics and sales. At the operational level data must be continuously updated, not just once per day. I would add also that the new generation, that has grown up with Facebook and Twitter, will make it necessary to monitor new sources for successful Business Intelligence."
Monday, November 23, 2009
Data Vault Institute
Monday, November 16, 2009
TSQL Challenge #17
Monday, November 9, 2009
Intelligent Keys
Now the internal debate isn't over that issue per se - I fall on the side that favors the surrogate key creation. The real debate I'm in is whether it's okay to create an intelligent surrogate key. The most typical surrogate as mentioned previously is an auto-incrementing integer identity - every time a row is inserted into the table, a new key is created by adding one to the max value. These keys have zero business meaning - that's their advantage, that they are decoupled from the business data. However, there are situations where it makes sense to create this value intelligently. One example is creating a time dimension in a data warehouse, whereby the primary key consists of an integer in the form "YYYYMMDD". Microsoft favors this method (as I discovered in their training kit for exam 70-448). A big advantage to this approach is that if you create a clustered index on that intelligent surrogate key, all of your data will be sorted in date order (of course, if you insert into that table by earliest date first, it will also be in that order - unless you add an earlier time period at a later date).
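For what it's worth, deriving that kind of key is a one-liner (style 112 is the ISO yyyymmdd format):
SELECT Date_Key = CONVERT(int, CONVERT(char(8), GETDATE(), 112))   -- e.g. 20091109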
Tuesday, October 27, 2009
SQL/ETL/BI 101
Introduction to Indexes, by Gail Shaw
Index of MS-SQL Articles on DatabaseJournal.com, a gold mine of introductory articles.
"What a Data Warehouse is Not" by Bill Inmon.
Friday, October 23, 2009
SSIS Data Cleansing Webinar
Help, my database is corrupt. Now what?
Found a good article on SSC about database corruption by Gail Shaw:
"What to do when the database is corrupt.
- Don't panic
- Don't detach the database
- Don't restart SQL
- Don't just run repair.
- Run an integrity check
- Afterwards, do a root-cause analysis"
Don't have a corrupt database, but still want to play in the sandbox? Click here and here for a couple of ways to corrupt your database. (Don't do this to production data!)
And if all else fails, there's a product called Recovery for SQL Server to help fix your files.
Thursday, October 22, 2009
SQL Server Execution Plans
Tuesday, October 20, 2009
Kimball University: Six Key Decisions for ETL Architectures
Monday, October 19, 2009
The ‘Subscription List’ SQL Problem
Celko on Stats
T-SQL Challenge #15
Tuesday, October 13, 2009
Database Space Capacity Planning
Saturday, October 3, 2009
Are Row Based DBs the Problem in BI?
Wednesday, September 30, 2009
Setting up Change Data Capture in SQL Server 2008
How to Calculate Data Warehouse Reliability
Monday, September 28, 2009
T-SQL Challenge #14
Friday, September 18, 2009
Books by Joe Celko
Wednesday, September 9, 2009
SQL Formatting
Thursday, September 3, 2009
TSQL Challenge #12
Tuesday, September 1, 2009
TSQL Challenge #13 - Set-based solution
Monday, August 31, 2009
TSQL Challenge #13
The most straightforward approach from here is to insert this into a temp table, open a cursor on it, run through the data, and modify the "Set" column so that the batch/invoice numbers are appropriately grouped together. Then join the results of that temp table back to the original data set via batch/invoice number, so that the modified "Set" column is appended. This cursor-based solution is here (the rules require a set-based solution, so I did the cursor-based one just as a baseline to compare against the set-based solution).
Saturday, August 1, 2009
Tuesday, July 7, 2009
Avoid Logging When Populating a Table
Thursday, June 18, 2009
Cool Use of CTEs to create number table
Poker DW: Stacking the Deck Part 2
Now how will I represent the cards in the table of poker hands? The most obvious way is to create a five-column key consisting of the 5 CardId values that make up that hand. Question: does the order of those CardIds matter? Remember what we talked about last entry: the order should not matter. But we have to put them in some sort of order - if we have five columns, say CardId1, CardId2, CardId3, CardId4, and CardId5, every card is going to have to go into one of them. Let's say that we arbitrarily enter the CardIds into the columns in no particular order - how will we then query them? Take a trivial example of querying for two cards. The WHERE clause of such a query would look like:
WHERE CardId1 = @CardId1 AND CardId2 = @CardId2
OR CardId1 = @CardId2 AND CardId2 = @CardId1
We have to match every permutation of variables to columns. With three cards:
WHERE
CardId1 = @CardId1 AND CardId2 = @CardId2 AND CardId3 = @CardId3
OR CardId1 = @CardId1 AND CardId2 = @CardId3 AND CardId3 = @CardId2 OR CardId1 = @CardId2 AND CardId2 = @CardId1 AND CardId3 = @CardId3 OR CardId1 = @CardId2 AND CardId2 = @CardId3 AND CardId3 = @CardId1 OR CardId1 = @CardId3 AND CardId2 = @CardId1 AND CardId3 = @CardId2 OR CardId1 = @CardId3 AND CardId2 = @CardId2 AND CardId3 = @CardId1
Going back to our research on permutations, the number of permutations of n elements is n!. With five cards we're looking at a WHERE clause that is 5! = 120 lines long. The coding for that is tedious (try not to make a mistake - you'll be matching 5 variable/column pairs per line for 120 lines, for a total of 600 equality checks), and think of how slowly it would perform! And that's just to evaluate one hand - imagine the gears grinding away to find all possible 5 card hands with two cards to come - if you're on the flop and you want to evaluate your chances to the river, you have "47 choose 2" = 1,081 possible outcomes.
What I came up with is a solution using prime numbers that I learned while studying Gödel's incompleteness theorems. We assign every card in the deck a unique prime number; the first card gets 2, the second card 3, all the way up to the last card, which gets prime number 239. Now what happens if we want to look at a two-card hand and match it to a table of all possible two-card hands? If we multiply the prime numbers corresponding to those cards, we will get a number that is unique to those two cards (the primes of any other two cards will result in a different number when multiplied). Obviously it doesn't matter which order the primes are multiplied, so we have just found the perfect primary key for our poker hands table. When we want to evaluate a hand, we multiply the primes corresponding to the cards and match the result to the primary key.
We have an updated Dim_Deck creation script that adds a "PrimeFactor" column to it. Now I'm working on creating the table of all possible hands.
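Here's roughly what I have in mind for building that table (assuming Dim_Deck has CardId and PrimeFactor columns as described; the target table name Fact_Hands is a placeholder, and the key has to be bigint because the five largest primes multiply out past the int range):
SELECT
     HandKey = CONVERT(bigint, D1.PrimeFactor) * D2.PrimeFactor * D3.PrimeFactor * D4.PrimeFactor * D5.PrimeFactor
    ,CardId1 = D1.CardId, CardId2 = D2.CardId, CardId3 = D3.CardId, CardId4 = D4.CardId, CardId5 = D5.CardId
INTO Fact_Hands
FROM Dim_Deck D1
JOIN Dim_Deck D2 ON D2.CardId > D1.CardId
JOIN Dim_Deck D3 ON D3.CardId > D2.CardId
JOIN Dim_Deck D4 ON D4.CardId > D3.CardId
JOIN Dim_Deck D5 ON D5.CardId > D4.CardId
That yields the 2,598,960 distinct five-card hands; evaluating a hand is then just multiplying the five primes of the cards you hold and matching on HandKey.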
Poker DW: Stacking the Deck
I created a representation of a 52 card deck of cards in the Poker DW, and I started thinking about how to evaluate 5 card poker hands (i.e., determining what a player had at the end of the hand). What I really want is to be able to evaluate the odds of making the best hand on the next card or the river, which would ultimately allow me to judge whether a player made the right decision. This result would be similar to the "% to win" stats that you see on TV.
After I created my deck of cards, I started playing around with representing a hand of 5 cards. How many possible 5 card hands are there? Easy - think of it like this. Take 1 card from the deck, there's 52 cards to choose from. Take another card, there's 51 to choose from. Keep picking until you have 5 cards in your hand, that leaves 52 * 51 * 50 * 49 * 48 = 311,875,200 possible 5 card hands.
The problem with this method is that I'm picking permutations of 5 card hands, rather than combinations. Let's reduce my example above to picking two cards rather than five. According to that math, there are 52 * 51 = 2,652 possible two card hands. Using the card deck created above, this query will return that count, 2652 rows:
;WITH Draw1 AS (
SELECT Card1 = CardId
FROM Dim_Deck
),
Draw2 AS (
SELECT
Card1,
Card2 = CardId
FROM Dim_Deck D2
JOIN Draw1 D1
ON D1.Card1 <> D2.CardId
)
SELECT COUNT(*) FROM Draw2
Note how the second CTE, Draw2, builds on Draw1 to model the second draw. So let's say that I picked the five of clubs first, and the four of hearts second. That is one of the 2,652 possible events. But the reversal of that order is also one of the possible events (picking the four of hearts first, and the five of clubs second). But I really don't care which order the two cards come in (the permutation), I only care about the set of cards that results.
Looking at an even simpler example of a deck of 5 cards, ace to five, how many ways are there to pick two? Picture a simple 5x5 matrix of first pick against second pick. The code above will pick every cell except the diagonal that shows pairs, but what we really want is just one triangle of that matrix, so that each pair of cards is counted only once. And in order to get that, we change the "<>" operator to ">":
;WITH Draw1 AS (
SELECT Card1 = CardId
FROM Dim_Deck
),
Draw2 AS (
SELECT
Card1,
Card2 = CardId
FROM Dim_Deck D2
JOIN Draw1 D1
ON D1.Card1 > D2.CardId
)
SELECT COUNT(*) FROM Draw2
and we obtain the correct result, 1326 rows.
Monday, June 15, 2009
StackOverflow
Friday, June 12, 2009
T-SQL to Export Table Structure to a script
Wednesday, May 27, 2009
CRM Import - Importing Into Drop-Down Combobox Targets
Thursday, May 21, 2009
BCP out Temp Tables
Monday, May 18, 2009
Case Study: Poker DW: Reporting Questions
- Which players will call a big checkraise on the flop with an overpair?
- What is the actual expected outcome of reraising on the button with a suited connector and bluffing the flop?
Next: ???
Case Study: Poker DW: Entities
The next section begins with the header line "*** POCKET CARDS ***". Here we have such information as the hole cards dealt to the player, and all of the preflop action (fold, check, call, or raise). We can identify three more entities here: Betting Stage, Cards and Actions. The next section, "*** FLOP *** [10s 9d 3h]", contains the same entities, but this time we have community cards. At each step in these sections, we can calculate the pot size and stack sizes for each player. Two more sections, "Turn" and "River", provide similar info.
Special consideration should be given to the next section, "*** SHOW DOWN ***", as it will show us exactly what cards other players held during the hand, allowing us to "backfill" that info for earlier rounds of betting. This will help us answer some important questions in the hand histories. The final section, "*** SUMMARY ***", provides info such as the rake, the Hi hand (and Low if this is a hi/lo split game), and the final pot size (which we can use to verify our "running" pot size throughout the hand).
So let's summarize our entities and their relationships. Central to this is Hands. Hands occur at certain Times at a particular Table, which have Seats. Players make Actions with Money based on Cards appearing at a Betting Stage.
Friday, May 15, 2009
Case Study: Data Warehouse for Poker (Intro)
Guide to Entries:
- Intro (this entry)
- Identifying Entities
- Reporting Questions
Thursday, May 14, 2009
Grouping Datetimes to Identify Sessions
Now, if I only have the events, how do I create sessions around them?
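One hedged way to do it - assuming SQL Server 2012+ windowing functions, a hypothetical Events(PlayerId, EventTime) table, and a rule that a gap of more than 30 minutes starts a new session:
;WITH Gaps AS (
    SELECT
         PlayerId
        ,EventTime
        ,PrevTime = LAG(EventTime) OVER (PARTITION BY PlayerId ORDER BY EventTime)
    FROM Events
), Flags AS (
    SELECT
         PlayerId
        ,EventTime
        ,IsNewSession = CASE WHEN PrevTime IS NULL
                               OR DATEDIFF(minute, PrevTime, EventTime) > 30
                             THEN 1 ELSE 0 END
    FROM Gaps
)
SELECT
     PlayerId
    ,EventTime
    ,SessionNo = SUM(IsNewSession) OVER (PARTITION BY PlayerId ORDER BY EventTime ROWS UNBOUNDED PRECEDING)
FROM Flags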
Wednesday, May 6, 2009
Another Version for Calculating Median
Tuesday, May 5, 2009
Querying Sys.Columns & Sys.Types
Running the query:
SELECT
Tb.Name,
C.Name,
Tp.Name
FROM Sys.Tables Tb
JOIN Sys.Schemas Sch
ON Sch.Schema_Id = Tb.Schema_Id
JOIN Sys.Columns C
ON C.Object_Id = Tb.Object_Id
JOIN Sys.Types Tp
ON Tp.System_Type_Id = C.System_Type_Id
WHERE Tb.Name = 'Address'
ORDER BY Tb.Name, C.Name, Tp.Name
produces these results:
Weird, huh? Why did 'AddressLine1' show up six times with six different data types? The reason: 'AddressLine1' is defined as nvarchar(60), and the join on System_Type_Id matches every data type that is based on nvarchar - including 'sysname' (think of 'sysname' as Microsoft's built-in user-defined data type).
Take a look at the results of the query below. It shows that, including itself, six different data types are based on nvarchar! That's why 'AddressLine1' showed up six times in the query above.
SELECT Name FROM Sys.Types Tp
WHERE System_Type_Id = 231
Name
-------------------
nvarchar
sysname
AccountNumber
Name
OrderNumber
Phone
(6 row(s) affected)
So let's change our query to use this 'User_Type_Id' column instead:
SELECT
Tb.Name,
C.Name,
Tp.Name
FROM Sys.Tables Tb
JOIN Sys.Schemas Sch
ON Sch.Schema_Id = Tb.Schema_Id
JOIN Sys.Columns C
ON C.Object_Id = Tb.Object_Id
JOIN Sys.Types Tp
ON Tp.User_Type_Id = C.System_Type_Id
WHERE Tb.Name = 'Address'
ORDER BY Tb.Name, C.Name, Tp.Name
Tuesday, April 21, 2009
Data Patterns and the LIKE Clause
The list of other wildcard characters related to LIKE includes "_", "[", "-", "]", and "^". The first, "_", is the 'any single character' expression. The "[]" characters act as a single character wildcard, but allow us to specify which characters will match. The WHERE clause above is equivalent to "WHERE LastName LIKE '[M][c]%'". When multiple characters reside within the brackets, the filter acts like an "or" expression. So changing the filter to "WHERE LastName LIKE '[M][c][aeiou]%'" would produce last names beginning with "Mc", then followed by a vowel, then any terminating string.
If you use the "-" with the brackets, you can specify ranges of characters (ranges defined by ASCII order). For example, let's say we want to search for user names that begin with 'jmclain' and are then followed by a single digit number. We would execute "SELECT * FROM Users WHERE UserName LIKE 'jmclain[0-9]'".
Where it gets complicated is when you want to search a column for wildcard literals. For example, let's say that you have a column called 'SalesDescription', and you want to count the rows where the SalesDescription column contains the string "50% off". If you were to execute "SELECT COUNT(*) FROM Sales WHERE SalesDescription LIKE '50% off'", you would mistakenly pull in rows with SalesDescription values such as '50 cents off', since the "%" wildcard represents "any string". To correct this, you have two options. The simplest is to enclose the "%" wildcard with brackets, so that the filter changes to "WHERE SalesDescription LIKE '50[%] off'".
The second option is to make use of the ESCAPE clause of the LIKE operator. What this method lacks in simplicity, it makes up for in robustness (and it isn't really that complicated anyway). To solve the above problem this way, the filter changes to "WHERE SalesDescription LIKE '50!% off' ESCAPE '!'". I prefer the first method because 1. it is simpler, and 2. in order to use the ESCAPE clause, you must be certain that your target expression doesn't contain the escape character. If a given SalesDescription value in the table were, unbeknownst to you, something like '50% off!!!', the results would start to become unreliable. Best practices for using ESCAPE stipulate starting with an uncommon character such as "~", and then querying your column to make sure it is not present.
The best use of ESCAPE is when you want to find brackets in your target. Let's say that you wanted to find the SalesDescription value "[50% off]". After checking to ensure that the column values don't contain the tilde ("~") character, you would use the filter "WHERE SalesDescription LIKE '~[50~% off~]' ESCAPE '~'".
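A quick sandbox to see all three patterns in action (throwaway temp table, made-up values):
CREATE TABLE #Sales (SalesDescription varchar(100))
INSERT INTO #Sales VALUES ('50% off')
INSERT INTO #Sales VALUES ('50 cents off')
INSERT INTO #Sales VALUES ('[50% off]')
SELECT * FROM #Sales WHERE SalesDescription LIKE '50[%] off'               -- bracket method
SELECT * FROM #Sales WHERE SalesDescription LIKE '50!% off' ESCAPE '!'     -- ESCAPE method
SELECT * FROM #Sales WHERE SalesDescription LIKE '~[50~% off~]' ESCAPE '~' -- brackets as literals
DROP TABLE #Sales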
Friday, April 17, 2009
Converting Datetime Values to Varchar
SET NOCOUNT ON
CREATE TABLE #Fmts (FmtNo tinyint, Example varchar(max))
DECLARE @fmt int; SET @fmt = 0
DECLARE @dt datetime; SET @dt = GETDATE()
WHILE @fmt < 132
BEGIN
BEGIN TRY
INSERT INTO #Fmts (FmtNo, Example)
VALUES (@fmt, CONVERT(varchar, @dt, @fmt))
END TRY
BEGIN CATCH
PRINT '@fmt = ' + LTRIM(STR(@fmt)) + ' is not valid.'
END CATCH
SET @fmt = @fmt + 1
END
SELECT FmtNo, Example = LEFT(Example, 30) FROM #Fmts
DROP TABLE #Fmts
SET NOCOUNT OFF
And sample output:

Wednesday, April 15, 2009
Question of the Day
Given this code,
DECLARE @val int;
SET @val = -1
CREATE TABLE #empty (val int)
which statement(s) will result in @val being NULL? (select all that apply)
- SET @val = NULL
- SELECT @val = NULL FROM #empty
- SELECT @val = val FROM #empty
- SELECT @val = (SELECT val FROM #empty)
Monday, April 13, 2009
Collation Sequences
USE NorthWind
GO
SELECT DISTINCT City
FROM dbo.Customers
WHERE CHARINDEX('A', City) > 0
To fix, simply add the "COLLATE" clause to the query:
SELECT DISTINCT City
FROM dbo.Customers
WHERE CHARINDEX('A' COLLATE Latin1_General_BIN, City) > 0
Friday, April 3, 2009
"Average" Date
"Msg 8117, Level 16, State 1, Line 1
Operand data type datetime is invalid for avg operator."
When you first think about it, the error makes sense - are you trying to determine the average month, year, hour, or second? But shouldn't there be such a thing as an "average" date? If we have a bunch of sales orders in a given month, doesn't the "average" date of sale actually mean something?
What if I calculated the MIN value, then calc'd the DATEDIFF between the MIN and all other values? At that point I'd essentially have an integer value, which of course I could average, and then derive the "average" date:
;WITH
CvrtToDate AS (
SELECT
/* "DataValue" is assumed to be varchar(max) */
DataValue = CONVERT(datetime, DataValue)
FROM DataSet
WHERE ISDATE(DataValue) = 1
)
,MinAndMax AS (
SELECT
ValueMin = MIN(DataValue)
,ValueMax = MAX(DataValue)
FROM CvrtToDate
)
,DateDiffs AS (
SELECT
DaysFromMin = DATEDIFF(d, MinAndMax.ValueMin, DataValue)
FROM CvrtToDate, MinAndMax
)
,AvgDaysFromMin AS (
SELECT DataValue = AVG(DaysFromMin)
FROM DateDiffs
)
SELECT
AvgDate = DATEADD(d, AvgDaysFromMin.DataValue, MinAndMax.ValueMin)
FROM MinAndMax, AvgDaysFromMin
This query yields a result that makes sense - a date between the oldest and the most recent values, somewhere near the midway point.
A little Google research bears fruit for a much simpler calculation. From "Ask Ben: Averaging Date/Time Stamps In SQL": "The secret to date/time averaging is that date/time stamps can be represented as a floating point number. I have covered this a number of times on this blog so I won't go into too much detail, but the idea is that as a floating point number, the integer part represents the number of days since the beginning of time (as the SQL server defines it) and the decimal part represents the time or rather, the fraction of days. SQL does not make this conversion for you; you have to CAST the date/time stamp as a FLOAT value."
This leads to the revised calculation:
SELECT
ValueMin = MIN(CONVERT(datetime, DataValue))
,ValueMax = MAX(CONVERT(datetime, DataValue))
,ValueAvg = CONVERT(datetime, AVG(CONVERT(float,
CONVERT(datetime, DataValue))))
FROM DataSet
WHERE ISDATE(DataValue) = 1
Not only is this calc far simpler, but it is slightly more precise, as it includes time-of-day in the result.
Calculating Median
;WITH
TopHalf AS (
SELECT TOP 50 PERCENT DataValue
FROM DataSet
ORDER BY DataValue ASC
)
,BottomHalf AS (
SELECT TOP 50 PERCENT DataValue
FROM DataSet
ORDER BY DataValue DESC
)
,BottomOfTopHalf AS (
SELECT TOP 1 DataValue
FROM TopHalf
ORDER BY DataValue DESC
)
,TopOfBottomHalf AS (
SELECT TOP 1 DataValue
FROM BottomHalf
ORDER BY DataValue ASC
)
SELECT
Median = (BottomOfTopHalf.DataValue
+ TopOfBottomHalf.DataValue) / 2.0
FROM BottomOfTopHalf, TopOfBottomHalf
Wednesday, April 1, 2009
Function to Insert Commas Into Number String
(Followup on 4/23/09): A forum post on SQLServerCentral.com explained a very easy way to do this using varchar conversion:
declare @test float
set @test = 7265342.12
select @test, convert(varchar(20),cast(@test as money),1)