Tuesday, December 24, 2019

Complicated Comments

This is a simpler version of a post I made 10 years ago.


DECLARE @x int

/*

-- */

SET @x = 1

-- /*

--*/ SET @x = 2

PRINT @x





Question: after running the script above, what will the output be?

  1. 1
  2. 2
  3. Null
  4. An error occurs.

Monday, December 16, 2019

Removing 100% Duplicates

When a table contains duplicates as identified by some business rule (same name/address, etc.), and the rows within each set of duplicates are 100% identical, there is no column you can use to pick a "survivor" for a de-duplication process that deletes all but one of the rows in each set. T-SQL does offer a way to do it anyway: the TOP(n) clause after DELETE. It's kind of unusual, and you should be careful with it. Here's an example:

CREATE TABLE #src (x int, y int, z int)
GO
INSERT #src VALUES (1, 2, 3)
INSERT #src VALUES (1, 2, 3)
INSERT #src VALUES (1, 2, 3)
INSERT #src VALUES (10, 20, 30)
INSERT #src VALUES (100, 200, 300)
GO

SELECT * FROM #src

At this point, #src looks like this:
x   y   z
1   2   3
1   2   3
1   2   3
10  20  30
100 200 300


Now let's remove our duplicates:
DELETE TOP(2)
FROM #src
WHERE x = 1


SELECT * FROM #src

And we see the results:
x   y   z
1   2   3
10  20  30
100 200 300


In the DELETE TOP(n) part of the statement, n = (number of rows in the dupe group) - 1. In our case, the group had 3 identical rows and we removed 2 of them. Which brings up an important point: we used the "x = 1" condition to identify the group, and we knew that the group had exactly 3 rows. So if we generalize the problem, we may want to use dynamic SQL to build a solution:

; WITH DupeGroups AS (
 SELECT
  [x]
  ,[cnt] = COUNT(*)
 FROM #src
 GROUP BY [x]
 HAVING COUNT(*) > 1
)
SELECT
 [sql] = 'DELETE TOP(' + CONVERT(varchar, [cnt] - 1)
  + ') FROM #src WHERE [x] = ' + CONVERT(varchar, [x]) + ';'
FROM DupeGroups


The generated statements can then be executed with sp_executesql (or the older sp_sqlexec) to remove the duplicates.
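
One way to run them (just a sketch): concatenate the rows into a single nvarchar(max) batch in the same pass and hand it to sp_executesql. The #src temp table stays visible because the dynamic SQL runs in the same session.

-- Sketch: build all of the per-group DELETE statements into one batch and run it.
DECLARE @sql nvarchar(max) = N'';

;WITH DupeGroups AS (
 SELECT [x], [cnt] = COUNT(*)
 FROM #src
 GROUP BY [x]
 HAVING COUNT(*) > 1
)
SELECT @sql += N'DELETE TOP(' + CONVERT(nvarchar(10), [cnt] - 1)
 + N') FROM #src WHERE [x] = ' + CONVERT(nvarchar(10), [x]) + N';'
 + NCHAR(13) + NCHAR(10)
FROM DupeGroups;

EXEC sys.sp_executesql @sql;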


Wednesday, June 12, 2019

Default Schema

I'm working at a site where prior developers referenced objects without schemas, because everything is done under 'dbo'. I fell into the same habit, and it led to me creating a table with no explicit schema, which means it was created under the default schema tied to my login. When I later tried to reference the table in code, using the implicit 'dbo' schema, I got an error that it didn't exist, even though it certainly seemed to exist! It took a bit to track down the error, and it was a 'smack my forehead' feeling when I found it. Lesson learned: don't get lazy about explicit schemas.
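
If you want to check for (or head off) the same mistake, a couple of quick statements help; the user and table names below are made up:

-- What schema do unqualified objects land in for me?
SELECT SCHEMA_NAME() AS MyDefaultSchema;

-- One fix is to point the database user at dbo (user name is hypothetical):
ALTER USER [SomeDomain\SomeLogin] WITH DEFAULT_SCHEMA = dbo;

-- The better habit: always schema-qualify, both when creating and referencing.
CREATE TABLE dbo.SomeNewTable (Id int NOT NULL);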

Saturday, May 25, 2019

Multiple UNPIVOTs

This is a great article about using multiple UNPIVOTs simultaneously on a data set, a technique I hadn't seen before.
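
In case it helps future me, here's a minimal sketch of the idea (table and column names are invented): two UNPIVOT clauses chained in one FROM, one rotating the phone columns and one rotating the email columns, with a WHERE clause to pair them up.

-- Hypothetical wide table: one row per person, with paired phone and email columns.
CREATE TABLE #contacts (
 PersonId  int,
 HomePhone varchar(20),
 WorkPhone varchar(20),
 HomeEmail varchar(50),
 WorkEmail varchar(50)
);
INSERT #contacts VALUES (1, '555-0100', '555-0199', 'home1@example.com', 'work1@example.com');

-- Two UNPIVOTs on the same data set. Without a filter this crosses the two
-- rotations (2 phone types x 2 email types = 4 rows per person), so the
-- WHERE clause matches Home with Home and Work with Work.
SELECT PersonId, PhoneType, PhoneNumber, EmailType, EmailAddress
FROM #contacts
UNPIVOT (PhoneNumber FOR PhoneType IN (HomePhone, WorkPhone)) AS up1
UNPIVOT (EmailAddress FOR EmailType IN (HomeEmail, WorkEmail)) AS up2
WHERE REPLACE(PhoneType, 'Phone', '') = REPLACE(EmailType, 'Email', '');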

Wednesday, May 15, 2019

Conversion failed when converting the varchar value '*' to data type int.

Interesting error today: "Conversion failed when converting the varchar value '*' to data type int." I recognized it as a conversion issue: when an int value is converted to a varchar or char target that is too short to hold all of its digits, SQL Server produces the value '*' instead of raising an error. A later process then tried to convert that '*' back to an int, which is where the error came from.
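
A minimal repro of what I believe happened (the column sizes are made up):

DECLARE @i int = 1234567;

-- int converted to a varchar too short to hold all the digits:
-- for int, SQL Server returns '*' instead of raising an error.
SELECT CONVERT(varchar(3), @i) AS TruncatedValue;   -- '*'

-- A later step converting that value back to int then fails with
-- "Conversion failed when converting the varchar value '*' to data type int."
SELECT CONVERT(int, CONVERT(varchar(3), @i));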

Thursday, February 14, 2019

SSMS Slowness


I've had some Hulk-like moments dealing with SSMS's tendency to burn through all available memory, push usage to near 100%, and then freeze when I try to continue working in it. The classic "turn it off and turn it on again" fix works sometimes. I've even threatened to uninstall it (I think it knew I was bluffing). But today, today, I found a silver bullet to kill the beast. This post recommends turning off Auto-recovery. I tried it, and it works. Before, SSMS was using around 2.5 GB of memory; now it's a sweet and cuddly half a gig. Yup, an 80% reduction. Now, you may flinch at turning off Auto-recovery, and in principle I agree. The thing is, when SSMS has completely crashed with it on, it doesn't always recover my most important documents anyway. So, if you're sick of SSMS slowness, consider this.

Monday, February 11, 2019

CREATE TABLE Script for Temp Table

Following up on the earlier queries that interrogated INFORMATION_SCHEMA.TABLES to get metadata on temp tables, this script builds on those to produce a CREATE TABLE statement. I find this useful for capturing the table-typed output of a stored procedure, especially if I need to capture that output in an INSERT..EXEC statement.

-- Assumes a temp table named #tmp already exists in the current session;
-- the script prints a CREATE TABLE statement matching its structure.
DECLARE @sql varchar(max);
SET @sql = 'CREATE TABLE #capture_output (' + CHAR(13) + CHAR(10)

; WITH struct AS (
 SELECT *
 FROM TempDb.INFORMATION_SCHEMA.COLUMNS
 WHERE [TABLE_NAME] = OBJECT_NAME(OBJECT_ID('TempDb..#tmp'),
   (SELECT Database_Id FROM sys.databases WHERE [Name] = 'TempDb'))
)
SELECT
 @sql += CHAR(9) + ',' + QUOTENAME(struct.[COLUMN_NAME]) + ' ' + struct.[DATA_TYPE]
  + ISNULL('(' + CASE struct.[DATA_TYPE]
    WHEN 'varchar' THEN CONVERT(varchar, struct.[CHARACTER_MAXIMUM_LENGTH])
    WHEN 'decimal' THEN CONVERT(varchar, struct.[NUMERIC_PRECISION]) + ', ' + CONVERT(varchar, struct.[NUMERIC_SCALE])
   END
   + ')', '')
  + CHAR(13) + CHAR(10)
FROM struct
ORDER BY struct.[ORDINAL_POSITION]  -- keep the columns in their original order for INSERT..EXEC

-- Strip the comma in front of the first column, then close the parenthesis.
SET @sql = REPLACE(@sql, '(' + CHAR(13) + CHAR(10) + CHAR(9) + ',', '(' + CHAR(13) + CHAR(10) + CHAR(9)) + ')'
PRINT @sql
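
The capture itself is then just an INSERT..EXEC against the table that script creates; a quick sketch with a made-up procedure name:

-- Run the printed CREATE TABLE #capture_output script first, then:
INSERT #capture_output
EXEC dbo.uspSomeReport;   -- placeholder for whatever procedure produced #tmp's shape

SELECT * FROM #capture_output;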

I realized that I don't have a link to a tip on the FOR XML trick for concatenating values over rows into one value. I like this one: https://www.mytecbits.com/microsoft/sql-server/concatenate-multiple-rows-into-single-string
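
For future reference, a minimal sketch of that trick (concatenating database names, just to have something that runs anywhere):

SELECT STUFF((
 SELECT ', ' + d.[name]
 FROM sys.databases d
 ORDER BY d.[name]
 FOR XML PATH('')
), 1, 2, '') AS DatabaseList;

-- Caveat: values containing &, < or > get XML-entitized by FOR XML PATH; the usual
-- workaround is ...FOR XML PATH(''), TYPE).value('.', 'varchar(max)')...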