The excuses my kids use to avoid bed, and how it affects you and your SQL Server

December 24, 2010 12:06 by Matt Whitfield

My kids are very creative. They can think up 101 excuses as to why they haven't gone to sleep yet, why they aren't able to sleep, and why they should get up at crack-sparrow-fart-o'clock and get in bed with me and my wife. When I say get in bed with, I really mean take over and shove.

It occurred to me that the excuses they give aren't dissimilar from some of the errors you can see in SQL Server, so I thought I would write a post on a more relaxed topic...

So, let's hear some of the excuses, their SQL Server equivalents, and the things you should do to avoid the problem:

Kid's excuse: 'I can't get to sleep because of the noise from next door's TV'
SQL Server error: 'There is insufficient system memory in resource pool 'internal' to run this query'
Notes: Ensure that the environment in which the server is running is suitable for the task at hand

Kid's excuse: 'I need dog' (teddy which has been surgically attached since birth)
SQL Server error: 'The procedure was called with an incorrect number of parameters'
Notes: Things have to be just right - make sure you have everything you need to call a procedure, including all the parameters, otherwise things just won't go smoothly

Kid's excuse: 'I've got a bad dream'
SQL Server error: 'Transaction was deadlocked on resources with another process and has been chosen as the deadlock victim. Rerun the transaction'
Notes: Sometimes you just need to try again. Speak nicely to your SQL Server, reassure it, and run the query again

Kid's excuse: 'I can't see'
SQL Server error: 'Assembly is not visible for creating SQL objects. Use ALTER ASSEMBLY to change the assembly visibility.'
Notes: You need to set the visibility of the assembly before you can run DDL statements against it. It may be possible, of course, that you're not meant to be running DDL statements against it...

Kid's excuse: 'I need some medicine because I'm poorly'
SQL Server error: 'BACKUP detected corruption in the database log. Check the errorlog for more information.'
Notes: It's going to be a long night

Happy Christmas everyone!



So when is SqlContext.Pipe available?

December 18, 2010 22:12 by Matt Whitfield

Following on from my last post about returning error messages from the SQL CLR, I thought it would be good to put up just a very short post showing when you can and can't use SqlContext.Pipe:

Stored Procedures: Available
User Defined Functions: Not available
User Defined Aggregates: Not available
User Defined Types: Not available
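To illustrate the one case where it is available, a minimal CLR stored procedure using the Pipe might look like this (a sketch only - the procedure name and message text are illustrative):

```csharp
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void usp_HelloPipe()
    {
        // SqlContext.Pipe is only usable here because this is a stored
        // procedure - in a UDF, UDA or UDT context it is not available
        SqlContext.Pipe.Send("Hello from the CLR!");
    }
}
```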



Returning succinct exception messages from CLR code in SQL Server

December 17, 2010 14:15 by Matt Whitfield

I recently received a question from Dave Ballantyne (blog|twitter) about returning succinct error messages from CLR code.

Let's have a look at a basic example of some CLR code where you might want to throw an exception:

private const string _passwordSalt = "--^p@$$w0rD-sAlT-";

[Microsoft.SqlServer.Server.SqlFunction]
public static SqlString fn_getPasswordHashCode(string password)
{
    using (SHA256Managed hashProvider = new SHA256Managed())
    {
        byte[] hash = hashProvider.ComputeHash(
            Encoding.UTF8.GetBytes(_passwordSalt + password));
        StringBuilder output = new StringBuilder();
        foreach (byte b in hash)
        {
            output.Append(b.ToString("X2"));
        }
        return output.ToString();
    }
}

So, this returns us a hex string that contains the hash of a user-supplied password. However, this code will return the hash of the salt if an empty string is passed, and will return a hash even if the input is NULL. We may well want to avoid those situations, so we could modify the code in order to throw an exception that notifies the user of this fact:

[Microsoft.SqlServer.Server.SqlFunction]
public static SqlString fn_getPasswordHashCode(string password)
{
    if (string.IsNullOrEmpty(password))
    {
        throw new ArgumentException(
            "Supplied password cannot be null or empty.", "password");
    }
    using (SHA256Managed hashProvider = new SHA256Managed())
    {
        byte[] hash = hashProvider.ComputeHash(
            Encoding.UTF8.GetBytes(_passwordSalt + password));
        StringBuilder output = new StringBuilder();
        foreach (byte b in hash)
        {
            output.Append(b.ToString("X2"));
        }
        return output.ToString();
    }
}

This returns us a particularly verbose and ugly error message:

Line 1: Msg 6522, Level 16, State 2:
A .NET Framework error occurred during execution of user-defined routine or aggregate "fn_getPasswordHashCode":
System.ArgumentException: Supplied password cannot be null or empty.
Parameter name: password
System.ArgumentException:
   at UserDefinedFunctions.fn_getPasswordHashCode(String password).

What this is doing is returning a string which tells us the name of the assembly module being executed, as well as giving us the exception's Message and StackTrace members.

My first port of call was to see if deriving from ArgumentException would allow us to control that output. So I created a class 'MyArgumentException':

private class MyArgumentException : ArgumentException
{
    public MyArgumentException(string message, string paramName)
        : base(message, paramName)
    {
    }

    public override string Message
    {
        get
        {
            return null;
        }
    }

    public override string StackTrace
    {
        get
        {
            return null;
        }
    }
}

Once the code was modified to throw MyArgumentException instead of ArgumentException, the error message was as follows:

Line 1: Msg 6522, Level 16, State 2:
A .NET Framework error occurred during execution of user-defined routine or aggregate "fn_getPasswordHashCode":
MyArgumentException:
MyArgumentException: .

So, still pretty ugly - obviously this wasn't the way to go. At this point I was due elsewhere so I emailed Dave back to say I didn't think it was possible, and went on my merry way.

The next day, however, I did some more digging, and found that some people had been using SqlContext.Pipe.ExecuteAndSend(SqlCommand) in order to be able to send better error messages.

So I tried that, in a stored procedure, and it did what I wanted - almost. It returned the error to the client, but also returned the 6522 error afterwards. So I posted a connect item - Allow SQL CLR Exception messages to be passed more succinctly to the client - and Niels Berglund posted that you can wrap the ExecuteAndSend call in a try {} catch {} block to eat the .NET exception - and then the job's done. I let Dave know, and thought 'I'll write a blog post about that'.
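Put together, the workaround looks something like this (a sketch only - the procedure name, parameter and message text are all illustrative, and the command is bound to the context connection):

```csharp
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void usp_CheckPassword(SqlString password)
    {
        if (password.IsNull || password.Value.Length == 0)
        {
            using (SqlConnection connection =
                new SqlConnection("context connection=true"))
            {
                connection.Open();
                using (SqlCommand command = new SqlCommand(
                    "RAISERROR('Supplied password cannot be null or empty.', 16, 1)",
                    connection))
                {
                    try
                    {
                        // Sends the RAISERROR output straight to the client...
                        SqlContext.Pipe.ExecuteAndSend(command);
                    }
                    catch
                    {
                        // ...and eats the .NET exception so that the
                        // verbose 6522 wrapper never follows it
                    }
                }
            }
            return;
        }
        // normal processing would go here
    }
}
```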

It was only while I was thinking about how to write this very post, that I realised that method of returning errors is great for Stored Procedures - but for User Defined Functions the Pipe is not available - meaning you can't ExecuteAndSend, meaning no nice error message goodness.

So, if you want to be able to return nice error messages from the CLR, go vote for that connect item. Then hope.


T-SQL Tuesday #13 - What The Business Wants Is Not What The Business Wants

December 14, 2010 00:00 by Matt Whitfield

This is a really interesting topic chosen by Steve Jones of SQLServerCentral fame, and, for me, is a story of communication skills.

What the business wants can be expressed in a few different ways, by various people. For example, a high availability system might be described as having no downtime, or it might be described as having geographic redundancy. These requirements might be voiced in a few different ways:

'We need a system that is always available for our customers'


'This system is business critical and cannot be unavailable at all'


'We need to use replication to make sure our system is disaster proof'


'The system has to be hosted in the cloud so that if the office network goes down it is still available'

Based on the premise that a little knowledge is dangerous, those statements are listed in order of increasing danger.

What do I mean by that? Well, the sentiment behind each of those statements is:

'We recognise that our systems and the data within them are valuable, and we want to protect them as much as is reasonably practicable given our budget'.

But the statements above start to not only specify the what - but the how. I have seen many, many systems where dubious architectural decisions have been taken at an early stage because 'that was the way Mr. Manager wanted it'.

Is it reasonable for us, not only as DBAs, but as computer scientists, to allow Mr. Manager to specify the how without a reasonable understanding of what was being asked for?

We've all been there - sitting in a meeting room and Mr. Manager comes up with another gem:

'We could use an EAV table to store that kind of information'

Everyone in the room knows that's not the way to go for the scenario at hand, but, strangely, nobody speaks up. Everyone has that slightly sinking feeling at the pit of their stomach, but nobody says a word.

And there, right there, is the failing. Both in Mr. Manager's 'I've just read about technology X and will suggest it every time I feel slightly out of my depth' and, in each of the people who failed to speak.

That sinking feeling doesn't mean 'Oh dear this system is going to be a mess'. That feeling means 'I need to speak up now in order to avoid a steaming pile of rubbish'.

You might not feel that you have the right environment in which to speak. Or maybe you feel like you shouldn't 'embarrass' Mr. Manager. Well, you're wrong. Ok, there are times and places to deliver your message, but the message that needs to be delivered at the initial stage of the project is:

'We need, right now, to focus on the what, and leave the how for later - getting the what right is critically important to our success'.

Sometimes, a picture can paint a thousand words. If you're having trouble, get out a flip chart and write, in big letters:

'We need to avoid this:'

Below that, attach a print-out of this classic project picture:

The project

That should make the point, while not embarrassing anyone, and getting things off to a good-humoured start.



SQL Southampton User Group

December 13, 2010 00:53 by Matt Whitfield

Last week I attended the first SQL Southampton user group meeting, and I have to say it was a hugely enjoyable occasion.

First of all, thanks to Mark Pryce-Mayer (Twitter | Blog), who put a lot of care and effort into setting up the user group - even going as far as extensive 'biscuit research'.

We had a small room at St. Andrew's Church on the Avenue in Southampton, and it worked out really well. I gave the presentation 'CLR Demystified' that I had previously given at PASS Scania and SQL Bits 7, but it was nice to give it to a much smaller audience who could ask questions and engage with me during the presentation. For those of you who haven't seen it - it's a pretty packed presentation - covering everything the CLR has to offer SQL Server in just one session.

What was really interesting was giving that presentation with no projector. I spent the first half bent over my laptop with everyone crowded round - but in the second half I sat down, put the laptop on my knee and presented that way. I felt like Santa at story time - well, it is nearly Christmas!

However, what was really good was to meet some people who I hadn't met before (or, at least, hadn't had the chance to talk to properly) and have a good chat. I had already had the pleasure of meeting Adrian Hills (Twitter | Blog), but it was just a really good opportunity to meet some more people, and I enjoyed having a good chat with Christian Bolton and Andrew Fryer too.

I am really looking forward to the next user group meeting, and I am sure everyone else is too.



SQL Tools for Denali - Get yours here!

December 4, 2010 01:06 by Matt Whitfield

Just a short post this time - but after a hugely enjoyable exercise, all of the Atlantis applications now play nicely with SQL Server 2011 Denali CTP1.

SELECT ProductName, LatestVersionNumber, DownloadLink FROM DenaliCompatibleTools


SQL Everywhere - 2.0.97 - Download
Schema Inspector - 2.0.21 - Download
Data Inspector - 2.0.53 - Download
Data Surf - 1.1.35 - Download
Schema Surf - 1.1.42 - Download

Data Space Analyser has not been updated, as it is (in the main) already compatible with Denali. The object browser will require a re-build, and this will appear in the next week or so...




Finding the age of a person in T-SQL

December 1, 2010 00:28 by Matt Whitfield

Ok, sounds simple, right? How old are you? Easy - it's the difference in years between your date of birth and the current date...

So, expressed in T-SQL, that's:

DECLARE @dob [datetime], @currentdate [datetime]
SET @dob = '19791101'
SET @currentdate = '20101027'
SELECT DATEDIFF(yy, @dob, @currentdate)


Well, no. DATEDIFF will return the difference between the year numbers, but won't consider the lower-order parts of the date. So, someone who was born in November, like me, would show up as being a year older than they are from January onwards. So we need to take into account the lower-order parts of the date. What we need to do is make sure that the day of the year of the current date is greater than or equal to the day of the year of the date of birth. Again, expressed in T-SQL, that's:

SELECT DATEDIFF(yy, @dob, @currentdate) -
       CASE WHEN DATEPART(y, @dob) > DATEPART(y, @currentdate)
            THEN 1
            ELSE 0
            END

I.e. we subtract one from the number that DATEDIFF(yy) returns if the current day of the year is less than the day of the year on which the date of birth falls. Great, job done. Almost.

What about leap years? Let's consider someone who was born on the 1st of March, 1980. Using the code above, they are always a year younger than they actually are on their birthday. Why is this? Because the day of the year for the 1st of March in a leap year is 61, whereas the day of the year for the 1st of March in a non-leap year is 60. Rats.

So we need to delve a little deeper, and consider the actual month and day parts. If the month of the date of birth is greater than the month of the current date, then we subtract a year. If the month of the date of birth is less than the month of the current date, then we're all good, otherwise we need to look at the day of the current month. So that leaves us with:

SELECT DATEDIFF(yy, @dob, @currentdate) -
  CASE WHEN DATEPART(m, @dob) > DATEPART(m, @currentdate)
       THEN 1
       WHEN DATEPART(m, @dob) < DATEPART(m, @currentdate)
       THEN 0
       ELSE CASE WHEN DATEPART(d, @dob) > DATEPART(d, @currentdate)
                 THEN 1
                 ELSE 0
                 END
       END

However, that's a little bit onerous. Another way around it is to subtract the number of years difference from the current date, and then compare the resulting dates. In T-SQL this is then:

SELECT DATEDIFF(yy, @dob, @currentdate) -
  CASE WHEN @dob > DATEADD(yy, DATEDIFF(yy, @dob, @currentdate) * -1, @currentdate)
       THEN 1 ELSE 0 END
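To sanity-check the leap-year case, we can plug in the 1st-of-March example (the dates here are just illustrative):

```sql
DECLARE @dob [datetime], @currentdate [datetime]
SET @dob = '19800301'
SET @currentdate = '20100301'   -- their 30th birthday

SELECT DATEDIFF(yy, @dob, @currentdate) -
  CASE WHEN @dob > DATEADD(yy, DATEDIFF(yy, @dob, @currentdate) * -1, @currentdate)
       THEN 1 ELSE 0 END
-- returns 30; with @currentdate = '20100228' it returns 29
```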

So there it is, reliable logic with which to determine exactly how old someone is in SQL Server. So, we'd want to put that in a scalar function, right? NO!

Scalar functions just suck performance out of any query. CLR Scalar functions are much more performant. How would this look in CLR code?

[Microsoft.SqlServer.Server.SqlFunction]
public static int fn_GetAge(DateTime dateOfBirth, DateTime currentDate)
{
    int yearsDifference = currentDate.Year - dateOfBirth.Year;
    return yearsDifference -
           ((dateOfBirth > currentDate.AddYears(yearsDifference * -1)) ? 1 : 0);
}

Simple, succinct, performant. That's the way to do it.
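Once the assembly is catalogued, binding and calling it is just like any other scalar function (a sketch - the assembly name [MyAssembly] is illustrative; the class name follows the UserDefinedFunctions convention seen earlier):

```sql
CREATE FUNCTION [dbo].[fn_GetAge] (@dateOfBirth [datetime], @currentDate [datetime])
RETURNS [int]
AS EXTERNAL NAME [MyAssembly].[UserDefinedFunctions].[fn_GetAge];
GO

SELECT [dbo].[fn_GetAge]('19791101', '20101027');   -- returns 30
```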



The IN and NOT IN constraints and NULL values

November 24, 2010 01:28 by Matt Whitfield

Ok, so the IN clause, it's a simple way of saying 'I want a value that is also present in the result of this single column sub-query'.

But it has hidden complexities - let's have a look at those.

Consider the following test tables:

DECLARE @testData TABLE (value [int] NULL)
INSERT INTO @testData ([value]) VALUES (1)
INSERT INTO @testData ([value]) VALUES (2)
INSERT INTO @testData ([value]) VALUES (3)
INSERT INTO @testData ([value]) VALUES (NULL)

DECLARE @testMatchValues TABLE (value [int] NULL)
INSERT INTO @testMatchValues ([value]) VALUES (1)
INSERT INTO @testMatchValues ([value]) VALUES (2)
INSERT INTO @testMatchValues ([value]) VALUES (5)

So, if we run the query

SELECT * FROM @testData [td]
  WHERE [value] IN (SELECT [tmv].[value] FROM @testMatchValues [tmv])

Then we expect to receive rows containing the values 1 and 2, because those are the intersection of the two test tables. Consequently, we also expect the same result when we reverse the tables:

SELECT * FROM @testMatchValues [tmv]
  WHERE [value] IN (SELECT [td].[value] FROM @testData [td])

And this is what we get, from both queries.

So, when we run a query with NOT IN, we expect to get the other rows that do not form part of the intersection. For example, with the following query:

SELECT * FROM @testData [td]
  WHERE [value] NOT IN (SELECT [tmv].[value] FROM @testMatchValues [tmv])

We expect to receive 3 and NULL, right? Well, that depends on the setting of ANSI_NULLS. If ANSI_NULLS is ON, then we get only the value 3.

Why is this?

It's because the NULL value is not equal to any row from @testMatchValues - but, at the same time, it is not different from any row from @testMatchValues.


Let's look at some simpler SQL:

DECLARE @test [int]
SELECT CASE WHEN @test = 3 THEN 1 ELSE 0 END AS TestEqualsThree,
       CASE WHEN @test <> 3 THEN 1 ELSE 0 END AS TestDoesNotEqualThree

In this SQL, the value of @test is NULL, because it is not assigned. And both values are returned as 0 - meaning that @test = 3 is false, and @test <> 3 is also false. Basically, both comparisons result in UNKNOWN, which is neither true nor false - but when evaluated in a CASE expression or WHERE clause, UNKNOWN does not count as a match.

We can also see the same effect in reverse - so consider the SQL:

SELECT * FROM @testMatchValues [tmv]
  WHERE [value] NOT IN (SELECT [td].[value] FROM @testData [td])

We might reasonably expect this to return us a single row with the value 5 - but it returns us no rows at all. Again, why?

Well, let's simplify it a bit. Consider this SQL:

SELECT 'Present' WHERE 5 NOT IN (1, 2, 3, NULL)

This is effectively replicating the match on the value 5 from @testMatchValues in the query above.

This query may also be written as:

SELECT 'Present' WHERE 5 <> 1 AND 5 <> 2 AND 5 <> 3 AND 5 <> NULL

And there we have it - we saw above that @test <> 3 would not count as a match - and here, in the same way, 5 <> NULL will not count as a match, because UNKNOWN is not true.

What have we learnt?

That any SQL using NOT IN where the list of values contains NULL will never return anything at all, because value <> NULL will never be a match.
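If NULLs might be present, one way around this is to exclude them in the sub-query - for example, against the test tables above (a sketch):

```sql
SELECT * FROM @testMatchValues [tmv]
  WHERE [value] NOT IN (SELECT [td].[value] FROM @testData [td]
                         WHERE [td].[value] IS NOT NULL)
```

With the NULL filtered out, 5 is now different from every remaining value, so this returns the row with the value 5 - the result we originally expected.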



IF...THEN in a SQL Server SELECT Clause

November 19, 2010 21:45 by Matt Whitfield

This is a commonly asked question: 'How do I do the equivalent of an IF...THEN in a SELECT Clause?'

The answer is fairly commonly given that the Transact-SQL CASE expression is the way to achieve this. So, for example, we might want to achieve:

SELECT (IF [name] LIKE '%ID' THEN 'Yes' ELSE 'No') AS ColumnNameEndsWithID
  FROM [sys].[columns];

This is done using the case expression:

SELECT (CASE WHEN [name] LIKE '%ID' THEN 'Yes' ELSE 'No' END) AS ColumnNameEndsWithID
  FROM [sys].[columns];

So the syntax here is fairly close to how we would express it in natural language - and will look fairly familiar to anyone who has used the ternary operator in C, C++ or C#.

However, the CASE expression can also handle switch style syntax. So, for example, we might want to return the letter 'A', 'B', 'C' or 'D' depending on whether a column's value is 0, 1, 2 or 3.

This would look like the following:

SELECT TOP 10 CASE [integerValue]
              WHEN 0 THEN 'A'
              WHEN 1 THEN 'B'
              WHEN 2 THEN 'C'
              WHEN 3 THEN 'D'
              END AS [characterValue]
FROM   [dbo].[myTable];

Excellent - we have the basics of a switch syntax - although we are limited to expressions and not statements here - i.e. we can return different values based on the input value, but we cannot perform different operations.

This works well when the input value to the CASE statement is deterministic. What do we mean by deterministic? A deterministic function always returns the same value for the same inputs. So, looking at the value of the column at a given point in time is deterministic. A non-deterministic function might be something like GETDATE() - which returns a different value each time it is called.

So how does CASE cause us problems with non-deterministic input? Consider the following SQL:

SELECT TOP 10 CASE ABS(CONVERT (INT, RAND() * object_id)) % 4
              WHEN 0 THEN 'A'
              WHEN 1 THEN 'B'
              WHEN 2 THEN 'C'
              WHEN 3 THEN 'D'
              END
FROM   [sys].[columns];

What do we expect this to do? We expect it to return us a random character, 'A', 'B', 'C' or 'D' for each of the 10 rows we're selecting. What happens when we run that? Well, you'll get a random result set, but in the one I got, some of the rows came back as NULL.


Hang on - NULL? Why is that there? Ok, so we're asking for a random positive value, then taking modulo 4 - which will always be 0, 1, 2 or 3. So how come our CASE statement isn't matching on any of those branches? To answer that question, we have to look at how the query is executed - or how it is expanded. I know I was surprised when I saw this in a presentation given by Peter Larsson, but the actual statement you're running is more akin to:

SELECT TOP 10 CASE WHEN ABS(CONVERT (INT, RAND() * OBJECT_ID)) % 4 = 0 THEN 'A' ELSE
                CASE WHEN ABS(CONVERT (INT, RAND() * OBJECT_ID)) % 4 = 1 THEN 'B' ELSE
                  CASE WHEN ABS(CONVERT (INT, RAND() * OBJECT_ID)) % 4 = 2 THEN 'C' ELSE
                    CASE WHEN ABS(CONVERT (INT, RAND() * OBJECT_ID)) % 4 = 3 THEN 'D' END
                  END
                END
              END
FROM   [sys].[columns];

Oh. So, because the expression on which we're matching is non-deterministic, we are evaluating the expression on each branch of the CASE statement, and coming up with a different value each time. That would explain how we could fail to match on any of the branches - seeing as the value tested against each branch is different from the last value that was tested.

You can see the same effect when using non-deterministic scalar functions too. The moral of the story? Be careful when using CASE with non-deterministic expressions...
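If you do need to branch on a non-deterministic value, one approach is to materialise the value first - for example into a temporary table - so that every branch tests the same stored number (a sketch; the #values table name is illustrative):

```sql
SELECT ABS(CONVERT (INT, RAND() * object_id)) % 4 AS [r]
  INTO #values
  FROM [sys].[columns];

SELECT TOP 10 CASE [r]
              WHEN 0 THEN 'A'
              WHEN 1 THEN 'B'
              WHEN 2 THEN 'C'
              WHEN 3 THEN 'D'
              END AS [characterValue]
  FROM #values;
```

Because [r] is now a plain column, the expanded searched CASE compares the same value on every branch, and the NULLs disappear.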




November 9, 2010 15:32 by Matt Whitfield

So, we all know we can UPDATE and DELETE data in tables - but what happens when more than one table is involved?

We know we can use JOIN syntax to query data from multiple tables in SELECT statements, and that it's easy to pipe the results of a SELECT statement to an INSERT statement.

However, how do we use JOIN with UPDATE and DELETE? Let's have a look. We'll start off with a couple of simple table variables holding a couple of rows to demonstrate...

DECLARE @data TABLE (i [int], j [int])
INSERT INTO @data ([i], [j]) VALUES (1, 10)
INSERT INTO @data ([i], [j]) VALUES (2, 20)
INSERT INTO @data ([i], [j]) VALUES (3, 30)
INSERT INTO @data ([i], [j]) VALUES (4, 40)

DECLARE @keys TABLE (i [int], j [int])
INSERT INTO @keys ([i], [j]) VALUES (1, 100)
INSERT INTO @keys ([i], [j]) VALUES (2, 210)

So, let's say we want to delete from @data where there are matching rows in @keys... We could use:

DELETE FROM @data WHERE [i] IN (SELECT [i] FROM @keys)

But, this would get complicated. Say we wanted to only delete the rows where @keys.j = @data.j * 10?

Enter the DELETE JOIN:

DELETE @data
  FROM @data [d] INNER JOIN
       @keys [k]
    ON [d].[i] = [k].[i]
   AND [d].[j] * 10 = [k].[j]

SELECT *
  FROM @data

And here's what we get - the row with [i] = 1 (the only one where [d].[j] * 10 = [k].[j]) has been deleted:

i           j
----------- -----------
2           20
3           30
4           40
Nice. But, be careful - if you specify a LEFT OUTER JOIN then all the rows in @data would be deleted - so you do have to make sure that you use a JOIN that represents the value that you want.

So, how do we UPDATE from one table to the other? Again, it's very simple:

UPDATE @data
   SET [j] = [k].[j] / 10
  FROM @data [d] INNER JOIN
       @keys [k]
    ON [d].[i] = [k].[i]

Again though, we need to be careful... If we use a LEFT OUTER JOIN then the values of the rows with no match will receive NULL - and the contents of the @data table would look like the following:

i           j
----------- -----------
1           10
2           21
3           NULL
4           NULL
So - JOIN in UPDATE and DELETE statements can be really helpful - but we must use them with care, respect their meaning and take the time to understand what they will do.


