Month: April 2015

Excel in T-SQL Part 1 – HARMEAN, GEOMEAN and FREQUENCY

Today’s blog will be the first in a multi-part series on replicating Excel functions in T-SQL, starting with HARMEAN, GEOMEAN and FREQUENCY.

We’ll focus our solutions on creating T-SQL in-line Table Valued Functions (iTVFs), introduced in SQL Server 2005, that apply the solution technique to a specifically designed table of sample data.  If you plan on using any of these solutions, this means you’ll need to adapt them to your table of interest.  Due to the need to remain somewhat brief in my explanations, I won’t be doing a lot of performance testing of these iTVFs (I certainly will not in today’s blog).  But you can rest assured I’ve put my nose to the grindstone and done my best to incorporate all of the performance best practices I know of when writing this library of functions.

Some Sample Data

We’ll need to construct some sample data to validate our T-SQL code against, so let’s start with a table and some sample data.

-- A table to hold our sample data
CREATE TABLE dbo.ExcelExamples
(
    Grp         INT
    ,Value      INT
);
GO

INSERT INTO dbo.ExcelExamples (Grp, Value)
-- Grp identifiers 1, 2, 3 and 4 
SELECT 1, n
FROM (VALUES(15),(20),(35),(40),(50)) a (n)
UNION ALL 
SELECT 2, n
FROM (VALUES(3),(6),(7),(8),(8),(10),(13),(15),(16), (20)) a (n)
UNION ALL 
SELECT 3, n
FROM (VALUES(3),(6),(7),(8),(8),(9),(10),(13),(15),(16), (20)) a (n)
UNION ALL
SELECT 4, n
FROM (VALUES(1),(2),(3),(4)) a(n);

Let’s construct an Excel spreadsheet containing this sample data and show Excel’s results for the three functions we’ll attempt to replicate.  The Breaks column represents one of the arguments to the FREQUENCY function.

[Figure: Excel spreadsheet showing the sample data for each Grp, the Breaks column, and Excel’s HARMEAN, GEOMEAN and FREQUENCY results]

In order to use the Excel FREQUENCY function, I needed to learn how to enter array formulas, which you’ll also need to do if you want to try some examples of your own that are like the above.
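In case you haven’t worked with array formulas before: you select the output cells, type a formula such as =FREQUENCY(B2:B6, D2:D4) and commit it with Ctrl+Shift+Enter rather than Enter alone (those cell ranges are placeholders for illustration, not the ones from my actual spreadsheet).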

Harmonic Mean

The harmonic mean of the set of observations represented by our sample can be described as the total number of observations divided by the sum of the reciprocals of the data elements.  If you are into the mathematical representation of this, you can visit the linked Wiki page.
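As a quick worked example, take the Grp = 4 values {1, 2, 3, 4} from our sample data: the harmonic mean is 4/(1/1 + 1/2 + 1/3 + 1/4) = 4/2.0833… ≈ 1.92, which a one-line query can confirm.

-- Worked example: harmonic mean of {1, 2, 3, 4}
SELECT HarmonicMean=4/(1./1 + 1./2 + 1./3 + 1./4);
-- ≈ 1.92, matching Excel's HARMEAN for Grp=4 above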

This is actually quite straightforward in T-SQL:

SELECT Grp, HarmonicMean=COUNT(*)/SUM(1./Value)
FROM dbo.ExcelExamples
GROUP BY Grp;
-- Results:
Grp	HarmonicMean
1	26.28285356807711767
2	8.00674561002681970
3	8.08789038406971559
4	1.92000000000307200

Note the correspondence between returned values and those calculated by Excel above.  The 1. literal in 1./Value matters here: it promotes the division to DECIMAL, whereas dividing one INT by another would silently truncate the result.

To keep things neat and tidy, we’ll encapsulate this in an iTVF like I said I would in the introduction.

CREATE FUNCTION dbo.Excel_HarmonicMean
(
    @Grp    INT
)
RETURNS TABLE WITH SCHEMABINDING
RETURN
SELECT Grp=@Grp, HarmonicMean=COUNT(*)/SUM(1./a.Value)
FROM dbo.ExcelExamples a
WHERE @Grp IS NULL OR @Grp = a.Grp;

Which we can call either for a single group or the entire set as follows.

-- Call the iTVF for each group (Grp) in the set
SELECT a.Grp, b.HarmonicMean
FROM
( 
    SELECT a.Grp
    FROM dbo.ExcelExamples a
    GROUP BY a.Grp
) a
CROSS APPLY dbo.Excel_HarmonicMean(a.Grp) b;

-- Call the iTVF with NULL for the Grp to apply to all values in the table
SELECT Grp, HarmonicMean
FROM dbo.Excel_HarmonicMean(NULL);
-- Results:
Grp  HarmonicMean
1    26.28285356807711767
2    8.00674561002681970
3    8.08789038406971559
4    1.92000000000307200

Grp  HarmonicMean
NULL 6.14429720701647651

You can also check that the harmonic mean for all of the sample data matches our Excel spreadsheet example above.  These results show slightly more precision than Excel does, but presumably you can adjust that if you are interested.
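For example, if you’d rather see output closer to what Excel displays, a CAST in the SELECT is one simple way to trim it (just a sketch; the precision and scale here are arbitrary):

-- Trimming the harmonic mean to five decimal places
SELECT Grp, HarmonicMean=CAST(COUNT(*)/SUM(1./Value) AS DECIMAL(18,5))
FROM dbo.ExcelExamples
GROUP BY Grp;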

Geometric Mean

The geometric mean of the set of observations represented by our sample data can be described as the nth root of the product of the values in the sample.  An alternative description is to find the product of the values in the sample, and raise that to the power that is the reciprocal of n.  Once again, the Wiki page will show you the mathematical representation of this calculation, if that would be more clear to you.

T-SQL has a SQRT function, but it does not have a function for calculating the nth root.  It does have a POWER function, however, which we can use to implement the alternative description above.
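Again using the Grp = 4 values as a worked example: the geometric mean of {1, 2, 3, 4} is (1×2×3×4)^(1/4) = 24^0.25, and POWER handles the fractional exponent directly.

-- Worked example: geometric mean of {1, 2, 3, 4} as the 4th root of 24
SELECT GeometricMean=POWER(24e0, 1./4);
-- ≈ 2.21336383940064, matching Excel's GEOMEAN for Grp=4 above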

Of course, the real challenge is calculating the product of all items in the sample, which could turn out to be quite large and end up causing an arithmetic overflow quite quickly.  To deal with this issue, we’re going to use a Stupid T-SQL Trick for Logarithms that we learned from an earlier blog.  The trick rests on the identity x1·x2·…·xn = 10^(LOG10(x1) + … + LOG10(xn)), so we can SUM the logarithms and only convert back to a product at the very end.

Let’s look at the results from the following query:

SELECT Grp, SUM(LOG10(Value)), COUNT(*)
FROM dbo.ExcelExamples
GROUP BY Grp
ORDER BY Grp;
-- Results:
Grp  (No column name)   (No column name)
1    7.32221929473392   5               
2    9.70173510878387   10              
3    10.6559776182232   11              
4    1.38021124171161   4 

By summing the LOG10 values, we’ve got a logarithmic number that can be converted back to the product of the numbers that were summed by using POWER(10.0, x), where x is SUM(LOG10(Value)).  That could be quite problematic due to the potential for arithmetic overflow, but is possible for the set of sample data we’ve provided.  If we cast the SUM to a FLOAT, we’ll have a much wider range of values available, albeit at a loss of precision.

Also note that LOG10(0.0) is undefined and will return an error, so if you’ve got zero data points anywhere, you’ll need to handle that case as well.
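Here is one possible way to guard against zeros, offered only as a sketch (note it makes no attempt to handle negative values, which would also make LOG10 raise an error): since any product that includes a zero is itself zero, we can special-case those groups and keep the zeros out of the logarithm with NULLIF.

-- A sketch of one way to make the log trick safe for zero data points
SELECT Grp
    ,GeoProduct=CASE WHEN MIN(Value) = 0 THEN 0   -- a zero anywhere zeroes the product
                ELSE POWER(10e0, SUM(LOG10(NULLIF(Value, 0))))
                END
FROM dbo.ExcelExamples
GROUP BY Grp;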

SELECT Grp, SUM(LOG10(Value)), COUNT(*)
    ,CAST(POWER(10., CAST(SUM(LOG10(Value)) AS FLOAT)) AS FLOAT) 
FROM dbo.ExcelExamples
GROUP BY Grp
ORDER BY Grp;
-- Results:
Grp    (No column name)   (No column name)  (No column name)
1      7.32221929473392   5                 21000000
2      9.70173510878387   10                5031936000
3      10.6559776182232   11                45287424000
4      1.38021124171161   4                 24

For a quick check, we can see that 1*2*3*4 = 24 (the last row, where Grp=4).

We’re not quite there yet, because we still need to take the nth root, but we can now calculate the geometric mean thusly:

SELECT Grp, GeometricMean=
    POWER(CAST(POWER(10., CAST(SUM(LOG10(Value)) AS FLOAT)) AS FLOAT)
        , 1./COUNT(*))              -- Reciprocal of count gives nth root
FROM dbo.ExcelExamples
GROUP BY Grp
ORDER BY Grp;
-- Results:
Grp   GeometricMean
1     29.1369345857619
2     9.33627232372056
3     9.30518984213118
4     2.21336383940064

Our results indicate that we seem to have nailed the geometric mean as calculated by the Excel GEOMEAN function.

Let’s now do what we said we were going to do in the introduction and encapsulate this in an iTVF.

CREATE FUNCTION dbo.Excel_GeometricMean
(
    @Grp    INT
)
RETURNS TABLE WITH SCHEMABINDING
RETURN
SELECT Grp=@Grp, GeometricMean=
    POWER(CAST(POWER(10., CAST(SUM(LOG10(a.Value)) AS FLOAT)) AS FLOAT)
        , 1./COUNT(*))              -- Reciprocal of count gives nth root
FROM dbo.ExcelExamples a
WHERE @Grp IS NULL OR @Grp = a.Grp;

And we can now either call that function for a group or we can calculate the geometric mean across all of the sample data.

-- Call the iTVF for each group (Grp) in the set
SELECT a.Grp, b.GeometricMean
FROM
( 
    SELECT a.Grp
    FROM dbo.ExcelExamples a
    GROUP BY a.Grp
) a
CROSS APPLY dbo.Excel_GeometricMean(a.Grp) b;

-- Call the iTVF with NULL for the Grp to apply to all values in the table
SELECT Grp, GeometricMean
FROM dbo.Excel_GeometricMean(NULL);
-- Results:
Grp   GeometricMean
1     29.1369345857619
2     9.33627232372056
3     9.30518984213118
4     2.21336383940064

Grp   GeometricMean
NULL  9.3040372829131

Once again, note the correspondence between returned values and those calculated by Excel above.

Frequency

Building an iTVF for FREQUENCY is going to be just a little more challenging than the two cases we’ve studied above.  Let’s take a look at a specialized query targeting our first Grp to get started.

DECLARE @Grp  INT = 1;

WITH Ranges (r) AS
(
    -- The ranges of interest for Grp=1 (refer to the Excel example)
    SELECT 10 UNION ALL SELECT 25 UNION ALL SELECT 40
),
    RangesExt (r, rn) AS
(
    -- Add a row number (i.e., extend the ranges)
    SELECT r, rn=ROW_NUMBER() OVER (ORDER BY r)
    FROM Ranges
),
    JoinedRanges AS
(
    SELECT Grp, r1, r2 
    FROM
    (
        -- Full join to increase range rows from 3 to 4
        SELECT r1=a.r, r2=b.r
        FROM RangesExt a
        FULL JOIN RangesExt b ON a.rn = b.rn - 1
    ) a
    CROSS APPLY (SELECT Grp=@Grp) b
)
SELECT *
FROM JoinedRanges;
-- Results:
Grp	r1	r2
1	NULL	10
1	10	25
1	25	40
1	40	NULL

If you’ve looked at how FREQUENCY works, it constructs four result rows from a three-value range (each value being the upper end of an interval): everything less than or equal to the first value, the counts falling between consecutive values, and everything greater than the last value.  A range with more than three values of course always produces one additional row of results, since n values yield n+1 intervals.

Our r1 and r2 columns can be used to perform the range checks described above.  In fact, they’re really the heart of the function we’ll show you next.

Note that Excel’s FREQUENCY function may only calculate counts between zero and the value in the first row, whereas what we’re about to do extends the lower end of the range into negative numbers.  The documentation was unclear and I didn’t personally check it out.  But we encourage our valued readers to do so and modify my code accordingly if it better suits your needs that way.

Before we create a function to do the heavy lifting here, we want to mention Table Valued Parameters (TVPs).  This is a way of creating and using tables based on a pattern on-the-fly (sort of) in T-SQL.  I’ve written before about how TVPs can be used.  Let’s create one now (this is done by the CREATE TYPE AS TABLE statement):

CREATE TYPE dbo.Excel_FrequencyRanges AS TABLE
(
    [Range] INT
);
GO
DECLARE @Sample_Range dbo.Excel_FrequencyRanges;

INSERT INTO @Sample_Range
SELECT 10 UNION ALL SELECT 25 UNION ALL SELECT 40;

SELECT *
FROM @Sample_Range;
-- Results:
Range
10
25
40

TVPs can be created as above and passed into a T-SQL function, assuming you are using at least SQL 2008.
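As an aside, on SQL 2008 and later you can also populate the TVP a bit more compactly with a table value constructor; the UNION ALL form above just matches the style used elsewhere in this post.

-- Equivalent population of the TVP using a VALUES row constructor
INSERT INTO @Sample_Range ([Range])
VALUES (10), (25), (40);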

So now here’s my take on a T-SQL iTVF to calculate frequency. Note how I pass in the TVP as the second parameter to the function.

CREATE FUNCTION dbo.Excel_Frequency
(
    @Grp        INT
    ,@Ranges    dbo.Excel_FrequencyRanges READONLY
)
RETURNS TABLE WITH SCHEMABINDING
RETURN
WITH RangesExt (r, rn) AS
(
    -- Add a row number
    SELECT r=[Range], rn=ROW_NUMBER() OVER (ORDER BY [Range])
    FROM @Ranges
),
    JoinedRanges AS
(
    SELECT Grp, r1, r2 
        -- Create a string to represent the range
        ,[Range]=CASE 
                WHEN a.r1 IS NULL
                THEN 'x less than or equal to ' + CAST(a.r2 AS VARCHAR(20))
                WHEN a.r2 IS NULL
                THEN 'x greater than ' + CAST(a.r1 AS VARCHAR(20))
                ELSE CAST(a.r1 AS VARCHAR(20)) + ' less than x less than or equal to ' +
                    CAST(a.r2 AS VARCHAR(20))
                END
    FROM
    (
        -- Full join to increase range rows from 3 to 4
        SELECT r1=a.r, r2=b.r
        FROM RangesExt a
        FULL JOIN RangesExt b ON a.rn = b.rn - 1
    ) a
    CROSS APPLY (SELECT Grp=@Grp) b
)
SELECT a.Grp, a.[Range], Frequency=COUNT(Value)
    ,[Order]=ROW_NUMBER() OVER (ORDER BY r1)
FROM JoinedRanges a
OUTER APPLY
(
    SELECT Grp, Value
    FROM dbo.ExcelExamples b
    WHERE (@Grp IS NULL OR a.Grp = b.Grp) AND
        (r1 IS NULL OR b.Value > r1) AND
        (r2 IS NULL OR b.Value <= r2)
) b
GROUP BY a.Grp, a.[Range], a.r1;

I’ve done a couple of things in our function here that perhaps require explanation:

  • I’ve formatted the ranges we’re interested in into character strings that list out the values compared to an “x” – the column of interest, which in the case of our table is Value.
  • I’ve returned an [Order] column to help in ordering the results after their return should we need it for display purposes.

Let’s put this bad boy through its paces.

DECLARE @Sample_Range dbo.Excel_FrequencyRanges;

-- Create the range for Grp=1
INSERT INTO @Sample_Range
SELECT 10 UNION ALL SELECT 25 UNION ALL SELECT 40;

-- Display the results for Grp=1
SELECT *
FROM dbo.Excel_Frequency(1, @Sample_Range);

-- Clear the table so we can create the range for Grp IN (2, 3)
DELETE FROM @Sample_Range;

-- Create the range for Grp IN (2, 3)
INSERT INTO @Sample_Range
SELECT 5 UNION ALL SELECT 10 
UNION ALL SELECT 15 UNION ALL SELECT 20;

-- Display the results for Grp IN (2, 3)
SELECT a.Grp, b.[Range], b.Frequency
FROM
(
    SELECT Grp
    FROM dbo.ExcelExamples
    WHERE Grp IN (2, 3)
    GROUP BY Grp
) a
CROSS APPLY dbo.Excel_Frequency(a.Grp, @Sample_Range) b
ORDER BY b.Grp, b.[Order];

-- Clear the table so we can create the range for Grp=4
DELETE FROM @Sample_Range;

-- Create the range for Grp=4
INSERT INTO @Sample_Range
SELECT 1 UNION ALL SELECT 3;

-- Display the results for Grp=4
SELECT *
FROM dbo.Excel_Frequency(4, @Sample_Range);

-- Clear the table so we can create the range across all data points
DELETE FROM @Sample_Range;

-- Create the range to use across all data points
INSERT INTO @Sample_Range
SELECT 5 UNION ALL SELECT 10 
UNION ALL SELECT 15 UNION ALL SELECT 20
UNION ALL SELECT 30;

-- Call the iTVF with NULL for the Grp to apply to all values in the table
SELECT *
FROM dbo.Excel_Frequency(NULL, @Sample_Range);

The script above produces the four results sets shown below.

Grp  Range                                    Frequency  Order
1    x less than or equal to 10               0          1
1    10 less than x less than or equal to 25  2          2
1    25 less than x less than or equal to 40  2          3
1    x greater than 40                        1          4

Grp  Range                                    Frequency
2    x less than or equal to 5                1
2    5 less than x less than or equal to 10   5
2    10 less than x less than or equal to 15  2
2    15 less than x less than or equal to 20  2
2    x greater than 20                        0
3    x less than or equal to 5                1
3    5 less than x less than or equal to 10   6
3    10 less than x less than or equal to 15  2
3    15 less than x less than or equal to 20  2
3    x greater than 20                        0

Grp  Range                                    Frequency  Order
4    x less than or equal to 1                1          1
4    1 less than x less than or equal to 3    2          2
4    x greater than 3                         1          3

Grp  Range                                    Frequency  Order
NULL x less than or equal to 5                6          1
NULL 5 less than x less than or equal to 10   11         2
NULL 10 less than x less than or equal to 15  5          3
NULL 15 less than x less than or equal to 20  5          4
NULL 20 less than x less than or equal to 30  0          5
NULL x greater than 30                        3          6

You might want to go back and check those frequencies against the Excel example just to be sure I’m not trying to pull a fast one on you.

So there you have it!  Our first attempt at replicating a few Excel functions in T-SQL.

And here you may have been thinking that I was going to teach you how to excel at writing SQL when you first read that title!  Maybe that will come later, once I’ve figured it out myself.

Follow me on Twitter: @DwainCSQL

© Copyright Dwain Camps 29 Apr 2015.  All rights reserved.

Ruminations on Writing Great T-SQL

Today we’re going to try to extend some advice that I once heard from SQL MVP Jeff Moden, which I wrote about in my previous blog entitled “Make it Work, Make it Fast, Make it Pretty.”  That advice was just as pertinent then as it is now, but perhaps I can add to it what I think makes for really great T-SQL code.

Great T-SQL code is going to start with all of the advice that I gave in that blog.  That same advice really applies to any coding language, not just T-SQL.

But great T-SQL code is actually much more than that.  Let’s look at some of the aspects that transcend what I’ve previously written.

Elegance of a Set-based Algorithm

Obviously in our T-SQL we’d like to use predominately set-based algorithms.  Depending on the problem being addressed, oftentimes there are just a few very elegant solutions, and probably a bunch of solutions that are not nearly as elegant.

In simplicity there is often elegance.  Simplicity, elegance and speed may unfortunately not all play well together.  But if you can come up with a solution that is elegant, simple and fast, that’s usually going to get you a lot closer to writing great T-SQL.

Reusability

From Facts and Fallacies of Software Engineering by Robert L. Glass, here is Fact #18: “There are two “rules of three” in reuse: (a) It is three times as difficult to build reusable components as single use components, and (b) a reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.”

I will suggest that in order to be great, T-SQL code must be reusable.  I’ve written before that well-focused code that solves only a specific problem is probably going to be a whole lot faster than more generalized code that solves a wider variety of problems.  So that means it should be focused on solving a common problem that occurs frequently, so that it can be reused (as a tool) whenever the need for it arises.

I tend to agree with Mr. Glass in this case; finding a common problem and then building a fast-performing, simple and elegant T-SQL solution is probably a lot more challenging than building any other one-off query.  Part of this challenge of course is simply finding a suitably simple problem to focus on, which is also encountered commonly enough to want to build something that’s reusable.

Frequency and Quality of its Reuse

Since beginning my study of T-SQL, and of course I remain a student of the subject to this day, I’ve seen a fair number of really elegant solutions to specific problems.  Of the ones that can be classified as reusable, I’ve seen many that solve well a somewhat common problem, but unfortunately the problem itself is not so common that the solution will be reused often.  Let’s face it, maybe you’ve got a solution for Gaps and Islands that is outrageously wonderful, simple, fast and elegant.  But Gaps and Islands isn’t a particularly common problem to encounter.  Sure you see it once in a while, and certainly it is good to have an elegant solution just waiting in the wings for when you need it.  But commonality of the problem is a great influencer, at least to me, of what makes for great T-SQL.

Quality of reuse is a different story entirely.  So let’s say you’ve got this (arguably) great solution and it sees quite a fair bit of reuse.  Who are the folks that are reusing it?  Are they peers in the SQL community, are they SQL Server MVPs who have embraced your method thus utilizing it themselves?  Or is it the great unwashed masses?

Facebook is a great application because millions of people use it.  But who are those users?  I’d say it is a much higher quality pattern of reuse, to have a group of peers or those that are more skilled than you are, using your reusable code, rather than just anybody else in the world.  So while Facebook may be a great app, it may not represent great code!  The quality of the people reusing your code counts for something.

Productivity

Most great T-SQL code should enhance your productivity.  By that I mean, if you’re a developer it can be considered a tool that allows you to avoid spending awkward moments of your development time solving silly sub-problems, when really you need to be focusing on solving the bigger problems that relate to the business solutions you’re trying to produce.  As a DBA, your productivity can be enhanced by any T-SQL script that automates some activity that allows you to be more proactive in your database monitoring activities.

Of course, just because you can solve a problem more quickly because the sub-problems have already been addressed doesn’t mean you should sacrifice performance.  But since great T-SQL should already be pretty fast, you may find that using the tool makes the solution to your bigger problem fast enough!  Ultimately you may have to spend some time to make it even faster, but at least you’ve gotten through the first hurdle of a development project, which is to just get it working.

Encapsulation

Any great T-SQL solution is one that can be encapsulated such that it can be used as a “black box,” without users (or, more importantly, maintainers) of the calling T-SQL needing to worry about how it works.

Yes, they should be able to look under the covers, so that when need be they can figure out how it works.  But mostly the encapsulation simply makes a great piece of T-SQL, which may otherwise be elegant or complex, really, really easy to use.

My Vote for the Greatest Piece of T-SQL

Arguably the greatest piece of T-SQL code that I’ve ever seen written is the DelimitedSplit8K string splitter.  While Jeff Moden probably wasn’t the originator of the concept of splitting a string, his first attempt at writing a fast string splitter (Tally OH! An Improved SQL 8K “CSV Splitter” Function) was pretty awesome.  I’d say that it clearly meets all of the criteria I’ve mentioned so far in this blog, and with more than 50,000 views of that article as of this writing I’d imagine there’s more than just a few people that agree with me.

Jeff really did a great job of setting up a proper test harness and proving the speed potential of the algorithm, so whether or not he claims to be the author of this wonderful little function, he’ll be the one everyone remembers once all of the dust has settled.

It has also achieved another level of greatness, which is unmatched in my experience.  That is that an entire community has sprung up around it.  Over the years since its original publication, there have been numerous contributors to the evolution of DelimitedSplit8K.  These contributions have spanned more than 75 pages of discussion thread on the topic (again, as of this writing).

Alas, there are too many contributors to mention all of them by name, but suffice it to say that list includes some really great SQL talent.  Some have focused on small tweaks, others have focused on producing better test harnesses.  Others have written custom CLRs to improve the speed even further, when CLRs are allowed.  All of the contributors probably use this tool frequently, so this speaks volumes to the function’s adoption and reuse.  One of the recent contributors, Eirikur Eiriksson, upped the ante significantly by writing a version that was even faster by utilizing a new SQL 2012 feature, and his diligence in constructing test harnesses is also worthy of honorable mention.

So here’s this huge group of peers (by that I mean SQL-folk of course), that have not only embraced Jeff’s work, but have also formed an on-going support community because they felt the underlying problem was so worthwhile that they should expend their precious time and effort on it.

There’s an interesting sidebar to my vote.  The Federal Relational Protection Agency (FRPA) might scoff at how preposterous the idea of a string-splitter being great T-SQL is.  After all, this is a relational database and we simply don’t store delimited strings in our properly normalized relational databases, now do we?  I guess what I’m saying here is that academic considerations probably take a back seat to adoption.  Clearly this problem occurs relatively frequently in the real world, regardless of what the FRPA would have us think.  And solving real world problems, to me at least, is really what it’s all about.

Aspirations

One day when I grow up I want to write a truly great snippet of T-SQL.  In my development career, knowing that solutions I’ve written are actually being used has always brought me great comfort and satisfaction.  No developer in their right mind, at least in my humble opinion, wants to knowingly build shelfware.  I’ve been around for a long time so maybe not everyone feels the same any more, but I hope this attitude still carries forward to all levels of developers currently putting their code out there for others to learn from and grow with.

So how about you?  Do you have a vote for the greatest snippet of T-SQL ever?  Post a link as a comment telling us why you think it qualifies, so all who see this might benefit from your particular experience with some T-SQL you think is great.  Affirmations of my vote are also welcome.

In the end, if this blog becomes a repository of some really great T-SQL code patterns, I will derive a lot of satisfaction from that too.

And to those of you out there that share my aspirations, whether or not the favorite tool/snippet you develop ever makes the list of all-time greats, wouldn’t you like to hear from all of those people that found your code wonderful enough to add to their T-SQL toolkit?  So take note, you adopters out there!  If you’re using someone else’s code commonly to solve a problem, post them a thank you.  I’m sure they’ll be happy to hear that from you, and who knows?  Perhaps it will even spawn further greatness and sharing among one of the greatest communities the coding world has ever known.  That’s right, I mean the SQLverse, and we’re all a part of it!

Thanks for listening to my ruminations today on what makes T-SQL code great.

Follow me on Twitter: @DwainCSQL

© Copyright Dwain Camps 22 Apr 2015.  All rights reserved.

Stupid T-SQL Tricks – Part 3: A Zodiacal SQL

Today we’ll learn how you can use T-SQL to calculate your zodiac sign from your birthdate.

“Heeeeeere’s Johnny!” – Ed McMahon introducing Johnny Carson for the Tonight Show

If you’re old enough to remember the Tonight Show hosted by Johnny Carson, how can you not remember the iconic Stupid Pet Tricks skits, and the later tributes to those done by David Letterman?  My Stupid T-SQL Tricks are hopefully a bit more useful, but just as amusing as Carson’s Stupid Pet Tricks were!

Everyone knows the twelve signs of the zodiac, and there’s a pretty good chance you also know what your sign is, and that your sign depends on your date of birth.

Putting the Zodiac Signs into a Lookup Table

For our lookup table, we’ll use a Common Table Expression (CTE) with the twelve zodiac signs, but with one repeated.

WITH Zodiac
(
    SignNo, SignName, Symbol, StartDT, EndDT, SignIs
) 
AS
(
    SELECT           1,  'Aries',       NCHAR(9800), '1900-03-21', '1900-04-20', 'Ram'
    UNION ALL SELECT 2,  'Taurus',      NCHAR(9801), '1900-04-21', '1900-05-21', 'Bull'
    UNION ALL SELECT 3,  'Gemini',      NCHAR(9802), '1900-05-22', '1900-06-21', 'Twins'
    UNION ALL SELECT 4,  'Cancer',      NCHAR(9803), '1900-06-22', '1900-07-22', 'Crab'
    UNION ALL SELECT 5,  'Leo',         NCHAR(9804), '1900-07-23', '1900-08-22', 'Lion'
    UNION ALL SELECT 6,  'Virgo',       NCHAR(9805), '1900-08-23', '1900-09-23', 'Maiden'
    UNION ALL SELECT 7,  'Libra',       NCHAR(9806), '1900-09-24', '1900-10-23', 'Scales'
    UNION ALL SELECT 8,  'Scorpio',     NCHAR(9807), '1900-10-24', '1900-11-22', 'Scorpion'
    UNION ALL SELECT 9,  'Sagittarius', NCHAR(9808), '1900-11-23', '1900-12-21', 'Archer (Centaur)'
    UNION ALL SELECT 10, 'Capricorn',   NCHAR(9809), '1900-12-22', '1900-12-31', 'Sea-Goat (Goat)'
    UNION ALL SELECT 10, 'Capricorn',   NCHAR(9809), '1900-01-01', '1900-01-20', 'Sea-Goat (Goat)'
    UNION ALL SELECT 11, 'Aquarius',    NCHAR(9810), '1900-01-21', '1900-02-19', 'Water-bearer'
    UNION ALL SELECT 12, 'Pisces',      NCHAR(9811), '1900-02-20', '1900-03-20', 'Fish'
)
SELECT *
FROM Zodiac;

The signs are numbered in the order that they’re typically described in the literature, e.g., on the Wiki page for The Zodiac.  Note how Capricorn is duplicated, because the dates for that sign cross the end-of-year boundary.  Here are the displayed results from our table:

SignNo  SignName     Symbol  StartDT     EndDT       SignIs
1       Aries        ♈      1900-03-21  1900-04-20   Ram
2       Taurus       ♉      1900-04-21  1900-05-21   Bull
3       Gemini       ♊      1900-05-22  1900-06-21   Twins
4       Cancer       ♋      1900-06-22  1900-07-22   Crab
5       Leo          ♌      1900-07-23  1900-08-22   Lion
6       Virgo        ♍      1900-08-23  1900-09-23   Maiden
7       Libra        ♎      1900-09-24  1900-10-23   Scales
8       Scorpio      ♏      1900-10-24  1900-11-22   Scorpion
9       Sagittarius  ♐      1900-11-23  1900-12-21   Archer (Centaur)
10      Capricorn    ♑      1900-12-22  1900-12-31   Sea-Goat (Goat)
10      Capricorn    ♑      1900-01-01  1900-01-20   Sea-Goat (Goat)
11      Aquarius     ♒      1900-01-21  1900-02-19   Water-bearer
12      Pisces       ♓      1900-02-20  1900-03-20   Fish

I think it is kind of cool that we could use the NCHAR built-in T-SQL function (introduced in SQL 2005) to return the UNICODE symbol for each zodiac sign!

Suppose we’d now like to calculate the sign for a birthdate.  We’ll use two cases, a Capricorn and a Gemini, to illustrate how easy it is using the lookup table above (the Capricorn directly below, and the Gemini once we’ve wrapped the query in a function).

DECLARE @BirthDate  DATE = '1999-01-12';

WITH Zodiac
(
    SignNo, SignName, Symbol, StartDT, EndDT, SignIs
) 
AS
(
    SELECT           1,  'Aries',       NCHAR(9800), '1900-03-21', '1900-04-20', 'Ram'
    UNION ALL SELECT 2,  'Taurus',      NCHAR(9801), '1900-04-21', '1900-05-21', 'Bull'
    UNION ALL SELECT 3,  'Gemini',      NCHAR(9802), '1900-05-22', '1900-06-21', 'Twins'
    UNION ALL SELECT 4,  'Cancer',      NCHAR(9803), '1900-06-22', '1900-07-22', 'Crab'
    UNION ALL SELECT 5,  'Leo',         NCHAR(9804), '1900-07-23', '1900-08-22', 'Lion'
    UNION ALL SELECT 6,  'Virgo',       NCHAR(9805), '1900-08-23', '1900-09-23', 'Maiden'
    UNION ALL SELECT 7,  'Libra',       NCHAR(9806), '1900-09-24', '1900-10-23', 'Scales'
    UNION ALL SELECT 8,  'Scorpio',     NCHAR(9807), '1900-10-24', '1900-11-22', 'Scorpion'
    UNION ALL SELECT 9,  'Sagittarius', NCHAR(9808), '1900-11-23', '1900-12-21', 'Archer (Centaur)'
    UNION ALL SELECT 10, 'Capricorn',   NCHAR(9809), '1900-12-22', '1900-12-31', 'Sea-Goat (Goat)'
    UNION ALL SELECT 10, 'Capricorn',   NCHAR(9809), '1900-01-01', '1900-01-20', 'Sea-Goat (Goat)'
    UNION ALL SELECT 11, 'Aquarius',    NCHAR(9810), '1900-01-21', '1900-02-19', 'Water-bearer'
    UNION ALL SELECT 12, 'Pisces',      NCHAR(9811), '1900-02-20', '1900-03-20', 'Fish'
)
SELECT BirthDate=@BirthDate, a.SignNo, a.SignName, SignIs='The ' + a.SignIs, a.Symbol
FROM Zodiac a
WHERE @BirthDate BETWEEN 
    DATEADD(year, DATEDIFF(year, a.StartDT, @BirthDate), a.StartDT) AND 
    DATEADD(year, DATEDIFF(year, a.EndDT, @BirthDate), a.EndDT);

Which returns one row for our Capricorn:

BirthDate   SignNo   SignName    SignIs              Symbol
1999-01-12  10       Capricorn   The Sea-Goat (Goat) ♑

You can see how we’ve used the DATEADD and DATEDIFF functions to place our birthdate between the StartDT and EndDT columns within our lookup table.
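To see why the duplicated Capricorn row earns its keep, here’s a sketch of what that date shift does to each of Capricorn’s two rows for our '1999-01-12' birthdate (the column names are just for illustration):

-- How the year shift plays out against Capricorn's two rows
SELECT ShiftedStart1=DATEADD(year, DATEDIFF(year, '19001222', '19990112'), '19001222')  -- 1999-12-22
    ,ShiftedEnd1    =DATEADD(year, DATEDIFF(year, '19001231', '19990112'), '19001231')  -- 1999-12-31
    ,ShiftedStart2  =DATEADD(year, DATEDIFF(year, '19000101', '19990112'), '19000101')  -- 1999-01-01
    ,ShiftedEnd2    =DATEADD(year, DATEDIFF(year, '19000120', '19990112'), '19000120'); -- 1999-01-20
-- The birthdate misses the December row entirely but lands inside the January
-- row, which is exactly why Capricorn needs to appear twice in the lookup table.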

A Function to Calculate the Sign of the Zodiac for a Birth Date

I like in-line Table Valued Functions (iTVFs) because I consider them tools.  This little T-SQL query is easy enough to put into an iTVF, so let’s do that now.

CREATE FUNCTION dbo.ZodiacSign
(
    @BirthDate  DATE
) RETURNS TABLE WITH SCHEMABINDING
RETURN
WITH Zodiac
(
    SignNo, SignName, Symbol, StartDT, EndDT, SignIs
) 
AS
(
    SELECT           1,  'Aries',       NCHAR(9800), '1900-03-21', '1900-04-20', 'Ram'
    UNION ALL SELECT 2,  'Taurus',      NCHAR(9801), '1900-04-21', '1900-05-21', 'Bull'
    UNION ALL SELECT 3,  'Gemini',      NCHAR(9802), '1900-05-22', '1900-06-21', 'Twins'
    UNION ALL SELECT 4,  'Cancer',      NCHAR(9803), '1900-06-22', '1900-07-22', 'Crab'
    UNION ALL SELECT 5,  'Leo',         NCHAR(9804), '1900-07-23', '1900-08-22', 'Lion'
    UNION ALL SELECT 6,  'Virgo',       NCHAR(9805), '1900-08-23', '1900-09-23', 'Maiden'
    UNION ALL SELECT 7,  'Libra',       NCHAR(9806), '1900-09-24', '1900-10-23', 'Scales'
    UNION ALL SELECT 8,  'Scorpio',     NCHAR(9807), '1900-10-24', '1900-11-22', 'Scorpion'
    UNION ALL SELECT 9,  'Sagittarius', NCHAR(9808), '1900-11-23', '1900-12-21', 'Archer (Centaur)'
    UNION ALL SELECT 10, 'Capricorn',   NCHAR(9809), '1900-12-22', '1900-12-31', 'Sea-Goat (Goat)'
    UNION ALL SELECT 10, 'Capricorn',   NCHAR(9809), '1900-01-01', '1900-01-20', 'Sea-Goat (Goat)'
    UNION ALL SELECT 11, 'Aquarius',    NCHAR(9810), '1900-01-21', '1900-02-19', 'Water-bearer'
    UNION ALL SELECT 12, 'Pisces',      NCHAR(9811), '1900-02-20', '1900-03-20', 'Fish'
)
SELECT BirthDate=@BirthDate, a.SignNo, a.SignName, SignIs='The ' + a.SignIs, a.Symbol
FROM Zodiac a
WHERE @BirthDate BETWEEN 
    DATEADD(year, DATEDIFF(year, a.StartDT, @BirthDate), a.StartDT) AND 
    DATEADD(year, DATEDIFF(year, a.EndDT, @BirthDate), a.EndDT);

With this function, we can now calculate the Zodiac signs for a whole range of birthdates as follows:

WITH BirthDates (BirthDate) AS
(
    SELECT '1998-06-17'             -- A Gemini
    UNION ALL SELECT '1999-01-10'   -- A Capricorn
)
SELECT a.BirthDate, SignNo, SignName, SignIs, Symbol
FROM BirthDates a
CROSS APPLY dbo.ZodiacSign(a.BirthDate) b;

Which gives us these results:

BirthDate   SignNo  SignName    SignIs               Symbol
1998-06-17  3       Gemini      The Twins            ♊
1999-01-10  10      Capricorn   The Sea-Goat (Goat)  ♑

The Zodiac Date Range in which a Birth Date Falls

Because of Capricorn, which spans two different years, calculating the date range of the sign in which a particular birth date falls gets just a little more complicated.  For example, for our two birthdates shown above, we’d like to see a results set that looks like this.

BirthDate   SignNo  SignName    SignIs               SignPeriodStart  SignPeriodEnd  Symbol
1998-06-17  3       Gemini      The Twins            1998-05-22       1998-06-21     ♊
1999-01-10  10      Capricorn   The Sea-Goat (Goat)  1998-12-22       1999-01-20     ♑

Notice how Capricorn’s SignPeriodStart is the year before that particular birthdate.

Our ZodiacSign function can be modified as follows to handle this case.

ALTER FUNCTION dbo.ZodiacSign
(
    @BirthDate  DATE
) RETURNS TABLE WITH SCHEMABINDING
RETURN
WITH Zodiac
(
    SignNo, SignName, Symbol, StartDT, EndDT, SignIs
) 
AS
(
    SELECT           1,  'Aries',       NCHAR(9800), '1900-03-21', '1900-04-20', 'Ram'
    UNION ALL SELECT 2,  'Taurus',      NCHAR(9801), '1900-04-21', '1900-05-21', 'Bull'
    UNION ALL SELECT 3,  'Gemini',      NCHAR(9802), '1900-05-22', '1900-06-21', 'Twins'
    UNION ALL SELECT 4,  'Cancer',      NCHAR(9803), '1900-06-22', '1900-07-22', 'Crab'
    UNION ALL SELECT 5,  'Leo',         NCHAR(9804), '1900-07-23', '1900-08-22', 'Lion'
    UNION ALL SELECT 6,  'Virgo',       NCHAR(9805), '1900-08-23', '1900-09-23', 'Maiden'
    UNION ALL SELECT 7,  'Libra',       NCHAR(9806), '1900-09-24', '1900-10-23', 'Scales'
    UNION ALL SELECT 8,  'Scorpio',     NCHAR(9807), '1900-10-24', '1900-11-22', 'Scorpion'
    UNION ALL SELECT 9,  'Sagittarius', NCHAR(9808), '1900-11-23', '1900-12-21', 'Archer (Centaur)'
    UNION ALL SELECT 10, 'Capricorn',   NCHAR(9809), '1900-12-22', '1900-12-31', 'Sea-Goat (Goat)'
    UNION ALL SELECT 10, 'Capricorn',   NCHAR(9809), '1900-01-01', '1900-01-20', 'Sea-Goat (Goat)'
    UNION ALL SELECT 11, 'Aquarius',    NCHAR(9810), '1900-01-21', '1900-02-19', 'Water-bearer'
    UNION ALL SELECT 12, 'Pisces',      NCHAR(9811), '1900-02-20', '1900-03-20', 'Fish'
),
    GroupedSigns AS
(
    SELECT SignNo
        ,StartDT    = CASE WHEN SignNo = 10 AND MONTH(@BirthDate) = 1
                        THEN DATEADD(year, -1, MAX(StartDT)) 
                        WHEN SignNo = 10 AND MONTH(@BirthDate) = 12
                        THEN MAX(StartDT)
                        ELSE MIN(StartDT) 
                        END
        ,EndDT      = CASE WHEN SignNo = 10 AND MONTH(@BirthDate) = 1
                        THEN MIN(EndDT) 
                        WHEN SignNo = 10 AND MONTH(@BirthDate) = 12
                        THEN DATEADD(year, 1, MIN(EndDT))
                        ELSE MAX(EndDT) 
                        END
    FROM Zodiac
    GROUP BY SignNo
)
SELECT a.SignNo, a.SignName, SignIs='The ' + a.SignIs, a.Symbol
    ,SignPeriodStart    = CAST(DATEADD(year, YEAR(@BirthDate)-1900, b.StartDT) AS DATE)
    ,SignPeriodEnd      = CAST(DATEADD(year, YEAR(@BirthDate)-1900, b.EndDT) AS DATE)
FROM Zodiac a
JOIN GroupedSigns b ON a.SignNo = b.SignNo
WHERE @BirthDate BETWEEN 
    DATEADD(year, DATEDIFF(year, a.StartDT, @BirthDate), a.StartDT) AND 
    DATEADD(year, DATEDIFF(year, a.EndDT, @BirthDate), a.EndDT);

Because of our split row for Capricorn, we needed to create a grouping that set the dates accordingly and then JOIN this back to our original Zodiac lookup table.  A little more date arithmetic magic, this time using the DATEADD and YEAR built-in functions, will get us to our period dates.  For our '1999-01-10' Capricorn, for example, the GroupedSigns CTE collapses the two Capricorn rows into the single 1900-based range 1899-12-22 through 1900-01-20 (the January branch of the CASE), and adding YEAR(@BirthDate)-1900 = 99 years to each end produces the 1998-12-22 through 1999-01-20 period we were after.

This can be demonstrated by running the following query to call our newly ALTERed function:

WITH BirthDates (BirthDate) AS
(
    SELECT '1998-06-17'             -- A Gemini
    UNION ALL SELECT '1999-01-10'   -- A Capricorn
)
SELECT a.BirthDate, SignNo, SignName, SignIs
    ,SignPeriodStart
    ,SignPeriodEnd
    ,Symbol
FROM BirthDates a
CROSS APPLY dbo.ZodiacSign(a.BirthDate) b;

Which gives us the results we seek!

BirthDate   SignNo  SignName    SignIs               SignPeriodStart  SignPeriodEnd  Symbol
1998-06-17  3       Gemini      The Twins            1998-05-22       1998-06-21     ♊
1999-01-10  10      Capricorn   The Sea-Goat (Goat)  1998-12-22       1999-01-20     ♑

Conclusions

I love to create iTVFs as tools I can always use when some calculation gets hairy!  I have many tools that are specific to their purpose.  I also like to adopt iTVFs of this sort from other sources (a great example of that being DelimitedSplit8K from SQL MVP Jeff Moden).  You should probably consider doing this yourself to improve your skill at both writing iTVFs and using the tools you create to accelerate your own T-SQL coding jobs.

Now if I could just figure out the Yin and the Yang of the Chinese calendar to confirm I’m a Dog, that would really be something to chew on.

Yet another stupid T-SQL trick, right?  Who knows, perhaps one day you’ll even have a use for it!

Follow me on Twitter: @DwainCSQL

© Copyright Dwain Camps 16 Apr 2015.  All rights reserved.

An Even Faster Method of Calculating the Median on a Partitioned Heap

Back in 2013 I wrote an article kindly accepted for publication on the Simple Talk web site called Calculating the Median Value within a Partitioned Set Using T-SQL.  In that article, I was delighted to propose a previously unpublished solution for calculating the median over a partitioned set that did not rely on INDEXing.  In fact, in my tests it was the fastest solution available for that case.

Later on, I was quite pleased to hear that SQL MVP Aaron Bertrand (18 times nominated as of this writing) had published a comparative study of methods for calculating the median called Best approaches for grouped median, in which he kindly included my submission and confirmed that it was the highest-performing solution available for the case of a heap.

Today we’re going to take a second look at the solution I proposed, and offer a couple of interesting possibilities that may be even better for the case of a partitioned heap.

A Heap Example and Some Sample Data to Populate the Table

The following DDL can be used to create a heap, and load it with some partitioned sample data (the ID column is the partition).

CREATE TABLE dbo.MedianValues
(
   ID INT
   ,N INT
);

INSERT INTO dbo.MedianValues
VALUES (1,1),(1,2),(1,3),(1,4),(1,5)
   ,(2,1),(2,2),(2,3),(2,4)
   ,(3,10),(3,2),(3,10),(3,4)
   ,(4,1),(4,5),(4,1),(4,3),(4,3);

From my original article, below are the results you would expect for the median for these four sets of N:

ID    Ordered Sets         Median
1     {1, 2, 3, 4, 5}      3
2     {1, 2, 3, 4}         2.5
3     {2, 4, 10, 10}       7
4     {1, 1, 3, 3, 5}      3

As a reminder, the median is calculated as follows:

  • Order the values for which you want to calculate the median over the partitioning column (ID).
  • If the count of the rows is odd, you take the middle value. So in the example above, you would take the third row for IDs 1 and 4.
  • If the count of the rows is even, you take the average of the middle two rows. In the example above, this means you use the second and third elements in IDs 2 and 3.
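In T-SQL terms, those middle-row positions fall out of a little integer arithmetic: for a partition of c rows, the positions of interest are (c+1)/2 and, when c is even, 1+c/2.  This is exactly the arithmetic the solutions below rely on, as a quick sketch shows.

-- Middle-row positions for partitions of c = 4 and c = 5 rows
-- (integer division truncates, which is what makes this work)
SELECT c, c1=(c+1)/2, c2=CASE c%2 WHEN 0 THEN 1+c/2 ELSE (c+1)/2 END
FROM (VALUES(4),(5)) a (c);
-- Returns (4, 2, 3) and (5, 3, 3): rows 2 and 3 for the even count,
-- row 3 twice for the odd count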

My Original Solution to Median for this Case

Aaron commented that my somewhat “elaborate” solution un-pivots the rows of interest using the CROSS APPLY VALUES approach.

WITH Counts AS
(
   SELECT ID, c
   FROM
   (
      SELECT ID, c1=(c+1)/2, c2=CASE c%2 WHEN 0 THEN 1+c/2 ELSE 0 END
      FROM
      (
          SELECT ID, c=COUNT(*)
          FROM dbo.MedianValues
          GROUP BY ID
      ) a
   ) a
   CROSS APPLY (VALUES(c1),(c2)) b(c)
)
SELECT ID=a.ID, Median=AVG(0.+b.N)
FROM
(
   SELECT ID, N
      ,rn=ROW_NUMBER() OVER (PARTITION BY ID ORDER BY N)
   FROM dbo.MedianValues a
) a
CROSS APPLY
(
   SELECT N
   FROM Counts b
   WHERE a.ID = b.ID AND a.rn = b.c
) b
GROUP BY a.ID;

It is important to note how we convert the integer Ns to a non-integer type within the AVG aggregate (the 0.+N trick promotes them to DECIMAL), to ensure no truncation occurs.  The results set for this on our sample data appears as follows, confirming the correct calculation of the median for each case.

ID   Median
1    3.000000
2    2.500000
3    7.000000
4    3.000000

Calculating a Median Using Grouping Sets

While I was playing around with GROUPING SETS the other day, I hit upon the strangest query result, which gave me an idea that relates to calculating medians.  Let’s take a look at a sample of what I found.

SELECT ID=CASE GROUPING_ID(ID) WHEN 1 THEN ID1 ELSE ID END
    ,c=CASE GROUPING_ID(ID1) WHEN 1 THEN 1+COUNT(*)/2
        ELSE (COUNT(*)+1)/2 
        END
FROM 
(
    SELECT ID, ID1=ID
    FROM dbo.MedianValues
) a
GROUP BY GROUPING SETS (ID, ID1)
HAVING GROUPING_ID(ID) = 1 OR (GROUPING_ID(ID1) = 1 AND COUNT(*)%2 = 0)
ORDER BY ID, c;

This query produces a results set consisting of all the partitioning IDs with one row for IDs 1 and 4 (where c=3) and two rows for IDs 2 and 3 (where c=2 and 3 respectively).  It works because GROUPING SETS (ID, ID1) aggregates the same rows twice, once per column, so each ID starts out with two grouped rows; the HAVING clause then discards the second copy unless the row count is even, while the two CASE branches compute the lower and upper middle positions.  Take a look at that description and compare it to my description of how to calculate median, and see if you too are not intrigued.

ID   c
1    3
2    2
2    3
3    2
3    3
4    3

Coincidentally, this is exactly what I was trying to achieve by my “elaborate” (according to Aaron) method that used CROSS APPLY VALUES!  One could argue that this is even more elaborate, and I would tend to agree.  So naturally the thought struck me that perhaps I could use this approach to calculate a median.

My initial attempts at JOINing these six rows back to the full set with a ROW_NUMBER() added yielded me no joy, however, as the result was slower than my original.

Once again inspiration struck and I had the idea to aggregate the GROUPING SETS so that I only had one row for each ID, after which I could do a range (BETWEEN) JOIN.  So my aggregation now looks like this.

SELECT ID, c1=MIN(c), c2=MAX(c) 
FROM
(
    SELECT ID=CASE GROUPING_ID(ID) WHEN 1 THEN ID1 ELSE ID END
        ,c=CASE GROUPING_ID(ID1) WHEN 1 THEN 1+COUNT(*)/2
            ELSE (COUNT(*)+1)/2 
            END
    FROM 
    (
        SELECT ID, ID1=ID
        FROM dbo.MedianValues
    ) a
    GROUP BY GROUPING SETS (ID, ID1)
    HAVING GROUPING_ID(ID) = 1 OR (GROUPING_ID(ID1) = 1 AND COUNT(*)%2 = 0)
) a
GROUP BY ID;

Which produces a set that looks like this:

ID  c1  c2
1   3   3
2   2   3
3   2   3
4   3   3

Playing around with JOINing this back to the original data set led me to using a CROSS APPLY instead of a JOIN.  So what I ended up with, which calculates the median quite nicely, is the following query.

WITH RowsOfInterest AS
(
    SELECT a.ID, N
    FROM 
    (
        SELECT ID, c1=MIN(c), c2=MAX(c)
        FROM
        (
            SELECT ID=CASE GROUPING_ID(ID) WHEN 1 THEN ID1 ELSE ID END
                ,c=CASE GROUPING_ID(ID1) WHEN 1 THEN 1+COUNT(*)/2
                    ELSE (COUNT(*)+1)/2 
                    END
            FROM 
            (
                SELECT ID, ID1=ID
                FROM dbo.MedianValues
            ) a
            GROUP BY GROUPING SETS (ID, ID1)
            HAVING GROUPING_ID(ID) = 1 OR (GROUPING_ID(ID1) = 1 AND COUNT(*)%2 = 0)
        ) a
        GROUP BY ID
        HAVING COUNT(*) <= 2
    ) a
    CROSS APPLY
    (
        SELECT ID, N
        FROM 
        (
            SELECT ID, N
                ,rn=ROW_NUMBER() OVER (PARTITION BY ID ORDER BY N)
            FROM dbo.MedianValues b
        ) b
        WHERE a.ID = b.ID AND b.rn BETWEEN a.c1 AND a.c2
    ) b
)
SELECT ID, Median=AVG(0.+N)
FROM RowsOfInterest a
GROUP BY a.ID;

How’s that for elaborate, Aaron?  Initial timings appeared to be pretty decent, but more importantly what I was seeing was a drastically reduced IO count and CPU usage.  Clearly this was a promising vector of attack, although the truth will only get sorted out using SQL Profiler to average the results of multiple runs (the elapsed timings were very close).

One rather annoying note on this solution.  When I first hit upon it, I could have sworn I had something that was consistently faster than my original solution.  Unfortunately due to a PC reboot (pesky Windows updates!) I lost that one and I was never able to quite achieve the speed I was seeing.  This could be an interesting assignment for those so inclined.

Simplifying the Elaborate GROUPING SETS Approach

After thinking a little more about this, I realized it might be possible to greatly simplify this aggregation to produce just the four rows I needed.  Silly of me really not to think of it in the first place, but then my brain may not be wired that way.

So here’s another method to get the median using a simpler initial aggregation to get ID, c1 and c2 just like the query in the prior section.

WITH RowsOfInterest AS
(
    SELECT ID, c=COUNT(*)
    FROM dbo.MedianValues
    GROUP BY ID
)
SELECT ID=ID, Median=AVG(0.+N)
FROM
(
    SELECT a.ID, N
    FROM 
    (
        SELECT ID, c1=(c+1)/2, c2=CASE c%2 WHEN 0 THEN 1+c/2 ELSE (c+1)/2 END
        FROM RowsOfInterest
    ) a
    JOIN 
    (
        SELECT ID, N
            ,rn=ROW_NUMBER() OVER (PARTITION BY ID ORDER BY N)
        FROM dbo.MedianValues b
    ) b
    ON a.ID = b.ID AND b.rn BETWEEN a.c1 AND a.c2
) a
GROUP BY a.ID;

This time I used a JOIN instead of a CROSS APPLY because initial timings seemed to indicate a slight advantage.  This method also exhibited a serious reduction in IO counts and CPU usage over my original solution.

You’ll note that this method is quite similar to my original approach except that it doesn’t CROSS APPLY VALUES and doesn’t do the pre-aggregation in the second solution either.  The main difference from the original solution is the method of JOINing, which is to use the BETWEEN range selector.

The One Million Row Test Harness

In my original Simple Talk article I developed a one million row test harness that can be used to test a median query, and I have reproduced that below exactly as before.

TRUNCATE TABLE dbo.MedianValues;

DECLARE @N INT = 10000; -- 10,000 = ~1,000,000 rows

WITH Tally (n) AS
(
   SELECT TOP (@N) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
   FROM sys.all_columns a CROSS JOIN sys.all_columns b
)
INSERT INTO dbo.MedianValues
SELECT a.n, ABS(CHECKSUM(NEWID()))%@N
-- Always create exactly 100 IDs from the Tally table
FROM (SELECT TOP 100 n FROM Tally) a
-- Generate either an odd or even number of rows within IDs
CROSS APPLY
(
   SELECT TOP (@N-a.n%2) n
   FROM Tally
)b;

This actually generates 999,950 rows into our sample data table: the TOP (@N-a.n%2) term produces 9,999 rows for each of the 50 odd-numbered IDs and 10,000 rows for each of the 50 even-numbered ones.

The SQL Profiler Test

My initial timings using SET STATISTICS IO, TIME ON were producing pretty fine numbers for IO and CPU, but pretty close scores for elapsed times.  So the only way to really be sure which of these three queries is the fastest is to run each batch multiple times (I chose ten) and then average the results.  I won’t include the Profiler test harness here because this is just a blog, and I’m sure if you’ve read my previous blog on the One Million Row Test Harness, you’re capable of doing this yourself given everything we’ve given you above.

Here are the raw and averaged SQL Profiler results for the three queries, with the last row being the average of the ten prior.

Solution #1: Original method     Solution #2: Using Grouping Sets     Solution #3: Simplified aggregation
CPU     Reads       Duration     CPU     Reads   Duration             CPU     Reads   Duration
4,943   2,006,286   1,585        1,622   5,843   1,663                2,840   5,674   1,147
4,759   2,006,286   1,638        1,826   5,843   1,663                2,979   5,674   1,106
4,820   2,006,286   1,575        1,715   5,843   1,662                2,682   5,674   1,326
4,836   2,006,286   1,654        1,732   5,843   1,672                2,932   5,674   1,353
4,822   2,006,286   1,608        1,747   5,843   1,620                2,949   5,674   1,190
4,787   2,006,286   1,598        1,701   5,843   1,659                2,729   5,674   1,161
4,788   2,006,286   1,669        1,716   5,843   1,638                2,872   5,674   1,175
5,104   2,006,286   1,574        1,716   5,843   1,538                3,026   5,674   1,244
4,698   2,006,286   1,569        1,700   5,843   1,665                2,840   5,674   1,296
4,632   2,006,286   1,571        1,747   5,843   1,663                2,792   5,674   1,195
4,819   2,006,286   1,604        1,722   5,843   1,644                2,864   5,674   1,219

It is also instructive to look at a closer comparison of the averages.

                             Averages
Solution                     CPU     Reads       Duration
#1: Original method          4,819   2,006,286   1,604
#2: Using Grouping Sets      1,722   5,843       1,644
#3: Simplified aggregation   2,864   5,674       1,219
Best solution                #2      #3          #3
New #2 %O/(U) #1             -64%    -100%       3%
New #3 %O/(U) #1             -41%    -100%       -24%

You can see how solution #3 was the best performer in terms of IO (Reads) and Duration (Elapsed MS).  Solution #2 had the edge in CPU and was nearly tied in duration with solution #1.  Overall my take on these results is that they are both better than my original solution, but of the two #3 is probably the best.

The percentage rows indicate that solution #3 was 41% better than (improved over) solution #1 in CPU, while solution #2 was 64% better than solution #1.  Both new solutions pretty much ripped solution #1 apart in terms of IO.  And probably the most important statistic shows that while solutions #1 and #2 were nearly indistinguishable in duration, solution #3 was about 24% faster than both.

The Final Word

The two new solutions for calculating the median within partitioned sets on a heap (no indexing on the table) seem to have delivered a slight edge over the previous benchmark case.  It is always gratifying to do something just a little bit better, even when you’re kicking yourself for not thinking of doing it that way in the first place.

Performance testing was performed on:

  • SQL Server 2012 64 bit
  • Windows 7 64 bit with 12GB of memory installed
  • Intel Core i5-3230M CPU @ 2.60 GHz

I did not attempt to test these solutions using either of the INDEXing schemes proposed by Aaron in his comparative article, because my gut tells me that they’re probably no better than the ones that won out in those particular scenarios.  Since this is just a blog, we’ll have to settle today for seeing just this one case.

And to Aaron, if you happen to be reading this blog at some time, my little digs at you calling my solution “elaborate” were all in good fun.  Actually I considered it quite a badge of honor that you picked my solution in your comparative study.

Thanks folks as always for listening.  Hope the solutions you find here and in my other blogs are helpful to you some day.

Follow me on Twitter: @DwainCSQL

© Copyright Dwain Camps 08 Apr 2015.  All rights reserved.