Tag Archives: SQLServer

SQL Server: What, no arrays? Table valued types/parameters, functions and so forth.

When working in SQL, think in tables: an array of integers is just a table with a single integer column. You can define what you might think of as an array type; these are called table valued types.

Some databases allow a type to be declared with the same type as a table row or column (e.g. Oracle's %ROWTYPE and %TYPE); SQL Server doesn't, which seems a shame to me at times as I hate re-defining the same thing again in a table valued type, but that is life.

What struck me today was that I was trying to return a table valued type from a function; no, it wouldn't have it, so I had to re-define the same thing yet again.
This is fine:

CREATE FUNCTION tim(@t NVARCHAR(MAX))
RETURNS @res TABLE (rescol1 INT, rescol2 NVARCHAR(MAX))
AS
BEGIN
	INSERT INTO @res (rescol1, rescol2) VALUES (9, 'hello')
	RETURN
END

But I already have a table valued type defined like so:

CREATE TYPE restype AS TABLE (rescol1 INT, rescol2 NVARCHAR(MAX))

And I can write functions like:

CREATE FUNCTION tim2(@t NVARCHAR(MAX))
RETURNS NVARCHAR(MAX)
AS
BEGIN
	RETURN 'hello'
END

But it won’t allow:

CREATE FUNCTION tim3(@t NVARCHAR(MAX))
RETURNS restype
AS
BEGIN
	DECLARE @res AS restype
	INSERT INTO @res (rescol1, rescol2) VALUES (9, 'hello')
	RETURN @res
END

The error is [Must declare the scalar variable “@res”.]
Any other combination with restype doesn't work either, because written this way I am declaring a scalar valued function, and a table valued type isn't allowed as a scalar return type.

But what if I want to use restype in my calling T-SQL? Well, you can do this:

DECLARE @vals restype
INSERT INTO @vals
SELECT * FROM tim('t val')

Great, now let’s pass one to a function.

CREATE FUNCTION bill(@tvp restype READONLY)
RETURNS NVARCHAR(MAX)
AS
BEGIN
	DECLARE @r AS NVARCHAR(MAX)
	SELECT TOP 1 @r = CONVERT(NVARCHAR(MAX), rescol1) + rescol2
	FROM @tvp
	RETURN @r
END
GO

DECLARE @vals restype
INSERT INTO @vals SELECT * FROM dbo.tim('t val')
SELECT dbo.bill(@vals)

Notice that table valued parameters have to be READONLY.

Alternatives? Temporary tables spring to mind, of course, but I find them slightly more tricky: you get fatal errors if they already exist, so do anything in a loop and kaboom; you'll have to code that a little more carefully, as in the sketch below.
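For what it's worth, a guard along these lines (a minimal sketch, using a hypothetical #vals temp table) avoids the "already exists" error when re-running a script or looping:

IF OBJECT_ID('tempdb..#vals') IS NOT NULL
	DROP TABLE #vals -- drop any leftover from a previous run or loop iteration

CREATE TABLE #vals (rescol1 INT, rescol2 NVARCHAR(MAX))
INSERT INTO #vals SELECT * FROM dbo.tim('t val')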

Hook an ASP.NET MVC 2 Web Application into an existing SQL Server 2008 R2 instance

I was revisiting project templates in Visual Studio 2010, and while playing around I created an ASP.NET MVC 2 Web Application. I then wanted to hook into my existing SQL Server 2008 R2 instance rather than the SQLEXPRESS instance (the default the project template is set up to use), because I was getting this error:
“Unable to connect to SQL Server database.”
“A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 – Error Locating Server/Instance Specified)”

This was because the SQLEXPRESS instance was disabled. Of course, once I enabled it, the application created ASPNETDB.MDF in the App_Data folder and registered a new user just fine. But I didn't want this; I wanted a "real" DB to play about with, so I disabled the SQLEXPRESS instance again and deleted the ASPNETDB.MDF file.
I changed the default connection string from:
“data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true”
to:
“Data Source=localhost;Integrated Security=True;Initial Catalog=aspnetdb;”
Great, but I then needed to create the aspnetdb database. That was easy: I used aspnet_regsql.exe (the ASP.NET SQL Server Setup Wizard). You can configure the name you want for your database (aspnetdb in my case) in the wizard, or the application takes command line arguments too.
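For example, something along these lines should create the schema on a local default instance using Windows authentication (the server name and options here are just my assumptions; run aspnet_regsql.exe -? for the full list of switches):

aspnet_regsql.exe -S localhost -E -A all -d aspnetdb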
I then had a “real” DB I could play about with in Management Studio without attaching the ASPNETDB.MDF or any other fiddling about.

DATALENGTH function

If you get the error “Argument data type XXXX is invalid for argument 1 of len function.” then you’ll need to use the DATALENGTH function instead.
I typically get this when I want to know how many bytes (how big) an image or binary field is in SQL Server.
Note: For strings the trailing spaces WILL be counted, which isn’t the case with LEN.
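A quick illustration of the difference (my own throwaway example; remember DATALENGTH returns bytes, so NVARCHAR reports two bytes per character):

DECLARE @s NVARCHAR(10) = N'hello  '    -- two trailing spaces
SELECT LEN(@s) AS LenChars,             -- 5, trailing spaces ignored
       DATALENGTH(@s) AS SizeInBytes    -- 14, 7 characters x 2 bytes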

See also DATALENGTH (Transact-SQL)
and LEN (Transact-SQL)

SQL Server: Linked server select query into a new table

It's always worth remembering that you can SELECT INTO a new table or a temporary table; here's an example:

SELECT [COL1], [COL2]
INTO [DB name].[Schema name].[Table name]
FROM [Source DB name].[Source Schema name].[Source Table name]

This will create the new table with columns matching the names and types of the selected columns.
Alternatively you could select into a temporary table like so:

SELECT [COL1], [COL2]
INTO #temporarytable
FROM [Source DB name].[Source Schema name].[Source Table name]

For performance reasons you may want to copy the contents of a linked server table into new or temporary tables in SQL Server, then run your complex scripts and SQL against the local instance. Here's an example of how I would do this; I am using OPENQUERY because it takes care of type inconsistencies for me:

SELECT [COL1], [COL2]
INTO [DB name].[Schema name].[Table name]
FROM OPENQUERY([Linked Server], N'SELECT * FROM SourceSchema.SourceTable')
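The same copy can also be written with a four-part name instead of OPENQUERY (a sketch; in my experience OPENQUERY copes better with awkward source types, which is why I prefer it):

SELECT [COL1], [COL2]
INTO [DB name].[Schema name].[Table name]
FROM [Linked Server].[Source DB name].[Source Schema name].[Source Table name]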

Of course here your linked server could be anything, but I’ve used this for Oracle most recently and it was so much better than trying to export the data from Oracle and import it into SQL Server.

SQL Server: Bulk updates, bulk inserts and table valued parameters for arrays

When writing bulk updates or inserts into Oracle with ODP.NET you can use PLSQLAssociativeArray (example here), and you may search around for an equivalent array type in SQL Server, but there isn't one. At first this is disappointing, then you kind of get used to the idea; hopefully this post will help you along that path.
First off I thought it was wrong not to have an array type: "Oracle does, this is stupid", I thought. After a cup of tea I put a couple of seconds' thought into it, and the best I could come up with is that all the data we use in SQL is tabular anyway, and you could represent a PLSQLAssociativeArray as a table with one column, right? So is it any loss? No, of course not, but the programmer in me is used to dealing with arrays.
However, there is a drawback that I haven't completely got over yet: you must set the type name of the SqlParameter, and it must be a type that already exists in the database, so simply generating some ad hoc SQL and firing it at the database isn't an option. But there are other options; here are some examples of bulk operations using table valued parameters:

Create a table to manipulate:

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Greetings](
	[PrimaryId] [int] NOT NULL,
	[FirstValue] [nchar](10) NULL,
	[SecondValue] [nchar](10) NULL
) ON [PRIMARY]

GO

Create a loosely typed DataTable and populate it with some data to insert into the table:

DataTable dt = new DataTable("Tim");
dt.Columns.Add(new DataColumn("FirstCol", typeof(int)));
dt.Columns.Add(new DataColumn("SecondCol", typeof(String)));
dt.Columns.Add(new DataColumn("ThirdCol", typeof(String)));

DataRow r1 = dt.NewRow();
r1["FirstCol"] = 1;
r1["SecondCol"] = "Hello";
r1["ThirdCol"] = "Hi";
dt.Rows.Add(r1);

DataRow r2 = dt.NewRow();
r2["FirstCol"] = 2;
r2["SecondCol"] = "Goodbye";
r2["ThirdCol"] = "Bye";
dt.Rows.Add(r2);

(brings back memories of .NET 1.1, doesn't it?)

Next, create a table valued type in SQL Server. Of course you could keep well defined types in your database, but for this example I just want something "temporary". The CREATE TYPE statement could also be generated from the DataTable's schema; look around, someone might already be doing this, otherwise it's easy enough to roll your own.

using (SqlCommand cmd = conn.CreateCommand())
{
    cmd.CommandType = CommandType.Text;
    cmd.CommandText =
        "BEGIN TRY"
        + " CREATE TYPE dbo.T1"
        + " AS TABLE ( [FirstCol] [int] NOT NULL, [SecondCol] [nchar](10) NULL, [ThirdCol] [nchar](10) NULL )"
        + " END TRY"
        + " BEGIN CATCH"
        + " END CATCH";
    cmd.ExecuteNonQuery();
}

By this point in execution your app would have a type defined called dbo.T1; you could of course use a temporarily generated name for this too. Note the TRY...CATCH in there; it simply ignores the error if the type already exists.
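If you would rather check first than swallow the error, an alternative sketch is to test for the type's existence with TYPE_ID:

IF TYPE_ID(N'dbo.T1') IS NULL
	CREATE TYPE dbo.T1
	AS TABLE ( [FirstCol] [int] NOT NULL, [SecondCol] [nchar](10) NULL, [ThirdCol] [nchar](10) NULL )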

Next do a bulk insert: (Note: SqlBulkCopy may be quicker, I’ve not checked)

using (SqlCommand cmd = conn.CreateCommand())
{
    SqlParameter p = new SqlParameter("@MyInputDT", SqlDbType.Structured);
    p.Value = dt;
    p.TypeName = "dbo.T1";
    cmd.Parameters.Add(p);
    cmd.CommandType = CommandType.Text;
    cmd.CommandText =
        "INSERT INTO dbo.Greetings (PrimaryId, SecondValue)"
        + " SELECT FirstCol, ThirdCol"
        + " FROM @MyInputDT";
    cmd.ExecuteNonQuery();
}

This, to me, is better than passing two arrays plus a size parameter and looping through, inserting each row. But we've got SqlBulkCopy, so you may not want to use this approach.

By this point in execution there would be two new entries in the dbo.Greetings table but I’ve deliberately missed out the FirstValue, so that will be NULL.

Let’s do a bulk update to fix it, also remove the “temporary” table valued type:

using (SqlCommand cmd = conn.CreateCommand())
{
    SqlParameter p = new SqlParameter("@MyUpdDT", SqlDbType.Structured);
    p.Value = dt;
    p.TypeName = "dbo.T1";
    cmd.Parameters.Add(p);
    cmd.CommandType = CommandType.Text;
    cmd.CommandText =
        "UPDATE Greetings"
        + " SET FirstValue = SecondCol"
        + " FROM @MyUpdDT"
        + " WHERE PrimaryId = FirstCol"
        + " BEGIN TRY"
        + " DROP TYPE dbo.T1"
        + " END TRY"
        + " BEGIN CATCH"
        + " END CATCH";
    cmd.ExecuteNonQuery();
}
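For what it's worth, the same sort of update can be tried directly in Management Studio with a declared variable of the table valued type (assuming dbo.T1 still exists at that point); a sketch using an explicit join and my own aliases:

DECLARE @MyUpdDT dbo.T1
INSERT INTO @MyUpdDT (FirstCol, SecondCol, ThirdCol)
VALUES (1, N'Hello', N'Hi'), (2, N'Goodbye', N'Bye')

UPDATE g
SET g.FirstValue = d.SecondCol
FROM dbo.Greetings g
INNER JOIN @MyUpdDT d ON g.PrimaryId = d.FirstCol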

So there's a bulk insert and a bulk update. I think this is pretty neat, but the requirement to have a well defined type is a gripe; now that I'm over the lack of an array type, I want a var type instead. Shame.

See also Table-Valued Parameters in SQL Server 2008 (ADO.NET)

SQL Server: Updating XML fields – XPath, XQuery

Further to my post SQL Server: Querying XML fields – XPath, XQuery, I’d like to expand on this and cover updating XML fields too.

Unfortunately it appears you can only do one update at a time, so often looping is required (there's a sketch of that at the end of this post), extracting data into temporary tables, or complex XPath.
I'm going to cover the latter as it's the one that isn't so well written about.

The great thing about SQL Server is that you can use the XPath string functions; I've used the concatenation and substring functions before, and here is a simple example to show how they can be used. Also of note is the neat way to include SQL variables in the XPath.

-- Create a temporary table for the example, drop if already exists

BEGIN TRY
DROP TABLE #T1
END TRY
BEGIN CATCH
END CATCH

CREATE TABLE #T1
(Val XML)

-- Stick in some XML data

INSERT INTO #T1 (Val) VALUES ('<root xmlns="http://tim.xsd">
  <branch>
    <leaf name="displayname" type="String">cycling</leaf>
    <leaf name="alternativename" type="String">is sometimes fun</leaf>
  </branch>
</root>')
INSERT INTO #T1 (Val) VALUES ('<root xmlns="http://tim.xsd">
  <branch>
    <leaf name="displayname" type="String">running</leaf>
    <leaf name="alternativename" type="String">is sometimes fun</leaf>
  </branch>
</root>')

-- Update the cycling row using a SQL variable and some functions (concat and substring)

DECLARE @timvar VARCHAR(MAX)
SELECT @timvar = ' is always '

UPDATE #T1 SET Val.modify('declare namespace TIM="http://tim.xsd";
  replace value of (/TIM:root/TIM:branch/TIM:leaf[@name="alternativename"]/text())[1]
  with fn:concat(
    "Everybody knows that ",
    (/TIM:root/TIM:branch/TIM:leaf[@name="displayname"]/text())[1],
    sql:variable("@timvar"),
    fn:substring((/TIM:root/TIM:branch/TIM:leaf[@name="alternativename"]/text())[1], 14, 3))
')
WHERE Val.query('declare namespace TIM="http://tim.xsd";(/TIM:root/TIM:branch[1]/TIM:leaf[@name="displayname"]/text())[1]')
.value('.', 'NVARCHAR(MAX)') = 'cycling'

-- Check the result

SELECT * FROM #T1

And the result is:

<root xmlns="http://tim.xsd">
  <branch>
    <leaf name="displayname" type="String">cycling</leaf>
    <leaf name="alternativename" type="String">Everybody knows that cycling is always fun</leaf>
  </branch>
</root>

<root xmlns="http://tim.xsd">
  <branch>
    <leaf name="displayname" type="String">running</leaf>
    <leaf name="alternativename" type="String">is sometimes fun</leaf>
  </branch>
</root>
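And for completeness, the looping approach mentioned at the top of this post: because modify() replaces a single value per call, you can keep applying it while any matching node remains. A rough sketch against the same #T1 table (the condition and replacement text are my own):

WHILE EXISTS (SELECT 1 FROM #T1
  WHERE Val.exist('declare namespace TIM="http://tim.xsd";
    /TIM:root/TIM:branch/TIM:leaf[@name="alternativename"][. = "is sometimes fun"]') = 1)
BEGIN
	UPDATE #T1 SET Val.modify('declare namespace TIM="http://tim.xsd";
	  replace value of (/TIM:root/TIM:branch/TIM:leaf[@name="alternativename"][. = "is sometimes fun"]/text())[1]
	  with "is always fun"')
	WHERE Val.exist('declare namespace TIM="http://tim.xsd";
	  /TIM:root/TIM:branch/TIM:leaf[@name="alternativename"][. = "is sometimes fun"]') = 1
END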

SQL Server: Read only views via GROUP BY and UPDATE FROM pitfalls #sql #sqlserver

I often hit a pitfall when doing an UPDATE FROM where a table is INNER JOINed with itself but I only want to update one side of the join: SQL Server complains that it doesn't know which Table_1 is meant. Why would I be doing this? Imagine a table with a parent child relationship, so an id column and a parent id column that contains the id of the parent record. If I wanted to update children based on some criteria of the parent, I would look at doing an UPDATE FROM with the table inner joined to itself on id equals parent id, with the WHERE criteria I wanted.

I get around this by using views, but it can be tricky to make sure I only update the child records and not the parent records, so I employ a mix of table and view. Unfortunately there is no such thing as a read only query or a read only view, but there are a few tricks to make them non-updatable. For a view, a GROUP BY (or DISTINCT) is one way, and that is what I'll show here.

Here are a couple of examples of things that work and things that don't. Unfortunately I came up with the example before writing this article, so it isn't specific to a parent child relationship, but it still makes the point (the parent/child version is sketched at the end):

CREATE TABLE [dbo].[Table_1](
	[id] [int] NULL,
	[val] [int] NULL
) ON [PRIMARY]
GO

-- Three rows.
INSERT INTO Table_1 (id, val) VALUES (1, 5)
INSERT INTO Table_1 (id, val) VALUES (2, 10)
INSERT INTO Table_1 (id, val) VALUES (3, 15)
GO

CREATE VIEW View_2
AS
SELECT id, val FROM Table_1 GROUP BY id, val
GO

-- Fails because of the GROUP BY; effectively the view is read only. Without the GROUP BY this would work.
UPDATE View_2
SET val = 1

-- Fails: which Table_1 is to be updated? It doesn't make sense, does it?
UPDATE Table_1
SET val = 1
FROM Table_1 t1
INNER JOIN Table_1 t2
ON t1.Id != t2.Id
WHERE t1.id = 1

-- Only 1 row affected. View_2 is not updated, but NOT because of the GROUP BY; it is not updated because we are updating Table_1.
-- The query actually returns two rows (or all three, if you see what I mean); try it for yourself.
UPDATE Table_1
SET val = 1
FROM Table_1 t1
INNER JOIN View_2
ON t1.Id != View_2.Id
WHERE t1.id = 1

-- The exception to this is if Table_1 isn't in the FROM part. So the following statement updates all rows even though you might think View_2 should make this fail; nasty!
UPDATE Table_1
SET val = 1
FROM View_2

An alternative to a view is to use a common table expression (the WITH statement), like so:

WITH v AS (SELECT id FROM Table_1)
UPDATE Table_1
SET val = 1
FROM Table_1 t1
INNER JOIN v
ON t1.Id != v.Id
WHERE t1.id = 1
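
And applying the same trick to the parent/child scenario from the opening paragraph; the table and column names below are hypothetical:

-- Hypothetical parent/child table: ParentId holds the id of the parent record
CREATE TABLE [dbo].[Nodes](
	[id] [int] NOT NULL,
	[ParentId] [int] NULL,
	[val] [int] NULL
) ON [PRIMARY]
GO

-- Update only the child rows whose parent matches some criteria (here: parent val = 5)
WITH parents AS (SELECT id, val FROM Nodes)
UPDATE Nodes
SET val = 1
FROM Nodes child
INNER JOIN parents p
ON child.ParentId = p.id
WHERE p.val = 5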