Search This Blog

Friday, 15 February 2008

Building Time Zone awareness - Date Time types

I have been working for the last two years on projects which involve synchronising data between web applications, and often the date and time on the web application used as the source of the replication is not in the same time zone as the destination web application. Sometimes it is necessary that some portion of the date time values moved across these applications be time zone aware. In other words, some date time values need to be converted into the local time of the server they land on, while others need to be moved as absolute times (the same as specified on the source).

For example:

Consider a table with four columns on a web application in the UK, running SQL Server 2000:
ID - identity
Name - VARCHAR(100)
sellingDate - datetime -- The date and time on which the product should start selling
ModifiedDate - datetime -- The date and time on which the product record or data was last modified

Now when I move this data across to a web application in India, the selling date should still remain at the absolute date and time specified, but the last modified time should be converted to the local time of the destination time zone.

This is quite a constraint, since you end up having a business rule for every datetime column of your database, and this rule is an ad hoc one determined by functionality rather than by any real type-based information. It would be great to distinguish these time zone aware columns at the database level.

Another problem is that when a .Net DateTime value is serialised it has an extra time zone offset appended at the end. When passed to SQL 2000 parameters this was not usable in its raw string form (say when passed inside an XML fragment parameter containing the value).

In SQL 2008 we have new date and time data types as below:
DATE - allows only dates; accuracy is one day
TIME - allows only times, to an accuracy of 100 nanoseconds
SMALLDATETIME - allows dates and times; accuracy is one minute
DATETIME - allows date and time as before with roughly 3ms accuracy, over the range 01-01-1753 to 31-12-9999
DATETIME2 - allows specifying date and time like the DATETIME type but to an accuracy of 100 nanoseconds and over a wider date range, in the form YYYY-MM-DD hh:mm:ss[.nnnnnnn]
DATETIMEOFFSET - allows users to specify a date and time with a time zone offset

With these date and time types (four of them new) in SQL Server 2008, time zone aware applications can be designed and developed more efficiently and easily.
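As a quick sketch of how the example table above might use the new types (the table and column names follow the earlier example; this is illustrative rather than a recommendation):

```sql
-- Hypothetical re-declaration of the example product table with the new types
CREATE TABLE Product
(
    ID INT IDENTITY PRIMARY KEY,
    Name VARCHAR(100),
    SellingDate DATETIMEOFFSET, -- absolute instant, carries its own offset
    ModifiedDate DATETIME2      -- server-local time, converted when replicated
);

INSERT INTO Product (Name, SellingDate, ModifiedDate)
VALUES ('Widget', '2008-03-01 09:00:00 +00:00', SYSDATETIME());
```

The selling date now carries its time zone with it, so replication no longer needs an ad hoc per-column business rule to decide which values are absolute.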

Now that we have seen the date and time types, let's see what functions are available. It's great to see the introduction of DATETIMEOFFSET, but it would be even better to do some playing around with it at the database level.

Functions which return DATETIME values (these are as present in SQL 2000 and 2005): GETDATE(), CURRENT_TIMESTAMP and GETUTCDATE() are still available.

Functions which return DATETIME2 values, otherwise referred to as high precision date time functions: SYSDATETIME(), SYSUTCDATETIME() and SYSDATETIMEOFFSET() (which returns a DATETIMEOFFSET type value).
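A quick side-by-side look at what these return (a sketch; the exact fractional precision displayed may vary by build):

```sql
SELECT GETDATE()           AS OldDateTime,      -- DATETIME, ~3ms precision
       SYSDATETIME()       AS HighPrecLocal,    -- DATETIME2, local time
       SYSUTCDATETIME()    AS HighPrecUtc,      -- DATETIME2, UTC
       SYSDATETIMEOFFSET() AS LocalWithOffset;  -- DATETIMEOFFSET, local time + offset
```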

Other functions from previous versions such as DATEPART, DATEDIFF and DATEADD are still available, but two new useful functions which complement the DATETIMEOFFSET data type have been introduced.



SELECT SWITCHOFFSET(SYSDATETIMEOFFSET(), '+08:00');

The SWITCHOFFSET function converts a value of DATETIMEOFFSET type from its stored time zone offset to the specified time zone offset.


The TODATETIMEOFFSET function converts a DATETIME type value into a DATETIMEOFFSET type value with the offset specified.
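For instance (a minimal sketch; the offset chosen here is arbitrary):

```sql
-- Stamp a plain DATETIME value with the India Standard Time offset
SELECT TODATETIMEOFFSET(GETDATE(), '+05:30');
```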


Another useful function for date and time values is ISDATE(), which validates that the value passed in the parameter is a valid date time value. There are several different scenarios in which to use it, but in general it looks something like the below:

SELECT ISDATE(NULL); --returns 0

USE AdventureWorks;
SELECT ISDATE(s.OrderDate) FROM Sales.SalesOrderHeader s;
-- returns 1 for every row

All the information above is based on the November CTP for Katmai. Looking at all of the above, it shows good prospects for developers handling dates and times.

Tuesday, 12 February 2008

Change Data Capture (CDC) - I


Oracle 9i has had this feature since 2003 as far as I can recollect, but being a SQL Server developer I am quite pleased it is available in Katmai. To give a bit of background as to why CDC is a crucial feature, we will try to understand ETL in data warehousing. A crucial function of data warehousing is extracting data on a regular basis, applying some kind of transform to it if necessary, and storing it in the warehouse. So if I have a database replication set up, I would run DTS packages which replicate and transform all data between the transactional database and the warehouse database. If you are a DBA or a developer involved in a data integration project that involves the regular extraction and transportation of a large amount of data from one system to another system or systems, you could consider CDC to help reduce your ETL time. The point is it is certainly more efficient than our previous replication methodologies, and it is particularly effective as the database size scales up.

In SQL Server 2000 and 2005 you would have written triggers to update data into another table where the metadata on the change is stored, and then write more plumbing code to put it all together... I work on a product which has a similar concept. Believe me, it's not easy on 2000 and 2005. We were able to track data even though it was complicated, but now in Katmai we will be able to capture the details of the event which caused the data change on the relevant database table.

In Katmai, when we use Change Data Capture feature on a database table, a mirror of the tracked table schema is created with additional columns that include metadata. The metadata holds all the information about the change.

To know if CDC is enabled for a database, we could run the following query

SELECT * FROM sys.databases WHERE is_cdc_enabled = 1

Enabling the database for CDC

USE AdventureWorks


EXEC sys.sp_cdc_enable_db_change_data_capture


When you execute this command, five tables in a schema called cdc are created. In addition, two table-valued functions are added to the cdc schema of the database. These functions behave like stubs for the table-based CDC functions.

The five tables created are as below (as seen in the November CTP):

cdc.captured_columns
cdc.change_tables
cdc.ddl_history
cdc.index_columns
cdc.lsn_time_mapping

In addition to this, you will notice there are two jobs, one for capture and one for cleanup, added under the SQL Server Agent jobs for the database's CDC functionality.

CDC Schema

Enabling the table for CDC

On the AdventureWorks database, to enable CDC on the HumanResources.Employee table we use the following stored procedure:

USE Adventureworks


EXEC sys.sp_cdc_enable_table_change_data_capture
@source_schema = 'HumanResources',
@source_name = 'Employee',
@role_name = 'cdc_AdventureWorks'



When we execute this command a new capture instance is created, and a table with the name of the capture instance is created with the schema as shown in the diagram.

Note that the same columns as in the Employee table are created. In addition there are five extra columns which store metadata about the change (in the CTP these are __$start_lsn, __$end_lsn, __$seqval, __$operation and __$update_mask).

Now we update a row on the HumanResources.Employee table as below

UPDATE HumanResources.Employee

SET LoginID = 'adventureworks\guy2'

WHERE employeeid = 1

Now select the data in HumanResources_Employee_CT; we will find two rows inserted in this table, one representing the data before the update and the other the data after the update.
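The same change can also be read through the generated table-valued function rather than by querying the change table directly. A sketch is below; the capture instance name and LSN helper functions are as I understand them from the CTP, so treat them as assumptions:

```sql
-- Read all captured changes for the Employee capture instance
DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);
SELECT @from_lsn = sys.fn_cdc_get_min_lsn('HumanResources_Employee');
SELECT @to_lsn   = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_HumanResources_Employee(@from_lsn, @to_lsn, 'all');
```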

Disabling CDC on a table/capture instance

USE Adventureworks


EXEC sys.sp_cdc_disable_table_change_data_capture
@source_schema = 'HumanResources',
@source_name = 'Employee',
@capture_instance = 'HumanResources_Employee_CT'


I guess I will pause here and continue with the rest in my next blog... There are several CDC-related system functions which can be used for different purposes; I will detail each of these in my next blog, and I will also detail the schema a bit more once I get my head around it.

Download - Scripts CDC

Monday, 11 February 2008

Resource Governor

Before we understand the Resource Governor, let us consider an example where we have hosted two databases, App1 and App2, on a single SQL Server 2005 with a query execution timeout of 600 seconds. Let's say the App1 database should not allow queries to execute for more than 5 minutes (300 seconds), while App2 should allow 600 seconds. This was not easy to achieve on SQL 2005. (We may argue that we could specify a connection timeout on the client connection string in ADO.Net, but that's just a workaround.) Alright, and what if I want to give connections to the App1 database more priority than App2 connections when server resources are limited?

One of the new features in SQL 2008 (Katmai) is the Resource Governor. It is a component that allows administrators to configure pools of resources. Consider it an analogy to the Application Pool in IIS 6.

The two main components of the Resource Governor that matter are the Resource Pool and the Workload Group.

Resource Pool

The Resource Pool allows you to set server (as in the physical server on which SQL Server is installed) level resource usage limits, such as minimum and maximum CPU and memory percentages.
Using TSQL
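A minimal sketch of creating a pool (the pool name poolNorthwind matches the USING clause later in this entry; the limit values are illustrative):

```sql
-- Create a resource pool capping CPU and memory for Northwind connections
CREATE RESOURCE POOL poolNorthwind
WITH
(
    MIN_CPU_PERCENT = 0,
    MAX_CPU_PERCENT = 50,     -- cap CPU at 50% under contention
    MIN_MEMORY_PERCENT = 0,
    MAX_MEMORY_PERCENT = 50
);
```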


Workload Group.

The Workload Group allows you to control some specific SQL Server related conditions, as well as apply Resource Pools. It allows you to control settings such as connection priority, query timeout, maximum parallel requests and the other parameters mentioned below.


Using TSQL

CREATE WORKLOAD GROUP workGroupNorthwind
WITH
(
IMPORTANCE = MEDIUM, -- default is medium; it's like OS thread priority
REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 300, -- max query timeout of 5 minutes
MAX_DOP = 8 -- maximum degree of parallelism of 8
)
USING poolNorthwind;

In order for SQL Server to be able to use the Workload Groups that you've created, you need to classify each connection to determine which group that specific connection falls into. This is done through the use of a Classifier Function. Classifier functions are new in SQL Server 2008 and execute each time a new connection is made to the server. Classifier functions are scalar user-defined functions and are basically used to return the name of the target Workload Group for each user connection.

Code Example

CREATE FUNCTION ufnNorthwindRsGovMgr()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grpName SYSNAME
    IF (APP_NAME() = 'Microsoft SQL Server Management Studio')
        SET @grpName = 'workGroupNorthwind'
    ELSE
        SET @grpName = 'default'
    RETURN @grpName
END

The final step in the configuration of the Resource Governor is to assign the classifier function and activate the configuration.
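A sketch of that final step (syntax as I understand it from the CTP documentation):

```sql
-- Register the classifier function and make the configuration active
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.ufnNorthwindRsGovMgr);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```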



Using the Management Studio

The Management Studio which comes with the Developer version of the SQL 2008 CTP allows you to create the Resource Pool using the Resource Governor properties dialog, shown in the picture on this entry.

I have created a solution with the above code.

Features on SQL Server 2008 - Katmai

For the last three weeks I have been playing around with the SQL Server 2008 (Katmai) CTP. I managed to install it on Windows 2003 Server without any problems, although I would advise people not to install it alongside SQL Server 2005. If you want to use the Management Studio of the SQL 2008 CTP, be sure to use the Developer edition CTP; the Express edition only comes with some client tools. Since it is a CTP and you are only trying to get your head around the new features, it is useful to have Management Studio installed. As with any CTP I didn't know where to start, and I found that Chad Boyd has listed the new features for Katmai quite extensively on his blog. Since I am just a developer, I filtered the list down to the features that may be more useful for a developer to look at. I am planning to plough through one feature at a time and put up some information on this site for people who are interested.



  • Policy based management framework (manage via policies vs. scripts, enterprise wide support, automated monitoring/enforcement, etc.).
  • Integrate with Microsoft System Centre.
  • Extended Events (high perf lightweight tracing infrastructure, NOT sql trace, integrated with ETW).

Development Enhancements

Service Broker

  • New UI and Tools for working with (add/drop/edit functionality within SSMS, Diag tools).
  • Conversation Priority (set message ordering, send/receive impact, 10 levels).

Data Storage

  • Data compression (see above).
  • FILESTREAM attribute (get the 'best of both' functionality from BLOBs in the DB vs. BLOBs on filesystem, no more "to blob or not to blob").
  • Integrated Full Text Search (FTS fully integrated into DB engine, no external storage, no external service, more efficient and reliable costing).
  • Sparse columns (more efficient storage for 'wide' tables with many columns that repeat and don't contain data).
  • New index types (spatial indexes, hierarchical indexes, FILTERED indexes (indexes on filtered values within columns), etc.).
Reporting

  • IIS Agnostic Reporting Services Deployment (no IIS required to run RS any longer).
  • Rich text support.
  • Enhanced visualisation (graphing).
  • New Word rendering (render reports to Microsoft Word).