Friday, April 29, 2011

Recursive Query to Find All Child Nodes in SQL Server 2005+

How to find all the children of a particular parent using a recursive query:
--here we create a table variable so that we can insert some dummy records
Declare @Table Table
(
 TId int,
 ParentId int,
 Name varchar(10)
)
--insert some records
--using UNION ALL to insert more than one record
insert into @Table
Select 1,NULL,'ICT'
Union All
Select 2,1,'ICT-M1'
Union All
Select 4,1,'ICT-M2'
Union All
Select 7,2,'ICT-M1U1'
Union All
Select 8,2,'ICT-M1U2'
Union All
Select 9,4,'ICT-M2U1'
Union All
Select 10,4,'ICT-M2U2'
Union All
Select 11,7,'ICT-M1U1P1'
Union All
Select 12,7,'ICT-M1U1P2'
Union All
Select 13,8,'ICT-M1U2P1'
Union All
Select 14,8,'ICT-M1U2P2'
Union All
Select 15,9,'ICT-M2U1P1'
Union All
Select 16,9,'ICT-M2U1P2'
Union All
Select 17,10,'ICT-M2U2P1'
Union All
Select 18,10,'ICT-M2U2P2'

--variable to hold data
Declare @ChildNode varchar(1000)
Set @ChildNode='';

 
--use the standard recursive query
;with [CTE] as 
(
 --anchor query, where we find the root parents (ParentId is null)
    select TId,ParentId,Name,CAST(ISNULL(CAST(ParentId as varchar(10)),'0') As Varchar(100)) As ChildNode
    from @Table c where c.[ParentId] is null

    union all

 --recursive query, where we find all the children of the rows returned so far
    select c.TId,c.ParentId,c.Name,
    CAST( p.ChildNode +','+cast(c.TId as varchar(10) ) As Varchar(100)) As ChildNode
    from [CTE] p inner join @Table c on c.[ParentId] = p.[TId]
)
--select the child nodes for the given id
--assign all the ids into one variable
select @ChildNode=@ChildNode+','+Cast(TId as varchar(10))
from [CTE]
Cross Apply
dbo.Split(ChildNode,',')
where items=2 --2 is the parent whose subtree we want; its id appears in every descendant's ChildNode path
order by TId

select SUBSTRING(@ChildNode,2,LEN(@ChildNode))

--output
--2,7,8,11,12,13,14
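
If you only need the subtree under one known node, a leaner variant is possible: anchor the recursion at that node instead of at the root, so no path string and no Split function are needed. A minimal sketch, meant to run in the same batch as the table variable above:

--anchor the recursion at the node whose subtree we want (TId = 2 here)
;with [SubTree] as
(
    select TId, ParentId, Name
    from @Table
    where TId = 2

    union all

    select c.TId, c.ParentId, c.Name
    from [SubTree] p
    inner join @Table c on c.ParentId = p.TId
)
select TId, ParentId, Name
from [SubTree]
order by TId
--returns the same ids: 2, 7, 8, 11, 12, 13, 14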


----
--create a table-valued function to split a delimited string into rows
Create FUNCTION [dbo].[Split](@String varchar(8000), @Delimiter char(1))     
returns @temptable TABLE (items varchar(8000))     
as     
begin     
 declare @idx int     
 declare @slice varchar(8000)     
    
 select @idx = 1     
  if len(@String)<1 or @String is null  return     
    
 while @idx!= 0     
 begin     
  set @idx = charindex(@Delimiter,@String)     
  if @idx!=0     
   set @slice = left(@String,@idx - 1)     
  else     
   set @slice = @String     
  
  if(len(@slice)>0)
   insert into @temptable(Items) values(@slice)     

  set @String = right(@String,len(@String) - @idx)     
  if len(@String) = 0 break     
 end 
return     
end


--
Happy Coding

Saturday, April 16, 2011

How to insert multiple records using XML in SQL Server 2005+

How do you insert multiple records into SQL Server using XML? Please check the following example.


DECLARE @idoc int
DECLARE @doc varchar(max)
SET @doc ='
<ROOT>
 <Trans TransId="1" Add="false" Edit="true" Delete="true" View="true" Block="false">   
 </Trans>
 <Trans TransId="2" Add="1" Edit="1" Delete="1" View="1" Block="0">   
 </Trans>
</ROOT>'

--Create an internal representation of the XML document.
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc
-- Execute a SELECT statement that uses the OPENXML rowset provider.


SELECT *
Into #TempTable 
FROM OPENXML(@idoc, '/ROOT/Trans',1)
WITH 
( 
 TransId  varchar(10),
    [Add] bit,
    Edit bit,
    [Delete] bit,
    [View] bit,
    Block bit
)

Select * From #TempTable

drop table #TempTable
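
Two details worth adding: sp_xml_preparedocument allocates memory for the parsed document, so the handle should be released with sp_xml_removedocument when you are done, and in a real scenario you would usually insert straight into a permanent table. A minimal sketch (dbo.TransPermission is a made-up table name):

--insert the parsed rows straight into a permanent table (dbo.TransPermission is hypothetical)
INSERT INTO dbo.TransPermission (TransId, [Add], Edit, [Delete], [View], Block)
SELECT TransId, [Add], Edit, [Delete], [View], Block
FROM OPENXML(@idoc, '/ROOT/Trans', 1)
WITH 
( 
 TransId varchar(10),
 [Add] bit,
 Edit bit,
 [Delete] bit,
 [View] bit,
 Block bit
)

--free the memory used by the internal representation of the XML document
EXEC sp_xml_removedocument @idoc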

--
happy coding 

Tuesday, December 14, 2010

Performance Comparison Between for, while and foreach Loops

Today I published a post about converting a List to a DataTable, but one of my seniors told me to try to avoid the foreach loop.

So I googled/binged the topic.

Here are some figures on the subject.

Using the System.Diagnostics.Stopwatch class I ran some tests. 100,000 iterations in a for loop that did nothing inside took me 0.0003745 seconds. This was the code for the loop:
 
for (int i = 0; i < 100000; i++) ;

The while loop resulted in 0.0003641 seconds, which is pretty much the same as the for loop. Here is the code I used:

int i=0;
while (i < 100000)
 i++;

 
The foreach loop has a slightly different purpose. It is meant for iterating through some collection that implements IEnumerable. Its performance is much slower; my test resulted in 0.0009076 seconds with this code:

int[] test = new int[100000];
foreach (int i in test) ;

foreach creates an instance of an enumerator (returned from GetEnumerator) and that enumerator keeps state throughout the course of the foreach loop. It then repeatedly calls MoveNext() on the enumerator and runs your code for each object it returns.


So, it seems that while is the fastest looping technique among the three available in C#, for a given amount of processing within the loop. Right?

It varies. while and for have pretty much the same results; while had a slight advantage, but not a great one.


I'm not an expert, but I have the feeling that a while and a for loop, once compiled to MSIL, are probably the exact same thing.

And I wouldn't be surprised if foreach were faster when iterating through objects, since the optimizer can "expect" what's going to happen... So I'd go with foreach being faster if you can use it, and the two others being the same thing if foreach isn't applicable.

My logic here is that if you're looping through a collection and you use a for loop, you're going to have to use the index of the collection, which, depending on the implementation, could be a minor performance hit to "seek" the object, as opposed to going through a well-written iterator.

Just an example. So really: I wouldn't care too much about the performance of these loops. This isn't C/C++, and when you compile, it's not native code (at first). So it is safe to assume that the solution that seems the most efficient "logically" will be so in practice.

foreach is slower for a number of reasons. One is that it uses the IEnumerable interface, which requires some casting (assuming you aren't using a generic collection). My test above seemed to agree with that as well. A simple for/while/do loop is pretty much as simple as it gets.

As for the MSIL code, let's take a look. This is the MSIL for the for loop:

  IL_0000:  nop
  IL_0001:  ldc.i4.0
  IL_0002:  stloc.0
  IL_0003:  br.s       IL_0009
  IL_0005:  ldloc.0
  IL_0006:  ldc.i4.1
  IL_0007:  add
  IL_0008:  stloc.0
  IL_0009:  ldloc.0
  IL_000a:  ldc.i4     0x186a0
  IL_000f:  clt
  IL_0011:  stloc.1
  IL_0012:  ldloc.1
  IL_0013:  brtrue.s   IL_0005

And here it is for the while loop:

  IL_0000:  nop
  IL_0001:  ldc.i4.0
  IL_0002:  stloc.0
  IL_0003:  br.s       IL_0009
  IL_0005:  ldloc.0
  IL_0006:  ldc.i4.1
  IL_0007:  add
  IL_0008:  stloc.0
  IL_0009:  ldloc.0
  IL_000a:  ldc.i4     0x186a0
  IL_000f:  clt
  IL_0011:  stloc.1
  IL_0012:  ldloc.1
  IL_0013:  brtrue.s   IL_0005

So yes, they are exactly identical.

Although very handy, C#'s foreach statement is actually quite dangerous. In fact, I may swear off its use entirely. Why? Two reasons: (1) performance, and (2) predictability.

Performance

Iterating through a collection using foreach is slower than with for. I can't remember where I first learned that, perhaps in Patterns & Practices: Improving .NET Application Performance, or maybe it was from personal experience. How much slower? Well, I suppose that depends on your particular circumstances.

Predictability

I was looking at the C# Reference entry for foreach today and noticed this for the first time (italics added by me):

The foreach statement is used to iterate through the collection to get the desired information, but should not be used to change the contents of the collection to avoid unpredictable side effects.
What's that all about? Let's take this as an example:

foreach (MyClass myObj in List)

Looking deeper into the C# Language Specification, the iteration variable is supposed to be read-only, though apparently that doesn't stop you from updating a property of an object. Thus, for instance, it would be illegal to assign a new value to myObj, but not to assign a new value to myObj.MyProperty.

And that's all I can find. Why are there unpredictable side effects? I don't know. But it seems best to heed Microsoft's warning.

Conclusion

Some argue that you shouldn't code for performance from the beginning, and should therefore go ahead and use foreach whenever you want so long as you don't update the values. In my experience that's hogwash: most of the code I work on goes into environments where performance is extremely important. Besides, writing a for statement requires very little extra coding compared to a foreach statement. Furthermore, if you have a lot going on inside your iteration block, it can be easy to forget and accidentally update the iteration variable inside a foreach loop. Thus I conclude: just avoid foreach altogether.

Honestly, the academically correct answer is "it's irrelevant". You can't optimize performance by somehow picking "the best loop". If you're doing performance optimization you should start with a Big O analysis of your algorithms and with profiling. You'd be amazed at how much faster keeping around a dictionary for lookups, or using a sorted list plus binary search, is than looping over a list every time you need to find an object.

Loop micro-tuning will give you unnoticeable performance increases at the cost of maintainability and programmer time, and the effort you're putting into such small performance boosts is better spent on a proper design and implementation. Let's take a real-world example, finding duplicates in a list, to illustrate my point. First, let's assume that comparing two items is O(1). We can implement this any number of ways, two of which are:

A) use nested loops with i on the outer loop and j on the inner loop, when list[i] == list[j] push the pair onto a list of duplicates.
B) copy the list to tmpList.  quicksort tmpList.  iterate over the list with i, when list[i] == list[i+1] push the pair onto the list of duplicates

No amount of loop optimization can change the fact that A runs in O(n^2) while B runs in O(2n + n log n), i.e. O(n log n).

Convert List to DataTable

One day I was confused about how to convert a List collection to a DataTable object. I was wondering how to iterate column-wise and row-wise over the list to make some calculations.

I found the solution, which is posted below:

public DataTable ListToDataTable<T>(IEnumerable<T> list)
{
    var dt = new DataTable();

    // one column per public property of T
    foreach (var info in typeof(T).GetProperties())
    {
        dt.Columns.Add(new DataColumn(info.Name, info.PropertyType));
    }
    // one row per list item, copying each property value across
    foreach (var t in list)
    {
        var row = dt.NewRow();
        foreach (var info in typeof(T).GetProperties())
        {
            row[info.Name] = info.GetValue(t, null) ?? DBNull.Value;
        }
        dt.Rows.Add(row);
    }
    return dt;
}

--
Happy coding.

Wednesday, May 19, 2010

Age calculation with SQL Server

There seem to be many different methods suggested for calculating an age in SQL Server. Some are quite complex, but most are simply wrong. This is by far the simplest accurate method that I know.

Declare @Date1 datetime
Declare @Date2 datetime


Select @Date1 = '15Feb1971'
Select @Date2 = '08Dec2009'

select CASE
    WHEN dateadd(year, datediff(year, @Date1, @Date2), @Date1) > @Date2
    THEN datediff(year, @Date1, @Date2) - 1
    ELSE datediff(year, @Date1, @Date2)
END as Age
 
--
happy coding 

Monday, April 19, 2010

[WPF] How to assign a dynamic resource from code-behind?

When working on WPF projects, you constantly need to assign resources to user interface controls. When you work in XAML, it's pretty simple: you just need to use the MarkupExtension named StaticResource (or DynamicResource if the resource is going to be modified):

<Button x:Name="btn" Content="Find Position" Click="Button_Click" Background="{DynamicResource brush}" />
But how do you do the same using code-behind? The key is to use the method SetResourceReference (http://msdn.microsoft.com/en-us/library/system.windows.frameworkelement.setresourcereference.aspx):

<Window.Resources>
    <SolidColorBrush x:Key="brush" Color="Red" />
</Window.Resources>


this.btn.SetResourceReference(BackgroundProperty, "brush");
As you can see, it's really simple to use: you define the resource, you define the control and, in code-behind, you call the method SetResourceReference with the following parameters:
  • the DependencyProperty on which the resource will be applied
  • the name of the resource

Happy coding !

Wednesday, February 24, 2010

Reset Identity Column Value in SQL Server

If you are using an identity column on your SQL Server tables, you can set the next insert value to whatever value you want. An example is if you wanted to start numbering your ID column at 1000 instead of 1.


It would be wise to first check what the current identity value is. We can use this command to do so:


DBCC CHECKIDENT ('tablename', NORESEED) 
 
For instance, if I wanted to check the next ID value of my orders table, I could use this command:


DBCC CHECKIDENT (orders, NORESEED) 
 
To set the value of the next ID to be 1000, I can use this command:
DBCC CHECKIDENT (orders, RESEED, 999)

Note that the next value will be whatever you reseed with + 1, so in this case I set it to 999 so that the next value will be 1000.


Another thing to note is that you may need to enclose the table name in single quotes or square brackets if you are referencing by a full path, or if your table name has spaces in it. (which it really shouldn’t)


DBCC CHECKIDENT ('databasename.dbo.orders',RESEED, 999)

Monday, February 15, 2010

Custom Paging in GridView Without ObjectDataSource

Here is all we need to use custom paging in a GridView without an ObjectDataSource.

Here I am going to explain my code, which is used for custom paging of a GridView. What you will need to do is use your own data source stored procedure or query with this paging.

All the magic lies in SQL Server 2005's ROW_NUMBER() function. A simple query for this GridView's data source is:


Select Row, ID, Name
From
(
    Select ROW_NUMBER() OVER (ORDER BY ID) As Row, ID, Name
    from table1
) AS A
Where Row >= @PageIndex * @PageSize
and Row < (@PageIndex + 1) * @PageSize;
--here @PageIndex and @PageSize are passed as parameters from gridview1.PageIndex and gridview1.PageSize
    

If you are not familiar with the ROW_NUMBER function, then create a temp table that uses Row as a primary key with an auto-increment number, and then use an INSERT...SELECT statement; this will work in all databases. A concrete stored procedure is sketched below.
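
To make this concrete, here is a minimal sketch of the paging query wrapped in a stored procedure (the procedure name and table1 are placeholders). It also returns the total row count, which the custom grid below needs for its VirtualItemCount property:

CREATE PROCEDURE dbo.GetPagedData   --hypothetical name
    @PageIndex int,                 --gridview1.PageIndex
    @PageSize  int                  --gridview1.PageSize
AS
BEGIN
    Select Row, ID, Name
    From
    (
        Select ROW_NUMBER() OVER (ORDER BY ID) As Row, ID, Name
        from table1
    ) AS A
    Where Row >= @PageIndex * @PageSize
    and Row < (@PageIndex + 1) * @PageSize;

    --total number of records, used to set VirtualItemCount on the grid
    Select COUNT(*) As TotalRows from table1;
END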

Following is the C# code for the custom GridView. I am creating a new GridView control here:

using System; 
using System.Collections.Generic; 
using System.ComponentModel; 
using System.Text; 
using System.Web; 
using System.Web.UI; 
using System.Web.UI.WebControls;  
namespace CustomPagingGridView 
{  
    [DefaultProperty("Text")]  
    [ToolboxData("<{0}:CustomePagingGrid runat=server></{0}:CustomePagingGrid>")]  
    public class CustomePagingGrid : GridView 
    {   
        public CustomePagingGrid(): base()
        {}
        
        #region Custom properties       
        // this property is used to hold the total number of records for the grid
        [Browsable(true), Category("NewDynamic")]         
        [Description("Set the virtual item count for this grid")]         
        public int VirtualItemCount         
        {             
            get
            {
                if (ViewState["pgv_vitemcount"] == null)
                    ViewState["pgv_vitemcount"] = -1;                 
                return Convert.ToInt32(ViewState["pgv_vitemcount"]);             
            }             
            set             
            {                 
                ViewState["pgv_vitemcount"] = value;             
            }         
        }         
        
        // this is used to sort the gridview columns        
        [Browsable(true), Category("NewDynamic")] [Description("Get the order by string to use for this grid when sorting event is triggered")] 
        public string OrderBy         
        {             
            get             
            {                 
                if (ViewState["pgv_orderby"] == null)                     
                    ViewState["pgv_orderby"] = string.Empty;                 
                return ViewState["pgv_orderby"].ToString();             
            }             
            protected set             
            {                 
                ViewState["pgv_orderby"] = value;             
            }         
        }            
        
        private int Index         
        {             
            get             
            {                 
                if (ViewState["pgv_index"] == null)                     
                    ViewState["pgv_index"] = 0;                 
                return Convert.ToInt32(ViewState["pgv_index"]);             
            }             
            set             
            {                 
                ViewState["pgv_index"] = value;             
            }         
        }            
        
        public int CurrentPageIndex         
        {             
            get             
            {                 
                if (ViewState["pgv_pageindex"] == null)                     
                    ViewState["pgv_pageindex"] = 0;                 
                return Convert.ToInt32(ViewState["pgv_pageindex"]);             
            }             
            set             
            {                 
                ViewState["pgv_pageindex"] = value;             
            }         
        }            
        
        private int SetCurrentIndex         
        {             
            get             
            {                 
                return CurrentPageIndex;             
            }             
            set             
            {                 
                CurrentPageIndex = value;             
            }         
        }                 
        
        
        // if this property has been set (i.e. is not -1), custom paging is needed
        private bool CustomPaging         
        {             
            get             
            {                 
                return (VirtualItemCount != -1);             
            }         
        }          
        #endregion             
        
        #region Overriding the parent methods           
        public override object DataSource         
        {             
            get             
            {                 
                return base.DataSource;             
            }             
            set             
            {                 
                base.DataSource = value;                 
                // we store the page index here so we don't lose it in databind
                CurrentPageIndex = PageIndex;             
            }         
        }     
        
        protected override void OnSorting(GridViewSortEventArgs e) 
        { 
            // We store the direction for each field so that we can work out whether the next 
            // sort should be in asc or desc order 
            PageIndex = CurrentPageIndex; 
            SortDirection direction = SortDirection.Ascending; 
            if (ViewState[e.SortExpression] != null && (SortDirection)ViewState[e.SortExpression] == SortDirection.Ascending) 
            { 
                direction = SortDirection.Descending; 
            } 
            ViewState[e.SortExpression] = direction; 
            OrderBy = string.Format("{0} {1}", e.SortExpression, (direction == SortDirection.Descending ? "DESC" : "")); 
            base.OnSorting(e); 
        } 
        }            
        
        protected override void InitializePager(GridViewRow row, int columnSpan, PagedDataSource pagedDataSource)         
        {             
            // This method is called to initialise the pager on the grid. We intercept this and override
            // the values of pagedDataSource to achieve the custom paging using the default pager supplied             
            if (CustomPaging)             
            {                 
                pagedDataSource.AllowCustomPaging = true;                 
                pagedDataSource.VirtualCount = VirtualItemCount;                 
                pagedDataSource.CurrentPageIndex = CurrentPageIndex;             
            }             
            base.InitializePager(row, columnSpan, pagedDataSource);         
        }               
        
        // here we do custom paging         
        protected override void OnPageIndexChanging(GridViewPageEventArgs e)         
        {             
            if (CustomPaging)             
            {                 
                if (this.PagerSettings.Mode == PagerButtons.NumericFirstLast || this.PagerSettings.Mode == PagerButtons.Numeric)                 
                {                     
                    base.OnPageIndexChanging(e);                 
                }                 
                else                 
                {                     
                    if (e.NewPageIndex == -1)                     
                    {                         
                        Index -= 1;                     
                    }                     
                    else if (e.NewPageIndex == 0)
                    {                         
                        Index = 0;                     
                    }                     
                    else if (e.NewPageIndex == ((int)Math.Ceiling((decimal)(VirtualItemCount) / PageSize) - 1))                     
                    {                         
                        Index = ((int)Math.Ceiling((decimal)(VirtualItemCount) / PageSize) - 1);                     
                    }                     
                    else                     
                    {                         
                        Index += 1;                     
                    }                     
                    if (Index < 0)                     
                    { 
                        Index = 0; 
                    }
                    CurrentPageIndex = Index;                     
                    e.NewPageIndex = Index;                     
                    base.OnPageIndexChanging(e);                 
                }             
            }         
        }           
    #endregion     
    } 
}
    

If you have any doubts, please feel free to ask me.

Wednesday, February 10, 2010

.NET 3.5 LANGUAGE ENHANCEMENTS

There are several .NET language enhancements to be introduced with Visual Studio 2008 including implicitly typed variables, extension methods, anonymous types, object initializers, collection initializers and automatic properties. These language enhancements, along with features like generics, are critical to the use of some of the new features, such as LINQ with the ADO.NET Entity Framework. What can be confusing is that these features are often referred to in the same conversation as LINQ. Because of this relation by association, you may be led to believe that these
features are part of LINQ. They are not; they are part of the .NET Framework 3.5 and the VB 9 and C# 3.0 languages. They are very valuable in their own rights as well as playing a huge role for LINQ.

This article will demonstrate and discuss several key language features including:

  • Automatic Property setters/getters
  • Object Initializers
  • Collection Initializers
  • Extension Methods
  • Implicitly Typed Variable
  • Anonymous Type


Automatic Properties

Since creating classes by hand can be monotonous at times, developers use code generation programs or IDE add-ins to assist in creating classes and their properties. Creating properties can be a very redundant process, especially when there is no logic in the getters and setters other than getting and setting the value of the private field. Using public fields would reduce the code required; however, public fields have some drawbacks, as they are not supported by some other features such as inherent data binding.

public class Customer
{
    private int _customerID;
    private string _companyName;
    private Address _businessAddress;
    private string _phone;

    public int CustomerID
    {
        get { return _customerID; }
        set { _customerID = value; }
    }
    public string CompanyName
    {
        get { return _companyName; }
        set { _companyName = value; }
    }
    public Address BusinessAddress
    {
        get { return _businessAddress; }
        set { _businessAddress = value; }
    }
    public string Phone
    {
        get { return _phone; }
        set { _phone = value; }
    }
}

The following shows how the same result can be achieved through automatic properties with much less code:

public class Customer
{
    public int CustomerID { get; set; }
    public string CompanyName { get; set; }
    public Address BusinessAddress { get; set; }
    public string Phone { get; set; }
}


Object Initializers

It is often helpful to have a constructor that accepts the key information that can be used to initialize an object. Many code refactoring tools help create constructors like this with .NET 2.0. However, another new feature coming with .NET 3.5, C# 3 and VB 9 is object initialization. Object Initializers allow you to pass in named values for each of the public properties that will then be used to initialize the object.

For example, initializing an instance of the Customer class could be accomplished using the following code:

Customer customer = new Customer();
customer.CustomerID = 101;
customer.CompanyName = "Foo Company";
customer.BusinessAddress = new Address();
customer.Phone = "555-555-1212";

However, by taking advantage of Object Initializers an instance of the Customer class can be created using the following syntax:

Customer customer = new Customer {
    CustomerID = 101,
    CompanyName = "Foo Company",
    BusinessAddress = new Address(),
    Phone = "555-555-1212" };

The syntax is to wrap the named parameters and their values with curly braces. Object Initializers allow you to pass in any named public property to the constructor of the class. This is a great feature as it removes the need to create multiple overloaded constructors using different parameter lists to achieve the same goal. To make matters easier, when typing the named parameters the intellisense feature of the IDE will display a list of the named parameters for you. You do not have to pass all of the parameters in; in fact, you can even use a nested object initializer for the BusinessAddress parameter, as shown below.

Customer customer = new Customer
{
    CustomerID = 101,
    CompanyName = "Foo Company",
    BusinessAddress = new Address { City = "Somewhere", State = "FL" },
    Phone = "555-555-1212"
};

Collection Initializers

Initializing collections has always been a bother to me. I never enjoy having to create the collection first and then add the items one by one in separate statements. (What can I say, I like tidy code.) Like Object Initializers, the new Collection Initializers allow you to create a collection and initialize it with a series of objects in a single statement. The following statement demonstrates how the syntax is very similar to that of the Object Initializers. Initializing a List<Customer> is accomplished by passing the instances of the Customer objects wrapped inside curly braces.

List<Customer> custList = new List<Customer>
{ customer1, customer2, customer3 };


Collection Initializers can also be combined with Object Initializers. The result is a slick piece of code that initializes both the objects and the collection in a single statement.

List<Customer> custList = new List<Customer>
{
    new Customer {ID = 101, CompanyName = "Foo Company"},
    new Customer {ID = 102, CompanyName = "Goo Company"},
    new Customer {ID = 103, CompanyName = "Hoo Company"}
};



The List<Customer> and its 3 Customers from this example could also be written without Object Initializers or Collection Initializers, in several lines of code. Without these new features the syntax could look something like this:

Customer customerFoo = new Customer();
customerFoo.ID = 101;
customerFoo.CompanyName = "Foo Company";
Customer customerGoo = new Customer();
customerGoo.ID = 102;
customerGoo.CompanyName = "Goo Company";
Customer customerHoo = new Customer();
customerHoo.ID = 103;
customerHoo.CompanyName = "Hoo Company";
List<Customer> customerList3 = new List<Customer>();
customerList3.Add(customerFoo);
customerList3.Add(customerGoo);
customerList3.Add(customerHoo);


Extension Methods

Have you ever looked through the list of intellisense for an object hoping to find a method that handles your specific need only to find that it did not exist? One way you can handle this is to use a new feature called Extension Methods. Extension methods are a new feature that allows you to enhance an existing class by adding a new method to it without modifying the actual code for the class. This is especially useful when using LINQ because several extension methods are available in writing LINQ query expressions.

For example, imagine that you want to cube a number. You might have the length of one side of a cube and you want to know its volume. Since all the sides are the same length, it would be nice to simply have a method that calculates the cube of an integer. You might start by looking at the System.Int32 class to see if it exposes a Cube method, only to find that it does not. One solution for this is to create an extension method for the int class that calculates the Cube of an integer. Extension Methods must be created in a static class and the Extension Method itself must be defined as static. The syntax is pretty straightforward and familiar, except for the this keyword that is passed as the first parameter to the Extension Method. Notice in the code below that I create a static method named Cube that accepts a single parameter. In a static method, preceding the first parameter with the this keyword creates an extension method that applies to the type of that parameter. So in this case, I added an Extension Method called Cube to the int type.

public static class MyExtensions
{
    public static int Cube(this int someNumber)
    {
        // note: ^ is XOR in C#, not exponentiation, so multiply instead
        return someNumber * someNumber * someNumber;
    }
}

When you create an Extension Method, the method shows up in the intellisense of the IDE as well. With this new code I can calculate the cube of an integer using the following sample:


int oneSide = 3;
int theCube = oneSide.Cube(); // Returns 27

As nice as this feature is I do not recommend creating Extension Methods on classes if instead you can create a method for the class yourself. For example, if you wanted to create a method to operate on a Customer class to calculate their credit limit, best practices would be to add this method to the Customer class itself. Creating an Extension method in this case would violate the encapsulation principle by placing the code for the Customer’s credit limit calculation outside of the Customer class.

However, Extension Methods are very useful when you cannot add a method to the class itself, as in the case of creating a Cube method on the int class. Just because you can use a tool, does not mean you should use a tool.


Anonymous Types and Implicitly Typed Variables

When using LINQ to write query expressions, you might want to return information from several classes. It is very likely that you'd only want to return a small set of properties from these classes. However, when you retrieve information from different class sources in this manner, you cannot retrieve a generic list of your class type because you are not retrieving a specific class type. This is where Anonymous Types step in and make things easier because Anonymous Types allow you to create a class structure on the fly.


var dog = new { Breed = "Cocker Spaniel",
    Coat = "black", FerocityLevel = 1 };

Notice that the code above creates a new instance of a class that describes a dog. The dog variable will now represent the instance of the class and it will expose the Breed, Coat and FerocityLevel properties. Using this code I was able to create a structure for my data without having to create a Dog class explicitly. While I would rarely create a class using this feature to represent a Dog, this feature does come in handy when used with LINQ.

When you create an Anonymous Type you need to declare a variable to refer to the object. Since you do not know what type you will be getting (since it is a new and anonymous type), you can declare the variable with the var keyword. This technique is called using an Implicitly Typed Variable. When writing a LINQ query expression, you may return various pieces of information. You could return all of these data bits and create an Anonymous Type to store them. For example, let's assume you have a List<Customer> and each Customer has a BusinessAddress property of type Address. In this situation you want to return the CompanyName and the State where the company is located. One way to accomplish this using an Anonymous Type is shown below.

List<Customer> customerList = new List<Customer>
{
    new Customer {ID = 101,
        CompanyName = "Foo Co",
        BusinessAddress = new Address {State="FL"}},
    new Customer {ID = 102,
        CompanyName = "Goo Co",
        BusinessAddress = new Address {State="NY"}},
    new Customer {ID = 103,
        CompanyName = "Hoo Co",
        BusinessAddress = new Address {State="NY"}},
    new Customer {ID = 104,
        CompanyName = "Koo Co",
        BusinessAddress = new Address {State="NY"}}
};

var query = from c in customerList
            where c.BusinessAddress.State.Equals("FL")
            select new { Name = c.CompanyName,
                         c.BusinessAddress.State };

foreach (var co in query)
    Console.WriteLine(co.Name + " - " + co.State);


Pay particular attention to the select clause in the LINQ query expression. The select clause is creating an instance of an Anonymous Type that will have a Name and a State property. These values come from 2 different objects, the Customer and the Address. Also notice that the properties can be explicitly renamed (CompanyName is renamed to Name) or they can implicitly take on the name as happens with the State property. Anonymous Types are very useful when retrieving data with LINQ.



Summary

There are a lot of new language features coming with .NET 3.5 that both add new functionality and make using existing technologies easier. As we have seen in the past, when new technologies have been introduced, such as with generics, they often are the precursors to other technologies. The introduction of generics allowed us to create strongly typed lists. Now, because of those strongly typed lists of objects, we are able to write LINQ query expressions against the strongly typed objects and access their properties explicitly, even using intellisense. These new features such as Object Initializers and Anonymous Types are the building blocks of LINQ and other future .NET technologies.

Saturday, February 6, 2010

NEW DATA TYPES IN SQL SERVER 2008

We will take a look at the following new data types, each of which is available in all editions of SQL Server 2008:

Date and Time: Four new date and time data types have been added, making working with time much easier than it ever has been in the past. They include: DATE, TIME, DATETIME2, and DATETIMEOFFSET.

Spatial: Two new spatial data types have been added--GEOMETRY and GEOGRAPHY--which you can use to natively store and manipulate location-based information, such as Global Positioning System (GPS) data.

HIERARCHYID: The HIERARCHYID data type is used to enable database applications to model hierarchical tree structures, such as the organization chart of a business.

FILESTREAM: FILESTREAM is not a data type as such, but is a variation of the VARBINARY(MAX) data type that allows unstructured data to be stored in the file system instead of inside the SQL Server database. Because this option requires a lot of involvement from both the DBA administration and development side, I will spend more time on this topic than on the rest.

Date and Time
In SQL Server 2005 and earlier, SQL Server only offered two date and time data types:
DATETIME and SMALLDATETIME. While they were useful in many cases, they had a lot
of limitations, including:


  • Both the date value and the time value are part of both of these data types, and you can't choose to store one or the other. This can cause several problems:
      • It often causes a lot of wasted storage because you store data you don't need or want.
      • It adds unwanted complexity to many queries because the data types often have to be converted to a different form to be useful.
      • It often reduces performance because WHERE clauses with these date and time data types often have to include functions to convert them to a more useful form, preventing these queries from using indexes.
  • They are not time-zone aware, which requires extra coding for time-aware applications.
  • Precision is only .333 seconds, which is not granular enough for some applications.
  • The range of supported dates is not adequate for some applications, and the range does not match the range of the .NET CLR DATETIME data type, which requires additional conversion code.

To overcome these problems, SQL Server 2008 introduces four new date and time data types, described in the following sections. All of these new date and time data types work with SQL Server 2008 date and time functions, which have been enhanced in order to properly understand the new formats.

In addition, some new date and time functions have been added to take advantage of the capabilities of these new data types. The new functions include SYSDATETIME, SYSDATETIMEOFFSET, SYSUTCDATETIME, SWITCHOFFSET, and TODATETIMEOFFSET.

DATE

As you can imagine, the DATE data type only stores a date, in the format YYYY-MM-DD. It has a range of 0001-01-01 through 9999-12-31, which should be adequate for most business and scientific applications. The accuracy is 1 day, and it only takes 3 bytes to store the date.

        --Sample DATE output
        DECLARE @datevariable as DATE            
        SET @datevariable = getdate()            
        PRINT @datevariable
        Result: 2008-08-15        
        
TIME

TIME is stored in the format hh:mm:ss.nnnnnnn, with a range of 00:00:00.0000000 through 23:59:59.9999999, and is accurate to 100 nanoseconds. Storage depends on the precision and scale selected, and runs from 3 to 5 bytes.

                    --Sample TIME output                        
                    DECLARE @timevariable as TIME
                    SET @timevariable = getdate()                       
                    PRINT @timevariable                        
                    Result: 14:26:52.3100000
                    
DATETIME2

DATETIME2 is very similar to the older DATETIME data type, but has a greater range and precision. The format is YYYY-MM-DD hh:mm:ss.nnnnnnn, with a range of 0001-01-01 00:00:00.0000000 through 9999-12-31 23:59:59.9999999 and an accuracy of 100 nanoseconds. Storage depends on the precision and scale selected, and runs from 6 to 8 bytes.

        --Sample DATETIME2 output with a precision of 7
        DECLARE @datetime2variable datetime2(7)
        SET @datetime2variable = Getdate()
        PRINT @datetime2variable
        Result: 2008-08-15 14:27:51.5300000
        


DATETIMEOFFSET

DATETIMEOFFSET is similar to DATETIME2, but includes additional information to track the time zone. The format is YYYY-MM-DD hh:mm:ss[.nnnnnnn] [+|-]hh:mm, with a range of 0001-01-01 00:00:00.0000000 through 9999-12-31 23:59:59.9999999 in Coordinated Universal Time (UTC), and an accuracy of 100 nanoseconds. Storage depends on the precision and scale selected, and runs from 8 to 10 bytes.

Time-zone aware means that a time zone identifier is stored as a part of the DATETIMEOFFSET column. The time zone identification is represented by a [-|+]hh:mm designation. A valid time zone offset falls in the range of -14:00 to +14:00, and this value is added to or subtracted from UTC to obtain the local time.

        --Sample DATETIMEOFFSET output with a precision of 0
        --Specify a date, time, and time zone
        DECLARE @datetimeoffsetvariable DATETIMEOFFSET(0)
        SET @datetimeoffsetvariable ='2008-10-03 09:00:00 -10:00'
        --Specify a different date, time and time zone
        
        DECLARE @datetimeoffsetvariable1 DATETIMEOFFSET(0)
        SET @datetimeoffsetvariable1= '2008-10-04 18:00:00 +0:00'
        
        --Find the difference in hours between the above dates, times, and time zones
        SELECT DATEDIFF(hh,@datetimeoffsetvariable,@datetimeoffsetvariable1)                    
        
        Result: 23
        





Spatial

While spatial data has been stored in many SQL Server databases for many years (using conventional data types), SQL Server 2008 introduces two specific spatial data types that can make it easier for developers to integrate spatial data in their SQL Server-based applications. In addition, by storing spatial data in relational tables, it becomes much easier to combine spatial data with other kinds of business data. For example, by combining spatial data (such as longitude and latitude) with the physical address of a business, applications can be created to map business locations on a map.

The two new spatial data types in SQL 2008 are:





GEOMETRY: Used to store planar (flat-earth) data. It is generally used to store XY coordinates that represent points, lines, and polygons in a two-dimensional space. For example, storing XY coordinates in the GEOMETRY data type can be used to map the exterior of a building.

GEOGRAPHY: Used to store ellipsoidal (round-earth) data. It is used to store latitude and longitude coordinates that represent points, lines, and polygons on the earth's surface. For example, GPS data that represents the lay of the land is one example of data that can be stored in the GEOGRAPHY data type.



GEOMETRY and GEOGRAPHY data types are implemented as .NET CLR data types. This means that they can support various properties and methods specific to the data. For example, a method can be used to calculate the distance between two GEOMETRY XY coordinates, or the distance between two GEOGRAPHY latitude and longitude coordinates. Another example is a method to see if two spatial objects intersect or not. Methods defined by the Open Geospatial Consortium standard, and Microsoft extensions to that standard, can be used. To take full advantage of these methods, you will have to be an expert in spatial data.

Another feature of spatial data types is that they support special spatial indexes. Unlike conventional indexes, spatial indexes consist of a grid-based hierarchy in which each level of the index subdivides the grid sector that is defined in the level above. But like conventional indexes, the SQL Server query optimizer can use spatial indexes to speed up the performance of queries that return spatial data.

Spatial data is an area unfamiliar to many DBAs. If this is a topic you want to learn more about, you will need a good math background, otherwise you will get lost very quickly.
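
To give a small taste of the syntax, here is a minimal sketch (the coordinates are arbitrary) that stores two points in the GEOGRAPHY type and measures the distance between them with the STDistance method:

--points are given as POINT(longitude latitude) with SRID 4326 (the GPS/WGS 84 ellipsoid)
DECLARE @place1 geography = geography::STGeomFromText('POINT(-122.349 47.651)', 4326)
DECLARE @place2 geography = geography::STGeomFromText('POINT(-122.335 47.608)', 4326)

--for SRID 4326, STDistance returns the distance in meters
SELECT @place1.STDistance(@place2) AS DistanceInMeters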







HIERARCHYID

While hierarchical tree structures are commonly used in many applications, SQL Server has, up to now, not made it easy to represent and store them in relational tables. In SQL Server 2008, the HIERARCHYID data type has been added to help resolve this problem. It is designed to store values that represent the position of nodes in a hierarchical tree structure.

For example, the HIERARCHYID data type makes it easier to express the following types of relationships without requiring multiple parent/child tables and complex joins:

  • Organizational structures
  • A set of tasks that make up a larger project (like a GANTT chart)
  • File systems (folders and their sub-folders)
  • A classification of language terms
  • A bill of materials to assemble or build a product
  • A graphical representation of links between web pages

Unlike standard data types, the HIERARCHYID data type is a CLR user-defined type, and it exposes many methods that allow you to manipulate the data stored within it. For example, there are methods to get the current hierarchy level, get the previous level, get the next level, and many more. In fact, the HIERARCHYID data type is only used to store hierarchical data; it does not automatically represent a hierarchical structure. It is the responsibility of the application to create and assign HIERARCHYID values in a way that represents the desired relationship. Think of a HIERARCHYID data type as a place to store positional nodes of a tree structure, not as a way to create the tree structure.
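
For illustration, here is a minimal sketch of a tiny organization chart (the table and titles are made up), using a few of the built-in methods: GetRoot, GetDescendant, GetLevel, and ToString:

--a tiny org chart; the application decides what each node value means
DECLARE @Org TABLE (Node hierarchyid, Title varchar(50))

DECLARE @root hierarchyid = hierarchyid::GetRoot()
INSERT INTO @Org VALUES (@root, 'CEO')

DECLARE @cto hierarchyid = @root.GetDescendant(NULL, NULL)  --first child of the root
INSERT INTO @Org VALUES (@cto, 'CTO')
INSERT INTO @Org VALUES (@cto.GetDescendant(NULL, NULL), 'Developer')

--show each node's path and depth in the tree
SELECT Node.ToString() As NodePath, Node.GetLevel() As Level, Title
FROM @Org
ORDER BY Node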






FILESTREAM

SQL Server is great for storing relational data in a highly structured format, but it has never been particularly good at storing unstructured data, such as videos, graphic files, Word documents, Excel spreadsheets, and so on. In the past, when developers wanted to use SQL Server to manage such unstructured data, they essentially had two choices:

  • Store it in VARBINARY(MAX) columns inside the database
  • Store the data outside of the database as part of the file system, and include pointers inside a column that point to the file's location. This allowed an application that needed access to the file to find it by looking up the file's location from inside a SQL Server table.

Neither of these options was perfect. Storing unstructured data in VARBINARY(MAX) columns offers less than ideal performance, has a 2 GB size limit, and can dramatically increase the size of a database. Likewise, storing unstructured data in the file system requires the DBA to overcome several difficulties.

For example:


  • Files need a unique naming system that allows hundreds, if not thousands, of files to be kept track of, and this requires very careful management of the folders that store the data.
  • Security is a problem and often requires using NTFS permissions to keep people from accessing the files inappropriately.
  • The DBA has to perform separate backups of the database and the files.
  • Problems can occur when outside files are modified or moved and the database is not updated to reflect this.



To help resolve these problems, SQL Server 2008 has introduced what is called FILESTREAM storage, essentially a hybrid approach that combines the best features of the previous two options.





Benefits of FILESTREAM

FILESTREAM storage is implemented in SQL Server 2008 by storing VARBINARY(MAX) binary large objects (BLOBs) outside of the database and in the NTFS file system. While this sounds very similar to the older method of storing unstructured data in the file system and pointing to it from a column, it is much more sophisticated. Instead of a simple link from a column to an outside file, the SQL Server Database Engine has been integrated with the NTFS file system for optimum performance and ease of administration. For example, FILESTREAM data uses the Windows OS system cache for caching data instead of the SQL Server buffer pool. This allows SQL Server to do what it does best: manage structured data, and allows the Windows OS to do what it does best: manage large files. In addition, SQL Server handles all of the links between database columns and the files, so we don't have to.

In addition, FILESTREAM storage offers these benefits:


  • Transact-SQL can be used to SELECT, INSERT, UPDATE, and DELETE FILESTREAM data.
  • By default, FILESTREAM data is backed up and restored as part of the database. If you want, there is an option available so you can back up a database without the FILESTREAM data.
  • The size of the stored data is only limited by the available space of the file system. Standard VARBINARY(MAX) data is limited to 2 GB.

Limitations of FILESTREAM

As you might expect, using FILESTREAM storage is not right for every situation. For example, it is best used under the following conditions:

  • When the BLOB file sizes average 1 MB or higher.
  • When fast read access is important to your application.
  • When applications are being built that use a middle layer for application logic.
  • When encryption is not required, as it is not supported for FILESTREAM data.

If your application doesn't meet the above conditions, then using the standard VARBINARY(MAX) data type might be your best option. If you are used to storing binary data inside your database, or outside your database (but using pointers inside the database that point to the binary files), then you will find using FILESTREAM storage to be substantially different. You will want to thoroughly test your options before implementing one or the other in any new applications you build.

How to Implement FILESTREAM Storage

Enabling SQL Server to use FILESTREAM data is a multiple-step process, which includes:

  • Enabling the SQL Server instance to use FILESTREAM data
  • Enabling a SQL Server database to use FILESTREAM data
  • Creating FILESTREAM-enabled columns in a table, by specifying the "VARBINARY(MAX) FILESTREAM" data type (a sketch of all three steps follows)
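
Here is a minimal sketch of those three steps. All database, filegroup, path, and table names are made up, and note that FILESTREAM must also be enabled at the Windows service level (in SQL Server Configuration Manager) before the sp_configure setting takes effect:

--1. enable FILESTREAM for the instance (2 = T-SQL and Win32 streaming access)
EXEC sp_configure 'filestream access level', 2
RECONFIGURE

--2. create a database with a filegroup that contains FILESTREAM data
--   (the FILESTREAM FILENAME is a directory; its parent folder must already exist)
CREATE DATABASE FileStreamDemo
ON PRIMARY (NAME = FSDemo_data, FILENAME = 'C:\Data\FSDemo.mdf'),
FILEGROUP FSGroup CONTAINS FILESTREAM
    (NAME = FSDemo_fs, FILENAME = 'C:\Data\FSDemoFiles')
LOG ON (NAME = FSDemo_log, FILENAME = 'C:\Data\FSDemo.ldf')
GO

--3. create a table with a FILESTREAM column; a unique ROWGUIDCOL column is required
USE FileStreamDemo
GO
CREATE TABLE dbo.Documents
(
    DocId uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocData varbinary(max) FILESTREAM NULL
)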