
Saturday, February 6, 2010

NEW DATA TYPES IN SQL SERVER 2008

We will take a look at the following new data types, each of which is available in all editions
of SQL Server 2008:

Date and Time: Four new date and time data types
have been added, making working with time much easier than it ever has been in the past.
They include DATE, TIME, DATETIME2, and DATETIMEOFFSET.

Spatial: Two new spatial data types have been added--GEOMETRY and GEOGRAPHY--
which you can use to natively store and manipulate location-based information, such
as Global Positioning System (GPS) data.

HIERARCHYID:
The HIERARCHYID data type is used to enable database applications
to model hierarchical tree structures, such as the organization chart of a business.

FILESTREAM: FILESTREAM is not a data type as
such, but a variation of the VARBINARY(MAX) data type that allows unstructured
data to be stored in the file system instead of inside the SQL Server database.
Because this option requires a lot of involvement from both the DBA and development
sides, I will spend more time on this topic than the rest.

Date and Time
In SQL Server 2005 and earlier, there were only two date and time data types:
DATETIME and SMALLDATETIME. While they were useful in many cases, they had a lot
of limitations, including:


  • Both data types store both a date value and a time value; you can't choose to store
    one or the other. This can cause several problems:
  • It often causes a lot of wasted storage because you store data you don't need or want.
  • It adds unwanted complexity to many queries because the data types often have to be
    converted to a different form to be useful.
  • It often reduces performance because WHERE clauses with these date and time data types
    often have to include functions to convert them to a more useful form, preventing these
    queries from using indexes.
  • They are not time-zone aware, which requires extra coding for time-aware applications.
  • Precision is only .333 seconds, which is not granular enough for some applications.
  • The range of supported dates is not adequate for some applications, and the range does
    not match the range of the .NET CLR DATETIME data type, which requires additional
    conversion code.
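To illustrate the index problem from the list above, here is a hedged sketch (the Orders table and OrderDate column are hypothetical):

```sql
-- Wrapping the column in a function makes the predicate non-sargable,
-- so an index on OrderDate cannot be used:
SELECT * FROM Orders WHERE CONVERT(CHAR(10), OrderDate, 120) = '2008-08-15';

-- If OrderDate uses the new DATE type, no conversion is needed and the
-- index remains usable:
SELECT * FROM Orders WHERE OrderDate = '2008-08-15';
```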

To overcome these problems, SQL Server 2008 introduces four new date and time data
types, described in the following sections. All of these new data types work with the
SQL Server 2008 date and time functions, which have been enhanced in order to properly
understand the new formats.

In addition, some new date and time functions have been added to take advantage of the
capabilities of these new data types. The new functions include SYSDATETIME,
SYSUTCDATETIME, SYSDATETIMEOFFSET, TODATETIMEOFFSET, and SWITCHOFFSET.
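A quick sketch of some of these functions in action (each SYS* function returns the current system timestamp in the corresponding new type):

```sql
SELECT SYSDATETIME()       AS CurrentDateTime2;     -- DATETIME2(7)
SELECT SYSUTCDATETIME()    AS CurrentUtcDateTime2;  -- DATETIME2(7), in UTC
SELECT SYSDATETIMEOFFSET() AS CurrentWithOffset;    -- DATETIMEOFFSET(7)
-- SWITCHOFFSET re-expresses a DATETIMEOFFSET value in another time zone
SELECT SWITCHOFFSET(SYSDATETIMEOFFSET(), '-05:00') AS EasternTime;
```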

DATE

As you can imagine, the DATE data type only stores a date, in the format YYYY-MM-DD.
It has a range of 0001-01-01 through 9999-12-31, which should be adequate for most
business and scientific applications. The accuracy is 1 day, and it only takes 3 bytes
to store the date.






        --Sample DATE output
        DECLARE @datevariable as DATE            
        SET @datevariable = getdate()            
        PRINT @datevariable
        Result: 2008-08-15        
        
TIME

TIME is stored in the format hh:mm:ss.nnnnnnn, with a range of 00:00:00.0000000 through
23:59:59.9999999, and is accurate to 100 nanoseconds. Storage depends on the precision
and scale selected, and runs from 3 to 5 bytes.



                    --Sample TIME output                        
                    DECLARE @timevariable as TIME
                    SET @timevariable = getdate()                       
                    PRINT @timevariable                        
                    Result: 14:26:52.3100000
                    
DATETIME2

DATETIME2 is very similar to the older DATETIME data type, but has a greater range and
precision. The format is YYYY-MM-DD hh:mm:ss.nnnnnnn, with a range of 0001-01-01
00:00:00.0000000 through 9999-12-31 23:59:59.9999999 and an accuracy of 100 nanoseconds.
Storage depends on the precision and scale selected, and runs from 6 to 8 bytes.




        --Sample DATETIME2 output with a precision of 7
        DECLARE @datetime2variable datetime2(7)
        SET @datetime2variable = Getdate()
        PRINT @datetime2variable
        Result: 2008-08-15 14:27:51.5300000
        


DATETIMEOFFSET

DATETIMEOFFSET is similar to DATETIME2, but includes additional information to track the
time zone. The format is YYYY-MM-DD hh:mm:ss[.nnnnnnn] [+|-]hh:mm, with a range of
0001-01-01 00:00:00.0000000 through 9999-12-31 23:59:59.9999999 in Coordinated
Universal Time (UTC), and an accuracy of 100 nanoseconds. Storage depends on the
precision and scale selected, and runs from 8 to 10 bytes.

Time-zone aware means a time zone identifier is stored as part of a DATETIMEOFFSET
column. The time zone identification is represented by a [+|-]hh:mm designation. A valid
time zone falls in the range of -14:00 to +14:00, and this value is added to or
subtracted from UTC to obtain the local time.

--Sample DATETIMEOFFSET output with a precision of 0



        --Specify a date, time, and time zone
        DECLARE @datetimeoffsetvariable DATETIMEOFFSET(0)
        SET @datetimeoffsetvariable ='2008-10-03 09:00:00 -10:00'
        --Specify a different date, time and time zone
        
        DECLARE @datetimeoffsetvariable1 DATETIMEOFFSET(0)
        SET @datetimeoffsetvariable1= '2008-10-04 18:00:00 +00:00'
        
        --Find the difference in hours between the above dates, times,and timezones
        SELECT DATEDIFF(hh,@datetimeoffsetvariable,@datetimeoffsetvariable1)                    
        
        Result: 23
        





Spatial

While spatial data has been stored in many SQL Server databases for many years (using
conventional data types), SQL Server 2008 introduces two specific spatial data types
that can make it easier for developers to integrate spatial data into their SQL
Server-based applications. In addition, by storing spatial data in relational tables,
it becomes much easier to combine spatial data with other kinds of business data. For
example, by combining spatial data (such as longitude and latitude) with the physical
address of a business, applications can be created to plot business locations on a map.

The two new spatial data types in SQL Server 2008 are:





GEOMETRY: Used to store planar (flat-earth) data. It is generally used to store XY
coordinates that represent points, lines, and polygons in a two-dimensional space. For
example, storing XY coordinates in the GEOMETRY data type can be used to map the
exterior of a building.

GEOGRAPHY: Used to store ellipsoidal (round-earth) data. It is used to store latitude
and longitude coordinates that represent points, lines, and polygons on the earth's
surface. For example, GPS data that represents the lay of the land can be stored in
the GEOGRAPHY data type.



The GEOMETRY and GEOGRAPHY data types are implemented as .NET CLR data types. This
means that they support various properties and methods specific to the data. For
example, a method can be used to calculate the distance between two GEOMETRY XY
coordinates, or the distance between two GEOGRAPHY latitude and longitude coordinates.
Another example is a method to see whether two spatial objects intersect. Methods
defined by the Open Geospatial Consortium standard, and Microsoft extensions to that
standard, can be used. To take full advantage of these methods, you will have to be
an expert in spatial data.
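As a small sketch of what these methods look like (Point and STDistance are built-in members of the GEOGRAPHY type; the city coordinates are illustrative):

```sql
-- Distance in meters between two GEOGRAPHY points (WGS 84, SRID 4326)
DECLARE @seattle geography = geography::Point(47.6062, -122.3321, 4326);
DECLARE @portland geography = geography::Point(45.5152, -122.6784, 4326);
SELECT @seattle.STDistance(@portland) AS DistanceInMeters;
```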
Another feature of spatial data types is that they support special spatial indexes.
Unlike conventional indexes, spatial indexes consist of a grid-based hierarchy in which
each level of the index subdivides the grid sector that is defined in the level above.
But like conventional indexes, the SQL Server query optimizer can use spatial indexes
to speed up the performance of queries that return spatial data.

Spatial data is an area unfamiliar to many DBAs. If this is a topic you want to learn
more about, you will need a good math background; otherwise you will get lost very
quickly.
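A minimal, hedged example of creating such an index (the table and column names are illustrative; the table needs a primary key):

```sql
CREATE TABLE dbo.Stores
(
    StoreId INT PRIMARY KEY,
    Location GEOGRAPHY
);

-- A grid-based spatial index the optimizer can use for spatial predicates
CREATE SPATIAL INDEX SIdx_Stores_Location
    ON dbo.Stores (Location);
```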







HIERARCHYID


While hierarchical tree structures are commonly used in many applications, SQL Server
has, up to now, not made it easy to represent and store them in relational tables. In
SQL Server 2008, the HIERARCHYID data type has been added to help resolve this problem.
It is designed to store values that represent the position of nodes in a hierarchical
tree structure.

For example, the HIERARCHYID data type makes it easier to express the following types
of relationships without requiring multiple parent/child tables and complex joins:


  • Organizational structures

  • A set of tasks that make up a larger project (like a GANTT chart)

  • File systems (folders and their sub-folders)

  • A classification of language terms

  • A bill of materials to assemble or build a product

  • A graphical representation of links between web pages

Unlike standard data types, the HIERARCHYID data type is a CLR user-defined type, and
it exposes many methods that allow you to manipulate the data stored within it. For
example, there are methods to get the current hierarchy level, get the previous level,
get the next level, and many more. In fact, the HIERARCHYID data type only stores
hierarchical data; it does not automatically represent a hierarchical structure. It is
the responsibility of the application to create and assign HIERARCHYID values in a way
that represents the desired relationship. Think of the HIERARCHYID data type as a place
to store positional nodes of a tree structure, not as a way to create the tree
structure.
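A quick sketch of the kinds of methods the type exposes (GetRoot, GetDescendant, GetAncestor, GetLevel, and ToString are all built in; the "org chart" framing is illustrative):

```sql
-- A minimal org chart using HIERARCHYID
DECLARE @ceo hierarchyid = hierarchyid::GetRoot();        -- path '/'
DECLARE @vp  hierarchyid = @ceo.GetDescendant(NULL, NULL); -- first child, path '/1/'
SELECT @vp.ToString()              AS NodePath,
       @vp.GetLevel()              AS Depth,
       @vp.GetAncestor(1).ToString() AS ParentPath;
```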






FILESTREAM

SQL Server is great for storing relational data in a highly structured format, but it
has never been particularly good at storing unstructured data, such as videos, graphic
files, Word documents, Excel spreadsheets, and so on. In the past, when developers
wanted to use SQL Server to manage such unstructured data, they essentially had two
choices:


  • Store it in VARBINARY(MAX) columns inside the database.
  • Store the data outside of the database as part of the file system, and include
    pointers inside a column that point to the file's location. This allowed an
    application that needed access to the file to find it by looking up the file's
    location from inside a SQL Server table.

Neither of these options was perfect. Storing unstructured data in VARBINARY(MAX)
columns offers less than ideal performance, has a 2 GB size limit, and can dramatically
increase the size of a database. Likewise, storing unstructured data in the file system
requires the DBA to overcome several difficulties. For example:


  • Files require a unique naming system so that hundreds, if not thousands, of files
    can be kept track of, and the folders that store the data must be managed very
    carefully.
  • Security is a problem and often requires using NTFS permissions to keep people from
    accessing the files inappropriately.
  • The DBA has to perform separate backups of the database and the files.
  • Problems can occur when outside files are modified or moved and the database is not
    updated to reflect this.



To help resolve these problems, SQL Server 2008 has introduced what is called
FILESTREAM storage, essentially a hybrid approach that combines the best features of
the previous two options.





Benefits of FILESTREAM

FILESTREAM storage is implemented in SQL Server 2008 by storing VARBINARY(MAX) binary
large objects (BLOBs) outside of the database, in the NTFS file system. While this
sounds very similar to the older method of storing unstructured data in the file system
and pointing to it from a column, it is much more sophisticated. Instead of a simple
link from a column to an outside file, the SQL Server Database Engine has been
integrated with the NTFS file system for optimum performance and ease of
administration. For example, FILESTREAM data uses the Windows OS system cache for
caching data instead of the SQL Server buffer pool. This allows SQL Server to do what
it does best (manage structured data) and allows the Windows OS to do what it does best
(manage large files). In addition, SQL Server handles all of the links between database
columns and the files, so we don't have to.

FILESTREAM storage also offers these benefits:


  • Transact-SQL can be used to SELECT, INSERT, UPDATE, and DELETE FILESTREAM data.
  • By default, FILESTREAM data is backed up and restored as part of the database. If
    you want, there is an option available so you can back up a database without the
    FILESTREAM data.
  • The size of the stored data is limited only by the available space of the file
    system. Standard VARBINARY(MAX) data is limited to 2 GB.

Limitations of FILESTREAM

As you might expect, FILESTREAM storage is not right for every situation. It is best
used under the following conditions:

  • When the BLOB file sizes average 1 MB or higher.
  • When fast read access is important to your application.
  • When applications are being built that use a middle layer for application logic.
  • When encryption is not required, as it is not supported for FILESTREAM data.

If your application doesn't meet the above conditions, then using the standard
VARBINARY(MAX) data type might be your best option. If you are used to storing binary
data inside your database, or outside your database (but with pointers inside the
database that point to the binary files), then you will find FILESTREAM storage to be
substantially different. You will want to thoroughly test your options before
implementing one or the other in any new applications you build.

How to Implement FILESTREAM Storage

Enabling SQL Server to use FILESTREAM data is a multiple-step process, which includes:

  • Enabling the SQL Server instance to use FILESTREAM data
  • Enabling a SQL Server database to use FILESTREAM data
  • Creating FILESTREAM-enabled columns in a table, by specifying the "VARBINARY(MAX)
    FILESTREAM" data type
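A hedged sketch of those three steps (the database name, file paths, and table are illustrative only; a FILESTREAM table also requires a UNIQUEIDENTIFIER ROWGUIDCOL column):

```sql
-- Step 1: enable FILESTREAM access at the instance level
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- Step 2: create a database with a FILESTREAM filegroup
CREATE DATABASE FileStreamDB
ON PRIMARY (NAME = FileStreamDB_data, FILENAME = 'C:\Data\FileStreamDB.mdf'),
FILEGROUP FileStreamGroup CONTAINS FILESTREAM
    (NAME = FileStreamDB_fs, FILENAME = 'C:\Data\FileStreamData')
LOG ON (NAME = FileStreamDB_log, FILENAME = 'C:\Data\FileStreamDB.ldf');

-- Step 3: create a table with a FILESTREAM-enabled column
CREATE TABLE dbo.Documents
(
    DocId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocData VARBINARY(MAX) FILESTREAM NULL
);
```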




Thursday, February 4, 2010

Base Page For Detecting Session Timeout in ASP.Net/C#

In this tutorial we will be going over how to create a base page class to handle your sessions. The number one question I get asked time and time again is how to manage sessions, and how to detect if a session has expired. Back in the days before .NET, things were a little more complicated when it came to solving this riddle, but the .NET Framework gives us the HttpSessionState class, a member of the System.Web.SessionState namespace. The HttpSessionState class gives us access to session state items and other lifetime management methods.

One of the items in the HttpSessionState class we will be looking at is the IsNewSession property. This property lets us know whether the current session was created with the current request, or if it was an existing session. This is invaluable, as we can use it to determine if the user's session has expired or timed out. The IsNewSession property is more robust than simply checking if the session is null, because it takes a session timeout into account as well.

In this tutorial we will create a base page class that all our pages can inherit from, and in this class we will check the status of the user's session in the Page.OnInit method. The OnInit method fires before the Page Load event, giving us the ability to check the session before the page is actually rendered. So let's get to some code.

The first thing we will need to do, as with any class you create, is make sure we have references to the appropriate namespaces. For our class we need only two, the System namespace and the System.Web.UI namespace, so let's add them to our class.

NOTE: All Namespace references need to come before the declaration of your class.

using System;
using System.Web.UI;

Now we are going to declare our class. The class in this example is named SessionCheck, and it looks like this:

public class SessionCheck : System.Web.UI.Page
{

}

You will notice that our base class inherits from the System.Web.UI.Page class. Doing this gives us access to all the methods, properties, and events of the Page class. In our base class we will have a single property; it will hold the URL we want to redirect the user to if there is a problem with their session. We make this property static so we can access it without having to create an instance of the class, which we don't want to do because we are inheriting from it.
This is our property:


/// <summary>
/// Property variable for the URL property
/// </summary>
private static string _url;

/// <summary>
/// Property to hold the redirect URL we will
/// use if the user's session has expired or
/// timed out.
/// </summary>
public static string URL
{
    get { return _url; }
    set { _url = value; }
}
Now that we have our property out of the way, we will look at the only method of our base class, OnInit, which we will override in order to add our own functionality. In this method we will also initialize our base class; you do that with the line
base.OnInit(e);
In our OnInit method we will first check to see that the current session exists (is not null). If it does, we then check the IsNewSession property to see if the session was created with the current request. If we determine the session is a new session, we call upon the Headers property of the HttpRequest class, which is located in the System.Web namespace. The header we are retrieving is the Cookie header. Once we have it, we first check to see if it's null; if it's not null, we look for the value ASP.NET_SessionId. If we make it this far and that cookie exists, we know the session has timed out, so we redirect the user to our redirect page, which is set with the URL property. So let's take a look at our new OnInit method:
override protected void OnInit(EventArgs e)
{
    //initialize our base class (System.Web.UI.Page)
    base.OnInit(e);
    //make sure session state is available for this request
    if (Context.Session != null)
    {
        //IsNewSession tells us the session was created with this request,
        //which also covers the case where a previous session has timed out
        if (Session.IsNewSession)
        {
            //it's a new session, so check whether the request carried a cookie
            string cookie = Request.Headers["Cookie"];
            //if the cookie exists and contains an ASP.NET session id
            if ((null != cookie) && (cookie.IndexOf("ASP.NET_SessionId") >= 0))
            {
                //a new session plus an existing ASP.NET cookie means the old
                //session expired, so redirect the user to the configured page
                Response.Redirect(string.IsNullOrEmpty(_url) ? "Default.aspx?timeout=yes&success=no" : _url);
            }
        }
    }
}
That's it, we have completed our base class, which all our web forms will inherit from, allowing us to keep an eye on the user's session. Now that we have the class completed we need to use it. Before it can take effect we need to do one of two things:
  • Add EnableSessionState = true to the @Page directive on all pages that will inherit from our base class, or
  • Add the following line to the <system.web> section of our web.config file:
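The line itself appears to have been stripped out of the original post by HTML escaping; assuming the standard configuration element, it would look something like this:

```xml
<system.web>
    <pages enableSessionState="true" />
</system.web>
```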



Number 2 on that list will enable session state on all pages in the web application. If you don't access session items in each of your pages, this might be overkill. Next we need to inherit from our base class. Doing this gives our web form the following declaration:

public partial class _Default : SessionCheck
{

}

Then in the Page_Load Event we will set the redirect URL for our base class
protected void Page_Load(object sender, EventArgs e)
{
    SessionCheck.URL = "Default.aspx";
}

Now here is the base page in its entirety:
//   A Base Page class for detecting session time outs
//
//   This program is free software: you can redistribute it and/or modify
//   it under the terms of the GNU General Public License as published by
//   the Free Software Foundation, either version 3 of the License, or
//   (at your option) any later version.
//
//   This program is distributed in the hope that it will be useful,
//   but WITHOUT ANY WARRANTY; without even the implied warranty of
//   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
//   GNU General Public License for more details.
//
//   You should have received a copy of the GNU General Public License
//   along with this program.  If not, see <http://www.gnu.org/licenses/>.
//*****************************************************************************************

using System;
using System.Web.UI;

/// <summary>
/// This is a custom "base page" to inherit from which will be used
/// to check the session status. If the session has expired or
/// timed out, we will redirect the user to the page we specify. In the
/// pages that inherit from this you need to set EnableSessionState = True.
/// </summary>
public class SessionCheck : System.Web.UI.Page
{
    /// <summary>
    /// Property variable for the URL property
    /// </summary>
    private static string _url;

    /// <summary>
    /// Property to hold the redirect URL we will
    /// use if the user's session has expired or
    /// timed out.
    /// </summary>
    public static string URL
    {
        get { return _url; }
        set { _url = value; }
    }

    public SessionCheck()
    {
        //avoid clobbering a URL the consuming page has already set
        if (_url == null)
        {
            _url = string.Empty;
        }
    }

    override protected void OnInit(EventArgs e)
    {
        //initialize our base class (System.Web.UI.Page)
        base.OnInit(e);
        //make sure session state is available for this request
        if (Context.Session != null)
        {
            //IsNewSession tells us the session was created with this request,
            //which also covers the case where a previous session has timed out
            if (Session.IsNewSession)
            {
                //it's a new session, so check whether the request carried a cookie
                string cookie = Request.Headers["Cookie"];
                //if the cookie exists and contains an ASP.NET session id
                if ((null != cookie) && (cookie.IndexOf("ASP.NET_SessionId") >= 0))
                {
                    //a new session plus an existing ASP.NET cookie means the old
                    //session expired, so redirect the user to the configured page
                    Response.Redirect(string.IsNullOrEmpty(_url) ? "Default.aspx?timeout=yes&success=no" : _url);
                }
            }
        }
    }
}

And there you have it, a custom base class that you can use to detect session timeouts. I hope you found this tutorial helpful, and thank you for reading :)

Happy Coding!

Saturday, January 30, 2010

Reading and Writing XML in C#

In this article, you will see how to read and write XML documents in Microsoft .NET using the C# language. First, I will discuss the XML .NET Framework Library namespaces and classes. Then, you will see how to read and write XML documents. At the end of this article, I will show you how to take advantage of ADO.NET and the XML .NET model to read and write XML documents from relational databases and vice versa.

Introduction to Microsoft .NET XML Namespaces and Classes

Before you start working with XML documents in the .NET Framework, it is important to know about the namespaces and classes provided by the .NET runtime library. .NET provides five namespaces - System.Xml, System.Xml.Schema, System.Xml.Serialization, System.Xml.XPath, and System.Xml.Xsl - to support XML classes.

The System.Xml namespace contains the major XML classes, including many classes to read and write XML documents. In this article, we are going to concentrate on the reader and writer classes: XmlReader, XmlTextReader, XmlValidatingReader, XmlNodeReader, XmlWriter, and XmlTextWriter. As you can see, there are four reader classes and two writer classes.

The XmlReader class is an abstract base class containing methods and properties to read a document. The Read method reads a node in the stream. Besides reading functionality, this class also contains methods to navigate through a document's nodes, such as MoveToAttribute, MoveToFirstAttribute, MoveToContent, MoveToElement, and MoveToNextAttribute. ReadString, ReadInnerXml, ReadOuterXml, and ReadStartElement are further read methods. The class also has a Skip method to skip the current node and move to the next one. We'll see these methods in the sample examples.

The XmlTextReader, XmlNodeReader, and XmlValidatingReader classes are derived from XmlReader. As their names suggest, they are used to read text, nodes, and validated (schema-checked) documents respectively. The XmlWriter class contains functionality to write data to XML documents; it provides many write methods and is the base class for XmlTextWriter, which we'll be using in our sample example.

The XmlNode class plays an important role. Although this class represents a single XML node, that node could be the root node of an XML document and thus represent the entire file. XmlNode is an abstract base class for many useful classes for inserting, removing, and replacing nodes and navigating through the document. It also contains properties to get a parent or child, the name, the last child, the node type, and more. Three major classes derived from XmlNode are XmlDocument, XmlDataDocument, and XmlDocumentFragment. The XmlDocument class represents an XML document and provides methods and properties to load and save a document. It also provides functionality to add XML items such as attributes, comments, spaces, elements, and new nodes.

The Load and LoadXml methods can be used to load XML documents, and the Save method to save a document. The XmlDocumentFragment class represents a document fragment, which can be added to a document. The XmlDataDocument class provides methods and properties to work with ADO.NET DataSet objects. Besides the classes discussed above, the System.Xml namespace contains more classes; a few of them are XmlConvert, XmlLinkedNode, and XmlNodeList.

The next namespace in the XML series is System.Xml.Schema. It contains classes to work with XML schemas, such as XmlSchema, XmlSchemaAll, XmlSchemaXPath, and XmlSchemaType. The System.Xml.Serialization namespace contains classes that are used to serialize objects into XML documents or streams. The System.Xml.XPath namespace contains XPath-related classes: XPathDocument, XPathExpression, XPathNavigator, and XPathNodeIterator. With the help of XPathDocument, XPathNavigator provides fast navigation through XML documents; it contains many Move methods to move through a document. The System.Xml.Xsl namespace contains classes to work with XSL/T transformations.

Reading XML Documents

In my sample application, I'm using books.xml to read and display its data through XmlTextReader. This file comes with the VS.NET samples; you can search for it on your machine and change the path in the following line, or use any XML file:

XmlTextReader textReader = new XmlTextReader("C:\\books.xml");

The XmlTextReader, XmlNodeReader, and XmlValidatingReader classes are derived from the XmlReader class. Besides XmlReader methods and properties, these classes also contain members to read text, nodes, and schemas respectively. I am using the XmlTextReader class to read an XML file; you read a file by passing the file name as a parameter to the constructor. After creating an instance of XmlTextReader, you call the Read method to start reading the document, after which you can read all the information and data stored in the document. XmlReader has properties such as Name, BaseURI, Depth, LineNumber, and so on. List 1 reads a document and displays node information using these properties.

About Sample Example 1: I read an XML file using XmlTextReader and call the Read method to read its nodes one by one until the end of the file, displaying the contents to the console output.

Sample Example 1.
using System;
using System.Xml;

namespace ReadXml1
{
    class Class1
    {
        static void Main(string[] args)
        {
            // Create an instance of XmlTextReader and call Read to walk the file
            XmlTextReader textReader = new XmlTextReader("C:\\books.xml");
            textReader.Read();
            // While there are nodes to read
            while (textReader.Read())
            {
                // Move to the element itself (off any attribute)
                textReader.MoveToElement();
                Console.WriteLine("XmlTextReader Properties Test");
                Console.WriteLine("===================");
                // Read this node's properties and display them on the console
                Console.WriteLine("Name:" + textReader.Name);
                Console.WriteLine("Base URI:" + textReader.BaseURI);
                Console.WriteLine("Local Name:" + textReader.LocalName);
                Console.WriteLine("Attribute Count:" + textReader.AttributeCount.ToString());
                Console.WriteLine("Depth:" + textReader.Depth.ToString());
                Console.WriteLine("Line Number:" + textReader.LineNumber.ToString());
                Console.WriteLine("Node Type:" + textReader.NodeType.ToString());
                Console.WriteLine("Value:" + textReader.Value);
            }
        }
    }
}

The NodeType property of XmlTextReader is important when you want to know the content type of a document. The XmlNodeType enumeration has a member for each type of XML item, such as Attribute, CDATA, Comment, Document, DocumentType, Element, Entity, ProcessingInstruction, Whitespace, and so on. The List 2 code sample reads an XML document, finds each node's type, and at the end writes out how many nodes of each type the document had.

About Sample Example 2: I read an XML file using XmlTextReader and call the Read method to read its nodes one by one until the end of the file. After reading a node, I check its NodeType property, write the node's contents to the console, and keep count of each particular type of node. At the end, I display the total number of the different types of nodes in the document.

Sample Example 2.

using System;

using System.Xml;

namespace ReadingXML2

{

    class Class1

    {

        static void Main(string[] args)

        {

            int ws = 0;

            int pi = 0;

            int dc = 0;

            int cc = 0;

            int ac = 0;

            int et = 0;

            int el = 0;

            int xd = 0;

            // Read a document

            XmlTextReader textReader = new XmlTextReader("C:\\books.xml");

            // Read until end of file

            while (textReader.Read())

            {

                XmlNodeType nType = textReader.NodeType;

                // If node type is a declaration

                if (nType == XmlNodeType.XmlDeclaration)

                {

                    Console.WriteLine("Declaration:" + textReader.Name.ToString());

                    xd = xd + 1;

                }

                // if node type is a comment

                if (nType == XmlNodeType.Comment)

                {

                    Console.WriteLine("Comment:" + textReader.Name.ToString());

                    cc = cc + 1;

                }

                // if node type is an attribute

                if (nType == XmlNodeType.Attribute)

                {

                    Console.WriteLine("Attribute:" + textReader.Name.ToString());

                    ac = ac + 1;

                }

                // if node type is an element

                if (nType == XmlNodeType.Element)

                {

                    Console.WriteLine("Element:" + textReader.Name.ToString());

                    el = el + 1;

                }

                // if node type is an entity

                if (nType == XmlNodeType.Entity)

                {

                    Console.WriteLine("Entity:" + textReader.Name.ToString());

                    et = et + 1;

                }

                // if node type is a processing instruction

                if (nType == XmlNodeType.ProcessingInstruction)

                {

                    Console.WriteLine("ProcessingInstruction:" + textReader.Name.ToString());

                    pi = pi + 1;

                }

                // if node type is a document type

                if (nType == XmlNodeType.DocumentType)

                {

                    Console.WriteLine("DocumentType:" + textReader.Name.ToString());

                    dc = dc + 1;

                }

                // if node type is white space

                if (nType == XmlNodeType.Whitespace)

                {

                    Console.WriteLine("WhiteSpace:" + textReader.Name.ToString());

                    ws = ws + 1;

                }

            }

            // Write the summary

            Console.WriteLine("Total Comments:" + cc.ToString());

            Console.WriteLine("Total Attributes:" + ac.ToString());

            Console.WriteLine("Total Elements:" + el.ToString());

            Console.WriteLine("Total Entity:" + et.ToString());

            Console.WriteLine("Total Process Instructions:" + pi.ToString());

            Console.WriteLine("Total Declaration:" + xd.ToString());

            Console.WriteLine("Total DocumentType:" + dc.ToString());

            Console.WriteLine("Total WhiteSpaces:" + ws.ToString());

        }

    }

}

Writing XML Documents

The XmlWriter class contains the functionality to write XML documents. It is an abstract base class, used through derived classes such as XmlTextWriter, and it contains the methods and properties needed to write to XML documents. The class has a WriteXxx method for every type of item in an XML document; WriteNode, WriteString, WriteAttributes, WriteStartElement, and WriteEndElement are some of them. Some of these methods are used in a start/end pair. For example, to write an element, you call WriteStartElement, then write a string, followed by WriteEndElement.

Besides many methods, this class has three properties: WriteState, XmlLang, and XmlSpace. WriteState returns the current state of the writer. It's not possible to describe all of the WriteXxx methods here, so let's look at some of them. The first thing we need to do is create an instance of XmlTextWriter using its constructor. XmlTextWriter has three overloaded constructors, which can take a string, a stream, or a TextWriter as an argument. We'll pass a string (a file name) as an argument and create the file in the C:\ root. In my sample example, I create the file myXmlFile.xml in the C:\ root directory.

// Create a new file in C:\ dir
XmlTextWriter textWriter = new XmlTextWriter("C:\\myXmlFile.xml", null);

After creating an instance, the first thing you call is WriteStartDocument. When you're done writing, you call WriteEndDocument and the writer's Close method.

textWriter.WriteStartDocument();
textWriter.WriteEndDocument();
textWriter.Close();

The WriteStartDocument and WriteEndDocument methods open and close a document for writing. You must open a document before you start writing to it. The WriteComment method writes a comment to a document; it takes a single string argument. The WriteString method writes a string to a document. With the help of WriteString, the WriteStartElement and WriteEndElement method pair can be used to write an element to a document. The WriteStartAttribute and WriteEndAttribute pair writes an attribute. WriteNode is another write method; it writes the contents of an XmlReader to a document as a node. You can use the WriteProcessingInstruction and WriteDocType methods to write the ProcessingInstruction and DocType items of a document:

//Write the ProcessingInstruction node
string PI = "type='text/xsl' href='book.xsl'";
textWriter.WriteProcessingInstruction("xml-stylesheet", PI);
//Write the DocumentType node
textWriter.WriteDocType("book", null, null, "");

The sample below brings these methods together and creates a new XML document with items such as elements, attributes, strings, and comments.

About Sample Example 3

In this sample example, I create a new file, myXmlFile.xml, using XmlTextWriter and use its various write methods to write XML items.

Sample Example 3.

using System;

using System.Xml;

namespace ReadingXML2

{

    class Class1

    {

        static void Main(string[] args)

        {

            // Create a new file in C:\\ dir

            XmlTextWriter textWriter = new XmlTextWriter("C:\\myXmlFile.xml", null);

            // Opens the document

            textWriter.WriteStartDocument();

            // Write comments

            textWriter.WriteComment("First Comment XmlTextWriter Sample Example");

            textWriter.WriteComment("myXmlFile.xml in root dir");

            // Write first element

            textWriter.WriteStartElement("Student");

            textWriter.WriteStartElement("r", "RECORD", "urn:record");

            // Write next element

            textWriter.WriteStartElement("Name", "");

            textWriter.WriteString("Student");

            textWriter.WriteEndElement();

            // Write one more element

            textWriter.WriteStartElement("Address", ""); textWriter.WriteString("Colony");

            textWriter.WriteEndElement();

            // WriteChars

            char[] ch = new char[3];

            ch[0] = 'a';

            ch[1] = 'r';

            ch[2] = 'c';

            textWriter.WriteStartElement("Char");

            textWriter.WriteChars(ch, 0, ch.Length);

            textWriter.WriteEndElement();

            // Ends the document.

            textWriter.WriteEndDocument();

            // close writer

            textWriter.Close();

        }

    }

}

Using XmlDocument

The XmlDocument class represents an XML document. This class provides methods and properties similar to those we've discussed earlier in this article. Load and LoadXml are two useful methods of this class. The Load method loads XML data from a string, stream, TextReader, or XmlReader. The LoadXml method loads an XML document from a specified string. Another useful method of this class is Save. Using the Save method, you can write XML data to a string, stream, TextWriter, or XmlWriter.

About Sample Example 4

This tiny sample example is pretty easy to understand. We call the LoadXml method of XmlDocument to load an XML fragment and call Save to save the fragment as an XML file.

Sample Example 4.

//Create the XmlDocument.
XmlDocument doc = new XmlDocument();
doc.LoadXml("<Name>Tommy</Name>");
//Save the document to a file.
doc.Save("C:\\std.xml");

You can also use the Save method to display contents on the console if you pass Console.Out as a parameter. For example: doc.Save(Console.Out);

About Sample Example 5

Here is an example of how to load an XML document using XmlTextReader. In this sample, we read the books.xml file using XmlTextReader and call its Read method. After that, we call XmlDocument's Load method to load the XmlTextReader contents into the XmlDocument and call the Save method to save the document. Passing Console.Out as the Save method argument displays the data on the console.

Sample Example 5.

XmlDocument doc = new XmlDocument();
//Load the the document with the last book node.
XmlTextReader reader = new XmlTextReader("c:\\books.xml");
reader.Read();
// load the reader contents into the document
doc.Load(reader);
// Display contents on the console
doc.Save(Console.Out);

Writing Data from a Database to an XML Document

With XML and ADO.NET, reading a database and writing to an XML document (and vice versa) is not a big deal. In this section of the article, you will see how to read a database table's data and write the contents to an XML document. The DataSet class provides methods to read a relational database table and write that table to an XML file.
You use the WriteXml method to write a dataset's data to an XML file. In this sample, I have used the Northwind database that comes with Office 2000 and later versions. You can use any database you want; the only things you need to change are the connection string and the SELECT SQL query.

About Sample Example 6

In this sample, I create a data adapter object that selects all records of the Customers table. After that, I call the Fill method to fill a dataset from the data adapter. This sample uses the OleDb data provider, so you need to add a reference to the System.Data.OleDb namespace to use OleDb data adapters in your program. As you can see from Sample Example 6, first I create a connection to the Northwind database using OleDbConnection. After that, I create a data adapter object by passing a SELECT SQL query and the connection. Once you have a data adapter, you can fill a dataset object using the Fill method of the data adapter. Then you call the WriteXml method of the DataSet, which creates an XML document and writes the dataset's contents to it. In our sample, we read the Customers table records and write the DataSet contents to the OutputXML.xml file in the C:\ dir.

Sample Example 6.
using System;
using System.Xml;
using System.Data;
using System.Data.OleDb;

namespace ReadingXML2
{
    class Class1
    {
        static void Main(string[] args)
        {
            // create a connection
            OleDbConnection con = new OleDbConnection();
            con.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\Northwind.mdb";
            // create a data adapter
            OleDbDataAdapter da = new OleDbDataAdapter("Select * from Customers", con);
            // create a new dataset
            DataSet ds = new DataSet();
            // fill dataset
            da.Fill(ds, "Customers");
            // write dataset contents to an xml file by calling WriteXml method
            ds.WriteXml("C:\\OutputXML.xml");
        }
    }
}

The .NET Framework Library provides good support for working with XML documents. The XmlReader and XmlWriter classes, and their derived classes, contain the methods and properties needed to read and write XML documents. With the help of the XmlDocument and XmlDataDocument classes, you can read an entire document. The Load and Save methods of XmlDocument load a reader or a file and save a document, respectively. ADO.NET provides the functionality to read a database and write its contents to an XML document using data providers and a DataSet object.

Main Differences between ASP.NET 3.5 and ASP.NET 4.0

As we all know, ASP.NET 3.5 introduced the following main new features:

1) AJAX integration
2) LINQ
3) Automatic Properties
4) Lambda expressions

I hope it will be useful for everyone to know the differences between ASP.NET 3.5 and its next version, ASP.NET 4.0.

For the sake of space, I'll list only some of them here.

1) Client Data access:



ASP.NET 3.5: There is no direct method to access data from the client side. We can go for any of these methods:

1) Page methods of the ScriptManager
2) The ICallbackEventHandler interface
3) An XMLHttpHandler component

ASP.NET 4.0: This framework has a built-in feature for this. The following are the ways to implement it:

1) Client data controls
2) Client templates
3) Client data context

That is, we can access the data through client data view and data context objects from the client side.

2) Setting Meta keyword and Meta description:



Meta keywords and descriptions are really useful for getting listed in search engines.

ASP.NET 3.5: It has a feature to add meta information with the following tags:
<meta name="keywords" content="These, are, my, keywords" />
<meta name="description" content="This is the description of my page" />


ASP.NET 4.0: Here we can add the keywords and description in Page directives itself as shown below.
<%@ Page Language="C#"  CodeFile="Default.aspx.cs" 
  Inherits="_Default" 
  Keywords="Keyword1,Key2,Key3,etc" 
  Description="description" %>
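ASP.NET 4.0 also exposes these values as page properties (Page.MetaKeywords and Page.MetaDescription), so they can be set from code-behind at run time as well. A hypothetical Page_Load sketch (the values shown are made up):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Equivalent to the Keywords/Description attributes in the @ Page directive
    Page.MetaKeywords = "Keyword1,Key2,Key3";
    Page.MetaDescription = "description";
}
```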


3) EnableViewState property for each control



ASP.NET 3.5: This property has two values, "True" or "False".

ASP.NET 4.0: ViewStateMode property takes an enumeration that has three values: Enabled, Disabled, and Inherit.
Here, Inherit is the default value for child controls of a control.
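For example, a container can turn view state off for everything inside it while one child opts back in. A hypothetical markup sketch (control IDs are made up):

```aspx
<asp:Panel ID="pnlOuter" runat="server" ViewStateMode="Disabled">
    <%-- Inherit (the default for child controls) follows the panel: disabled --%>
    <asp:Label ID="lblInfo" runat="server" ViewStateMode="Inherit" Text="No view state" />
    <%-- This child explicitly re-enables its view state --%>
    <asp:TextBox ID="txtName" runat="server" ViewStateMode="Enabled" />
</asp:Panel>
```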

4) Setting Client IDs



Sometimes the ClientID property creates headaches for programmers.

ASP.NET 3.5: We have to use the ClientID property to find out the ID, which is dynamically generated.

ASP.NET 4.0: The new ClientIDMode property is introduced to minimize the issues of earlier versions of ASP.NET.

It has following values.

AutoID – Same as ASP.NET 3.5
Static – No separate client ID is generated at run time; the rendered id is exactly the server ID
Predictable – Used particularly in data controls; row ids are built from the ClientID value plus the ClientIDRowSuffix value
Inherit – This value specifies that a control's ID generation is the same as its parent's.
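For example (hypothetical markup, IDs made up), Static renders the server ID unchanged, while Predictable builds row-level IDs from the ClientIDRowSuffix data field:

```aspx
<%-- Renders exactly id="txtSearch", regardless of naming containers --%>
<asp:TextBox ID="txtSearch" runat="server" ClientIDMode="Static" />

<%-- Row ids incorporate the CustomerID data value instead of a sequential index --%>
<asp:GridView ID="gvItems" runat="server"
              ClientIDMode="Predictable"
              ClientIDRowSuffix="CustomerID" />
```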



--
happy programming.

Monday, January 25, 2010

.Net Memory Management & Garbage Collection

The Microsoft .NET common language runtime requires that all resources be allocated from the managed heap. Objects are automatically freed when they are no longer needed by the application.
When a process is initialized, the runtime reserves a contiguous region of address space that initially has no storage allocated for it. This address space region is the managed heap. The heap also maintains a pointer. This pointer indicates where the next object is to be allocated within the heap. Initially, the pointer is set to the base address of the reserved address space region.
Garbage collection in the Microsoft .NET common language runtime environment completely absolves the developer from tracking memory usage and knowing when to free memory. However, you’ll want to understand how it works. So let’s do it.
Topics Covered
We will cover the following topics in this article:

  1. Why memory matters
  2. .Net Memory and garbage collection
  3. Generational garbage collection
  4. Temporary Objects
  5. Large object heap & Fragmentation
  6. Finalization
  7. Memory problems

1) Why Memory Matters

Inefficient use of memory can impact:
  • Performance
  • Stability
  • Scalability
  • Other applications
Hidden problems in code can cause:

  • Memory leaks
  • Excessive memory usage
  • Unnecessary performance overhead

2) .Net Memory and garbage collection

.NET manages memory automatically
  • It creates objects in memory blocks (heaps)
  • It destroys objects that are no longer in use
It allocates objects onto one of two heaps
  • Small object heap (SOH) – objects < 85K
  • Large object heap (LOH) – objects >= 85K
You allocate onto the heap whenever you use the "new" keyword in code
Small object heap (SOH)

  • Allocation of objects < 85K – Contiguous heap – Objects allocated consecutively
  • A next object pointer is maintained – Object references are held on the stack, in globals, in statics, and in CPU registers
  • Objects not in use are garbage collected
Figure-1
SOH_1
Next, how does the GC work in the SOH?
The GC collects objects based on the following rules:

  • Reclaims memory from “rootless” objects
  • Runs whenever memory usage reaches certain thresholds
  • Identifies all objects still “in use”

    • Has root reference
    • Has an ancestor with a root reference

  • Compacts the heap

    • Copies “rooted” objects over rootless ones
    • Resets the next object pointer

  • Freezes all execution threads during GC

    • Every time the GC runs, it hits the performance of your app

Figure-2
SOH2

3) Generational garbage collection


Optimizing Garbage collection
  • The newest objects usually die quickly
  • The oldest objects tend to stay alive
  • The GC groups objects into generations

    • Short lived – Gen 0
    • Medium – Gen 1
    • Long Lived – Gen 2

  • When an object survives a GC, it is promoted to the next generation
  • The GC compacts Gen 0 objects most often
  • The more the GC runs, the bigger the impact on performance
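The promotion behavior described above can be observed with GC.GetGeneration. A minimal sketch (exact generation numbers can vary with runtime version and GC mode):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object obj = new object();

        // Freshly allocated small objects start in Gen 0
        Console.WriteLine("After allocation: Gen " + GC.GetGeneration(obj));

        // A surviving object is promoted on each collection
        GC.Collect(); // demo only: don't force collections in production code
        Console.WriteLine("After 1st GC: Gen " + GC.GetGeneration(obj));

        GC.Collect();
        Console.WriteLine("After 2nd GC: Gen " + GC.GetGeneration(obj));

        GC.KeepAlive(obj); // keep a root so obj survives the collections above
    }
}
```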
Figure-3
Generations_3
Figure-4
Generations_4
Here object C is no longer referenced by anyone, so when the GC runs it gets destroyed and object D is moved to Gen 1 (see Figure 5). Now Gen 0 has no objects, so the next time the GC runs it will collect objects from Gen 1.

Figure-5
Generations_5
Figure-6
Generations_6
Here, when the GC runs, it moves objects D and B to Gen 2 because they are referenced by global and static objects.
Figure-7
Generations_7

Here, when the GC runs for Gen 2, it finds that object A is no longer referenced by anyone, so it destroys it and frees its memory. Now Gen 2 has only objects D and B.
The garbage collector runs when
  • Gen 0 objects reach ~256K
  • Gen 1 objects reach ~2 MB
  • Gen 2 objects reach ~10 MB
  • System memory is low
Most objects should die in Gen 0.
The impact on performance is very high when a Gen 2 collection runs because
  • Entire small object heap is compacted
  • Large object heap is collected

4) Temporary objects

  • Once allocated, objects can't be resized on a contiguous heap
  • Objects such as strings are immutable


    • They can't be changed; new versions are created instead
    • The heap fills with temporary objects

Let us take an example to understand this scenario.
Figure – 8
TempObj_8
After the GC runs, all the temporary objects are destroyed.
Figure–9

TempObj_9
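The string scenario above can be sketched in code. Each concatenation allocates a brand-new string and leaves the previous one behind as garbage, while StringBuilder appends into a reusable internal buffer:

```csharp
using System;
using System.Text;

class TempStringDemo
{
    static void Main()
    {
        // Strings are immutable: every += creates another temporary string
        string s = string.Empty;
        for (int i = 0; i < 5; i++)
        {
            s += i;
        }

        // StringBuilder mutates an internal buffer, so far fewer
        // temporary objects end up on the heap
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 5; i++)
        {
            sb.Append(i);
        }

        Console.WriteLine(s);             // 01234
        Console.WriteLine(sb.ToString()); // 01234
    }
}
```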

5) Large object heap & Fragmentation

Large object heap (LOH)
  • Allocation of objects >= 85K
  • Non-contiguous heap

    • Objects are allocated using a free space table

  • Garbage collected when the LOH threshold is reached
  • Uses the free space table to find where to allocate
  • Memory can become fragmented
Figure-10
LOH_10
After object B is destroyed, the free space table is updated with the memory address that has now become available.
Figure-11
LOH_11

Now when you create a new object, the GC checks which memory area in the LOH is free and available for the new object. It consults the free space table and allocates the object where it fits.
Figure-12
LOH_12

6) Object Finalization

  • Disk, network, and UI resources need safe cleanup after use by .NET classes
  • Object finalization guarantees cleanup code will be called before collection
  • Finalizable objects survive for at least one extra GC and often make it to Gen 2
  • Finalizable classes have a

    • An overridden Finalize method (VB.NET)
    • A C++-style destructor (C#)

Here are the guidelines that help you to decide when to use Finalize method:
  • Only implement Finalize on objects that require finalization. There are performance costs associated with Finalize methods.
  • If you require a Finalize method, you should consider implementing IDisposable to allow users of your class to avoid the cost of invoking the Finalize method.
  • Do not make the Finalize method more visible. It should be protected, not public.
  • An object’s Finalize method should free any external resources that the object owns. Moreover, a Finalize method should release only resources that are held onto by the object. The Finalize method should not reference any other objects.
  • Do not directly call a Finalize method on an object other than the object’s base class. This is not a valid operation in the C# programming language.
  • Call the base.Finalize method from an object’s Finalize method.
Note: The base class’s Finalize method is called automatically with the C# and the Managed Extensions for C++ destructor syntax.
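As the guidelines above suggest, a class that needs a finalizer should normally also implement IDisposable so callers can clean up deterministically and let the GC skip finalization. A minimal sketch of the standard dispose pattern (ResourceHolder is a made-up class name):

```csharp
using System;

class ResourceHolder : IDisposable
{
    private bool disposed;

    public bool IsDisposed { get { return disposed; } }

    // Deterministic cleanup, called by users of the class
    public void Dispose()
    {
        Dispose(true);
        // The finalizer is no longer needed, so tell the GC to skip it
        // and avoid the extra-generation cost of finalization
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // free managed resources here
        }
        // free unmanaged resources (handles etc.) here
        disposed = true;
    }

    // Finalizer (C# destructor syntax): a safety net that runs only
    // if the caller forgot to call Dispose
    ~ResourceHolder()
    {
        Dispose(false);
    }
}

class Program
{
    static void Main()
    {
        // 'using' guarantees Dispose runs even if an exception is thrown
        using (ResourceHolder r = new ResourceHolder())
        {
            // work with the resource
        }
        Console.WriteLine("cleaned up deterministically");
    }
}
```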
Let's see an example to understand how finalization works.
Each figure explains what is going on, and you can clearly see how finalization works when the GC runs.

Figure-13
Final_13
Figure-14
Final_14
Figure-15
Final_15
Figure-16
Final_16
Figure-17

Final_17
For more information on Finalization refer the following links:
http://www.object-arts.com/docs/index.html?howdofinalizationandmourningactuallywork_.htm
http://blogs.msdn.com/cbrumme/archive/2004/02/20/77460.aspx
How to minimize overheads
Object size, number of objects, and object lifetime are all factors that impact your application’s allocation profile. While allocations are quick, the efficiency of garbage collection depends (among other things) on the generation being collected. Collecting small objects from Gen 0 is the most efficient form of garbage collection because Gen 0 is the smallest and typically fits in the CPU cache. In contrast, frequent collection of objects from Gen 2 is expensive. To identify when allocations occur, and which generations they occur in, observe your application’s allocation patterns by using an allocation profiler such as the CLR Profiler.
You can minimize overheads by:
  • Avoid Calling GC.Collect
  • Consider Using Weak References with Cached Data
  • Prevent the Promotion of Short-Lived Objects
  • Set Unneeded Member Variables to Null Before Making Long-Running Calls
  • Minimize Hidden Allocations
  • Avoid or Minimize Complex Object Graphs
  • Avoid Preallocating and Chunking Memory
Read more: http://www.guidanceshare.com/wiki/.NET_2.0_Performance_Guidelines_-_Garbage_Collection
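One of the bullets above recommends weak references for cached data. A WeakReference lets the GC reclaim the cached object under memory pressure, while the cache can still reach it as long as it survives; on a cache miss the data is simply rebuilt. A minimal sketch:

```csharp
using System;

class WeakCacheDemo
{
    static void Main()
    {
        byte[] data = new byte[1024]; // stands in for expensive cached data
        WeakReference cacheEntry = new WeakReference(data);

        // While a strong root ('data') exists, the target stays reachable
        Console.WriteLine("Alive while rooted: " + cacheEntry.IsAlive);

        data = null;  // drop the strong root
        GC.Collect(); // demo only: don't force collections in production code

        // After a collection the weak target is usually gone;
        // a real cache would rebuild the data here instead of failing
        byte[] cached = cacheEntry.Target as byte[];
        Console.WriteLine("Still cached: " + (cached != null));
    }
}
```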

7) Common Memory Problems

  • Excessive RAM footprint

    • The app allocates objects too early or keeps them too long, using more memory than needed
    • This can affect other apps on the system

  • Excessive temporary object allocation

    • Garbage collection runs more frequently
    • Executing threads freeze during garbage collection

  • Memory leaks


    • Overlooked root references keep objects alive (collections, arrays, session state, delegates/events etc)
    • Incorrect or absent finalization can cause resource leaks

Hope this helps.

Thursday, January 7, 2010

Backing Up MySQL Database


Backing Up MySQL Database
MySQL database backups can be accomplished in two ways:
a) Copying the raw MySQL database files, or
b) Exporting tables to text files



Copying the MySQL database files
MySQL uses the same table format on different platforms, so it's possible to copy MySQL table and index files from one platform and use them on another without any difficulties (assuming, of course, that you're using the same version of MySQL on both platforms).


Exporting tables to text files
mysqldump is a handy utility that can be used to quickly back up a MySQL database to text files. To use the mysqldump utility, you must log on to the system running the MySQL database. You can use Telnet to log on to the system remotely if you don't have physical access to the machine.

The syntax for the command is as follows.

mysqldump -u [username] -p[password] [databasename] > [backupfile.sql]
[username] - your database user name
[password] - the password for that user (note: no space between -p and the password)
[databasename] - the name of your database
[backupfile.sql] - the file name for your database backup


Let's walk through an example of backing up a MySQL database named "accounts" into the text file accounts.sql. Here are the scenarios for taking the backup, assuming that both the user name and the password are "admin".

a) Taking a full backup of all the tables, including the data.
 

Use the following command to accomplish this:
mysqldump -u admin -padmin accounts > accounts.sql


b) Taking the backup of table structures only.
 

Use the following command to accomplish this:
mysqldump -u admin -padmin --no-data accounts > accounts.sql


c) Taking a backup of the data only.
 

Use the following command to accomplish this:
mysqldump -u admin -padmin --no-create-info accounts > accounts.sql



Restoring MySQL Database
Restoring a MySQL database is a very easy job. You can use the following command to restore the accounts database from the accounts.sql backup file.

mysql -u admin -padmin accounts < accounts.sql

In this tutorial you learned how to back up your MySQL database and restore it in the event of a database crash or onto another machine.


-----------
Enjoy Programming



Monday, January 4, 2010

Uploading Multiple Files in ASP.NET 2.0

In ASP.NET, we can upload more than one file using the following classes:

HttpFileCollection
HttpPostedFile
Request.Files
System.IO.Path

HttpFileCollection:

HttpFileCollection will provide access to the files uploaded by a client.

HttpPostedFile:

HttpPostedFile will provide access to individual files uploaded by a client. Through this class we can access the content and properties of each individual file, and read and save the files.

Request.Files :

Request.Files will return a collection of all files uploaded by the user and store them inside an HttpFileCollection.

Follow these steps mentioned below to do so:

Step 1:

Drag and drop multiple (in our case four) FileUpload controls onto the designer.

Step 2:

Drop a Button control and set its Text to "Upload".
<asp:Button ID="btnUpload" runat="server" Text="Upload" />

Step 3:

Double click the Upload Button to add an event hander to the code behind.
protected void btnUpload_Click(object sender, EventArgs e)
{

}

Step 4: Import the System.IO namespace.
using System.IO;

Step 5:

Use the ‘HttpFileCollection’ class to retrieve all the files that are uploaded. Files are encoded and transmitted in the content body using multipart MIME format with an HTTP Content-Type header. ASP.NET extracts
this information from the content body into individual members of an HttpFileCollection.

The code would look as follows:
protected void btnUpload_Click(object sender, EventArgs e)
{
    try
    {
        // Get the HttpFileCollection
        HttpFileCollection hfc = Request.Files;
        for (int i = 0; i < hfc.Count; i++)
        {
            HttpPostedFile hpf = hfc[i];
            if (hpf.ContentLength > 0)
            {
                hpf.SaveAs(Server.MapPath("MyFiles") + "\\" +
                    Path.GetFileName(hpf.FileName));
            }
        }
    }
    catch (Exception ex)
    {
        // Handle your exception here
    }
}
Some important points to consider while uploading

1.    To save a file to the server, the account associated with ASP.NET must have sufficient permissions on the folder where the files are being uploaded. This would usually be the 'ASPNET' account on Windows XP or a similar OS. On Windows Server 2003, the account used is 'NETWORK SERVICE'. So you would be required to explicitly grant write permissions to these accounts on the folder.

2.    While uploading the files to a remote server, the default ASPNET user account used by ASP.NET does not have network permissions by default. The solution is to either give the account such permissions or
use impersonation to have it run under a different account that has the permissions.

3.    By default, you can upload no more than 4096 KB (4 MB) of data. However there is a workaround for this limitation. You can change the maximum file size by changing the maxRequestLength attribute of the
httpRuntime element in the web.config file. You can also increase the ‘executionTimeout’. By default it is 110 seconds. I would encourage you to experiment with the other attributes of the httpRuntime element.

<configuration>
<system.web>
<httpRuntime
executionTimeout="200"
maxRequestLength="8192"
requestLengthDiskThreshold="256"
useFullyQualifiedRedirectUrl="false"
minFreeThreads="8"
minLocalRequestFreeThreads="4"
appRequestQueueLimit="5000"
enableKernelOutputCache="true"
enableVersionHeader="true"
requireRootedSaveAsPath="true"
enable="true"
shutdownTimeout="90"
delayNotificationTimeout="5"
waitChangeNotification="0"
maxWaitChangeNotification="0"
enableHeaderChecking="true"
sendCacheControlHeader="true"
apartmentThreading="false"/>
</system.web>
</configuration>

References

There are a number of good resources I referred to, for this article. A few of them are:

http://www.wrox.com/WileyCDA/Section/id-292160.html
http://msdn2.microsoft.com/en-US/library/aa479405.aspx
http://msdn2.microsoft.com/en-us/library/system.web.httpfilecollection.aspx
