
Quick and dirty way to serialize an object to XML and back

A lot of the time I just want a quick look at how an object graph will be serialized to XML. I end up writing quick-and-dirty code like the snippet below time after time, because I can never find the snippet I wrote a while back. I am documenting it here so I do not have to write it again next time.

 
using System.IO;
using System.Xml.Serialization;

public class SerUtils
{
    public static string SerializeToString<T>(T source)
    {
        if (null == source)
            return string.Empty;

        XmlSerializer ser = new XmlSerializer(source.GetType());
        MemoryStream ms = new MemoryStream();
        ser.Serialize(ms, source);
        ms.Flush();
        ms.Position = 0;
        StreamReader sr = new StreamReader(ms);
        return sr.ReadToEnd();
    }

    public static T DeserializeFromString<T>(string xml)
    {
        if (string.IsNullOrEmpty(xml))
            return default(T);

        XmlSerializer ser = new XmlSerializer(typeof(T));
        MemoryStream ms = new MemoryStream();
        StreamWriter sw = new StreamWriter(ms);
        sw.Write(xml);
        sw.Flush();
        ms.Position = 0;
        return (T)ser.Deserialize(ms);
    }
}
 

Don't use this in production!
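For a quick sanity check, here is a round trip through SerUtils. The Person type is just an illustrative stand-in for whatever object graph you want to inspect, and the snippet assumes the SerUtils class above is compiled into the same project:

```csharp
using System;

// Illustrative type: any public type with a parameterless
// constructor and public read/write members will do.
public class Person
{
    public string Name;
    public int Age;
}

public static class Demo
{
    public static void Main()
    {
        Person p = new Person { Name = "Ada", Age = 36 };

        // Serialize the object graph and dump the XML.
        string xml = SerUtils.SerializeToString(p);
        Console.WriteLine(xml);

        // Round-trip it back into an object.
        Person copy = SerUtils.DeserializeFromString<Person>(xml);
        Console.WriteLine("{0}, {1}", copy.Name, copy.Age); // Ada, 36
    }
}
```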

Getting the declared type of a nullable variable

If you call the GetType method on a variable of a nullable type, you get the underlying type instead of Nullable<T>. This is because of the boxing that occurs when the value is converted to Object, as documented on MSDN. So how do you get the declared type when you have a nullable variable? Here is code that returns the Type object for the declared type of the variable.

 
private static Type GetUnderlyingType<T>(T t)
{
    return typeof(T);
}
 

The code above uses generics and the typeof operator to obtain the declared nullable type. The trick is to apply typeof to the type itself, not to the variable. That is not normally possible when all you have is a variable, but the power of generics allows you to do so. So if you have declared a nullable int

int? i;

the GetUnderlyingType method above does the equivalent of

typeof(int?)

and returns the declared Nullable<int> type. To better understand the difference between calling GetType on a variable and using the typeof operator on the type itself, look at the following code.

 
private static void PrintType<T>(T t)
{
    Console.WriteLine("{0}, {1}", t.GetType().FullName, typeof(T).FullName);
}
 

Calling GetType on the variable t returns the underlying (runtime) type, whereas using the typeof operator on the generic type parameter T returns the declared type of t.
Given a variable, we can then test whether it was declared as a nullable type using the following code.

 
private static bool IsNullable<T>(T t)
{
    Type tx = GetUnderlyingType(t);
    return tx.IsGenericType && tx.GetGenericTypeDefinition() == typeof(Nullable<>);
}
 

The code above obtains the declared type first, tests whether it is a generic type, and if it is, tests whether its generic type definition is Nullable<>.
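Putting it together, here is a quick demo. The helpers are repeated (and made public for the sake of the demo) so the snippet compiles on its own:

```csharp
using System;

public static class NullableDemo
{
    // Returns the declared (compile-time) type of the variable.
    public static Type GetUnderlyingType<T>(T t)
    {
        return typeof(T);
    }

    public static bool IsNullable<T>(T t)
    {
        Type tx = GetUnderlyingType(t);
        return tx.IsGenericType && tx.GetGenericTypeDefinition() == typeof(Nullable<>);
    }

    public static void Main()
    {
        int? i = 5;
        int j = 5;

        Console.WriteLine(IsNullable(i)); // True
        Console.WriteLine(IsNullable(j)); // False

        // Boxing erases the nullability, so the check no longer works:
        Console.WriteLine(IsNullable((object)i)); // False
    }
}
```

Note the last call: once the nullable value is boxed to object, the generic type parameter is inferred as object and the declared nullable type is lost, which is exactly the boxing behavior described above.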

How the Windows CRT Checks for a Managed Module

While debugging an issue I came across an interesting piece of code in the Windows CRT source. Until now I have been using the code below to determine whether an executable is managed or native; it simply checks for the presence of the IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR data directory entry in the PE headers.

 
#include <windows.h>
#include <dbghelp.h>   // ImageNtHeader; link with Dbghelp.lib
#include <iostream>

using std::cout;
using std::endl;

int CheckCLRHeader(PBYTE pbFile)
{
    PIMAGE_DOS_HEADER pDOSHeader;
    PIMAGE_NT_HEADERS pNTHeader;
    PIMAGE_OPTIONAL_HEADER32 pNTHeader32;
    PIMAGE_OPTIONAL_HEADER64 pNTHeader64;
    PIMAGE_DATA_DIRECTORY pDataDirectory;

    if(NULL == pbFile)
    {
        cout << "Invalid file" << endl;
        return 0;
    }

    pDOSHeader = reinterpret_cast<PIMAGE_DOS_HEADER>(pbFile);
    if(IMAGE_DOS_SIGNATURE != pDOSHeader->e_magic)
    {
        cout << "Cannot find DOS header. Not an executable file" << endl;
        return 0;
    }

    pNTHeader = ImageNtHeader(pbFile);
    if(NULL == pNTHeader)
    {
        cout << "Cannot find PE header. Invalid or corrupt file" << endl;
        return 0;
    }

    if(IMAGE_NT_SIGNATURE != pNTHeader->Signature)
    {
        cout << "Cannot find PE signature. Invalid or corrupt file" << endl;
        return 0;
    }

    switch(pNTHeader->OptionalHeader.Magic)
    {
        case IMAGE_NT_OPTIONAL_HDR32_MAGIC:
            pNTHeader32 = reinterpret_cast<PIMAGE_OPTIONAL_HEADER32>(&pNTHeader->OptionalHeader);
            if(pNTHeader32->NumberOfRvaAndSizes <= IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR)
            {
                cout << "No CLR Data Directory. Not a Managed Assembly" << endl;
                return 0;
            }
            pDataDirectory = pNTHeader32->DataDirectory;
            break;

        case IMAGE_NT_OPTIONAL_HDR64_MAGIC:
            pNTHeader64 = reinterpret_cast<PIMAGE_OPTIONAL_HEADER64>(&pNTHeader->OptionalHeader);
            if(pNTHeader64->NumberOfRvaAndSizes <= IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR)
            {
                cout << "No CLR Data Directory. Not a Managed Assembly" << endl;
                return 0;
            }
            pDataDirectory = pNTHeader64->DataDirectory;
            break;

        default:
            cout << "Invalid NT header. Invalid or corrupt file" << endl;
            return 0;
    }

    if(NULL == pDataDirectory)
    {
        cout << "Cannot find data directories. Invalid or corrupt file" << endl;
        return 0;
    }

    if(0 == pDataDirectory[IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR].VirtualAddress ||
        0 == pDataDirectory[IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR].Size)
    {
        cout << "COM Data Directory not found. Not a Managed Assembly" << endl;
        return 0;
    }

    cout << "Managed Assembly" << endl;

    return 1;
}
 

The check_managed_app function in crtexe.c is similar: it checks the same IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR entry, but only tests the VirtualAddress field, not the Size.

 
/***
*check_managed_app() - Check for a managed executable
*
*Purpose:
*       Determine if the EXE the startup code is linked into is a managed app
*       by looking for the COM Runtime Descriptor in the Image Data Directory
*       of the PE or PE+ header.
*
*Entry:
*       None
*
*Exit:
*       1 if managed app, 0 if not.
*
*Exceptions:
*
*******************************************************************************/
 
static int __cdecl check_managed_app (
        void
        )
{
        PIMAGE_DOS_HEADER pDOSHeader;
        PIMAGE_NT_HEADERS pPEHeader;
        PIMAGE_OPTIONAL_HEADER32 pNTHeader32;
        PIMAGE_OPTIONAL_HEADER64 pNTHeader64;
 
        pDOSHeader = (PIMAGE_DOS_HEADER)&__ImageBase;
        if ( pDOSHeader->e_magic != IMAGE_DOS_SIGNATURE )
            return 0;
 
        pPEHeader = (PIMAGE_NT_HEADERS)((char *)pDOSHeader +
                                        pDOSHeader->e_lfanew);
        if ( pPEHeader->Signature != IMAGE_NT_SIGNATURE )
            return 0;
 
        pNTHeader32 = (PIMAGE_OPTIONAL_HEADER32)&pPEHeader->OptionalHeader;
        switch ( pNTHeader32->Magic ) {
        case IMAGE_NT_OPTIONAL_HDR32_MAGIC:
            /* PE header */
            /* prefast assumes we are overflowing __ImageBase */
#pragma warning(push)
#pragma warning(disable:26000)
            if ( pNTHeader32->NumberOfRvaAndSizes <=
                    IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR )
                return 0;
#pragma warning(pop)
            return !! pNTHeader32 ->
                      DataDirectory[IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR] .
                      VirtualAddress;
        case IMAGE_NT_OPTIONAL_HDR64_MAGIC:
            /* PE+ header */
            pNTHeader64 = (PIMAGE_OPTIONAL_HEADER64)pNTHeader32;
            if ( pNTHeader64->NumberOfRvaAndSizes <=
                    IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR )
                return 0;
            return !! pNTHeader64 ->
                      DataDirectory[IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR] .
                      VirtualAddress;
        }
 
        /* Not PE or PE+, so not managed */
        return 0;
}
 

I came across this code while debugging past the main() function. I thought it might be a new piece of code in the CRT source that ships with VS2008, only to be proven wrong: searching shows it in the CRT source of every version of Visual Studio since VS2002. Another interesting discovery (at least for me) is that similar code appears in several CRT source files. In the VS2008 CRT source, crt0dat.c, crt0.c, and crtexe.c contain the check_managed_app function in the Win32 CRT, and crt0dat.c in the WinCE CRT. In VS2005 the same function is in crt0.c, crt0dat.c, and crtexe.c; in VS2003 and VS2002 it is in crt0.c and crtexe.c. There are other gems of code in the CRT source waiting to be discovered!

How to determine event subscribers

While debugging a reasonably complex .NET application, you may often come across a situation where you want to know which methods have subscribed to your event. If you are using VS2005 or VS2008, then DebuggerTypeProxyAttribute, a relatively unknown feature of Visual Studio, comes to your rescue.

For a long time Visual Studio has allowed developers to customize how a variable is displayed in the IDE during debugging. Before VS2005 the customization was done using the mcee_cs.dat (C#), mcee_mc.dat (MC++), and autoexp.dat (C++) files. Starting with VS2005, this customization is taken to the next level: two new attributes, DebuggerDisplayAttribute and DebuggerTypeProxyAttribute, allow you to change the information displayed in the Visual Studio debugger. Typically you would apply these attributes to classes in your own code. However, there is a trick that lets you use these attributes outside of the original code, which means you can change the information displayed for ANY class, even classes you did not write yourself.

VS2005 and above use AutoExp.dll, which provides display information about various types to the debugger. The source for AutoExp.dll ships with the Visual Studio installation as AutoExp.cs in the Common7\Packages\Debugger\Visualizers\Original\ folder. You can add your own code to this file, compile AutoExp.dll, and place it either in the Common7\Packages\Debugger\Visualizers folder or in the My Documents\Visual Studio 2005 (or Visual Studio 2008)\Visualizers folder. I learned this trick a while back from John Robbins' excellent book Debugging .NET 2.0 Applications. It turns out that Visual Studio loads all the valid assemblies from the Visualizers folder, and if an assembly contains a visualizer or other debugger-related objects, it will use it during the debugging session.
Now back to the topic of this post. To easily display which methods have subscribed to an event, I wrote a class called MulticastDelegateDebuggerProxy for the MulticastDelegate class. Since all events are delegates derived from MulticastDelegate, my MulticastDelegateDebuggerProxy class will be used for all events. The DebuggerTypeProxyAttribute is applied at the assembly level to tell the Visual Studio debugger that MulticastDelegateDebuggerProxy is the proxy for MulticastDelegate. The code for MulticastDelegateDebuggerProxy is listed below.

 
using System;
using System.Diagnostics;
using System.Reflection;
using System.Text;
 
[assembly: DebuggerTypeProxy(typeof(MulticastDelegateDebuggerProxy),
		Target = typeof(MulticastDelegate))]
 
public class MulticastDelegateDebuggerProxy
{
	private string[] m_methods;
 
	public MulticastDelegateDebuggerProxy(System.MulticastDelegate del)
	{
		m_methods = GetMethodList(del);
	}
 
	[DebuggerDisplay(@"Subscribers:\{{Methods.Length}}")]
	public string[] Methods
	{
		get { return m_methods; }
	}
	private string[] GetMethodList(System.MulticastDelegate myEvent)
	{
		string retType = myEvent.Method.ReturnType.Name;
 
		Delegate[] delegates = myEvent.GetInvocationList();
		string[] methods = new string[delegates.Length];
 
		for (int i = 0; i < delegates.Length; ++i)
		{
			Delegate del = delegates[i];
			MethodInfo m = del.Method;
			string accessModifier =
				m.IsPrivate ? "private" : (m.IsPublic ? "public" : (m.IsFamily ? "protected" : ""));
			StringBuilder sb = new StringBuilder();
			foreach (ParameterInfo param in m.GetParameters())
			{
				sb.Append(param.ParameterType.FullName).Append(" ").Append(param.Name).Append(",");
			}
			if (sb.Length > 0)
			{
				sb.Remove(sb.Length - 1, 1);
			}
			string isStatic = m.IsStatic ? "static" : string.Empty;
			string isVirtual = m.IsVirtual ?
				(((m.Attributes & MethodAttributes.NewSlot) == MethodAttributes.ReuseSlot)
				? "override" : "virtual") : string.Empty;
			methods[i] = string.Format("{0} {1} {2} {3} {4}.{5}({6})",
				accessModifier, isStatic, isVirtual, retType, m.ReflectedType.FullName, m.Name, sb.ToString());
		}
 
		return methods;
	}
}
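To try the proxy out, build AutoExp.dll with the class above, then debug something like the following and hover over the event field after the subscriptions. Publisher and Subscriber are made-up names for this example:

```csharp
using System;

public class Publisher
{
    public event EventHandler SomethingHappened;

    public void Raise()
    {
        EventHandler handler = SomethingHappened;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

public class Subscriber
{
    public void OnSomething(object sender, EventArgs e) { }
    public static void OnSomethingStatic(object sender, EventArgs e) { }
}

public static class ProxyDemo
{
    public static void Main()
    {
        Publisher pub = new Publisher();
        Subscriber sub = new Subscriber();

        pub.SomethingHappened += sub.OnSomething;
        pub.SomethingHappened += Subscriber.OnSomethingStatic;

        // Set a breakpoint here and inspect pub.SomethingHappened:
        // with the proxy installed, the Methods array lists both
        // subscribers with their full signatures.
        pub.Raise();
    }
}
```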

The information displayed in VS2005 by default is shown below.
Default Event information in VS2005

The information displayed in VS2005 with MulticastDelegateDebuggerProxy is shown below. The MulticastDelegateDebuggerProxy displays fully qualified method name and the parameter names used in the method.

Event information in VS2005 with MulticastDelegateDebuggerProxy
In VS2008 the default display is slightly better than in VS2005: it shows _invocationList by default, which can be drilled into to reach the actual methods. However, the default display can still be ambiguous, as the screenshot below shows. The MulticastDelegateDebuggerProxy does a much better job of displaying the same information.

Default event information in VS2008

Learning .NET debugging using WinDBG

If you want to learn advanced debugging of .NET applications using WinDBG but don't know where to start, look no further. Tess Ferrandez has started posting debugging labs to teach you how to debug tough problems. Her blog has invaluable tips about debugging real-world problems: she posts case studies and explains how to approach and solve them. In each debugging lab, she posts a buggy application with instructions for troubleshooting the problem; a couple of days later, she posts a review of the lab where she walks you through the troubleshooting in great detail. So far she has posted 3 debugging labs (one, two, three) and 2 lab reviews (one, two), out of a planned total of 10. These labs are a great learning tool and, best of all, they are free! Thank you Tess!

Ghazal-E-Surat

Ghazal is a form of poetry that was introduced to India by the Moghuls in the 12th century. While Ghazals can be written in any language, in India they are most popularly written in Urdu, Hindi, and Gujarati. When it comes to Gujarati Ghazals, the city of Surat is considered the front-runner, with the most popular Ghazalkars or Shayers (i.e. poets) coming from Surat. In an unprecedented move, a publisher from Surat has published a book called Ghazal-E-Surat that contains about 76 Ghazals from 41 living Ghazalkars. I came across this news on the blog of Dr. Vivek Tailor, a Ghazalkar from Surat, and on Layastro, a leading Gujarati Ghazal site. One of the Ghazalkars published in the book is my grandfather, H. N. Keshwani. My grandfather is well known in the Gujarati Ghazal circle in Surat, but little known outside of it. I am very proud of my grandpa for his achievements, and very happy that he was recognized by the publishers of Ghazal-E-Surat. Among the Ghazalkars' pictures on the front and back covers of the book, my grandpa's is the first picture on the third row of the front cover.

Ghazal-E-Surat

Image courtesy: Dr. Vivek Tailor who is also published in the book.

Vista Dual Monitor Woes

This issue with the dual monitor setup in Vista has been bugging me a lot. I have two 19" Westinghouse monitors with both DVI and VGA inputs, and an ATI Radeon 9200 video card with one DVI and one VGA output. The left (primary) monitor is connected via DVI and the right one via VGA. The dual monitor setup works fine except for one problem: when I turn the left primary monitor (the DVI one) off and back on, I get no signal. I then have only two options: reboot the machine, or log in from another machine using Remote Desktop. As soon as I log in over RDP, the monitor comes back to life. I Googled dual-monitor issues in Vista and followed various suggestions, including installing the latest ATI Catalyst video drivers, disabling monitor power-off in the power management settings, and disabling the Transient Multi-Monitor Manager (TMM) task, but nothing has helped so far. I have found only one reference to the exact same issue I am facing, and it has no resolution. The exact same setup works fine on Windows XP.

DEP, NXCOMPAT Redux

I recently wrote about the impact of DEP on .NET applications due to changes in .NET Framework 2.0 SP1. A couple of days later I came across Michael Howard's post about new DEP-related APIs. As Michael points out, in Windows Vista SP1, Windows XP SP3, and Windows Server 2008 it is possible for developers to enable DEP for an application at runtime instead of using a linker switch or being forced into the NXCOMPAT bit by the C# compiler. The new function, SetProcessDEPPolicy, allows the calling process to change its own DEP settings. I think this is a much more flexible option since it puts application developers in control of using DEP for their applications. Michael also explains the options in SetProcessDEPPolicy that allow an application using old ATL code to enable DEP without throwing a DEP exception at runtime. Raymond Chen has written about how, on Windows XP, DEP will not throw an exception for what he calls "well-known DEP-violating thunks"; the new DEP API brings this behavior to Windows XP SP3, Vista SP1, and Windows Server 2008. Now if only Microsoft would release a hotfix for the C# compiler to provide a /NXCOMPAT switch, the .NET development world would be a much better place.

How JNI calls functions from DLL

When a function is exported from a DLL, the compiler decorates (mangles) the function name. To export the undecorated name, the linker needs a module definition file (.DEF). A while back I was working on a Java project that calls functions from a DLL using JNI. I had created a DLL that exported some functions to be called from the Java application. Just as I started testing, I realized that I had forgotten to update the .DEF file. I expected the Java application to fail when it tried to call the functions in the DLL; to my surprise, the calls succeeded. Curious to find out why, I wrote a sample Java application with a single class called Hello that calls a native function sayHello from a DLL called Hello.DLL. Running javah on the Java source generated a header file declaring a native function named Java_Hello_sayHello. I created the DLL and made sure the exported function was undecorated. Then I ran the Java application under the fantastic Dependency Walker's lightweight profiler. Looking at the output window in Depends, it became clear why a call succeeds even when the function is exported only in decorated form: JNI first tries to call the function using two decorated names before falling back to the undecorated name, as illustrated in the image below.

JNI calling native function

The JNI specification describes the naming convention for native functions. However, it does not mention that, when calling functions from a native library, the runtime will try both the decorated and the undecorated names. Another peculiar behavior I noticed: for the other JNI functions a native library can export, such as JNI_OnLoad and JNI_OnUnload, the runtime tries only one decorated name before calling the undecorated one, as shown above. I find this very strange.

DEP, NXCOMPAT and changes in .NET Framework 2.0 SP1

This topic has been written about, but not as much as it should have been, hence I am repeating it here. It will also serve as a future reference for me.

DEP stands for Data Execution Prevention. It is a hardware-based feature, supported by both Intel and AMD processors, that disables execution of code residing in memory pages marked as data. On AMD processors the feature is implemented as the NX (No eXecute) bit; on Intel processors it is implemented as the XD (eXecute Disable) bit. An OS can expose this feature and let applications take advantage of the protection. Once enabled, it becomes very difficult to inject and run malicious code using popular exploits such as buffer overflows, which inject code into stack or heap memory and redirect the instruction pointer (EIP) to execute it.

Microsoft has made this feature available in its operating systems starting with Windows XP Service Pack 2. It can be enabled either system-wide or per application in Windows XP Service Pack 2, Windows XP Tablet PC Edition 2005, and Windows Server 2003. In Windows Vista the feature is enabled by default for all system applications; Vista also has an opt-out mode that enables DEP for all applications (system as well as user) except those on an opt-out list. Robert Hensing explains this in great detail in his post.

For VC++ developers there is one more way to take advantage of DEP: Visual Studio 2003 and above provide a linker switch called /NXCOMPAT, which sets a bit in the executable indicating to the OS loader that the application is DEP-enabled. However, there is one catch. Setting this bit overrides all other DEP settings; even if DEP is disabled system-wide, or the application is opted out of DEP, the OS will still enable DEP for that application! This is where the problem starts for applications compiled using .NET 2.0 SP1: the C# compiler was updated to set this bit in the assemblies it creates, and unlike the VC++ linker, there is no way to turn it off.
If your application does not use native code, then you have nothing to worry about. If it does, you may be in for a rude shock: your application might suddenly crash with IP_ON_HEAP errors. The old ATL framework generated code on the fly that ran on the stack or heap, so if your .NET application uses a component or an ActiveX control built with the old ATL framework, it will fail. The fix is to recompile the native code using VS2003 or above; if it is a third-party component that the vendor has not updated, you may be out of luck.

Don't panic, however! There is a ray of hope. You can clear the NXCOMPAT bit set by the C# compiler using the EditBin utility that is part of the Visual Studio installation. If your assembly is strongly named, you will then have to re-sign it using the SN utility.

I strongly believe that Microsoft made a serious mistake by deciding to set this bit by default on all assemblies. It would have been much better to provide a switch, just like the C++ linker. This new C# compiler behavior is not well publicized by Microsoft; none of the .NET developers I have spoken to so far even know what DEP or NXCOMPAT is. Following the pure managed code mantra, most .NET developers no longer think about native code, even when their applications use it. Yet there is still a lot of native code in the enterprise that is incompatible with DEP, and plenty of .NET applications that use such code. This creates quite a problem when unsuspecting developers find that their application suddenly stops working simply because it was recompiled under .NET 2.0 SP1. There are other breaking changes as well, plus new properties, methods, and types added in SP1, as mentioned in a post by Scott Hanselman. The situation is made even worse by the release of Visual Studio 2008, since it installs .NET FX 2.0 SP1.
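For reference, from a Visual Studio command prompt the EditBin/SN workaround looks roughly like this (MyApp.exe and MyKeyPair.snk are placeholder names for your own assembly and key file):

```shell
dumpbin /headers MyApp.exe        # look for "NX compatible" under DLL characteristics
editbin /NXCOMPAT:NO MyApp.exe    # clear the NXCOMPAT bit set by the C# compiler
sn -R MyApp.exe MyKeyPair.snk     # re-sign if the assembly is strongly named
```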
Developers are keen to take advantage of the new C# 3.0 features in .NET 2.0 applications using VS2008's multi-targeting feature. In an enterprise application development scenario, it is quite possible that developers will have .NET 2.0 SP1 on their development and test machines because of VS2008, while the user desktops do not have SP1. If the application uses new methods or types, it will fail on the user desktops. One might be inclined to blame developers for developing and testing in an environment that differs from the users', but this scenario is highly likely to occur precisely because the changes in .NET 2.0 SP1 are not well known. I have been able to find a list of bug fixes in SP1, but there is no knowledge base article that lists the differences between .NET 2.0 and SP1, only a few scattered blog entries. It is a very bad idea to make such significant changes in a service pack release. It is, however, a reality. . . so developers beware!