Build fails only on the build server with delay-signed assemblies

[Any CPU/Release] SGEN(0,0): error : Could not load file or assembly '<YOUR ASSEMBLY>' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A)

I got that error from our build machine. The build crashed spectacularly and told me that an assembly could not be loaded.

After an hour of searching, I finally found the problem.

We are in an environment where we delay-sign all of our assemblies and fully sign them on the build server as an “AfterDrop” event. Of course, we register a “Skip Verification” entry for the public key token we are using so that we can put the assemblies in the GAC.

All of our projects (more than 20) are built exactly that way, and I couldn’t see why this one would fail. I then decided to look at what depended on this assembly: one project, and it was the one failing in the “Build Status”.

I then found something used in that specific project that appeared nowhere else: somebody had used the “Settings” tab to store application settings. Not a bad choice, and a perfectly sound decision, but… how could it make the build crash?

Well… it turns out that using “Settings” forces a call to SGEN.exe, and SGEN.exe won’t accept partially signed assemblies. That’s when I figured out that our build server didn’t have any of those “Skip Verification” entries.

After searching Google with a few different terms, I found a way to deactivate the call to SGEN under the project’s “Build” tab. It’s called “Generate Serialization Assembly” and defaults to “Auto”. After setting the value to “Off” for the “Release” configuration only, the build was fixed and we were happy campers.
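Behind the scenes, that option is just an MSBuild property in the project file. As a sketch (the property name is the one MSBuild uses; your configuration/platform condition may differ):

```xml
<!-- In the .csproj, inside the Release property group:
     turns off the SGEN call for that configuration only -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <GenerateSerializationAssemblies>Off</GenerateSerializationAssemblies>
</PropertyGroup>
```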

Code Snippet: Filtering a list using Lambda without a loop

This is not the latest news of the day… but if you are doing filtering with loops, you are doing it wrong.

Here’s a code sample that easily replaces such a loop:

List<string> myList = new List<string> { "Maxim", "Joe", "Obama" };
myList = myList.Where(item => item.Equals("Obama")).ToList();
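For comparison, here is a minimal sketch of the loop that the Where call above replaces:

```csharp
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        List<string> myList = new List<string> { "Maxim", "Joe", "Obama" };

        // The manual version: build a second list and copy matching items.
        List<string> filtered = new List<string>();
        foreach (string item in myList)
        {
            if (item.Equals("Obama"))
            {
                filtered.Add(item);
            }
        }
        myList = filtered;
    }
}
```

One line of LINQ replaces eight lines of loop, and the intent (“keep the items equal to Obama”) reads directly from the code.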

While this might sound too easy, it works fine. Just make sure the objects you filter this way are not ActiveRecord DAL objects, or you might be sorry about the performance: this kind of filtering runs in memory, after every record has already been loaded.

WSS/SharePoint Installation - Stand-Alone vs Web Front-End

A small post about that today. I ended up losing a good part of my day redoing a SharePoint installation on a VMware machine. Why?

Because at first I installed it in Stand-Alone mode, which sounded like a good idea at the time. What that mode does is create a SQL Server Express database that won’t accept connections from anyone. To add insult to injury, it also doesn’t work with all the normal components of a SharePoint installation.

So… after reverting to a working snapshot… redoing all my Windows Updates… and reinstalling SharePoint… I ended up with a working installation.

So, as a small reminder: don’t forget to specify “Web Front-End” when first prompted by the installation. It will save you a lot of time.

Cross-Cutting Concerns should be handled on ALL projects. No Excuses

The title says it all. Cross-cutting concerns should be handled, or at least given some thought, on ALL PROJECTS. No exceptions. No excuses.

Before we go further, what is a cross-cutting concern? Here is the definition from Wikipedia:

In computer science, cross-cutting concerns are aspects of a program which affect (crosscut) other concerns. These concerns often cannot be cleanly decomposed from the rest of the system in both the design and implementation, and result in either scattering or tangling of the program, or both.

The perfect example is error handling. Error handling is not part of an application’s main model, but it is required for developers to catch errors and log or display them. Logging is also a cross-cutting concern.

So let’s look at the three most important concerns:

  • Exception Management
  • Logging/Instrumentation
  • Caching

Exception Management

This is the most important one. It seems basic, really: wrapping code in try {…} catch {…} and making sure everything works is the most elementary thing to do. Yet I’ve seen projects without it. Honestly… it’s bad. Really bad. Everything worked fine, but when something went wrong, nothing could handle it properly.

That said, adding exception handling to each and every method in an application is not reasonable either.

Here is a small checklist for handling exceptions:

  1. Don’t catch an exception if there is no relevant information you can add.
  2. Don’t swallow an exception (an empty catch) without a good reason.
  3. Make sure exceptions are managed at the contact points between layers, so relevant information can be added.
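As a minimal sketch of point 3 (the class and exception names here are hypothetical, not from any real project), a data-layer boundary can wrap a low-level exception to add context without losing anything:

```csharp
using System;

// Hypothetical exception type exposed by the data layer.
public class DataAccessException : Exception
{
    public DataAccessException(string message, Exception inner)
        : base(message, inner) { }
}

public class CustomerRepository
{
    // The layer boundary: exceptions are managed here so relevant
    // information can be added before the error bubbles up.
    public string GetCustomerName(int id)
    {
        try
        {
            return ExecuteQuery(id); // hypothetical low-level data access
        }
        catch (InvalidOperationException ex)
        {
            // Point 1: add relevant information (which layer, which id).
            // Point 2: don't swallow - the original exception is kept
            //          as the InnerException.
            throw new DataAccessException(
                "Failed to load customer " + id + " from the database.", ex);
        }
    }

    private string ExecuteQuery(int id)
    {
        throw new InvalidOperationException("Connection is closed.");
    }
}
```

The presentation layer can then catch DataAccessException, log it, and show a friendly message, while the full chain of causes stays available for debugging.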

Worst of all, in the project I mentioned you couldn’t tell whether an error was coming from the Presentation, Business or Data layer, which led to horrible debugging sessions.

Which brings us to the next section….

Logging / Instrumentation

When an exception is thrown, you want to know why. Logging lets you record everything at a location chosen for the project you are on (database, flat file, WMI, Event Log, etc.). Most people already log a lot of things in their code but… is what’s logged relevant?

Logging is only useful when it’s meaningful. Bloating your software with logging won’t do any good. Log too much, and nobody will inspect the logs for things that went wrong. Log too little, and the software could generate errors for a long time before anybody realizes.

I won’t go into too much detail, but if you want to read about code-to-logging ratios and the problems with logging, there is plenty of information out there.

Caching

Caching is too often considered the root of all evil. However, it is only evil when the code becomes unreadable, or when it takes a developer four hours to get a 1% gain.

I once coded a part of an application that generated XML for consumption by a Flash application. I didn’t have any specifications, but I knew that if I left it uncached, I would have a bug report on my desk the next day. The caching I added kept the Flash application responsive while keeping the server load under control.

Caching is too often pushed back to a later time; it should be considered every time a service or any dynamically generated content is served to the public. Responsiveness goes up, while the amount of code stays limited to what is necessary.
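The idea can be sketched as a tiny helper class (the names are mine, not from any library): cache a generated value in memory and only regenerate it once its short lifetime expires.

```csharp
using System;

// Minimal "micro cache" sketch: serve a generated value from memory for a
// short duration (even one second helps) so that bursts of requests don't
// regenerate it every time.
public class MicroCache<T>
{
    private readonly Func<T> generate;
    private readonly TimeSpan duration;
    private readonly object gate = new object();
    private T cachedValue;
    private DateTime expiresAt = DateTime.MinValue;

    public MicroCache(Func<T> generate, TimeSpan duration)
    {
        this.generate = generate;
        this.duration = duration;
    }

    public T Get()
    {
        lock (gate)
        {
            if (DateTime.UtcNow >= expiresAt)
            {
                // Regenerate at most once per duration,
                // no matter how many requests come in.
                cachedValue = generate();
                expiresAt = DateTime.UtcNow + duration;
            }
            return cachedValue;
        }
    }
}
```

For the XML-for-Flash scenario above, the cache would be created once (say, a MicroCache<string> around the XML generation with a one-second duration) and every request would simply call Get().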

If you need more arguments, take a look at ASP.NET Micro Caching: Benefits of a One-Second Cache.

How do I do it?

If you haven’t been paying attention for the last 10 years or so, I suggest taking a look at Enterprise Library. It’s a great tool that lets you handle all those cross-cutting concerns without having to build your own plumbing.

If you don’t want to use Enterprise Library, there are plenty of other frameworks that will let you handle those concerns.

Just remember: good coders code, great coders reuse.

Creating a StreamProxy with Delegate/Lambda to read/write to a file

I recently saw a question on Stack Overflow about what delegates are good for. The accepted answer, from Jon Skeet, listed:

  • Event handlers (for GUI and more)
  • Starting threads
  • Callbacks (e.g. for async APIs)
  • LINQ and similar (List.Find etc)
  • Anywhere else where I want to effectively apply “template” code with some specialized logic inside (where the delegate provides the specialization)

I was once asked, “What is the use of a delegate?”. The best answer I could come up with was “to delay the call”. Most people see delegates as events, most of the time. However, they can be put to much greater use. Here is an example that I’ll gladly share with you all:

using System;
using System.IO;

public class StreamProxy<T> where T : Stream
{
    private readonly Func<T> constructorFunction;

    private StreamProxy(Func<T> constructor)
    {
        constructorFunction = constructor;
    }

    public void Write(Action<StreamWriter> func)
    {
        using (T stream = constructorFunction())
        using (StreamWriter streamWriter = new StreamWriter(stream))
        {
            func(streamWriter);
            streamWriter.Flush();
        }
    }

    public string Read(Func<StreamReader, string> func)
    {
        using (T stream = constructorFunction())
        using (StreamReader streamReader = new StreamReader(stream))
        {
            return func(streamReader);
        }
    }

    public static StreamProxy<T> Create(Func<T> func)
    {
        return new StreamProxy<T>(func);
    }
}

To summarize what it does… it accepts a delegate that returns a class deriving from “Stream”, which it uses as a constructor. It gives you back a StreamProxy object that you can then use to read or write strings on that stream. The interesting part is that when the proxy is first created… nothing is done to the file. You are just giving the class instructions on how to access it. When you then read from or write to the file, the class knows how to manage the stream and makes sure no locks are left on the file.

Here is a sample usage of that class:

// Here I use a FileStream but it could also be a MemoryStream or anything that derives from Stream
StreamProxy<FileStream> proxy = StreamProxy<FileStream>.Create(() => new FileStream(@"C:\MyTest.txt", FileMode.OpenOrCreate));

// Writing to the file
proxy.Write(stream => stream.WriteLine("I am using the Stream Proxy!"));

// Reading from the file
string contentOfFile = proxy.Read(stream => stream.ReadToEnd());

That’s all folks! As long as you can give a Stream to this proxy, you won’t need any “using” blocks in your own code, and everything will stay clean!

See you all next time!

Xml Serialization Made Easy

Most of the people I’ve seen dealing with XML take different approaches to handling the content.

If you want to read XML content, you normally have many ways to go. Some people use XPath on an XmlDocument object to retrieve the values they want. Others browse node by node to get there.

Do you see the main problem here? It’s not that you won’t be able to read your information. The problem is the type conversions, plus all the “navigation” code needed to reach the information. The easy way to do away with conversions and navigation code is to use the XmlSerializer.

using System.IO;
using System.Text;
using System.Xml.Serialization;

[XmlRoot(ElementName = "Library")]
public class Library
{
    [XmlAttribute("id")]
    public int ID { get; set; }

    [XmlAttribute("name")]
    public string Name { get; set; }

    public string ToXml()
    {
        StringBuilder sb = new StringBuilder();
        XmlSerializer serializer = new XmlSerializer(typeof(Library));
        using (StringWriter sw = new StringWriter(sb))
        {
            serializer.Serialize(sw, this);
        }
        return sb.ToString();
    }

    public static Library FromXml(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Library));
        using (StringReader sr = new StringReader(xml))
        {
            return serializer.Deserialize(sr) as Library;
        }
    }
}

This will easily produce XML with one root node carrying two attributes, id and name. The serializer will also handle any children decorated with the proper attributes. It’s an easy way to serialize and deserialize XML without having to mess with XPath, XmlDocument, node navigation, etc.
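As a quick usage sketch (the values are made up, and it assumes the Library class above):

```csharp
Library library = new Library { ID = 1, Name = "Downtown" };

// Round-trip: object -> XML string -> object
string xml = library.ToXml();
Library copy = Library.FromXml(xml);
```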

Little Introduction

Hi everyone,

My name’s Maxim and I’m from Montreal, Quebec in Canada.

I decided to start a blog mainly because I want to have an online presence, and I think I can bring the community a lot of information that is not currently or properly available.

The main focus will be on .NET. I will mainly be posting code, demos, and links that I think are relevant and worth looking into.

I do have a true passion about technology and hope I can actually help out on this side.

See you all later