Is the Ajax Control Toolkit dead?

So I’ve got an engagement to speak about Ajax with Visual Studio 2010. Of course, I’ll be talking about jQuery, but there is also the Ajax Control Toolkit for the people who are still on ASP.NET WebForms.

With such a huge focus right now on ASP.NET MVC, is the Ajax Control Toolkit still relevant?

The website is still there, the link to the forums doesn’t work anymore, and of course there is this gem:

AJAX Control Toolkit Release Notes - September 2009 Release Version 3.0.30930

It has been a year and a half since the last major release. Of course, the CodePlex project is still there, but the reviews are not encouraging me to use it.

Hell, the project can hardly be called actively maintained anymore. One commit a month is not what I call support for a project of this size.

So the question remains… is it still relevant today? Will it ever be updated in the future?

Comparing Unity 2.0 to Ninject 2.2.0.0 in an ASP.NET MVC 3 project

Tools used

  • Ninject 2.2.0.0
  • Unity 2.0
  • Visual Studio 2010

Initial premise

The way to do Dependency Injection in ASP.NET MVC 3 is simple compared to previous versions. ASP.NET MVC 3 exposes a static class called DependencyResolver. This class has a static method called SetResolver that takes an IDependencyResolver as a parameter. This lets you set a custom resolver that ASP.NET MVC 3 will use to instantiate your controllers. Really easy stuff. As you can see below, this is a really simple interface to implement.
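For reference, here is the interface in question, as exposed by System.Web.Mvc:

public interface IDependencyResolver
{
    object GetService(Type serviceType);
    IEnumerable<object> GetServices(Type serviceType);
}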


Ninject

Ninject (an open-source tool) has shipped one since December 6th, 2010. This lets you pass any configured Ninject StandardKernel to the resolver and hand it to the DependencyResolver.

Unity

Unity, however, doesn’t call it a DependencyResolver. The class we are looking for is UnityServiceLocator. Although it doesn’t seem obvious at first, it’s actually compatible with the DependencyResolver mechanism and can be used with it.
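A minimal sketch of the wiring (assuming the UnityServiceLocator shipped with Unity 2.0 and MVC 3’s SetResolver overload that accepts a common service locator object):

var container = new UnityContainer();
// Register your types on the container here, then hand the locator to MVC
DependencyResolver.SetResolver(new UnityServiceLocator(container));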

Bindings

Ninject

Bindings with Ninject (without extensions) are done like this:

var kernel = new StandardKernel();
kernel.Bind<IHelloWorld>().To<HelloWorld>();

Unity

Bindings with Unity (without extensions) are done like this:

var container = new UnityContainer();
container.RegisterType<IHelloWorld, HelloWorld>();

This shows how easy both are to set up.
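And resolving is just as symmetric; a quick sketch using the IHelloWorld binding above:

// Ninject
var ninjectInstance = kernel.Get<IHelloWorld>();

// Unity
var unityInstance = container.Resolve<IHelloWorld>();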

Extensions

Ninject

There are a lot of extensions available on top of the core Ninject project. Ninject itself only cares about the basic bindings and lets extensions handle the rest. There are extensions for everything: contextual injection, conventions, a message broker, and more.

Unity

Unity doesn’t seem to offer extensions as separate community packages; instead, everything is already included within the library itself. This might not be the best approach, since Unity releases are quite far apart and waiting for an update might take weeks or even months. Unity can handle a UnityContainerExtension, but there is not much out there that was developed by the open source community.
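For the curious, writing one is not hard; a minimal sketch (MyConventionsExtension is a hypothetical name):

public class MyConventionsExtension : UnityContainerExtension
{
    protected override void Initialize()
    {
        // Container is provided by the base class; register types,
        // policies or strategies here.
        Container.RegisterType<IHelloWorld, HelloWorld>();
    }
}

// Usage: container.AddNewExtension<MyConventionsExtension>();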

Which one to choose?

At this point, there isn’t much that differentiates the two. It’s mainly a matter of taste whether you like one syntax or the other. On the extension side, however, Ninject has a definite edge.

The Ninject project is definitely more active, and you can see each and every modification made to the source on GitHub.

Simplifying Ninject Bindings with Ninject.Extensions.Conventions

So in my previous post, we talked about binding interfaces/abstract classes to implementations with Ninject. It was a rather simple task and allowed us to do the job quite easily.

But of course, that example was a simple case and doesn’t look anything like a real production project. A real production project might have 3-4 services with at least 2-3 dependencies each, which may or may not be shared. This can easily explode when adding new functionality to a piece of software. Need to send email? Add a dependency on INotifier. Need to access another service elsewhere? Another dependency. This can grow quite big quite fast, and it becomes a pain to maintain all the bindings.

This is where Ninject.Extensions.Conventions comes in and saves us some time.

Let’s take the following code and binding from our previous example:

using System;
using Ninject;

namespace ConsoleNinject
{
    class Program
    {
        static void Main(string[] args)
        {
            var kernel = new StandardKernel();

            kernel.Bind<IConsoleOutput>().To<MyConsoleOutput>();

            var output = kernel.Get<IConsoleOutput>();
            output.HelloWorld();

            var service = kernel.Get<Service>();
            service.OutputToConsole();

            Console.ReadLine();
        }
    }

    public interface IConsoleOutput
    {
        void HelloWorld();
    }

    public class MyConsoleOutput : IConsoleOutput
    {
        public void HelloWorld()
        {
            Console.WriteLine("Hello world!");
        }
    }

    public class Service
    {
        private readonly IConsoleOutput _output;

        public Service(IConsoleOutput output)
        {
            _output = output;
        }

        public void OutputToConsole()
        {
            _output.HelloWorld();
        }
    }
}

So we only have one line of binding. But don’t forget that this one line will grow to 4, 6, 10, 20 lines as things move forward. We want to simplify this down to the simplest possible case.

This line can be replaced with the following:

kernel.Scan(x =>
{
    x.FromAssembliesMatching("*");
    x.BindWith<DefaultBindingGenerator>();
});

However, running the code after that won’t work. Why? Because the DefaultBindingGenerator binds I{Name} to {Name}. Quite simple, but it’s extensible.

So we need to rename our class MyConsoleOutput to ConsoleOutput, and then everything works!

There you go! Now we can keep adding interfaces and implementations without touching the actual binding code.

On another note, you might want to restrict which assemblies are loaded for the conventions, since the pattern above loads all DLLs in the executing directory.
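For example, something like this (same Scan API, with a hypothetical assembly name filter) only scans your own assemblies:

kernel.Scan(x =>
{
    // Only scan assemblies whose file names start with "ConsoleNinject"
    x.FromAssembliesMatching("ConsoleNinject*");
    x.BindWith<DefaultBindingGenerator>();
});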

Introduction to Ninject 2.1


What is Ninject?

Ninject is an Inversion of Control (a.k.a. IoC) tool that allows you to map interfaces to implementations.

An IoC container’s main job is to instantiate an object that is linked to a requested type. An IoC tool will also control the lifetime of those objects, allowing more flexibility in how they are invoked.

How do I use it?

Well first, you have to download the current version from the Ninject GitHub site.

There, you are met with many different options to match your needs. It is important to note that if you are using a specific Ninject extension, it requires the same version of Ninject (packaged inside the extension’s zip file).

For demo purposes, we’re going to use the .NET 4.0 version of Ninject.

Prerequisites

Now let’s build a simple project. We’ll create a solution containing only a console application for demo purposes and add the Ninject reference.


Then we need an interface and a class implementing that interface to simulate a real-life situation.

public interface IConsoleOutput
{
    void HelloWorld();
}

public class MyConsoleOutput : IConsoleOutput
{
    public void HelloWorld()
    {
        Console.WriteLine("Hello world!");
    }
}

Once this is implemented, we have everything we need to build a simple Ninject demo.

Ninject Implementation

So in a highly coupled application, we would see a direct invocation of MyConsoleOutput, and it would end there. However, when that class changes, or when we want to test the code that uses it, it is impossible to do so without invoking the concrete class. By coding against an interface or an abstract class, we decouple the implementation from the contract.

Next, we need to create the standard Ninject container that will hold “bindings”. Those bindings represent the link between an interface and its implementation. In Ninject, the container is called a kernel and is represented by the class StandardKernel.

var kernel = new StandardKernel();

Then we create the binding between IConsoleOutput and MyConsoleOutput.

kernel.Bind<IConsoleOutput>().To<MyConsoleOutput>();

Finally, we’ll request an instance of IConsoleOutput and execute it. You should see “Hello world!” on your console.

var output = kernel.Get<IConsoleOutput>();
output.HelloWorld();

Well, that was easy! But what if there’s a dependency on another object? Like this:

public class Service
{
    private readonly IConsoleOutput _output;

    public Service(IConsoleOutput output)
    {
        _output = output;
    }

    public void OutputToConsole()
    {
        _output.HelloWorld();
    }
}

Requesting the Service like this and executing it leads to the same result. Note that Service itself doesn’t need a binding: since it’s a concrete type, Ninject can resolve it directly and inject its dependencies.

var service = kernel.Get<Service>();
service.OutputToConsole();

That’s it!

Conclusion

So Ninject resolves the dependencies of specific objects. What isn’t shown in this demo is that dependencies of dependencies are also resolved. This lets you build object graphs really easily without having to write a factory or a huge chain of instantiations. The result is objects that are loosely coupled and highly testable.

Of course, Ninject isn’t the only tool that can do this job, but so far I’ve found it to be the easiest.


Using Ninject with ASP.NET MVC 3 dependency injection

Ninject is my favorite container for dependency injection. However, it has always seemed a bit complicated to use with ASP.NET WebForms.

Things improved with ASP.NET MVC 1, of course, but now, with the release of ASP.NET MVC 3, things have become way easier.

Here is what my Global.asax.cs looks like when I want to register an IoC tool for ASP.NET MVC 3:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
    RegisterDependencyResolver();
}

private void RegisterDependencyResolver()
{
    var kernel = new StandardKernel();

    DependencyResolver.SetResolver(new NinjectDependencyResolver(kernel));
}

Well, that looks easy enough, no? Well, to be fair… NinjectDependencyResolver doesn’t exist out of the box. It’s a custom class that I created to allow easy mapping of the kernel.

The only thing it needs is the Ninject StandardKernel. Once all the mappings are done, we hand the kernel to the resolver and let MVC handle the rest.

Here is the code for NinjectDependencyResolver for all of you to use:

using System;
using System.Collections.Generic;
using System.Web.Mvc;
using Ninject;

public class NinjectDependencyResolver : IDependencyResolver
{
    private readonly IKernel _kernel;

    public NinjectDependencyResolver(IKernel kernel)
    {
        _kernel = kernel;
    }

    public object GetService(Type serviceType)
    {
        // TryGet returns null instead of throwing when no binding exists,
        // which is what MVC expects for optional services.
        return _kernel.TryGet(serviceType);
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        try
        {
            return _kernel.GetAll(serviceType);
        }
        catch (Exception)
        {
            return new List<object>();
        }
    }
}

Tip & Trick: Intellisense for JavaScript in Visual Studio 2010

I know pretty much everyone already knows this, but I would love to remind everyone how to get Intellisense working in Visual Studio 2010.

I mainly use jQuery, and the API is huge. A bit too huge to remember sometimes, so I always have the documentation open in a browser window. To ease my pain, I love to use Intellisense, and here is how you get it to work.

First, open your JavaScript file. Then, drag and drop the file you want to reference onto the top of the document.

That’s it. You now have Intellisense, as long as a “vsdoc” file for your library is available. I’ll even throw in something more for you to enjoy. When you declare an event handler for a jQuery element and want to work with the “event” argument, note that this argument is of type jQuery.Event. Just add /// <param name="variableName" type="jQuery.Event" /> right after the function declaration and it will enable Intellisense on that variable.
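Put together, it looks something like this (the file name is just an example; the drag-and-drop step generates the reference comment for you):

/// <reference path="jquery-1.4.1-vsdoc.js" />

$("#saveButton").click(function (e) {
    /// <param name="e" type="jQuery.Event" />
    e.preventDefault(); // Intellisense now knows "e" is a jQuery.Event
});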

Enjoy!

View Source disabled in Internet Explorer?

Found the fix after scouring the forums.

The main reason is that caching of SSL pages is disabled, so those pages are never written to disk. Somehow, IE doesn’t allow you to view the source of such pages.

To fix the issue, open the registry and go to the following key:

HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\

There should be a REG_DWORD value named “DisableCachingOfSSLPages”, set to “0x00000001”. Change it to “0x00000000” and restart Internet Explorer.
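If you prefer, the same change as a .reg file you can merge (a sketch of the value above; verify the path on your machine first):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"DisableCachingOfSSLPages"=dword:00000000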

This should allow you to view the HTML of your SSL pages when working in a secure environment.

Quick introduction to SOLID

For those who don’t know SOLID, it’s an acronym of acronyms.

SOLID stands for the following:

SRP: Single Responsibility Principle
OCP: Open/Closed Principle
LSP: Liskov Substitution Principle
ISP: Interface Segregation Principle
DIP: Dependency Inversion Principle

Bringing those principles together is credited to Robert C. Martin (a.k.a. Uncle Bob).

You can read more about SOLID on Wikipedia or on Uncle Bob’s website.

Please note that I’m also in the process of writing some nice posts on how to apply these principles to your code.

So stay tuned!

Back to basics: Why should I use interfaces?

So I had an interesting discussion with a colleague about having a clean architecture for a small piece of software he is writing. Since it’s his first step toward SOLID, I wanted to take it easy and see how things were laid out. Since the program was mostly already written, I immediately noticed the lack of patterns and the direct data access in the events of his WinForms application. The conversation went a bit like this:

Me: What is this code with data access in the “OnClick” of your button?
Him: Well, it’s the information I need to execute this command.
Me: Do you know the Model-View-Presenter pattern? Because right now, you are mixing “Presentation”, “Data Access” and “Business Logic”.
Him: I’ve used it before, but it’s been a while. How do you implement it?

So after showing him the pattern and explaining a basic implementation (because there are a lot of different ways to implement this pattern), he asked me the following question:

“Of course, you don’t need to use interface everywhere, right?”

I went on to explain testability and such, but there is something different I wanted to bring to this small discussion and share here. When my class has dependencies injected through the constructor, I have two choices: depend upon the implementation, or depend upon the abstraction (interface/abstract class). What’s the difference, and why is it so important?

MyClass depending upon the abstraction of “MyClassDataAccess”

When your class depends upon the abstraction, it can take any class that implements that abstraction (be it an abstract class or an interface). The implementation can easily be replaced by something else, and that is essential for unit testing your logic.

MyClass depending upon the implementation of “MyClassDataAccess”

When your class depends upon the implementation directly, the only thing that can be sent to it is that specific implementation. Anything else must derive from that class. This couples the caller and the callee really tightly.
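A minimal sketch of both options, reusing the names above (IMyClassDataAccess is the extracted abstraction; the bodies are illustrative):

public interface IMyClassDataAccess
{
    void Save();
}

public class MyClassDataAccess : IMyClassDataAccess
{
    public void Save() { /* real data access */ }
}

// Depends on the abstraction: any implementation (including a mock) fits.
public class MyClass
{
    private readonly IMyClassDataAccess _dataAccess;

    public MyClass(IMyClassDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }
}

// Depends on the implementation: only MyClassDataAccess (or a subclass) fits.
public class MyCoupledClass
{
    private readonly MyClassDataAccess _dataAccess;

    public MyCoupledClass(MyClassDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }
}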

Why is it important ?

When you have a class that accesses services, slow resources (database, disk, etc.) or even a class that you haven’t coded yet, an interface should be used. Of course, it’s not a law. You introduce an interface or abstract class when you need to decouple the implementation of one part of a system from another. That allows me to send in mocked objects and test my requirements and logic. It also brings another advantage that might not be evident at first: the customer changing his mind. When the customer no longer wants to store information in XML but in a database instead. Or when a customer says not to implement “this part of the system” because it will be available through a service. And so on. Interfaces and abstract classes are the oil that makes the engine of your software turn smoothly, letting you replace parts with better or different parts without hell breaking loose because of tightly coupled implementations.

ASP.NET MVC - How does the Html.ValidationMessage actually work?

When you create a basic ASP.NET MVC application, “Html.ValidationMessage” calls are normally inserted automatically for you in the Edit and Create views. Of course, if you type a string into a number field, validation fails. Same thing for dates and such. The good question is… how does it do it?

Well, the ValidationMessage method only looks to see whether the property name you gave it has received errors. If it has, it displays the specified message. So now that we covered the “How”, I’ll show you where it does that.

The answer lies within the DefaultModelBinder that comes activated by default in ASP.NET MVC. The model binder does a best guess to fill your model with the values sent from a post. However, when it can match a property name but can’t set the value (invalid data), it catches the exception and adds it as an error in a ModelStateDictionary. ValidationMessage then picks up data from that dictionary and finds the errors for the right property of your model.
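As a sketch, here is the manual equivalent of what the binder and the helper do between them (the property name “Age” is just an example):

// In the controller: this is what DefaultModelBinder does for you on a failed set.
ModelState.AddModelError("Age", "The value 'abc' is not valid for Age.");

// In the view: ValidationMessage renders the message because "Age" now has an error.
// <%= Html.ValidationMessage("Age") %>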

That’s it! Of course, it’s pretty simple validation, and I would still recommend using a different validation library. There are already a few available in the MVCContrib project on CodePlex.

Simple explication of the MVC Pattern

Since the last time I wrote a blog post was more than a few months ago, I’d like to start by saying that I’m still alive and well. I had changes in my career and my personal life that required some attention, and now I’m back on track.

So for those who know me, I was at TechDays 2009 in Montreal presenting the “Introduction to ASP.NET MVC” session. I will also be presenting the same session in Ottawa (in fact, this blog post is written on the way to Ottawa, with Eric as my designated driver).

So what exactly is ASP.NET MVC? It’s simply Microsoft’s implementation of the MVC pattern, which was first described in 1979 by Trygve Reenskaug (see Model-View-Controller for the full history).

In more detail, MVC is the acronym for Model, View and Controller. We will look at each component and the advantages of separating them properly.

Model

The model is exactly what you would expect. It’s your business logic, your data access layer and whatever else is part of your application logic. This I don’t really have to explain. It’s where your business logic sits, and it should therefore be the most tested part of your application.

The model is not aware of the view or of the controller.

View

The view is where your presentation layer sits. In a web framework, this is mostly ASPX pages, with logic limited to showing the model. This layer is normally really thin and focused only on displaying the model. The logic is mostly limited to encoding, localization, looping (for grids) and such.

The view is not aware of which controller invokes it. The view is only aware of the model to display.

Controller

The controller is the coordinator. It retrieves data from the model and hands it over to the view to display. The controller can also take on cross-cutting concerns such as logging, authorization and performance monitoring (performance counters, timing each operation, etc.).
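A minimal sketch of that coordination (ProductsController, IProductRepository and Product are hypothetical names):

public class ProductsController : Controller
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        // Retrieve data from the model and hand it to the view to display.
        var products = _repository.GetAll();
        return View(products);
    }
}

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

public class Product
{
    public string Name { get; set; }
}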

Advantages

Now, why should you care about all this? First, there is a clear-cut separation between WHAT is displayed to the user and HOW you get the information to display. For a web site, it becomes possible to display different views based on the browser, the device, the capabilities of the device (JavaScript, CSS, etc.) and any other information available to you at the moment.

Among the other advantages is the ability to test your controller separately from your view. If your model is properly done too (coding against abstractions and not implementations), you will be able to test your controller separately from your model and your view.

Disadvantages

It is mostly a web pattern rather than a WinForms pattern. There is currently no serious implementation of the MVC pattern for anything other than web frameworks. The MVC pattern is found in ASP.NET MVC, FubuMVC and other MVC frameworks, which limits your choices to the web.

If you take a specific platform like ASP.NET MVC, other disadvantages (which could also be seen as advantages) slip in. Mostly, you lose drag-and-drop support for server controls. Grids now have to be hand-rolled and built manually instead of relying on the abstractions offered by the original framework.

Conclusions

Since we mostly need finer-grained control over our views, the abstractions offered by the core .NET Framework are normally not extensible or customizable enough for most web designers. Some abstractions might even become unsupported in the future, pushing us toward more precise control of our views. The pattern also allows greater testability than what is offered by default in WebForms (a Page Controller with templating views).

My recommendation is effectively “it depends”. If an application is already built with WebForms and doesn’t have any friction, there is no point in redoing it completely in MVC. However, for any new greenfield project, I would recommend at least taking a look at ASP.NET MVC.

Back from vacation and personal changes

Alright! It’s been a while. Here is what happened during all that time. I went on vacation in August and I had some changes in my life on the way back.

Just to let everyone know, posting should now be more frequent. I’m scheduled to be on the Visual Studio Talk Show around the end of October. Also, don’t miss me at TechDays 2009 in Montreal and Ottawa! Some pretty nice subjects will be covered!

Talk to you all later!

Is your debugger making you stupid?

What is one of the greatest advances of Visual Studio since the coming of .NET? You might think it is the Garbage Collector, or the IL that allows interoperability between languages. I think one of the great advances of Visual Studio 2003 (all the way through Visual Studio 2010) is the debugger. Previous debuggers were hardly as powerful as Visual Studio’s. And that is the problem.

What is a debugger?

The quote from Wikipedia is: “a debugger is a computer program that is used to test and debug other programs”. The debugger is used to find bugs and figure out how to fix them.

The debugger lets you go step by step, step backward, step into, etc. All this in the hope of reproducing a bug.

Why does it make me stupid?

It might be one of the most powerful tools you have at hand. But it’s also one of the most dangerous. It encourages you to test your software yourself instead of having tools do the job for you. Once you are done debugging a module, you will never debug it again unless there is a bug in it. Then you start building around this module, and you test the new modules against the first one. And then the fun starts. You modify the first module but never re-test the first scenario you built.

Now, the next time another developer builds something based on the same module, two things can happen. The first is that the developer is afraid to change the module and duplicates the code in another module to make sure he doesn’t break anything. The second is that the developer changes the module anyway and reruns the application to make sure he didn’t break anything obvious. Okay, there is a third option that consists of adding tests to cover the new behaviour, but we’re not interested in the good stuff here. Just the bad.

Ripples of the Debugger

Okay. After this nice little story, what do you think will happen in the future? As other developers go into the code, they will build on top of the modules again and again. As the modifications keep coming, the module keeps changing. As long as changes are only “debugger tested”, bugs will start to appear in modules that never had bugs before. To “test” the right behaviour, the team will start adding manual test scripts to make sure no bugs are left behind. This will require interns or QA people to run the tests.

The solution?

Infect your code with tests and stop using the debugger. It’s that simple. I know that Ruby doesn’t ship with a debugger integrated into the editors most Ruby developers use, and they still manage to deliver quality code without one. In fact, lots of developers manage to make great software without a debugger. Running without a debugger and without tests, however, is NOT the solution. You must ensure that your code is covered by tests as much as possible. When you find a bug, write a test that reproduces the bug, then fix the production code. As your code gets tested, additional modules will not break it unless they break a test. This is the solution. This is the way to make good and clean code.
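To make the “write a test that reproduces the bug” step concrete, here is a minimal sketch (NUnit syntax; PriceCalculator and the empty-cart bug are hypothetical):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public class PriceCalculator
{
    public decimal Total(IEnumerable<decimal> prices)
    {
        // Sum returns 0 for an empty sequence, which is the fixed behaviour.
        return prices.Sum();
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_WithEmptyCart_ReturnsZero()
    {
        // Reproduces the reported bug before the fix; guards it forever after.
        var calculator = new PriceCalculator();

        var total = calculator.Total(new List<decimal>());

        Assert.AreEqual(0m, total);
    }
}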

The cost of Bad Code

Every developer writes code. Every developer works or has worked on a brownfield project. Working on a brownfield project often makes developers complain about the code being poorly written and hard to maintain. That surely sounds familiar, right?

This is basically a pledge for good code. Bad code makes things worse and costs businesses money.

How much are we talking about?

There is no scientific study on this, primarily because most projects are private and won’t allow studies, and there is still no clear metric that represents clean code. Mostly, metrics can’t represent bad code. So how much money can be saved? Well… bad code hinders maintenance and comprehension, and scares programmers away from changing a class that was working well before. I don’t think we can calculate it precisely, but I think that cyclomatic complexity, LOC per function and code coverage are big indicators of code that is hard to understand and difficult to change.

Code with high cyclomatic complexity and huge LOC per function scares programmers away from making changes. Why? Because we all know that if we change something inside one of those methods, the ripples of change will break something else. This fear can be neutralized by high code coverage of those big methods and/or by splitting them up.

Time for totally unscientific numbers. I think complex code more than doubles the time needed to make modifications. Why? Well… let’s say the developer has to spend a considerable amount of time in the debugger instead of running tests. Tests for a (big) module should take less than 10-15 seconds to run (including test runner initialization). Debugging the same module to verify a behaviour normally takes a minute or two. Rinse and repeat at least a dozen times and you find yourself at a couple of minutes for running the tests versus at least 12 minutes of debugging. And this is just the beginning. If there are no tests, a huge and complex method will literally take at least 10 minutes just to understand (depending on context). A test-“infected” code base allows for quick failure verification without having to spend hours in the debugger. Calculate as much as you want, but… as Robert C. Martin said:

The only way to go fast is to go well.

So, are you saving your company time, or are you costing it money? I think we can all gain something from writing clean code. Companies will save on maintenance costs, and programmers will improve their craft and become better programmers who are proud of what they do.

Improving code quality - 2 ways to go

I’ve been thinking about this for at least a week or two. In fact, ever since I started (and finished) reading the book “Clean Code” by Robert C. Martin. There are probably only two ways to go.

Fix the bad code

This method is called refactoring, or “cleaning” the code. Of course, you can’t truly know which code is bad without a static analysis tool or programmers working in the code. The tool will let you spot pieces of code that could breed bugs and/or be hard to work with. The problem is that refactoring or cleaning up code is really expensive from a business perspective. The trick is to fix it as you interact with the code. It is probably impossible to request time from your company to fix code that could cause bugs. If you ask, you will probably receive this answer: “Why did you write it badly in the first place?”. Which brings us to the other way to improve code quality.

Don’t write it

If you don’t write the bad code in the first place, you won’t have to fix it! That sounds simple to an experienced programmer who has honed his craft over the years, but rookies will definitely leave bad code behind, and eventually you will have to face it. So how do you avoid the big refactoring of mistakes (not just rookies’ mistakes)? I believe that training might be a way to go. When I had only one year of experience in software development, I wrote way too much bad code. I still do. Not that I don’t notice it happening; sometimes things must be rushed, I don’t fully understand the problem, and small abstraction mistakes get in. I write far less bad code than when I started. However, old bad code doesn’t just magically disappear. It stays there.

What about training?

I think that training and/or mentoring might be the way to go. Mentoring might be hard to sell, but training definitely isn’t. Most employees have an amount of money earmarked for them within the company that represents training expenses. What I particularly recommend are courses in object-oriented design or advanced object-oriented design. Hell, you might even consider an xDD course (and by xDD… I mean TDD, BDD, DDD, RDD, etc.). Any of those courses will improve your skills and bring you closer to telling clean code from bad code. Training on a specific framework (like ASP.NET MVC or Entity Framework) will only show you how to get things done with that framework. The latter can be learned on your own or through a good book.

So? What do you all think? Would you rather have a framework course or a “Clean Code” course?

"If you build it, they will come" - Or how to start a community

I’ve always found that best practices in my field are not always respected. Doctors always wash their hands; architects follow all the rules so a building is safe for the people living or working inside it. With software, however, anyone can call himself a “Software Architect” or “Software Developer” without any problem finding a job. Most people in the .NET community will follow what is given to them by Microsoft, be it SharePoint, Entity Framework, LINQ to SQL, Visual Studio, or whatever. Sometimes an alternative is good because it offers you a different view on the state of things.

When I met Greg Young for the first time, it was at a .NET Montreal Community meeting where he was presenting on DDD. We had a beer together and talked about improving the level of developers in Montreal. Improving the level of the average developer in Montreal is a hell of a task. First, there are people like me, Greg and Eric De Carufel who are passionate about their craft and not satisfied with the status quo. We believe in ALT.NET but are most of the time called “passionate programmers”. People like me and Eric are the easy ones to help. Then there are those who want to improve themselves but don’t have time (life, family, house, etc.). They are not easy to attract, and the best way to reach them is internally (official training or coworkers). Then there are those who don’t care about their craft. Those are of no interest to me.

When I had that beer with Greg Young, he talked about acting on what would be needed to improve the level. That is the reason why I started (or at least… am still trying to start) the ALT.NET Montreal Community. We began a month ago. We were only 7 back then. It was small but friendly. Now, on June 25th, we will hold the second Coding Dojo of the ALT.NET Montreal Community.

The important thing to remember when starting a community is, I think, to start! So, if there is anyone from Montreal who wants to help us bootstrap a community, the ALT.NET Montreal Community welcomes you to our next Coding Dojo on June 25th.

My baby steps to PostSharp 1.0

So… you downloaded PostSharp 1.0, installed it, and are wondering… “What’s next?”.

Well my friends, let me walk you through the first steps with PostSharp. What could we do that would be simple enough? Hummm… what about writing to the debug window? That sounds simple enough! Let’s start. I created a new Console Application project and added references to PostSharp.Laos and PostSharp.Public. As a requirement, the aspect class must be tagged with the “Serializable” attribute and inherit from OnMethodBoundaryAspect (not in all cases, but let’s start small here).

Next, there are a few methods I can override. The two we are interested in right now are “OnEntry” and “OnExit”. Inside them, we’ll log which method we are entering and which one we are exiting. Here are my guinea pig classes:

public class FooBar
{
    [DebugTracer]
    public void DoFoo()
    {
        Debug.WriteLine("Doing Foo");
    }

    [DebugTracer]
    public void DoBar()
    {
        Debug.WriteLine("Doing Bar");
    }
}

[Serializable]
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Property)]
public class DebugTracer : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        Debug.WriteLine(string.Format("Entering {0}", eventArgs.Method.Name));
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        Debug.WriteLine(string.Format("Exiting {0}", eventArgs.Method.Name));
    }
}
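To produce a trace, we just call both methods (a minimal sketch of the Main body):

var foobar = new FooBar();
foobar.DoFoo();
foobar.DoBar();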

See how simple this is? But… does it work? Let’s look at the trace from calling each method:

> Entering DoFoo
> Doing Foo
> Exiting DoFoo
> Entering DoBar
> Doing Bar
> Exiting DoBar

Isn’t that wonderful? Compile, execute and enjoy. But… what about the community, you say? Of course, if the tool is not open source, there is probably nothing built around it, right? Wrong!

Here are a few resources for PostSharp that include pre-made attributes ready to be used:

That was everything I could find. Do you know of any others?

PostSharp - The best way to do AOP in .NET

Who knows about Aspect-Oriented Programming (AOP)? Come on! Don’t be shy! OK, now lower your hands. My prediction is that a lot of you didn’t raise them. So let’s recap what AOP is:

Aspect-oriented programming is a programming paradigm that increases modularity by enabling improved separation of concerns. This entails breaking down a program into distinct parts (so called concerns, cohesive areas of functionality). […]

So what does it truly mean? Well, it’s a way to declare that parts of your software (methods, classes, assemblies) have a “concern” applied to them. What is a concern? Logging is one. Exception handling is another. But… let’s go wild… caching, impersonation, validation (null checks, bounds checks) are all concerns. Do you mix them with your code? Right now… you are forced to.

The state of current AOP

Alright, for those who raised their hands earlier, what are you using for your AOP concerns? If you are using the patterns & practices Policy Injection module, well, you are probably not happy. First, all your objects need to be constructed by an object builder and need to inherit from MarshalByRefObject or implement an interface.

This is not the best way, but it’s done the “proper” way, without hacks.

What is PostSharp bringing?

PostSharp might be a “hack” if you want to see it that way. It does require you to have it installed on your machine at compile time for it to work. But… what does PostSharp do exactly? It does what every AOP tool should do: inject code before and after the matching methods at compile time. Not just PostSharp’s own aspects, but any aspect that inherits from the base classes PostSharp offers you. Imagine what you could do if you could tell the compiler to inject ANY code before/after your methods on ANY code you compile. Think of the possibilities. I’ll give you 2 minutes for all this information to sink in… (waiting)… got it? Starting to see the possibilities? All you need to do is put attributes on your methods/properties like this:

[NotNullOrEmpty]
public string Name { get; set; }

[Minimum(0)]
public int Age { get; set; }

Now look at that code and ask yourself what it does exactly. It shouldn’t be hard. The properties won’t allow any number under 0 to be inserted into “Age”, and “Name” will not accept a null or empty string. Any code that tries will get a ValidationException.

Wanna try it?

Go download PostSharp immediately, along with its little friend ValidationAspects on CodePlex. After you have tried them, try to build your own aspects and start cleaning your code to achieve better readability.

And yes… both are Open-Source and can be used at no fee anywhere in your company.

Suggestion to CLR Team

Right now, PostSharp forces us to install it with the MSI for it to work, because it needs to install a post-compile code injector (like some obfuscation tools do). What would be really nice is being able to do the same thing built into the compiler. The compiler already checks for some attributes… I would love to have this “internal working” exposed to the public so that we can build better tools and, more importantly, better code.

UPDATE: I want to mention that PostSharp is NOT open-source. It is, however, free unless you need to package it with your tool.

So I just finished reading Clean Code by Uncle Bob

This must be the most enlightening book I’ve ever read. It’s filled with “evident” knowledge: some of it you have never thought about, and some you just can’t help nodding at in approval.

As everyone knows, I’m a .NET developer, and Uncle Bob is a Java developer (not exactly, but the book’s code is in Java). Some recommendations in the book are targeted at Java developers and don’t apply to .NET.

So? If I had to say what the book is about, what would I say?

I would say:

  • Humans are good at mixing abstraction levels
  • Keep your variables/classes/functions clear and concise
  • Commenting must be done with care, otherwise it just clutters the code
  • Refactor, refactor, refactor. A code base is never perfect, but if you follow the Boy Scout rule, the code base will always end up better
  • Code should always have tests and high coverage

Am I hitting the bull’s eye here? What do you think?