Community Update for November 19th 2013

Yesterday brought not as much ASP.NET content as I would have liked.

However, we’ve got a great new F# package for OWIN as well as some authorization with WebAPI. If you are building on those, make sure you actually read those articles, as securing a Web API is now possible out of the box.

As for Visual Studio and ASP.NET, we get a more detailed explanation of what One ASP.NET is, a Visual Studio 2012 update (2012.2) and a quick look at Visual Studio Online, code named “Monaco”.

Enjoy!

Owin & WebAPI News

ASP.NET & Visual Studio News

Community Updates on ASP.NET, OWIN & Katana

Okay, I might not be original, but I’ve seen the work done by Alexandre Brisebois to keep you guys updated on what is happening in the Azure community. I thought I would try to do the same on my own, but instead of focusing on Azure, I would focus on the best and greatest of ASP.NET, JavaScript and everything web related.

If you guys like it, swing me a comment and I’ll make sure to continue. Here’s what I aggregated over the past few days.

ASP.NET News

OWIN/Katana & WebAPI News

What is unit testing? What’s a good unit test? What tools do you use?

mycodecantfail

I’ve often encountered projects where tests are not deemed important. We need to deliver yesterday. The first thing that goes out the window in this kind of rush is the unit tests. The next thing that pokes its head through the window, however, is bugs. Then, as you make more and more modifications to the code to reach yesterday’s deadline, functionality that used to work doesn’t anymore. This is called a regression.

When you introduce modifications to a system, there’s no way to ensure that nothing that used to work broke other than by testing it. It might be manual testing, where a developer redoes all the steps that could have been impacted by their code. This kind of testing, however, takes a while to run and relies on the developer redoing all the manual tests every time a modification is made.

So…

What is unit testing?

Unit testing is a way to test individual units of your code. Depending on the language you are using, the smallest unit might be a module, a class, a function, a procedure or a method.

In C#, the smallest unit of work that you have is a method. However, in most scenarios a class will be your unit.

Unit tests allow you to write down, in code, the expected behaviors of your code. If one module assumes a certain behavior from another module, changing that behavior could cause a ripple effect of regressions across your system.

What are the characteristics of a good unit test?

Before showing you some examples, we’ll go over some attributes that make a unit test a good one.

Fast

Unit tests must be fast to run. We’re talking milliseconds per test here, not seconds. Since we’ll be creating tests by the dozens, we need them to be fast. Tests that take long to run will simply never be run by any developer.

Easy to run

They should not require setting up a machine or installing software. They should be runnable directly, in one easy step.

Independent

They should not require a specific run order. Tests should be runnable in parallel, in sequence or in any order the test runner decides. This ensures the next attribute.

Repeatable

They should always yield the same result independent of the actual machine.

All those characteristics rule out any test that requires a database, network access or access to specific files on disk. All external resources could potentially be locked, inaccessible or slow to initialize.

Another attribute that might be considered as such is Automatic. Having tests run as part of the build process ensures that no commit breaks any previously tested functionality.

What are the benefits of unit testing?

Fail fast

Having tests that fail as soon as something doesn’t behave as expected will allow you to easily find problems with your code.

Easier refactoring

So you’ve refactored your code and it’s all cleaner now. Do all the tests pass? If yes, mission accomplished. Otherwise, you’ve introduced a regression into your system. Tests on a unit you are refactoring give you the peace of mind that whatever you did, it still works. You will find yourself refactoring much more often as your confidence in your code (and tests) increases.

Code documentation

If your tests are named properly, they will indicate what kind of behaviors you are testing. This allows other developers to know exactly what they broke when one of them fails. It also provides great documentation for developers who just joined your team and are wondering how to use your class. With a whole set of tests, they can just see what kind of behavior that class has.

Design

If you do TDD or any other type of test-first coding, tests will help you design your class the way its users are meant to use it. This might sound funny, but starting with *how to use it* rather than the implementation will lead you to simpler code that is easier to set up.

Techniques that will help you test

Code against an abstraction. Not an implementation.

Coding against an interface or an abstract class might sound like one file too many in your project. However, it provides you a seam to inject dependencies. You have a MailService that sends email? Try coding against its interface (IMailService) instead. When you need to create that dependency, you can use a tool like NSubstitute to create a mock for you, or you can just create one manually. The same can be done for databases and network resources like web services.
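The same idea works in any language; here is a minimal JavaScript sketch of injecting a hand-rolled fake through such a seam (UserRegistration and the shape of the mail service are hypothetical names for illustration):

```javascript
// A unit that depends on a mail service through its "shape" rather than a concrete class.
function UserRegistration(mailService) {
  this.register = function (email) {
    // ...persist the user somewhere, then notify...
    mailService.send(email, 'Welcome!');
    return true;
  };
}

// In a test, a hand-rolled fake replaces the real SMTP-backed implementation.
var sent = [];
var fakeMailService = {
  send: function (to, body) { sent.push({ to: to, body: body }); }
};

var registration = new UserRegistration(fakeMailService);
registration.register('user@example.com');

console.log(sent.length);  // 1
console.log(sent[0].to);   // user@example.com
```

The test can now assert on what was “sent” without ever touching a real mail server.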

Remember that you don’t want to test whether your SMTP server can do its job. We assume it can. If you want to validate things like this, do it in a separate test project labeled “Integration Tests” or “System Tests”. Those are by definition slow to run and won’t be, and don’t need to be, run as often. Running them in a nightly build is often enough.

Name your tests properly

By following some conventions, your tests can be easy to understand. You have to name your tests so that you can still understand them when you come back 6 months later while inebriated.

Personally, I like to use the following format: WHEN I do this I EXPECT something TO RETURN/DO another thing. This normally allows you to properly describe the behaviors of a method or a class. This is only a recommendation; if you have a better convention, use it. As long as I know what a test does without looking at the code, it’s fine.

One act and one assert per test

Your test should invoke the class/method you are testing only once and assert the result of only one thing. What we are looking for is “only one reason to fail”. If you remove one line of code, it should break one test (or as few as possible). Imagine the scenario where you introduce a new line of code for Feature A and 200 tests fail. You will end up digging through the code asking yourself what you broke. Keep things simple for you and the next guy.

Follow the triple A (AAA)

It’s called Arrange/Act/Assert. It’s also known in BDD as Given/When/Then. Arrange contains the setup code required to test your class. Act invokes the method being tested. Assert validates what was done in the Act. I split my test methods with three comments named after the AAA steps to help future developers understand what the hell I was doing and where.

Write a test for every bug you find

Whenever a bug is reported, I write a test that reproduces the bug (that test is supposed to fail). Then, I write the necessary code to make that test turn green. If you do that for every bug you find, you will also end up doing regression testing on your past failures.

Limits of unit testing

Unit testing is there to test small units. It’s in no way a complete test of your system. If your database or your mailing system is down, unit testing will not help you detect those problems. Different kinds of (much slower) tests are needed for those. I’ve put a lot of emphasis on unit testing in this article. That doesn’t mean integration testing shouldn’t be done. It just means that you should have way more unit tests than integration tests.

Unit testing will also only test what you have written. It will not do exploratory testing to find unexpected behaviors. I recommend having your developers or (if you are lucky enough to have one) QA team check your system in its entirety before every release to ensure that nothing is broken.

Tools used for unit testing

If you need to do mocking with a tool, I recommend NSubstitute.

If you need a unit test framework, I would either use MSTest or NUnit.

If you want to shorten the time between when tests are run, I recommend NCrunch. As you modify your code, it compiles and runs impacted tests in real time.

If you want to see your test coverage, I recommend dotCover, by the same people who make ReSharper.

If you want to look at your coverage at a higher level, I recommend NDepend. It incorporates metrics that are essential to proper test coverage.

ECMAScript 6 will be finalized this year. What should we expect now and next year?

What is it?

ECMAScript 6 is the new version of JavaScript. JavaScript is an implementation of ECMAScript (currently ES5). ES6 will be the new standard for JavaScript in the browsers that support it. It will be finalized this year, but browsers are already implementing some of the features that were proposed.

What’s new?

The work on ECMAScript 6 has been going on for a while now and a lot of things are coming our way.

The first thing to remember is that ES6 will be based upon ES5 with Strict Mode enabled. This means using the more beautiful parts of ES5 and dropping a few of the bad parts. For example: it’s not possible to assign a value to a variable that hasn’t been declared, it’s not possible to write to a read-only property, it’s not possible to declare duplicate properties, etc.
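You can already see those restrictions in ES5 by opting into strict mode yourself; a quick sketch:

```javascript
// Assigning to an undeclared variable throws a ReferenceError in strict mode.
(function () {
  'use strict';
  try {
    undeclaredVariable = 42; // would silently create a global in sloppy mode
  } catch (e) {
    console.log(e instanceof ReferenceError); // true
  }
}());

// Writing to a read-only property throws a TypeError in strict mode.
(function () {
  'use strict';
  var frozen = Object.freeze({ version: 5 });
  try {
    frozen.version = 6;
  } catch (e) {
    console.log(e instanceof TypeError); // true
  }
}());
```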

Among the most awaited features are classes and typed objects. This will bring ES6 one step closer to maturity as a language. Having constructors and inheritance built in will definitely change the way code is written today. The prototyping paradigm can take us a long way, but having support for real types removes the headache of designing useful code.
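To give an idea of the proposed syntax (as it stood in the drafts at the time; details could still change before finalization):

```javascript
// ES6 class syntax: a constructor, a method, and inheritance with super.
class Animal {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return this.name + ' is an animal';
  }
}

class Dog extends Animal {
  describe() {
    return super.describe() + ' that barks';
  }
}

const rex = new Dog('Rex');
console.log(rex.describe()); // Rex is an animal that barks
```

Under the hood this still builds on prototypes; the class keyword is sugar over the same machinery.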

If you are doing UI work on a daily basis, one of the most exciting features for you will be the observable proposal. It brings something like KnockoutJS observables to the core of JavaScript. This might not impact your work immediately, but as more frameworks make use of this feature, your code will end up being simpler to handle.

What are the potential impacts?

If we are talking desktop browsing, I don’t expect much to change right away until the three major browsers have implemented ES6 properly. I don’t expect everyone to upgrade their code, but we can expect framework developers to adapt to ES6 very rapidly. ES6-enabled frameworks should allow us to use fewer lines of code for the same operations, as well as offer richer features.

Mobile browsers, however, evolve faster and more in silos than desktop browsers. Developing an HTML application for WP8, Android or iOS will put you in that company’s silo and allow you to target a very specific range of browsers.

One of the major impacts to be expected, however, would be on the side of those using NodeJS. Since the JavaScript runs 100% on the server, it’s easier for maintenance and updates to know what is supported and what isn’t.

When will it be available?

It’s currently being implemented by most major browsers (Internet Explorer, Firefox and Chrome). At the time of this writing, Firefox 25 was the furthest ahead in terms of implementation. As the specification gets closer to being approved (end of this year), we should see Firefox and Chrome implement most of those features during 2014, and Internet Explorer a bit later.

If you don’t want to search forever for which browser supports what, you can take a look at the ES6 Compatibility Table, which is updated on a frequent basis.

Source

For the full list of proposal items that are tentatively accepted, see the ECMAScript.org website.

Nuget with custom package sources on a Build Machine

My build was failing after moving to a new build machine. We have custom packages that are used internally, and they could not be found by NuGet on the new server.

This post might be more about me remembering where it is, but the package sources for NuGet are configured here:

%APPDATA%\NuGet\NuGet.Config

Here is what mine looks like:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
  </packageRestore>
  <packageSources>
    <add key="NuGet official package source" value="https://nuget.org/api/v2/" />
    <add key="LocalFeed" value="C:\LocalPackages\" />
    <add key="MyCompany" value="\\myServer\NugetPackages" />
  </packageSources>
  <disabledPackageSources />
  <activePackageSource>
    <add key="NuGet official package source" value="https://nuget.org/api/v2/" />
  </activePackageSource>
</configuration>

The same file can be found on the build server (with some options missing). The main part is the packageSources section.

If you depend on a local repository, or on a repository other than the public NuGet Gallery, this is where you need to update your sources to make your packages available.

KnockoutJS vs jQuery – A wonderful team

KnockoutJS is an MVVM JavaScript framework for your browser. It allows you to easily bind raw data to a model and update any elements bound to that model.

jQuery is a DOM manipulation framework that has made JavaScript not suck.

Both have their reason to exist and they should actually not compete. It’s all about using the right tool for the right job.

The problem

When building rich HTML pages with a lot of input/tag manipulation, the most common operation is changing what is displayed. Be it the content of a text box or the content of a tag, we need to update those elements a lot. What we often end up with is a lot of jQuery code that selects an element and then updates it.

It quickly becomes clear that, as the amount of code increases, it starts to look like spaghetti code. Another big disadvantage of having all of your UI driven by jQuery is that you end up with a lot of selectors. Unless you are selecting by ID, tag or class (and depending on your browser), selectors will eventually slow your page down.

So how do we fix spaghetti event binding and keep our head cool?

Fixing the problem with KnockoutJS

Let’s start with some basic HTML:

<div id="myModel">
  <label>Firstname</label>
  <input type="text" id="firstname" />

  <label>Lastname</label>
  <input type="text" id="lastname" />

  <span><strong id="fullname">Displaying full name here</strong></span>
</div>

That’s the most simplistic example. We have a first name and a last name, and we want to concatenate both of them into a <strong> tag to display the user’s full name, in real time. This might seem like an easy problem, but keep in mind that real-life problems will be fiercer. With that said, let’s start coding.

What would that look like in jQuery?

$(document).ready(function () {
  $("#firstname").on('keyup', UpdateFullname);
  $("#lastname").on('keyup', UpdateFullname);
});

function UpdateFullname() {
  $("#fullname").text($("#firstname").val() + ' ' + $("#lastname").val());
}

We have no less than 5 selectors here. They are all based on IDs, so they are going to be fast, but still… it’s five selectors. We could probably optimize things a bit, but this is as close to production code as I’ve seen in the wild. The trick for the real-time requirement in this scenario is the ‘keyup’ event. As we add more elements to our model, we may have to add more event bindings to invoke that function from other elements. Maybe other events will require that same function too, and you end up with a flurry of selectors left and right and a big JavaScript file of 800 lines of code in no time.

What about KnockoutJS?

$(document).ready(function () {
  ko.applyBindings(new FullnameViewModel(), $("#myModel")[0]);
});

function FullnameViewModel() {
  var self = this;
  self.firstName = ko.observable('');
  self.lastName = ko.observable('');

  self.fullname = ko.computed(function() {
    return self.firstName() + ' ' + self.lastName();
  });
}

Of course, for the KnockoutJS version to work properly, I have to change the HTML a little bit. I’ll also take the time to remove the attributes that were only required by the jQuery version. Here is how it looks now.

<div id="myModel">
  <label>Firstname</label>
  <input type="text" data-bind="value: firstName, valueUpdate: 'afterkeydown'" />

  <label>Lastname</label>
  <input type="text" data-bind="value: lastName, valueUpdate: 'afterkeydown'" />

  <span><strong data-bind="text: fullname">Displaying full name here</strong></span>
</div>

So what can we take away from that? Yes, it takes a bit more JavaScript to do the work, but now the whole set of “business rules” is encapsulated in one JavaScript function. If we need to reuse part of how the full name is built, it’s part of your model. It’s something you can write tests for. As more rules are added to the view model, less time is spent figuring out which selector to use and more time is spent writing the business rules of our presentation layer.

Why should I go with Knockout?

  • If your application actually has some business rules that need encapsulating, Knockout will provide you an easy way to do it.
  • If you are unit testing your JavaScript, it is much faster and easier to test only the ViewModel without any actual HTML in the back. You could potentially run something like PhantomJS to test your ViewModels.
  • If you are going to reuse parts of the model in other bits of HTML, or simply if your HTML is still changing a lot.
  • If you need to be able to serialize your whole model in JSON to send to the server.

What should I still use jQuery for?

  • Basic DOM manipulation/selection
  • Ajax requests
  • Effects and animation

Give KnockoutJS a try!

Try it out by first going to KnockoutJS.com and doing the live tutorial. It’s an easy way to get your feet wet. Then use it in Visual Studio, since it’s part of the template!

Easy deployment with IIS and Web Deploy with Visual Studio

You’ve probably all seen the publish method called “Web Deploy” in the Visual Studio 2010/2012 publish popup. On my end, I’ve always used the classic “publish to a local folder, then copy/paste” method of deploying.

Today, I’m going to show you how to install it on your IIS and make it work with Visual Studio.

Installing Web Deploy

The first thing to do is to download the IIS extension and execute it. It should open the Web Platform Installer and suggest the right version of Web Deploy; then click Install.

Then we go into IIS to confirm that it is properly installed. Right-clicking on a site should bring up this menu:

webdeploy_installed

Deploying through Visual Studio

IMPORTANT

You need to run Visual Studio as an Administrator for this to work.

Now when you try to publish an application through Visual Studio, select “Web Deploy” from the drop down and type in your URL.

Visual Studio 2010:

publishing_through_webdeploy

Visual Studio 2012:

publish_vs2012

In Visual Studio, you can always use the option called Build Deployment Package, so that if you have to give a package to your sysadmin (or a client) to deploy, it can be done with a single Zip file and a package import. Included in this package can be folder authorizations, required registry keys and more.

Why should I use Web Deploy?

  • You are not required to be an administrator on the server to publish an application
  • Allows for partial, differential deployments (only what changed)
  • You can push the database with Entity Framework and run migrations on the different databases
  • Synchronize web servers between themselves (IIS6/IIS7/IIS8)
  • Easily give a package to a client to deploy
  • Easily deploy from a Build server (Continuous integration)
  • Automatically backup before publishing

Conclusion

Web Deploy is already at V3 and has barely been talked about in our spheres. Maybe we are too development-oriented and not looking at making our own tasks easier. In any case, installing the Web Deploy extension on an IIS server should only take you a few minutes and will pay for itself very fast by making your deployments easier than ever.

For more information, do not miss the MSDeploy blog, or the initial blog post by Scott Guthrie… from over two and a half years ago.

Introduction to NSubstitute

Moq has long been my favorite tool for mocking. Today, I’ll be introducing another tool I’ve recently started to use.

While Moq is based around creating a mock, configuring it and then retrieving the mocked object, NSubstitute is about creating the object and then configuring it.

In NSubstitute, there is no direct concept of a Mock presented to the user.

As an example, here is a simple way to do a mock:

IProductRepository repository = Substitute.For<IProductRepository>();

That’s it. As of this moment, you have a mocked repository. Injecting it is very simple at that point.

Since we don’t have a Mock object to work with, how do we go and configure our mock to return what we want?

NSubstitute relies on an already familiar concept: extension methods. The one that is going to be used the most is “Returns”. There are two available signatures for Returns. Let’s see the decompiled signatures:

public static ConfiguredCall Returns<T>(this T value, Func<CallInfo, T> returnThis, params Func<CallInfo, T>[] returnThese)
public static ConfiguredCall Returns<T>(this T value, T returnThis, params T[] returnThese)

You either return an instance that has already been built, or you return a function that will build (or return) the instance at the moment of invocation. As for the params array at the end, those are what will be returned/invoked on subsequent calls.

That’s really as simple as it gets. Since mocking frameworks are supposed to be easy to use and stay out of your way, I think NSubstitute might have just taken Moq’s place as my go-to framework from now on. Simple, lightweight… I like it!

Installing NSubstitute

Website / Source / Nuget

Nuget command line:

Install-Package NSubstitute

Better JavaScript notifications with Toastr

A use case I’ve always pondered is how to create a small popup telling the user that an operation has succeeded or failed.

What I usually did was write my own Success/Error functions to handle displaying the message to the user. Integrate a bit of jQuery here, a bit of jQuery UI there, download a custom script elsewhere…

Recently, I found Toastr. What is great about this tool is how it integrates with pretty much anything a designer can throw at you. CSS classes are replaceable, icons are defined inline in the CSS to avoid external dependencies on images, etc.

Basically, you don’t even need the CSS file that comes bundled with it. If you already have the style all figured out, you only need the JS file.

Here is how I configured it on my end:

toastr.options = {
  toastClass: 'myCustomDivClass',
  iconClasses: {
    error: 'error_popup',
    success: 'success_popup',
    info: 'info_popup',
    warning: 'warning_popup'
  },
  positionClass: '', // I position it properly already; not needed.
  fadeIn: 300, // .3 seconds
  fadeOut: 300, // .3 seconds
  timeOut: 2000, // 2 seconds - set to 0 for 'infinite'
  extendedTimeOut: 2000, // 2 more seconds if the user interacts with it
  target: 'body'
};

Basically, this will create a basic DIV element and append it to the end of the BODY tag. This is very important because, in some browsers and some scenarios, form elements might otherwise appear over the “toast”.

Another basic boilerplate that you might want to add to your code is this:

$.ajaxSetup({
  error: function() {
    toastr["error"]("<h2>An error has occurred.</h2>");
  }
});

This will ensure that any AJAX error that happens within jQuery is handled using Toastr.

Installing Toastr

Nuget Package / Source

Nuget command line:

Install-Package toastr

Hidden bug: Parameter passing

There is a bug in the following piece of code. The bug could potentially break behavior.

private int ReturnDefaultYear(int? year)
{
    if (year < 1970) // LowerYearBound, defined as an int elsewhere
        year = null;

    var result = year ?? 1970;
    return result;
}

Found it yet? I’ll give you a hint. When passing parameters in .NET, the parameter itself is always a copy: for value types (int, double, float, and also nullable types like int?) the value is copied, and for reference types (string, objects) it’s the reference that is copied. Assigning to the parameter, as this method does with year = null, therefore only changes the method’s local copy; the caller’s variable is untouched.

The hidden bug is that the method reuses the “year” parameter as a working variable. Once it is set to null, the original input is gone, and any code later added to the method that expects the original year will silently read null instead.

Try running this simple code in a Console Application and you will see the behavior for yourself:

class Program
{
    static void Main(string[] args)
    {
        int? test = 1;
        Console.WriteLine(ConvertTest(test));
        Console.WriteLine(ConvertTest(test++));
        Console.WriteLine(ConvertTest(test++));
        Console.WriteLine(ConvertTest(test++));

        Console.ReadLine();
    }

    public static int ConvertTest(int? test)
    {
        if (test > 1)
            test = null;
        return test ?? 0;
    }
}
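JavaScript, for comparison, also copies the parameter binding: reassigning a parameter inside a function never affects the caller’s variable. A quick sketch:

```javascript
// Reassigning a parameter inside a function affects only the local copy.
function clearYear(year) {
  year = null; // only this local binding changes
  return year;
}

var currentYear = 1980;
clearYear(currentYear);
console.log(currentYear); // 1980, the caller's variable is untouched
```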

Introduction to MVC 4 Beta – What I should know, what’s new and should I upgrade now?

What’s new?

A ton of stuff. We are talking about WebAPI, async support with the new .NET 4.5 async keywords, async HttpModules and HttpHandlers, bundling and minification, Single Page Applications (referred to as SPA), mobile project templates and the integrated Azure SDK.

Wow. That’s a pretty big release, isn’t it? Let’s go through those items one by one, with a small explanation of what each is.

WebAPI

WebAPI is an attempt by Microsoft to make the HTTP protocol a first-class citizen. It looks like WCF (basic HTTP binding), but it’s not. In fact, it doesn’t use any of the WCF assemblies. It uses the routes of MVC… but it’s not a content rendering engine.

WebAPI is a new service API that renders service data based on the HTTP request. As an example, if you send the HTTP header “Accept: text/json”, it will return JSON. No need to mention the format in the URL anymore. It’s the closest we have ever been to a RESTful service. Bonus point: it doesn’t use the JSON encoding of WCF (which was awfully slow) but the one from Newtonsoft (JSON.NET).

Bundling and Minification

Remember the previous post I did on how to bundle and minify your JS and CSS files for MVC3? Well now it’s integrated. You don’t need to do that anymore. It’s part of the default template. No more custom package from me. You compile and it works.

The advantage of this technique is that you can now upgrade your jQuery library without having to change its reference in _Layout.cshtml or your masterpage. It will be picked up automatically. The team has already received feedback that the invocation of “ResolveBundleUrl” is a tad too long, so no point in complaining.

Single Page Application

This is the first draft of making web applications in a single page. It uses knockout.js for the client-side data-binding and rendering, upshot.js for the JavaScript data access layer, WebAPI on the server side and jQuery for the goo that holds the world together. This deserves a whole post by itself, but it’s a very promising technology.

Mobile Project Templates

A new project template based on jQuery Mobile. This will make your web pages look like a native mobile application. The UI is touch-optimized and easy to get the hang of.

AzureSDK

All ASP.NET projects now come with the ASP.NET Universal Providers, which will at the same time allow you to upload any web application to the Azure platform without much modification. This will reduce the friction between how we develop applications for the cloud and how we develop them for a single server.


Should I upgrade now?

Of course not. This is still a beta, and most of you won’t need any of these new technologies immediately. However, the core of MVC remains the same, and you could easily upgrade to the best and latest and enjoy the new features.

Please remember that it’s only a beta and nothing in the API is promised to stay exactly the same. However, if you are ready to take a controlled risk, Microsoft has given it a “go-live” license and you are just a few minutes away from running the best and latest from MVC.

Wait… you forgot the async stuff

Yeah… it’s pretty much the same as before, but with the new keywords. I would need a whole new post to show those off.

Browser Detection is bad. Feature Detection is better.

Context (Or a little bit of history)

Those who built websites at the beginning of the decade had to work with IE6, Netscape Communicator/Navigator and Opera. More browsers introduced themselves over the years, but each supported different things.

Netscape supported the blink tag (oh, how I hate thee). Internet Explorer supported ActiveX, and Opera was not popular enough for anyone to think about it.

We were in an era where people thought of the web as the blue “E” icon. People were not that aware of browsers, and those who had Netscape probably got it from an AOL promotion.

Since we wanted to display the same thing to every user as much as possible, people either started detecting browsers, to clearly state which browser their website supported, or they just didn’t do any checking at all!

That’s the main reason some internal web applications still display “Works better with IE6”.

Browser Detection

Browsers back then differed a lot from each other. So we had to know whether we could use ActiveX, or whether we wanted to piss off our users with a blink tag.

The rendering of all this of course was different per browsers.

Scripts existed to detect exactly which browser the user was running. I will not copy one here (too large), but you can imagine how such a script grows with every new browser and version.

Once people knew which browser was running, they knew which feature was supported! So everything was perfect, right?

Not exactly. What if the feature was deprecated in the next version? What if the browser had that functionality disabled by an option? What if a new browser that your script doesn’t know about comes along? Why couldn’t it use a feature it actually supported?

This caused a lot of problems, mostly because detecting a browser didn’t guarantee you anything besides the fact that the user was sending you a UserAgent string that pleased you.

Browser detection was broken from the start, and something had to be done to detect whether something was going to work or not, irrespective of the browser.

Feature Detection

Then came feature detection. The first real push for it came from Mr. John Resig himself when he posted “Future-Proofing JavaScript Libraries”. Today, jQuery gives developers access to a ton of features that would have taken individual developers months to develop and maintain. jQuery allows you to bind events without knowing whether you should use attachEvent or addEventListener.

If you have the time, go read the development version of jQuery. It’s commented, and you will find all their hacks for offering everyone the same functionality. This lets you manipulate the browser without having to know which one it is. It allows you to do Ajax in IE6 with ActiveX, or in IE7 (or higher) with the proper object, without knowing whether your browser supports ActiveX.

Today, assuming everyone uses a framework which smooths out the basic differences between the browsers, the only thing you might still be checking is HTML5 feature compatibility.
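Hand-rolled feature detection is the same trick those frameworks use internally; a minimal sketch (the fake element exists only so the example is self-contained):

```javascript
// Test for the feature itself instead of sniffing the UserAgent string.
function bindEvent(element, eventName, handler) {
  if (typeof element.addEventListener === 'function') {
    element.addEventListener(eventName, handler, false); // standards-based browsers
  } else if (typeof element.attachEvent === 'function') {
    element.attachEvent('on' + eventName, handler); // old Internet Explorer
  }
}

// Works against anything exposing either API; here, a fake element for illustration.
var registered = [];
var fakeElement = {
  addEventListener: function (name) { registered.push(name); }
};

bindEvent(fakeElement, 'click', function () {});
console.log(registered[0]); // click
```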

Introducing Modernizr

Modernizr is a library that does feature detection and that can be tailored to your needs. In its complete version, it allows you to easily detect the following features (not a complete list):

  • SVG/Canvas support
  • Web sockets
  • Web SQL Database
  • Web workers
  • HTML5 Audio/video
  • Hash Change event
  • WebGL
  • Geolocation
  • Much more…

Once referenced in your web page, Modernizr allows you to decide in your own code whether you should gracefully degrade a feature or not.

Here is an example of how you could use Modernizr when implementing geolocation in your application:

Modernizr.load({
    test: Modernizr.geolocation,
    yep : 'geolocation.js',
    nope: 'no-geolocation.js'
});

This simple code allows you to load a different JavaScript file based on the features of the browser.
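Conceptually, the yep/nope decision boils down to a few lines. The `loadByFeature` helper below is a hypothetical sketch, not the real Modernizr loader (which injects script tags asynchronously):

```javascript
// Hypothetical sketch of the yep/nope pattern: run a feature test,
// pick the matching script, and hand it to a loader callback.
function loadByFeature(options, load) {
  var script = options.test ? options.yep : options.nope;
  load(script);
  return script;
}

var loaded = [];
loadByFeature(
  { test: false, yep: 'geolocation.js', nope: 'no-geolocation.js' },
  function (src) { loaded.push(src); }
);
console.log(loaded); // [ 'no-geolocation.js' ]
```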

Modernizr offers support for IE6+, Firefox 3.5+, Opera 9.6+, Safari 2+, Chrome, iOS, Android, etc.

Conclusion

With the current number of browsers that support different functionalities at various levels, we should not be implementing for specific browsers but rather for specific features. We would want those features implemented in all browsers, but with the fragmentation of the browser market, that's just not going to happen soon.

So please, no more “This site works better with [INSERT BROWSER]”. Target a feature, test for it, and offer a different (or no) implementation if the feature is not available. This will make your code cleaner and more efficient.

So stop the browser detection madness and embrace feature detection.

It might sound counterintuitive to some, but it makes sense. So be ready for the future.

If you are using HTML5, use feature detection rather than browser detection.

JavaScript and CSS Minifying/Bundling with the Microsoft.Web.Optimization NuGet package

So I've been wanting to write about this since BUILD and have only gotten around to it now.

When you write C# code, you'd rather have multiple small files with a clear separation of concerns. This allows you to have small, clear classes, and the compiler will never complain about it. In JavaScript, however, you want to ship fewer, smaller files. Most of the time in the .NET environment, there wasn't any integrated way of doing so; it either required calling an external EXE or outputting .min.js files by hand.

This caused problems, as we had to alter the development version of our HTML to fit our production environment. Microsoft released this tidbit early because it's probably going to be integrated into the .NET 4.5 framework, but it's available to us now.

Please be aware that Microsoft.* DLLs are not part of the official framework yet; when they make it in, their namespace will probably change to System.*.

Pre-requisites

First, you will need NuGet to install the following packages:

  • Microsoft.Web.Optimization
  • WebActivator

How it works

Now, the way the JS/CSS minifying works is that it dynamically inspects all your files, reads them, minifies them, and then caches the result to be served later. This allows us to modify our files and have them all re-minified. When one of our JS/CSS files is modified, this process restarts: the bundle is rebuilt and cached again until either the cache expires or another file changes.
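To make the minify-and-cache behavior concrete, here is a hypothetical sketch in plain JavaScript: build the bundle once, serve it from cache, and rebuild only when the version stamp changes (i.e. a file was modified). Both `createBundleCache` and the toy minifier are illustrative only; they are not part of Microsoft.Web.Optimization.

```javascript
// Hypothetical sketch of the minify-and-cache behavior described above.
function createBundleCache(minify) {
  var cache = { stamp: null, output: null };
  return function getBundle(files, stamp) {
    if (cache.stamp !== stamp) {   // a file changed: re-minify everything
      cache.output = minify(files.join('\n'));
      cache.stamp = stamp;
    }
    return cache.output;           // otherwise serve the cached result
  };
}

// Toy "minifier": strips blank lines and surrounding whitespace.
function toyMinify(source) {
  return source.split('\n')
    .map(function (line) { return line.trim(); })
    .filter(function (line) { return line.length > 0; })
    .join('');
}

var getBundle = createBundleCache(toyMinify);
var first = getBundle(['var a = 1;', '  var b = 2;'], 'v1');
var second = getBundle(['var a = 1;', '  var b = 2;'], 'v1'); // cache hit
console.log(first);            // var a = 1;var b = 2;
console.log(first === second); // true
```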

Setting up the base work

For the minifier to work, it requires the registration of an HttpModule. This isn't done automatically by the Microsoft.Web.Optimization package, so we need to add it ourselves if we want it to work.

using Microsoft.Web.Infrastructure.DynamicModuleHelper;
using Microsoft.Web.Optimization;
using MvcBackbonePrototype.Bundle;

[assembly: WebActivator.PreApplicationStartMethod(typeof(MvcBackbonePrototype.AppStart.BundleAppStart), "Start")]

namespace MvcBackbonePrototype.AppStart
{
    public static class BundleAppStart
    {
        public static void Start()
        {
            DynamicModuleUtility.RegisterModule(typeof(BundleModule));
            RegisterFolders();
        }

        private static void RegisterFolders()
        {
            // configure Microsoft.Web.Optimization
        }
    }
}

The previous code registers a dynamic HttpModule when your application starts.

Now that the base work is done, we’ll jump right ahead to the configuration of the folders.

Configuring the package

Now that the HttpModule is properly registered, we need to tell the module when to activate itself. In my specific scenario, I wanted jQuery, underscore.js and Backbone.js loaded in that specific order.

By default, the module will load most core frameworks first (jQuery, MooTools, prototype, scriptaculous) and then load the rest of the files that don't match the wildcards. The filters are set up so that jQuery plugins load after the jQuery core library, and jQuery UI loads after jQuery.

However, there is nothing done for underscore.js and Backbone.js.

private static void RegisterFolders()
{
    var js = new DynamicFolderBundle("js", typeof(JsMinify), "*.js", false);
    BundleTable.Bundles.Add(js);
}

The previous code configures the module to minify all files in a folder when you append the suffix “js” to the folder URL (e.g.: /Scripts/js).

However, it will order the other files alphabetically rather than in the proper order.

Let’s fix that.

Custom Orderer

public class BackboneOrderer : DefaultBundleOrderer
{
    public override IEnumerable<FileInfo> OrderFiles(BundleContext context, IEnumerable<FileInfo> files)
    {
        context.BundleCollection.AddDefaultFileOrderings();

        var backboneOrdering = new BundleFileSetOrdering("backbone");
        backboneOrdering.Files.Add("underscore.*");
        backboneOrdering.Files.Add("backbone.*");
        context.BundleCollection.FileSetOrderList.Add(backboneOrdering);

        return base.OrderFiles(context, files);
    }
}

We first inherit from the default orderer. Then we add the default file orderings, which take care of the jQuery ordering for us, and append the other files we require to the list. The only thing left is to alter our RegisterFolders method to use it.

private static void RegisterFolders()
{
    var js = new DynamicFolderBundle("js", typeof(JsMinify), "*.js", false);
    js.Orderer = new BackboneOrderer();
    BundleTable.Bundles.Add(js);
}

That’s it. We are nearly done!
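To see the ordering logic in isolation, here is a hypothetical plain-JavaScript sketch of what the file-set orderings do conceptually: files matching the wildcard patterns come first, in pattern order, and the remaining files keep their original (alphabetical) order. The `orderFiles` function is illustrative only; it is not part of Microsoft.Web.Optimization.

```javascript
// Hypothetical sketch of file-set ordering: pattern matches first,
// in pattern order, then everything else in its original order.
function orderFiles(files, patterns) {
  function matches(file, pattern) {
    // Convert a simple wildcard like "underscore.*" into a RegExp.
    var re = new RegExp('^' + pattern.replace(/\./g, '\\.').replace(/\*/g, '.*') + '$');
    return re.test(file);
  }
  var ordered = [];
  patterns.forEach(function (pattern) {
    files.forEach(function (file) {
      if (matches(file, pattern) && ordered.indexOf(file) === -1) ordered.push(file);
    });
  });
  files.forEach(function (file) {
    if (ordered.indexOf(file) === -1) ordered.push(file);
  });
  return ordered;
}

console.log(orderFiles(
  ['app.js', 'backbone.js', 'underscore.js'],
  ['underscore.*', 'backbone.*']
));
// [ 'underscore.js', 'backbone.js', 'app.js' ]
```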

Modifying your _Layout.cshtml / masterpage

My masterpage head section first looked a lot like this:

<script src="@Url.Content("~/Scripts/Framework/jquery-1.7.1.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/Framework/underscore.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/Framework/backbone.min.js")" type="text/javascript"></script>

This was of course replaced by the following:

<script src="@Url.Content("~/Scripts/Framework/js")" type="text/javascript"></script>

And that's all! All your files will be minified, bundled and properly cached.

Bonus

If you want your URLs to carry a “version number”, I suggest that you use the following method to resolve your URLs instead of the MVC way:

<script src="@Microsoft.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Scripts/Framework/js", true)"></script>

How to insert page breaks in printed web pages using only CSS?

So one of my co-workers had to print a “report”. I say report, but it's more that they wanted the page printed, just with more compact and less fluffy/pretty content. As always, people tend to bring solutions instead of problems, and they asked me how long it would take me to do that in SSRS.

Of course, I gave my estimate of how long it should take, but then we stumbled into problems. My co-worker only has Visual Studio 2010 installed. SSRS requires BIDS, which uses Visual Studio 2008. Then, we have SQL Server 2008 R2 installed on our machines but non-R2 on our deployment server. This quickly became a mess.

Then after taking the time to look at the problem, we asked: “Why couldn’t we just print the web page?”

Well, for starters, the requirement was that page breaks were needed at certain predictable locations.

After searching a bit, I came up with this solution straight from the W3C (and a few other sources).

@media print {
    .pagebreak {
        page-break-before: always;
    }
}

And that’s it. Just add the class to the element that should be on a new page and it works.

Now, what about compatibility you say?

It should be compatible with IE (all current versions down to 6), Firefox (all latest versions), Chrome (tested on v16) and Opera (all versions).

Siberix – Footer rows higher than they should be

So I've had to build some reports with Siberix in the last few days, and I had some less than pleasing results. I had a grid that was overflowing vertically, but for some reason the last row on each page would take 2–3 times more space than necessary.

After printing the page and looking at the results many times… something clicked when I saw the footer row that I had added. The blank space that was left was exactly the same size as my footer.

So maybe it's a bug, maybe it's not, but Siberix leaves itself some room on every page to render the footer “in case” it's the last page. It has no concept whatsoever of where it is when rendering the PDF.

The solution? Build the footer outside the IGrid and it won't glitch anymore. And yes, I tried setting IFooter.Repeat to false.

I can understand that a product has bugs, but the worst part is that it's a paid tool with no community built around it, unlike Telerik, ReSharper or other third-party tools. My last attempt was Stack Overflow, but I didn't have any luck there either.

Siberix, if you want to increase your sales, make sure you have a community behind it. Just a friendly advice.

I’ve just been nominated ASP.NET MVP

First of all, I'm extremely happy about my nomination as an ASP.NET MVP. I've done a lot of presentations in the past 2 years, and I'm honored to be considered for this award.

MVP

I would like to thank everyone who nominated me and helped me get where I am today.

Mario Cardinal helped me by backing my nomination. Éric De Carufel is also a big part of it, since we started a small group (ALT.NET) more than a year ago. Since then, we've been working non-stop, presenting and “one-upping” each other. I would also like to thank Joël Quimper for his help, and my boss Yves Forget for giving me the time for those presentations.

Most of my presentations have been done at the .NET Montreal Group, so I would lastly like to thank Guy Barette for the opportunities he gave me, as well as for including our group in the larger .NET Montreal community.

So I’ll be participating more in the following months and I will push ASP.NET as hard as before within Montreal.

Thanks again,

-Maxime Rouiller

Ajax with jQuery and ASP.NET MVC

Requirements

What should I learn first?

When doing AJAX with ASP.NET MVC, there is no “UpdatePanel” or high-level abstraction that does all the magic for you. You need to get your hands dirty. That means learning how to write actual JavaScript without having a framework do all the work for you.

My favorite tool for JavaScript is jQuery. It's simple, small, and lets me do in a few lines of code and 5 minutes what would otherwise take me 3 days and 2,000 lines of code.

What should be on your reading list?

Document Ready Event

First, do not forget to put any jQuery code inside the following snippet:

$(document).ready(function () {
    // All code needs to go in here.
});

Selectors

The three most important selectors (in my opinion) are the following:

  • The ID selector – $('#myId')
  • The class selector – $('.myClass')
  • The element selector – $('div')

Those three selectors will cover around 80 to 90% of all the selectors you will require. The others, you can pick up along the way!

Manipulation

Here, we are talking about manipulating elements of the HTML page. We need these methods for removing and adding HTML in the page. It's basically what the UpdatePanel does for WebForms, but we'll do it manually and be more specific about what we want.

Here is what I consider essential:

  • remove() – Allows you to remove an element from the DOM. This is useful when you want to remove an element directly.
  • append()/prepend() – Allows you to insert an element inside the selected tag (either after or before that tag's existing children)
  • replaceWith() – Replaces the element with what's given in parameters. When using this, make sure the replacement has the same usable selector, or it will be harder to reselect that element.

And now, we have nearly everything that we need to start being AJAX-y!

I already know all that, let me do AJAX!

So what's the easiest way to do AJAX with jQuery?

$.get()

This method basically does an HTTP GET on the given URL and returns the data inside the success callback.

Let’s say I want to add the result of my AJAX request to the following tag:

<div id='result'>
</div>

It would be as easy as doing this:

$.get('/Controller/Action/', null, function (data) {
    $('#result').empty();      // clears the content
    $('#result').append(data); // append the data into the div
});

Wasn’t that easy?

The MVC Side

Of course, if your MVC controller returns a JsonResult, you could use $.getJSON instead, and data would be an object instead of pure HTML. In fact, your controller can even return a simple string and it would work. But how do you make your controller return only what you need? With MVC, an action can respond in two different ways depending on whether the request is a standard or an AJAX request. Here is what it would look like:

public ActionResult MyAction()
{
    if (Request.IsAjaxRequest())
        return View("nameOfMyPartialView");
    else
        return View();
}

That’s it! And now if you want to return an object as JSON:

public ActionResult AnotherAction()
{
    if (Request.IsAjaxRequest())
    {
        // The following line returns JSON to AJAX requests
        return Json(new { name = "Maxime Rouiller" }, JsonRequestBehavior.AllowGet);
    }

    // We still return normal HTML to standard requests.
    return View();
}

I hope you enjoyed this small example. If you are still curious or if you are simply stuck with something, ask away!