What's new in VS2017? - Visual Studio Installer

When installing Visual Studio in the past, you would be faced with a wall of checkboxes that would leave you wondering what you needed for what.

Introducing Workloads

Visual Studio 2017 workloads

Workloads are an easy way to select what kind of work you are going to do with Visual Studio. .NET Desktop development? Windows Phone? Web? All are covered.

If you look at that screenshot, you can see something new in there that wasn’t included previously. Azure. No more looking around for that Web Platform Installer to install the proper SDK for you.

You can access it directly from the installer. But what happens once you’re done and you want to modify your workloads?

If you start working on Visual Studio extensions, you need to be able to install that too.

Access the Installer after installation

There are two ways.

The first is to press your Windows key and type Visual Studio Installer. Once the window is open, click the little hamburger menu and click Modify.

The second is to access it through the File > New Project... menu.

Visual Studio 2017 workloads through projects

By clicking this link, the installer will open for you without having to go through the hamburger menu. Just pick your features.

Does that matter for you?

How do you like the new installer? Is it more user friendly than what was there before?

What else could be improved? Let me know in the comments.

What's new in VS2017? - Lightweight Solution Load

There’s a brand new option in Visual Studio 2017 that many users might overlook. It’s called Lightweight Solution Load.

While most solutions open relatively quickly, some simply don't.

If you take a solution like Roslyn, it contains around 200 projects. That is massive. To be fair, you don't need to be that demanding to see performance degradation in Visual Studio. Even if Visual Studio 2017 is faster than 2015, huge solutions can still take a considerable amount of time to load.

This is where Lightweight Solution Load comes into play.

What is lightweight solution load?

Once this option is turned on, Visual Studio will stop fully pre-loading all projects and will instead rely on the minimal amount of information needed to make them functional in Visual Studio. Files will not be populated until the project is expanded, and other dependencies that are not required yet will not be loaded either.

This allows you to open a solution, expand a project, edit a file, recompile, and be on your way.

How to turn it on?

There are two ways you can turn it on. Individually, for a single solution, by right-clicking on the solution and selecting this option:

Turning it on for an Individual Solution

Or globally for all future solutions that are going to be loaded by opening your Tools > Options... menu:

Turning it on for All Solutions

What is the impact?

Besides better performance? There might be Visual Studio features that will just not work unless a project is fully loaded. Please see the list of known issues for the current release.

Alternative

If you do not want to play with this feature but you still find your solution too slow to load, there's an alternative.

Break up your solution into smaller chunks. Most applications can be split into several smaller solutions. This reduces load time and generally speeds up compilation as well.

Are you going to use it?

When new features are introduced, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if this feature is going to be used on your projects. How many projects do you normally have in a solution?

What's new in VS2017 and C# 7.0? - Throw Expression

throw has been a keyword since the first version of C#. The way developers interact with it hasn't changed since.

Sure, lots of new features have been brought on board since then but… throw? Never touched. Until now.

C# 6.0

I do not really need to tell you how to throw an exception. There are two ways.

public void Something()
{
    try
    {
        // throwing an exception
        throw new Exception();
    }
    catch (Exception)
    {
        // re-throw an exception to preserve the stack trace
        throw;
    }
}

And that was it. If you wanted to throw an exception any other way, you were out of luck.

C# 7.0

Everything you see below was invalid before C# 7.0.

public class Dummy
{
    private string _name;

    public Dummy(string name) => _name = name ?? throw new ArgumentNullException(nameof(name));

    public void Something(string t)
    {
        Action act = () => throw new Exception();
        var nonNullValue = t ?? throw new ArgumentNullException();
        var anotherNonNullValue = t != null ? t : throw new ArgumentNullException();
    }

    public string GetName() => _name;
}

The difference

Oh so many of them. Here’s what was included in the previous snippet of code.

You can now throw from:

  • Null-coalescing operator. The ?? operator is used to provide an alternative value. Now you can use it to throw when a value shouldn't be null.
  • Lambda. I still don't understand exactly why it wasn't allowed before, but now? Totally legit.
  • Conditional operator. Throw from the left or the right of the ?: operator anytime you feel like it. Before? Not allowed.
  • Expression body. In fact, any expression body will support it.

Where you still can’t throw (that I verified):

  • if(condition) statement. A throw expression cannot be used as the condition of an if statement. Even if it's combined with a null-coalescing operator, it won't work and is not valid syntax.

Are you going to use it?

I know that not everyone will necessarily use all of these new throw expression forms. But I'm interested in your opinion.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

What's new in VS2017 and C# 7.0? - More Expression Body

Expression-bodied functions are a relatively new concept, introduced in C# 6.0 to cut down on the ceremony involved in creating simple properties and methods.

C# 7.0 removes even more ceremony from many more members.

C# 6.0

Previously, C# 6.0 introduced the concept of expression-bodied functions.

Here are a few examples.

// get-only expression-bodied property
public string MyProperty => "Some value";
// expression-bodied method
public string MyMethod(string a, string b) => a + b;

C# 7.0

With the new release of C# 7.0, the concept has been added to:

  • constructors
  • destructors
  • getters
  • setters

Here are a few examples.

class TestClass
{
    private string _name;

    // expression-bodied constructor
    public TestClass(string name) => _name = name;
    // expression-bodied destructor
    ~TestClass() => _name = null;

    public string Name
    {
        get => _name;
        set => _name = value;
    }
}

The difference

The main difference in your code will be the number of lines previously spent on curly braces and plumbing code.

Yet again, this new version of C# offers you more ways to keep your code concise and easier to read.

Are you going to use it?

I know that not everyone will necessarily use all of these new forms of expression body. But I'm interested in your opinion.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

What's new in VS2017 and C# 7.0? - Local Functions

Local functions are all about declaring functions within functions. See them as normal functions with a more restrictive scope than private.

In C# 6.0, if you needed to declare a method within a method to be used within that method exclusively, you created a Func or Action.

C# 6.0 - Local Functions (before)

But there are some issues with Func<>/Action<>. First, they are objects, not functions. Every time you declare a Func, you allocate memory for the delegate, and that can put unnecessary pressure on your environment.

Second, a Func cannot call itself (also known as recursion) and, finally, it has to be declared before you use it, just like any other variable.

public void Something(object t)
{
    // allocates memory
    Func<object> a = () => new object();

    Func<int, int> b = (i) => {
        // do something important
        return b(i); // <=== ILLEGAL. Can't invoke itself.
    };
}

Here's the problem with those, however: if you do not want to allocate the memory, or if you need recursion, you have to move the method to an external method and scope it properly (private).

By doing that, however, your method becomes available to the whole class to use. That's not good.
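
To illustrate, here's roughly what that C# 6.0 workaround looks like (hypothetical names and placeholder implementation):

public class Processor
{
    public bool Something(object t)
    {
        return MyFunction(t);
    }

    // Promoted to a private method just to avoid the delegate allocation...
    // ...which means every other member of the class can now call it.
    private bool MyFunction(object t)
    {
        return t != null; // placeholder implementation
    }
}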

C# 7.0 - Local Functions

public bool Something(object t)
{
    return MyFunction(t);

    bool MyFunction(object obj)
    {
        // return a value based on `obj` (placeholder implementation)
        return obj != null;
    }
}

This is how a local function is declared in C# 7.0. It works the same way as a lambda, but without the allocation and without exposing a private function that shouldn't be exposed.

The difference

The main differences are:

  • No memory allocation. It's a pure function that is ready to be invoked and won't be reallocated every time the method is called.
  • Recurse as much as you want. Since it's a normal method, you can use recursion just like in any other method.
  • Use it before declaring it. Just like any other method in a class, you can call it before (in terms of line numbers) it is actually declared, whereas variables need to be declared before they are used.

In short, see it as a normal method with more aggressive scoping than private.
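
As a quick illustration of the recursion point, here's a minimal sketch (a hypothetical factorial helper) that a Func could not express as directly:

public long Factorial(int n)
{
    return Compute(n);

    long Compute(int value)
    {
        // a local function can call itself without any tricks
        return value <= 1 ? 1 : value * Compute(value - 1);
    }
}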

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

BrowserStack and Microsoft Partnership - Testing Microsoft Edge has never been that easy!

Microsoft just announced a partnership with BrowserStack to provide free testing of Microsoft Edge.

You probably already know that BrowserStack is the place to go when you need to test multiple browsers without installing them all on your machine.

Today, you can expand your manual and automated tests to always include Microsoft Edge.

Not only that, they include three channels: the latest two stable versions as well as the preview (available to Insiders).

This will enable you to ensure that your application works on most Edge versions, as well as future-proofing yourself for the next release.

Try it now!

What's new in VS2017 and C# 7.0? - Tuples

C# 7.0 introduces language support for tuples. Although present in many other languages, and available before as a generic type (System.Tuple<...>), it wasn't until C# 7.0 that tuples were actually included in the language syntax.

The raison d'être of the tuple is to return two values at the same time from a method.

C# 6.0

Many solutions were provided to us before.

We could use:

  • Out parameters (both of the first two options are sketched below). But they are not usable in async methods, so that's one solution out.
  • System.Tuple<...>. But just like Nullable<...>, it's very verbose.
  • A custom type. But then you are creating classes that will never be reused again.
  • Anonymous types. But you were required to use dynamic, which adds a huge performance overhead every time it's used.
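
For comparison, here's a rough sketch (hypothetical method names) of what the first two options look like before C# 7.0:

// out parameter: works, but not in async methods
public bool TryGetUser(int id, out string name)
{
    name = "Maxime"; // placeholder lookup
    return true;
}

// System.Tuple: works everywhere, but the caller is stuck with Item1/Item2
public Tuple<int, string> GetUser()
{
    return Tuple.Create(1, "Maxime");
}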

C# 7.0 - Defining tuples

The simplest use is like this:

public (string, string) Something()
{
    // returns a literal tuple of strings
    return ("Hello", "World");
}

public void UsingIt()
{
    var value = Something();
    Console.WriteLine($"{value.Item1} {value.Item2}");
}

Why stop there? If you don't want to expose ItemX as the names, you can customize them in two different ways.

public (string hello, string world) NamedTupleVersion1()
{
    //...
}

public (string, string) NamedTupleVersion2()
{
    return (hello: "Hello", world: "World");
}
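
As a quick usage sketch (hypothetical caller), the names declared in the signature of NamedTupleVersion1 are what the caller sees instead of ItemX:

var value = NamedTupleVersion1();

// the element names come from the method signature
Console.WriteLine($"{value.hello} {value.world}");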

The difference

The difference is simpler code, less out usage, and fewer dummy classes that are only used to transport simple values between methods.

Advanced scenarios

C# 7.0 - Deconstructing tuples (with and without type inference)

When you invoke a third-party library, tuples will already come either with their names or in a very specific format.

You can deconstruct the tuple and convert it straight into variables. How, you ask? Easily.

var myTuple = (1, "Maxime");

// explicit type definition
(int Age, string Name) = myTuple;
// with type inference (alternative to the line above)
// var (Age, Name) = myTuple;

Console.WriteLine($"Age: {Age}, Name: {Name}.");

If you take the previous example, normally you would need to access the first value by using myTuple.Item1.

Hardly readable. However, we created the Age variable easily by deconstructing the tuple. Wherever the tuple comes from, you can easily deconstruct it in one line of code, with or without type inference.

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

What's new in VS2017 and C# 7.0? - Pattern Matching

C# 7.0 introduces pattern matching. Well, compared to other features, this one requires a little bit of explanation.

There are many types of pattern matching, and three are supported in C# 7.0: type, constant, and var.

If you have used the is keyword before, you know it tests for a certain type. However, you still needed to cast the variable if you wanted to use it. That alone made the is operator mostly irrelevant, and people preferred to cast with as and check for null rather than check for types.

C# 6.0 - Type Pattern Matching (before)

public void Something(object t)
{
    var str = t as string;
    if (str != null)
    {
        //do something
    }

    var type = t.GetType();

    if (type == typeof(string))
    {
        var s = t as string;
    }
    if (type == typeof(int))
    {
        var i = (int)t;
    }
}

C# 7.0 - Type Pattern Matching

public void Something(object t)
{
    if (t is string str)
    {
        //do something
    }

    switch (t)
    {
        case string s:
            break;
        case int i:
            break;
        //...
        default:
            break;
    }
}

The difference with Type Pattern Matching

This saves you a line in a pattern that is common and repeated way too often.

More pattern matching

C# 7.0 - Const Pattern Matching

The constant pattern basically checks for a specific value. That includes null checks. Other constants may also be used.

public void Something(object t)
{
    if (t is null) { /*...*/ }
    if (t is 42) { /*...*/ }

    switch (t)
    {
        case null:
            // ...
            break;
    }
}

C# 7.0 - Var pattern Matching

This one is a bit weirder and may look completely pointless, since it doesn't check for any particular type; it matches anything.

However, when you couple it with the when keyword… that's where the magic starts.

// requires: using System.Linq;
private int[] invalidValues = { 1, 4, 7, 9 };

public bool IsValid(int value)
{
    switch (value)
    {
        case var validValue when !invalidValues.Contains(validValue):
            return true;

        case var invalidValue when invalidValues.Contains(invalidValue):
            return false;

        default:
            return false;
    }
}

Of course, this example is trivial, but add some real-life line-of-business requirements and you end up with a very versatile way of putting incoming values into the proper bucket.

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if it’s something that will simplify your life or, at least, your code.

What's new in VS2017 and C# 7.0? - Out Variables

When using certain APIs, some parameters are declared as out parameters.

A good example of this is the Int32.TryParse(string, out int) method.

So let's check the difference in invocation between C# 6.0 and C# 7.0.

C# 6.0

public void DoSomething(string parameter)
{
    int result;
    if (Int32.TryParse(parameter, out result))
    {
        Console.WriteLine($"Parameter is an int and was parsed to {result}");
    }
}

C# 7.0 (with type inference)

public void DoSomething(string parameter)
{
    if (Int32.TryParse(parameter, out int result))
    {
        Console.WriteLine($"Parameter is an int and was parsed to {result}");
    }

    // w/ type inference
    if (Int32.TryParse(parameter, out var i))
    {
        // ....
    }
}

The difference

Now you don't need to define the variable on a separate line. You can declare it inline directly and, in fact, you can just use var instead of int as in the previous example, since the compiler can infer the type directly inline. This is called type inference.

It is important to note, however, that the variable is scoped to the enclosing method and not to the if block itself. So the result variable is available in both the if and the else scopes.
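
Here's a minimal sketch (hypothetical) of that scoping rule in action:

public void DoSomething(string parameter)
{
    if (Int32.TryParse(parameter, out int result))
    {
        Console.WriteLine($"Parsed to {result}");
    }
    else
    {
        // `result` is still in scope here; TryParse left it at 0 when parsing failed
        Console.WriteLine($"Could not parse, result defaulted to {result}");
    }
}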

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if you are going to use inline out variables.

What's new in VS2017 and C# 7.0? - Literals

Literals

C# 6.0

The C# 6.0 way to define integers in .NET is to just type the number and there’s no magic about it.

You can assign it directly by typing the number or, if you have a specific hexadecimal value, you can use the 0x literal annotation to define it.

Not teaching anyone anything new today with the following piece of code.

int hexa = 0x12f4b12a;
int i = 1235;

C# 7.0

Now, in C# 7.0, there's support for binary literals. If you have a specific binary representation that you want to use, you can use the 0b literal annotation to define it.

var binary = 0b0110011000111001;

Another nice feature that is fun to use is the digit separator. It was supposed to be introduced in C# 6.0 but was delayed to 7.0.

Separators do not affect the value in any way and can be applied to any numeric literal.

var hexa = 0x12f4_b12a;
var binary = 0b0110_0110_0011_1001;
var integer = 1_000_000;

They can be applied anywhere and will not impact the evaluation of the number.

Are you going to use it?

When new features are introduced in a language, I like to ask people whether it’s a feature they would use.

So please leave me a comment and let me know if binary or separators are a feature that will be used.

Creating my first .NET Core app running on Linux with Docker

I've always liked the idea of running .NET Core on multiple platforms. I've never had the guts, however, to jump head first into a Linux-only installation on my machine.

Since Windows 10 added support for Hyper-V and Docker a while ago, and with the release of the .NET Core tooling in VS2017, I decided to give it another go.

Here is what you will need to follow along before we get any further.

Requirements

  • Windows 10 Pro 64-bit or higher. This does not work with anything less.
  • If you haven’t installed Hyper-V, Docker will prompt you to install it for you and reboot your machine. Save your work!
  • Install Docker for Windows (I went with stable) and follow the steps

Making sure your C drive is shared

The last requirement to ensure that your machine is ready to run .NET Core apps is making sure that your C drive is shared.

Once you install Docker for Windows for the first time, make sure to go into the notification tray and right-click on the whale.

The whale

Once the contextual menu pops up, select the settings option:

The setting

Finally, go into the Shared Drives menu on the left and ensure that the C drive is shared. It will prompt you for your password.

Shared Drives

Click on Apply and now we are ready.

Creating a docker application

Once our little prerequisites are satisfied, the steps are really easy from there.

We will create a new ASP.NET Core Web Application, making sure that we enable Docker support.

New App

If you missed the previous step, it's always possible to enable Docker support once the application is created by right-clicking on your project and clicking Add > Docker Support.

Adding docker support

Whatever path you took, you should now have two projects in your solution: your initial project and a docker-compose project.

docker-compose

Testing out our application

The first modification that we will make to our application is to add a line to our /Views/Home/Index.cshtml file.

<h1>@System.Runtime.InteropServices.RuntimeInformation.OSDescription</h1>

I’ve added it to the top to make sure it works.
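
If you'd rather check the same information from regular C# code instead of the Razor view, a minimal sketch (not something the template generates) looks like this:

using System;
using System.Runtime.InteropServices;

public static class PlatformInfo
{
    public static void Print()
    {
        // the same value the Razor line above renders
        Console.WriteLine(RuntimeInformation.OSDescription);

        // you can also branch on the platform if needed
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
        {
            Console.WriteLine("Running in the Linux container.");
        }
    }
}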

First, select your initial project, ensure it starts in either Console or IIS Express mode, and press F5. Once the application is launched, you should see something like this:

windows-run

Now, select the docker-compose project and press F5. Another Window should open up and display something like this:

docker-run

The OS description might not be exactly this, but you should see "Linux" in there somewhere. And… that's it!

You officially have a cross-platform .NET Core application running on your Windows 10 machine.

Conclusion

Without knowing anything about how Docker works, we managed to create a .NET Core application and have it run on both Windows and Linux in less than 15 minutes.

Since being able to doesn't mean you should, I highly recommend that you read up on Docker to ensure that it's the proper tool for the job.

In fact, reading up on the whole concept of containers would be advised before diving in with both feet.

If you are interested in seeing how we can deploy this to the cloud, let me know in the comments!

Adding TFS to your Powershell Command Line

UPDATE: Added VS2017 support

If you mostly work with editors other than Visual Studio but still want to be able to use TFS with your teammates, you will need a command-line solution.

The first thing you will probably do is a Google search to find where the command-line utility is located.

C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\tf.exe

Now, you could simply add the folder to your %PATH% and make it available to your whole machine. But… what about setting an alias instead? Basically, just import this specific command without importing the whole folder.

PowerShell

First, run notepad $PROFILE. This will open your PowerShell profile script. If the file doesn't exist, it will prompt you to create it.

Once the file is opened, copy/paste the following line:

Set-Alias tf "$env:VS140COMNTOOLS..\IDE\tf.exe"

If you have a different version of Visual Studio installed, you may need to change the version of the common tools.

This is by far the easiest way to get the tf command added to your command line without messing with your PATH variable.

Tools Versions

Name                 Version   Tools Variable
Visual Studio 2010   10.0      VS100COMNTOOLS
Visual Studio 2012   11.0      VS110COMNTOOLS
Visual Studio 2013   12.0      VS120COMNTOOLS
Visual Studio 2015   14.0      VS140COMNTOOLS

Handling Visual Studio 2017

With the way Visual Studio 2017 has been reorganized, there are no more global environment variables lying around.

The tf.exe location has changed. I haven't found an easier way to link to it than to use the full path. Please note that the path below will vary based on your edition of Visual Studio.

C:\Program Files (x86)\Microsoft Visual Studio\2017\<Edition>\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\tf.exe

So for my scenario (with Enterprise installed), the alias would be set like this:

Set-Alias tf "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\tf.exe"

Testing it out

If you run tf get in a source-controlled folder, you should see changes being brought down to your folder.

Are there any other tools you use that are not registered in the default PATH? Leave a comment and let everybody know!

If you want to know more about how to use the tf command, you should definitely take a look at the list of commands.

TFVS Command Reference

Managed Disk is now in GA - Convert all your VMs now!

Alright, so this is kind of a big deal. It has been covered in the past, but since it just hit general availability, you need to get on this now. Or better, yesterday, if you have access to a time machine.

Before managed disks, you had to create an Azure storage account for each VM to avoid IOPS limits. But this wasn't enough to keep your VMs up and running; you also had to manage availability sets.

This has driven some people as far away from VMs as possible. But if you consider the insane advantages of VM Scale Sets (read: massive scale-out, massive machine specs, etc.), you don't want to leave this card out of your solution deck. You want to embrace it. But once you start embracing VMs, you have to start dealing with the storage accounts and the availability sets and, let's be honest, it was clunky.

Today, no more waiting. It’s generally available and it’s time to embrace it.

Migrating Existing VMs to Managed Disks

Note: this code is taken from the sources below.

To convert a single VM without an availability set:

# Stop and deallocate the VM
$rg = "MyResourceGroup"
$vm = "MyMachine"
Stop-AzureRmVM -ResourceGroupName $rg -Name $vm -Force

# Convert all disks to Managed Disks
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rg -VMName $vm

To Convert VMs within an availability set:

$rgName = 'myResourceGroup'
$avSetName = 'myAvailabilitySet'

$avSet = Get-AzureRmAvailabilitySet -ResourceGroupName $rgName -Name $avSetName

Update-AzureRmAvailabilitySet -AvailabilitySet $avSet -Managed

foreach($vmInfo in $avSet.VirtualMachinesReferences)
{
    $vm = Get-AzureRmVM -ResourceGroupName $rgName | Where-Object {$_.Id -eq $vmInfo.id}
    Stop-AzureRmVM -ResourceGroupName $rgName -Name $vm.Name -Force
    ConvertTo-AzureRmVMManagedDisk -ResourceGroupName $rgName -VMName $vm.Name
}

Source 1
|
Source 2

Here are some more resources that may help you convert your VMs.

If you are using the Azure CLI, they updated their tooling to allow you to manage the “Managed Disks”.

If you'd rather use C# to manage your applications, the .NET Azure SDK is also up to date.

Finally, if you want to start playing with VM Scale Sets and Managed Disks, here’s a quick “how to”.

Are Managed Disks something that you waited for? Let me know in the comments!

Setting up ESLint rules with AngularJS in a project using gulp

When creating Single Page Application, it’s important to keep code quality and consistency at a very high level.

As more and more developers work on your code base, it may seem like everyone is using a different coding style. In C#, at worst, it's bothersome. In JavaScript? It can be downright dangerous.

When I work with JavaScript projects, I always end up recommending using a linter. This will allow the team to make decisions about coding practices as early as possible and keep everyone sane in the long run.

If you don't know ESLint yet, you should. It's one of the best JavaScript linters available at the moment.

Installing ESLint

If your project is already using gulp to automate your different build tasks, ESLint will be easy to set up.

Just run the following command to install all the necessary bits to make it runnable as a gulp task.

npm install eslint eslint-plugin-angular gulp-eslint

Alternatively, you could also just install eslint globally to make it available from the command line.

npm install -g eslint

Configuring ESLint

The next step is to create a .eslintrc.json file at the root of your project.

Here’s the one that I use.

{
    "env": {
        "browser": true
    },
    "globals": {
        "Insert Global Here": true
    },
    "extends": "angular",
    "rules": {
        "linebreak-style": [
            "error",
            "windows"
        ],
        "semi": [
            "error",
            "always"
        ],
        "no-console": "off",
        "no-debugger": "warn",
        "angular/di": [ "error", "$inject" ]
    }
}

First, the environment. Setting browser to true will bring in a ton of globals (window, document, etc.) and tells ESLint that the code is running inside a browser and not, for instance, in a Node.js process.

Next are globals. If you are using libraries that define globals, and you use those globals in your code, this is where you declare them (e.g. jQuery, $, _).

extends allows you to define the base rules that we will follow. angular basically enables the plugin we installed as well as all the basic JavaScript rules enabled by default.

rules allows you to customize the rules to your liking. Personally, I don't like seeing the console and debugger errors, so I adjusted them the way I like. As for angular/di, it lets you set your preferred way of doing dependency injection with Angular. Anything that is not service.$inject = [...] will get rejected in my code base.

Sub-Folder Configuration

Remember that you can always add rules for specific folders. As an example, I often have a service folder. This folder only contains services, but the rule angular/no-service-method would raise an error for each of them.

Creating an .eslintrc.json file in that folder with the following content will prevent that error from ever showing up again.

{
    "rules": {
        "angular/no-service-method": "off"
    }
}

Creating a gulp task

The gulp task itself is very simple to create. The only thing left to pick is the format in which to display the errors.

You can pick from many formatters that are available.

var gulp = require("gulp");
var eslint = require("gulp-eslint");
var src = './app/';

gulp.task('eslint', function () {
    return gulp.src(src + '**/*.js')
        .pipe(eslint())
        .pipe(eslint.format('stylish'));
});

Customizing your rules

As with every team I collaborate with, I recommend that everyone sit down and define their coding practices so that there are no surprises.

Your first stop is the list of available rules on the ESLint website. Everything with a checkmark is enabled by default and will be reported as an error.

I wouldn't take too much time going through the list. I would, however, run ESLint on your existing code base, see what your team considers errors, and identify cases where something should be one.

  • Do extra parentheses get on everyone's nerves? Check out no-extra-parens
  • Are empty functions plaguing your code base? Check out no-empty-function
  • Do you consider using eval() a bad practice (you should!)? Add no-eval to the rulebook!

Improving your code one step at a time

By implementing a simple linter like ESLint, it's possible to increase your code quality one step at a time. With the Angular plugin for ESLint, it's now also possible to improve your Angular code quality at the same time.

Any rules you think should always be enabled? What practices do you use to keep your code base clean and bug free? Let me know in the comments!

Angular 1.5+ with dependency injection and uglifyjs

Here's a problem that doesn't come up too often.

You build your own build pipeline with AngularJS and you end up going to production with your development version. Everything runs fine.

Then you try your uglified version and… it fails. For the fix, skip to the end of the article. Otherwise? Keep on reading.

The Problem

Here's the kind of stack trace you might see in your console.

Failed to instantiate module myApp due to:

Error: [$injector:unpr] http://errors.angularjs.org/1.5.8/$injector/unpr?p0=e

and this link shows you this:

Unknown provider: e

Our Context

Now… in a sample app, it’s easy. You have few dependencies and finding them will make you go through a few files at most.

My scenario was in an application with multiple developers after many months of development. Things got a bit sloppy and we made decisions to go faster.

We already had practices in place requiring developers to use explicit dependency injection instead of implicit. However, we had nothing but good faith enforcing it. Nothing to guard against human mistakes or laziness.

Implicit vs Explicit

Here’s an implicit injection

angular.module('myApp')
    .run(function($rootScope){
        //TODO: write code
    });

Here’s what it looks like explicitly (inline version)

angular.module('myApp')
    .run(['$rootScope', function($rootScope){
        //TODO: write code
    }]);

Why is it a problem?

When UglifyJS minifies your code, it changes variable names; names that AngularJS won't be able to match to a specific provider/injectable. That causes the problem we have, where Angular can't find the right thing to inject. One thing UglifyJS won't touch, however, is strings. So the '$rootScope' present in the previous snippet of code will stay, and Angular will still be able to find the proper dependency to inject, even after the variable names get mangled.

The Fix

ng-strict-di will basically fail any time it finds an implicit declaration. Make sure to put it on your main Angular template. It will save you tons of trouble.

<html ng-app="myApp" ng-strict-di>
...
</html>

Instead of receiving the cryptic error from before, we’ll receive something similar to this:

Uncaught Error: [$injector:modulerr] Failed to instantiate module myApp due to:

Error: [$injector:strictdi] function(injectables) is not using explicit annotation and cannot be invoked in strict mode

Enable Transparent Data Encryption on SQL Azure Database

Among the many recommendations to make your data secure on Azure, one is to implement Transparent Data Encryption.

Most guides you'll find online tell you to enable it by running the following SQL command:

-- Enable encryption  
ALTER DATABASE [MyDatabase] SET ENCRYPTION ON;
GO

While this may be perfectly valid for an existing database, what if you want to create one with TDE enabled right from the start?

That's where ARM templates normally come in. It's also where the documentation either falls short or isn't meant to be used as-is right now.

So let me give you the necessary bits for you to enable it.

Enabling Transparent Data Encryption

First, create a new array of sub-resources for your database. Not your server; your database. This is important, otherwise it just won't work.

Next, create a resource of type transparentDataEncryption and assign the proper properties.

It should look like this in your JSON Outline view.

Enabling Transparent Data Encryption on Azure

I've included the database ARM template I use so you can copy/paste it.

ARM Template

{
    "name": "[variables('databaseName')]",
    "type": "databases",
    "location": "[resourceGroup().location]",
    "tags": {
        "displayName": "Database"
    },
    "apiVersion": "2014-04-01-preview",
    "dependsOn": [
        "[concat('Microsoft.Sql/servers/', variables('sqlserverName'))]"
    ],
    "properties": {
        "edition": "[parameters('edition')]",
        "collation": "[parameters('collation')]",
        "maxSizeBytes": "[parameters('maxSizeBytes')]",
        "requestedServiceObjectiveName": "[parameters('requestedServiceObjectiveName')]"
    },
    "resources": [
        {
            "name": "current",
            "type": "transparentDataEncryption",
            "dependsOn": [
                "[variables('databaseName')]"
            ],
            "location": "[resourceGroup().location]",
            "tags": {
                "displayName": "Transparent Data Encryption"
            },
            "apiVersion": "2014-04-01",
            "properties": {
                "status": "Enabled"
            }
        }
    ]
}

Want more?

If you are interested in more ways to secure your data or your application in Azure, please let me know in the comments!

Microsoft Azure Dashboards quick tips

Dashboards are a fantastic way of monitoring applications. However, most people don’t really know how to benefit from them.

All screenshots were taken with the dashboard in edit mode (after clicking Edit Dashboard).

Markdown tiles

Those are readily accessible under General in the Tile Gallery blade.

Markdown tile

Dragging and dropping one onto your dashboard will allow you to set a title and a subtitle, as well as content.

This fits perfectly well with the next feature.

Deep linking resources

Let's say I want to link to an App Service's WebJobs but I don't want the whole widget. Or maybe I want to see the full deployment list, not just the active deployment.

You could spend five minutes searching for and adding the widget, which will increase the load time, or… you could just create a link to it.

Want to direct-link to an AppService console?

https://portal.azure.com/#resource/subscriptions/{Subscription_Id}/resourceGroups/{Resource_Group_Name}/providers/Microsoft.Web/sites/{Site_Name}/console

Want to direct-link to the AppService Deployment Options?

https://portal.azure.com/#resource/subscriptions/{Subscription_Id}/resourceGroups/{Resource_Group_Name}/providers/Microsoft.Web/sites/{Site_Name}/DeploymentSource

Basically, just navigate to where you want to go, copy the link and you are almost done.

Use the previous trick to create a markdown tile and add links to your desired locations and you now have instant navigation to the exact feature you want to manage.

Sharing Dashboards

Now that you made that awesome dashboard full of custom links and useful widgets… what about sharing it so others don’t have to build one themselves?

It is possible to share a dashboard by saving it to a resource group. By default, the portal will try to save it in a dashboards resource group, but it can also be saved within an existing resource group.

This allows you to easily share carefully crafted dashboards with other members of the team.

This can be accessed with a link right beside the Edit Dashboard button.

If you are interested in deploying templates with ARM in the future, I would keep an eye on this issue on GitHub.

If you want to try the undocumented way (which may break in the future), check out this blog post.

If you want to unshare a dashboard, just navigate to the dashboard, click Share again, and press Unpublish.

Do you want more?

Is that a subject that you would be interested in? Do you want more tips and tricks about the Azure Portal and useful dashboard configurations?

Please let me know in the comments!

Azure Day is coming to Montreal January 17th 2017

If you are in Montreal around mid-January, Microsoft is hosting an awesome event called Azure Day.

What is Azure Day?

Azure Day is a 12-hour hands-on workshop centered on different services offered in Azure, like IoT, Machine Learning, and DevOps.

What subjects are going to be covered?

Data

  • Internet of Things
  • Machine Learning

DevOps

  • Continuous Integration
  • Continuous Deployment
  • Infrastructure as code
  • Microservices with Azure Container Services or Docker Swarm
  • Automated Security
  • A/B Testing

.NET

  • .NET Core
  • .NET 4.5

With all those subjects, if you don't want to miss out, remember to register and mention which of the following blocks you would like to book:

Azure Day Schedule

Resolved: AppInsights moved to EastUS, deployment failing with CentralUS message

TL;DR: AppInsights were moved to East US. Autoscale settings and alerts were kept in Central US, causing a chain reaction of failures all around. Clean-up code is available at the end of the post.

As AppInsights hit General Availability at Microsoft Connect 2016, a few issues were introduced that caused our VSTS builds to start failing. Here’s the message that we got:

2016-11-18T19:25:33.9545678Z [Azure Resource Manager]Creating resource group deployment with name WebSiteSQLDatabase-20161118-1925
2016-11-18T19:25:37.8711397Z ##[error]The resource 'myresource-int-myresource-int' already exists in location 'centralus' in resource group 'myresource-int'. A resource with the same name cannot be created in location 'East US'. Please select a new resource name.
2016-11-18T19:25:37.9071426Z ##[section]Finishing: Azure Deployment:Create Or Update Resource Group action on eClientRcgt-int

So I started debugging. After a few days trying to get this issue fixed, I decided to generate the template from the portal. I looked up the myresource-int-myresource-int resource inside of it and found out that it was an automatically generated name for a Microsoft.insights/autoscalesettings resource. The worst part was… its location was Central US. And it was not alone.

Other alert rules were also located in Central US, and just fixing the autoscalesettings would only get me other error messages.

Of course, there’s no easy way in the portal to delete those. However, with PowerShell, it’s trivial.

It is important to note that it’s perfectly safe to delete them on our end since we deploy with Azure Resource Manager templates. They will be recreated at the next CI/CD run.

Here’s the quick code to delete them if you encounter this issue.

$resource = Get-AzureRmResource
$rgn = 'resourceGroup'
$resource | where { $_.ResourceType -eq 'Microsoft.insights/autoscalesettings' -and $_.Location -eq 'centralus' -and $_.ResourceGroupName -eq $rgn } | Remove-AzureRmResource
$resource | where { $_.ResourceType -eq 'Microsoft.insights/alertrules' -and $_.Location -eq 'centralus' -and $_.ResourceGroupName -eq $rgn } | Remove-AzureRmResource

Cleaning up Azure Resource Manager Deployments in Continuous Integration Scenario

When deploying with Azure Resource Manager templates (aka ARM templates), provisioning an environment has never been easier.

It's as simple as providing a JSON file that represents your architecture, another JSON file that contains all the parameters for this architecture, and boom. Online you go.

Personally, I hate deploying from Visual Studio for anything but testing. Once you start delivering applications, you want something centralized, sturdy and battle tested. My tool of choice is Visual Studio Team Services. VSTS integrates perfectly with Azure with tasks to Create/Upgrade ARM templates on an Azure Subscription.

Our current setup includes 4 environments and 4-5 developers. One of these environments is a CI/CD environment: every single check-in that happens in a day gets deployed. So our resource group is also being updated like crazy. Just to give you numbers, 50 deployments in a day isn't unheard of.

The problem is the Azure Resource Manager deployment limit.

Azure Resource Manager Deployment Limit

So… 800, eh? Let's do the math: 20 deployments per day, 20 workable days in a month… 400 deployments per month.

Two months. That's how long before we run into an error when deploying on Azure. I've already raised the issue with one of the developers over at Microsoft, but in the meantime, I need to clear this!

There are many ways to do this.

The Portal

Oh boy… don't even think about it. You'll have to delete them one by one. There's no multi-select. And you'll need to do that every month or two.

Anything that repeats itself is worth automating.

Powershell - Normal way

I’ve tried running the following command:

$resourceGroupName = "MyResourceGroup"
Get-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName | Remove-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName

The problem with this is that each deployment is deleted synchronously. You can't delete them in batches. With 800 deployments to clean up, it took me hours to delete a few hundred before my Azure login PowerShell session expired and crashed on me.

Powershell - The Parallel way

PowerShell allows commands to be run in parallel, side by side. It runs those commands in separate sessions in separate PowerShell processes.

When I initially ran this command, I had about 300 deployments to clean up on one of my resource groups. This, of course, launched 300 powershell.exe processes that executed the required commands.

$path = ".\profile.json"
Login-AzureRmAccount
Save-AzureRmProfile -Path $path -Force

$resourceGroupName = "MyResourceGroup"
$deployments = Get-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName
$deploymentsToDelete = $deployments | where { $_.Timestamp -lt ((get-date).AddDays(-7)) }

foreach ($deployment in $deploymentsToDelete) {
    Start-Job -ScriptBlock {
        param($resourceGroupName, $deploymentName, $path)
        Select-AzureRmProfile -Path $path
        Remove-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -DeploymentName $deploymentName
    } -ArgumentList @($resourceGroupName, $deployment.DeploymentName, $path) | Out-Null
}

Then you have to keep track of them. Did they run? Did they fail? Get-Job will return the list of all jobs launched in this session. The list may be quite extensive, so let's keep track only of those still running:

(Get-Job -State Running).Length

If you want to only see the results of the commands that didn’t complete and clear the rest, here’s how:

# Waits on all jobs to complete
Get-Job | Wait-Job

# Removes the completed one.
Get-Job -State Completed | Remove-Job

# Output results of all jobs
Get-Job | Receive-Job

# Cleanup
Get-Job | Remove-Job

The results?

Instead of taking all night and failing on over half the operations, it managed to delete all of them in a matter of minutes.

The longest run I had on my machine was 15 minutes for about 300 deployments. Always better than multiple hours.

If you create a PowerShell script that automates this task on a weekly basis, you won't have to wait so long. And if you include Azure Automation in this? You're good for a very long time.