.NET/ASP.NET Documentation Update for July 2019

tldr; This is a status update on the .NET documentation. If you want me to do more of those (once a month), please let me know in the comments!

Hi everyone!

In case you missed our first post for June: today I’m posting a summary of all the .NET-related documentation that was updated significantly during the month of July.

My name is Maxime Rouiller and I’m a Cloud Advocate with Microsoft. For the month of July, I’m covering 3 major products.

  • .NET, which had ~248 commits, and 3,331 changed files on their docs repository
  • ASP.NET, which had ~190 commits, and 1,413 changed files on their docs repository
  • NuGet, which had ~126 commits, and 133 changed files on their docs repository

Obviously, that’s a lot of changes, and I’m here to help you find the gold within this tsunami of changes.

So here are all the documentation updates by product with commentary when available!

.NET Core

There was a lot of consolidation happening in the documentation. Content that was specific to the .NET Framework documentation but also applies to .NET Core is being moved to the .NET Guide. Things like Native Interop (say hi to COM) and the C# Language Reference.

.NET Architecture

The .NET Architecture e-books content was mixed in with the fundamentals content under the .NET Guide. The team wanted to give them a better home, so a new landing page was created.

.NET Application Architecture Guidance

Native Interop

C#

VB.NET

Tons of documentation pages needed examples. Here are a few. To change the language to VB, pick your favorite language on the language selector at the top of the page, to the left of the “Feedback” button.

Tutorials, recommendations, and others

This .NET CLI tutorial was rewritten and went from one page to three.


.NET APIs

.NET Core 3.0 Preview 7 was launched on July 23rd, so the API reference documentation for .NET Core and .NET Platform Extensions 3.0 was updated.
The documentation team is also working closely with the .NET developer team to add more API documentation for .NET Core 3.0. We reduced the number of undocumented APIs by 1,374 in July, and this effort will continue through the month of August.

ASP.NET Core

gRPC

Documentation keeps improving on gRPC. This time, it’s security-focused.

Troubleshooting

This doc consolidates content from other pages that covered errors and how to troubleshoot them.

Blazor

Security (Authentication/Authorization/etc.)

Authentication without Identity providers tutorial. Very interesting read!

Those pages received significant changes.

MVC / WebAPI

New features in 3.0 allow you to use the HTTP REPL directly from the CLI.
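
If you want to kick the tires, the HTTP REPL ships as a .NET global tool; installing it and pointing it at a local API looked something like this at the time (the Microsoft.dotnet-httprepl package id and the port are assumptions based on my own setup):

dotnet tool install -g Microsoft.dotnet-httprepl
httprepl https://localhost:5001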

Fundamentals

This CLI global tool is used to help you create areas, controllers, views, etc. for your ASP.NET Core applications. Especially useful if you want to create your own. For the source, look no further than GitHub.

The following are pages that received significant changes.

Host and Deploy

SignalR

Performance

This article was co-written with /u/stevejgordon! Have you heard about ObjectPool? It lets objects be reused instead of being garbage collected. It’s been in .NET Core forever, but this article was written with performance in mind.
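
To give a rough idea of the pattern, here’s a minimal sketch using Microsoft.Extensions.ObjectPool (the types come from that package; the greeting example itself is mine, not from the article):

using System.Text;
using Microsoft.Extensions.ObjectPool;

public static class GreetingBuilder
{
    // A pool of StringBuilder instances that get reset and handed back out.
    private static readonly ObjectPool<StringBuilder> _pool =
        new DefaultObjectPoolProvider().CreateStringBuilderPool();

    public static string Build(string name)
    {
        var sb = _pool.Get(); // take an instance from the pool (or create one)
        try
        {
            return sb.Append("Hello, ").Append(name).Append('!').ToString();
        }
        finally
        {
            _pool.Return(sb); // hand it back to be reused instead of collected
        }
    }
}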

Tutorials

New tutorial for ASP.NET Core 3.0 Preview this time with jQuery.

Those Razor Pages tutorials received significant changes due in part to the latest Preview.

Two more tutorials that received significant changes are focused on Web API and SignalR.

NuGet

Including the 5.2 release notes, there are new pages that have been created within the last month. Take a look to stay up to date!

A few of the significantly modified pages include handling NuGet accounts and Package Restore.

Using EasyAuth (AppService Authentication) with ASP.NET Core

There’s this cool feature in Azure AppService that I love. It’s called EasyAuth although it may not use that name anymore.

When you are creating a project and want to throw in some quick authentication, Single Sign-On (SSO for short) is a great way to throw the authentication problem at someone else while you keep working on delivering value.

Of course, you can get a clear understanding of how it works, but I think I can summarize quite quickly.

EasyAuth works by intercepting authentication requests (/.auth/*) and, once the user is authenticated, filling in the user context within your application. That’s the 5-second pitch.

Now, the .NET Framework application lifecycle allowed tons of stuff to happen when you added an HttpModule in your application. You had access to everything and the kitchen sink.

.NET Core, on the other hand, removed the concept of all-powerful modules and instead introduced Middlewares. Instead of relying on a fixed set of events happening in a pipeline, we could expand the pipeline as our application needed it.

I’m not going to go into details on how to port HttpModules and Handlers, but let’s just say that they are wildly different.

One of the many differences is that HttpModules could be set within a web.config file and that config file could be defined at the machine level. That is not possible with Middlewares. At least, not yet.

Why does it matter?

So with all those changes, why did it matter for EasyAuth? Well, the application programming model changed quite a lot, and the things that worked with the .NET Framework stopped working with .NET Core.

I’m sure there’s a solution on the way from Microsoft, but a client I met encountered the problem, and I wanted to solve it.

Solving the issue

So, after understanding how EasyAuth worked, I set out to create a repository as well as a NuGet package.

All the package does is relay the captured identity and claims into the .NET Core authentication pipeline. I’m not doing anything else.

Installing the solution

The first step is to install the NuGet package using your method of choice. Then, add an [Authorize(AuthenticationSchemes = "EasyAuth")] attribute to your controller.

Finally, add the following lines of code to your Startup.cs file.

using MaximeRouiller.Azure.AppService.EasyAuth;
// ...
public void ConfigureServices(IServiceCollection services)
{
    //... rest of the file
    services.AddAuthentication().AddEasyAuthAuthentication((o) => { });
}

That’s it. If your controller has an [Authorize] attribute, the credentials are going to automatically start populating the User.Identity of your MVC controller.
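
To picture it, here’s a hypothetical controller (class and action names are mine) reading that populated identity, using the same "EasyAuth" scheme as above:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize(AuthenticationSchemes = "EasyAuth")]
public class ProfileController : Controller
{
    public IActionResult Index()
    {
        // User.Identity is filled in from the claims EasyAuth captured.
        return Content($"Hello, {User.Identity.Name}!");
    }
}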

Question

Should I go further? Would you like to see this integrated into a supported Microsoft package? Reach out to me directly on Twitter or through the many other channels available.

Solving Cold-Start disturbs serverless' definition and it's okay

When you look at Azure Functions, you see it as a great way to achieve elastic scaling of your application. You only pay for what you use, you have a free quota, and they allow you to build entire applications on this model.

The whole reason Azure Functions can be free is its linked plan, called the Consumption Plan. While Azure Functions is an application model, the Consumption Plan is where the serverless kicks in. You give us your application, and you care less about the servers.

One of the main selling points of serverless is the ability to scale to 0. It allows you to pay only for what you use and it’s a win-win for everyone involved.

That would look a little bit like the image below.

Ideal Scale Behavior

About Cold Starts

A cold start happens when an application loads for the first time on a server. What happens behind the scenes looks like this:

  1. The cloud receives a request for your application and starts allocating a server for it
  2. The server downloads your application
  3. The cloud forwards the request received initially to your application
  4. The application stack loads up and initializes what it needs to run the code successfully
  5. Your application loads up and starts handling the request.

This workflow happens every time your application either goes from 0 to 1 or when the cloud scales you out.

This whole process is essential as Azure can’t keep the servers running all the time without blocking other applications from running on the same servers.

That time between when the request initially arrives and is handled by a server can be longer than 500ms. What if it takes a few seconds? What do you do to solve that problem?

Scale with cold start

Premium Functions

Azure Premium Functions is the best way to resolve that problem. It breaks the definition of serverless in that you can’t scale to 0. It does, however, offer the elastic scale-out that is required to handle a massive amount of load.

Elastic Scale-Out

This minimum of one instance is what makes a night-and-day difference in performance. This single instance already has your application on it; the Azure Functions runtime is ready to handle your requests.

Having a permanent instance removes most of the longer steps needed to handle a request. It effectively removes cold-start issues as seen below.

Scale with One Pre-Warmed Instance

When would you use Premium Functions? When your application can’t have cold starts. Not every application needs this feature, and that’s fine. Keep using the Consumption Plan if it fits your needs and cold starts aren’t a problem.

If you are among the clients that can’t afford cold-start time in your application while still needing the bursting of servers, Premium Functions are for you.

Resources

I’ve gone very quickly over the essential feature that I think fixes one of the more significant problems with serverless. More features come with Premium Functions. I’ve left a link to the docs in case you want to read it all by yourself.

Getting rid of Time Zone issues within Azure Functions

Your client wants to run a database clean-up task every day at 2 am. Why? Mostly because that’s when there’s no traffic on the site, and it reduces the risk of someone encountering problems due to maintenance tasks.

Sounds good! So as an engineer, you know that the task is periodic, and you don’t need to create a Virtual Machine to handle it. Not even an App Service is required. You can go serverless and benefit from the million free monthly executions on offer! Wow. Free maintenance tasks!

You create a new Function app and write some code that looks something like this.

[FunctionName("DatabaseCleanUpFunction")]
public static void Run([TimerTrigger("0 0 2 * * *")] TimerInfo myTimer, ILogger log)
{
    // todo: clean up the database
}

Wow! That was easy! You publish this application to the cloud, test it out manually a few times, then publish it to production and head back home.

When you get back to the office the next day, you realize that the job has run yes… but at 10 pm. Not 2 am. What happened!?

You’ve been struck by Time Zones. Oh, that smooth criminal of time.

Default time zone

The default time zone for Azure Functions is UTC. Since I’m in the Eastern Time Zone, the previous function now runs at 10 pm instead of 2 am. If there were users using my application at that time, I could have severely impacted them.

That is not good.

The Fix

There are two ways to work around this. One is to change our trigger’s CRON expression to represent the time in UTC and keep our documentation updated.
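
For example, since 2 am Eastern Daylight Time is 6 am UTC, the trigger from earlier would become something like this (a sketch; the offset shifts again when daylight saving time ends):

[FunctionName("DatabaseCleanUpFunction")]
public static void Run([TimerTrigger("0 0 6 * * *")] TimerInfo myTimer, ILogger log)
{
    // todo: clean up the database (2 am Eastern expressed as 6 am UTC)
}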

The second way is to tell Azure Functions in which time zone it should interpret the CRON expression.

For me, a man of the best Coast, it involves setting the WEBSITE_TIME_ZONE environment variable to Eastern Standard Time. If you are on the lesser Coast, you may need to set yours to Pacific Standard Time.

However, let’s be honest. We’re in a global world. You need The List™.

Find your time zone on that list, set it in WEBSITE_TIME_ZONE, and Azure Functions automatically applies the correct time zone to your CRON expression.
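
If you prefer the command line over the portal, something like this should do it (the app and resource group names are placeholders):

az functionapp config appsettings set -n <function-app-name> -g <resource-group> --settings WEBSITE_TIME_ZONE="Eastern Standard Time"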

Fixing Azure Functions and Azure Storage Emulator 5.8 issue

So you’ve updated Azure Functions to the latest version (2.X at the time of writing this), and nothing starts anymore.

Now, when you boot the Functions host, you get this weird error.

[TIMESTAMP] A host error has occurred
[TIMESTAMP] Microsoft.WindowsAzure.Storage: Server encountered an internal error. Please try again after some time.

There happens to be an issue open on GitHub that relates to Durable Functions and the Azure Storage Emulator.

The thing is, it’s not directly related to Azure Durable Functions. It’s related, in my opinion, to a breaking change in the way the Azure Storage Emulator 5.8 API responds.

If you want to fix that issue, merge the following setting in your local.settings.json file.

{
  "Values": {
    "AzureWebJobsSecretStorageType": "files"
  }
}

This only applies when "AzureWebJobsStorage": "UseDevelopmentStorage=true".

So why should we set that? There was a change introduced back in September 2018 when Azure Functions V2 was released. Before 2.0, Azure Functions stored your secrets on disk. When slot swapping environments, Azure Functions swaps the content of the disk, including the secrets.

What this setting does is ensure that Functions stores secrets on your file system, which is the expected behavior when using a local development environment.

If you want to read more, there is an entire article on that behavior.

Conclusion

To fix the issue, you can either use the workaround or update the Azure Storage Emulator to 5.9.

Wrapping Node.js Azure Table Storage API to enable async/await

I love the latest and greatest. Writing code by using the new language syntax is fantastic.

What happens when your favorite library doesn’t support it? You are stuck trying to find a workaround. We all hate workarounds, but they are the glue that keeps our code together at the end of the day.

Between the runtime, the frameworks, the libraries, and everything else… we need everyone on the same page.

Recently, I had to use Azure Node.js Table Storage API.

First, let me say I know. Yes, there is a v10, and this is v2. No, v10 doesn’t support Table Storage yet. So, let’s move on.

Here’s the code I wanted to see:

let storage = require('azure-storage');
// ...

async function getAllFromTable() {
    let tableService = storage.createTableService(connectionString);
    let query = new storage.TableQuery()
        .where('PartitionKey eq ?', defaultPartitionKey);
    return await queryEntities(tableService, 'TableName', query, null);
}

Here’s the code that I had:

async function getAllFromTable() {
    return new Promise((resolve, reject) => {
        let tableService = storage.createTableService(connectionString);
        let query = new storage.TableQuery()
            .where('PartitionKey eq ?', defaultPartitionKey);

        tableService.queryEntities('TableName', query, null, function (err, result) {
            if (err) {
                reject(err);
            } else {
                resolve(result);
            }
        });
    });
}

The sight of function callbacks gave me flashbacks to a time where code indentation warranted wider monitors.

The workaround

Here’s the temporary workaround that I have for now. It allows me to wrap highly used functions into something more straightforward.

async function queryEntities(tableService, ...args) {
    return new Promise((resolve, reject) => {
        let promiseHandling = (err, result) => {
            if (err) {
                reject(err);
            } else {
                resolve(result);
            }
        };
        args.push(promiseHandling);
        tableService.queryEntities.apply(tableService, args);
    });
}

Overview of the workaround

  1. We’re using async everywhere
  2. Using the rest parameter args allows us to capture all the parameters that API expects
  3. We’re wrapping the proper promise handling and appending it to the arguments
  4. We’re calling the relevant API with the proper arguments.

Conclusion

That’s it. While the Node.js Storage v10 waits for table storage to be implemented, I recommend wrapping your table storage code in a similar structure.

This will allow you to use the new language syntax while they update the library.

How to build a multistage Dockerfile for SPA and static sites

When you are a consultant, your goal is to think about the best way to save money for your client. They are not paying us because we can code. They are paying because we can remove a few dollars (or a few hundred) from their bills.

One of the situations we often find ourselves in is building a single page application (SPA). Clients want dynamically driven applications that don’t refresh the whole page, and a SPA is often the perfect choice for them. Among the many tools used to build a SPA, we find Angular, Vue, and React.

I’ve found that delivering websites with containers is a universal way of ensuring compatibility across environments, cloud or not. It also prevents a developer’s environment from having to install 25 different tools/languages/SDKs.

It keeps things concise and efficient.

If you want to know more about Docker containers, take a few minutes, in particular, to read about the terminology.

The problem is that we only need Node.js to build that application, not to run it. So, how would containers solve our problem? There’s a concept in Docker called Multistage builds where you can separate the build process from the execution.

Here’s a template you can use to build a SPA with Node.js.

Dockerfile template for Node.js

#build stage for a Node.js application
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

#production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

There’s a lot to unpack here. Let’s look at the two stages separately.

Build Stage (Node.js)

Multistage docker builds allow us to split our container in two ways. Let’s look at the build stage.

The first line is a classic. We’re starting from an Alpine image that has Node.js pre-installed on it.

Note: Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and busybox. Its main characteristic is it runs everywhere and is extremely small, around 5MB.

We’re configuring /app as the working directory. Then, we do something unusual. We copy our package*.json files before copying everything else.

Why? Each line in a Dockerfile represents a layer. When building an image, if a layer already exists locally, it is retrieved from the cache instead of being rebuilt. By copying and installing our packages in a separate step, we avoid re-running npm install for dependencies that didn’t change in the first place. Since npm install can take a while, we save some time there.

Finally, we copy the rest of our app and run the npm build task. If your application doesn’t have a build task, change the name to whatever task generates an output folder like dist.

The result? We have a correctly built Node.js application located in /app/dist.

Production Stage

We’ve generated our SPA or static site with Node.js but… our application isn’t using Node.js. It’s using HTML/CSS/JS. We don’t need a Node.js image to take our application to production. Instead, we only need an HTTP server. Let’s use the NGINX Docker Image as a host.

We copy the output from our previously defined build-stage /app/dist folder into the NGINX defined folder /usr/share/nginx/html as mentioned in their docs.

After exposing port 80, we need to run NGINX with the daemon off; option to have it run in the foreground, which prevents the container from closing.

Building the Dockerfile

This step is easy. Run the following command in the folder containing the Dockerfile.

docker build -t mydockerapp:latest .

Running the Docker container locally

Running the application on your machine is of course just a simple command away.

docker run -it -p 8080:80 mydockerapp:latest

This command is doing two things. First, it runs the container in interactive mode with the -i flag. That flag will allow us to see the output of NGINX as it runs. Second, it maps port 8080 of your local machine to port 80 of the container.

Opening your browser to http://localhost:8080 will show you your website.

Conclusion

I’m using Docker more and more for everything. I’m building single-use applications with current technology. Docker empowers me to run applications with older versions of frameworks, runtimes, and languages without causing tooling versioning issues on my machine.

While technology may continue to evolve, I’m never afraid that my Docker container won’t work anymore. Things have been stuck in time if only for a moment.

That means I don’t have to upgrade that AngularJS 1.X app to stay cool. If it works, it works.

Are you using Docker in unusual ways? Share them with me on Twitter!

Uploading files to Storage in batches, two ways

I love static sites. They are cheap, easy to maintain, and a total non-issue in terms of security.

What? You hacked my site? Let me delete everything, reupload my files, and… we’re done. Okay… not totally true but you get the point.

Having the source of truth away from your deployment is a big relief. Your deployment having no possible actions on your source of truth is even better.

Okay, where am I going with this? Maybe you saw the announcement a few days ago that Static Sites on Azure Storage went General Availability (internally, we call this GA because we love acronyms).

Now let me tell you something else that you may not have picked up that I consider GA (Greatly Amazing).

Uploading files to Storage

Uploading files to storage is an operation that can be done in many ways. Maybe you go through the portal and upload your files one by one. I don’t know about you, but I’ve got things to do. I can’t spend the day uploading my site like this. So we need alternatives.

Command line Interface (CLI, another acronym)

So how do I get you, dear reader, to be able to upload from anywhere in the world, including your phone? Why? No time to explain. We’re more interested in the how.

az storage blob upload-batch -d MyContainer --account-name MyStorageAccount -s ./generated/ --pattern *.html --if-unmodified-since 2018-08-27T20:51Z

Want to customize it a little? Check out the docs. It’s easy. It’s amazing. It works in Azure Cloud Shell.

Graphical User Interface (GUI, see? we love them)

You know what I think is even better in demos? A nice graphical interface. No, I’m not about to recommend you install Visual Studio 2019 to edit some markdown files and publish to Azure Storage. May look fun but… it’s still Windows only. I want you to be able to do that operation everywhere.

Let’s take something cross platform. Let’s take something that isn’t as polarizing. Let’s take an IDE that’s built on Electron.

Let’s use Azure Storage Explorer. record scratch

Azure Storage Explorer

Download it. Install it. Open it. Login with your account.

Bam. We’re here.

Image of Azure Storage Explorer initial state

So how do I upload my blog to my storage account now? Well, someone told me once that an image is worth a thousand words.

An animated GIF is therefore priceless.

priceless animated gif

Did you see that? Drag and drop.

Getting started

So if you got yourself a static site using Azure Storage, I want you to know that whatever tools you’re using, this scenario works out of the box.

It is supported. You are in good hands. Do you have feedback on the Azure Static Site? Let me know on Twitter and we’ll talk.

Here’s a summary of all the links in this post.

Flipping the static site switch for Azure Blob Storage programmatically

As a few of you know already, I REALLY love static sites.

So when I read the news that Azure Blob Storage enabled Static Site in Public Preview back in June, how would you classify my reaction?

via GIPHY

Right away, I wanted to automate this deployment. Then began my search into Azure Resource Manager (ARM) templates. I searched left and right and could not find an answer.

Where was this mysterious setting hiding?

Finding the switch

If the switch wasn’t in the ARM template, it must be somewhere else.

Since I tend to rush myself into coding, it’s the perfect moment to breathe and read. Let’s read the announcement once again:

This feature set is supported by the most recent releases of the Azure Portal, .Net Client Library (version 9.3.0), Java Client Library (version 8.0.0), Python Client Library (version 1.3.0), Node.js Client Library (version 2.10.0), Visual Studio Code Extension (version 0.4.0), and CLI 2.0 (extension version 0.1.3).

So, it’s possible to do it in code. On top of that, there are at least 6 versions of this code in multiple languages that set that property. Wow. Way to rush too fast into code Maxime (imagine slow claps here).

Going into the .NET Client Library’s GitHub and searching for static website led me to these results.

The results may not look like much, but it mentions Service Properties. This is definitely not an ARM concept. Let’s do a quick search for azure storage service properties.

Where does it lead us? Just like every good Google search should lead. To the docs.

Wow. So I don’t even need to call a REST API or sacrifice a few gummy bears to get it?

Implementation

CloudStorageAccount storageAccount = CloudStorageAccount.Parse("<connectionString here>");
var blobClient = storageAccount.CreateCloudBlobClient();
ServiceProperties blobServiceProperties = new ServiceProperties();
blobServiceProperties.StaticWebsite = new StaticWebsiteProperties
{
    Enabled = true,
    IndexDocument = "index.html",
    ErrorDocument404Path = "404.html"
};
await blobClient.SetServicePropertiesAsync(blobServiceProperties);

Just like that, the service is now enabled and able to serve HTTP requests on a URL looking like this.

https://mystorageaccount.z00.web.core.windows.net/

Next post? Custom domain? Yay or nay?

More Resources

If you want to read more about how it works, I would recommend the following resources:

Prevent Kestrel from logging to Console in ASP.NET Core 2.2

I recently had the need to start a web server from the command line. Authentication, you see, is a complicated process and sometimes requires you to open a browser to complete that process.

Active Directory requires you to have a return URL, which isn’t really possible from a command line. Or is it?

With .NET Core, you can easily set up a web server that listens on localhost. Active Directory can be configured to redirect to localhost. Problem solved, right?

Not exactly. I’m partial to not outputting anything but useful content when creating a CLI tool. I want Kestrel to not output anything to the console. We have other ways to make an EXE talk, you see.

So how do you get Kestrel to go silent?

The first solution led me to change how I built my WebHost instance by adding .ConfigureLogging(...) and using Microsoft.Extensions.Logging. It is the perfect solution when you want to tell Kestrel not to output anything for individual requests.

However, Kestrel will still output that it started a web server and which ports it’s listening on. Let’s remove that too, shall we?

WebHost.CreateDefaultBuilder()
    .SuppressStatusMessages(true) // <== this removes the "web server started on port XXXX" message
    .ConfigureLogging((context, logging) =>
    {
        // this removes the logging from all providers (mostly console)
        logging.ClearProviders();
        //snip: providers where I want the logging to happen
    })
    .UseStartup<Startup>()
    .UseKestrel(options =>
    {
        options.ListenLocalhost(Common.Port); // port that is hardcoded somewhere 🤷‍♂️
    });

So next time you need Kestrel to take five, you know what to do: add that SuppressStatusMessages(true) call.

Step by step: Detecting files added between two commits in git

I was looking into retrieving the last created files into a repository. For me, it was for our Microsoft Azure Documentation.

This is, of course, completely open source, and you can find the repository on GitHub. The problem, however, is that a ton of people work on it, and you want to know which new pages of docs were created. It matters to me because it allows me to see what people are creating and what I should take a look at.

So, how do I retrieve the latest files automatically?

Knowing that git show HEAD shows you the latest commit on the current branch and git show HEAD~1 shows you the previous commit on the current branch, all we have to do is make a diff out of those two commits.

Showing changes between two commits

git diff HEAD~1 HEAD

This, however, will show you in great detail all the files that have been modified, including their contents. Let’s trim it down a bit to only show names and statuses.

Showing names and status of files changed between two commits

git diff --name-status HEAD~1 HEAD

Awesome! But now, you should see the first column filled with a letter. Sometimes A, sometimes D, but most often M. M is for modified, D for deleted, and A for added. The last one is the one that I want.

Let’s add a filter on that too.

Showing names of files added between two commits

git diff --name-only --diff-filter=A HEAD~1 HEAD

At that point, I changed --name-status to --name-only since we are now guaranteed to only have added files in our list, and I don’t need the status column anymore. The thing, however, is that I’m seeing PNG files as well as other types of files that I’m not interested in. How do I limit this to only markdown files?

Showing names of markdown files added between two commits

git diff --name-only --diff-filter=A HEAD~1 HEAD *.md

And that’s it. That’s how a simple command coupled with a few parameters can give you total control over what you want out of git.

Resources

Here are the resources I used to build this command:

HttpRequestException with git pull from GitHub

I’m working on a Windows machine, and some time ago, this error started happening when I did any git pull or git push operation.

fatal: HttpRequestException encountered.
   An error occurred while sending the request.
Already up-to-date.

Okay, we have an HttpException. First, let’s be clear that the whole concept of exceptions does not exist in git. This is a .NET concept, so it’s definitely coming from my Windows Credential Manager.

To enable tracing, you have to set the GCM_TRACE environment variable to 1.

SET GCM_TRACE=1

$env:GCM_TRACE = 1

Then, I did my git pull again.

C:\git\myrepo [master ≡]> git pull
08:59:28.015710 ...\Common.cs:524       trace: [Main] git-credential-manager (v1.12.0) 'get'
08:59:28.441707 ...\Where.cs:239        trace: [FindGitInstallations] found 1 Git installation(s).
08:59:28.459707 ...Configuration.cs:405 trace: [LoadGitConfiguration] git All config read, 27 entries.
08:59:28.466706 ...\Where.cs:239        trace: [FindGitInstallations] found 1 Git installation(s).
08:59:28.473711 ...Configuration.cs:405 trace: [LoadGitConfiguration] git All config read, 27 entries.
08:59:28.602709 ...\Common.cs:74        trace: [CreateAuthentication] detecting authority type for 'https://github.com/'.
08:59:28.684719 ...uthentication.cs:134 trace: [GetAuthentication] created GitHub authentication for 'https://github.com/'.
08:59:28.719709 ...\Common.cs:139       trace: [CreateAuthentication] authority for 'https://github.com/' is GitHub.
08:59:28.745709 ...seSecureStore.cs:134 trace: [ReadCredentials] credentials for 'git:https://github.com' read from store.
08:59:28.748709 ...uthentication.cs:163 trace: [GetCredentials] credentials for 'https://github.com/' found.
08:59:29.183239 ...\Program.cs:422      trace: [Run] System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)

<snip>

Now, we can see that we Could not create SSL/TLS secure channel. Also, we can see that my credential manager is version 1.12.0.

This tells me that something changed somewhere and that the version of my credential manager is probably not up to date. So time to head to the Windows Credential Manager Release Page.

Windows Credential Manager Release Page

Alright, so I’m a few versions behind. Let’s update to the latest version.

Now, let’s run another git pull.

C:\git\myrepo [master ≡]> git pull
Already up-to-date.

Alright so my problem is fixed!

Why?

Updating the Git Credential Manager to the latest version definitely solved my problem, but why did we have that problem in the first place?

If we look at release 1.14.0, we see something very interesting among the release notes.

Added support for TLS 1.2 (as TLS 1.0 is being retired).

By doing a bit of searching, I ended up on this blog post by GitHub Engineering, which is a deprecation notice for TLS 1.0 as of February 1st.

That’s it! Keep your tools updated folks!

Graph Databases 101 with Cosmos DB

Also available in a video format:

I’ve never played with any kind of graph database before this blog post. As a .NET developer, this was weird. I’m so used to RDBMSs like SQL Server that thinking in graphs was difficult at first. Developers who use them as their main tool also use a different kind of vocabulary. With an RDBMS, we’re discussing tables, columns, and joins. With graphs, we’re talking about vertices, properties, edges, and traversal.

Let’s get the vocabulary out of the way.

Graph Database Vocabulary

This is not exhaustive but only what we’re going to be discussing in this blog post.

Vertex (Vertices)

This is what I’ll also call a node. That’s what defines an entity. An RDBMS would represent it as a table with a fixed schema. Graph databases don’t really have a fixed schema; they allow us to push documents.

Properties

So a vertex has properties just like a table has columns. Tables have a fixed schema, but graph databases are more like NoSQL document databases with their more fluid schemas.

Edge

Up until now, we couldn’t tell the difference between a document database and a graph database. Edges are what make them different. Edges define the relationship between two vertices.

So let’s take an example. A person is_friend with another person. We just defined an edge called is_friend. That edge could also have properties, like since. It would allow us to query which people in our database have been friends since a specific date.
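
To make that concrete, in Gremlin the example could look something like this (a sketch with made-up ids, using the same SubmitAsync pattern shown later in this post):

// two people and an is_friend edge carrying a 'since' property
await gremlinClient.SubmitAsync<dynamic>("g.addV('person').property('id', 'alice').property('name', 'Alice')");
await gremlinClient.SubmitAsync<dynamic>("g.addV('person').property('id', 'bob').property('name', 'Bob')");
await gremlinClient.SubmitAsync<dynamic>("g.V('alice').addE('is_friend').to(g.V('bob')).property('since', 2016)");

// traversal: who has Alice been friends with since before 2018?
await gremlinClient.SubmitAsync<dynamic>("g.V('alice').outE('is_friend').has('since', lt(2018)).inV().values('name')");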

What about Cosmos DB?

With the vocabulary out of the way, Cosmos DB allows us to create a graph database really easily and make our first foray into it.

Creating a Cosmos DB Graph API

So to create my first Cosmos DB Graph database, I followed this tutorial.

For the Cosmos DB name, we’ll use beerpub, the resource group beerapp, and as for the API, we’ll use Gremlin (graph).

Then, using this other section of the quickstart, we’ll create a graph. For the database, we’ll use beerpub and for the graph ID we’re going to use beergraph.

We’ll want to keep the storage to 10 GB and the RUs as low as possible since we’re just kicking the tires and wouldn’t want to receive a big invoice.

Creating our first project - Data Initializer

dotnet new console -n DataInitialization
cd DataInitialization
dotnet add package Gremlin.net
dotnet restore
code .

This will create us a basic console application from which we can initialize our data.

Let’s open up Program.cs and create some basic configuration that we’re going to use to connect to our Cosmos DB Graph API.

private static string hostname = "beerpub.gremlin.cosmosdb.azure.com";
private static int port = 443;
private static string authKey = "<Key>";
private static string database = "beerpub";
private static string collection = "beergraph";

Then, make sure the following usings are at the top of your Program.cs

using System;
using System.Threading.Tasks;
using Gremlin.Net;
using Gremlin.Net.Driver;
using Gremlin.Net.Structure.IO.GraphSON;

Your authKey will be found in your Azure Portal right here:

Location in the portal where we get our Cosmos DB Key

Or alternatively, you could run the following Azure CLI 2.0 command to retrieve both of them:

az cosmosdb list-keys -n beerpub -g beerapp

Finally, we need to enable support for async in our Main(...) and add the basic client initialization.

static void Main(string[] args)
{
    Console.WriteLine("Starting data creation...");
    Task.WaitAll(ExecuteAsync());
    Console.WriteLine("Finished data creation.");
}

public static async Task ExecuteAsync()
{
    var gremlinServer = new GremlinServer(hostname, port, enableSsl: true,
        username: "/dbs/" + database + "/colls/" + collection, password: authKey);
    using (var gremlinClient = new GremlinClient(gremlinServer, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
    {
        //todo: add data to Cosmos DB
    }
}

Our bootstrap is completed and we are now ready to go.

Since we’ll want to start from scratch, let’s use the Gremlin drop step to clear our whole graph before going further.

// cleans up everything
await gremlinClient.SubmitAsync<dynamic>("g.V().drop()");

Now we need to add beers and breweries. Those are represented as vertices. Vertices can have properties, and properties belong to their specific vertex. For our beers and breweries, we’d like to give them a proper name that will be easy to read instead of just an id.

// add beers
await gremlinClient.SubmitAsync<dynamic>("g.addV('beer').property('id', 'super-a').property('name', 'Super A')");
await gremlinClient.SubmitAsync<dynamic>("g.addV('beer').property('id', 'nordet-ipa').property('name', 'Nordet IPA')");

// add breweries
await gremlinClient.SubmitAsync<dynamic>("g.addV('brewery').property('id', 'auval').property('name', 'Brasserie Auval Brewing')");

All those vertices are now hanging around without any friends. They are single nodes without any connections or relationships to anything. Those connections are called edges in the graph world. To add an edge, it’s as simple as selecting a vertex (g.V('id of the vertex')), adding an edge (.addE('relationship description')) to another vertex (.to(g.V('id of the vertex'))).

// add 'madeBy'
await gremlinClient.SubmitAsync<dynamic>("g.V('super-a').addE('madeBy').to(g.V('auval'))");
await gremlinClient.SubmitAsync<dynamic>("g.V('nordet-ipa').addE('madeBy').to(g.V('auval'))");

If we run that code as-is, we should have the following show up in our Azure Cosmos DB Data Explorer.

Image of the represented graph

Conclusion

So this was my first beer database, created directly from import code. Do you want to see more?

Let me know if these kinds of demos are interesting and I’ll be sure to do a follow-up!

Calculating Cosmos DB Request Units (RU) for CRUD and Queries

Video version also available

Cosmos DB is a globally distributed database that offers single-digit-millisecond latencies on multiple models. That’s a lot of power under the hood. As you may be tempted to use as much of it as possible, you have to remember that you are billed for what you use.

Cosmos DB measures your actual usage of the service in Request Units (RU).

What are Cosmos DB Request Units (RU)?

Request Units are a normalized number that represents the amount of computing power (read: CPU) required to serve the request. Inserting new documents? Inexpensive. Making a query that sums up a field based on an unindexed field? Costly.

By going to the Cosmos DB Capacity Planner tool, we can test from a JSON sample document how many RUs are required based on your estimated usage. By uploading a simple document and setting all input values to 1 (create, read, update, delete) we can see which operations are relatively more expensive than others.

Create RUs:  5.71
  Read RUs:  1.00
Update RUs: 10.67
Delete RUs:  5.71

Those are the numbers at the time of writing this blog post and may change in the future. Read is for a single document. Queries work differently.

Tracking Request Unit (RU) usage

Most operations with the DocumentClient (SQL API) return a model that allows you to see how many RUs you used. Here are the basic operations and how easy it is to retrieve their respective Request Units.

Create

To retrieve the number of Request Units used for creating a document, we can do the following.

var collectionUri = UriFactory.CreateDocumentCollectionUri(database.Id, documentCollection.Id);
var result = await client.CreateDocumentAsync(collectionUri, new { id = "1", name = "John"});
Console.WriteLine($"RU used: {result.RequestCharge}");

Update

We can also retrieve it while updating a document.

var document = new { id = "1", name = "Paul"};
var result = await client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(database.Id, documentCollection.Id, document.id), document);
Console.WriteLine($"RU used: {result.RequestCharge}");

Delete

Finally, figuring out the number of RUs used for deleting a document can be done like so.

var result = await client.DeleteDocumentAsync(UriFactory.CreateDocumentUri(database.Id, documentCollection.Id, "1"));
Console.WriteLine($"RU used: {result.RequestCharge}");

This is quite easy, right? Let’s move on to queries.

Calculating the Request Unit (RU) of Cosmos DB Queries

That’s where things get a little more complicated. Let’s build a query that returns the top 5 documents and retrieve the results. The default API usage makes it very easy for us to retrieve a list of elements but not the Request Units.

var documentQuery = client.CreateDocumentQuery(collectionUri).Take(5);

// materialize the list.. but we lose the RU
var documents = documentQuery.ToList();

Here’s why it’s difficult to retrieve the RU in this scenario. If I do a ToList, it will return a generic list (List<T>) on which I can’t append more properties. So, we lose the Request Units while retrieving the documents.

Let’s fix this by rewriting this query.

var documentQuery = client.CreateDocumentQuery(collectionUri).Take(5).AsDocumentQuery();

double totalRU = 0;
List<dynamic> allDocuments = new List<dynamic>();
while (documentQuery.HasMoreResults)
{
    var queryResult = await documentQuery.ExecuteNextAsync();
    totalRU += queryResult.RequestCharge;
    allDocuments.AddRange(queryResult.ToList());
}

If all you wanted was the code, what’s above will do the trick for you. If you want to understand what happens, stick around.

The explanation

Cosmos DB will never return 1 million rows to you in one response. It will page them. That’s why we see a pattern similar to an enumerator.

The first thing we do is move the query from an IQueryable to an IDocumentQuery. Using this method gives us access to the ExecuteNextAsync method and the HasMoreResults property. With just those two, we can now get a separate FeedResponse<T> for each page of our query. It’s now obvious that if you try to extract all the data from a collection, you are using RUs for each page of results.

Next Steps

Want to give it a try? Never tried Cosmos DB before?

You can get 7 days of access for free: no credit card, no subscription, and no questions asked.

Then, once you have a free database, try one of the 5-minute quickstarts in the language that you want.

Need more help? Ask me on Twitter. I’ll be happy to help!

Converting an Azure Table Storage application to Cosmos DB with Table API

Converting an application that is using Azure Table Storage to Cosmos DB is actually pretty easy to do.

Azure Table Storage is one of the oldest Microsoft Azure storage technologies out there, and lots of applications still use it. But what if you need to go global and have your data accessed in a performant way, with better SLAs than the ones guaranteed by a standard Storage Account?

Cosmos DB allows you to effortlessly transition from one to the other with a single change in your code.

Previous Code

Here’s how we would normally build an Azure Storage client.

private async Task<CloudTable> GetCloudTableAsync()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(configuration.GetConnectionString("initial"));
    var tableClient = storageAccount.CreateCloudTableClient();
    CloudTable table = tableClient.GetTableReference("mytable");
    await table.CreateIfNotExistsAsync();
    return table;
}

where the initial connection string looks like this:

DefaultEndpointsProtocol=https;AccountName=<ACCOUNT NAME>;AccountKey=<KEY>;EndpointSuffix=core.windows.net

Cosmos DB Code

Here’s how we would create the new CloudTable when using Cosmos DB.

private async Task<CloudTable> GetCloudTableAsync()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(configuration.GetConnectionString("destination"));
    var tableClient = storageAccount.CreateCloudTableClient();
    CloudTable table = tableClient.GetTableReference("mytable");
    await table.CreateIfNotExistsAsync();
    return table;
}

where the destination connection string looks like this:

DefaultEndpointsProtocol=https;AccountName=<ACCOUNT NAME>;AccountKey=<KEY>;TableEndpoint=https://MYCOSMOS.table.cosmosdb.azure.com:443/;

Difference in your code

And that’s it. A single connection string change, and you’ve gone from the good ol’ Table Storage to multiple consistency levels and globally replicated data in multiple regions.

Difference in implementation

Of course, these are two different implementations behind one single API. There are bound to be differences. The complete list goes into the details, but it will end up being more expensive, as Cosmos DB preallocates throughput while your Storage Account only charges for what you use. As much as it will end up being more expensive on Cosmos DB, you will also end up with better performance.

Try it now

If you want to try Cosmos DB, there are multiple ways.

If you don’t have an account or a credit-card, you can try it for free right here.

If you don’t want to be limited by the subscription-less option, you can always get an Azure Free Trial, which includes free credits for Cosmos DB.

Persisting IoT Device Messages into CosmosDB with Azure Functions and IoT Hub

If you are doing IoT, you are generating data. Maybe even lots of data. If you are doing API calls on each device to store them directly, you are doing yourself a disservice. If you are using something different as an event handler, things are better. If you are like me, you’re using Azure IoT Hub to ingest the events.

IoT Hub

IoT Hub is a great way to ingest data from thousands of devices without having to create a scalable API to handle all of them. Since you don’t know whether you will be receiving one event per hour or 1,000 events per second, you need a way to gather all of this. However, those are just messages.

You want to be able to store all your events efficiently whether it’s 100 events or a billion.

Azure Functions ⚡

You could always spawn a VM or even create an App Service application and have jobs dequeue all those messages. There’s only one issue. What happens when your devices stop sending events? Maybe you’re running a manufacturing company that only operates 12 hours a day. What happens during those other 12 hours? You are paying for unused compute. What happens to the week where things need to run 15 hours instead of 12? More manual operations.

That’s where serverless becomes a godsend. What if I told you that you’d only pay for what you use? No usage, no charge. In fact, Azure Functions comes with 1 million executions for free. Yes, a million single function executions. You pay pennies per million executions.

Azure Functions is the perfect compute construct for use in IoT development. It allows you to bring in massive compute power only when you need it.

Storing the events

We have our two building blocks in place: IoT Hub to ingest events, Azure Functions to process them. Now the question remains: where do I store them?

I have two choices that I prefer.

Now let’s assume a format of messages that are sent to our IoT Hub. That will serve as a basis for storing our events.

{
  "machine": {
    "temperature": 22.742372309203436,
    "pressure": 1.198498111175075
  },
  "ambient": {
    "temperature": 20.854139449705436,
    "humidity": 25
  },
  "timeCreated": "2022-02-15T16:27:05.7259272Z"
}

CosmosDB

CosmosDB allows you to store a massive amount of data in a geo-distributed way without flinching under load. Besides its different consistency model and multiple APIs, it is a fantastic way to store IoT events and still be able to query them easily.

So let’s assume we receive the previously defined message through an Azure Function.

Let’s create our Function. We’ll be using the CSX model that doesn’t require Visual Studio to deploy. We can copy/paste this code directly in the portal.

#r "Newtonsoft.Json"

using System;
using Newtonsoft.Json.Linq;

public static void Run(string myIoTHubMessage, out object outDocument, TraceWriter log)
{
    dynamic msg = JObject.Parse(myIoTHubMessage);
    outDocument = new { timeCreated = msg.timeCreated, temperature = msg.machine.temperature };
}

Inputs

Then, we need to define our inputs. This is done with the Integrate option below our function.

Azure Functions IoT Hub Inputs

In this section, we define the function parameter that matches our written function. I also create a new event hub connection.

Output

Now we need to define where things are going to go. In our case, I’m setting a Cosmos DB Output.

Azure Functions CosmosDB Output

In this section, I created a new connection to the Cosmos DB account where we save our messages. As you can see, if you check the right checkbox, you don’t need to create any collections or databases manually.

On Automating

As you can see, I’m being all fancy and creating everything through a Portal UI. Everything I’ve done can be replicated with an ARM Template that will allow you to provision your different resources and bind your connection strings together.

If you are interested in seeing a way to deploy this through the command line, please let me know in the comments.

Results

After everything was hooked up together, I sent a few manual events to my IoT Hub and looked into my Cosmos DB account.

Azure Functions CosmosDB Result

Amazing!

Want more?

So what we just saw is a very cheap and scalable way to receive a ton of events from thousands of devices and store them in Cosmos DB. This allows us to create reports in Power BI, consume the data in machine learning algorithms, or stream it through SignalR to your administrative dashboard.

What would you be interested next? Let me know in the comment section.

GO GO Hugo Blog to Azure Storage

Previously, we saw how to serve our static site out of blob storage.

The thing is, you’d still need to generate the actual HTML on a computer with all the tools installed. Well, that’s no fun.

What if we could generate all of this dynamically?

Last time, we had a git repository with our proxies in it. Now’s the time to add the whole root of our Hugo blog project. I would add /public to our ignore file since we’ll be regenerating it anyway.

Make sure that you do not include files with passwords, keys or other valuable data.

I am using Hugo here, but any static site generator that runs in a Windows environment, either as a standalone executable or on one of the supported languages, will work fine.

Minimum requirements before going further

Hugo Executable

If you are going to follow this tutorial using Hugo, please make sure that you have the stand-alone executable version for Windows downloaded. Also, make sure to add it to our git repository in /tools. We should now have /tools/hugo.exe present.

AzCopy

Then, install the latest version of AzCopy. I didn’t find a way to get the newest version other than by the installer.

It installs by default under C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy. Copy all the DLLs and AzCopy.exe under our /tools folder. We’ll need it very soon.

Custom deployment in Azure

When we deploy as we did previously with an Azure hosted git repository, there are default behaviors applied to deployments. Mostly, it’s copy/pasting the content and using it as our application.

But, we can do more. We can customize it.

The first step is installing kuduscript and generate a basic deployment script.

npm install -g kuduscript
# generates a powershell script for custom deployment
kuduscript --basic -t posh -y

The generated deployment script is useless to us. We’ll empty it. However, I wanted you to see its content first. We could forgo kuduscript altogether because we’re just going to write our own script, but it’s important to notice what this script is doing and how to generate it. It allows you to customize your whole deployment process if you ever need to do that kind of thing without a specialized tool like Visual Studio Team Services.

So, the lesson’s over. Let’s empty out that file and paste the following inside.

# Generates our blog to /public
.\tools\hugo.exe -t minimal

# Connection string associated with the blob storage. Can be input manually too.
$blobStorage = $env:AzureWebJobsStorage

# We extract the key below
$accountKey = ""
$array = $blobStorage.Split(';')
foreach($element in $array)
{
    if($element.Contains('AccountKey'))
    {
        $accountKey = $element.Replace("AccountKey=", "")
    }
}

if($accountKey -ne "")
{
    # Deploy to blob storage
    .\tools\AzCopy.exe /Source:.\public /Dest:https://hugoblog2.blob.core.windows.net/content /DestKey:$accountKey /SetContentType /S /Y
}
else
{
    Write-Host "Unable to find Storage Account Key"
}

Let’s send this to the Azure git repository that we set up earlier.

git add .
git commit -m "deploying awesomeness by the bucket"
git push azure master

Resulting output

As soon as you hit Enter on this last command, you should receive output like this from the remote:

remote: Updating branch 'master'.
remote: ....
remote: Updating submodules.
remote: Preparing deployment for commit id 'f3c9edc30c'.
remote: Running custom deployment command...
remote: Running deployment command...
remote: .............
remote: Started building sites ...
remote: ...................................
remote:
remote: Built site for language en:
remote: 0 draft content
remote: 0 future content
remote: 0 expired content
remote: 305 regular pages created
remote: 150 other pages created
remote: 0 non-page files copied
remote: 193 paginator pages created
remote: 0 categories created
remote: 71 tags created
remote: total in 39845 ms
remote: .......................
remote: [2017/11/09 15:16:21] Transfer summary:
remote: -----------------
remote: Total files transferred: 652
remote: Transfer successfully:   652
remote: Transfer skipped:        0
remote: Transfer failed:         0
remote: Elapsed time:            00.00:00:26
remote: Running post deployment command(s)...
remote: Syncing 0 function triggers with payload size 2 bytes successful.
remote: Deployment successful.

With that little script, we managed to move our static site content generation from our local machine to the cloud.

And that’s it. Every time you will git push azure master to this repository, the static site will be automatically generated and reuploaded to Azure Blob Storage.

Is there more we can do? Anything else you would like to see?

Let me know in the comments below!

Hosting a static site for cheap on Azure with Storage and Serverless

So, I recently talked about going Static, but I didn’t talk about how to deploy it.

My favorite host is Azure. Yes, I could probably go with different hosts, but Azure is just so easy to use that I don’t see myself changing anytime soon.

How about I show you how to deploy a static site to Azure while keeping the cost at less than $1 per month*?

*it’s less than $1 if you have a low traffic site. Something like Scott Hanselman’s blog would probably cost more to run.

Needed feature to host a blog

So to host a static blog, we need a few fundamental features:

  • File hosting
  • Custom domain
  • Support for default file resolution (e.g., default.html or index.html)

With this shopping list in mind, let me show you how Static and Serverless can help you achieve crazy low cost.

Azure Functions (or rather, proxies)

Cost (price in USD)

Prices change. Pricing model too. Those are the numbers in USD at the time of writing.

Azure Storage is $0.03 per stored GB, $0.061 per 10,000 write operations, and $0.005 per 10,000 read operations. So if your blog stays under 1 GB and you get 10,000 views per month, we can assume between $0.05 and $0.10 per month in costs (CSS/JS/images).

Azure Functions provides the first 1 million executions for free. So let’s assume 3 million requests; that would bring us to $0.49.

Big total of $0.69 for a blog or static site hosted on a custom domain.

How to deploy

Now that we have our /public content for our Hugo blog from the last post, we need to push it into Azure Storage. One of the core concepts there is the blob (Binary Large Object). Blobs expose content on a URL that looks like this: https://<StorageAccount>.blob.core.windows.net/<Container>/

So once uploaded, our root index.html will be accessible at https://<StorageAccount>.blob.core.windows.net/<Container>/index.html. We will need to do this for ALL files in our current /public directory.

So as the URL shows, we’ll need a storage account as well as the base container. The following script will create the required artifacts and upload the current folder to that URL.

# Create a resource group and a storage account to hold the site
az group create --name staticblog-test -l eastus
az storage account create --name hugoblog2 -g staticblog-test --sku Standard_LRS

# Grab the account key and create a publicly readable container named "content"
$key = az storage account keys list -n hugoblog2 -g staticblog-test --query [0].value -o tsv
az storage container create --name content --public-access container --account-name hugoblog2 --account-key $key

# Upload every file of the generated site into the container
cd <output directory of your static site>
az storage blob upload-batch -s . -d content --account-name hugoblog2 --account-key $key --max-connections 20

So the same file will now be accessible at https://hugoblog2.blob.core.windows.net/content/index.html.

Now that we have the storage taken care of, it’s important to remember that although we can associate a custom domain with a Storage Account, the Storage account does not support default documents, and files must live inside a container rather than at the root of the domain. So even if we mapped it to a custom domain, we would still be serving /content/index.html instead of /index.html.
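For reference, the domain association itself is a one-liner. This is only a sketch: blog.example.com is a placeholder, and it assumes you have already created a CNAME record pointing at hugoblog2.blob.core.windows.net.

# associate a custom domain with the storage account (CNAME must already exist)
az storage account update --name hugoblog2 -g staticblog-test --custom-domain blog.example.com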

So this brings us to Azure Functions.

Provision the function

So we’ll need to create a Function app using the following command.

az functionapp create -n hugoblogapp -g staticblog-test -s hugoblog2 -c eastus2

Then, we create a proxies.json file to configure URL routing to our blob storage. WARNING: This is ugly. I mean it. Right now, it can only match by URL segment (like ASP.NET MVC), and it’s not ideal. The good news is that the Azure Functions team is very receptive to feature requests, so if you need something specific, ask them on Twitter or on GitHub.

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "root": {
      "matchCondition": { "route": "/" },
      "backendUri": "https://hugoblog2.blob.core.windows.net/content/index.html"
    },
    "firstlevel": {
      "matchCondition": { "route": "/{level1}/" },
      "backendUri": "https://hugoblog2.blob.core.windows.net/content/{level1}/index.html"
    },
    "secondlevel": {
      "matchCondition": { "route": "/{level1}/{level2}/" },
      "backendUri": "https://hugoblog2.blob.core.windows.net/content/{level1}/{level2}/index.html"
    },
    "thirdlevel": {
      "matchCondition": { "route": "/{level1}/{level2}/{level3}/" },
      "backendUri": "https://hugoblog2.blob.core.windows.net/content/{level1}/{level2}/{level3}/index.html"
    },
    "fourthlevel": {
      "matchCondition": { "route": "/{level1}/{level2}/{level3}/{level4}/" },
      "backendUri": "https://hugoblog2.blob.core.windows.net/content/{level1}/{level2}/{level3}/{level4}/index.html"
    },
    "css": {
      "matchCondition": { "route": "/css/main.css" },
      "backendUri": "https://hugoblog2.blob.core.windows.net/content/css/main.css"
    },
    "rest": {
      "matchCondition": { "route": "{*restOfPath}" },
      "backendUri": "https://hugoblog2.blob.core.windows.net/content/{restOfPath}"
    }
  }
}

Hooking git

Now, this proxies.json file is not configurable via the command line, so we’ll need to create a git repository, add the file to it, and deploy it using Git.

Note: You will need to set a deployment user before pushing to source control.
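Deployment credentials are account-level, so one way to set them is through the App Service deployment user command, which Function apps use as well. This is only a sketch; the username and password below are placeholders you should replace with your own.

# set (or reset) the account-level deployment credentials used for local git pushes
az webapp deployment user set --user-name mydeployuser --password MyS3cretDeployPassword!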

The script below enables local Git deployment for our Function app, adds the resulting URL as a remote on the current git repository, and pushes the whole repository. This will start a deployment.

# Enable local Git deployment on the Function app and capture the remote URL it returns
$git = az functionapp deployment source config-local-git --name hugoblogapp -g staticblog-test -o tsv

# Add that URL as a remote and push to kick off a deployment
git remote add azure $git
git push azure master

If the file isn’t already in your repository, use Visual Studio Code (or any editor) to create it, then add, commit, and push it.

code proxies.json # copy/paste the content above and change your blob url
git add .
git commit -m "adding proxies"
git push azure master

Now your proxies will automatically be picked up, and your Hugo blog will work.

The same thing can be done for other static site generators with slight modifications to the proxies.

What do we have now?

We have a system that forwards any request to the corresponding Storage URL.

It’s clunky for now, but it’s one way to do it with serverless. Now imagine that you want to add an API, authentication, or message queuing. All of that is possible from that same Function app.

If you are building a SaaS application, it’s an excellent option to have with the least amount of building blocks.

Future changes

Even though it’s the “minimum amount of building blocks”, I still think that’s too many.

The second-highest UserVoice request they have is about supporting static sites outright, without any workarounds.

Once implemented, this blog post will be unnecessary.

What next?

Thinking creatively with Azure allows you to achieve some great cost savings. If you keep this mindset, Azure becomes a set of building blocks on which you can build awesomeness.

Are you using static sites? Was this content useful? Let me know in the comments!

Breaking the shackles of server frameworks with static content

Let me tell you the story of my blog.

It started back in 2009. Like everyone back then, I started on Blogger as it was the easiest way to start a blog and get it hosted for free. Pretty soon, however, I wanted my own domain and to start owning my content.

That brought me down the rabbit hole of blog engines. Most engines back then were SQL-based (SQL Server or MySQL), which increased the cost.

My first blog engine was BlogEngine.NET. After a few painful upgrades, I decided to go with something easier and more straightforward, so I went with MiniBlog by Mads Kristensen. Nothing fancy, but I still had some issues that left me wanting to change yet again.

So this led me to think: which features of those blog engines was I actually using? I sure wasn’t using the multi-user support; I blog alone. As for comments, I had been using Disqus for years because of its centralized spam management. What I was really doing was pushing HTML and having it published.

That’s a concise list of features. So I asked myself: “Why am I using .NET?” Don’t get me wrong; I love .NET (and the new Core). But did I NEED .NET for this? I was just serving HTML/JS/CSS.

If I removed the .NET component what would happen?

Advantages of no server components

First, I don’t have to deal with any security to protect my database or the framework. In my case, .NET is pretty secure and I kept it up to date, but what about a PHP backend? What about Ruby? Are you and your host applying all the patches? It’s unsettling to think about.

Second, it doesn’t matter how I generate my blog. I could be using FoxPro from a Windows XP VM to create the HTML, and it would work. It doesn’t matter if the framework/language is dead because it’s not exposed. Talk about having options.

Third, it doesn’t matter where I host it. I could host it on a single Raspberry PI or a cluster, and it would still work.

Finally, it makes changing platform or language easier. Are you a Node fan today? Use Hexojs just like I did. Rather use Ruby? Jekyll is for you. Want to go with Go instead? Try Hugo.

That’s what this blog post is about. It doesn’t matter what you use; all the engines work with (mostly) the same files.

Getting started

First, I’ll take a copy of my blog. You could use yours if it’s already in markdown, but it’s just going to be an example anyway. Mine uses Hexojs. It doesn’t matter unless you want to keep the template.

What is important are all the markdown files inside the folder /source/_posts/.

I’ll clone my blog like so.

git clone https://github.com/MaximRouiller/blog.decayingcode.com

What we’re going to do is write a script to convert the posts over to two different engines built in very different languages.

Impossible? A week-long job? Not even.

Here are our goals.

I’m not especially attached to my template, so I’m excluding it from the conversion.

Also of note: images haven’t been migrated, but that would be a simple task, as sketched below.
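For Hugo, for example, it would look something like this. Both the source/images path on the Hexo side and the destination are assumptions; run it from inside the hugo-blog folder created later in this post.

# copy Hexo images into Hugo's static folder (served from the site root)
mkdir .\static\images -Force | Out-Null
cp ..\blog.decayingcode.com\source\images\* .\static\images -Recurse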

Wyam Reboot

First, you’ll need to install Wyam’s latest release.

I went with the zip file and extracted the content to c:\tools\wyam.

# I don't want to retype the path every time, so I'm temporarily adding it to my PowerShell Path
$env:Path += ";c:\tools\wyam";

# Let's create our blog
mkdir wyam-blog
cd wyam-blog
wyam.exe new --recipe blog

# Wyam creates an initial markdown file. Let's remove that.
del .\input\posts\*.md

# Copying all the markdown files to the Wyam repository
cp ..\blog.decayingcode.com\source\_posts\*.md .\input\posts

# Wyam doesn't understand Hexo's "date: " attribute, so I change it to "Published: "
$mdFiles = Get-ChildItem .\input\posts *.md -rec
foreach ($file in $mdFiles)
{
    (Get-Content $file.PSPath) |
        Foreach-Object { $_ -replace "date: ", "Published: " } |
        Set-Content $file.PSPath
}

# Generating the blog itself.
wyam.exe --recipe Blog --theme CleanBlog

# Wyam doesn't come with an http server so we need to use something else to serve static files. Here I use the simple Node `http-server` package.
http-server .\output\
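If you don’t already have http-server, it’s a global npm install away (this assumes Node.js and npm are installed):

npm install -g http-server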

Sample URL: http://localhost:8080/posts/fastest-way-to-create-an-azure-service-fabric-cluster-in-minutes
My blog in Wyam

Hugo Reboot

First, you’ll need to install Hugo.

Then select a theme. I went with the Minimal theme.

# Create a new hugo site and navigate to the directory
hugo new site hugo-blog
cd hugo-blog

# we want things to be source controlled so we can add submodules
git init

# selecting my theme
git submodule add https://github.com/calintat/minimal.git themes/minimal
git submodule init
git submodule update

# overwrite the default site configuration
cp .\themes\minimal\exampleSite\config.toml .\config.toml

# create the right directory for posts
mkdir .\content\post

# Copying all the markdown files to the Hugo repository
cp ..\blog.decayingcode.com\source\_posts\*.md .\content\post

# generate the site with the selected theme and serve it
hugo server -t minimal

Sample URL: http://localhost:1313/post/fastest-way-to-create-an-azure-service-fabric-cluster-in-minutes/

My blog in Hugo
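One note: hugo server only previews the site in memory. To produce a deployable output folder (the /public directory that the deployment posts above push to Azure), run the build on its own:

# writes the generated site to ./public (Hugo's default output directory)
hugo -t minimal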

Result

So how long did it take me? Hours? No. In fact, it took me longer to write this blog post relating the events than to do the actual conversion.

Why should I care about your blog?

Well, you shouldn’t. The blog is an excuse to talk about static content.

Throughout our careers, we often get stuck using comfortable patterns and solutions for the problems we encounter. Sometimes, it’s useful to disconnect ourselves from our hammers to stop seeing nails everywhere.

Now. Let’s forget about blogs and take a look at your current project. Does everything need to be dynamic? I’m not just talking HTML here. I’m also talking JSON, XML, etc.

Do you need to have your data regenerated every time?

What could be transformed from a dynamic pipeline, running on a language full of features and execution paths, into a simple IO operation?
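As a purely illustrative sketch (the file names and fields below are made up), “turning a dynamic endpoint into an IO operation” can be as simple as pre-generating the JSON at build time instead of querying a database on every request:

# hypothetical build step: pre-generate a "posts" API as a static JSON file
New-Item -ItemType Directory -Force .\public\api | Out-Null
$posts = Get-ChildItem .\content\post\*.md | ForEach-Object { @{ title = $_.BaseName } }
$posts | ConvertTo-Json | Set-Content .\public\api\posts.json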

Let me know in the comments.

Fastest way to create an Azure Service Fabric Cluster in minutes

Previously, we found a way to easily create a self-signed certificate for Service Fabric. But did you know that all of this can be wrapped up in a single command?

So let’s check it out.

az sf cluster create --resource-group demo -l eastus --name myawesomecluster --vm-user-name maxime --vm-password P@ssw0rd1! --cluster-size 3 --certificate-subject-name mycluster --certificate-password P@ssw0rd1! --certificate-output-folder .\ --vault-name myawesomevault --vault-resource-group demo

That’s a bit long… let me break it down line by line.

az sf cluster create --resource-group demo -l eastus --name myawesomecluster 
--vm-user-name maxime --vm-password P@ssw0rd1!
--cluster-size 3
--certificate-subject-name mycluster
--certificate-password P@ssw0rd1!
--certificate-output-folder .\
--vault-name myawesomevault --vault-resource-group demo

Whoa… that’s a mouthful… Let’s see. First, we’re specifying the resource group, location, and name of our cluster. Do we need to create those beforehand? No, that’s handled for you: if the resource group exists, it will be used; if it’s missing, it will be created. I recommend creating a new one.

VM size? The default at the time of writing is Standard_D2_v2, a general-purpose virtual machine with 2 cores and 7 GiB of memory. Those run at about $240 USD per month, so if you don’t need that much, specify a different SKU with the --vm-sku flag. If you’re looking for something specific, check out my Azure Virtual Machine Size Browser to help you make a choice. There are cheaper choices, and I will come back to that later.
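For example, here’s a sketch of the same create command with the SKU flag appended; Standard_D1_v2 is only an illustration, so pick whatever matches your workload:

az sf cluster create --resource-group demo -l eastus --name myawesomecluster `
    --vm-user-name maxime --vm-password P@ssw0rd1! --cluster-size 3 `
    --certificate-subject-name mycluster --certificate-password P@ssw0rd1! `
    --certificate-output-folder .\ --vault-name myawesomevault --vault-resource-group demo `
    --vm-sku Standard_D1_v2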

Cluster Size? One if you are testing your development scenarios. I’d start with three if you are just testing some actual code. Five at a minimum if you intend to run this in production.

The certificate parameters? All essential to get to our next step. As for the vault parameters, I’ve tried removing them and they’re non-negotiable; the command needs them.

What it will do, however, is enable a flag on your Key Vault that allows Virtual Machines to retrieve certificates stored in it.

Key Vault Advanced Access Policies
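If you ever need to flip that switch yourself on an existing vault, here’s a hedged one-liner using the vault name from above:

# enable the advanced access policy that lets Virtual Machines retrieve certificates
az keyvault update --name myawesomevault --enabled-for-deployment true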

Finally, you’ll most definitely need a username/password. You provide them there. Oh and please pick your own password. It’s just a sample after all.

And that’s pretty much it. What’s left? Importing the certificate so that we can access our Service Fabric Explorer.

Importing the certificate in Windows

# Change the file path to the downloaded PFX. PEM file is also available.
Import-PfxCertificate -FilePath .\demo201710231222.pfx -CertStoreLocation Cert:\CurrentUser\My\

That’s it. Seriously.

One single command line.

Cleaning up

Of course, if you’re like me and don’t have a load big enough to justify multiple VMs to host your Service Fabric cluster, you may want to clean up that cluster once you’re done.

If you didn’t reuse a Resource Group but created a new one, then the cleanup is very easy.

# deletes the resource group called `demo` without confirmation.
az group delete --name demo -y

Further reading

If you want to read more about Service Fabric, I’d start with some selected reading in the Microsoft Docs. Let me know if that helps!

Essential Service Fabric Concepts

I’ve structured the following list of links to give you a progressive knowledge of Service Fabric.

Creating that first app and deploying it to our newly created service