Fast & Reliable Deployments on Episerver with Azure DevOps

By: Chris Magee


Tags: episerver, devops, optimizely

Over my years working with Episerver, a conversation I find myself in again and again is with someone who is having trouble reliably deploying their application.

Common Issues With Episerver Azure Deployments

The trouble can present itself in a number of ways, but it usually comes down to:

  1. Long Deployment Routines
  2. Unexpected Behavior
  3. Slow Deployment Cycles
  4. Interruption in Availability for End Users

Long Deployment Routines

If you've gone through a site launch before, you know there's often a large team sitting on a call as the deployment runs and final adjustments are made to direct traffic to the newly launched website or microsite. That should be a one-time occurrence, though. Deployments to add new features or improve the existing user experience should be as painless and as "set it and forget it" as possible. Long deployment routines almost always come down to some kind of human interaction being required during the routine.

Unexpected Behavior

Let's just start by saying that bugs are unavoidable. They ARE going to happen; what matters is your ability to catch them during testing and to respond quickly to the ones that slip through. If you update your application and see inconsistent behavior, whether in parts of the application you didn't intend to change or in new functionality that behaves differently once it's deployed and live, it can be tough to figure out what went wrong. There's nothing more frustrating than wondering why an issue arises in production when it was tested and worked before the deployment.

Slow Deployment Cycles

A slow development and deployment cycle leaves users wondering when a new feature will be released or when an issue they're seeing will be fixed. You need to be able to deploy often, making the small adjustments and tweaks that improve the application, or you risk your users and stakeholders getting frustrated that seemingly little is being done to address their needs and concerns.

page offline error

Interruption in Availability

If your application is offline during a deployment, your business is impacted. If you sell anything (even if not directly on your website), the sale or purchasing decision can be affected by that interruption: a user who runs into a broken experience while researching a product walks away with a bad impression.

Common Causes

Almost all of the issues above come down to a common set of causes:

  1. Source Control Strategy
  2. Code Quality / Confidence
  3. Inaccurate / Incomplete Testing
  4. Deployment Strategy / Process

If you're having trouble with any of these, feel free to reach out via email. If you'd rather take on the challenge yourself, here's what you need to know. Let's take a look at each of these causes briefly and talk about how you can avoid them.

pull request screenshot

Source Control Strategy

Let's start at the very core: the code. Before we even talk about the actual quality of the code, how and where you change it makes a difference. An application can have any number of developers and stakeholders working on a variety of features and adjustments that all need to fit together when they're finished.

Let's envision for a moment a small application team and some common scenarios:

  • An application's stakeholders have decided to build a large new feature and, after working with a small development team (2-3 developers), have found it will take two months to complete.
  • Everything can't be put on hold while the feature is developed, so during this time, the stakeholders also want some updates to existing features and the user experience.
  • The development team needs to work together on the new feature, contribute to the smaller changes, and be prepared to ship those smaller changes to the world before the bigger feature is complete.
  • During this time, stakeholders want to review progress on the larger feature alongside testing the smaller changes.

The development team needs to properly separate the code for the larger feature(s) from the smaller adjustments, and needs to be able to create a deployment that combines one or many of those pieces without significant effort. If that isn't done, unintended behavior can pop up from code conflicts, or unfinished development can accidentally be included in a deployment before it's finalized.

I'll dive into the source control strategies my team has used on recent projects in another blog post in the near future.

Code Quality / Confidence

Looking again at your application's code: aside from the glaringly obvious bugs that should be caught during testing, there's always the possibility of small bugs, quirks, or conflicts, even in code that was intended to be released. A lack of confidence in the code being deployed significantly limits your ability to respond rapidly to issues that arise after a deployment. This is only compounded on larger teams, where developers work independently on tasks and all of that code is combined into a single application. So how do you combat this? There are a number of ways, but the best option I've found, by far, is code reviews.

code review screenshot

Code Reviews

Reviewing code can significantly reduce issues. It's even more effective when the reviewer isn't the person who wrote the code, since a fresh pair of eyes or a different perspective can catch things the author can't. More effective still is review by someone with broader knowledge of the application, who may be able to provide insight into how two seemingly independent areas could be affected by a change.

Reviewing code as a team, as long as the team understands it's for the betterment of everyone and isn't meant to call anyone out on their development choices, spreads knowledge of the application and serves as an amazing opportunity to learn from each other's code, and maybe to find a new pattern or feature to use later down the line.

These should be short, informal sessions where the primary focus is to review for quality control purposes before a deployment.

What not to do

It is possible to go overboard with code reviews. "Hindsight is 20/20" applies to code: everything can be optimized or refactored, but it doesn't always need to be, especially when working on a timeline. Picking on every development decision, or requiring mandatory reviews for trivial changes, can create the appearance of a lack of trust and confidence in the team, which is far from the desired effect. If you feel the need for that level of oversight of someone's work output, you should probably address it another way.

I'll dive into code reviews and how they can fit nicely into your source control process in another blog post.

Inaccurate/Incomplete Testing

Let me preface this by saying I'm not talking about testing inaccuracies caused by the person doing the testing. That's a 'human' mistake anyone can make, and one that's resolved with more experience and training; even an amazing tester can miss things. I'm talking about testing inaccuracies caused by:

  1. Regression
  2. Insufficient test cases
  3. Inaccurate testing environments

Regression & Insufficient Test Cases

Let's pretend you're launching an updated search page for Google. Prior to your code changes, it wasn't displaying pictures of cats when you typed "kitty". So you made some changes and verified that it is in fact displaying pictures of cats when you search "kitty". Purrfect, test passed! Not quite. Did you check that it didn't stop returning pictures of cats when you search "cat", which worked before? Did you check whether it returns cats when you search "categories", just because it contains "cat"?

You can't just test the expected outcome and be confident everything works as intended. I always say "try to break it" rather than "test it". I won't dive into how to create proper test cases; frankly, I think I'm unqualified, and there are plenty of articles a Google search away that would do the topic better justice than I ever could.

Inaccurate Testing Environments

Even if you have perfect test cases, it's possible that you aren't testing accurately. This normally comes down to where you're testing. Typical web application infrastructure includes multiple environments. Using Episerver DXC as an example:

You have three DXC-provided environments hosted in Azure, plus each developer's local machine. These environments can differ in both data and infrastructure. If you're testing against an outdated or incomplete set of data, you're not accurately testing what your code will do in production. If the infrastructure differs, you may be testing against an environment without a CDN, or with a single app instance, whereas your production environment sits behind a CDN and is likely scaled out to handle more traffic. These environment-related differences can cause inconsistencies in memory, caching, or performance, giving you inaccurate test results.

I would highly recommend determining exactly what infrastructure you have in production, replicating it as closely as possible in your lower environments, and ensuring your team knows how portions of their code may execute differently depending on the infrastructure in place.

Deployment Strategy / Process

Alright, now let's pretend you aren't affected by any of the above: you know exactly what code is being deployed, you have confidence it's going to work, and you've tested as accurately as possible. Is something still wrong? Then it likely comes down to your deployment strategy.

How your deployments are executed is just as important as the quality of the code itself, and problems here normally come down to the human factor. If any step is performed by a person, aside from clicking a button to start or approve a deployment, you're asking for something to go wrong. It's not just that steps can be forgotten or done incorrectly; the tools used during the deployment can fail too. Copying files? Maybe one doesn't transfer, or another isn't deleted when it should be. It's important to standardize and automate this process to eliminate those possibilities.

I highly recommend Azure DevOps. It provides build automation and release management, so code packages are created in the same manner each time and deployed in the same manner each time. There are a number of alternatives to Azure DevOps, but it's by far the easiest solution I've used in a long time. In the past I've used TeamCity + Octopus and a few other combinations, but none really match up. In my opinion, Azure DevOps is also the easiest to configure and takes nearly zero effort to maintain.

When I think back to the amount of time we spent just a few years ago keeping our TeamCity and Octopus services functional, I can't believe how much time we've saved. Because we hosted TeamCity and Octopus on-premises, we had to manage Windows updates, TeamCity and Octopus software updates, and licenses for both products; install specific .NET Framework and NodeJS versions on the machines ourselves; and update the build and deployment agents on the target machines. The list goes on and on.

Let's look at a plain Azure DevOps build and release pipeline for a typical .NET Framework web application like Episerver, deploying to Azure (or Episerver DXC):

  1. Configuration
  2. Automated Builds
  3. Release Management & Deployment

Configuration

Connecting to source control is as simple as clicking your source control provider:

build pipeline source configuration

In my example (this blog), I am building from a "release" branch in GitHub. Depending on your source control strategy, choose the appropriate branch to build from.
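
As an aside, everything the classic editor configures here can also be expressed as a YAML pipeline checked into your repository. A minimal sketch, assuming your deployable branch is named "release" as in my example:

```yaml
# azure-pipelines.yml -- build whenever the release branch changes
trigger:
  branches:
    include:
      - release

pool:
  vmImage: 'windows-latest'  # hosted Windows agent, needed for .NET Framework builds
```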

Automated Builds

Create a new build pipeline and use one of the built-in ASP.NET templates:

devops build template

You'll see the following setup created for you:

devops default template build pipeline

This will build your entire solution and generate an artifact from the build. You can add additional steps if you wish to identically model your local build process, or perform build steps specific to your deployable code package. One common example is adding a front-end build tool to the process. For our work that's often Gulp or Webpack, both of which are supported in Azure DevOps just by adding a new step:

add webpack build step

add gulp build step

Using these in combination with yarn or npm (also configurable steps), you can execute your prepared scripts just as your developers would locally.
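
In YAML form, those front-end steps might look like the sketch below; the gulpfile location and task name are assumptions that depend on your project layout:

```yaml
steps:
  # Restore front-end dependencies (swap in Yarn if that's your package manager)
  - task: Npm@1
    displayName: 'npm install'
    inputs:
      command: 'install'
      workingDir: '$(Build.SourcesDirectory)'

  # Run the prepared Gulp build, the same script a developer runs locally
  - task: gulp@1
    displayName: 'gulp build'
    inputs:
      gulpFile: 'gulpfile.js'  # assumed path
      targets: 'build'         # assumed task name
```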

Additionally, you can ensure all unit tests pass before allowing a build to continue, using the built-in testing capabilities or even third-party services for code validation (code coverage, etc.).
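
As a sketch, the built-in test task looks roughly like this; the assembly pattern is an assumption based on a typical .NET Framework solution where test project names end in "Tests", and the variables are the ones the ASP.NET template defines:

```yaml
steps:
  # Run unit tests; if any fail, the build fails and no artifact reaches the release pipeline
  - task: VSTest@2
    displayName: 'Run unit tests'
    inputs:
      testAssemblyVer2: |
        **\*Tests.dll
        !**\obj\**
      platform: '$(BuildPlatform)'
      configuration: '$(BuildConfiguration)'
```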

Once you run your build, you'll see that each build has an associated artifact that contains a zip file ready to be deployed in your release pipeline.

artifact download animated

This zip file contains the directory that will be extracted when you deploy your code package.
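
For reference, the core of what the ASP.NET template sets up (restore packages, build a web deploy package, publish the artifact) translates to roughly the following YAML sketch; the variable values mirror the template's defaults:

```yaml
variables:
  solution: '**/*.sln'
  BuildPlatform: 'Any CPU'
  BuildConfiguration: 'Release'

steps:
  # Restore NuGet packages for the whole solution
  - task: NuGetCommand@2
    inputs:
      restoreSolution: '$(solution)'

  # Build the solution and produce a zipped web deploy package
  - task: VSBuild@1
    inputs:
      solution: '$(solution)'
      msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:PackageLocation="$(Build.ArtifactStagingDirectory)"'
      platform: '$(BuildPlatform)'
      configuration: '$(BuildConfiguration)'

  # Publish the zip as a build artifact named "drop" for the release pipeline to pick up
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
```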

Release Management

Now that we have an artifact containing our code, we have a consistent code package ready for deployment. Similar to build pipelines, create a new release pipeline and use a template to get started. As an example, we'll deploy our code package to an Azure Web App.

azure release pipeline template

You can configure multiple stages for your deployment if you control multiple environments and wish to use Azure DevOps as a full deployment suite for promoting code packages.

release pipeline stages animated
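
The screenshots here use the classic release editor, but the same idea can be codified as deployment stages in a multi-stage YAML pipeline. A sketch of a single stage, where the service connection name, environment, and app name are placeholders you'd replace with your own:

```yaml
stages:
  - stage: DeployIntegration
    jobs:
      - deployment: DeployWebApp
        environment: 'integration'  # approvals and checks are configured on the environment
        pool:
          vmImage: 'windows-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                # Push the zipped package from the build artifact to the Azure Web App
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'my-azure-service-connection'  # placeholder
                    appType: 'webApp'
                    appName: 'my-epi-integration-app'                 # placeholder
                    package: '$(Pipeline.Workspace)/drop/*.zip'
```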

If you're on Episerver DXC, you may have only one deployment stage: deploying to your Episerver DXC Integration environment, which is meant to be used as a continuous integration environment.

release pipeline dxc integration

You can configure your continuous deployments to run each time you build/commit to a specific branch:

release pipeline ci trigger

You can also schedule deployments and configure approvals that are required before deployments complete, including hooks into other services like Slack or Microsoft Teams so approvals can happen through your team's communication platform.

If you found this helpful or found issues, please leave a comment below or get in contact with me via the social and email links above. Thanks for reading!