
Classic pipelines are out, YAML pipelines are in. Azure DevOps (ADO) has adopted a pipelines-as-code philosophy that will be their build and release strategy for the foreseeable future. There’s a lot to appreciate about the new pipelines, though they are a bit of a mental adjustment and the transition is anything but automatic—I took a few weeks converting all of ours, though there were some special obstacles in our case that hopefully you won’t have to deal with.

The documentation on YAML pipelines is pretty thorough, but like most documentation it’s got a few blind spots. The goal of this post is to be the heads-up I wish I’d had before diving in. I’ll cover the main concepts and implementation details I’ve learned along the way.

YAML Pipelines: an overview

A classic pipeline is a build or release created in the Azure DevOps web interface.

A YAML pipeline is a text file committed to a Git repository. It supports most of the same features as a classic pipeline plus a few more. There are three “missing” features: deployment group jobs, task groups, and gates, and the first two have suitable—arguably better—replacements. As for gates, you may well find that the Environments page offers what you need.

YAML pipelines can be used for both builds (creation of artifacts) and releases (deployment of artifacts). There are rumors that classic pipelines will be deprecated soon—there are also rumors that ADO will be deprecated entirely in favor of GitHub, but I’m trying not to think about those. New features will be coming to YAML pipelines first, and there are a few YAML pipeline features I don’t ever expect to see in classic pipelines.

YAML is a configuration format that I would describe as “JSON but like, more Python-ish.” Here’s a crash course:

  • No quotes required for keys or basic string values
  • Indentation matters
  • Line order matters (for arrays, and therefore for the order your steps run in)
  • 99% of what you write in a pipeline will either be an object or an array of objects
    • Object: a set of unique keys at the same indentation level
    • Array of objects: each object begins with a dash, and all keys until the next dash are part of the same object.
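To make that concrete, here’s a tiny steps array holding two objects (the echoed strings are just placeholders):

steps:                     # a key whose value is an array of objects
- script: echo "build"     # a dash starts a new object
  displayName: Build       # no dash, so still part of the same object
- script: echo "test"      # a new dash, a new object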

And that’s it, more or less. In no time at all you’ll be looking at YAML and seeing JSON. Be sure to install the Azure Pipelines VS Code extension to help with syntax (and follow the instructions in the README so it will know which files to kick in for).

Why YAML pipelines?

These are all the advantages of YAML pipelines that I give a hoot about:

  • Versioning. YAML pipelines are code files in your version control so you get version history automatically.
  • Shareability. Code is text. If you have Good Text and want to share it, you can do so pretty much anywhere. Slack, email, Stack Overflow, Facebook, pastebin, a link to a file in your repository. Someone on the next team over wants to copy your build pipeline? Ctrl + C.
  • Atomic commits. If you commit your pipelines and your code to the same repository, you can update both at the same time. Any code change that necessitates a pipeline change can happen in the very same commit.
  • Templates. Task groups in ADO are great, but not as great as a magic text file that can (conditionally) render (parameterized) variables, tasks, stages, and more into any pipeline file you reference it from.
  • Top-level parameters. You can declare strongly-typed parameters in a pipeline file and they’ll show up in the “Run Pipeline” pane. Parameters can be used in powerful ways, like omitting steps from a pipeline run or using a text input to choose the target environment for a release. More on this later.
  • Find and replace. Changing something that affects many different tasks, release stages, or pipelines is a huge pain in the classic UI. With YAML pipelines, it’s often as easy as using the “Find and Replace” feature of your code editor.
  • Output variables. Classic pipelines sort of have output variables, but they’re confusing to use and don’t persist across pipeline stages. YAML pipelines let you output a variable from one task and use it in a different task, job, or stage. (Note that there are, like, 9 different syntaxes for output variables depending on whether you’re using them in a build or deployment job. A minimal example follows this list.)
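Here’s a sketch of the simplest case: setting an output variable in one step of a build job and reading it later in the same job (names are placeholders; cross-job and cross-stage references use the dependencies syntax instead):

steps:
- bash: echo "##vso[task.setvariable variable=myVar;isOutput=true]hello"
  name: producer                     # the step name becomes part of the reference
- bash: echo "$(producer.myVar)"     # reads the output variable set above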

Copy YAML steps from the classic UI

Every task in a classic pipeline has a button in the upper right-hand corner that says “View YAML.” Click this and it will spit out the YAML for that task. You can copy this into your pipeline file, make sure any variables are in place, and be on your way.

Unfortunately, there’s no way to convert an entire pipeline from classic to YAML; you have to do it one task at a time.

No more Releases page, no more Deployment Groups

All YAML pipelines live in the Pipelines page in ADO. They do not and cannot exist in the Releases page. The Releases page is useless to you if all your releases are YAML.

You can set up a nested folder structure to organize your pipelines. Use a little foresight here; if one of your pipelines depends on the results of a different one (artifacts, for example), any change in the folder hierarchy will require a change to your YAML.

On to Deployment Groups. YAML can’t use your deployment groups, it doesn’t care about your deployment groups, deployment groups are dead to you. Instead you’ll use the Environments page. You can set up a virtual or physical machine to run a YAML pipeline by creating an Environment in ADO, adding a resource to that Environment, copying a PowerShell script, running it in an administrator PowerShell instance on your target machine, and answering a couple of prompts. (It will ask for an install location, the username and password the pipeline agent should use, and what tags to put on the resource. Tags are configurable later through the web interface but everything else is not, so take an extra moment to get it right the first time through.)

Templates are your bread and butter

Start using YAML templates right away. Templates, the functional replacements for Task Groups, are text files that define reusable YAML for different pipelines. The type of template I use most frequently has two top-level declarations: parameters and steps. Parameters are variables that are interpolated at compile time, well before the pipeline runs or even knows what it’s doing; templates are likewise merged into your pipeline at compile time and don’t even know they’re templates after that. And steps are tasks that can be included at any point in any pipeline.

The syntax of YAML can make long, complex pipelines very unwieldy and hard to read. I use templates to break them down into manageable chunks. I also use them to create standard stages, jobs, and groups so that deploying to a new environment only requires a few lines of code.

You can use template parameters to customize and/or skip any part of a template.
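As a minimal sketch (the file name, parameter, and step are all placeholders), a steps template and a pipeline that consumes it might look like this:

# deploy-steps.yml (the template)
parameters:
- name: environmentName
  type: string
steps:
- script: echo "Deploying to ${{ parameters.environmentName }}"

# azure-pipelines.yml (the consuming pipeline)
steps:
- template: deploy-steps.yml
  parameters:
    environmentName: staging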

Three types of variable interpolation

Keep this chart handy; it’s great.

In order from most frequently to least frequently used:

  • Use $(macros) to refer to variables from variable groups, the “Variables” menu in Run Pipeline, or the variables declaration in YAML.
  • Use ${{ template expressions }} to refer to parameters (${{ parameters.parameterName }}) or statically defined variables in your YAML (${{ variables.varName }}). Also use these to add conditional logic to a pipeline or template.
  • Use $[ runtime expressions ] for values that are only available at runtime, in spots where a macro won’t work for some reason.

Remember that template and runtime expressions support functions, should you need them.
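Here’s a minimal sketch showing all three side by side (the parameter and variable names are placeholders):

parameters:
- name: appName
  type: string
  default: demo

variables:
- name: isMain
  value: $[ eq(variables['Build.SourceBranch'], 'refs/heads/main') ]   # runtime expression

steps:
- script: echo "App is ${{ parameters.appName }}"   # template expression, resolved at compile time
- script: echo "On main? $(isMain)"                 # macro, expanded just before the task runs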

Top-level parameters

If you use a parameters declaration in your top-level pipeline file, as opposed to a template, the parameters will appear in the Run Pipeline pane. The pipeline will refuse to run unless all parameters either have a default value or are populated by the user.

Since parameters are interpolated at compile time and can be keys or values in your pipeline definition, this has some fun use cases. I’ve created a couple of pipelines that don’t have a predefined runtime environment. When you run the pipeline, you enter the name of an environment you want it to run on. This is good for ad-hoc stuff like reading out a file or fetching app logs. (Use with caution, obviously.)

I also use top-level parameters to let the user of the pipeline skip certain tasks on a given pipeline run. Of course, you always have the option to skip entire stages through the Stages menu in Run Pipeline, but there are cases where a bit more granularity doesn’t hurt.
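A sketch combining both ideas, with hypothetical script names, might look like this:

parameters:
- name: targetEnvironment
  type: string
  default: staging
- name: runSmokeTests
  type: boolean
  default: true

steps:
- script: ./deploy.sh ${{ parameters.targetEnvironment }}
- ${{ if eq(parameters.runSmokeTests, true) }}:    # step is omitted entirely when false
  - script: ./smoke-tests.sh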

Always declare your triggers and artifacts

If you expect that a pipeline will always be triggered manually, start off the YAML file like this:

trigger: none

Otherwise it will run every time a new commit shows up in the repository. It’s an odd default if you ask me.
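And when you do want CI builds, it’s worth declaring the branches explicitly rather than relying on the default; the long-form trigger syntax looks like this:

trigger:
  branches:
    include:
    - main
    - releases/*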

Also, if a pipeline job doesn’t require any checked out code or artifacts, start it off like this:

steps:
- checkout: none
- download: none

Otherwise it’ll check out your repository and/or download all artifacts from the pipeline it depends on. That’s a waste of time.

Never trust the branch selector in Run Pipeline

When you click Run Pipeline, you’ll see a branch selector. This lets you select the branch your YAML pipeline will be pulled from. On a pipeline with checkout tasks, it also selects the branch your code will be checked out from.

Importantly, this branch selector does not select the branch your pipeline artifacts will be taken from! This is counterintuitive. You may expect that if you select the main branch, you’ll get artifacts from the last run of your build pipeline on the main branch. That isn’t necessarily true. By default, ADO uses artifacts from the last successful pipeline run, even if that pipeline run checked out a different branch.

I’ve made a habit of clicking the Resources menu and manually choosing the pipeline artifacts I want for every manual pipeline run. I suggest you do the same.
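You can also bake a saner default into the YAML itself: a pipeline resource declaration can pin the branch it picks artifact runs from. A sketch with placeholder names:

resources:
  pipelines:
  - pipeline: buildArtifacts    # alias used by download steps
    source: MyApp-Build         # the pipeline that produces the artifacts
    branch: main                # pick artifacts from runs of this branch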

Variable groups are a must

Variable Groups are named sets of variables that live in the Library page in ADO. They can be manually populated, cloned from another variable group, or pointed to an Azure Key Vault.

The most important thing about variable groups is that they’re easy to copy: open the group, click Clone, change what needs to be changed, and you’re done. This makes new deployments relatively painless. You can’t do that with runtime variables (the Variables screen in Edit Pipeline).

You can attach multiple variable groups to a YAML pipeline. If more than one declares the same variable, the last one listed wins. So you can do stuff like this:

variables:
- group: Basic Defaults
- group: Production Defaults
- group: MegaCorp Specific Variables

MegaCorp Specific Variables beats Production Defaults beats Basic Defaults. Awesome.

Don’t overestimate the UseDotNet task

UseDotNet is a built-in task that installs a specific version of the .NET SDK or runtime on the agent running the pipeline.

Only use this to install .NET for use by the pipeline, in a later task. If you’re deploying an application that uses .NET, this task isn’t meant for that. Instead, you’ll need to do one of two things:

  • Do a self-contained deployment. This is the easiest way to go (see the sketch after this list).
  • Bundle a .NET installer with your app and use the release pipeline to run it.
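For the self-contained route, the publish step might look something like this (the runtime identifier and project path are placeholders):

steps:
- task: DotNetCoreCLI@2
  displayName: Publish self-contained
  inputs:
    command: publish
    publishWebProjects: false   # honor the projects glob below
    projects: '**/MyApp.csproj'
    arguments: '--configuration Release --runtime win-x64 --self-contained true'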

If neither of these are possible, you may be lucky enough that you can mess with the inputs of the UseDotNet task and get a global install out of it. Keep your expectations low.

Practice one-off deployments ahead of time

With classic pipelines, it was easy to skip a step for a particular deployment. You would edit the release, click the task, expand the command options, uncheck “Enabled,” save the release, and run it.

With YAML pipelines it’s almost as easy (but a little more intimidating). The process goes like this:

  1. Create a branch in your repo.
  2. Find the task you want to disable.
  3. Add enabled: false to the task definition (an example follows this list).
  4. Commit the change.
  5. Push the branch to ADO.
  6. Select that branch when you run the pipeline.
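The change itself is a single line; for example, on an arbitrary PowerShell task:

steps:
- task: PowerShell@2
  enabled: false          # skipped until you delete this line
  inputs:
    targetType: inline
    script: Write-Host "pretend this is a risky step"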

If the task in question doesn’t live in a template file, you can even do this from the Edit Pipeline screen in ADO, which will give you the option to commit the change to a new branch. But life is better with templates, so this often won’t be an option.