Pipeline Expressions and Functions Deep Dive

An advanced guide to Azure DevOps pipeline expressions and functions, covering string manipulation, comparison operators, type casting, counter(), format(), each(), and the dependencies context.

Overview

Azure DevOps pipeline expressions are the control plane of your CI/CD logic. They let you inject dynamic values, build conditional workflows, manipulate strings, iterate over collections in templates, and pass data between stages. If you have been writing YAML pipelines for any length of time, you have likely hit the wall where simple variable substitution is not enough and you need real programmatic control. This article is the deep reference I wish I had when I started building non-trivial pipelines -- covering every expression context, every built-in function, the critical distinction between compile-time and runtime evaluation, and the patterns that actually work in production.

Prerequisites

  • An Azure DevOps organization with at least one project
  • Working familiarity with YAML pipeline syntax (triggers, stages, jobs, steps)
  • Experience with pipeline variables and variable groups
  • Basic understanding of pipeline templates (we will use each() and ${{ }} syntax)
  • Access to self-hosted or Microsoft-hosted agents

Expression Context Objects

Every expression runs within a context -- a set of objects that Azure DevOps injects at different points in the pipeline lifecycle. Understanding which contexts are available and when they become populated is the foundation of everything else.

variables

The variables context holds all variables defined at the pipeline, stage, or job level, plus anything set by tasks via ##vso[task.setvariable].

variables:
  buildConfiguration: 'Release'
  majorVersion: '3'
  # Runtime expressions are valid in variable definitions, not in script bodies
  configAtRuntime: $[variables.buildConfiguration]

steps:
  - script: echo ${{ variables.buildConfiguration }}
    displayName: 'Compile-time access'

  - script: echo $(buildConfiguration)
    displayName: 'Macro syntax access'

  - script: echo $(configAtRuntime)
    displayName: 'Runtime expression access'

All three of those print the same value, but they are evaluated at different times. Note that the runtime expression ($[ ]) lives in a variable definition rather than the script body, because $[ ] is not valid inside step inputs or scripts. More on that distinction later.

pipeline

The pipeline context gives you metadata about the pipeline run itself. You rarely see this used, but it contains pipeline.workspace -- the root working directory on the agent for the run, under which the sources, artifacts, and binaries folders are created.

steps:
  - script: echo "Workspace is at $(Pipeline.Workspace)"

pool

The pool context provides agent pool information. It is mainly useful in template expressions where you want to dynamically select an agent pool based on parameters.

parameters:
  - name: environment
    type: string
    default: 'dev'

pool:
  ${{ if eq(parameters.environment, 'prod') }}:
    vmImage: 'ubuntu-22.04'
  ${{ else }}:
    vmImage: 'ubuntu-latest'

environment

When you use deployment jobs, the environment context gives you the name and resource information for the target environment. This is separate from the environment keyword -- it is the runtime context object.
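
As a sketch (the environment name 'staging' and the job name are illustrative), a deployment job can read the populated context through the predefined Environment.* variables:

```yaml
jobs:
  - deployment: DeployWeb
    environment: 'staging'        # targets an environment named 'staging'
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      runOnce:
        deploy:
          steps:
            # Environment.Name is populated from the runtime environment context
            - script: echo "Deploying to environment $(Environment.Name)"
```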

strategy

In matrix or parallel strategies, the strategy context gives you the current matrix leg name and values.

strategy:
  matrix:
    linux:
      imageName: 'ubuntu-latest'
    windows:
      imageName: 'windows-latest'

steps:
  - script: echo "Running on $(imageName)"

dependencies

The dependencies context is one of the most powerful and most misunderstood. It lets you read outputs and results from prior jobs or stages. I will cover this in detail in its own section below.

String Functions

Azure DevOps provides a solid set of string manipulation functions. These work in both compile-time and runtime expressions.

contains(string, substring)

Returns True if the string contains the substring. Case-insensitive.

variables:
  branchName: $(Build.SourceBranch)

steps:
  - script: echo "This is a feature branch"
    condition: contains(variables['Build.SourceBranch'], 'feature/')

startsWith(string, prefix) and endsWith(string, suffix)

steps:
  - script: echo "Release branch detected"
    condition: startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')

  - script: echo "Hotfix branch detected"
    condition: endsWith(variables['Build.SourceBranch'], '-hotfix')

format(pattern, arg0, arg1, ...)

String interpolation using {0}, {1} placeholders. This is enormously useful for building dynamic resource names, connection strings, and URLs.

variables:
  environment: 'staging'
  region: 'eastus2'
  resourceGroup: ${{ format('rg-myapp-{0}-{1}', variables.environment, variables.region) }}

steps:
  - script: echo "Deploying to resource group $(resourceGroup)"

join(separator, collection)

Joins an array into a single string. Works with parameters that are arrays.

parameters:
  - name: services
    type: object
    default:
      - api
      - web
      - worker

steps:
  - script: echo "Deploying services - ${{ join(', ', parameters.services) }}"

Output:

Deploying services - api, web, worker

replace(string, old, new)

variables:
  branchName: ${{ replace(variables['Build.SourceBranch'], '/', '-') }}

steps:
  - script: echo "Sanitized branch - $(branchName)"

This is critical for building Docker tags or artifact names from branch names that contain slashes.

split(string, delimiter)

Splits a string into an array. Primarily useful inside template expressions where you can iterate over the result.

variables:
  tagList: 'api,web,worker'

steps:
  - ${{ each svc in split(variables.tagList, ',') }}:
    - script: echo "Processing ${{ svc }}"

upper(string), lower(string), trim(string)

variables:
  env: '  Production  '
  normalizedEnv: ${{ lower(trim(variables.env)) }}

steps:
  - script: echo "Environment is $(normalizedEnv)"
    # Output: Environment is production

Comparison and Logical Functions

These are the backbone of conditional logic in pipelines.

eq, ne, gt, lt, ge, le

Equality and comparison operators. They work on strings and numbers.

condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
condition: ne(variables['Build.Reason'], 'PullRequest')

For numeric comparisons:

variables:
  retryCount: 3

steps:
  - script: echo "Too many retries"
    condition: gt(variables.retryCount, 2)

and, or, not

Logical operators that combine conditions. These nest just like function calls.

condition: and(
  eq(variables['Build.SourceBranch'], 'refs/heads/main'),
  ne(variables['Build.Reason'], 'Schedule'),
  succeeded()
)

condition: or(
  eq(variables['Build.SourceBranch'], 'refs/heads/main'),
  startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')
)

condition: not(contains(variables['Build.SourceBranch'], 'experimental'))

in and notIn

Test membership in a set of values. These are surprisingly useful for environment-gating logic.

condition: in(variables['System.StageName'], 'Production', 'Staging', 'UAT')
condition: notIn(variables['Build.Reason'], 'Schedule', 'ResourceTrigger')

Conversion Functions

convertToJson(value)

Converts an object to a JSON string. Essential when you need to pass complex parameter objects to scripts.

parameters:
  - name: deployConfig
    type: object
    default:
      replicas: 3
      memory: '512Mi'
      cpu: '250m'

steps:
  - script: |
      echo '${{ convertToJson(parameters.deployConfig) }}' > config.json
      cat config.json
    displayName: 'Write deployment config'

Output:

{
  "replicas": 3,
  "memory": "512Mi",
  "cpu": "250m"
}

coalesce(val1, val2, ...)

Returns the first non-empty, non-null value. This is your default-value function and it saves you from writing nested if expressions.

variables:
  customTag: ''
  buildTag: $[coalesce(variables.customTag, variables['Build.BuildNumber'])]

steps:
  - script: echo "Using tag $(buildTag)"

If customTag is empty, it falls back to the build number. Chain as many fallbacks as you need:

variables:
  tag: ${{ coalesce(variables.releaseTag, variables.buildTag, 'latest') }}

Type Casting and Truthiness Rules

This trips people up constantly. Azure DevOps expressions have specific type coercion rules that differ from what you might expect coming from JavaScript or Python.

Truthiness rules:

  • '' (empty string) -- False
  • null -- False
  • 0 (the number zero) -- False
  • False (the boolean) -- False
  • Any non-zero number -- True
  • Any non-empty string -- True, including '0' and 'false'

The fact that every non-empty string is truthy catches people. If you set a variable to the string 'false' and check it directly in a condition, the step still runs, because only the empty string converts to Boolean False.

variables:
  featureFlag: 'false'

steps:
  # This step WILL run because the non-empty string 'false' is truthy
  - script: echo "Feature enabled"
    condition: and(succeeded(), variables.featureFlag)

  # Use explicit comparison instead
  - script: echo "Feature enabled"
    condition: eq(variables.featureFlag, 'true')

Type casting in comparisons:

When you compare values with eq(), the right-hand value is converted to the type of the left-hand value. If the conversion fails, the comparison returns false without raising an error. String comparisons are ordinal and case-insensitive.

# This works -- the number 3 is converted to the string '3'
condition: eq(variables.retryCount, 3)

# This also works -- the string '3' is converted to the number 3
condition: eq(3, variables.retryCount)
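
For example (a sketch; the variable is illustrative), a comparison whose conversion fails simply evaluates to false rather than failing the run:

```yaml
variables:
  someText: 'abc'

steps:
  # The right side converts to the left side's type. 'abc' cannot become a
  # number, so eq returns False and the step is silently skipped.
  - script: echo "never runs"
    condition: eq(42, variables.someText)
```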

The counter() Function

counter() is how you build auto-incrementing version numbers that persist across pipeline runs. It takes a prefix and a seed value, and increments automatically each time the pipeline runs. The counter resets when the prefix changes.

variables:
  majorMinor: '2.5'
  patch: $[counter(variables.majorMinor, 0)]
  fullVersion: '$(majorMinor).$(patch)'

steps:
  - script: echo "Building version $(fullVersion)"

First run: 2.5.0. Second run: 2.5.1. Third run: 2.5.2.

When you change majorMinor to '2.6', the counter resets: 2.6.0.

The counter state is stored per-pipeline in Azure DevOps. It survives agent restarts, pool changes, and everything else. The only way to reset it is to change the prefix.

Important: counter() only works in runtime expressions ($[ ]), not compile-time (${{ }}). It needs the pipeline run context to persist state.

# WRONG - will not work
variables:
  patch: ${{ counter(variables.majorMinor, 0) }}

# CORRECT
variables:
  patch: $[counter(variables.majorMinor, 0)]

The each() Expression for Template Iteration

each() is a compile-time expression that iterates over arrays and objects in template parameters. It only works in ${{ }} expressions, because the iteration has to be resolved before the pipeline is submitted to the agent.

Iterating over arrays

# template: deploy-services.yml
parameters:
  - name: services
    type: object
    default: []

steps:
  - ${{ each service in parameters.services }}:
    - task: AzureWebApp@1
      displayName: 'Deploy ${{ service }}'
      inputs:
        appName: 'app-${{ service }}-prod'
        package: '$(Pipeline.Workspace)/drop/${{ service }}.zip'

Calling the template:

stages:
  - stage: Deploy
    jobs:
      - job: DeployServices
        steps:
          - template: deploy-services.yml
            parameters:
              services:
                - api
                - web
                - worker

Iterating over objects (key-value pairs)

parameters:
  - name: appSettings
    type: object
    default:
      ASPNETCORE_ENVIRONMENT: 'Production'
      ConnectionStrings__Default: '$(dbConnection)'
      FeatureFlags__NewUI: 'true'

steps:
  - ${{ each setting in parameters.appSettings }}:
    - script: echo "Setting ${{ setting.key }} = ${{ setting.value }}"

Generating matrix strategies dynamically

This is where each() gets really powerful. You can generate matrix legs from parameters:

# template: matrix-build.yml
parameters:
  - name: configurations
    type: object

jobs:
  - job: Build
    strategy:
      matrix:
        ${{ each config in parameters.configurations }}:
          ${{ config.key }}:
            buildConfig: ${{ config.value.buildConfig }}
            platform: ${{ config.value.platform }}
    steps:
      - script: echo "Building $(buildConfig) for $(platform)"

The dependencies Context

The dependencies context is how you pass data between jobs and stages. It exposes two things: the result of a job (Succeeded, Failed, etc.) and any output variables that job set.

Accessing job outputs within the same stage

jobs:
  - job: Setup
    steps:
      - script: |
          echo "##vso[task.setvariable variable=dbVersion;isOutput=true]14.2"
        name: setVars

  - job: Deploy
    dependsOn: Setup
    variables:
      dbVer: $[dependencies.Setup.outputs['setVars.dbVersion']]
    steps:
      - script: echo "Database version is $(dbVer)"

The syntax is dependencies.<jobName>.outputs['<stepName>.<variableName>'].

Accessing stage outputs across stages

When crossing stage boundaries, the syntax adds the job name:

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: |
              echo "##vso[task.setvariable variable=imageTag;isOutput=true]$(Build.BuildId)"
            name: setTag

  - stage: Deploy
    dependsOn: Build
    variables:
      deployTag: $[stageDependencies.Build.BuildJob.outputs['setTag.imageTag']]
    jobs:
      - job: DeployJob
        steps:
          - script: echo "Deploying image tag $(deployTag)"

Notice the switch from dependencies to stageDependencies when crossing stages.

Checking job/stage results

stages:
  - stage: Test
    jobs:
      - job: UnitTests
        steps:
          - script: npm test

      - job: IntegrationTests
        steps:
          - script: npm run test:integration

  - stage: Deploy
    dependsOn: Test
    condition: |
      and(
        eq(dependencies.Test.result, 'Succeeded'),
        eq(variables['Build.SourceBranch'], 'refs/heads/main')
      )

Compile-Time vs Runtime Expressions

This is the single most important concept for writing correct expressions, and the one I see people get wrong most often.

Compile-time: ${{ }}

Evaluated before the pipeline runs, during template expansion. The YAML is processed, expressions are resolved, and the final pipeline is sent to the agent. These expressions can use parameters, template context, and literal values. They cannot use runtime variables like $(Build.BuildId) because those do not exist yet.

# Compile-time -- resolved before the pipeline starts
parameters:
  - name: runTests
    type: boolean
    default: true

steps:
  - ${{ if eq(parameters.runTests, true) }}:
    - script: npm test

The if block is evaluated during template expansion. If runTests is false, the step is physically removed from the pipeline -- the agent never sees it.

Runtime: $[ ]

Evaluated during the pipeline run, on the agent. These expressions can use runtime variables, dependencies, counter(), and anything that requires the run context.

variables:
  isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]

steps:
  - script: echo "On main branch"
    condition: eq(variables.isMain, 'True')

Macro syntax: $( )

This is variable substitution, not an expression. It is replaced with the variable value as a simple string replacement. No functions, no logic.

steps:
  - script: echo "Build $(Build.BuildId) on branch $(Build.SourceBranchName)"

Where each syntax works

Context ${{ }} $[ ] $( )
Template parameters Yes No No
Variable definitions Yes Yes No
Conditions Yes Yes (default) No
Step inputs Yes No Yes
Display names Yes No Yes
Script bodies No No Yes

The key rule: conditions default to runtime expressions even without brackets. When you write condition: eq(...), it is implicitly a runtime expression. If you write condition: ${{ eq(...) }}, you force compile-time evaluation, which means the condition is baked before the pipeline runs.
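
A minimal contrast (the parameter name is illustrative):

```yaml
parameters:
  - name: skipTests
    type: boolean
    default: false

steps:
  # Runtime evaluation (the default for conditions): decided when the
  # step is reached during the run
  - script: npm test
    condition: ne(variables['Build.Reason'], 'PullRequest')

  # Forced compile-time evaluation: resolved to a literal True/False during
  # template expansion, before the run starts
  - script: npm test
    condition: ${{ not(parameters.skipTests) }}
```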

Expression Length Limits and Performance

Azure DevOps imposes limits on template expansion: no more than 100 separate YAML files may be included (directly or indirectly), no more than 20 levels of template nesting (templates including other templates), and no more than 10 megabytes of memory consumed while parsing the YAML -- which in practice corresponds to roughly 600 KB to 2 MB of on-disk YAML, depending on the features used. Deeply nested templates that generate massive YAML can hit these limits at queue time.

In practice, if you are hitting these limits, your pipeline is probably too complex and should be split into multiple pipelines with pipeline triggers.
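
Splitting with pipeline triggers can look like this (a sketch; the pipeline name 'MyApp-Build' and the alias are illustrative):

```yaml
# Downstream pipeline: runs automatically after the upstream pipeline completes
resources:
  pipelines:
    - pipeline: upstream          # alias used to reference the triggering run
      source: 'MyApp-Build'       # name of the upstream pipeline
      trigger:
        branches:
          include:
            - main

steps:
  - script: echo "Triggered by upstream run $(resources.pipeline.upstream.runName)"
```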

Building Complex Conditional Logic

Real-world pipelines need complex conditions. Here is how to layer them.

Nested conditions for deployment gates

stages:
  - stage: Production
    dependsOn:
      - Staging
      - SecurityScan
    condition: |
      and(
        succeeded('Staging'),
        succeeded('SecurityScan'),
        or(
          eq(variables['Build.SourceBranch'], 'refs/heads/main'),
          and(
            startsWith(variables['Build.SourceBranch'], 'refs/heads/release/'),
            eq(variables['forceDeployRelease'], 'true')
          )
        ),
        ne(variables['skipProduction'], 'true')
      )

This deploys to production only when: staging and security scans passed, AND (it is the main branch OR it is a release branch with force-deploy enabled), AND nobody set skipProduction.

Conditional variable assignment

variables:
  isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]
  replicas: $[iif(eq(variables.isProd, 'True'), 3, 1)]
  memoryLimit: $[iif(eq(variables.isProd, 'True'), '2Gi', '512Mi')]

Note that iif() in runtime expressions returns the second argument when the condition is true, or the third when false.

Complete Working Example

Here is a complete pipeline that ties together most of the concepts in this article. It builds a .NET application with dynamic version numbering, runs tests conditionally, and deploys to multiple environments with environment-aware configuration.

trigger:
  branches:
    include:
      - main
      - release/*
      - feature/*

parameters:
  - name: forceFullBuild
    type: boolean
    default: false
    displayName: 'Force full build (skip caching)'

  - name: deployEnvironments
    type: object
    default:
      dev:
        resourceGroup: 'rg-myapp-dev-eastus2'
        replicas: 1
        sku: 'B1'
      staging:
        resourceGroup: 'rg-myapp-staging-eastus2'
        replicas: 2
        sku: 'S1'
      production:
        resourceGroup: 'rg-myapp-prod-eastus2'
        replicas: 3
        sku: 'P1v2'

  - name: targetServices
    type: object
    default:
      - api
      - web
      - worker

variables:
  majorMinor: '3.1'
  patch: $[counter(variables.majorMinor, 0)]
  fullVersion: '$(majorMinor).$(patch)'
  isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]
  isRelease: $[startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')]
  shouldDeploy: $[or(eq(variables.isMain, 'True'), eq(variables.isRelease, 'True'))]
  buildConfiguration: $[iif(or(eq(variables.isMain, 'True'), eq(variables.isRelease, 'True')), 'Release', 'Debug')]
  imageTag: $[coalesce(variables['overrideTag'], variables.fullVersion)]

stages:
  - stage: Build
    displayName: 'Build v$(fullVersion)'
    jobs:
      - job: BuildAndTest
        displayName: 'Build $(buildConfiguration)'
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: |
              echo "Building version $(fullVersion)"
              echo "Configuration: $(buildConfiguration)"
              echo "Image tag: $(imageTag)"
            displayName: 'Display build info'

          - script: |
              dotnet restore
              dotnet build --configuration $(buildConfiguration) /p:Version=$(fullVersion)
            displayName: 'Build solution'

          - script: |
              dotnet test --configuration $(buildConfiguration) \
                --logger "trx;LogFileName=testresults.trx" \
                --collect "Code coverage"
            displayName: 'Run tests'
            condition: or(succeeded(), eq('${{ parameters.forceFullBuild }}', 'true'))

          - script: |
              echo "##vso[task.setvariable variable=builtVersion;isOutput=true]$(fullVersion)"
              echo "##vso[task.setvariable variable=builtConfig;isOutput=true]$(buildConfiguration)"
              echo "##vso[task.setvariable variable=builtImageTag;isOutput=true]$(imageTag)"
            name: buildOutputs
            displayName: 'Set output variables'

          - ${{ each service in parameters.targetServices }}:
            - script: |
                docker build \
                  -t myregistry.azurecr.io/${{ service }}:$(imageTag) \
                  -f src/${{ service }}/Dockerfile \
                  .
              displayName: 'Build ${{ service }} image'

          - script: |
              echo '${{ convertToJson(parameters.deployEnvironments) }}' > deploy-config.json
            displayName: 'Write deployment config'

          - publish: deploy-config.json
            artifact: config
            displayName: 'Publish config artifact'

  - ${{ each env in parameters.deployEnvironments }}:
    - stage: Deploy_${{ env.key }}
      displayName: 'Deploy to ${{ upper(env.key) }}'
      dependsOn:
        - Build
        - ${{ if eq(env.key, 'staging') }}:
          - Deploy_dev
        - ${{ if eq(env.key, 'production') }}:
          - Deploy_staging
      ${{ if eq(env.key, 'production') }}:
        condition: |
          and(
            succeeded(),
            eq(variables.isMain, 'True'),
            ne(variables['skipProduction'], 'true')
          )
      ${{ else }}:
        condition: and(succeeded(), eq(variables.shouldDeploy, 'True'))
      variables:
        deployVersion: $[stageDependencies.Build.BuildAndTest.outputs['buildOutputs.builtVersion']]
        deployTag: $[stageDependencies.Build.BuildAndTest.outputs['buildOutputs.builtImageTag']]
        resourceGroup: ${{ env.value.resourceGroup }}
        replicas: ${{ env.value.replicas }}
        appSku: ${{ env.value.sku }}
        appName: ${{ format('app-myapp-{0}', env.key) }}
      jobs:
        - deployment: Deploy${{ env.key }}
          displayName: 'Deploy to ${{ env.key }}'
          environment: ${{ env.key }}
          pool:
            vmImage: 'ubuntu-latest'
          strategy:
            runOnce:
              deploy:
                steps:
                  - download: current
                    artifact: config

                  - script: |
                      echo "Deploying version $(deployVersion) to ${{ env.key }}"
                      echo "Resource group: $(resourceGroup)"
                      echo "Replicas: $(replicas)"
                      echo "SKU: $(appSku)"
                      echo "App name: $(appName)"
                      echo "Image tag: $(deployTag)"
                    displayName: 'Display deployment info'

                  - ${{ each service in parameters.targetServices }}:
                    - script: |
                        echo "Deploying ${{ service }}:$(deployTag) to $(appName)-${{ service }}"
                        echo "Setting replicas to $(replicas)"
                      displayName: 'Deploy ${{ service }} to ${{ env.key }}'

                  - script: |
                      echo "##vso[task.setvariable variable=deployedVersion;isOutput=true]$(deployVersion)"
                    name: deployOutput
                    displayName: 'Set deployment output'

  - stage: Notify
    displayName: 'Post-deployment'
    dependsOn:
      - ${{ each env in parameters.deployEnvironments }}:
        - Deploy_${{ env.key }}
    condition: succeededOrFailed()
    jobs:
      - job: SendNotification
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: |
              echo "Pipeline complete"
              echo "Build: $(Build.BuildNumber)"
              echo "Reason: $(Build.Reason)"
            displayName: 'Pipeline summary'

This pipeline demonstrates:

  1. counter() for auto-incrementing patch versions
  2. coalesce() for falling back from an override tag to the generated version
  3. each() for iterating over services and environments
  4. format() for building app names from environment keys
  5. convertToJson() for serializing parameter objects to files
  6. dependencies / stageDependencies for passing the built version between stages
  7. Nested conditions combining branch checks, stage results, and override flags
  8. Compile-time if/else for environment-specific stage dependencies
  9. Runtime expressions for variables that need build context

Common Issues and Troubleshooting

1. "An expression is not allowed in this context"

/azure-pipelines.yml (Line: 14, Col: 5): An expression is not allowed in this context

You used a runtime expression ($[ ]) where only compile-time expressions (${{ }}) or macro syntax ($( )) are allowed. This commonly happens when you try to use $[ ] inside step inputs or script bodies. Step inputs use $( ) for variable substitution, not $[ ].

Fix: Use $[ ] only in variable definitions and conditions. Use $( ) in step inputs and scripts.

2. "The template expression is invalid"

The template expression '${{ variables.myVar }}' is invalid:
the value of 'variables.myVar' cannot be determined at template expansion time

You tried to use a runtime variable inside a compile-time expression. Variables like $(Build.BuildId) do not exist at compile time. Only parameters, literals, and the limited set of variables available at template expansion time can be used in ${{ }}.

Fix: Move the expression to a runtime context ($[ ]), or use a parameter instead of a variable.
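
For instance (a sketch; the parameter name is illustrative), replacing a compile-time variable lookup with a parameter makes the value available during template expansion:

```yaml
parameters:
  - name: deployTarget
    type: string
    default: 'dev'

steps:
  # Parameters are always available at compile time
  - ${{ if eq(parameters.deployTarget, 'prod') }}:
    - script: echo "Running production-only step"
```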

3. Output variable is always empty

# Job B tries to read Job A's output, but gets empty string
variables:
  version: $[dependencies.JobA.outputs['setVer.version']]

This happens for three common reasons:

  • Missing isOutput=true -- the setvariable command must include isOutput=true for the variable to be accessible from other jobs.
  • Wrong step name -- the name field on the step (not displayName) is what you use in the expression.
  • Missing dependsOn -- Job B must declare dependsOn: JobA for the dependencies context to be populated.

Fix: Verify all three: isOutput=true, correct step name, and explicit dependsOn.
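
Putting all three requirements together, a minimal working pair of jobs looks like this:

```yaml
jobs:
  - job: JobA
    steps:
      - script: echo "##vso[task.setvariable variable=version;isOutput=true]1.4.2"
        name: setVer                 # the step *name*, used in the expression below

  - job: JobB
    dependsOn: JobA                  # required for the dependencies context
    variables:
      version: $[dependencies.JobA.outputs['setVer.version']]
    steps:
      - script: echo "Version is $(version)"
```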

4. counter() does not reset when expected

Expected version 1.0.0 but got 1.0.47

The counter() prefix must actually change for the counter to reset. If you hardcode counter('1.0', 0), the counter will never reset because the prefix never changes. If you use a variable like counter(variables.majorMinor, 0), the counter resets when you change majorMinor.

Fix: Make sure the prefix string genuinely changes. Also remember that counter() state is scoped to the pipeline definition -- renaming the pipeline keeps the same definition and the same counters, but deleting and recreating it starts the counters fresh.

5. each() produces "Unexpected value" error

/templates/deploy.yml (Line: 8, Col: 9): Unexpected value ''

You tried to use each() in a runtime expression or in a non-template context. The each() function only works in compile-time template expressions (${{ }}), and the collection must be a parameter or a compile-time-known value.

Fix: Ensure each() is inside ${{ }} and the collection is a parameter with type object, not a runtime variable.

6. Condition is always true (or always false)

# This is always true because the string 'false' is truthy
condition: variables.shouldSkip

Remember Azure DevOps truthiness rules. The string 'false' is truthy -- it is a non-empty string, and only the empty string converts to Boolean False. Use explicit comparison:

condition: eq(variables.shouldSkip, 'true')

Best Practices

  • Always use explicit comparisons in conditions. Do not rely on truthiness. Write eq(variables.flag, 'true') instead of just variables.flag. The truthiness rules are non-obvious -- every non-empty string, including 'false' and '0', evaluates as true.

  • Prefer coalesce() over nested iif() for defaults. If you are just picking the first available value from a list, coalesce(a, b, c) is cleaner and easier to read than iif(eq(a, ''), iif(eq(b, ''), c, b), a).

  • Use counter() with a version prefix variable, not a hardcoded string. This lets you reset the counter by changing the variable value, which you can do through the pipeline UI or a variable group without editing YAML.

  • Keep compile-time and runtime concerns separated. Do not mix ${{ }} and $[ ] in ways that make the evaluation order ambiguous. Template parameters and structural decisions (which stages exist, which steps are included) belong in ${{ }}. Runtime data (build IDs, branch names, dependency outputs) belongs in $[ ].

  • Name your steps when you need output variables. If a step sets an output variable, it must have a name field. Use short, descriptive names like setVars or buildOutput. The displayName is for humans; name is for expressions.

  • Document complex conditions with comments. YAML supports comments with #. When you have a condition spanning five or more lines with nested logic, add a comment explaining the business rule in plain English.

  • Use convertToJson() to pass structured data to scripts. Instead of passing ten separate variables to a script, pass the whole parameter object as JSON and parse it in the script. This reduces the variable surface area and keeps the configuration co-located.

  • Test expressions in a throwaway pipeline first. Expression syntax errors are only caught at queue time (compile-time) or run time. There is no linter or dry-run mode. Create a scratch pipeline to test complex expressions before putting them in your production pipeline.

  • Watch for case sensitivity in variable names. Variable names in expressions are case-insensitive, but the environment variables that scripts receive are upper-cased and are case-sensitive on Linux agents. The bracket syntax variables['MyVar'] is the safest way to access variables because it avoids dot-notation parsing issues with special characters.
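
As a sketch of the convertToJson() practice above (assuming jq is available on the agent, as it is on Microsoft-hosted Ubuntu images; the parameter shape is illustrative):

```yaml
parameters:
  - name: deployConfig
    type: object
    default:
      replicas: 3
      memory: '512Mi'

steps:
  - script: |
      echo '${{ convertToJson(parameters.deployConfig) }}' > config.json
      # Parse individual values in the script instead of passing many variables
      jq -r '.replicas' config.json
      jq -r '.memory' config.json
    displayName: 'Consume structured config'
```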
