Running Pipelines
- About Jobs
- Specifying Inputs for a Run
- Running a Pipeline on a Schedule
- Incoming Webhook Job Trigger
- Disabling Jobs
- Suppressing Output From a Run
- Getting Outputs From a Run
- Masking Secrets in Run Events
- Queueing Multiple Runs
- Automatic retry of a failed run
A pipeline can be run from the application by clicking the Run button from the pipeline builder. Running a pipeline queues up the run and makes it available to runners, which acquire the run, execute it, and report detailed log information back to the application.
About Jobs
While pipelines can be run directly, you can also create one or more jobs, which enable more sophisticated control over how and when a pipeline runs. A job is like a “run configuration” for a pipeline. It defines a pipeline revision, input variables, trigger type, and runtime settings for a pipeline run.
Specifying Inputs for a Run
If your pipeline has input variables, these will appear in the run dialog when a user runs your pipeline so that they can override the run parameters. Variables marked as hidden will not appear in run dialogs, but their values will be filled in at runtime. If a pipeline variable is marked as default, then the value specified in the pipeline will fill in for values omitted at runtime.
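The interaction between default values and runtime overrides can be sketched as a simple merge. This is an illustrative model only (the function and variable names below are made up, not Sophos Factory's implementation): runtime values win, and defaults fill in anything omitted.

```python
# Hypothetical sketch of how default pipeline variables combine with
# runtime overrides: runtime-supplied values take precedence, and
# pipeline defaults fill in any values omitted at runtime.
def resolve_variables(defaults, runtime):
    resolved = dict(defaults)   # start from the pipeline's default values
    resolved.update(runtime)    # runtime values override defaults
    return resolved

pipeline_defaults = {"region": "us-east-1", "dry_run": "true"}
runtime_values = {"dry_run": "false"}
print(resolve_variables(pipeline_defaults, runtime_values))
```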
Note
When editing a pipeline, it’s possible to omit variable values, even if a value is required, because there is still an opportunity to provide values at runtime. Jobs, however, require a valid set of variables on creation, because a job’s trigger type can be scheduled, leaving no opportunity to set variables at runtime. This means that in some situations you’ll need to provide placeholder values for variables in a job if you’re overriding them at runtime, which is most common when using the API.
Running a Pipeline on a Schedule
To run a pipeline on a schedule, create a job. To create a job, visit the Jobs page from the main navigation and then click New Job. In the dialog, select Scheduled for the trigger type, and then fill out the schedule fields.

For a scheduled job to run, it needs an available runner assigned to the associated project. If your project doesn’t have any runners, jobs will fail to create runs. This prevents the scheduler from creating a backlog of queued runs which are flushed all at once when the runner is started.
Incoming Webhook Job Trigger
Jobs can be configured with an incoming webhook trigger type, which enables powerful integrations with other systems that fire webhook events. For example, you can configure a job to run a pipeline every time a commit is made to a git repository on GitHub or GitLab.
See the Webhooks documentation for more information about webhooks in general.
To create an incoming webhook job, visit the Jobs page from the main navigation and then click New Job. In the dialog, select Incoming Webhook for the trigger type, and then optionally fill out the webhook fields. These fields can be changed after the job is created.

Note
- Incoming webhooks only support JSON request bodies. The Content-Type of incoming webhook requests should be application/json.
- Many features of Sophos Factory expressions are not supported in webhooks, including the read_file() helper and the filesystem test functions. Webhook expression evaluation is highly isolated and does not have access to a full operating system like it does inside a runner.
- Webhook expressions must evaluate quickly. Attempting to process large amounts of data in an incoming webhook request may cause that request to be terminated and return an error code.
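For example, an external system calling the webhook would send a request shaped like the sketch below. The URL here is a placeholder (the real URL comes from the job's webhook settings), and the payload is made up; the point is the JSON body and the application/json content type.

```python
import json

# Build the payload and headers for an incoming webhook request.
# The URL is a placeholder -- use the job's actual webhook URL.
url = "https://example.invalid/webhook/placeholder"
headers = {"Content-Type": "application/json"}
payload = json.dumps({"company": {"id": "5"}})

# To actually send it, e.g. with the standard library:
#   req = urllib.request.Request(url, data=payload.encode(), headers=headers)
#   urllib.request.urlopen(req)
print(payload)
```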
Incoming webhook jobs have several powerful features for transforming and validating the input request.
Transforming Input Variables
The Variables Transform field is an expression which can be used to dynamically create the input variables for a pipeline run from the webhook request. For example, for a pipeline with a single string variable called my_string, we could use this expression for the variables transform:
{
"my_string": "some literal value"
}
This example “hard codes” the value of the variable my_string. Often we’ll want to instead compute the value of this variable from the incoming request data. For example, if the external system sends a JSON body like this:
{
"company": {
"id": "5"
}
}
We can extract the company id into the my_string variable using this expression:
{
"my_string": body.company.id
}
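The transform above behaves roughly like the following Python sketch, shown for illustration only (the webhook runtime evaluates its own expression language, not Python): the parsed request body is used to build the run's input variables.

```python
import json

# Parse the incoming request body and build the run variables,
# mirroring the {"my_string": body.company.id} transform above.
raw_body = '{"company": {"id": "5"}}'
body = json.loads(raw_body)
variables = {"my_string": body["company"]["id"]}
print(variables)  # {'my_string': '5'}
```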
In addition to body, we can also use the headers object to access the incoming request headers. Header names are converted to lowercase, so if the company id is instead provided in an X-Company-Id header, we can access it using this expression:
{
"my_string": headers["x-company-id"]
}
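The lowercase normalization can be pictured with a small Python sketch (illustrative names only, not the platform's code): header names are lowered before the expression runs, so lookups must use lowercase keys.

```python
# Incoming header names are normalized to lowercase before the
# webhook expression is evaluated, so lookups use lowercase keys.
raw_headers = {"X-Company-Id": "5", "Content-Type": "application/json"}
headers = {name.lower(): value for name, value in raw_headers.items()}
variables = {"my_string": headers["x-company-id"]}
print(variables)  # {'my_string': '5'}
```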
Validating Incoming Requests
Credentials and project variables are also available in webhook expressions, which is useful for performing custom authentication. Let’s say our external system adds an X-Auth-Token header to webhook requests, and we want to validate that this token equals a secret value.
- Add a credential to the project with type API Token, and enter the secret token in the Token field. Let’s say the credential ID is my_cred.
- In the webhook job, use the following expression in the Validator field to check that the X-Auth-Token header matches the credential value:
credential("my_cred").token == headers["x-auth-token"]
Project variables can be accessed using the vars. syntax, so an equivalent validator can be created by using a project variable instead:
vars.my_cred == headers["x-auth-token"]
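In ordinary application code, the same token check is usually done with a constant-time comparison. A hedged Python sketch (hmac.compare_digest is the standard-library helper; the token values are made up):

```python
import hmac

# Compare a shared secret against the incoming X-Auth-Token header.
# compare_digest avoids the timing side channel a plain == can leak.
secret_token = "s3cr3t-value"  # e.g. the my_cred credential's token
headers = {"x-auth-token": "s3cr3t-value"}
is_valid = hmac.compare_digest(secret_token, headers.get("x-auth-token", ""))
print(is_valid)  # True
```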
Some webhook systems use basic authentication. To validate these requests, use the Credential field of a webhook job. This field should be a username/password type of credential. The parsing of the username and password is performed automatically behind the scenes, so you don’t need to write an expression for this case.
Finally, for systems that don’t provide an authentication mechanism, you can configure an IP whitelist for a webhook job. Any requests not matching the whitelisted IPs will be rejected with a 401 status code.
Controlling the Webhook Response
Since some systems require that webhook endpoints return specific status codes, you can override the success status code in the job. This code is returned whenever the webhook executes successfully, which does not necessarily mean the request created a run (for example, if the Condition expression evaluated to false, no run is created). To force an incoming webhook to always return a specific status code, even when errors occur, turn on the Ignore Errors field.
Disabling Jobs
When a job is disabled, it will never run, even if it has a scheduled trigger type. The Run button will be disabled in the application, and calls to the API will return an error code.
Scheduled jobs can also be configured to be automatically disabled when any runs fail.
Suppressing Output From a Run
If you’d like to prevent the runner from sending any detailed data about the run back to the application, you can configure a job to suppress its reporting.
- Suppress variables: The input variables are only stored temporarily until the runner begins executing the run, and then they are deleted. Run variables will not be available from the run page.
- Suppress outputs: Pipeline outputs will not be sent to the application by the runner.
- Suppress events: The detailed event log from the run will not be sent to the application by the runner. Note that this can make it difficult to debug pipeline runs, but it also ensures that no step output logs leave the runner.
With all three of these settings enabled, the runner will only send metadata about the run to the application, which creates a high degree of data isolation on the runner. This is useful when your pipelines are working with highly sensitive data, and you don’t want this data ever reaching the Sophos Factory servers.
Getting Outputs From a Run
After a pipeline is finished executing, its evaluated outputs are available from the run history page as well as from the API.
To view outputs from the application, open the run from the Run History page, then select Outputs from the dropdown at the top.
Masking Secrets in Run Events
All credential and SecureString variable values will be automatically masked in any run events. If the secret appears in run event logs, it will be replaced by *****. This helps prevent sensitive data from leaking into your run history.
When a secret value contains multiple lines of text, each line is treated as a separate secret. While this reduces the chance of a multiline secret evading the masking routine, it can also result in over-masking. For example, if your secret contains formatted JSON, the first line will be {, which causes every instance of { to be replaced with *****, which is probably not what you want. To work around this problem, enter JSON secrets as a single line.
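One way to produce a single-line form of a JSON secret is to round-trip it through json.dumps with compact separators, as in this sketch (the secret contents are made up):

```python
import json

# Collapse a formatted (multiline) JSON secret into a single line so
# that per-line masking doesn't treat "{" as a secret on its own.
formatted_secret = """{
  "client_id": "abc",
  "client_secret": "xyz"
}"""
single_line = json.dumps(json.loads(formatted_secret), separators=(",", ":"))
print(single_line)  # {"client_id":"abc","client_secret":"xyz"}
```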
Queueing Multiple Runs
When many runs are created quickly, they are placed into a queue to be processed by available runners. When no runners are assigned to a project, or there are not enough runners to process the runs, the maximum run queue length will eventually be reached. When the queue is full, you will no longer be able to create new runs from the application or API.
Automatic retry of a failed run
When a pipeline or job is run manually, there is an option to automatically retry the run if it fails. In Advanced Options, the value labeled “Automatic retry on failure” controls this feature.
The value of this option determines whether the run will be automatically retried when it fails. By default, the value is empty, meaning the run will not be retried automatically.
By entering a value greater than zero (0), the user can request that a failed run be automatically retried up to the number of retries specified. The maximum number of retry attempts that can be requested is 99. For example, if the user enters a value of 10 and the run fails, the run will be retried up to 10 additional times, resulting in a total of up to 11 runs: the original run plus up to 10 retries.
When a run completes, it will be automatically retried when all of the following are true.
- The Run Status is ‘Failed’.
- The maximum number of retry attempts was entered when the run was started.
- The maximum number of retry attempts has not yet been reached.
- The run was started from a job and the job is not currently disabled.
- The run queue is not full.
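The conditions above can be sketched as a small decision function. This is an illustrative model only, not the platform's implementation, and the function and parameter names are made up:

```python
# Illustrative model of the automatic-retry decision described above.
def should_retry(status, max_retries, retries_used,
                 started_from_job, job_disabled, queue_full):
    if status != "Failed":
        return False              # only failed runs are retried
    if max_retries is None or max_retries <= 0:
        return False              # automatic retry was never requested
    if retries_used >= max_retries:
        return False              # retry budget (up to 99) exhausted
    if started_from_job and job_disabled:
        return False              # disabled jobs never create runs
    if queue_full:
        return False              # a full queue stops further retries
    return True

# A failed run with retries remaining is retried again:
print(should_retry("Failed", 10, 3, True, False, False))    # True
# A canceled run is never retried:
print(should_retry("Canceled", 10, 3, True, False, False))  # False
```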
The automatic run retries may stop before the maximum number of retries is reached when any of the following are true.
- The run queue is full when a run – original or automatic retry – is being submitted.
- The Run Status of the last completed run is ‘Successful’.
- The Run Status of the last completed run is ‘Canceled’.
- The run was started from a job and the job has been disabled.
The “Automatic retry on failure” option is available in the following screens.
- When running a job or pipeline manually, the value for the current run can be specified.
- Run Job
- Run Pipeline
- Run Catalog Pipeline
- For manually-triggered jobs, a default value can be stored in the Job and, optionally, overridden when run.
- Create Job
- Edit Job
The “Automatic retry on failure” option is not available in the following screens.
- When manually retrying a run.
- Run Detail. The Retry button starts a manual retry of the run, not an automatic retry. Automatic run retries are not allowed here regardless of the setting on the original run.