A pipeline is a sequence of automation steps. Pipelines describe a workflow in which steps are executed in series or in parallel, and data can be passed from one step to subsequent steps. Pipelines can be built with the visual builder in the Refactr application, which includes a side-by-side text editor for the underlying YAML definition. Each pipeline can define a set of inputs (called variables) and outputs. Variables are provided when executing a pipeline, either via the API or through rich, dynamic forms in the application.
The action performed by each step of a pipeline is defined by the step’s module type. Modules do the heavy lifting of running automation tools for you by providing a powerful low-code interface, seamless authentication using credentials, and automatic installation of each tool at any version during the pipeline run. A wide variety of modules are provided out of the box, including popular scripting languages, configuration management tools, infrastructure automation tools, and security automation tools.
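As a concrete illustration, a minimal pipeline definition might look like the following sketch. The field names shown here (`steps`, `type`, `properties`, `dependencies`) and the module types are assumptions for illustration only, not Refactr’s authoritative schema:

```yaml
# Illustrative sketch only — field names and module types are hypothetical.
steps:
  - id: lint
    name: Lint templates
    type: script              # assumed module type for a shell-script step
    properties:
      script: |
        echo "linting templates..."
  - id: deploy
    name: Deploy infrastructure
    type: terraform           # assumed module type for an infrastructure tool
    dependencies: [lint]      # runs in series, after the lint step completes
    properties:
      command: apply
```

Steps with no dependency on one another could run in parallel, while the `dependencies` list forces serial ordering where data must flow from one step to the next.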
A powerful feature of the Refactr Platform is the ability to include pipelines as steps in other pipelines. In addition to the built-in step modules, you can build your own reusable component pipelines with structured inputs and outputs that can be shared with others.
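To illustrate pipeline inclusion, a component pipeline might be referenced as a step roughly like this. The `pipeline` module type, the reference format, and the property names are hypothetical placeholders, not the platform’s exact syntax:

```yaml
# Hypothetical sketch: including another pipeline as a step.
steps:
  - id: security_scan
    type: pipeline                      # assumed module type for pipeline inclusion
    properties:
      pipeline: my-org/security-scan    # hypothetical reference to the component pipeline
      variables:                        # structured inputs passed to the included pipeline
        target_host: staging.example.com
```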
Variables are structured data values available at runtime to pipelines. Project variables are available to all pipelines in the project. Pipeline variables are available at the pipeline scope, and can be overridden at runtime, either by the run inputs (for a top-level pipeline), or by the step properties (for an included pipeline).
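A pipeline’s variables might be declared along these lines; the field names and types here are an illustrative sketch rather than Refactr’s exact schema:

```yaml
# Hypothetical sketch of pipeline-scoped variables with defaults.
variables:
  - name: environment
    type: string
    default: staging      # can be overridden by run inputs or, when this
                          # pipeline is included, by the including step's properties
  - name: replica_count
    type: number
    default: 2
```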
Data can be manipulated during pipeline execution using Refactr’s powerful expression evaluation engine. Every step property can be passed either a literal value or an expression that is evaluated just before the step executes. Built-in functions and contextual variables let you retrieve and transform data from one pipeline step to the next.
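For example, a step property could reference the output of an earlier step instead of a literal value. The expression syntax and the `steps.<id>.output` context shown below are assumptions for the sketch, not the engine’s documented syntax:

```yaml
# Hypothetical sketch: an expression evaluated just before the step runs.
steps:
  - id: notify
    type: script                              # assumed module type
    properties:
      # assumed expression syntax; pulls data produced by a prior "deploy" step
      message: (steps.deploy.output.url)
```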
Credentials are reusable authentication details that can be easily passed into step modules to eliminate the complexity of authenticating with underlying tools. Credentials are treated as first-class variable types, which allows you to define a credential as a dynamic input to a pipeline. Many common credential types are supported, including SSH keys, passwords, and API tokens.
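Because credentials are first-class variable types, a pipeline could accept one as a dynamic input and hand it to a step module, along these lines. The type names and reference syntax below are hypothetical, used only to illustrate the concept:

```yaml
# Hypothetical sketch: a credential declared as a pipeline input variable.
variables:
  - name: deploy_key
    type: credential          # assumed variable type
    credential_type: ssh_key  # assumed credential kind
steps:
  - id: configure
    type: ansible                         # assumed module type
    properties:
      ssh_credential: (vars.deploy_key)   # assumed reference syntax
```

Passing credentials this way keeps secrets out of the pipeline definition itself; the caller supplies the credential at run time.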
A job defines how and when a pipeline executes, including its trigger type, predefined input variables, and runtime behavior. Jobs are useful for creating a reusable set of pipeline inputs, and can be triggered on a recurring schedule. Pipelines can also be run directly without a job.
Running a pipeline or job creates an individual run, which stores detailed run output for each executed step. Runs are automatically deleted after the retention period of your subscription plan.
Pipelines, jobs, and other application objects are organized in projects. Projects are the primary scope for access control, so you can control who in your organization can see and modify data by assigning users and groups to projects.
Organizations are shared accounts that allow collaboration within a real-world organization. All users are associated with a primary organization, and organization administrators can control which projects users have access to.
Runners are machines that execute pipelines. Refactr provides secure, cloud-hosted runners out of the box that allow you to get started quickly. You can also host runners in your own Linux environment by using a self-hosted runner.