Deployments and Environments

When you start building a solution in ScriptRunner Connect, you can start testing your solution right away by either manually triggering scripts or event listeners or by triggering event listeners externally (triggering the event from the external service). Once you have reached a state where your solution is ready and is working in your current environment as expected, you probably want to make sure the version you are happy with stays that way. In other words, when you come back to your workspace and introduce some changes down the line, you probably won't expect these changes to be applied to the existing environment right away, as you would like to test the changes first and only when it looks good do you want to deploy the new version. This is what the deployments feature is meant to help you out with.

In short, deployments create a snapshot of your workspace and make sure it becomes immutable, meaning that once a deployment is created, nobody, not even you, can change it. The concept of environments allows you to multiply that experience. Instead of having a singular deployment active at any given time, you can create multiple environments and configure these environments to target various deployments you have created. The most common set of environments you can think of are production, staging, and development, as you ideally should start developing a solution in your local/isolated development environment. When things look good, you deploy the solution into the staging environment and let other people test if your solution works as intended, and if the staging feedback looks good, you should finally promote the version into the production environment. This is not always possible, or you might have a different set of environments—sometimes, you don't have any other environments to work with other than the production! We make no assumptions about the ways that teams work best. Just know that when you need to have your solution deployed into more than a single environment, the environments feature is there to help you out.


Deployments

As mentioned earlier, deployments create an immutable (fixed in time) snapshot of your current workspace. You can create a deployment whenever you need to by clicking Deploy on the workspace screen. You have to specify a version for your new deployment, can optionally add a label, and must select at least one environment to deploy the new version into.

If your workspace has only one environment, the environment selection step is skipped and the deployment is applied to that sole environment.

Semantic versioning

When deploying, a new version is suggested for you, but you can specify any version you like as long as it follows the semantic versioning format and is higher than the previous version. In short, a semantic version is made up of three numbers: major, minor, and patch.

The first number is the major version, which you should increase when you make a major change, often involving a breaking change. The second number is the minor version, which you should increase when you add functionality, preferably in a backward-compatible manner. The third number, the patch version, should be increased when the change fixes a bug or otherwise doesn't constitute new functionality, again in a backward-compatible manner.
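The bump rules above can be sketched in a small helper. This is purely illustrative (the `bump` function is not part of any ScriptRunner Connect API), assuming a plain `major.minor.patch` version string:

```typescript
// Illustrative semantic-versioning helper; not a ScriptRunner Connect API.
type BumpKind = "major" | "minor" | "patch";

function bump(version: string, kind: BumpKind): string {
  const [major, minor, patch] = version.split(".").map(Number);
  switch (kind) {
    case "major":
      return `${major + 1}.0.0`; // breaking change: reset minor and patch
    case "minor":
      return `${major}.${minor + 1}.0`; // backward-compatible new functionality
    case "patch":
      return `${major}.${minor}.${patch + 1}`; // backward-compatible bug fix
  }
}

console.log(bump("1.4.2", "major")); // 2.0.0
console.log(bump("1.4.2", "minor")); // 1.5.0
console.log(bump("1.4.2", "patch")); // 1.4.3
```

Note that bumping the major or minor number resets the numbers to its right, which is why `1.4.2` becomes `2.0.0` rather than `2.4.2`.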

Version labels are optional free-text descriptions that help you distinguish deployments later.


Environments

You can think of an environment as a collection of connectors attached to your API connections and event listeners. You always start out with a single environment when you create a new workspace. This environment can represent anything you like, and oftentimes you may not need more than one. You can create as many environments as you need and name them however you like. An environment is simply a grouping of the connectors attached to your API connections and event listeners; environments also let you specify a different schedule for scheduled triggers in each environment.

When you create a new environment in your workspace, you'll be asked to set up connectors again for the API connections, event listeners (when required), and a schedule for scheduled triggers in the new environment. How you set up your connectors and schedules for each environment is entirely up to you. Oftentimes you'll also receive new setup instructions for event listeners when they make use of webhooks that require manual setup.

Avoid forking ⛔ 

Scripts remain static across environments. Changing scripts in each environment separately would essentially fork your codebase. While it might seem like a good idea at first, forking a codebase has far-reaching implications for codebase maintenance.

If you need to fork your codebase, consider creating a copy of the workspace.

Environment selector

In the workspace header, you'll find a selector for the environment. When you switch the environment, the following occurs:

  • Attached connectors for API connections and event listeners change to reflect the state of the connector for the environment you have selected. Schedules for scheduled triggers will change if you use different schedules in your environments.
  • Triggering scripts or event listeners manually will use the appropriate connectors for API connections of the environment you have selected and will execute the latest version of the scripts rather than the deployed version. Being able to manually trigger the latest version lets you test new changes without creating another environment. However, if you need to test the latest version by triggering scripts externally or on a schedule, you have to create a new environment, as externally triggered scripts and scripts triggered on a schedule will run the deployed version.
  • By default, the console will only show logs that originate from the environment you have selected. (You can disable this feature by unchecking the Filter by Environments flag in the console).

Triggering deployed code

When switching to an environment that is deployed, you will still see the latest version of the workspace (draft), not the version of the workspace that is deployed. However, the code that is triggered externally will target the version that is deployed.

Changing which deployment is used in each environment

When you create a new deployment, you can choose one or more environments in which to activate it. However, sometimes you may want to change the mapping of which environment uses which deployment without creating a new deployment, since you should only create a new deployment when there is something new to release.

If you have multiple environments, for example, development, staging, and production, you may want to create a new deployment and deploy it to staging so other users can test your solution. If the code needs further adjustment, you can deploy another version to staging and repeat the process as many times as necessary.

When you are satisfied with the work, you probably want to deploy it to production. Rather than creating a new deployment (the version you want to promote already exists), you can simply change which version the production environment uses. While you could achieve the same result by creating a new deployment and selecting the production environment on the deployment screen, this is not recommended; use the Environment Deployment Manager instead to change the production environment's deployment version.

The Environment Deployment Manager can also be used to roll back to an earlier version, which cannot be achieved by creating a new deployment. For example, suppose you deployed a version into production and then discovered a severe bug. If you don't have time to fix the bug and deploy a new version into production (fix forward), which is generally the recommended practice, you can roll back your production environment by selecting an older version you know works well. Currently, we don't support viewing the state of the workspace for past deployments, so you have to know which older version was good. If you're not sure which version worked well, try to fix forward by creating a new deployment with the bug fixed. Descriptive version labels can help you further distinguish versions.

Script version execution priority

As long as the environment you are triggering scripts or event listeners from does not have a deployment attached, the current version of your workspace code gets triggered. When triggering externally, the saved version of the script runs. When you trigger a script or event listener manually (internally), the unsaved version runs if the script has unsaved changes; otherwise, the saved version runs. This lets you test your code without saving it first if you're unsure about the changes you're about to make. The same logic applies when you trigger a script or event listener manually in an environment that is deployed. As already hinted, whenever a script is triggered externally and the environment it was triggered from is deployed, the version of the code (fixed in time) that belongs to that deployment runs. The only exception to this rule is a script created after the environment was deployed: when triggered externally, it continues to run the current version until a new deployment is created and activated for the environment you are triggering from.

Scripts that get triggered on a schedule are considered externally triggered in this case, meaning that if the environment is deployed, then the scheduled trigger triggers the version of the script that is deployed. However, you can always continue to trigger scripts manually that are set as targets for scheduled triggers if you need to test the latest version in an environment that is deployed.
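The priority rules above can be summarized in a small decision sketch. The names and types here (`Trigger`, `EnvState`, `resolveVersion`) are illustrative assumptions, not part of the ScriptRunner Connect API:

```typescript
// Illustrative sketch of the version-resolution rules described above;
// all names here are assumptions, not a ScriptRunner Connect API.
type Trigger = "manual" | "external" | "scheduled";

interface EnvState {
  hasDeployment: boolean; // is a deployment attached to this environment?
  scriptInDeployment: boolean; // does the script exist in that deployment?
  scriptSaved: boolean; // are there no unsaved changes in the draft?
}

type VersionToRun = "unsaved draft" | "saved draft" | "deployed snapshot";

function resolveVersion(trigger: Trigger, env: EnvState): VersionToRun {
  // Manual (internal) triggers always run the latest code,
  // preferring unsaved changes when they exist.
  if (trigger === "manual") {
    return env.scriptSaved ? "saved draft" : "unsaved draft";
  }
  // External and scheduled triggers run the deployed snapshot,
  // unless no deployment is attached or the script was created
  // after the deployment was made.
  if (env.hasDeployment && env.scriptInDeployment) {
    return "deployed snapshot";
  }
  return "saved draft";
}
```

For example, a scheduled trigger in a deployed environment resolves to the deployed snapshot, while a manual trigger in that same environment still resolves to the draft.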

An example

Let's say we are building a solution that listens to an event originating from Jira that then does something in Confluence. Let's say we have Jira A and Confluence A that represent our production environment, Jira B and Confluence B that represent our staging environment, and Jira C and Confluence C that represent our development environment. In this case, the logical mapping would be to say that production is Jira A → Confluence A, staging Jira B → Confluence B, and development Jira C → Confluence C.
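The mapping can be pictured as a simple lookup table. The instance names below are the hypothetical ones from this example, and the data structure itself is only an illustration of the concept, not how ScriptRunner Connect stores connectors:

```typescript
// Hypothetical connector mapping for the Jira → Confluence example;
// instance names and structure are illustrative only.
const environmentConnectors: Record<string, { jira: string; confluence: string }> = {
  production: { jira: "Jira A", confluence: "Confluence A" },
  staging: { jira: "Jira B", confluence: "Confluence B" },
  development: { jira: "Jira C", confluence: "Confluence C" },
};

console.log(environmentConnectors.staging); // { jira: "Jira B", confluence: "Confluence B" }
```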

Initially, we may just start out by setting up our default environment with Jira C as the connector to the event listener and Confluence C as the connector to the API connection. Initially, we won't be making any deployments, so when we trigger scripts, either manually or externally, a current version of our workspace is always triggered. However, the difference might be whether the code that gets triggered is unsaved or saved, depending on whether we'll be triggering events manually (internally) or externally.

Once we're happy with our own testing and would like to get some external feedback, we can create a new environment for staging and set up Jira B and Confluence B connectors in that environment. We can then proceed to create a new version and deploy it into staging. We can create as many more deployments as we need until we're ready to deploy into production. Now that the staging environment is deployed, a version of the code that was snapshotted at the time of creating the deployment will get triggered when events are processed externally. We obviously can switch our workspace to the staging environment and trigger scripts manually to try out our latest code in staging if we need to, but everyone else will be triggering events externally by interacting with the Jira B instance, which is triggering the code that we have deployed.

Eventually, when we are satisfied with the state of our solution in the staging environment, we can create a new environment and set up Jira A and Confluence A connectors that represent our production environment. This time, instead of making a new deployment, we can use the Environment Deployment Manager to set the production environment to use the same version we were happy with in the staging environment. And should there be a need to add new functionality or fix bugs, we can repeat this cycle, develop and test it initially in our development environment, deploy to staging for external verification, and, finally, promote it to production when we're happy with the changes.

As mentioned earlier, we make no assumptions about how many environments you might need, if any, and what connector mapping you use for them. For example, you might only have a single Jira instance, Jira A, to work with, but three different Confluence instances. In that case your setup could look like Jira A → Confluence A for production, Jira A → Confluence B for staging, and Jira A → Confluence C for development.

In this example, you need to achieve instance-level virtual environment isolation for Jira, since you probably don't want every environment to process all events. For Jira, this can be achieved by configuring project-level webhooks with a JQL expression so that only events originating from the projects that represent your virtual environments are processed. Not all services offer such convenient means to configure virtual environment isolation on the service side. In those cases, you may need to selectively process events on the ScriptRunner Connect side, which you can do by checking which environment the event originated from, reading the environment info from the context object, which is always the second parameter passed into your functions.

For example:

```typescript
export default async function (event: any, context: Context) {
    // The exact property path for the environment name is assumed here;
    // check the Context type in your workspace for the actual field.
    if (context.environment.name === 'Staging') {
        // Do something only when the event was received from the Staging environment
    }
}
```