The Automation Center has a testing function that lets you make sure that the customer journey you have designed works as expected. You can start to test a program as soon as all its paths are validated, and stop it whenever you want. When you stop a test, the program reverts to the state In design.
As a quick reminder, see the onboarding video about Testing, Launching & Reporting Programs.
What can be tested in a program?
The test functionality in the Automation Center allows you to test the logic of your program structure. There are four main things to test:
- That the contacts enter the program at the right time and for the right reason.
- That these contacts progress along the various paths as expected, according to their responses or their contact properties.
- That all the various messages and other actions are executed as expected.
- That the contacts exit the program at the right time and for the right reason.
In particular, testing is important to make sure that you have arranged the logic of any filter switches correctly (i.e. that the right contacts are included or excluded).
Since you can only use test segments, you cannot use this functionality to measure the marketing effectiveness of your program, or of one path over another. For that, you can use the A/B testing function when the program is live.
Once you launch a program, your options for changing it become limited. Since you can stop a test, edit the program and test it again as many times as you like, testing is an important step for identifying weak points and improving them before you go live.
Which programs can be tested
You can only test programs that start with one of the following transactional entry points:
- Form
- Data change
- New contact
- External event
In addition to this, the program must be in the state In design.
If you want to test a program that starts with the node Entry from another program, you must launch that program and then set the original program (the one that sends contacts into it) to In testing. You can then send test contacts through both programs and test the whole journey.
Exceptions
Before testing a program, please take the following limitations into consideration.
- You cannot test a program after it has been launched. For tips on this scenario, see below.
- You cannot test a program that starts with the Data change or New contact entry point by importing contacts. In such cases, you can add contacts manually on the Contacts menu > Add contact page, or use the API (see the sketch after this list).
- The Ignore opt-in feature does not work in Test mode. So, if you would like to test programs starting with the Form or External event transactional entry points, you can send campaigns only to contacts whose opt-in status is TRUE.
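If you want to script the API route, here is a minimal Python sketch. It assumes the Emarsys v2 REST API with WSSE authentication, that field 3 holds the email address and field 31 the opt-in flag (1 = TRUE), and it uses a placeholder event ID for the External event trigger; verify all of these against your own account and the API documentation before relying on them.

```python
import base64
import hashlib
import json
import os
import urllib.request
from datetime import datetime, timezone

API_BASE = "https://api.emarsys.net/api/v2"
API_USER = "customer001"      # placeholder API user name
API_SECRET = "your-secret"    # placeholder API secret

def wsse_header() -> str:
    """Build the X-WSSE UsernameToken header used by the Emarsys API."""
    nonce = os.urandom(16).hex()
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    # Base64 of the hex SHA1 digest of nonce + created + secret.
    digest = base64.b64encode(
        hashlib.sha1((nonce + created + API_SECRET).encode()).hexdigest().encode()
    ).decode()
    return (f'UsernameToken Username="{API_USER}", PasswordDigest="{digest}", '
            f'Nonce="{nonce}", Created="{created}"')

def api_post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the API and return the decoded response."""
    req = urllib.request.Request(
        API_BASE + path,
        data=json.dumps(payload).encode(),
        headers={"X-WSSE": wsse_header(), "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Create a test contact with opt-in set to TRUE (field 3 = email,
# field 31 = opt-in, 1 = opted in -- check the field IDs in your account).
print(api_post("/contact", {"3": "test.user@yourcompany.com", "31": 1}))

# Trigger an External event entry point for that contact.
# EVENT_ID is a placeholder; look up the real ID in your account.
EVENT_ID = 1234
print(api_post(f"/event/{EVENT_ID}/trigger",
               {"key_id": "3", "external_id": "test.user@yourcompany.com"}))
```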
Which programs cannot be tested
You cannot test programs that start with one of the following entry points:
- Target segment
- Recurring filter
- Batch email
- Recurring batch email
- Anniversary
- On auto-import
How program testing works
The test segment
Program testing is performed by allowing a specific set of contacts to enter and interact with the program. These contacts must already be in a segment. When you set a program to In testing, your first task is to select this segment.

Only email addresses in this test segment will be processed by the program during the test.
This is to ensure that real contacts do not accidentally enter a program while it is being tested. You can use either the entire segment, or individual contacts in it, to trigger the entry criteria and enter the program. Only the first 50 contacts in your test segment will be taken into account. If it contains more, the extra contacts will not be processed by the program.
We recommend creating a test segment from contacts within your own organization (e.g. a segment where the email address ends in @yourcompany.com).
To change your test segment, stop the test, then select another segment.
Please note that if you stop the test, all progress through the program is reset.
How does testing affect reporting?
Contacts passing through the program during the test do not show up in the program summary. On the other hand, all messages sent, opened and clicked will show up in the respective email reporting pages. This is because your program may rely on this data, so we need to make sure that it is recorded even during the test.
As long as you test with a small segment, your test responses should not have any significant impact on program reporting once the program has been live for a few days.
Testing a program
To test a program, proceed as follows:
1. Click Program is in design to display your program options.

2. Click Test and choose your test segment. Then click Start test.
At this point your program will be validated. Any errors that would prevent the program from being active will be displayed and you must correct them before you can continue.
When your program is in the state In testing, you can begin to test it by using the contacts in the test segment to interact with the program as any normal customer would.
For example, if your program starts with a Form node, then you would have to register a test contact using that form and then check that the test contact received the messages you would expect them to receive.
Wait nodes are still active during testing, but you can push contacts through them immediately by clicking the Fast Forward badge that appears on the node.

Testing a program starting with the New contact entry point
To test a program that starts with the New contact entry point, proceed as follows:
1. Create your test segment.
- When adding contacts to your test segment, use email addresses that are not already in your contact database in Emarsys.
- Make sure that the contacts in the test segment meet all the criteria set in your program, otherwise they will be filtered out before they can pass through it.
2. To test your program, click Program is in design in the top-right corner, select Test, choose your test segment from the drop-down, then click Start test.
3. On your website, sign up with the email addresses that you specified in your test segment. As a result, these users will be newly created in Emarsys.
4. Open the program being tested to check how your test contacts are passing through it.
To push your contacts through Wait nodes immediately, click the Fast Forward badge that appears on the specific node.
If you want to test a program after launch
Once a program has been launched, you cannot put it into test mode any more. Any changes you make can fundamentally alter the nature of the program and make before-and-after comparisons meaningless. They can also adversely affect the experience of contacts already inside the program. We can offer two tips on testing programs after they have been launched:
1) Test a new program, then swap the entry points
If you really want to test major changes to a program structure, you can copy the program, modify and test the copy, and then switch the entry point over to the copy once you are happy with the results. Before switching over, we recommend that you first finish the old version and only then activate the new one, so that you avoid having two active programs with the same marketing goal.
Contacts who have already entered the original program will still proceed through it; after you activate the new program, new contacts will enter the new version.
2) Pause the program and test a copy of it
If you don’t want to ignore contacts already in a program (e.g. those queued at a Wait node), then your best option is to pause the program, make the changes you want, then copy the program and test the copy. Once you are happy with the result, you can resume the original program with the new workflow.
We recommend pausing programs for no longer than 30 minutes. If a large number of contacts have queued up, this can affect performance when you resume the program. See Pausing programs.
The A/B Splitter node
The A/B Splitter node is a great way to test minor improvements in your program while it is live, or to test different messages (or even channels) against each other. In this way you can continually experiment with new ideas and keep optimizing your strategies and improving your customer journey.
You decide how big your test groups are, and how big your control group is, by assigning percentages to each splitter node.
In the example below we are testing two variations of an email with 10% of the launch list each, while the remaining 80% receive the original version.

When you feel that you have tested enough and want to choose one path over the other, increase the preferred path to 100% and reduce the others to 0%. All future contacts will now receive the preferred version.
Before you make your final decision, you should consider whether the results are statistically meaningful. The key questions to bear in mind are:
- Was the sample group large enough?
- Are the differences between the various paths really significant (i.e. would you get the same result 19 times in 20 similar tests)?
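The second question is the classic 95% confidence threshold ("19 times in 20"). As a rough offline check (this is not an Emarsys feature), you can run a standard two-proportion z-test on the click or conversion counts of two paths. The sketch below uses hypothetical numbers:

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_a / n_a - conv_b / n_b) / se
    # Convert the z-score to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: 120 clicks from 1,000 contacts on path A
# versus 95 from 1,000 on path B.
print(two_proportion_p_value(120, 1000, 95, 1000))  # ~0.07: not yet significant at 0.05
```

In this example the lift looks promising but does not clear the 0.05 bar, so the test should keep running.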
About how we assign contacts to paths
You might notice at first that the numbers of contacts passing through each splitter do not exactly correspond to their respective percentages. You’ll be happy to know that there is a very good reason for this…
For statistical methods to work well, we need to make sure that we eliminate any effects that could skew the results. For batch emails this is easy – we simply divide the launch list randomly between the paths. With an Automation Center program it is a bit more complicated, since contacts are passing through one by one, and we do not know beforehand how many contacts will pass through the nodes before the test ends.
Because of this, the only way we can make sure that we don’t skew the results is by randomly assigning each individual contact to one of the paths according to their relative probability. And probability being what it is, it takes a while before the distribution begins to settle down into a stable pattern. It may take several thousand contacts to pass through before the differences become too small to notice. So be patient, wait until your test is stable, and rest happy in the knowledge that your A/B tests are scientifically valid.
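To get a feel for why the distribution needs time to settle, here is a small illustrative sketch (not the actual assignment code) of independent weighted assignment, using the 10%/10%/80% split from the example above:

```python
import random
from collections import Counter

paths = ["Version A", "Version B", "Original"]
weights = [10, 10, 80]  # the splitter percentages from the example above

# Each contact is assigned independently as they arrive, so small samples
# can drift noticeably from the configured split; large ones converge.
for n in (50, 5000):
    counts = Counter(random.choices(paths, weights=weights, k=n))
    print(n, {p: round(100 * counts[p] / n, 1) for p in paths})
```

With 50 contacts the observed split can easily be several percentage points off; with 5,000 it lands very close to 10/10/80.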