The Value Control reporting page is located in the Analysis menu. It gives you a summary of all the campaign evaluations over the past 30 days and lists all the campaigns that have been launched using this feature.
You can activate Value Control for campaigns in the email editor.
Aggregated results for the last 30 experiments
This section presents an aggregated summary of the last 30 campaigns that used Value Control, giving you a statistically relevant overview of the value of your email strategy.
The figures represent the following metrics:
- Purchase rate uplift – The number of additional purchases that you would have missed out on, had the campaigns not been sent.
- Revenue uplift – The additional revenue that you would have missed out on, had the campaigns not been sent.
- Revenue per purchase uplift – The change in average order value generated by your email campaigns.
- Engagement rate uplift – The additional visits to your web shop that you would have missed out on, had the campaigns not been sent.
This table lists (up to) the last 30 campaigns sent using Value Control.
Each campaign is shown with the size of the test and control groups and the comparisons for the following metrics:
- Purchase rate – The % of contacts who made a purchase.
- Revenue per purchase – The average order value of a purchase from that group.
- Revenue per person – The average revenue spread over all contacts in the group.
- Total additional revenue – The difference in revenue between the test group and the control group.
- Engagement rate – The % of contacts who visited the website.
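The per-group metrics above can be sketched as follows. This is an illustrative example only, not the actual Value Control implementation; all figures and names are hypothetical.

```python
# Illustrative sketch (not the actual Value Control implementation):
# how the per-group metrics above could be derived from raw counts.

def group_metrics(contacts, purchasers, revenue, visitors):
    """Compute the four per-group metrics. All inputs are hypothetical."""
    return {
        "purchase_rate": purchasers / contacts,        # % of contacts who made a purchase
        "revenue_per_purchase": revenue / purchasers,  # average order value
        "revenue_per_person": revenue / contacts,      # revenue spread over all contacts
        "engagement_rate": visitors / contacts,        # % of contacts who visited the website
    }

# Hypothetical 90/10 split between test and control groups
test = group_metrics(contacts=9000, purchasers=450, revenue=22500.0, visitors=2700)
control = group_metrics(contacts=1000, purchasers=40, revenue=1800.0, visitors=250)

# Total additional revenue: the revenue-per-person difference, scaled
# up to the test group's size.
additional_revenue = (test["revenue_per_person"] - control["revenue_per_person"]) * 9000
```

In this hypothetical example the test group converts at 5% against the control group's 4%, and the campaign is credited with the revenue-per-person difference across the whole test group.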
The Result column gives you a simple visual indication of the campaign's effectiveness.
Click a campaign to open its Value Control report.
The campaign reporting screen for Value Control is split up into three sections.
Here you have the details of when the campaign was launched and how many contacts were in the test and control groups, as well as an overview of the results.
Click the arrow to the left of the campaign title to return to the Email Analysis overview page.
You also have three more controls available in the upper right corner:
- View campaign report – Open the standard email analysis Results Summary page for that campaign.
- Calculate results for a segment – You can select a segment from the list and Value Control will add the data for that segment to the Detailed Results table (naturally, this only applies to contacts who are both in that segment and on the launch list). Please note that this will not refresh the main campaign summary.
- Save as contact list – This creates two contact lists, one for the test group and one for the control group. These will appear on the Contact Lists page with the naming convention ‘campaign_name – Test group’ and ‘campaign_name – Control group’.
2. Comparison Charts
Here you can see the metrics for the test and control groups side by side, as well as total figures for the impact that can be attributed to this campaign alone. For each chart the statistical confidence level is indicated, as well as the overall effect (positive or negative).
The difference between the two columns shows how much the campaign has affected your customers’ normal engagement and purchase patterns, and whether or not it was worth sending.
Above each chart you can see the difference between the two groups in absolute values (the % figures are percentage points).
Value Control also calculates whether or not each comparison is statistically relevant, and shows if the figures are above or below the 95% confidence level threshold.
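As a rough illustration of how such a check can work, a two-proportion z-test is one common way to decide whether a difference in purchase rates clears a 95% confidence threshold. The exact statistical method Value Control uses is not described here, and all figures below are hypothetical.

```python
# Illustrative sketch: a two-proportion z-test, one common way to test
# whether a rate difference clears a 95% confidence threshold.
# (Not necessarily the method Value Control itself uses.)
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-score and two-sided p-value for the difference of two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical groups: 5% purchase rate in test vs 4% in control
z, p = two_proportion_z(450, 9000, 40, 1000)
significant_at_95 = p < 0.05
```

With a control group of only 1,000 contacts, this hypothetical one-percentage-point uplift does not reach the 95% threshold, which shows why small control groups can leave real effects statistically unconfirmed.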
Value Control is dependent on purchase data uploaded by Smart Insight. There are two scenarios where data load can affect Value Control:
- If there is no Smart Insight data loaded in the seven days prior to an experiment, the experiment will be cancelled in advance.
- If there is no data uploaded during the experiment, you will be informed that the experiment could not be conducted.
2. What does “Statistically relevant” mean?
A key concept when working with Value Control is how we determine whether the results are statistically relevant. Based on the distribution of the underlying data per contact and the variance of behavior between contacts, we establish the confidence level of the results.
A confidence level of 95%, for example, means that if you were to measure the effectiveness of the same campaign with 100 similar launch lists, you would expect to get the same result in at least 95 of the evaluations.
If the behavior is very “noisy”, i.e. with high variance in relation to the measured difference, the confidence level will usually be lower, and vice versa. These indicators are there to help you decide whether or not to take action based on these results.
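The effect of noise on confidence can be sketched numerically: with the same measured uplift, higher per-contact variance widens the standard error and shrinks the z-score, lowering the confidence level. The figures below are hypothetical.

```python
# Illustrative sketch: the same measured uplift is less conclusive when
# per-contact behavior is noisy (high variance). Hypothetical figures only.
from math import sqrt

def z_score(uplift, variance, n):
    """z for a mean difference, assuming two equal-variance groups of size n."""
    se = sqrt(2 * variance / n)  # standard error of the difference in means
    return uplift / se

quiet = z_score(uplift=0.7, variance=4.0, n=1000)    # low-variance behavior
noisy = z_score(uplift=0.7, variance=100.0, n=1000)  # same uplift, high variance

# quiet clears the ~1.96 threshold for 95% confidence; noisy does not.
```

The identical 0.7 uplift is decisive in the quiet case but inconclusive in the noisy one, which is exactly why the confidence indicators exist: they tell you whether the measured difference stands out from the background variance.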