Set up A4T reports in Analysis Workspace for Auto-Target activities
The Analytics for Target (A4T) integration for Auto-Target activities uses the Adobe Target ensemble machine learning (ML) algorithms to choose the best experience for each visitor based on their profile, behavior, and context, all while using an Adobe Analytics goal metric.
Although rich analysis capabilities are available in Adobe Analytics Analysis Workspace, a few modifications to the default Analytics for Target panel are required to correctly interpret Auto-Target activities, due to differences between experimentation activities (manual A/B Test and Auto-Allocate) and personalization activities (Auto-Target).
This tutorial walks through the recommended modifications for analyzing Auto-Target activities in Analysis Workspace, which are based on the following key concepts:
- The Control vs Targeted dimension distinguishes Control experiences from those served by the Auto-Target ensemble ML algorithm.
- Visits should be used as the normalizing metric when viewing experience-level breakdowns of performance. In addition, Adobe Analytics’ default counting methodology might include visits where the user does not actually see activity content, but this default behavior can be modified by using an appropriately scoped segment (details below).
- The Adobe Target ML models use visit-scoped attribution (a Visit lookback window on the attribution model) during their training phase, so the same (non-default) attribution model should be used when breaking down the goal metric.
Create the A4T for Auto-Target panel in Analysis Workspace
To create an A4T for Auto-Target report, either start with the Analytics for Target panel in Analysis Workspace, as shown below, or begin with a freeform table. Then make the following selections:
- Control Experience: You can choose any experience; you will override this choice later. Note that for Auto-Target activities, the control experience is really a control strategy, which is either to (a) randomly serve among all experiences, or (b) serve a single experience (this choice is made at activity creation time in Adobe Target). Even if your activity used choice (b) and designated a specific experience as the control, you should still follow the approach outlined in this tutorial for analyzing A4T for Auto-Target activities.
- Normalizing Metric: Select Visits.
- Success Metrics: Although you can select any metrics on which to report, you should generally view reports on the same metric that was chosen for optimization during activity creation in Target.
Figure 1: Analytics for Target panel setup for Auto-Target activities.
Use the Control vs Targeted dimension to compare the Target ensemble ML model to your control
The default A4T panel is designed for classic (manual) A/B Test or Auto-Allocate activities, where the goal is to compare the performance of individual experiences against the control experience. In Auto-Target activities, however, the first-order comparison should be between the control strategy and the targeted strategy. In other words, the goal is to determine the lift in overall performance of the Auto-Target ensemble ML model over the control strategy.
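For reference, lift is conventionally defined as the relative difference between the targeted and control conversion rates (this is a general definition, not a setting you configure in the panel):

$$
\text{Lift} \;=\; \frac{\text{CR}_{\text{targeted}} - \text{CR}_{\text{control}}}{\text{CR}_{\text{control}}}
$$

where CR denotes a conversion rate: the goal metric divided by the normalizing Visits metric.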
To perform this comparison, use the Control vs Targeted (Analytics for Target) dimension. Drag and drop it to replace the Target Experiences dimension in the default A4T report.
Note that this replacement invalidates the default Lift and Confidence calculations on the A4T panel. To avoid confusion, you can remove these metrics from the default panel, leaving the following report:
Figure 2: The recommended baseline report for Auto-Target activities. This report has been configured to compare targeted traffic (served by the ensemble ML model) against your control traffic.
Add experience-level breakdowns of metrics
To gain further insight into how the ensemble ML model is performing, you can examine experience-level breakdowns of the Control vs Targeted dimension. In Analysis Workspace, drag the Target Experiences dimension onto your report, then break down each of the control and targeted dimensions separately.
Figure 3: Breaking down the Targeted dimension by Target Experiences
An example of the resulting report is shown here.
Figure 4: A standard Auto-Target report with experience-level breakdowns. Note that your goal metric might be different, and your control strategy might have a single experience.
Why “Visits” is the correct normalizing metric for Auto-Target activities
When analyzing an Auto-Target activity, always choose Visits as the default normalizing metric. Auto-Target personalization selects an experience for a visitor once per visit (formally, once per Target session), which means that the experience shown to a visitor can change on every single visit. Thus, if you use Unique Visitors as the normalizing metric, the fact that a single user might end up seeing multiple experiences (across different visits) would lead to confusing conversion rates.
A simple example demonstrates this point: consider a scenario in which two visitors enter a campaign that has only two experiences. The first visitor visits twice. They are assigned to Experience A on the first visit, but Experience B on the second visit (due to their profile state changing on that second visit). After the second visit, the visitor converts by placing an order. The conversion is attributed to the most recently shown experience (Experience B). The second visitor also visits twice, and is shown Experience B both times, but never converts.
Let us compare visitor-level and visit-level reports:
Table 1: Example comparing visitor-normalized and visit-normalized reports for a scenario in which decisions are sticky to a visit (and not visitor, as with regular A/B testing). Visitor-normalized metrics are confusing in this scenario.
As the table shows, the visitor-level numbers are incongruent: although there are two unique visitors in total, that total is not the sum of the unique visitors to each experience (Experience A has one unique visitor and Experience B has two). The visitor-level conversion rates are not necessarily wrong, but when comparing individual experiences, visit-level conversion rates make much more sense. Formally, the unit of analysis (visits) matches the unit of decision stickiness, which means that experience-level breakdowns of the metrics can be added and compared.
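To make the arithmetic concrete, here is a minimal Python sketch with the two-visitor scenario above hard-coded as illustrative data. It computes conversion rates under both normalizations:

```python
from collections import defaultdict

# Each record is one visit: (visitor_id, experience_shown, converted_in_visit).
# This reproduces the two-visitor scenario described above.
visits = [
    ("visitor_1", "A", False),  # visit 1: shown Experience A, no conversion
    ("visitor_1", "B", True),   # visit 2: shown Experience B, places an order
    ("visitor_2", "B", False),  # visit 1: shown Experience B, no conversion
    ("visitor_2", "B", False),  # visit 2: shown Experience B, no conversion
]

visit_counts = defaultdict(int)
visitor_sets = defaultdict(set)
conversions = defaultdict(int)

for visitor, experience, converted in visits:
    visit_counts[experience] += 1
    visitor_sets[experience].add(visitor)
    conversions[experience] += int(converted)

for exp in sorted(visit_counts):
    visit_cr = conversions[exp] / visit_counts[exp]
    visitor_cr = conversions[exp] / len(visitor_sets[exp])
    print(f"Experience {exp}: "
          f"visit-normalized CR = {visit_cr:.0%}, "
          f"visitor-normalized CR = {visitor_cr:.0%}")

# Visit-normalized: A = 0%, B = 33%; the per-experience visits (1 + 3) sum to the 4 total visits.
# Visitor-normalized: A = 0%, B = 50%; but the per-experience unique visitors (1 + 2)
# do not sum to the 2 total unique visitors, which is why these numbers are confusing.
```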
Filter for actual visits to the activity
The Adobe Analytics default counting methodology for visits to a Target activity might include visits in which the user did not interact with the Target activity. This is due to the way Target activity assignments are persisted in the Analytics visitor context. As a result, the number of visits to the Target activity can sometimes be inflated, which deflates the reported conversion rates.
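As an illustration of how this inflation affects conversion rates, here is a minimal Python sketch with hypothetical visit data. In practice, the filtering is done with the segment described below, not with code:

```python
# Each record is one visit: (visit_id, had_activity_hit, converted).
# had_activity_hit is True only if the visit contained an entry, display/visit
# event, or conversion for the Auto-Target activity. The data is illustrative.
visits = [
    ("v1", True, True),
    ("v2", True, False),
    ("v3", False, False),  # assignment persisted, but no activity hit this visit
    ("v4", False, False),  # assignment persisted, but no activity hit this visit
]

conversions = sum(converted for _, _, converted in visits)
all_visits = len(visits)
activity_visits = sum(had_hit for _, had_hit, _ in visits)

print(f"Unfiltered conversion rate:       {conversions / all_visits:.0%}")       # 25%
print(f"Segment-filtered conversion rate: {conversions / activity_visits:.0%}")  # 50%
```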
If you would prefer to report on visits in which the user actually interacted with the Auto-Target activity (either through entry to the activity, a display or visit event, or a conversion), you can:
- Create a specific segment that includes hits from the Target activity in question, and then
- Filter the Visits metric using this segment.
To create the segment:
- Select the Components > Create Segment option in the Analysis Workspace toolbar.
- Specify a Title for your segment. In the example shown below, the segment is named “Hit with specific Auto-Target activity”.
- Drag the Target Activities dimension to the segment Definition section.
- Use the equals operator.
- Search for your specific Target activity.
- Click the gear icon, then select Attribution model > Instance as shown in the figure below.
- Click Save.
Figure 5: Use a segment such as the one shown here to filter the Visits metric in your A4T for Auto-Target report
Once the segment has been created, use it to filter the Visits metric so that it includes only visits in which the user interacted with the Target activity.
To filter Visits using this segment:
- Drag the newly created segment from the components toolbar, and hover over the base of the Visits metric label until a blue Filter by prompt appears.
- Release the segment. The filter is applied to that metric.
The final panel appears as follows:
Figure 6: Reporting panel with the “Hit with specific Auto-Target Activity” segment applied to the Visits metric. This segment ensures that only visits in which a user actually interacted with the Target activity in question are included in the report.
Ensure the goal metric and attribution are aligned with your optimization criterion
The A4T integration allows the Auto-Target ML model to be trained using the same conversion event data that Adobe Analytics uses to generate performance reports. However, when training the ML models, Target interprets this data under certain assumptions that differ from the default assumptions Adobe Analytics applies during the reporting phase.
Specifically, the Adobe Target ML models use a visit-scoped attribution model. That is, the ML models assume a conversion must happen in the same visit as a display of content for the activity in order for the conversion to be “attributed” to the decision made by the ML model. This is required for Target to guarantee timely training of its models; Target cannot wait for up to 30 days for a conversion (the default attribution window for reports in Adobe Analytics) before including it in the training data for its models.
Thus, the difference between the attribution used by the Target models (during training) versus the default attribution used in querying data (during report generation) might lead to discrepancies. It might even appear that the ML models are performing poorly, when in fact the issue lies with attribution.
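The following simplified Python sketch, with hypothetical event data, illustrates how the two attribution assumptions can produce different conversion counts for the same events. The real attribution is computed inside Adobe Analytics and Target; this is only a conceptual illustration:

```python
# Simplified, hypothetical event log for one visitor across two visits.
# Each event: (visit_id, kind, value) where kind is "display" or "conversion".
events = [
    (1, "display", "Experience B"),  # activity content shown in visit 1
    (2, "conversion", 1),            # order placed in visit 2
]

def attributed_conversions(events, visit_scoped):
    """Count conversions credited to the displayed activity content.

    visit_scoped=True  -> visit lookback (what the Target ML models use):
                          a conversion counts only if a display occurred
                          in the same visit.
    visit_scoped=False -> a lenient stand-in for the default reporting
                          lookback, where displays in earlier visits
                          still receive credit.
    """
    display_visits = {v for v, kind, _ in events if kind == "display"}
    total = 0
    for visit, kind, value in events:
        if kind != "conversion":
            continue
        if visit_scoped and visit not in display_visits:
            continue  # conversion in a visit with no display: no credit
        total += value
    return total

print(attributed_conversions(events, visit_scoped=True))   # 0
print(attributed_conversions(events, visit_scoped=False))  # 1
```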
The exact metric definition and attribution settings depend on the optimization criterion you specified during activity creation.
Target-defined conversions, or Analytics metrics with Maximize Metric Value Per Visit
When the metric is a Target conversion, or an Analytics metric with Maximize Metric Value Per Visit, the goal metric definition allows for multiple conversion events to occur in the same visit.
To view goal metrics that have the same attribution methodology used by the Target ML models, follow these steps:
- Hover over the goal metric's gear icon.
- From the resulting menu, scroll to Data settings.
- Select Use non-default attribution model (if not already selected).
- Click Edit.
- Select Model: Participation, and Lookback window: Visit.
- Click Apply.
These steps ensure that your report attributes the goal metric to the display of an experience if the goal metric event happened at any time ("participation") in the same visit in which the experience was shown.
Analytics metrics with Unique Visit Conversion Rates
Define the visit with positive metric segment
In the scenario where you selected Maximize the Unique Visit Conversion Rate as the optimization criterion, the correct definition of the conversion rate is the fraction of visits in which the metric value is positive. You can achieve this by creating a segment that filters down to visits with a positive value of the metric, and then using it to filter the Visits metric.
- As before, select the Components > Create Segment option in the Analysis Workspace toolbar.
- Specify a Title for your segment. In the example shown below, the segment is named "Visits with an order".
- Drag the base metric you used in your optimization goal into the segment. In the example shown below, we use the orders metric, so that the conversion rate measures the fraction of visits where an order is recorded.
- At the top left of the segment definition container, select Include Visit.
- Use the is greater than operator, and set the value to 0. Setting the value to 0 means that this segment includes visits where the orders metric is positive.
- Click Save.
Figure 7: The segment definition filtering to visits with a positive order. Depending on your activity’s optimization metric, you must replace orders with an appropriate metric
Apply this segment to the activity-filtered Visits metric
This segment can now be used to filter to visits that have a positive number of orders and that also contain a hit for the Auto-Target activity. The procedure for filtering a metric is the same as before; after applying the new segment to the already filtered Visits metric, the report panel should look like Figure 8.
Figure 8: The report panel with the correct unique-visit conversion metric: the number of visits where a hit from the activity was recorded, and where the conversion metric (orders in this example) was non-zero.
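Expressed as a formula (using descriptive labels rather than Analytics component names), the resulting unique-visit conversion metric is:

$$
\text{Unique-visit conversion rate} \;=\; \frac{\#\{\text{visits with an activity hit and a positive goal metric}\}}{\#\{\text{visits with an activity hit}\}}
$$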
Final step: Create a conversion rate that combines the corrections above
With the modifications to the Visits and goal metrics in the preceding sections, the final modification to your default A4T for Auto-Target reporting panel is to create conversion rates that form the correct ratio: the corrected goal metric divided by the appropriately filtered Visits metric.
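In other words (again using descriptive labels rather than Analytics component names), the calculated metric computes:

$$
\text{Conversion rate} \;=\; \frac{\text{goal metric (Participation model, Visit lookback)}}{\text{Visits filtered by the activity segment}}
$$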
Do this by creating a Calculated Metric using the following steps:
- Select the Components > Create Metric option in the Analysis Workspace toolbar.
- Specify a Title for your metric. For example, “Visit-corrected Conversion Rate for Activity XXX.”
- Select Format = Percent and Decimal Places = 2.
- Drag the relevant goal metric for your activity (for example, Activity Conversions) into the definition, and use the gear icon on this goal metric to adjust the attribution model to (Participation|Visit), as described earlier.
- Select Add > Container from the upper right of the Definition section.
- Select the division (÷) operator between the two containers.
- Drag your previously created segment for this specific Auto-Target activity (named "Hit with specific Auto-Target activity" in this tutorial) into the new container.
- Drag the Visits metric into the segment container.
- Click Save.
The complete calculated metric definition is shown here.
Figure 9: The visit-corrected and attribution-corrected conversion rate metric definition. (Note that this metric depends on your goal metric and activity. In other words, the metric definition is not reusable across activities.)
Summary: Final sample Analysis Workspace panel for Auto-Target reports
Combining all of the steps above into a single panel, the figure below shows the complete recommended report for Auto-Target A4T activities. The report incorporates all of the nuances and recommendations discussed in this tutorial, uses the same counting and attribution methodology that the Target ML models use to optimize your goal metric, and most closely matches the counting methodology used in Auto-Target activities that rely on traditional Target reporting.
Figure 10: The final A4T Auto-Target report in Adobe Analytics Workspace, which combines all the adjustments to metric definitions described in the previous sections of this tutorial.