Conversion is the Goal, but Conversion Rate Isn’t Always the Critical KPI

Understanding the importance of widget engagement in driving widget conversion.

Conversion rate (CVR) is a key metric for most e-commerce businesses because a higher rate translates directly into more revenue and higher revenue per visit (RPV). However, this relationship does not always hold for search or recommendation widgets, where conversion is downstream of engagement (clicks). Consequently, conversion rate is typically a secondary KPI to click rate when optimizing widget performance. Here’s why.

First, it’s crucial to recognize that site conversion rate is based on visits. In simple terms, site conversion rate = conversions / visits. A higher conversion rate generally leads to higher revenue. However, search and recommendation widgets aren’t necessarily tied to visits, as each visit doesn’t guarantee a widget view or click. Therefore, the widget conversion rate does not use ‘visits’ as its denominator, unlike the site conversion rate.

Most platform implementations track widget views, clicks, and purchases, reporting these events at the widget level. Therefore, widget conversion rate is effectively a click conversion rate: widget conversion rate = widget conversions / widget clicks. Unfortunately, many platforms fail to clearly identify or label this crucial distinction, making it easy to misinterpret optimization test results.
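To make the denominator difference concrete, here’s a minimal Python sketch using hypothetical event counts (all figures below are illustrative assumptions, not real platform data):

```python
# Hypothetical event counts for one reporting period
visits = 100_000          # total site visits
site_conversions = 2_500  # orders attributed to the site overall

widget_views = 40_000     # times the widget was rendered
widget_clicks = 6_000     # clicks on widget items
widget_conversions = 330  # orders attributed to a widget click

# Site conversion rate uses visits as the denominator
site_cvr = site_conversions / visits             # 2.50%

# Widget conversion rate uses clicks, not visits,
# so it is effectively a click conversion rate
widget_cvr = widget_conversions / widget_clicks  # 5.50%

print(f"Site CVR:   {site_cvr:.2%}")
print(f"Widget CVR: {widget_cvr:.2%}")
```

The two rates are not comparable: one measures how well the site turns visits into orders, the other how well the widget turns clicks into orders.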

When a Lower Conversion Rate Can Still Be a Win

Data democracy aims to make more data accessible to a broader audience, enabling better data-driven decisions across an organization. This often places key decision-making in the hands of those without technical expertise, who may misinterpret the data and make honest but misguided decisions.

For instance, a client once aimed to optimize conversion and revenue on their product detail page (PDP). I suggested a strategy to achieve this goal, and they included my recommendation in their test plan. Weeks later, they ran the test and designated conversion rate as the critical KPI. After two weeks, the split-test results indicated that my recommended variation had a conversion rate roughly 0.3 percentage points lower than the control’s; the two rates were statistically tied. As a result, they opted to keep the control, as I usually recommend favoring the control when there’s no valid performance reason to switch. However, this was a significant mistake. Here’s why.

Remember, widget conversion rate is based on clicks, not visits. Therefore, if clicks increase and conversion rate remains stable, both conversions and revenue can still rise! Let’s explore how this played out in the split test.
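Here’s a minimal sketch with round, hypothetical numbers (the 5.5% click conversion rate and $100 AOV are assumptions for illustration, not test data) showing how a flat CVR combined with more clicks still yields more conversions and revenue:

```python
# Hypothetical: the click conversion rate is identical in both arms,
# but the variation drives more clicks
cvr = 0.055   # 5.5% click conversion rate in both arms
aov = 100.00  # average order value, held constant

control_clicks = 10_000
variation_clicks = 17_000  # the widget change lifts engagement

for label, clicks in [("Control", control_clicks), ("Variation", variation_clicks)]:
    conversions = clicks * cvr
    revenue = conversions * aov
    print(f"{label}: {conversions:,.0f} conversions, ${revenue:,.0f} revenue")

# Control:   550 conversions, $55,000 revenue
# Variation: 935 conversions, $93,500 revenue -- same CVR, 70% more revenue
```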

Here’s the test data the client focused on, since they had designated conversion rate as their critical test metric.

Test Data

Test Split        Control    Variation
Conversion Rate   5.78%      5.47%

It’s easy to see why a cursory review of these results might suggest maintaining the existing strategy, as there was no gain in conversion rate. However, the full picture is more revealing.

Expanded Test View

Test Split        Control    Variation
Clicks            12,193     20,549
Conversion Rate   5.78%      5.47%

This broader perspective reveals that engagement (clicks) rose by nearly 70% in the test variation, while conversion rate held statistically steady. How did this impact revenue?

Complete Test Results

Test Split        Control     Variation
Clicks            12,193      20,549
Conversion Rate   5.78%       5.47%
Conversions       705         1,152
AOV               $100.48     $101.54
Revenue           $70,838     $116,794

Revenue increased by 65% with the widget test variation! Extrapolated over a full year, the results suggested the test variation could generate an additional $1,199,536 in annual revenue from the PDP widget. The test was not merely a tie; it was a significant win!
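To verify the arithmetic, here’s a minimal Python sketch that recomputes the lift from the reported totals. It assumes the annual projection simply multiplies the two-week revenue delta by 26; any small difference from the figure above reflects rounding in the reported totals.

```python
# Reported two-week revenue totals from the split test
control_revenue = 70_838
variation_revenue = 116_794

delta = variation_revenue - control_revenue  # $45,956 over two weeks
lift = delta / control_revenue               # relative revenue lift

# Assumption: annualize by multiplying the two-week delta by 26
annualized_delta = delta * 26

print(f"Revenue lift:     {lift:.0%}")               # 65%
print(f"Annualized delta: ${annualized_delta:,}")    # ~$1.19M
```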

When I shared these results and the revenue increase with the client, they were astonished by the test’s true performance. They explained that conversion rate had always been their key test metric and admitted they didn’t fully understand how widget performance was measured. They nearly overlooked a substantial win.

The Moral of the Story

Be careful what you ask for. Ensure you fully understand what you’re testing, how it’s measured, and how to interpret the results. That way, you won’t leave money on the table.