Road trips are one of the items you’ll find on most people’s bucket lists, and I think everyone should get on the road at least once in their lifetime.

Imagine driving a car on that planned road trip; you want to ensure you’re on the right path and not wasting any time or gas. So, you rely on your trusty GPS and Maps application to guide you, track your progress, and inform you if you miss a turn or if there’s congestion ahead.

A/B testing goals are like the GPS for your website optimization journey or business goals.

Just like you wouldn’t drive without a destination in mind unless getting lost is your primary goal,  you shouldn’t also conduct A/B testing without clear goals to track your progress.

By setting and tracking specific goals, you have a clear direction on what you want to achieve, and you can also measure the performance and success of your test.

In this article, I’ll talk about the different A/B testing goals to track, but before then, let’s discuss why tracking goals matters in A/B testing.

Importance of Tracking Goals in A/B Testing

Tracking goals is essential in A/B testing because it helps you determine the success of your experiments. By tracking how visitors interact with each test variation, you can determine which version performs better based on your goals, key performance indicators, and success metrics.

Goals are specific actions you want your visitors to take on your website or app. These could include making a purchase, filling out a form, viewing a page, clicking a particular button, and so on.

You need to track goals to have a way to measure the effectiveness of your experiments. Without them, you could make changes to your website or app that don’t improve performance or lead to the desired outcomes.

In addition, tracking goals can help you identify opportunities for optimization and improvement. By analyzing the data collected from your experiments, you can identify areas where visitors are dropping off or engaging more.

This can help you make more informed decisions about optimizing your website or app to meet the needs of your visitors better and achieve your business goals.

For example, in FigPii, an alternative to Google Optimize and, according to Google Lighthouse, the fastest web A/B testing engine, you can set up a split test in fewer than five steps and set your goals at the same time.

FigPii is the only web A/B testing platform that allows you to track multiple A/B testing goals; using the “Add Another Goal” button in the image below, you can add as many goals as you like to your A/B test. However, your experiment can only have one primary goal.

Additional Benefits of Using FigPii over other A/B Testing Tools

Being able to set multiple goals is one of the many benefits of FigPii over other A/B testing tools. If you currently use Google Optimize and are wondering which A/B testing tool to move all your active experiments and personalizations to, look no further than FigPii.

Let’s take a look at some of the benefits of migrating to FigPii from Google Optimize or any other testing platform.

  1. Free Migration Service
  2. Fast Speed
  3. Built-in Sample Size and Test Duration Calculator
  4. Integration With Other Tools
  5. Multi-armed Bandit Model
  6. 24/7 Customer Support
  7. Perfect for Limited-time Testing Campaigns
  8. User-friendly Interface

How to Choose A/B Testing Goals to Track? 

When you’re ready to test, how do you choose the goals to track? Are your A/B testing goals backed by research and observations, or by random guesses?

Choosing the right A/B testing goals to track plays a significant role in the outcome (success/failure) of your experimentation and can also determine the trajectory of your business post-A/B testing.

Say you observe that customers are abandoning their carts just when they are about to make payment, or a particular web page that is supposed to keep users engaged has a high bounce rate. When conducting A/B testing in these two scenarios, your goals would be different and specific to the problems you are trying to solve.

Let’s discuss pointers to help you choose the A/B testing goals to track during experimentation.

  1. Understand Your Business Objectives

Your business objectives determine the relevance or irrelevance of an A/B testing goal. If your goals do not align with these objectives, then there’s no point in conducting a split test in the first place.

By understanding these objectives at the time of the test, it becomes easier to determine the changes to make and the metrics to track during your A/B test. In addition, if your business goals are well outlined, you can also recognize the changes that are likely to have a significant impact when testing.

  2. Identify the Problem You’re Trying to Solve

In most cases, A/B testing addresses a problem or explores an improvement opportunity. So, identify the problem or opportunity you want to address through A/B testing.

Depending on the kind of business you run, whether B2B, SaaS, or ecommerce, the problem could be anything from increasing website traffic to boosting newsletter signups or new account creation.

  3. Choose a Primary Metric

After you’ve identified the problem or opportunity you want to address with your A/B test, the next thing is to choose a primary metric. A primary metric is the main indicator of the success or failure of your A/B test. The main purpose of a primary metric is to ensure that other metrics do not make you lose sight of what’s important.

For example, if your primary metric is conversion rate, an increase in click-through rate and a reduction in bounce rate are good things, but they should be treated as secondary metrics.

In one of his live videos, Khalid Saleh, the CEO of Invesp, had this to say about choosing the metrics to track during A/B Testing.

“The metrics you track during A/B testing will depend on the page that you are trying to optimize and the different steps visitors must take to achieve the main goal of the test.”

  4. Consider Secondary Metrics

After determining your primary metric, you can also define your secondary metrics. In most cases, your secondary metrics should be those that directly relate to your primary metric because they can provide additional insights into the performance of the test.

If the primary metric you chose is conversion rate, consider secondary metrics such as click-through rate, engagement metrics, form submissions, etc.

Now that you know the factors that can help you determine your A/B testing goals, let’s discuss some examples of goals you can track in an A/B test.

Examples of A/B Testing Goals To Track

  1. Conversion Rate

In my time at FigPii, and after spending a reasonable amount of time in the marketing and A/B testing spaces on LinkedIn, I’ve found that conversion rate is one of the most talked-about testing goals and the one most clients want to track.

Conversion rate is so important that there are companies (conversion rate optimization agencies) whose sole purpose is to help you optimize your website’s conversion rate through A/B testing and other strategies.

This goes to show that conversion rate is an essential metric for businesses. It measures how many website visitors take a desired action, such as purchasing, filling out a form, or subscribing to a service.

Every business wants to get more customers, convert website visitors into paying customers, increase sales, and generate more revenue.

To accurately track the conversion rate during an A/B test, it’s necessary to clearly define your test goals, metrics, and visitors’ actions that count towards conversion.

If your conversion metric is the number of sign-ups, then button clicks or purchases do not contribute to the conversion rate of that test.
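
To make that concrete, here is a minimal Python sketch, not tied to any particular tool and using made-up visitor data, of how only the defined goal action (sign-ups, in this example) counts toward each variation’s conversion rate:

```python
# Hypothetical visitor log: one record per visitor, listing the actions they took.
visitors = [
    {"variation": "A", "actions": ["button_click", "signup"]},
    {"variation": "A", "actions": ["button_click"]},
    {"variation": "B", "actions": ["purchase"]},            # purchases do NOT count here
    {"variation": "B", "actions": ["signup"]},
    {"variation": "B", "actions": ["signup", "button_click"]},
]

GOAL_ACTION = "signup"  # the only action that counts toward this test's conversion rate

def conversion_rate(variation: str) -> float:
    group = [v for v in visitors if v["variation"] == variation]
    conversions = sum(1 for v in group if GOAL_ACTION in v["actions"])
    return conversions / len(group) if group else 0.0

for variation in ("A", "B"):
    print(f"Variation {variation}: {conversion_rate(variation):.0%} conversion rate")
```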

The image below shows the different variations of the same web page and the differences in conversion rates during the A/B Test.

  2. Click-through Rate (CTR)

Click-through rate (CTR) is the number of clicks an ad receives divided by the total number of times it was shown. It is another critical metric for measuring how effectively a marketing campaign, advertisement, or website drives clicks to a particular page or action.
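
As a quick worked example of that formula, with made-up impression and click counts for two ad variations:

```python
# Hypothetical impression and click counts for two ad variations.
variations = {
    "A": {"impressions": 10_000, "clicks": 230},
    "B": {"impressions": 10_000, "clicks": 310},
}

for name, stats in variations.items():
    ctr = stats["clicks"] / stats["impressions"]   # CTR = clicks / impressions
    print(f"Variation {name}: CTR = {ctr:.2%}")
# Variation A: CTR = 2.30%
# Variation B: CTR = 3.10%
```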

This is an A/B testing goal that businesses can consider tracking if they aim to increase the CTR of their marketing campaigns, landing pages, or email campaigns. After creating the different variations of your test, you can use your desired A/B testing tool to measure their performance and see which generates more clicks over a predetermined time.

When you create the different variations of the web page you are testing, ensure you only change elements that directly affect the click-through rate, such as the call-to-action button, ad placement, copy, and other design elements.

  3. Bounce Rate

The image below shows the different variations and the difference in Bounce rate during the A/B Test.

Bounce rate is another A/B testing goal you can track if you observe that visitors are leaving your website almost as soon as they arrive without taking any further action. A high bounce rate can indicate that your website or a particular webpage is not meeting the needs of your target audience.

A website’s bounce rate is the percentage of people who land on the website and then leave without moving past the initial page or taking action, such as clicking a button, entering their information, or navigating to another page within the same website.

To ensure that the A/B test is accurate, it is important to control factors, such as traffic sources, that could affect the bounce rate.

This means ensuring that the distribution of traffic between the different variations is consistent. If one variation receives more traffic from a particular source than the others do, this could skew your test results.
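
As an illustration of that consistency check, here is a small Python sketch with invented session data that computes the bounce rate per variation and breaks it down by traffic source, so a skewed traffic split is easy to spot:

```python
from collections import defaultdict

# Hypothetical sessions: variation shown, traffic source, and whether the visitor bounced.
sessions = [
    {"variation": "A", "source": "organic", "bounced": True},
    {"variation": "A", "source": "paid",    "bounced": False},
    {"variation": "B", "source": "organic", "bounced": False},
    {"variation": "B", "source": "paid",    "bounced": True},
    # ... more sessions in a real test
]

totals = defaultdict(lambda: {"sessions": 0, "bounces": 0})
for s in sessions:
    key = (s["variation"], s["source"])
    totals[key]["sessions"] += 1
    totals[key]["bounces"] += s["bounced"]      # True counts as 1, False as 0

for (variation, source), t in sorted(totals.items()):
    rate = t["bounces"] / t["sessions"]
    print(f"Variation {variation} / {source}: {t['sessions']} sessions, bounce rate {rate:.0%}")
```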

Since your goal is to reduce the bounce rate, it is also important to ensure that the page versions are identical except for the specific change being tested.

  4. Time on Site

Time on site, commonly known as session duration, is the total time a visitor spends on a website, from when they land on it until they navigate away.

Time on Site = Total Duration of All Sessions / Number of Sessions
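
A quick worked example of that formula, using made-up session durations:

```python
# Hypothetical session durations, in seconds, recorded for one variation.
session_durations = [35, 120, 410, 95, 60]

# Time on site (average session duration) = total duration of all sessions / number of sessions
average_time_on_site = sum(session_durations) / len(session_durations)
print(f"Average time on site: {average_time_on_site:.0f} seconds")  # 144 seconds
```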

When tracking time on site as an A/B testing goal, the aim is to determine which variation of your website leads to greater user engagement and retention. By comparing the time-on-site metrics of two or more variations of your website, you can determine the design elements, content, and other factors that effectively keep users engaged.

However, you should note that time on site is not always a perfect measure of user engagement, as some users may spend a long time on a page simply because they are confused or stuck, while others may quickly find what they need and move on. Using A/B testing tools with heatmaps and session replays, you can avoid this misinterpretation.

These tools show user behavior on your site, so you can see whether a short or long time on site results from confusion or from users quickly finding what they need.

A short time on site does not necessarily mean your website performs poorly. For example, if a visitor spends a short time on your site but still converts or engages with a call to action button, that could indicate a good user and customer experience.

  5. Revenue per Visitor

Revenue per visitor (RPV) measures the amount of money generated for every visitor that comes to your website. It is calculated by dividing the total revenue by the total number of visitors. Ecommerce websites mainly track this metric to evaluate their acquisition costs and efforts and understand what’s working and what isn’t.

Revenue Per Visitor = Total Revenue/Total Number of Visitors
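
For illustration, a minimal sketch of that calculation with hypothetical totals for two variations:

```python
# Hypothetical totals for two variations of a product page.
variations = {
    "A": {"revenue": 12_500.00, "visitors": 4_000},
    "B": {"revenue": 14_200.00, "visitors": 4_100},
}

for name, stats in variations.items():
    rpv = stats["revenue"] / stats["visitors"]   # RPV = total revenue / total visitors
    print(f"Variation {name}: revenue per visitor = ${rpv:.2f}")
# Variation A: revenue per visitor = $3.12
# Variation B: revenue per visitor = $3.46
```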

To track RPV during your tests, you can use tools such as Google Analytics and FigPii, which have analytics features to measure the revenue generated by each visit to your site. After concluding your test, you can determine which variation is more effective at generating revenue.

You can also test different landing page designs, checkout processes, or payment methods to see which version is most effective at converting visitors into paying customers.

Tracking revenue per visitor is a valuable split-testing goal for e-commerce websites, as it can help you identify the most effective methods for increasing sales and revenue.

However, it is important to remember that other metrics, such as customer satisfaction and retention, should also be considered when tracking revenue per visitor. A good user and customer experience will also positively impact your revenue per visitor.

  6. Average Order Value

Like RPV, average order value (AOV) is used mainly by ecommerce stores. It measures the average amount spent every time an order is placed on a website. AOV is calculated by dividing the total revenue generated by the total number of orders.

AOV = Total Revenue/Total Number of Orders

Strategies you can employ to increase AOV include offering product bundles or complementary products, or upselling other items during checkout. Another method is providing free shipping or discounts for larger orders, encouraging customers to increase their order size. During your A/B test, you can experiment with any of these strategies and use an analytics tool to see which generates the highest AOV.

However, it’s also important to consider your customer acquisition cost (CAC) when looking at your AOV during A/B tests. If your AOV is higher than your CAC, then it’s safe to say that your business is making a profit. So, when comparing the results of the different variations you used during your tests, you must also consider the customer acquisition cost for each variation.
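
As a rough sketch of that comparison, here is a small Python example with hypothetical revenue, order, and CAC figures for two variations:

```python
# Hypothetical test results: revenue, orders, and estimated customer acquisition cost (CAC) per variation.
results = {
    "A": {"revenue": 30_000.00, "orders": 400, "cac": 55.00},
    "B": {"revenue": 33_000.00, "orders": 380, "cac": 62.00},
}

for name, r in results.items():
    aov = r["revenue"] / r["orders"]     # AOV = total revenue / total orders
    headroom = aov - r["cac"]            # rough per-order headroom over acquisition cost
    print(f"Variation {name}: AOV = ${aov:.2f}, CAC = ${r['cac']:.2f}, difference = ${headroom:.2f}")
```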

  7. Cart Abandonment Rate

Cart abandonment rate refers to the percentage of online shoppers who add items to their cart but fail to complete the purchase. It is a critical metric for e-commerce businesses looking to improve their sales and revenue.

This is another split-testing goal you can focus on if your website is recording a high cart abandonment rate. Some reasons for cart abandonment include a complicated checkout process, unexpected additional costs, and an ambiguous return policy.

After uncovering potential issues contributing to the cart abandonment rate, you can create different site variations and test design elements of your checkout process, such as button placement and the copy used. During the test, you can also try incentives like free shipping or discounts to see if they encourage customers to complete purchases.

With your A/B testing tool, you can track the variations, the number of abandoned carts, and the percentage of customers who complete their purchases.
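
For example, a minimal sketch, with made-up funnel counts, of how the abandonment rate for each variation could be computed:

```python
# Hypothetical checkout funnel counts per variation.
funnel = {
    "A": {"carts_created": 1_000, "purchases_completed": 280},
    "B": {"carts_created": 1_050, "purchases_completed": 350},
}

for name, f in funnel.items():
    abandoned = f["carts_created"] - f["purchases_completed"]
    abandonment_rate = abandoned / f["carts_created"]
    print(f"Variation {name}: cart abandonment rate = {abandonment_rate:.0%}")
# Variation A: 72%, Variation B: 67%
```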

  8. Engagement Metrics

Engagement metrics measure how well a website, landing page, or marketing campaign resonates with users. Common engagement metrics include likes, comments, shares, click-through rate, pages per session, and conversion rate. These metrics tell you how well or poorly your marketing efforts or website perform.

For example, say you drive traffic to a landing page but realize that users are not engaging with the page’s content or clicking the desired button, and are spending less than five seconds on the page. You can reasonably conclude that the page’s content does not resonate with visitors or drive the needed engagement.

With A/B testing, you can test different variations of this landing page to see which version generates the most engagement, such as clicks on links or buttons or time spent on the page. However, when setting up your A/B test, you must decide in advance which engagement metrics you want to track.

Monitoring engagement metrics as an A/B testing goal is a critical part of improving user engagement and achieving your marketing goals. Through A/B testing and analysis of test results, you can identify the most effective strategies for improving user engagement.

Over To You

It’s important to mention that the A/B testing goals you should track are not limited to those mentioned in this article. Whatever you test, always approach your experiment with a clear set of goals and metrics that align with those goals, and make sure the test runs long enough to reach statistical significance.

This ensures that your test results can be attributed to the changes in the different variations rather than to randomness.
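
As a rough illustration only, and not the exact method any particular tool uses, here is a minimal Python sketch of a two-proportion z-test that checks whether a difference in conversion rate between a control and a variation is statistically significant (the visitor and conversion counts are made up):

```python
from math import sqrt, erf

# Hypothetical results: visitors and conversions for control (A) and variation (B).
visitors_a, conversions_a = 5_000, 400
visitors_b, conversions_b = 5_000, 460

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)

# Standard error of the difference in proportions under the null hypothesis.
se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"A: {p_a:.2%}, B: {p_b:.2%}, z = {z:.2f}, p-value = {p_value:.3f}")
print("Statistically significant at 95% confidence" if p_value < 0.05 else "Not significant yet")
```

In practice, most A/B testing platforms run a check like this for you and report the confidence level alongside each goal.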

Which A/B testing goals will you be tracking in your next experiment?
