With an SEO A/B test, or split test, you can validate almost any SEO change or optimization before it is rolled out across your website. For example:
- Optimizing content elements such as headings
- Optimizing SERP snippets via page titles or meta descriptions
- Adding content
- Adding structured data
- The impact of client-side rendered vs. server-side rendered content
- The impact of Web Vitals optimization (site speed)
FAQPage structured data or “FAQ schema” has been one of the most sought-after rich results since Google released this feature in 2019. It allows you to display questions and answers directly in Google’s search results:
As you can see, this snippet makes for a much more comprehensive search result. The biggest SEO driver for adopting FAQPage structured data is often trying to generate higher click-through rates (CTR) by standing out more and pushing competitors down (especially on mobile).
However, the most remarkable learning experience we’ve had from split-testing is that something that works for one website may not work for another. The only way to know for sure is to run tests and see what works for your website. Additionally, a split test can help build a strong business case to get the resources needed to make the change quickly.
How to set up a split-test
In an SEO test, pages are divided into (at least) two groups with similar characteristics. An SEO change is made on the variant group, and the control page group remains unchanged.
In this test, we want to measure the effectiveness of marking up your FAQs with FAQPage structured data. You must first identify a suitable page template on your website to run the test on, for example, category or product pages that contain FAQs.
Once you have the list of pages to test, you need to divide them into two groups:
- The Control group: The original pages, which will stay the same; and
- The Experiment (Variant) group: The test pages on which the changes will be implemented.
Create two groups of pages that are representative of the total set of pages, with a similar amount of organic traffic in each. You can, of course, do this manually if you understand your test group well, but there are more accurate ways to do it that ensure the two groups have statistically similar characteristics.
For example, stratified sampling is a great way to do just that. If you need help, the train_test_split function from the scikit-learn library supports stratified splits. You can even split pages based on multiple values, such as total organic traffic and average daily organic traffic. You want to end up with two groups that each contain pages with high, medium, and low amounts of organic traffic. That, in short, is what stratifying the data achieves.
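The article points to scikit-learn's train_test_split (a Python function); if you prefer to script the split yourself, a minimal stratified-pairing sketch could look like the one below. The URLs and click counts are placeholder data, and the sort-then-alternate approach is a simple stand-in for a full stratified sampler:

```javascript
// Minimal stratified pairing: sort pages by traffic, then deal them
// alternately into control and variant so both groups span high-,
// medium-, and low-traffic pages.
function stratifiedSplit(pages) {
  const sorted = [...pages].sort((a, b) => b.clicks - a.clicks);
  const control = [];
  const variant = [];
  sorted.forEach((page, i) => {
    (i % 2 === 0 ? control : variant).push(page);
  });
  return { control, variant };
}

// Hypothetical example data:
const pages = [
  { url: "/category/a", clicks: 5400 },
  { url: "/category/b", clicks: 5100 },
  { url: "/category/c", clicks: 800 },
  { url: "/category/d", clicks: 750 },
];
const { control, variant } = stratifiedSplit(pages);
// Each group now contains one high-traffic and one low-traffic page.
```

Because neighbors in the sorted list have similar traffic, dealing them alternately keeps the two groups' traffic profiles close, which is what the counterfactual model later relies on.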
Generate FAQPage structured data dynamically
The most common reason to run a split test is to be able to prove the added value of a particular change before freeing up precious development or content resources.
Setting up the test
Most likely, your FAQs are marked up similarly to the example below:
You can implement a dynamic structured data script for the pages you want to test. So, for example, if you have 200 variant group pages that share the same HTML template but have different FAQs per page, you can quickly implement the structured data you want on all those pages with one script.
You will need to modify the faq_element variable to match the container that lists your FAQs. Then, you can specify the HTML element containing the question and the HTML element containing the answer. The script will then loop through all your FAQs.
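The article's sample template is not reproduced here, but a script of this shape illustrates the idea: read each FAQ from the DOM, build FAQPage JSON-LD, and inject it into the <head>. The selectors (`.faq-item`, `.faq-question`, `.faq-answer`) are assumptions and must be adjusted to match your own markup:

```javascript
// Build FAQPage JSON-LD from an array of { question, answer } pairs,
// following the schema.org FAQPage structure.
function buildFaqJsonLd(faqs) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((faq) => ({
      "@type": "Question",
      name: faq.question,
      acceptedAnswer: { "@type": "Answer", text: faq.answer },
    })),
  };
}

// In the browser: scrape the FAQs from the page and inject the JSON-LD
// into <head>. The selectors below are placeholders, not the article's
// actual template -- change them to match your FAQ container, question,
// and answer elements.
if (typeof document !== "undefined") {
  const faqs = Array.from(
    document.querySelectorAll(".faq-item") // assumed FAQ container
  ).map((el) => ({
    question: el.querySelector(".faq-question")?.textContent.trim(),
    answer: el.querySelector(".faq-answer")?.textContent.trim(),
  }));
  const script = document.createElement("script");
  script.type = "application/ld+json";
  script.textContent = JSON.stringify(buildFaqJsonLd(faqs));
  document.head.appendChild(script);
}
```

Keeping the JSON-LD builder separate from the DOM scraping makes the script easy to adapt to a different page template: only the selectors change.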
After making the necessary changes, you can easily test the script by pasting it into your browser’s console:
Press Enter to run the script. Now you can check the “Elements” tab to see whether the structured data has been injected into the <head> section of the HTML document:
Finally, you can copy the HTML and pass it to the Schema Markup Validator:
You can get the sample code template here. (Yes: it can be that easy).
The last step is to fire the script on your variant group of pages. For example, if you use Google Tag Manager, you can easily set up a trigger that matches your variant URLs with a regular expression.
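As a sketch of what such a trigger condition could match, suppose (hypothetically) that your variant pages are /category/a, /category/b, and /category/c; the regular expression might then look like:

```javascript
// Hypothetical pattern matching only the variant group's URL paths,
// with an optional trailing slash.
const variantUrlPattern = /^\/category\/(a|b|c)\/?$/;

variantUrlPattern.test("/category/a"); // matches: fire the tag
variantUrlPattern.test("/category/z"); // no match: tag stays off
```

In practice, the expression would list (or pattern-match) the exact set of variant URLs produced by your group split, so the script never fires on control pages.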
If everything looks good, you can go ahead and start your test.
How to analyze your split-test
Finally, we can analyze and validate the results using the causal inference approach developed by Google for estimating the impact of a change. The tool lets you fit a Bayesian structural time series model, which predicts the counterfactual response that would have been observed had no intervention taken place; we then compare this prediction with the actual data. You can find the tool here.
With this statistical approach, you gain insight into the real impact of an SEO change. Using a control group of pages with statistically similar characteristics, the model can detect and filter out trends and other external influences (for example, seasonal influences or an algorithm update).
You can use Search Console as the data input. For both the variant and control groups, collect daily organic click data (or sessions, or impressions), summed across all pages in each group.
For both groups, you need a minimum of 100 days of historical data (data before the test starts), plus all the days your test ran. So, if your test runs for 21 days, you need data from 121 days.
After uploading the test data, you can select the start date. Based on the example above, your start date would be on day 101.
Below you can see an example of how you should provide data input:
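The screenshot is not reproduced here, but conceptually the input is one row per day: a date, the variant group's summed clicks, and the control group's summed clicks. The column names below are assumptions; check the tool's documentation for the exact format it expects:

```csv
date,variant_clicks,control_clicks
2023-01-01,1240,1198
2023-01-02,1305,1251
2023-01-03,1189,1176
```

The pre-intervention rows (the first 100+ days) train the model on how the two groups normally move together; the post-intervention rows are what the counterfactual forecast is compared against.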
After you have entered the data, you can run the analysis. The output of the test looks something like this:
The overview gives you information about the calculated impact of the SEO change, the confidence level, and the absolute effect of the change on your tested pages.
By default, the plot contains two graphs:
The first graph shows the data and a counterfactual forecast for the period after the change is made. Each test has a pre-intervention and post-intervention period. In the pre-intervention period, you want a good fit of the model, which means that ‘predicted clicks’ and ‘actual clicks’ should match up very closely. That ensures a reliable model to draw conclusions from.
The second graph adds up the daily effects, resulting in a plot of the cumulative effect. When the orange shaded area (the confidence interval) lies entirely below (negative) or above (positive) the zero line, the test is statistically significant at the desired 95% level.
To learn more about how the tool works and how to provide the data input, read the documentation.
There’s no doubt that SEO split-testing is crucial for understanding how Google interprets (and ranks) your website. By testing small changes on groups of pages, you’ll begin to uncover which subtle tweaks move the needle for your SEO strategy.
There are simpler, more advanced, and more integrated ways to set up and analyze your own split tests. Tools like SplitSignal can do a lot of the legwork for you, allowing you to act quickly and run multiple tests in a short time frame, accelerating your learning process.
If you’re unfamiliar with the concept of split testing, this guide will hopefully help you explore the exciting world of statistical SEO testing.