A/B Testing in PPC Marketing (with our TEMPLATE)

Art Zabalov

I’m biased, but I’ve always thought we are very meticulous when it comes to optimization and testing. 

We optimize most of our clients’ accounts two or three times a week, create 4-month strategies, hold regular and ad hoc brainstorms, and perform monthly audits of our own work to course-correct and avoid lasting underperformance.

And yet, before 2022, I’d always felt we didn’t have enough “structure” in place.

Too much of it was chaotic, ad hoc, and reactive to the data we had on hand: optimizing on a schedule, creating go-forward plans, and testing occasionally (but not too frequently, to avoid disrupting existing results).

Similar to how most agencies operate.

This all changed in 2022 when we introduced consecutive A/B testing. And it’s been the single most impactful change to date.

The impact of introducing consecutive A/B testing

Introducing (and refining) consecutive A/B testing has had a profound impact on three areas of our work – results, structure, and communication.

The results

We’ve seen result improvements across all of our clients (yup, all of them), and while it’s hard to attribute these to A/B testing alone, I’d say we’re talking about at least a 15% improvement in leads or sales thanks to the high-lift tests.

Furthermore, it helps us build a separate best-practice database for each client as we learn what works for them specifically (for instance, Smart campaigns have been replaced by PMax for all of our clients, but one client keeps seeing significantly better results from Smart 🤷).

The structure

The impact on structure can’t be overstated. It’s huge (doing my best Trump impression: Yuuuge!).

By performing consecutive, quarterly* A/B testing, we’re creating footholds from which we can keep “climbing” and refining the accounts based on our learnings. It gives us focus and shifts our mindset from cosmetic optimizations to finding the most impactful opportunities.

*We’ve recently switched from monthly to quarterly A/B testing. This lets us keep focusing on the tests with the highest lift potential while allowing longer test durations and higher confidence in the results.

The communication

Now, considering most clients rely on us for ownership of their PPC while they focus on other areas of their business, our communication is rarely focused on A/B testing (and I understand that – everyone wants results, not to get bogged down in details).

However, sharing everything in a shared Google Sheets file and encouraging clients’ own ideas/hypotheses has given our clients an additional layer of transparency, awareness (of our activity), and engagement.

Why we introduced A/B testing

Here are the most important questions we asked ourselves that led us to introduce consecutive A/B testing:

  • Do we have enough structure when it comes to monthly optimizations? Do we feel we’re getting consistent improvement?
  • What is the current structure of our optimization efforts, and how do we decide which elements to test or optimize?
  • How frequently do we test and optimize our campaigns, and do we have a set schedule for these efforts?
  • Do too many of our optimization efforts focus on (or end up as) cosmetic changes? (often the case when you feel like you’ve optimized everything you could)
  • How do we plan to incorporate A/B testing into our optimization strategy?
  • What are the limitations of A/B testing? (make sure you think it through – there are plenty)
  • How will we measure the impact of A/B testing on our advertising results, and how will we determine if it’s been successful?
  • How will we use the insights gained from A/B testing to inform future optimization efforts and improve our overall advertising strategy?

Our A/B test template

After answering the questions above (and about a dozen more, specific to our processes), we set out to create an approach that checks all the boxes, starting with a report template we share with the clients.

[Image: our A/B testing report template]

Nothing fancy. It’s meant to be easy to fill out, easy to review, and easy to get actionable takeaways from. (The “report card” color scheme could use a touch-up, though 🙂)

Here is the link to the template that you can copy and use for your own A/B testing.

Our A/B testing approach

Now for the technical aspects, for those who want to get into the nitty-gritty.

Here is the foundation of our approach pulled right from our SOP:

  1. A/B testing needs to be integrated from the initial setup
  2. A/B testing should be balanced with normal optimizations
  3. The standard duration of a test is 6 weeks. However, should the results indicate a clear early winner, the test can be concluded after 4 weeks. And vice versa, if the test is inconclusive toward the end of the 6 weeks, the duration can be extended (to up to 12 weeks).
  4. Tests should be planned ahead of time, on a quarterly basis, but adjusted should the need arise (client request, lack of data, previous tests’ results suggesting a different direction).
  5. Every test should reach statistical significance for a conclusive takeaway. Measure it using this tool – https://abtestguide.com/bayesian/ (see the sketch after this list for the idea behind it)
    • For most tests, we aim for a 90% confidence level and 80% power. However, due to campaign restrictions (i.e. slower result generation limiting the time available to run the test or producing insufficient sample sizes) or the need to introduce changes mid-test, we sometimes settle for lower confidence after looking at supplementary metrics such as clicks, CTR, CPC, and on-page behavior.
  6. During the tests, there should be minimal to no adjustments made to the ad group/campaign running the test if there’s a risk of affecting the results’ statistical significance.
  7. The tests should have an adequate budget based on previous performance. Remember, A/B tests often require 2x the budget to avoid limiting the control variant, since the budget gets split 50/50 between the variants.
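
To make the significance check concrete, here’s a minimal Python sketch of the kind of Bayesian comparison the linked calculator performs – estimating how likely it is that the variant genuinely beats the control rather than being ahead by chance. The function name and the sample numbers are illustrative, not part of our SOP:

```python
import numpy as np

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=200_000, seed=42):
    """Monte Carlo estimate of P(variant B's true conversion rate > variant A's),
    assuming uniform Beta(1, 1) priors on both rates."""
    rng = np.random.default_rng(seed)
    # Posterior for each variant: Beta(conversions + 1, non-conversions + 1)
    rate_a = rng.beta(conv_a + 1, n_a - conv_a + 1, draws)
    rate_b = rng.beta(conv_b + 1, n_b - conv_b + 1, draws)
    return (rate_b > rate_a).mean()

# Illustrative numbers: 48/1,000 conversions (control) vs. 63/1,000 (variant)
p = prob_b_beats_a(48, 1_000, 63, 1_000)
print(f"P(variant beats control) = {p:.1%}")  # declare a winner only above ~90%
```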

And the general approach to filling out the report:

  1. Test results should be filled out right away, and before the end of the quarter (unless it’s a long-duration test), to ensure consistent reporting and help keep us on our toes.
  2. Test hypotheses, conclusions, and actions need to be short and to the point but have sufficient context for easy understanding and navigation.
  3. We need to have a separate report with all of our verified test conclusions and best practices, with links to tests and dates pulled in automatically (using a simple VLOOKUP formula – a hypothetical example follows this list) – acting as our database of verified hypotheses.
  4. We need to review our best practices every quarter and adjust our general and client-specific best practices (i.e. if images with people perform better, keep using those; if Exact match keywords don’t make sense for a client, remove them from the initial setups; etc.)
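
For illustration, here’s what that lookup could look like in Google Sheets. The sheet name, column layout, and lookup key below are hypothetical – they’re not taken from our actual template:

```
=VLOOKUP(A2, Tests!A:E, 4, FALSE)
```

This searches for the test ID in cell A2 within the first column of a “Tests” sheet and returns the matching value from the 4th column of the A:E range (say, the verified conclusion); FALSE forces an exact match.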

I left out a few things specific to our processes, with possibly the most important being how we actually come up with A/B tests.

It’s a separate process that differs for every client and deserves its own guide. At a top level, it involves relying on our internal database of possible tests (which we update regularly) and brainstorming the highest-impact tests while reviewing the previous month’s/months’ results, A/B test takeaways, and future goals/campaigns/sales.
