After ten years in PPC and hundreds of A/B tests, I’ve developed a methodology for paid social that’s helped my team and me unlock growth for our clients.
I call it the Test-to-Win Method.
It’s a method largely built upon Facebook’s official testing framework (discussed below), but modified for accelerated growth and lower CPA.
Read on to find a detailed breakdown of how it works, the results we’ve been able to achieve with it, and the minimum requirements for successful campaigns.
Why should you do ad testing?
If you’ve done Facebook or TikTok Ads for any period of time, you know how much of your success depends on your ads.
One good ad can make or break your campaign.
And if you’ve worked with five-figure budgets or larger, you know how crucial it is to keep finding these good ads, because creative fatigue is no joke and it will sink your results.
The Test-to-Win Method focuses on just that.
The Test-to-Win Ad Method
In short, the Test-to-Win Method focuses on short-duration A/B/n testing of multiple ad variants to combat creative fatigue and identify winning creatives, copy, audiences, or landing pages faster, while minimizing the negative impact on CPA.
It is a method built upon Facebook’s testing framework, found here.
It’s important to note that the Test-to-Win Method is not a traditional A/B testing method and does not rely on Facebook’s own experimentation tool, which is touted as the more scientific approach.
We’ve often seen Facebook’s tool pick winners and end tests much earlier than recommended, without reaching the minimum necessary sample sizes or statistical significance, which ultimately makes it no better than algorithmic selection.
How does it work?
- Always test new ads in a separate ad set or campaign to avoid disrupting the performance of your best performers and to “warm up” your future winners, as Facebook puts it (in other words, to generate conversion data).
- Change only one element of the ad (creative, primary copy, ad type, headline or landing page).
- For accounts spending under $50,000/month, we recommend testing 5-10 ads at a time to minimize risk and the number of unused ads. Test more than that and you risk production inefficiency: higher creative production costs and a higher number of unused ads.
- Set your budget high enough to exit the learning phase within 1-2 weeks (your daily budget should be at least 3x your CPA; the higher, the better). For example, a $40 target CPA calls for a testing budget of at least $120/day.
- Keep monitoring performance and pause ads with above-target CPAs every 3-5 days (or sooner, depending on your budget). You should be left with about 15-25% of your ads.
Pro tip: Use Revealbot or Madgicx to pause ads for you automatically so you’re not wasting money on slow, delayed manual management (the sketch after this list shows the logic such a rule implements).
- After 2 weeks, any ads meeting your target CPA are winners.
- Once you identify your winners, move them to your best-performing ad set.
- You can also pit them against the current best performers in a “controlled” A/B test by creating a separate ad set and using Facebook’s experimentation tool, but consider the higher costs involved and the additional time needed to reach a decision. We tend to add them to the existing ad sets and rely on Facebook’s algorithm to pick the winner out of winners.
- Rinse and repeat every 2-3 weeks. Remember to test iterations of your winners, too!
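If you’d rather script the pause rule from the monitoring step yourself instead of relying on Revealbot or Madgicx, here’s a minimal Python sketch of that logic. The `AdStats` container, the helper name, and the numbers are all illustrative; in a real setup, the spend and conversion figures would come from Facebook’s Marketing API or your automation tool.

```python
from dataclasses import dataclass

# Illustrative stats for one ad in the testing ad set. In practice, these
# numbers would come from the Marketing API or a tool like Revealbot/Madgicx.
@dataclass
class AdStats:
    ad_id: str
    spend: float      # spend over the review window (e.g., the last 3-5 days)
    conversions: int  # conversions over the same window

def ads_to_pause(ads: list[AdStats], target_cpa: float) -> list[str]:
    """Return the IDs of ads whose CPA is above target.

    An ad is only judged once it has spent at least one target CPA's worth
    of budget, so ads that simply haven't had delivery yet are left alone.
    """
    to_pause = []
    for ad in ads:
        if ad.spend < target_cpa:
            continue  # not enough data yet; let it keep running
        # Zero conversions after meaningful spend counts as an infinite CPA.
        cpa = ad.spend / ad.conversions if ad.conversions else float("inf")
        if cpa > target_cpa:
            to_pause.append(ad.ad_id)
    return to_pause

# Example review with a $40 target CPA, run every 3-5 days per the step above.
ads = [
    AdStats("ad_1", spend=150.0, conversions=5),  # CPA $30 -> keep
    AdStats("ad_2", spend=120.0, conversions=2),  # CPA $60 -> pause
    AdStats("ad_3", spend=25.0, conversions=0),   # too little spend -> keep for now
]
print(ads_to_pause(ads, target_cpa=40.0))  # ['ad_2']
```

The minimum-spend guard is the important design choice here: pausing an ad before it has spent at least one target CPA’s worth of budget punishes it for a lack of delivery, not for poor performance.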
As you can see, this approach requires preparation, foresight, and a constant stream of creative assets.
What can you test with this method?
Deciding what to A/B test in your Meta or TikTok campaigns is crucial. I recommend starting with the elements that, if improved, could have the highest impact on your results. Here they are, ordered by impact:
- Creative
- Ad type
- Audience
- Landing page
- Primary copy
- Headline
- CTA
In our experience, creative testing has by far the highest impact on your results and should be prioritized for the largest wins.
Why prioritize creative testing?
It’s not just our data and experience that confirm that creative is king.
Here are studies that found the same to be true:
- “Creative quality determines 75% of impact as measured by brand and ad recall.” (2020, Ipsos study)
- “Creative quality is responsible for almost half (49%) of the incremental sales driven by advertising.” (2023, Nielsen, “5 Keys to Advertising Effectiveness”)
- “Creative and Effective ads generate more than four times as much profit” (2023, Kantar and WARC)
- “Creative accounts for up to 50% of the estimated action rate in the auction” (2019, Facebook)
Here is what Meta’s Morgan Monnet had to say about creative’s importance:
“Compelling and catchy creative can give you an edge in auction-based advertising, even if the ad has a lower budget. It can launch you into a cycle of outperforming your competitors, leading to a higher ROI.”
Still not convinced?
Just take a look at the ad libraries of all of the big brands running ads on Facebook. Each brand has tens if not hundreds of active ads, runs multiple tests, and quickly replaces underperforming creatives.
- MagicSpoon – $100 million in funding (TechCrunch) – 39 ads in the Ad Library
- Ritual – $100+ million in sales (Forbes) – 90 ads in the Ad Library
- Lumen – $62 million in series B (TechCrunch) – 410 ads in the Ad Library
- HiSmile – $92 million in revenue (2022) (Business News Australia) – 8,800 ads in the Ad Library
Results we’ve been able to achieve with this method
Beauty and Personal Care
In this client’s case, we got lucky: we only had to test 5 net-new approaches before finding one that outperformed the rest by a significant margin. However, it soon started declining as Facebook pushed all of the spend to it, which increased its frequency. We quickly introduced new ads, both net-new and iteration approaches, and found a combination of winning ads that outperformed our original winner.
Health and Fitness
For this Health and Fitness sector client, it took us a while to break through the ceiling on leads and CPA. To date, it’s been one of our most intense testing stretches: 3 months and over 100 creatives (including iterations, net-new concepts, and new copy approaches).
However, once we found it, the winning creative almost single-handedly doubled our results in just 1 month and continued to grow through month 5. In month 6, we introduced new creatives to tackle creative fatigue.
Online Education
This client in the online education sector, similar to the beauty client above, was a rollercoaster of results for the first 3 months, and another case in our library proving the importance of tackling creative fatigue with new creatives and scaling your winners.
Why traditional A/B testing doesn’t always work
I’ve written about PPC A/B testing here and here and have performed over 100 different A/B tests on Google and Facebook.
The bottom line: It’s important for testing hypotheses, improving results and structuring your optimizations.
However, traditional A/B testing requires significant sample sizes (think 1000s of clicks), equal distribution of data across the variants (A gets ~50%, B gets ~50%), prolonged duration (think 6+ weeks), and lack of outside influence (no optimizations).
See for yourself by playing around with this A/B test size calculator: https://abtestguide.com/abtestsize/ (for the examples above, I used a 2% control conversion rate, a 50% uplift, and 1,000 clicks per week).
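If you’d rather reproduce that math in code, here’s a quick sketch using Python’s statsmodels package, with the same assumptions as the calculator example above (2% control conversion rate, 50% relative uplift, 5% significance, 80% power):

```python
# pip install statsmodels
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.02           # 2% control conversion rate
variant = baseline * 1.5  # a 50% relative uplift -> 3%

# Cohen's h effect size for comparing two proportions
effect = proportion_effectsize(variant, baseline)

# Clicks needed per variant at 5% significance and 80% power
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0
)
print(round(n_per_variant))  # roughly 1,900 clicks per variant (~3,800 total)
```

At 1,000 clicks per week split evenly between two variants, that’s almost 4 weeks of testing for a fairly optimistic 50% uplift; a more modest 20% uplift pushes the requirement past 10,000 clicks per variant.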
If you’ve worked with Facebook Ads for any amount of time, you know it’s not always possible to meet these requirements, even when using Facebook’s own A/B testing tool.
Minimum sample sizes are hard to meet without significant testing budgets; Facebook’s algorithm picks its own winners and doesn’t drive equal traffic to each variant; Facebook’s own A/B testing tool calls winners much sooner than statistical significance requires (often after just 3-7 days); and going without optimizations for prolonged periods inevitably leads to worse results overall.
This is why traditional A/B testing doesn’t always work and why we often prefer our Test-to-Win Method.
FAQ
1. Who is this for?
Higher-budget and/or lower-CPA accounts that can generate 50 (or close to 50) conversions per ad set per week, enough to exit the learning phase.
2. How much budget do I set aside for the testing campaign?
It depends on your total budget, but set aside no less than 10% of your total, and enough for a daily budget of at least 3x your CPA.
3. How does this work with situations where Facebook keeps driving all of the traffic/spend to the old winners?
Old winners should be treated the same as your testing ads: once they start underperforming, pause them to free up room for new entries. In essence, this methodology maximizes your chances of improving conversions by using proven ads. Their performance won’t carry over, but at least you’ll have proof of concept instead of testing inside your winning ad sets and risking tanking their performance.
4. Can you recycle an old ad or an ad that didn’t get enough impressions or clicks during the test?
90% of the time, Facebook will pick just 1 to 3 ads out of your whole set and drive most of the traffic to them. Naturally, you will be left with a number of ads that didn’t get a chance to generate sufficient results. In such cases, you can recycle these variants and test them again, but only if you’re confident in the variant’s potential or think it warrants a net-new or refinement testing approach. Same with old ads.
Spending over $10,000 and want to improve your results with our Test-to-Win Method? Click here to get a free 30-minute consult (and a free audit if we’re a fit!)