Meta Ads · 14 min read · Published May 4, 2026
Meta Ads Audit: The Full 10-Stage Method I Use on $65M+ in Ad Spend
The exact method I run on every Meta account that lands on my desk. 10 stages, in order, with the diagnostic at each step. Founder-written, no agency talking points.
Founder, BTB Audits. $65M+ ad spend audited across Meta, Google, and Amazon
Most Facebook ads audits you'll find online open the same way.
They define what an audit is. Then they list ten generic checks like "review your CPM" and "check your audience overlap." Then they pitch you a free audit at the end.
I want to skip that.
If you're spending $10K to $500K a month on Meta and your ROAS is wobbling, the problem is almost never the thing on the surface. It's the layer underneath. The dashboard you can't read. The pixel that's been firing the wrong event for six months. The campaign structure that started clean two years ago and turned into a junk drawer.
This post is the actual method I run when a new account lands on my desk. Ten stages. In the order I run them. With the diagnostic at each step that tells me whether to keep going or stop and write the first finding.
You can run it on your own account in about four to six hours. The method is yours. The diagnosis takes practice.
Why most Facebook ad audits are calibrated to sell, not solve
Free agency audits exist for one reason. They're a sales tool dressed up as a diagnostic.
The findings get calibrated to create just enough anxiety to close a retainer. A junior gets handed a template. They open Ads Manager, screenshot a few high-CPM ad sets, write "your audience targeting needs work," and the senior who pitched the audit signs off. The diagnostic was never the product. The retainer was the product.
The other failure mode is the opposite. Auditors who actually try to help, but stop at the surface. They review the campaigns and call it a day. They never check whether the pixel is firing right. They never look at Shopify. They never ask which stage the account is actually in.
Both miss the same thing. Most leaks aren't in the ads. They're in the system around the ads.
The 10 stages below exist to find both kinds of leak. The shallow ones a junior could catch, and the structural ones that take ten years of pattern-matching to see.
Stage 1: Dashboard hygiene (the 30-second smell test)
When I open a new account, I don't grade the campaigns first. I check whether the operator can even read the account.
Here's what I look for inside the first 30 seconds:
- Is there a custom column view set up?
- At a glance, can I see CPM, CTR, CPC, CPO, ROAS, and AOV?
- AOV isn't a default Meta column. Has someone built a custom one?
If the column view is Meta's defaults, three columns, no custom layout, I already know one thing. Nobody has been seriously looking at this account. The account hasn't been managed. It's been checked on.
A messy dashboard is a messy operation. The signal is that strong.
If the dashboard is clean, I move on. If it's not, that's finding number one. I rebuild the column view before I write a single other finding, because every number I look at next has to be readable in context.
Stage 2: Connections and event firing
Before I trust a single number on the dashboard, I check that the data underneath it is real.
This is the gate. If the pixel is wrong, every conclusion I draw downstream is wrong. I'd be auditing fiction.
What I check:
- Pixel connected to the right page (and only the right page)
- All the right standard events firing: PageView, ViewContent, AddToCart, InitiateCheckout, Purchase, Lead
- Conversions API set up alongside the pixel, deduplicating properly
- Event match quality scores above 6 for the events that matter
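To make the deduplication check concrete: the browser pixel and the Conversions API each send their own copy of the same conversion, and Meta drops the duplicate only if both copies carry the same `event_id`. Here's a minimal sketch of the server-side half. The payload fields follow the Conversions API shape; the order ID, email, and the `purchase-{order_id}` ID convention are illustrative assumptions, not Meta requirements.

```python
import hashlib
import time

def sha256_normalize(value: str) -> str:
    """Meta expects user_data fields (email, phone) trimmed, lowercased, then SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_capi_purchase(order_id: str, email: str, value: float, currency: str = "USD") -> dict:
    """Build a server-side Purchase event for the Conversions API.

    The event_id must match the one the browser pixel fired for the same order.
    That shared ID is what lets Meta deduplicate the two copies. Deriving it
    from the order ID is one common convention (an assumption here, not a rule).
    """
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "event_id": f"purchase-{order_id}",   # same ID the pixel sent client-side
        "action_source": "website",
        "user_data": {"em": [sha256_normalize(email)]},
        "custom_data": {"value": value, "currency": currency},
    }

# This payload would be POSTed to the Conversions API endpoint for your pixel,
# authenticated with an access token (both omitted here).
event = build_capi_purchase("1001", " Jane@Example.com ", 49.90)
print(event["event_id"])
```

The audit check is simple: if the browser and server copies of an event arrive with mismatched or missing `event_id`s, Meta counts the purchase twice, and every ROAS number in the account inflates.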
I once audited a supplements brand spending around $80K a month on Meta. Their reported ROAS was 2.4x. The team was confused because their P&L said the business was bleeding cash.
The pixel had been firing "Add to Cart" on every product page view for fourteen months. So Meta's algorithm was optimizing for a fake event. Every dollar spent was teaching the algorithm to find people who looked at products. Not people who bought them.
That's not a creative problem. That's not a targeting problem. That's a data problem. And no amount of campaign optimization fixes it.
If you skip stage 2, you can do every other stage perfectly and still be wrong about everything.
Stage 3: Campaign structure
Here I go against what Meta's own customer success managers recommend. Their advice is generic. They don't know the business, the margins, the category, or the customer LTV. Their job is to keep your spend on platform.
What I look for in the structure:
- Does the campaign structure mirror the website's information architecture? If the site has four product categories, can I see those four categories in Ads Manager?
- Even if it doesn't mirror, can I tell at a glance which category drives the most return? Which product within a category is the winner?
- Or has the account turned into a junk drawer? Duplicated campaigns, copy-of-copy-of-copy, no structural logic.
The pattern I see most often: brands started day one with one or two products, duplicated campaigns over and over, and now nobody can read the account. The structure didn't fail. It was never designed.
A useful test: ask the person running the account, "Which campaign is making us the most money right now, and why?" If they need more than 30 seconds to answer, the structure is broken.
Stage 4: Nomenclature
Almost nobody talks about this. It's one of the most under-discussed inputs to good ad management. I see 70 to 80% of accounts doing it wrong.
My standard: the campaign name should tell me everything inside the campaign without opening it.
A campaign name like Conversion_Sales_Mar2026 tells me nothing. A campaign name like CONV_SuppCategory_BroadInt_15-65_M+F_Mar26 tells me the objective, the category, the audience signal type, the age range, the gender, and the launch month. I can audit ten campaigns at a glance instead of clicking into each one.
A simple naming convention table I use as a starting point:
| Field | Format | Example |
|---|---|---|
| Objective | 4-letter prefix | CONV / TRAF / VIEW |
| Category | Short tag | SuppCat / SkinCat / FashionCat |
| Audience type | Signal label | BroadInt / LAL1 / CustomList |
| Demographics | Age + gender | 25-45_F |
| Date | Mon + Year | Mar26 |
Two minutes of nomenclature discipline at campaign creation saves hours of audit time later. And it's the kind of input that compounds. Every new campaign launched into a clean naming structure inherits the readability. Every new campaign launched into a junk drawer makes the junk drawer worse.
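A convention this rigid also becomes scriptable. Here's a small parser for the naming scheme in the table above; the field order and underscore separator come from my example name, so adapt both to whatever convention you actually adopt.

```python
from typing import NamedTuple

class CampaignName(NamedTuple):
    objective: str    # CONV / TRAF / VIEW
    category: str     # SuppCat / SkinCat / FashionCat
    audience: str     # BroadInt / LAL1 / CustomList
    age_range: str    # e.g. 25-45
    gender: str       # F / M / M+F
    launch: str       # e.g. Mar26

def parse_campaign_name(name: str) -> CampaignName:
    """Split an underscore-delimited campaign name into its six fields.

    Raises ValueError for off-convention names, which is itself a useful
    audit output: the list of failures is the list of campaigns to rename.
    """
    parts = name.split("_")
    if len(parts) != 6:
        raise ValueError(f"Off-convention name ({len(parts)} fields): {name!r}")
    return CampaignName(*parts)

print(parse_campaign_name("CONV_SuppCategory_BroadInt_15-65_M+F_Mar26"))
```

Run it over an export of campaign names and the exceptions are your rename backlog, before you've opened a single campaign.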
Stage 5: Account stage diagnosis
This is the stage most audits skip entirely. And it's the one that determines what every other recommendation should be.
Before I make a single recommendation, I ask: what stage is this account in?
Three options:
- Scaling stage. The account has product-market fit, the unit economics work, and the goal is to push more volume through the system without breaking it.
- Optimization stage. The account has a budget ceiling. The goal is to tighten the existing setup without growing it.
- Cost-cut stage. The business is under pressure. The goal is to defend the bottom and stop the bleeding.
The same finding means three different things in three different stages.
Take a campaign with a 2.8x ROAS. In a scaling stage, that campaign is a winner I want to push budget into. In an optimization stage, it's a steady performer I should leave alone. In a cost-cut stage, it might be the campaign I cut first because the marginal return isn't worth the marginal spend.
Most operators skip the question. They give the same recommendation regardless of stage. That's why their advice often doesn't match the business reality the founder is living.
Stage 6: Budget allocation
Once I know the stage, I look at where the money is actually going.
The diagnostic I run: pull the top spending campaigns and the top performing campaigns and see how much they overlap.
In a healthy account, the top 30% of campaigns by spend should be roughly the top 30% by return. If your top 30% by spend is delivering 50% of returns, you have leftover money on the table. Those campaigns can absorb more budget. If your top 30% by spend is only delivering 10% of returns, you're funding losers.
The principle behind the play: budget allocation before bid adjustment. Increase budget on winners until you've maximized allocation. Then increase bids to push further. Never increase a bid before maximizing the budget allocation, because bid increases are inflationary and you only want to pay them once you've exhausted the cheaper lever.
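The overlap diagnostic is easy to script once you've exported campaign-level spend and revenue (readable from the custom column view built in stage 1). A minimal sketch; the campaign data below is illustrative.

```python
def top_spend_return_share(campaigns: list[tuple[str, float, float]],
                           top_frac: float = 0.3) -> float:
    """Share of total return delivered by the top `top_frac` of campaigns by spend.

    campaigns: (name, spend, revenue) tuples.
    A healthy account lands near top_frac itself. Well above it means the
    winners can absorb more budget; well below it means you're funding losers.
    """
    ranked = sorted(campaigns, key=lambda c: c[1], reverse=True)
    k = max(1, round(len(ranked) * top_frac))
    total_revenue = sum(c[2] for c in ranked)
    if total_revenue == 0:
        return 0.0
    return sum(c[2] for c in ranked[:k]) / total_revenue

# Illustrative account: 10 campaigns, heavy spend concentrated on the winners
data = [(f"C{i}", spend, rev) for i, (spend, rev) in enumerate([
    (9000, 30000), (8000, 24000), (7000, 18000),
    (3000, 4000), (2500, 3000), (2000, 2000),
    (1500, 1500), (1000, 1000), (800, 500), (500, 0),
])]
print(f"Top 30% by spend deliver {top_spend_return_share(data):.0%} of returns")
```

In this illustrative account the top 30% by spend carry well over half the returns, which under the principle above means the cheaper lever (reallocating budget toward them) hasn't been exhausted yet.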
Stage 7: Cross-platform reality check
Meta tells me one story. Shopify tells me another. GA tells me a third. I triangulate.
This is the stage where almost every "Facebook ads audit" you can buy stops short. Most auditors never leave Meta. That's the biggest blind spot in the category, because most leaks aren't in the ads. They're between the ad and the cash register.
What I specifically look at:
- Drop-off from link click to session. If Meta says 10,000 link clicks and Shopify says 6,200 sessions, you have 3,800 visitors who clicked the ad and never landed. That's a broken page, a slow load, or a mobile rendering issue.
- Conversion rate by category in Shopify. Is the category Meta thinks is winning actually converting on the site? Sometimes Meta's "winner" is a category with weak product pages and the win is a mirage.
- Funnel drop-offs in GA. Session to add-to-cart. Add-to-cart to checkout. Checkout to purchase. The biggest leak is almost never the ad. It's somewhere deeper in the funnel.
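The first check above reduces to two numbers and a subtraction. A hedged sketch, assuming you've pulled link clicks from Meta and sessions from Shopify for the same date range and UTM tags (the figures match the example above and are illustrative):

```python
def click_to_session_dropoff(meta_link_clicks: int,
                             shopify_sessions: int) -> tuple[int, float]:
    """Visitors who clicked the ad but never produced a session, plus the rate.

    Some gap is always there (bots, accidental clicks, tracking blockers),
    but a large one points at a broken page, a slow load, or a mobile
    rendering issue -- a leak between the ad and the cash register.
    """
    lost = max(0, meta_link_clicks - shopify_sessions)
    rate = lost / meta_link_clicks if meta_link_clicks else 0.0
    return lost, rate

lost, rate = click_to_session_dropoff(10_000, 6_200)
print(f"{lost} lost visitors ({rate:.0%} drop-off)")
```

What counts as "too large" depends on the traffic mix, but when the drop-off climbs into double-digit percentages it's a landing-page finding, not an ads finding.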
A composite example. A few months ago I looked at a fashion brand running about $40K/month on Meta. Reported ROAS 3.2x. Founder named Sarah. Solid creative, decent structure, no obvious campaign-level red flags. The leak wasn't in Meta at all. It was in the mobile checkout. The shipping calculator was timing out for 18% of mobile users at the address-entry step. They were paying Meta to send qualified, ready-to-buy customers into a checkout that silently broke before they could pay. Fix the checkout, ROAS lifts. Touching Meta wouldn't have done a thing.
This is why stage 7 exists.
Stage 8: Testing and signal hygiene
Now I look at how the account has been testing. Top-down. Category to product.
The biggest setup mistake I see: brands giving Meta four signals at once. Demographics, behaviors, interests, and lookalikes, all in the same campaign, often the same ad set. They think they're being thorough.
They're confusing the algorithm.
My principle: the more signals you give Meta, the more confused the algorithm gets. Pick one signal type per axis. Lookalike campaigns, custom audience campaigns, and interest campaigns should be separated, not blended.
What good testing looks like:
- Non-overlapping audiences (use Meta's audience overlap tool before launch)
- Structured creative tests (one variable changed at a time, not five)
- Clear hypothesis written down before the test starts
- Decision criteria fixed in advance ("we'll keep this ad if CPO is below $X after $Y spend")
What bad testing looks like:
- Five new creatives launched on the same day with no plan
- Audiences rotating with no record of what was tested when
- Tests killed before they reach statistical signal because the operator panicked
The scariest version of bad testing is the one that looks productive. Lots of activity. No learning.
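"Decision criteria fixed in advance" can literally be a function written before launch, so nobody relitigates the verdict mid-test. A sketch; the thresholds are whatever you committed to in the written hypothesis, not universal numbers.

```python
def ad_verdict(spend: float, purchases: int,
               target_cpo: float, min_spend: float) -> str:
    """Apply pre-committed decision criteria to a creative test.

    min_spend is the spend the test must reach before any verdict is allowed --
    the guard against killing tests early because the operator panicked.
    target_cpo and min_spend come from the hypothesis doc (assumptions here).
    """
    if spend < min_spend:
        return "keep running"          # not enough signal yet, no verdict
    if purchases == 0:
        return "kill"                  # spent through the budget with nothing
    cpo = spend / purchases
    return "keep" if cpo <= target_cpo else "kill"

print(ad_verdict(spend=180, purchases=3, target_cpo=50, min_spend=300))
print(ad_verdict(spend=400, purchases=10, target_cpo=50, min_spend=300))
print(ad_verdict(spend=400, purchases=5, target_cpo=50, min_spend=300))
```

The point isn't the code. It's that the keep/kill rule existed before the first dollar was spent, so the test produces a learning either way.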
Stage 9: Creative and communication
This is where I sit longest. Once the system passes its checks, the creative is where the leverage is.
What I look at:
- Hooks. Does the first three seconds earn the rest? If the viewer doesn't get a reason to stay past second three, nothing else in the ad matters.
- Angles. Is there variety, or is every ad the same pitch in different fonts?
- Value proposition. Is it clear in the creative, or buried in the third sentence of voiceover?
- CTA. Specific ("Get 20% off your first order") or generic ("Shop Now")?
I look creatively, not just analytically. The numbers tell me which ads are tired. The creative review tells me what should change.
A truth that's getting harder to deny in 2026:
Creative is the new targeting.
For most of Meta's history, targeting did the heavy lifting. You picked the right audience and a decent ad worked. That era is over. With broad targeting becoming the default and the algorithm getting better at finding buyers, the lever has moved. Whatever you say in the creative, that is the targeting now. The ad sorts the audience for you. A boring ad gets shown to a boring audience. A sharp ad with a specific hook gets shown to people who respond to that specific hook.
Which is why the most valuable thirty minutes you can spend on a Meta account in 2026 is reviewing the creatives, not the audiences.
Stage 10: Competitor benchmark
I go to the Meta Ad Library. I look at what the biggest and best players in the category are running.
Not to copy. To find structural moves the brand could replicate intelligently.
The questions I ask:
- What hook angles are competitors using that we're not?
- How many ads are they running concurrently? (If we're running 4 and they're running 40, that's a structural gap, not a creative one.)
- What's their ad-to-landing-page consistency? Do their ads make a promise the page keeps?
- Are they running offers? What kind, and how often do the offers rotate?
The point isn't to clone. The point is to find the structural advantages that compound over time and build a version of them that fits this brand.
A real account I audited: the Raw Coffee Company example
The clearest demonstration of these stages in action is the audit I documented in the Raw Coffee Company walkthrough, a Dubai-based D2C specialty coffee brand. Public data only. Five minutes of looking, dozens of findings.
A few of them:
- 6 of 18 active Meta ads were driving traffic to 404 pages. Stage 7 finding (the click-to-session gap).
- Zero subscription discount on the website. Stage 9 finding bleeding into offer architecture (the creative was selling a value prop the site wasn't honoring).
- 11 grind options on the product page with no guidance for first-time buyers. A pure stage 7 / CRO leak.
- A video ad with strong production killed by audio mix. Music drowning out the voice. A stage 9 creative finding visible only because I watched the ads with sound on, like a real customer would.
None of these would have been caught by a "review your audience targeting" template audit. All of them showed up in under an hour because the method runs in order and surfaces problems where they actually live.
What good looks like vs. what bad looks like
A summary table you can print and use as a self-check on your own account:
| Stage | What good looks like | What bad looks like |
|---|---|---|
| Dashboard | Custom column view, 10+ relevant metrics visible | Default 3-column view |
| Pixel | All events firing, EMQ above 6, CAPI live | Events misfiring, low EMQ, pixel-only |
| Structure | Mirrors site IA, every campaign has a job | Junk drawer, copy-of-copy campaigns |
| Nomenclature | Campaign name reveals everything inside | "Conversion 5", "Test campaign 3" |
| Stage diagnosis | Operator knows: scaling, optimizing, cost-cutting | Same playbook regardless of business reality |
| Budget | Top spend matches top return | Budget on losers, winners under-funded |
| Cross-platform | Meta + Shopify + GA reconcile within 10% | Meta says X, P&L says Y, nobody knows why |
| Testing | One variable per test, written hypothesis | Five new ads, no plan, killed at day 2 |
| Creative | Variety of angles, sharp hooks, specific CTAs | One pitch, weak hooks, generic CTAs |
| Competitor | Library checked monthly, structural gaps known | Last looked at competitors a year ago |
If you go through this table honestly and find yourself in the "bad" column on more than three rows, you don't have an ad problem. You have a system problem. And no amount of bid adjustment will fix it.
A note on the other half: most D2C brands also leak budget on Google
The 10-stage method above is built for Meta. Most D2C brands also run Google Ads, and the audit logic is similar but the diagnostic order changes. Search behaves differently from social, and the leaks live in different places.
I documented the full method I use for auditing Google Ads for that side of the picture. If you run both platforms, audit them both. Treating Meta and Google as one bucket is one of the most common reasons brands can't figure out where their actual returns are coming from.
Frequently asked questions
About the method
Is this method specific to D2C brands, or does it work for lead-gen too?
The 10 stages work for any Meta account, but the diagnostic shifts. For lead-gen, stage 7 swaps Shopify for the CRM, and stage 9 weights toward form-completion creative cues over add-to-cart language. The skeleton is the same.
How long does running this audit on my own account take?
About four to six hours for a focused operator who knows their account. Expect longer the first time you do it, because most of the work is rebuilding the column view and the campaign nomenclature so the rest of the audit is even readable.
What's the difference between this and a Facebook Ads audit checklist I can download?
A checklist gives you the items. The method gives you the order. Order matters because finding a creative problem in stage 9 means nothing if the pixel is misfiring at stage 2.
About the BTB Audits process
What does the Quick Scan cover that this post doesn't?
The post tells you the method. The Quick Scan runs the method on your account using public data only. You get a 5 to 7 minute Loom walking through the leaks I find, plus a Leak Score, in 48 hours. No account access needed.
How is the Forensic Report different from the Quick Scan?
The Quick Scan is public data only. The Forensic Report includes full account access, all 10 stages run end to end, a 60-minute strategy call, and the prioritized 30-day fix plan. It is $499 and delivered in 5 to 7 days.
Trust and safety
Is it safe to share my Meta account access for the Forensic Report?
Yes. We use Meta Business Manager partner access only, scoped to read-only where possible, and access is revoked the day the audit closes. We have never had a security incident across $65M+ in audited spend.
Why is the Quick Scan free? What's the catch?
There's no catch. The Quick Scan is the work, not a sales call. The economics work because if 1 in 10 founders who get a Quick Scan upgrade to a Forensic Report, the funnel pays for itself. If the Quick Scan is good, that conversion happens naturally.
Will this work for me?
My monthly Meta spend is below $10K. Is the audit worth it for me?
Honestly, probably not yet. Below $10K/month, the highest leverage is in creative volume and offer testing, not in the structural fixes the audit catches. Run the post-level method yourself. Come back when spend is higher.
My brand is in supplements, fashion, electronics, or baby care. Does the method change?
The method does not change. The patterns within each stage do. Supplements brands tend to leak in stage 5 (cost-cut accounts mistakenly run like scaling accounts). Fashion brands leak hardest in stage 7 (the checkout-to-purchase drop on mobile). Electronics brands leak in stage 9 (creative variety is too narrow because the SKU range overwhelms the testing capacity).
You can run this audit yourself. The 10 stages above are the full sequence I use, in order. Realistically it takes a focused operator four to six hours.
If you don't have four to six hours, or you want a second pair of eyes that's audited $65M+ across Meta, Google, and Amazon for D2C brands, the Free Quick Scan is what I built for that. I'll record a private 5 to 7 minute Loom walking through the leaks I find on your account using public data only. You'll have it in 48 hours.
Get Your Free Quick Scan →

- No account access needed
- 48-hour delivery
- Money-back guarantee
- $65M+ ad spend audited