Listen Now

How did our clients actually perform in March 2025?

On this episode of the podcast, Taylor and Richard break down the latest performance scorecard across our client portfolio … revealing exactly how we stacked up against forecasted revenue, spend, and contribution margin goals.

Spoiler: We beat margin expectations by +11.68%, but that’s just the beginning.

Here’s what you’ll learn:

  • The exact numbers behind our March forecast (and how close we came)
  • Why most agency case studies are misleading
  • How to evaluate an agency with real data, not anecdotes
  • How CTC forecasts client growth using a self-scrutinizing model
  • Why contribution margin is our north star metric—and what that means for your brand

If you’ve ever wondered how to hold an agency accountable (including your own), this is the episode for you.

Show Notes:
  • Common Thread listeners get $250 by depositing $5,000 or spending $5,000 using the Mercury IO credit card within your first 90 days (or do both for $500) at mercury.com/ctc!
  • Explore the Prophit System: prophitsystem.com
  • The Ecommerce Playbook mailbag is open — email us at podcast@commonthreadco.com to ask us any questions you might have about the world of ecomm

*Mercury is a financial technology company, not an FDIC-insured bank. Checking and savings accounts are provided through our bank partners Choice Financial Group, Column, N.A., and Evolve Bank & Trust; Members FDIC. The IO Card is issued by Patriot Bank, Member FDIC, pursuant to a license from Mastercard. Learn more about cashback. Working Capital loans provided by Mercury Lending, LLC NMLS ID: 2606284.

Watch on YouTube

[00:00:00] Richard Gaffin: So let's, let's dive into some of these numbers then. Specifically, you mentioned that in March 2025, revenue missed forecast by 0.79%. So not a huge deal, but it is under. Spend exceeded forecast by 4.59%, and contribution margin exceeded forecast by …

[00:00:15] Taylor Holiday: We beat contribution margin by 12%. And I'm gonna explain how that's possible, 'cause right now you're doing the math in your head, like, wait a second: how could you miss on revenue, be over on spend, and be ahead on contribution margin?

[00:00:24] Richard Gaffin: Hey folks. Welcome to the Ecommerce Playbook Podcast. I'm your host, Richard Gaffin, Director of Digital Product Strategy here at Common Thread Collective. And I'm joined as I always am – he's wearing his, well, is this the New York Yankees or what's the …

[00:00:37] Taylor Holiday: This is the Costa Mesa

[00:00:39] Richard Gaffin: Costa Mesa Yankees. That's right.

Wearing his Costa Mesa Yankees fit. We've got Taylor Holiday, our CEO here at Common Thread. Taylor, what is going on today, man?

[00:00:48] Taylor Holiday: Yep. It's game day. We've got the five o'clock showdown against the Costa Mesa Padres. So this helps: I only have to change one last article of clothing as I make that switch later this afternoon.

[00:00:58] Richard Gaffin: Now, are you the coach? Head coach? Assistant?

[00:01:02] Taylor Holiday: The manager.

[00:01:02] Richard Gaffin: Oh, the manager, of course. Right, I've gotta use the right baseball term. That's right. Nice. All right, cool. Well, let's jump into it today. So I believe last time you and I talked, Taylor, we talked a little bit about our Growth Accelerator program. And one thing that we really love about the Growth Accelerator program is that Joy, who heads up that initiative, releases a report card on how all of his clients are performing across a number of different metrics.

And for a number of reasons (obviously, we love that idea), we're actually producing, I believe this week, a similar thing for CTC's clients, to give you guys a sense of how our clients are performing across this entire index. There's a few things we wanted to call out today, but I think the first thing that makes sense is to dig into the actual report card we have in front of us, so we can maybe throw a graphic up on the screen.

But Taylor, why don't you tell us a little bit about what we're seeing here in March?

[00:01:55] Taylor Holiday: Okay, so first I wanna level set on why we're doing this. And I wanna do this through the lens of: how should you, as a listener, evaluate agencies? Okay. This is a topic that there are probably a thousand podcasts and blog posts on. I know I've written one historically about how David Ogilvy recommends that you do it.

Here's my problem with every single one of those recommendations: in every single case, they're entirely qualitative and not quantitative, because there is actually no way to get to the actual quantitative outcomes. The actual thing you care about is: does it work? Right? I think about the way that we treat medicine, or the way that drugs make it to market.

You don't ask friends if aspirin works, right? That's not the mechanism we would use to evaluate the criteria for making those kinds of decisions. But in agency world, that's sort of all we're left with: find a client that's worked with them. And so you get one anecdotal data point across a business.

In 10 years we've worked with probably a thousand customers, right? So go ask somebody else. Or you could meet and interview the team. That's another one, which totally depends on your individual ability to assess someone's capacity in, what, a 30-minute conversation? With someone I like? It's not how you'd interview an employee. It's not how you'd do anything.

So I find it to be a really insufficient process. But it's on us as agency owners to actually provide more scrutiny of ourselves, publicly available, so that people can make that decision. So that's one: it's really important, for helping people make decisions, to provide quantitative results of the actual outcomes. That's part one of this. The second part, and more important for me even than people's perception of the results, is to self-scrutinize our own system and ask: is it working?

I will regularly enter this depressive quagmire state where I can convince myself that what we're doing is totally random number generation, and we have no idea if we're being helpful or not. And the idea of spending my life doing something that might be totally useless is a very depressing thing.

So I've been a huge fan of watching Kyle Boddy and the Driveline team, to borrow a baseball reference, who really obligate themselves to publishing the results of the things they're testing, the hypotheses they're after, and the way that their training materials and modalities actually impact players.

And I think the obligation should be the same for us as service providers: build a system that self-scrutinizes with data and asks the question, are you delivering on the promise that you're making, or the desired outcome that you're trying to create? The beautiful thing is we have a very clear way to do that, because every month we take on the responsibility of setting the expectation of the business for our customers.

So we handle financial forecasting: revenue, spend, contribution margin, new customer revenue, returning customer revenue, Facebook spend, Facebook ROAS, efficiency, across all these different metrics. Something like 50 metrics, every day, for the month. What that means is that at the end of the month, it's very easy for us to say: did we do what we said we were going to do?

Yes or no? If no, where did we miss and why? And that allows us to do two things operationally. One, we can hold ourselves accountable and publish the results, which is what the scorecard is. Two, we can improve the modeling. We can ask ourselves where we were wrong in terms of the data structure that built the expectation, or whether we were right or wrong on an execution basis.

What did we fail at and what can we adjust? And so it allows us to iteratively improve the system under very publicly available scrutiny. And so that's my goal: to eliminate the single case study of the one brand that we worked on three years ago. And we have those on our website. I hate it. Great, you worked with '47 Brand two years ago and this one time you did a campaign. That has nothing to do with the experience that a customer's gonna have next month.

But I gotta give Joy a high five for starting this, saying: here's every customer that I have, and here's how they performed relative to expectation across the entire board. You can see it. Do you wanna be one of these brands?

[00:05:55] Richard Gaffin: Yeah. So, actually I was gonna say, I'm glad you brought up the case study element, because one potential objection to your assertion that the choice of an agency is largely subjective is: hey, we did look at the case studies, and they said X and Y.

Or then you see the other thing, which is agencies putting up: oh, we've achieved a 10 ROAS across $10 billion of spend over X amount of time. On the face of it, those are very obviously useless metrics. But talk about the ways in which those things are useless, and why this set of metrics is better.

[00:06:29] Taylor Holiday: Well, one, they only publish the positive ones. That's what a case study is: it's gonna be a positive outcome. So one, it's only one side of the equation. And two, it's a previous, bygone era in many cases, from people who may or may not even be at the agency anymore.

So I think what is much more interesting, and what I hope someone actually sees in our data, even in this first scorecard, is that we missed revenue. We didn't achieve the revenue outcome, but we were within our goal of plus or minus 10% to target. We were about 1% under the revenue expectation, over on spend, but ahead on contribution margin.

That's the TLDR of last month, and we'll talk a little bit about what that means and how that happens in a second. But here's what I hope by publishing the negatives: we're gonna build confidence with you that we measure our results, that we learn from them, and that we seek to improve. And that's actually the key. You wanna be a part of a system that self-scrutinizes and improves.

Because every agency has historical wins and historical losses. The question is, are they going to deliver a constantly improving service for you over time? And that's what I'm committed to trying to create.

[00:07:36] Richard Gaffin: Right. Okay. So let's dive into some of these numbers then. Specifically, you mentioned that in March 2025, revenue missed forecast by 0.79%. So not a huge deal, but it is under. Spend exceeded forecast by 4.59%, and contribution margin exceeded forecast by …

[00:07:52] Taylor Holiday: We beat contribution margin by 12%. And I'm gonna explain how that's possible, 'cause right now you're doing the math in your head, like, wait a second: how could you miss on revenue, be over on spend, and be ahead on contribution margin? I'm gonna explain in a second.

Yeah. So what we're doing here is we're saying: across all of our customers, we forecasted every single one of them.

Where did the aggregate revenue land relative to expectation? In the future, I'm also hoping to constantly improve this and show you the distribution. In other words, not just the median or average outcome, but how did we do across all the customers? And the idea is that hopefully they cluster toward plus or minus 10% to target across the portfolio, versus a large bell curve distribution where some were a hundred percent off in either direction and some were right on.

We wanna cluster them around accuracy, 'cause we wanna be more accurate across more customers. That's the ambition. So we're gonna continue to improve how we analyze this data for ourselves and how we publish it. That's one thing. Two is that we do this for every individual growth strategist, across their individual portfolio.

And then we look at how each of them performed in this exercise. We like to say that great forecasting is an exercise in execution more than it is in modeling, but it's a collaborative effort. You have to build a good model, and then you have to execute to that model. So how did they do?

If they were off, why? If they were on target, how did they accomplish it? We have a discussion, qualitative learning from that exercise, to try and figure that out. But in this month, revenue was a little soft, spend was a little over, and contribution margin was ahead. And generally speaking, our system is going to drive toward contribution margin as the primary value.

And so what happens here, in this case, is an indication that you've got more revenue off of your returning base and less revenue off your new customers. New customer efficiency, we know, has been a challenge for the last 30 days, but brands were able to extract more contribution margin by getting a higher percentage of their revenue from their existing customer base.

That's generally what happens when you see this kind of metric set. And that's consistent with what we're seeing right now: there is softness in new customer acquisition, which is not surprising, I'm sure, given the broader market concerns.
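
Editor's note: to make the math Taylor is describing concrete, here is a minimal sketch with hypothetical numbers (the per-segment margins and the new/returning split are assumptions, not CTC's actual client data). Because returning-customer revenue carries a higher margin than discounted new-customer revenue, a mix shift can lift contribution margin even while total revenue misses and spend runs over:

```python
# Hypothetical illustration (not CTC's actual data): a revenue miss plus a
# spend overage can still beat contribution margin if the revenue mix shifts
# toward higher-margin returning customers.

NEW_MARGIN, RET_MARGIN = 0.45, 0.70  # assumed gross margins by segment

def contribution_margin(new_rev, ret_rev, spend):
    """Gross profit on each revenue segment, minus ad spend."""
    return NEW_MARGIN * new_rev + RET_MARGIN * ret_rev - spend

# Forecast: 65% of revenue expected from new customers.
cm_forecast = contribution_margin(650_000, 350_000, 200_000)
# Actual: new-customer acquisition came in soft, returning base over-delivered.
cm_actual = contribution_margin(432_000, 560_000, 209_200)

print(f"revenue vs forecast: {992_000 / 1_000_000 - 1:+.2%}")      # -0.80%
print(f"spend vs forecast:   {209_200 / 200_000 - 1:+.2%}")        # +4.60%
print(f"margin vs forecast:  {cm_actual / cm_forecast - 1:+.2%}")  # +11.76%
```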

[00:09:48] Richard Gaffin: Right. So let's talk about the qualitative discussion that happens when, say, a growth strategist misses a forecast or exceeds it by a certain amount. What is the hierarchy of thought work that goes into determining exactly what happened?

[00:10:02] Taylor Holiday: That's right. So we'll pull it up. And the beautiful thing is that we can see it. Maybe, Corey, we could put up a Statlas screenshot with the client hidden to show a full month view, so that you can see how every day performed relative to expectation. And we're gonna discuss the misses.

So one, we can now break it down a level further to say: okay, did we miss on returning customer revenue or new customer revenue? Was it a problem of efficiency or volume? Which channel struggled? We can see in the data, in Statlas, which input led to the failure in that output.

So that allows us to target the conversation a little more specifically. And then it's a dialogue: okay, from your perspective, why did we fail? Was there a launch that was expected to do better than it did? That's often the case. The events that are the most novel are the hardest to predict, so guessing how well new sale X or new product Y is going to do tends to be one of the harder exercises in forecasting.

So that's often the case. Maybe there was an inventory issue, maybe something got delayed that was expected to happen, or maybe we just didn't generate the efficiency that we wanted to in Meta that month. All of those could be possible. But what we wanna do is hear self-reflection, and then eliminate victimhood.

Often in those dialogues, what you'll start to hear is that people's default position is defensive: here's why it wasn't my fault, necessarily. And we want to give them back authority over the problem. So in other words, if you say to me, oh, well, we ran out of inventory on SKU X, Y, or Z,

the conversation might move toward: well, what did we fail to do in the planning process to not recognize that there was an inventory limitation, right? So we always wanna try and help them see the mechanism by which they can better anticipate the future. Now, I recognize that we cannot perfectly predict the future in every area, but a hundred percent responsibility is sort of a value here at CTC. And so we wanna recognize: what could we have done to give ourselves a better chance of understanding the future? And we always embrace that as much as we can.
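
Editor's note: a rough sketch of the kind of breakdown described above, with hypothetical field names and figures (not CTC's Statlas schema). Modeling new-customer revenue as spend times ROAS lets a miss split cleanly into a volume effect and an efficiency effect:

```python
# Hypothetical miss decomposition (field names and numbers are made up,
# not CTC's Statlas schema). New-customer revenue is modeled as
# spend * ROAS, so a miss splits exactly into volume and efficiency effects.

forecast = {"ret_rev": 400_000, "meta_spend": 150_000, "meta_roas": 2.0}
actual   = {"ret_rev": 405_000, "meta_spend": 160_000, "meta_roas": 1.7}

def new_rev(p):
    return p["meta_spend"] * p["meta_roas"]

# Segment-level deltas: returning vs. new customer revenue.
print(f"returning rev delta: {actual['ret_rev'] - forecast['ret_rev']:+,}")
print(f"new rev delta:       {new_rev(actual) - new_rev(forecast):+,.0f}")

# Volume effect: extra spend valued at planned ROAS.
vol = (actual["meta_spend"] - forecast["meta_spend"]) * forecast["meta_roas"]
# Efficiency effect: ROAS shortfall applied to actual spend.
eff = (actual["meta_roas"] - forecast["meta_roas"]) * actual["meta_spend"]
print(f"volume effect:     {vol:+,.0f}")  # +20,000
print(f"efficiency effect: {eff:+,.0f}")  # -48,000 -> efficiency drove the miss
```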

[00:12:04] Richard Gaffin: Right. Okay, so let's talk then about this particular blog that we're putting out. We have some specific tactical observations from our growth strategists who, in this case, actually did exceed their forecast, or hit their forecast. So we can go through some of those now.

Are there any particular ones that jump out to you as being especially important, or maybe unusual at this time?

[00:12:26] Taylor Holiday: Yeah. So the other side of it, I was just talking about the issues, is to ask the question: all right, what did we do when we got it right? What happened? And so I'll give you some examples. We'll sit and ask each growth strategist for specific examples. So here's one from Anmar.

He says: we mapped expected revenue to actual product launches and short promos until we were confident we'd hit that number. So forecasting wasn't just a monthly report; it became a planning framework. By mapping the planned revenue contributions from launches and promotions onto their forecast, brands can identify the gaps before the month starts and take action accordingly.

So in other words, you build it out: okay, here's the revenue goal, and here's the marketing calendar. Oh, we don't have enough actions to get there. By not relying just on the run rate, you can tie your forecast directly to specific actions, knowing which events are supposed to carry the month, and what the contingency plans are if they don't.

The example I like to use here is: if you create an expectation for every email that you send, and how much revenue it's going to generate relative to the overall email revenue that you need, then on day three, when one of those emails is soft, you can make a choice to either add another email, do a resend, or redistribute some ad spend budget somewhere else to make up that deficit.

'Cause it's clear and obvious from the beginning. And so that level of action, expectation, realization is sort of the holy trinity of great forecasting: the action, the expectation of the value of the action, and the realization of that value.

It allows you to really see: okay, am I doing what I intended to do to help reach this goal? And if not, what other actions can I take to make up that gap?
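
Editor's note: a minimal sketch of that action, expectation, realization loop for an email calendar. The events, revenue figures, and the 20% tolerance before acting are illustrative assumptions, not a CTC rule:

```python
# Hypothetical email calendar tracked against per-send revenue expectations;
# events, numbers, and the 20% tolerance are illustrative assumptions.

planned = [
    ("day 1: welcome-series push", 12_000),
    ("day 3: new-arrival email",   18_000),
    ("day 7: promo announcement",  25_000),
]
actual = {
    "day 1: welcome-series push": 12_500,
    "day 3: new-arrival email":    9_800,  # day 7 hasn't run yet
}

for name, expected in planned:
    realized = actual.get(name)
    if realized is None:
        print(f"{name}: pending (expected {expected:,})")
    elif realized < 0.8 * expected:  # assumed 20% tolerance before acting
        gap = expected - realized
        print(f"{name}: SOFT by {gap:,} -> plan a resend or shift budget")
    else:
        print(f"{name}: on track ({realized:,} vs {expected:,})")
```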

[00:14:05] Richard Gaffin: Right. No, that makes sense. And it's interesting reading Anmar's quote here: we mapped expected revenue to actual product launches and short promos until we were confident we'd hit the number. In some senses that sounds kind of obvious, but in practice, that's not often what happens.

The actual mapping of revenue expectations to the specific behaviors you're planning on taking in the upcoming month. And I think the takeaway of not relying on run rate is interesting there: this idea that there's some expectation of, hey, we performed this way last month, we're gonna perform something similar this coming month, without any actual tie to specific, concrete behaviors.

[00:14:41] Taylor Holiday: Similarly: having additional marketing moments planned helps make the forecast a bit more bulletproof. This is a very common strategy, which we call the reserve marketing moment. It's a thing you plan to do if there's a problem.

So you plan for failure. It's the "if it rains on the trip, I have a raincoat" kind of vibe, right? You want to assume that your plan is subject to volatility and missed expectations. Similarly, anytime we're halfway through the month. I just sent this email: we have a client right now that's way ahead of their contribution margin goal.

And so I sent a forward to the growth strategist, like: hey, what are we doing to spend that excess contribution margin to bulletproof future revenue? In other words, you don't wanna go out and massively over-deliver either. We wanna deliver the expectation in as many months as possible. And so in each of these cases there are contingencies relative to the expectation, in the event that you fail or don't fail.
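
Editor's note: a tiny sketch of that mid-month surplus check, with hypothetical numbers and an assumed 10% buffer before acting:

```python
# Hypothetical mid-month check: if contribution margin is running well ahead
# of goal, flag the surplus for reinvestment. The 10% buffer is an assumption.

cm_goal_to_date   = 150_000
cm_actual_to_date = 175_000
surplus = cm_actual_to_date - cm_goal_to_date

if surplus > 0.10 * cm_goal_to_date:
    print(f"ahead by {surplus:,}: consider reinvesting in new-customer spend")
else:
    print("within range: hold the plan")
```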

[00:15:33] Richard Gaffin: Yeah. And I think that's interesting too, to tie back to what you were saying previously about marketing calendar events being the most difficult to predict: specifically product launches, and maybe a sale or offer that you haven't tried before. Those are the two things that have to happen if you're going to take the actions that are going to grow revenue in the upcoming month.

But they're also the most difficult things to forecast, which is maybe why people shy away from even trying. But the idea is: because you're not exactly sure what you're gonna get from those two things, you need to have some sort of backup plan, a backup set of actions, to fill the gap that the failed actions leave.

[00:16:06] Taylor Holiday: Yeah, I think you're exactly right. It's so much easier to say: if I increase spend X, I'm gonna generate at least revenue Y. And you can apply some sort of degradation to that; it's obvious. But it's a much harder exercise to go: okay, if I come up with this new idea that I've never tried before, what's that gonna be worth?

And therefore, how much resource should I put against it? This is where a part of our modeling exercise that we call the event effect model comes in, where what you're trying to do over time is connect qualitative and quantitative outcomes. So we upload every brand's historical marketing calendar going back two years, and then we label every event, every day.

We notice what happened when there was a product launch, what happened when there was a promotion, what happened when that email was sent, so that you can begin to, at the very least, understand comparatively: when I did something like this in the past, what was the result? And therefore, what's my baseline expectation, which I either amplify or decrease relative to my understanding of this event against that one?

So we are working really hard to tie those together for our growth strategists, so that they aren't just putting their thumb up into the wind and guessing at the numbers.
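
Editor's note: a minimal sketch of what an event effect lookup could look like on a labeled daily history. The column names, the toy dataset, and the lift definition (event-day average over the no-event baseline) are illustrative assumptions, not CTC's actual model:

```python
# Hypothetical "event effect" lookup: label historical days by event type,
# then compare average event-day revenue to the no-event baseline.
# Column names, data, and the lift definition are illustrative assumptions.
import pandas as pd

history = pd.DataFrame({
    "date":    pd.date_range("2023-01-01", periods=8, freq="D"),
    "revenue": [10_000, 10_500, 22_000, 9_800, 15_000, 10_200, 24_500, 9_900],
    "event":   [None, None, "product_launch", None,
                "promo_email", None, "product_launch", None],
})

baseline = history.loc[history["event"].isna(), "revenue"].mean()
lift = (history.dropna(subset=["event"])
               .groupby("event")["revenue"].mean() / baseline - 1)
print(lift.round(2))  # product_launch ~ +1.31, promo_email ~ +0.49 vs baseline
```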

[00:17:13] Richard Gaffin: Yeah. Okay. So let's wrap it up by bringing it back to the report card for a second. I'll reiterate the numbers: 0.79% behind on revenue, 4.59% over on spend, 11.68% ahead on contribution margin. So the overall picture is that it seems like we're doing pretty well. But is there anything else you wanna pick out about what these numbers mean for how we're performing?

[00:17:35] Taylor Holiday: Yeah. So the reason you should care about this is because our job is to set expectations and meet expectations. That's every agency's obligation to you. And here's the thing: we don't get to just write the number in as the goal. You as the brand have to agree. It's your business; it's your goal, ultimately.

And so our job is to deliver on this mutual expectation, and we're gonna publish: did we do that, yes or no? And you have my word that we're just gonna publish it. If we miss, we miss, and we'll explain why, and we'll review, and we'll iteratively improve. But go look at this from the Growth Accelerator, go look at this from CTC and our scorecard, and then ask yourself: who else is publishing this? And obligate them.

Ask them: hey, last month, how did your brands perform? Can you provide me a report card of your business? Add in a quantitative assessment of the business that you're going to choose. And then just know that our system is entirely predicated on the ethos of: we're as good as we've ever been, and the worst we'll ever be.

And so what that means is that whatever this outcome was, whatever the tooling and architecture and technology that underpinned it, whatever the service and media buying strategy, we're gonna take it and continue to improve it over time. And I want to be the most transparent with our outcomes, because I think that's a better truth than just asking two former clients, or an ex-employee, or even an active client. Not because they won't say good things.

Maybe they'll say amazing things that are overstated. The point is, you should add a layer of quantitative assessment to the process of deciding who's gonna help you generate the outcome you care about.

[00:19:02] Richard Gaffin: That's right. All right, folks. Well, if you wanna be part of this client scorecard, i.e. become a client, you know where to find us: commonthreadco.com. Hit that "hire us" button. We would love to chat. Taylor, anything else you wanna hit on this?

[00:19:13] Taylor Holiday: Nope. Follow along. We'll have it published on our website and sent out as an email. If you're not subscribed to our email list, do that. And of course, like and subscribe on YouTube, and let us know what other data you'd wanna see. What do you want us to publish? Would media spend and efficiency help?

What other things could we do to help bring transparency to our work? 'Cause it's gonna help us improve internally and give you the confidence that you'll have a great partner in working with us.

[00:19:36] Richard Gaffin: All right. That's right folks. Okay. We'll see you all again next week. Thanks for listening. Take care.