Listen Now
Incrementality tests are “in”… but the real problem is what you do after the read.
In this episode, Taylor sits down with Olivia Kory (Chief Strategy Officer at Haus) and George Davis (CMO at Cozy Earth) to unpack the messiest part of modern measurement: operationalizing incrementality when results swing, channels conflict, and “platform ROAS” can’t be trusted.
If you’ve ever asked:
- “Our holdout came back way lower than Meta… now what?”
- “Why don’t test results replicate month-to-month?”
- “How do I actually use an incrementality factor in real budget decisions?”
- “If everything is under 1.0 iROAS… should we cut spend or keep investing?”
… this one is for you.
What we cover:
- Why incrementality requires a holdout (and why “spend up / spend down” isn’t enough)
- The replication problem: why results change even with “clean” tests
- The gap between measurement and optimization (platforms optimize for attribution, not incrementality)
- How operators use incrementality factors without letting them become a blunt instrument
- Why channel vs. channel is often the wrong fight (and why profit thresholds matter more)
- iROAS → IMR (Incremental Marginal Return): a more intuitive way to compare performance
- Budget cadence: daily realities vs monthly allocation decisions
- Long-term effects, “adstock” claims, and why post-treatment windows matter
- Practical levers that can improve results: creative, account structure, exclusions, distribution expansion (Amazon/retail)
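The arithmetic behind a few of these terms is worth making concrete before the conversation. Here's a minimal sketch, using hypothetical numbers and the commonly cited definitions of iROAS and the incrementality factor (not any one vendor's exact methodology):

```python
# Hypothetical holdout-test numbers, purely to illustrate the arithmetic.
platform_roas = 3.5            # revenue per $1 spent, as reported in-platform
spend = 100_000                # channel spend during the test ($)
incremental_revenue = 120_000  # lift vs. the holdout, measured by the test ($)

# iROAS: incremental revenue per dollar of spend.
iroas = incremental_revenue / spend  # 1.2

# Incrementality factor: what fraction of the platform-reported return is real.
incrementality_factor = iroas / platform_roas  # ~0.34

# One simple use in budget decisions: deflate platform-reported revenue
# before comparing channels or computing blended efficiency.
platform_reported_revenue = platform_roas * spend                     # 350,000
adjusted_revenue = platform_reported_revenue * incrementality_factor  # 120,000
```

The point of the factor, as discussed in the episode, is to deflate in-platform numbers before making cross-channel comparisons, not to treat any single test read as a permanent truth.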
Got a weird incrementality result? Drop it in the comments. We’ll let you know what we’d do next.
Read this next: CTC Core Methodology Series: Marketing Measurement
Show Notes:
- Axon is offering $5K ad credit when you spend $5K. Go to axon.ai/en/ctc to set up your first campaign.
- Explore the Prophit System: prophitsystem.com
- The Ecommerce Playbook mailbag is open — email us at podcast@commonthreadco.com to ask us any questions you might have
Watch on YouTube
[00:00:00] Taylor: All right. Welcome to a very, very special episode of the Ecommerce Playbook podcast. We're in person today. This is the office. We're in a little bit of an odd setting between two ferns here. Because we had some power outage issues because of the big rainstorm we got in California. It rained for 12 minutes and we nearly lost the power grid.
[00:00:20] But I'm really excited to have two of, I would say, like my close internet friends. You guys are my close internet friends. George, you've, you've manifested into my real life a little further. Yeah. Yeah. Olivia and I just met for the first time in real life, but we hang out a lot in the dms. Today's your first time
[00:00:35] George: meeting.
[00:00:36] Taylor: The first time we've ever met in person.
[00:00:37] George: Wow. I didn't know that.
[00:00:38] Olivia: Seriously. You were adversaries for a while.
[00:00:40] Taylor: No,
[00:00:40] Olivia: you know,
[00:00:41] Taylor: see, I was gonna start with all the positives and you just went negative adversary to most
[00:00:44] George: everyone on the internet, so,
[00:00:46] Taylor: no,
[00:00:47] Olivia: I think, I think this, this might be a death match.
[00:00:49] Taylor: No. Okay, good.
[00:00:50] That's what I'm here for. I'm gonna set this up differently. Don't listen to them. They're actually here to help me, and to help you, navigate what I think is one of the most challenging things for operators and marketers to deal with right now: we've all been inundated with the idea, and many of us have accepted, that incrementality and holdout testing is a critical component of running a modern marketing playbook.
[00:01:13] But I think that there is a gap in the market in information about how to actually operationalize these results and put 'em into practice. And I know we're experiencing this, I feel a level of confusion myself that's been challenging. So I've brought two of the smartest people I know who are dealing with this all the time.
[00:01:29] Olivia, obviously at Haus. What is your exact title?
[00:01:32] Olivia: I'm the Chief Strategy Officer.
[00:01:33] Taylor: Chief Strategy Officer. That's a C level folks. You heard it there. That's a c from Olivia. And she's probably run more tests than anybody in the world. And then George is the
[00:01:44] George: CMO at Cozy Earth.
[00:01:45] Taylor: Okay. CMO at Cozy Earth. An amazing Do you guys call yourself, what was like the specific category you would say Cozy Earth is?
[00:01:52] Is it a
[00:01:53] George: Home Goods?
[00:01:53] Taylor: Home goods? But also apparel.
[00:01:55] George: Apparel, bath. I mean, home goods is bath, bedding. And then we have apparel,
[00:02:00] Taylor: Amazing business, who's also participated, as an individual brand, in a lot of testing. And then at CTC we run our own set of testing and holdouts and MMM and all sorts of different things that we then have to deploy for our customers.
[00:02:13] And so building this into practice, from setting up the test, to where to test, to what you do when you get a result, how you actually turn this into action, is something we wrestle with directly with George a lot. And then Olivia and I are always kind of bouncing ideas back and forth, because I think it's challenging.
[00:02:27] And so I think we're here to do that. But maybe for those people who don't know you guys, just give us 35 seconds of your bio and what you do at your corresponding companies. Olivia, go first.
[00:02:40] Olivia: Sure. Before we start though, George, you, you pulled our LTV, right? Ooh cozy Earth purchases. Can you put on the record here?
[00:02:49] George: $1,400 from Olivia in the last year. So what about
[00:02:52] Taylor: me?
[00:02:53] George: I think it was a hundred dollars, and most of the orders are free 'cause he texts me asking for free things.
[00:02:59] Taylor: Okay. Well, also, here's where I one-up you. My mom works for Cozy Earth.
[00:03:04] George: That's true.
[00:03:04] Olivia: What?
[00:03:05] George: That's
[00:03:05] Taylor: true. Mother. His
[00:03:06] George: mom does work
[00:03:06] Olivia: for us. How did this, how did this not come up?
[00:03:09] Taylor: Because I needed to save some of it for the podcast. What? Yeah. So my mother works at the retail store here in Corona del Mar for Cozy Earth. So we're just on the take side of the Cozy Earth relationship. You're a giver.
[00:03:20] Olivia: All right.
[00:03:20] George: Granted I didn't look at your wife's profile.
[00:03:22] Taylor: Yeah, thank you. She's
[00:03:23] George: the real spender.
[00:03:25] Olivia: The pajamas are a great gift. I'll do that plug right now. Thank you, Olivia. If you have a gift to get, go for the pajamas. So what do I do? 30 seconds on Haus? So I actually was one of the first customers of Haus, at Sonos. I was leading growth. We had a last-click attribution model for D2C pointing us in one direction.
[00:03:44] We had an MMM pointing us in another direction, and sort of pitched experimentation as the way to really start to quantify what marketing was delivering to the business. And it was just going so well, in such a short amount of time, that I made the move over. I can't believe I'm a vendor now, but I have been doing this for almost four years.
[00:04:02] And I'm excited we're having this conversation too. I think this is really important, because the first couple years of Haus was just convincing people that incrementality is important. And I think we're sort of through that now, through the first chapter of, okay, we need this. And now it's, how do I use this as an operating system and really operationalize this practice in a way where it's driving business outcomes?
[00:04:25] So that is what I do. I do a little bit of everything, but mostly spending time with our customers and making sure that they're getting value out of this practice. I say it's more than a product. It's the whole thing, you know, both the product and the kind of strategic layer that comes with it.
[00:04:39] George: Cool. Oh, you got something to
[00:04:41] Taylor: say? Well, I was just gonna say, Olivia is out here for a customer success event with a lot of our customers, and I enjoy watching her do her job. So much so that I'm constantly trying to convince her to come work for me. Sorry, Zach. And she just keeps saying no. I'm gonna say that.
[00:04:55] So you're in good hands. Because I love watching her care about the work that she does, and she cares a lot about it. And so that's why, like I always tell people, arguing with them on the internet is a compliment. It's the highest compliment I could give, 'cause it means I really value your opinion.
[00:05:08] And so I, I wanna run up against it to learn how like you got to where you are. And Olivia's one of those people for me, so excited to have her here in person. And she's, she's important. Y'all, she's like, she's hobnobbing around with the CEOs of all of the most important platforms in the world. So she won't say it, but she's, she's in the know and connected.
[00:05:27] Olivia: No comment.
[00:05:27] George: Cool.
[00:05:28] Olivia: No comment.
[00:05:29] George: Yeah. Olivia's awesome. I've been at Cozy Earth for eight years. I started when we had like three or four employees. I was doing customer service. And then we started leaning into digital marketing a little more. I, I owned that. And the company's grown. And I, I got to stay in my seat.
[00:05:44] I haven't done a bad enough job to get fired yet. So anyway, that's my background. Obviously relatively involved in all of our holdout tests and our relationship with Haus, and excited to chat today.
[00:05:56] Taylor: And so I'm gonna do the thing where I compliment both of 'em after they, 'cause they're gonna undersell themselves.
[00:06:00] But George is, if you're somebody who's like, maybe you're in your first job in a marketing company and you want to understand the playbook to how do I someday continue to grow in this organization to the point that you could be the CMO from where'd you start? Customer service rep.
[00:06:15] George: Customer service rep,
[00:06:15] Taylor: right?
[00:06:16] From customer service rep to CMO, George is the person. And here's the key: genuine curiosity and humility, and a constant desire to learn. George is a CMO who asks questions in every conversation he's in, and asserts no presumed understanding or knowledge greater than anybody else he's around.
[00:06:33] He's constantly trying to learn. And that's why he keeps growing, 'cause he keeps trying to find ways to do that. And it's why I enjoy talking to him, 'cause I just talk most of the time. Then, you know, it's easy 'cause I,
[00:06:43] George: it's a great, that's a, it's a great relationship. Come out here, he yells at me, tells me what I need to do better and I, I love it.
[00:06:48] That's why we, that's why I pay him.
[00:06:49] Taylor: Yeah. So George,
[00:06:50] Olivia: I'm so glad you're here.
[00:06:51] George: Thank you Olivia. I'm glad to be here. Would
[00:06:53] Taylor: you have done it with,
[00:06:53] Olivia: I can't believe if, if he wasn't. I believe you need to come on, but I'm so glad you're here.
[00:06:56] George: Of course. Very
[00:06:57] Olivia: thankful.
[00:06:57] George: Yeah. I've, I've been looking forward to this.
[00:06:59] It's fun. We'll get this on film. We have these conversations all the time. Hopefully it's helpful to somebody. We'll see.
[00:07:06] Taylor: Yeah. So maybe we set the stage a little bit with some of the journey that we've been on, because this isn't about Cozy Earth today in the specific ways, but I think your journey is illustrative of what a lot of people will actually experience if they go down this road.
[00:07:19] So whether they're just starting their measurement journey or they've been on it for, what, two and a half years now, or however long you guys have been in it. There's a, there's a process here that I watch people go through all the time where there's a conversion moment
[00:07:32] Olivia: where suddenly they decide incrementality
[00:07:34] Taylor: matters and then there's this like enthusiastic testing process and then that first test comes back.
[00:07:41] And in some cases it could be inconclusive, in some cases it could be great, in some cases it could be bad. But then begins this process that's sort of like the trough of disillusionment. It's sort of like Gartner's hype cycle that we go through every time, with every customer, as it relates to this technology or this process, where they realize that there is no one single answer that solves all the business problems.
[00:08:03] And it becomes almost discouraging at first. So Olivia, I'd be curious of like, you've walked a lot of people through this. Yep. Like, how do you think about setting up how people should relate to the idea of a measurement journey if they're just getting started or maybe they've had one or two results?
[00:08:22] Olivia: Yeah. So this is a new metric. We are establishing baselines for a lot of our customers; they've never actually run an incrementality test before. And I think those are actually the hardest customers to educate in the early days, because a lot of our customers, bigger brands who have data science teams who have done this before, kind of come in knowing what to expect: these results are not gonna look like your in-platform ROAS.
[00:08:47] They are potentially going to be lower. It's not uncommon for us to see cost per incremental acquisition in the multiple thousands of dollars. And so for brands coming in who have never gotten these baseline reads on incrementality, it can be shocking. And to your point, there is definitely that journey that we go on with them.
[00:09:05] I'd say, though, the hardest moment is not after that first test, because I still think there's a lot of excitement around, all right, well, this read wasn't where I thought it would be, but let's go test some other channels. Mm-hmm. And those are gonna be a lot better. Mm-hmm. And we can then, you know, reallocate budget from low performer to high performer.
[00:09:22] I think the hard part is when all of the reads come back, and you have them all in hand across your key channels, and they're all under one in terms of incremental ROAS. And again, we are used to seeing this. Like when we got started with a large entertainment brand, you all know it.
[00:09:39] Their CPI was $4,000, and by the end, you know, through a year or two of working together, we got it down under a thousand. And that's why you do it. To me it's, why would you expect incrementality to be good out of the gate? So what we try to preach is: this is a journey. This is about how you improve upon this number, not a report card that you take back.
[00:10:01] But it's tough because, you know, we'll get into it, but sometimes our results do show that customers are overspending on marketing. And I think that's an unfortunate reality that most brands actually don't want to face, because a lot of these brands are trying to grow and they're trying to invest back into the business.
[00:10:19] So that's something we're wrestling with every day. In terms of, you know, as partners, what is the best way to guide them?
[00:10:24] George: Yeah,
[00:10:25] Taylor: I often play this role of poking at Olivia over certain positions, and one of the easy ones would be: okay, you are a business that makes money as I continue to persist in testing.
[00:10:38] So the idea that this is a journey, or that it's a thing I always need to do, feels very self-serving as an idea. And I think you and I have gone back and forth on whether the application of the benchmarks is sufficient, or how much you have to test; "test everything" is a phrase that gets used a lot.
[00:10:54] But I think it is important to understand that in every one of these cases we are taking a moment-in-time snapshot that reflects what was true for a period of time, and that is subject to lots of reasons why it might change. And so the idea that a single answer would be sufficient for all time
[00:11:16] is just not true in any way, shape, or form. And so I think there is certainly a reality that brands need to begin to embrace: that I'm not running a test, I'm making testing a part of my methodology. Yeah. And I think that disposition change in how you approach entering into this is really important: are you running a test, or is testing a part of the way you're gonna do measurement?
[00:11:40] Olivia: Yeah.
[00:11:40] Taylor: Those are very different places, I think.
[00:11:42] Olivia: Yeah. We should have done this up top: let's just lay out all of our incentives and make it super clear. And you said it, like, we are a measurement company. We want customers, we want these brands to be running more tests.
[00:11:55] I actually haven't seen a great correlation there. There are a bunch of brands who come in, you get two testing slots a month, there are brands who will fill those testing slots, and it's testing for the sake of testing. That's actually not how we drive value for our customers. I don't think there's a
[00:12:10] very high correlation between number of tests run and value. It's: are you actioning on the results? Are you doing anything with them?
[00:12:16] Taylor: Mm-hmm.
[00:12:17] Olivia: And so that is, you know, where and how we're successful: are you actually using these results to make business decisions? Yeah. And that's our metric.
[00:12:25] That is, I would say, our number one determinant of ROI and value.
[00:12:30] Taylor: So George, how did, tell us a little bit about your measurement journey. When did you start, how do you, how did you think about the role of incrementality and how has that evolved as you've been going through it?
[00:12:41] George: Yeah, so to Olivia's point earlier, everyone comes in with a perception of what the return is on Meta. And I was having a conversation actually yesterday with a brand doing about $15 million, and the in-platform number is like a 3.5. And he's assuming that's all of his new customer revenue, but when you start looking at the math, you're like, there's no way it's driving a three.
[00:13:01] It's probably driving closer to a one. He's never run a holdout test, but he's operating under the assumption that it is driving a 3x ROI. So I think there is this moment when a new brand comes in, they're operating, and so were we, before we started testing, as if the in-platform number is reflecting some version of reality. Whereas when we got our first test result back, it was very, very different than what we were seeing in Meta.
[00:13:24] And then from then on, we've almost always had a conversion lift study on, or a Haus study on. And so we've been able to see what Taylor mentioned, which is a very wide range of results, where one month it's this and the next month it's, you know, that, and there's no clear reason why the result changes so much month over month.
[00:13:51] And our effort is always to increase that number. Two and a half years in, it can be frustrating, because I feel like we are running out of levers to actually change the outcome. And it's really difficult, at least for us as a brand, to figure out what creates the outcome. So that's kind of our journey.
[00:14:12] And again. It's still figuring it out. We don't have it down to a science yet, but we're working on it.
[00:14:17] Taylor: So I wanna lay out some of the things I watch that are common, from what you just described, to what I see as the e-commerce brand measurement journey. When you start out, earlier-stage businesses, let's say you're in the seven-figure range, Meta ROAS is a really strong indication of your actual revenue realization.
[00:14:38] There's not other channels in the mix. You don't have existing organic demand. You don't really have existing customer revenue. So it is a very trustworthy signal of the impact of your media. So your organization builds orientation around improving that measure. It worms its way into all your dashboards.
[00:14:56] It becomes present in lots of ways, and then two things occur as you mature. You expand distribution of the media: you add in Google, now there's some TikTok, maybe there's a little AppLovin, and we're gonna try out YouTube. That introduces measurement complexity. Then you also get more of your revenue coming out of your existing customer base through email
[00:15:17] and SMS. As that channel grows as a percentage of revenue, it has an effect on the actual incrementality of your media and channels. Meta is gonna pixel all those purchases; those people have probably seen ads when they see an email purchase. And that's sort of phase two: existing customer revenue growth and slight channel introductions.
[00:15:35] Now ROAS degrades a little bit in quality. And so generally trying to move into MTA is sort of a first step, where now we're trying to assign some sort of multi-touch value to these different interactions, trying to sort out between the channels. Maybe an increased focus is where aMER became a metric, which I think Andrew probably deserves credit for really driving forward
[00:15:58] a lot. Acquisition marketing efficiency ratio: a focus on new customer revenue begins to emerge as really important. Then a third thing happens, in my experience, which is distribution expands. Now you're on Amazon, now you're in retail, now you have online retail competitors, so your actual product distribution expands.
[00:16:16] And then you start introducing top-of-funnel media. Now we're into TV or podcast or whatever. And now any sort of click-based measure basically collapses on itself in terms of its efficacy. And so as you mature as an organization, you have these signals that were really reliable, and it's hard to tell the exact moment that they stop being reliable.
[00:16:36] And so every organization has to go through this process of sort of recreating the measurement framework for themselves. And I feel like what got introduced now, and you guys deserve a lot of credit for popularizing this, is that, wait a second: we need to sort of ignore all of this click or view or attempts at any sort of MTA.
[00:16:53] We need to actually try and create a causal relationship between the spend in any location and the efficacy on the actual business metric you care about. And that's where I think holdout testing has really emerged. So maybe define these terms a little bit for people, because I think sometimes holdout, scale-up, like, there are different ways they get defined. Maybe you could give us the most common incrementality tests and what they're attempting to do.
[00:17:21] Olivia: Yeah. So a lot of people say what they do is incrementality. We kind of fundamentally believe at Haus that incrementality requires a holdout. You need a counterfactual to understand what would have happened anyway. This is classic scientific method: randomized controlled trials in healthcare.
[00:17:41] You cannot establish causation without the counterfactual of what would've happened anyway. And so if you are spending up and you don't have a holdout, that's interesting, but you don't know what your business would've done if you hadn't spent up. And so there are different ways to answer different types of questions.
[00:18:00] "What is the baseline incrementality of this channel?" is a different test design than "How much should I spend on this channel?", and both, I'd say, require different cell structures and test designs. But fundamentally, at the core, what we believe is that you need that holdout group: you need to turn off your marketing, or whatever you're testing, across some group of users on the conversion lift side,
[00:18:23] or across some regions, in order to actually understand the counterfactual. I think a lot of people also kind of conflate these natural experiments, "we turned this thing off and then we observed," which is a great fallback solution. Like, I think we still recommend these tests; we call them time tests.
[00:18:43] But it's really hard because there's so much happening in the business at the same time. If you turn something on right ahead of Black Friday, Cyber Monday, it is impossible to actually tease apart cause and effect, and disentangle which of those channels you lit up actually drove the business.
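The counterfactual logic described here reduces to simple arithmetic once a test has run. A sketch with hypothetical numbers, assuming the treatment and holdout groups have been matched or scaled to comparable size:

```python
# Hypothetical geo-holdout read: conversions per matched region group.
# The holdout group estimates the counterfactual -- what would have
# happened anyway without the spend.
treatment_conversions = 5_200  # regions where ads ran
holdout_conversions = 4_600    # matched regions with ads withheld
test_spend = 300_000           # spend in the treatment regions ($)

# Incremental conversions: observed minus counterfactual.
incremental_conversions = treatment_conversions - holdout_conversions  # 600

# Cost per incremental acquisition -- the read Olivia notes can land
# in the multiple thousands of dollars for some brands.
cost_per_incremental_acquisition = test_spend / incremental_conversions  # $500
```

In practice the hard part is everything around this subtraction: matching the groups, sizing the test to detect the lift, and not running it over a confounded window like Black Friday.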
[00:18:57] George: for sure.
[00:18:57] I mean, we were doing that before, where we would just scale things up or turn things off, and it was a terrible system. We had no reads. We had no idea; we would make assumptions, then we'd action on those assumptions, and then they'd be wrong. So a couple of questions, though. Yeah. You mentioned, like, for new brands.
[00:19:12] Taylor: Yep.
[00:19:13] George: There's this point in time where they need to start introducing incrementality testing. I have conversations with brands that are, you know, five to $15 million, maybe half of their revenue's coming from repeat. They have no idea how much of the demand is organic versus repeat versus Meta. They just assume all new customer revenue is coming from Meta, because that's the only paid channel that's turned on.
[00:19:33] Yeah. At what point should they start testing incrementality? 'cause I actually think it's earlier. I think there's a lot of brands that have a decent amount of organic being driven and they're attributing all of that to meta.
[00:19:47] Taylor: So yeah, I'd be curious about your answer here too. I think that incrementality is a step to take when you've built distrust in your present measurement system's ability to generate the effect that you want on your business.
[00:19:59] So I think about all of these as tools to some end, and the end is: you wanna understand the levers of growth so that you can deploy capital and generate a return. If, with your present measurement system, you feel confident internally that you can understand, based on whatever information you're using, where to deploy the dollars and get a return, then great.
[00:20:18] If I analyze all my Gong transcripts, which is something we do a lot, like our sales calls, we plug 'em in and we ask, what is the number one reason people come to CTC? It's not because their business stops growing. It's because their business stopped growing and they don't know why.
[00:20:34] George: Yeah.
[00:20:35] Taylor: And so it's when an, when uncertainty clouds your measurement system and you don't know what to do, or you have a lack of confidence or there's fighting between marketing and finance, or there's disagreement organizationally around how you should deploy the next dollar and what the levers of growth are, you have a problem and you need to reevaluate the measurement system, which likely at that point will begin the process of including testing into that thing.
[00:20:59] That's, that's my experience of
[00:21:00] George: it. I would say that a lot of them have a false sense of confidence in their measurement system. In my conversations with them, they're like, no, no, this all has to be coming from Meta. But when you actually dig into the business, and you look at the revenue driven from email and SMS, and you look at the revenue driven from organic or some other channel, I would say there's a lot of small brands that would say, oh no, I'm confident, I trust this.
[00:21:21] But they don't, they don't even know. This goes
[00:21:23] Taylor: back to, when does change happen, though? Change happens if and only if dissatisfaction is large enough. And in those cases, we interact with those people all the time, and you can't actually change their mind.
[00:21:33] George: Yeah.
[00:21:33] Taylor: That's the problem, is that someone has to be willing to examine the present thing through a critical lens in order to get to a new adoption of a new idea.
[00:21:42] And I just think that a lot of times, if it's working for them, then like they're gonna persist in that system. And that's my experience of it. It's hard.
[00:21:49] Olivia: Yeah. Well, I continue to be surprised. I always ask this question when I'm having an intro call: how much of your business do you think is driven by paid?
[00:21:58] And so many people answer that question based on MTA-attributed data. Mm-hmm. Yeah. And I'm like, no, no, no. Throw that away. Yeah. If you turned off paid tomorrow, how much would your business come down?
[00:22:09] Taylor: Exactly.
[00:22:09] Olivia: And so many can't answer that question. Now, I think the moment where you need holdout testing is when you can't do that anymore.
[00:22:16] Like, if you can turn off your Meta and immediately see the impact to the business, I don't think you need holdout testing. Yeah. But there are moments, I'd say, like when organic demand starts to take off, when you're doing partnerships, when you're selling in more channels. I typically see it around low to mid eight figures, when you're diversifying your channel mix enough, and when you have enough organic going, that you can't actually confidently answer that question by just shutting it off.
[00:22:46] George: Yeah.
[00:22:46] Olivia: But yeah, so many people still answer that question based on how many sales came in through an attributed click.
[00:22:53] George: Yeah.
[00:22:53] Olivia: And, and that's just fundamentally flawed.
[00:22:55] George: Or they just say everything that's new is Meta. Yeah. I'm serious. It's very concerning when you talk to people, because
[00:23:02] there's just this perception that Meta is driving X percent of the business, and then they try to grow the business by scaling into Meta, and the ROI is not what they anticipated.
[00:23:11] Taylor: So what you're getting at there, though, is that it may be both presently true that Meta is the primary driver, and that when they scale, the curve of degradation is actually something they're acutely unaware of.
[00:23:24] So I think there's lots of ways to use the testing to answer different questions. I don't actually think that you can really test there; certainly from a data standpoint it's too early. But I really think that having an early foundation for measurement, as part of experimentation, of how your organization determines truth, matters. This is an idea I care a lot about: how does your organization decide what's true?
[00:23:45] Like, what is the standard for truth inside of your company? And I think that really interrogating the core KPIs that drive the behavior of the organization, and why they are true, like what needed to exist to validate that expectation, is something that leaders should highly scrutinize.
[00:24:02] Like why is this the thing that orients all our behavior around? And usually it's because you've designed the financial incentives that way. This like almost always comes back to the financial incentives. Why, which is why it's so important that you as a CEO in particular design the incentives around causal levers.
[00:24:17] Like you want your people actioning things that create real value. And so you should interrogate that a lot. So I I I, we're not afraid to do it early. I is what I'd say,
[00:24:24] George: What is Meta's threshold? You might know this, Olivia, for a CLS study. Is there a spend
[00:24:30] Olivia: threshold? I think it's universally free for all.
[00:24:33] George: Exactly. Why would everyone not do a CLS study if it's available? Talk to your Meta rep about doing the study, 'cause that's gonna give you some sort of a read on the incrementality of the channel. That was the first thing we did, and then that opened the door to more testing and helped guide us down that path.
[00:24:49] Olivia: Yep. I agree.
[00:24:51] George: Oh.
[00:24:52] Taylor: You're like, bro, can you get it, get it up in your grill? I thought
[00:24:56] George: I was coming into a,
[00:24:57] Taylor: I know you're trying to be like, cool guy down low. Get it up in your grill.
[00:25:00] George: I don't have to try to be cool.
[00:25:02] Olivia: It is hard to, it's hard to use body language when it's as close.
[00:25:05] George: Yeah. Someone's right.
[00:25:06] Olivia: That's, that's probably why your mic is always going on.
[00:25:08] Like, Hey, why are you, yeah,
[00:25:09] George: it says you, apparently you, you have the biggest problem with the mic not being on your
[00:25:12] Taylor: mouth. I do. That's why I try and just really get up in there.
[00:25:15] George: All right. My bad.
[00:25:17] Olivia: Taylor, you've asked this question on Twitter, and I'm very curious what your answer is. What do you think should be the North Star metric for a marketing team?
[00:25:25] And I'm intentionally not saying a growth team. For the person who spends the advertising budget, what should they be goaled on as their North Star KPI?
[00:25:37] Taylor: So I believe the whole organization should orient and flow out of the company's financial objective for how shareholders intend to realize value.
[00:25:47] Stay with me for a second, 'cause I'm not gonna give you a simple answer here. The number one job for the shareholders is to decide how they're gonna create liquidity. If they're gonna do that through the distribution of capital, then free cash flow is the governing metric the organization has to orient itself around, which leads to a very different media buying strategy than one that's trying to produce P&L-level profit.
[00:26:05] These are different things. If you're trying to create a sale event someday and maximize TTM EBITDA over some period of time, then you should orient your behavior down through the P&L as the view, which is likely gonna lead to things like contribution margin. Then you're gonna figure out where you generate incremental contribution margin, and you're gonna orient around that.
[00:26:25] So the crisis I see is a crisis of leadership at the shareholder level: failing to define the organizational objective the marketing team is trying to accomplish, and fighting or disagreement at that root level about how the company intends to make money.
[00:26:41] I'll give you another example of where this comes into conflict: distribution strategy. All of the future growth flows out of some corporate strategy that includes retail, wholesale, international expansion, .com, Amazon. And media should be used to support business growth, not channel growth.
[00:27:00] Oftentimes it gets isolated and siloed into these things. So the first answer is: do you have clear alignment on the corporate financial goal? Do you have a clear business strategy for which channels that growth is gonna come from? And is the media team laddering up to the effect on those channels?
[00:27:18] If so, they can be really, really effective, and then we can decide the right input metric for them to focus on. In most cases, I think it should be incremental positive contribution margin on all points of distribution. Retail is a little more complicated, because it's not as simple to say that driving retail sell-through equals incremental dollars for the company; there's a sales rhythm related to POs and store accounts and things like that.
[00:27:43] So the simplest way is to take what we call your digital enterprise, your online business, everywhere you're selling online that you have control over to hold out. Usually this is Amazon, walmart.com, or any self-serve marketplace where you can get geo-level revenue data. All your media should be held to an incremental impact on all of that revenue, all the time.
[00:28:06] That's what I think. Your efficiency goal should ladder up to whatever the organization's obligation is to margin and growth, and work from there.
[00:28:14] Olivia: What about the scenario where your brand, for whatever reason, can't be running an always-on, all-up holdout test? Are you running always-on, all-up
[00:28:24] Taylor: holdouts?
[00:28:24] Not always-on all-up, but we would advocate for always-on channel-level.
[00:28:30] Olivia: Okay.
[00:28:30] Taylor: We think about the measurement roadmap relative to what we'd call the variable business impact. When we design your testing roadmap, we take the media spend by channel times the variability across our sample of test results; that represents the bounds of possible incrementality in that channel, multiplied by the dollars, right?
[00:28:56] So what I wanna do, as fast as possible, is narrow the error bars on your most valuable channel, and work through it sequentially to make the most immediate impact on the place where you're spending the most money. Most of the time that begins with a Meta holdout; Meta represents 66% share of wallet, I think, across our portfolio.
[00:29:15] I think Northbeam's is pretty similar. For most brands, that's the most important question to answer, and then we can sequence through it from there. But I'm a fan of all-up as well; it's just a little harder for me to determine the right corresponding action afterwards.
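Taylor's prioritization logic, spend times the width of the plausible incrementality range, can be sketched in a few lines. This is an illustrative reconstruction, not CTC's actual model; the channel names and figures are made up.

```python
# Rank channels for a testing roadmap by "variable business impact":
# monthly spend times the width of the plausible incrementality range.
# All channels and figures here are hypothetical, for illustration only.
channels = {
    # name: (monthly_spend, low_plausible_factor, high_plausible_factor)
    "meta": (500_000, 0.3, 1.1),
    "google": (200_000, 0.6, 0.9),
    "youtube": (100_000, 0.2, 1.0),
}

def variable_business_impact(spend, low, high):
    """Dollars of return that are 'in play' given the current error bars."""
    return spend * (high - low)

roadmap = sorted(channels, key=lambda c: variable_business_impact(*channels[c]), reverse=True)
print(roadmap)  # meta first: biggest spend, widest error bars
```

Testing in this order narrows the error bars where the most dollars are at stake, which is why a Meta holdout usually comes first.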
[00:29:28] Olivia: Yep.
[00:29:28] Okay. So you're running channel-level tests. You start with the highest-spend channels, you get your factor, and you recalibrate based on the factor versus the benchmark you started with. Then, in a month where you're not testing, let's say you test Google the next month, so Meta's off in terms of incrementality tests,
[00:29:46] you just use the factor from the last month. And you think a head of growth could go to their boss and say: this was my incremental contribution margin this month, based on assumptions on incrementality from prior months' tests. I'm trying to figure out
[00:30:04] how you measure that. You're
[00:30:05] Taylor: fast-forwarding to the end of what
[00:30:06] Olivia: we're trying to solve for here today. That's
[00:30:07] Taylor: the North Star. Yeah, that's right.
[00:30:08] Olivia: Okay. So you believe the North Star metric for a head of growth should be based on the results of channel-level holdout tests.
[00:30:18] Taylor: The world I'm gonna advocate for in the future is something called IMR. Incremental...
[00:30:23] Olivia: Are you rolling that out right now?
[00:30:24] Taylor: I've done a thread on this before, and we're working towards a broader implementation in our own world. IMR would stand for incremental marginal return.
[00:30:33] The errors we make in reporting return are that we report revenue, and we report non-incremental revenue. I don't wanna report either of those things. I want: how many marginal dollars did I generate, divided by the investment? That would normalize everything so that 0% is neutral.
[00:30:52] And then it would read like a portfolio, like my Robinhood account.
[00:30:56] Olivia: That's cool.
[00:30:56] Taylor: You'd see plus 20%, plus 15%, and you could compare every channel to each other in a normalized view of return. That to me is way more intuitive. If I'm getting a 15% return on my money, I don't have to do this calculation of, what's my gross margin?
[00:31:10] Because even if I say my iROAS is 2.2 to one, that doesn't actually tell me anything. I don't know if that's good or bad for each of your businesses. But if I was reporting incremental marginal return on a percentage basis, I could look at it and create a threshold that's consistent with return on invested capital.
[00:31:27] So if I'm a CFO and our hurdle rate as an organization to deploy capital is an annual return on investment of 22%, then that's your IMR threshold; otherwise don't deploy the capital. To me, that would be the ideal state of reporting.
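Taylor's IMR math can be made concrete. Only the 2.2 iROAS and the 22% hurdle rate come from the conversation; the gross margin and spend figures below are hypothetical assumptions added for illustration.

```python
# IMR (incremental marginal return) as Taylor frames it: marginal dollars
# generated, net of spend, as a percent return -- so 0% is neutral and the
# number reads like a portfolio return.
def imr(incremental_revenue, gross_margin_rate, spend):
    """(marginal dollars - spend) / spend, comparable across channels."""
    marginal_dollars = incremental_revenue * gross_margin_rate
    return (marginal_dollars - spend) / spend

# A 2.2-to-1 iROAS at an assumed 55% gross margin on an assumed $100k spend:
example = imr(incremental_revenue=220_000, gross_margin_rate=0.55, spend=100_000)
print(f"{example:+.0%}")  # +21%, just under a 22% hurdle rate
```

This is what makes the "is 2.2 good or bad?" question answerable: +21% against a 22% hurdle says don't deploy the marginal dollar.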
[00:31:46] Unfortunately, right now, I'd say the first step of educating people is just to move to iROAS: ROAS to iROAS is step one. My goal for step two is iROAS to IMR. So that would be my
[00:31:56] George: Interesting,
[00:31:57] Taylor: yeah,
[00:31:58] George: Always learning from
[00:31:59] Taylor: it. So what do you do? Tell me what's true about the measurement complexity,
[00:32:05] 'cause I think you represent what's true in a lot of organizations, which is: there's a lot of information.
[00:32:10] George: Yeah. So with Meta, we've been able to have a test pretty much always on for the last two years through the conversion lift studies. And because the test result varies so much month to month, we worked with Taylor on this, and basically what we decided, what, a year and a half ago, is we would weight the results of the incrementality studies and apply that to our in-platform numbers.
[00:32:33] So if we're getting a 1x incremental return and the in-platform is showing a 3x, we know what we have to do. I mean, I guess, if that's within target,
[00:32:46] Taylor: You guys call 'em IFs, right? Incrementality factors.
[00:32:48] George: Incrementality factors. That's one third, right? It's a multiplier on the in-platform number.
[00:32:52] And we operate within that constraint and try to hit the target. Then you're assuming a correlation between the in-platform number and the incremental return, which hasn't always been true for us. That's a debate and something that has given me pause, but I don't think there's a better way to do it currently.
[00:33:11] So we're operating that way. For the other channels, this is where it gets a little harder. Olivia mentioned they have two cells, and we're running, call it, 10 channels. So we might only be able to do a Google test, or maybe a single tactic within Google, like a non-brand search test, once every six months.
[00:33:34] That means you get two tests per year, and the result varies. Like Taylor said at the beginning of the pod, it represents a period of time, and as we've seen with Meta, the result varies pretty dramatically based on the period the test is deployed in. So that's where I have a harder time operationalizing it, because the tests are so infrequent and the results vary so widely.
[00:33:56] It makes it very difficult to have any confidence in the, what did you call it? The incrementality factor, the IF, for those channels in particular.
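The incrementality-factor arithmetic George describes is simple enough to sketch. The 1x-versus-3x numbers match his example; the later 4.5x read is a hypothetical addition.

```python
# Incrementality factor (IF): the multiplier that reconciles platform-
# reported ROAS with what a holdout test actually measured.
def incrementality_factor(tested_iroas, platform_roas):
    return tested_iroas / platform_roas

def adjusted_roas(platform_roas, factor):
    """Incrementality-adjusted ROAS to hold against the real target."""
    return platform_roas * factor

# A holdout read of 1.0x against an in-platform 3.0x gives an IF of one third.
f = incrementality_factor(tested_iroas=1.0, platform_roas=3.0)
print(round(f, 2))  # 0.33

# Day to day, the factor is applied to whatever the platform reports,
# e.g. a later in-platform 4.5x implies roughly 1.5x incremental.
print(adjusted_roas(platform_roas=4.5, factor=f))
```

The assumption George flags is baked into `adjusted_roas`: that the factor measured during the test window still holds when applied to later platform numbers.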
[00:34:08] Taylor: So you brought up two really good points that I want you to address. Yeah. I'm gonna
[00:34:11] Olivia: dig in
[00:34:11] Taylor: because I think they're worth really wrestling with.
[00:34:15] One is the replication crisis: tests don't come back the same all the time, so what do you do in light of that? The second is the disassociation between optimization and attribution, this idea that Meta is still optimizing off of their own data, not the incrementality factor.
[00:34:32] Is the platform result actually connected? Does the factor play out every day? Is that real? So: replication crisis, and disassociation of optimization and attribution. What say you?
[00:34:44] Olivia: Yeah. And you said something else, we can cut this part, but you said something else that I really wanted to comment on.
[00:34:50] Taylor: Okay.
[00:34:50] Olivia: Sorry.
[00:34:51] Taylor: I didn't mean to narrow it.
[00:34:52] Olivia: The factor
[00:34:54] George: me I
[00:34:54] Olivia: said something. Yeah. Nah.
[00:34:55] Taylor: 10 channels trying to decide two cells at a time. Only two reads a year. Limited number of data points.
[00:35:03] Olivia: Yeah, yeah, yeah. Wait. Shit,
[00:35:04] George: it was good.
[00:35:08] Olivia: Okay. All right. I think this can be really challenging, so let's start from the beginning. It is most definitely a good thing that you now know one third of your platform-reported ROAS is incremental. Right? I think we all need to acknowledge that that is way better than the starting point of assuming that it's all incremental.
[00:35:35] George: Mm-hmm.
[00:35:36] Olivia: Now, your challenge is if you get a read that's one third, a 0.33 IF, and then six months later you get a 0.8 IF.
[00:35:44] George: Yeah. Point.
[00:35:44] Olivia: I think this is a problem. I agree. Again, we're very early; to use a baseball analogy, we're in the second or third inning of the whole incrementality movement.
[00:35:54] This has been really challenging: it's just too blunt of an instrument to actually be useful in decision making.
[00:36:01] George: Yeah, you lose the real-time application.
[00:36:04] Olivia: Yep. And then you get confused: all right, it was 0.3 and now it's 0.8.
[00:36:08] Do I just split the difference? We need to level this up, and this is why I really respect the way you're challenging this position, that there has to be a better way to operationalize it versus just using the most recent result. So I agree, it's been a struggle.
[00:36:23] We've had to learn by experience as we've gone, because this is the first time we're all building an OS around incrementality. That can be frustrating, but I do believe this is better: using that 0.3, even if it ends up being a 0.7, is better than
[00:36:42] assuming all of it is, for sure.
[00:36:44] Taylor: Okay, but I'm actually going to push a little here and say I think it's better than even that representation. Let me ask you: do you believe it's actually true that your media is more and less incremental at different points in time?
[00:37:00] Olivia: Yes.
[00:37:00] Taylor: Yes. There's this phrase with data analysis that I like: data reveals truth. In those snapshots, what we found is that at different periods of time, your media was more and less incremental. We could look at that and be frustrated, we could say, why are the tests different?
[00:37:17] Or we could say: we are observing nature. That's what science is. This is an observation of your media; the test isn't creating the outcome, right? We're observing what is occurring. And this is where measures of central tendency as a mathematical principle are really powerful.
[00:37:34] If I have a distribution of data, a bunch of points on a scatterplot, and I have to play the game of being least wrong in my guess, the median at any given time represents the point least likely to be wrong given the present sample distribution.
[00:37:54] Now, what I like is that as long as the ends of the sample, the low and the high, aren't constantly expanding, as long as the results continue to fall within some continuous range, over time that median becomes even more powerful. It begins to represent a best guess at any given time, which with measurement we have to acknowledge is what we're always doing.
[00:38:18] It's always our best guess, and we're just trying to be less wrong. And to your point, the Facebook ROAS was wildly wrong. We already got way better by just eliminating that data point. Even though it's frustrating that 0.3 and 0.8 are far from each other, 0.6 is still way better than 3.2.
[00:38:38] This is an idea I call progressive truth: we're trying to be more right over time while accepting we will never be definitively right.
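Taylor's "least wrong" argument is just the median as a measure of central tendency. A sketch using the 0.3 and 0.8 reads from the conversation, padded with hypothetical extra reads:

```python
# Across repeated holdout reads, the median incrementality factor is the
# guess least likely to be wrong given the sample so far ("progressive truth").
from statistics import median

# Read history for one channel: 0.3 and 0.8 are from the conversation,
# the other values are hypothetical fill-in reads.
if_reads = [0.3, 0.8, 0.5, 0.6, 0.45]

best_guess = median(if_reads)
print(best_guess)  # 0.5
```

Even when individual reads disagree, a central estimate like this stays far closer to the truth than the platform-reported ROAS it replaces.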
[00:38:46] Olivia: Here's the thing. What I think is great about Cozy Earth is you were running Meta lift studies before you came in. I think you were even running holdout tests with another partner.
[00:38:56] So we came in with a shared understanding of: all right, these results are not wildly far from what you've seen in the past, which I think made you trust them more quickly. But this can be an issue: if you've never run a Meta conversion lift, there's a long period of just trying to get customers to believe the result.
[00:39:17] So I think that was a great starting point and foundation. Directionally, I dunno if Taylor fully agrees, but directionally the Haus holdout results have lined up with the Meta conversion lift. I think it's great when multiple sources kind of
[00:39:31] Taylor: Yeah.
[00:39:31] Olivia: circle around the same data point.
[00:39:33] So I think that's been great. Now the question is: is there variance because incrementality can change over time, or because these are just really noisy test results? That's a reality we need to wrestle with: you can run bad tests. This is why I really do believe the science matters. If we know we're running clean tests, it's easier to buy into the reality that incrementality is changing over time.
[00:40:02] But it could also just be pure noise. I've seen other vendors run these lift tests on $20,000 of monthly spend,
[00:40:10] Taylor: Yeah.
[00:40:11] Olivia: against a very large baseline in terms of business-level revenue. It's very hard to actually land on a point estimate there that you can trust.
[00:40:21] So, just to say: this is why it matters to run good tests. We've run good enough tests that we feel good, and we've validated them through different methodologies, so we do know incrementality is changing over time. I think that's important.
[00:40:35] George: And I will plug Haus real quick.
[00:40:37] She didn't pay me to say this. Oh boy. But we were with a different vendor beforehand.
[00:40:41] Taylor: Okay. The vendor disparagement. We
[00:40:43] George: are, look, I'm not saying any names. All I'm saying is it's been night and day better. Their operation is much cleaner. I'm not, you know, super familiar with the testing methodology, but I do have more confidence in Olivia than I did the prior team.
[00:40:56] And so
[00:40:57] Taylor: yeah,
[00:40:57] George: brand is, brand
[00:40:58] Taylor: is trust.
[00:40:58] George: A good partner is definitely helpful in this. And we trusted the result. There was even debate when I first started talking to you about whether brands should trust the CLS study, because it's Meta grading their own homework. And I'm like, well, they gave themselves an F,
[00:41:12] Taylor: so,
[00:41:13] George: so I think we can probably
[00:41:15] Taylor: trust the result.
[00:41:16] We see this all the time too. Meta CLS studies often come back very poor, and sometimes that's because, and this is one of the things I think is really important, they're not bringing in the other channel revenue. A CLS study is not gonna include your Amazon revenue, so in many ways it's always going to underappreciate Meta's effect.
[00:41:32] So I think it's a totally reasonable way to do it as a starting point. And yes, you can run bad tests, but regardless, I don't think you believe the different test results for Cozy Earth are the result of poor test design.
[00:41:47] Olivia: No, no, but that's it.
[00:41:48] Absolutely.
[00:41:49] Taylor: So my point is that it can happen with good test design.
[00:41:53] Olivia: But that's because, with good test design, we know that incrementality is actually changing over time. Right?
[00:42:00] Taylor: Yeah, I think incrementality...
[00:42:02] Olivia: With poor test design, you don't know. Is it the poor test design, or is incrementality changing over time?
[00:42:06] You don't have a lot of confidence.
[00:42:08] Taylor: Yeah, it's hard. Tests also come with their own confidence intervals and ways to analyze how strong the signal was on that individual test. That's a spectrum, not a binary.
[00:42:20] Olivia: Yep.
[00:42:20] Taylor: And so that in and of itself is a piece of this. You talked about something I believe too, which is that there should be a model that weights each test result relative to the aggregate set of historical tests, your database of tests.
[00:42:32] We do this with our database of tests to give some present best guess. But now I wanna talk about: okay, let's say we've done that. We've run all the tests with good test design, we've got these different outcomes, and we've picked a point, a median, or a modeled result relative to the confidence of each test and seasonality and all the things we might use as inputs.
[00:42:53] Let's say we have a number. Then the question becomes: what do you actually suggest people do with this number? You go first, 'cause I don't know what the Haus position is on how to operationalize an IF.
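One standard way to build the weighting model Taylor describes is inverse-variance pooling, where tighter tests count for more. This is an assumed method for illustration, not necessarily what CTC or Haus actually run, and the reads are hypothetical.

```python
# Pool repeated lift-test reads, weighting each by its precision
# (inverse variance), so noisy tests move the estimate less.
def pooled_if(reads):
    """reads: list of (point_estimate, standard_error) from individual tests."""
    weights = [1 / se ** 2 for _, se in reads]
    weighted_sum = sum(est / se ** 2 for est, se in reads)
    return weighted_sum / sum(weights)

# Hypothetical reads: a precise 0.33 and a noisy 0.8.
estimate = pooled_if([(0.33, 0.05), (0.8, 0.20)])
print(round(estimate, 2))  # 0.36, pulled toward the tighter test
```

Unlike a plain median, this uses the confidence of each test directly, which is the "spectrum, not a binary" point made just above.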
[00:43:08] Olivia: Wait, pause. We talked about replication. Was there another big
[00:43:12] Taylor: one? Well, optimization.
[00:43:13] I think this is what this hits on. This is
[00:43:14] Olivia: basically that question. So, what do we do with the IF?
[00:43:17] Taylor: Yeah. Mm-hmm.
[00:43:18] Olivia: Okay, so what do we do with the IF? When a customer signs up with Haus, we show them three kinds of playbooks, or tracks, they can run with Haus.
[00:43:29] There's the efficiency track: I wanna cut costs and get more efficient. There's the redistribution framework: our budgets are flat and we wanna redistribute funds across channels in order to grow at a flat budget. And there's the scale-up track: we want to add new channels and increase spend to grow.
[00:43:54] Most customers don't wanna get more efficient, even if that's technically the correct thing to do based on our test results. I'd say most of our brands are in the redistribution track, where it's: okay, we're a healthy business, we can invest back into the business to drive growth.
[00:44:13] We can spend some money, we have this budget, we want to reallocate across channels. What we work with these brands on is: as long as you're moving budget from the low performers to higher performers, you will get better. It still might not be where you should be.
[00:44:30] And Taylor, I think this is the thread you're gonna pull on: you're still spending inefficiently, but you're spending more efficiently than you were before you started actioning these results. That's where a lot of our customers are right now. There's probably still a lot of work to do, but they're moving budget from relative underperformers into stronger performers, even if the results come back and they're not profitable.
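Olivia's redistribution rule and the profit-threshold alternative Taylor argues for just after can be put side by side. The channel returns and hurdle rate below are hypothetical.

```python
# Two budget rules on the same hypothetical channel returns:
# (1) redistribution: move dollars from the worst performer to the best;
# (2) threshold: fund only channels clearing the hurdle rate on their own.
channel_return = {"meta": 0.15, "google_nonbrand": -0.10, "youtube": 0.05}
hurdle_rate = 0.02

# Redistribution: a relative comparison across channels.
worst = min(channel_return, key=channel_return.get)
best = max(channel_return, key=channel_return.get)
print(f"move budget: {worst} -> {best}")

# Threshold: each channel judged against the return target, not each other.
funded = sorted(c for c, r in channel_return.items() if r >= hurdle_rate)
print(funded)  # ['meta', 'youtube']
```

The gap between the two rules is the crux of the next exchange: redistribution improves relative efficiency even when every channel is unprofitable, while the threshold rule will defund a channel regardless of how the others perform.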
[00:44:55] George: Can I dig into that? Because we've had problems with this. Say we run a test for Google non-brand in March, and then July comes around and we run a test on YouTube. Maybe the difference is minimal, not a huge difference, but performance is a little better on YouTube. But you just tested two different periods of time.
[00:45:20] How can I confidently move budget from non-brand to YouTube when we just said there's massive variance in test results month over month? That's where I have a hard time, because we're acknowledging that fact. And again, this is all live, so apologies, we didn't discuss this beforehand.
[00:45:40] Colby specifically was like, I'm done running tests unless I'm pitting channels head to head. Colby's our,
[00:45:47] Taylor: So this, I think, is because you work with budget managers, not owners. They have budgets they have to spend, so of course they're just trying to allocate the budget the organization gave them at the most efficient level possible,
[00:46:01] 'cause they need their budget next year, and their job is to allocate budget. If you're the owner, I want you to answer the question as if it's your money.
[00:46:08] Olivia: Yep.
[00:46:09] Taylor: Because then the threshold isn't channel versus channel; the threshold is channel versus return. This is my big thing, and you said it: they're not performing as well as they should.
[00:46:19] I wouldn't call it "should," I would call it "profitable." Could you deploy the dollars in a way that made you money back? It's not channel versus channel at all. It's channel versus the potential scale of profitable return from the channel. I don't care what any of the channels do. I'm not taking budget from here and putting it there.
[00:46:35] It's channel versus profit. If the channel can generate profit, it gets more dollars. I don't care what happens anywhere else, because you can't do the comparison you just described. It's not actually possible. It's not a real solution. So instead what you're just doing is,
[00:46:49] George: Wait, why can't you go head to head on a channel during a given period of time?
[00:46:53] I can, if I say: I'm deciding between non-brand and YouTube, and I'm gonna run that test March through April. At the end of that read, it's the same period of time. I can confidently say, okay, non-brand outperformed YouTube over that period.
[00:47:06] Taylor: Okay.
[00:47:06] Then what do you do with the Meta budget?
[00:47:08] George: Hold steady. Hold the plan. I mean,
[00:47:10] Taylor: The point is, you have two cells. This was her point earlier. But my question is, if they're both profitable,
[00:47:16] George: they're not.
[00:47:17] Taylor: Okay. So if one is and one's not, why is the relationship to each other the measure, versus the relationship to the return on investment you want as a business?
[00:47:28] Well,
[00:47:29] George: Yeah, it shouldn't be, but in a lot of cases it is.
[00:47:33] Taylor: So the question is, why do organizations behave that way?
[00:47:36] Olivia: Are you asking why we do channel comparisons? Why not just say: I tested YouTube in July, forget that non-brand test result from four months ago, I'm going to either dial up or dial down YouTube based on this result
[00:47:48] Mm-hmm. And my goals in that moment.
[00:47:51] Taylor: Yeah.
[00:47:53] Olivia: Because I think a lot of the results might suggest that many brands are overinvesting in marketing. Bingo. But you talked about this up top: what is the founder's goal for the business? It's not always clear, in my experience. They're trying to build in some optionality, they really wanna show growth, and they're willing to invest a little less efficiently in order to continue showing high growth.
[00:48:19] I don't think a lot of founders come in with a clear plan of what they wanna do with the business.
[00:48:23] Taylor: I agree. That's why I said, when you asked me the question: the problem is lack of clarity at the shareholder level. Because how am I as a media buyer
[00:48:31] George: Or lack of alternative investment?
[00:48:33] And I understand you're gonna say, okay, distribute to the shareholders.
[00:48:37] Taylor: That's your job as a CEO. If you don't have an investment thesis for the cash, and you're just sitting shareholder money in treasury at, what, a 4% yield?
[00:48:46] George: Mm-hmm.
[00:48:46] Taylor: Why would I do that?
[00:48:47] George: But, okay.
[00:48:49] So really the job should be: you need to grow the business.
[00:48:53] Taylor: Mm-hmm.
[00:48:54] George: And you need to grow the bottom line. You need to do both, hold both things at the same time, and say, okay, and we've talked about this extensively, if all of our channels are not hitting the target we've set, yes, we need to figure out a plan B.
[00:49:07] It can't just be: okay, stop spending money and shrink until you're dead. It has to be: okay, what's plan B? How else can you deploy funds, in a way that's different from what you've tested that hasn't proven profitable?
[00:49:20] Taylor: If there was a hypothetical business that had tested a bunch of channels and was finding that none of them were profitable
[00:49:26] George: mm-hmm.
[00:49:26] Taylor: they are actually killing themselves faster deploying the capital than if they stopped. Yeah, that's the problem. The frame is false. If you're deploying money and it's generating negative money,
[00:49:38] George: No, because you
[00:49:38] Taylor: are accelerating
[00:49:39] George: your route to that, for sure, if there's no testing being done.
[00:49:43] Taylor: right.
[00:49:43] No, but if you do the test and you find out that it's bad,
[00:49:45] George: no, no, no. I'm saying, okay, you figure out that none of the channels that you've tested so far
[00:49:50] Taylor: Yes.
[00:49:51] George: have been profitable, or up to the target that you've set, and then you redeploy into new channels and you keep testing and you keep testing, and then your overall efficiency doesn't improve because you're in a constant state of testing.
[00:50:02] Right. I guess what I'm saying is, you get all the reads back and the channels aren't performing the way you'd like. You either stockpile the cash, which is kind of what you're saying, or you say, alright, we're gonna redeploy into X channel or X initiative. And I would say that's currently what we're doing.
[00:50:22] It just takes time. Those things are not as easy as,
[00:50:26] Taylor: as spending
[00:50:26] George: that money through Meta.
[00:50:28] Taylor: I agree. That is the problem. It's too easy to be
[00:50:30] George: bad.
[00:50:31] Olivia: Yeah. And we sort of glossed over this, but you're assuming, Taylor, that this brand has already gone on that journey and they don't have any levers left to improve incrementality, which is not what we always see.
[00:50:40] Like, often there is a lot of low-hanging fruit. Mm-hmm. In terms of account structure, creative... there are other levers you can pull.
[00:50:46] Taylor: Okay. Fair.
[00:50:48] Olivia: So let's say, all right, you've tested this channel three times, you've made changes, and it's still not working.
[00:50:56] Yes, like, we need to move budget out. But we often see there are some opportunities to fix it. So, okay, let's keep going now: they've tried to improve it and it's not getting better.
[00:51:06] George: I actually think that conversation is probably more valuable to the audience. What are the things, when they get back that first CLS study and it's not what they had hoped, that they should turn to to increase that score? Before we move back to the bigger problem, I would say most of your audience probably has things they can do to improve the performance before they revert to, okay,
[00:51:29] don't invest in Meta anymore. Yeah, let's try to
[00:51:31] Taylor: figure it out. I think the first question, though, that I don't think we've resolved is: what's the standard? What's the expectation to deploy capital? Because everything falls against that constraint. That constraint breeds the creativity in solving against that problem.
[00:51:44] When you capitulate the standard, then there is no reason to be creative in solving the problem.
[00:51:48] George: Well, how do you suggest they set the standard?
[00:51:50] Taylor: they define it against the profitable contribution of that dollar against whatever the hurdle rate for investment of capital is in the organization relative to all the options of allocation of capital.
[00:52:00] If you can go launch a store and you can make a 40% return, then the hurdle rate for the dollars is actually that 40% return. Now, the problem is, and I agree with your point, most business owners don't consider the alternative investments of capital. They don't, and so this is all they know.
[00:52:15] So they're
[00:52:15] George: slow. Or they're slow. Like, the alternative
[00:52:18] Taylor: investments take time to build. Well, so is a Meta return that pays you back 70 cents on the dollar.
[00:52:22] George: Great.
[00:52:22] Taylor: That's like really freaking slow.
[00:52:24] George: True, true.
[00:52:25] Taylor: And it actually causes a hole today. Like that's really big.
[00:52:28] George: Yeah.
[00:52:28] Taylor: So I, I think the idea that that's fast is just fake.
[00:52:31] Fake. Now, what it does is it creates cash. It creates a balance-sheet solution that's different than a P&L solution. So this goes back to: is the goal to create cash? Well then, yeah, my gross margin turning back into cash at a lower return might be worthwhile.
[00:52:47] George: Also, there's subscription businesses who, you know, like
[00:52:49] Taylor: LTV payback target.
[00:52:51] That's a realization-of-value timeline question. Yeah. That I would be in support of.
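The hurdle-rate comparison Taylor describes can be sketched in a few lines. The spend, contribution, and hurdle figures below are hypothetical illustrations, not numbers from the episode.

```python
# Compare marketing spend against an internal hurdle rate for capital.
# All figures are hypothetical, for illustration only.

def clears_hurdle(spend, incremental_contribution, hurdle_rate):
    """Return True if the incremental return on spend beats the hurdle rate."""
    roi = (incremental_contribution - spend) / spend
    return roi >= hurdle_rate

# A channel returning 70 cents of contribution per dollar (iROAS 0.7)
# is a -30% return; it fails even a 4% treasury-yield hurdle.
print(clears_hurdle(100_000, 70_000, 0.04))   # False
# Opening a store at a 40% return clears a 40% hurdle.
print(clears_hurdle(100_000, 140_000, 0.40))  # True
```

The point of the frame is that the hurdle rate is set by the best alternative use of the cash, not by zero.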
[00:52:55] Olivia: I'm gonna pause real quick, 'cause I forgot to mention this, but I just wanna acknowledge, and I work with, we work with, so many agencies and so many brands, how rare it is for an agency... This is the only compliment Taylor's gonna get in this, probably ever.
[00:53:08] This conversation is not being had out in the open right now, because agencies are mostly billing on percent of spend, and they have no incentive to tell or suggest to a brand that they slow down. That's why I feel like this conversation isn't being had.
[00:53:29] I agree. It's because there are not a lot of agencies who are willing to have that conversation. So it's an interesting role reversal here, where Taylor, the agency owner, is saying, no, you should not spend if the incrementality results are coming back sub-one. We never hear that.
[00:53:44] Taylor: I'm just, I'm a business owner. It's my money. Like, if my head of marketing, Grant, sitting right behind us, came to me and was like, hey Taylor, we're gonna invest your dollars and you're never gonna get them back, but it's gonna grow the top line, and you're gonna buy 70 cents for a dollar... I would be like,
[00:54:04] no, Grant, I don't want to do that. So, so often I don't actually believe that there's somebody in the organization who actually believes that's what is happening and has said, yes, we want that.
[00:54:17] George: Yeah.
[00:54:18] Taylor: I just... I think it's a lack of clarity or belief around the measurement system, usually.
[00:54:22] George: Okay. Yeah.
[00:54:23] Olivia: There's one thing we haven't talked about, which is long-term effects. So often these tests... you guys at Cozy Earth, it's been, I think, really fun to work with, because you've been running longer tests, which I just think are inherently more interesting, but most of these tests are three, four weeks.
[00:54:39] And then we run this post-treatment window, where we observe the behavior of the markets after the test, and we'll see more kind of come in through that window. Like, if you believe that these effects compound over time, then we are understating results. And we have talked about this a lot at Haus.
[00:54:55] I'm actually curious how you all feel about this: should we model long-term effects into our results? And there are some people, again, imagine a larger org who wants to look good, who love that idea.
[00:55:07] Taylor: Mm-hmm.
[00:55:07] Olivia: And then there are people like Connor McDonald who's like, hell no. Like, I come to Haus for the cold hard truth. Do not model my data. I want it to be no-black-box, transparent:
[00:55:17] I can recreate these results. And so we've been torn, because we kind of know two things. One, these are average efficiencies, not marginal efficiencies; if you just pull down that last 20% of spend, you could be in the realm of efficiency here. And then the other thing is, these are short-term effects, not long-term effects.
[00:55:34] George: Yeah.
[00:55:35] Olivia: And the only way we can actually provide an evidence-backed answer on long-term effects is running a very long test. In the absence of that, we'd have to model it. And so I'm curious: should we be modeling this, because we know that there is some effect that we can't capture through an experiment?
[00:55:53] George: Well, I have a lot of thoughts. So when we first ran our CLS studies with Meta, they came in and tried to discredit the result with this argument that there's some sort of long-tail effect of the spend that the study isn't capturing. Oh no. It's like,
[00:56:10] Olivia: okay,
[00:56:10] George: well, it's stupid. Taylor keeps hitting me in the shoulder.
[00:56:14] Taylor: It's a big
[00:56:14] George: shoulder. It's a big shoulder. Not really, I wouldn't consider it big. It's tearing holes in my shirt, though. So yeah, basically, Meta came in. We got the first read, we weren't happy with the result, we're gonna pull budget back, and they come in and they say, hey, wait, there's actual effects that you can't see in this study.
[00:56:28] You should keep spending. And I'm like, okay, maybe. Right? Logically, you think about it: okay, great. The problem is, we start running tests with you, and there is a post-treatment window. So for example, we ran a test in Q4 of last year. I think the actual treatment window was six weeks, and the post-treatment window was six weeks, over the holiday.
[00:56:49] So if you think about that timeframe: if you're spending your marketing dollars in October, a lot of brands assume that is generating revenue on Black Friday, on those peak sales days. The change in return from the time we killed treatment through the post-treatment window was almost zero for new customer. And so for a while,
[00:57:14] Olivia: But there was latency for existing.
[00:57:16] George: For existing. Yep.
[00:57:17] Taylor: Almost a hundred percent.
[00:57:18] George: Which is also odd. But anyway, for new customer it shows. And Anmar, who's sitting over here, we've talked about it a lot. I love the theory, until you start looking at the post-treatment windows. Because why would the impact all of a sudden increase after the post-treatment window?
[00:57:36] Olivia: Yep.
[00:57:36] George: It wouldn't. Yep. You know? So if there is no impact in the post-treatment window, there's probably no long-tail effect that's at all significant. And that's where, at least for our brand, I don't even care to see it modeled. I'm with Connor. It appears that there is no real long-term effect, at least on new customer acquisition.
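The post-treatment check George describes can be sketched as a simple comparison of daily lift during and after treatment. The revenue figures and the 10% threshold below are hypothetical, not Cozy Earth's numbers.

```python
# Sketch of a post-treatment window check: compare incremental revenue per
# day during the treatment window vs. the post-treatment window.
# All numbers and the 10% threshold are hypothetical.

def long_tail_signal(treatment_lift, treatment_days, post_lift, post_days,
                     threshold=0.10):
    """True if post-treatment daily lift is a meaningful share of treatment lift."""
    daily_treat = treatment_lift / treatment_days
    daily_post = post_lift / post_days
    return daily_post >= threshold * daily_treat

# Six-week treatment, six-week post-treatment, near-zero lift afterward:
print(long_tail_signal(420_000, 42, 5_000, 42))  # False -> no long-tail evidence
```

If the function returns False, the data gives little support to an adstock story for that channel; if it returns True, a longer test or a carryover model may be warranted.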
[00:57:56] Taylor: And, like, first principles. So,
[00:57:57] my thing about this is that I am all for whoever has a hypothesis about the long-term value putting their name on it, creating a number, publishing it, and defending it, and we can all act accordingly. No one will do that. And when you do do it, the post-treatment windows, which I commend you for actually attempting, and you get that result, well, then you go: what do I do now?
[00:58:21] But the first principles for me: the idea that I scroll past an Instagram ad, I had an ad impression, and nine months from now... I can't remember what I had for lunch 20 minutes ago. Now, I do think that where this becomes more real and more true is when your distribution broadens to where you encounter the product unexpectedly in the wild, in retail.
[00:58:46] George: Mm-hmm.
[00:58:46] Taylor: So where all of a sudden I'm walking the aisles of Target, now the recall connection fires. Mm-hmm. Because buried in my mind somewhere is that ad: oh, I can do that. And so that effect of adstock, which comes out of a study from, I think, the seventies, when the initial adstock theory was developed,
[00:59:03] was all retail distribution. And so the idea that you would connect the dots later in your brain is very different to me than the idea that you would suddenly go, oh, you know what I'm gonna do? Go to the website and buy now, as a result of the ad I saw nine months ago.
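The adstock idea Taylor is referencing is usually modeled as geometric carryover of past spend. A minimal sketch, with a hypothetical decay rate:

```python
# Classic geometric adstock: each period's effective ad pressure is current
# spend plus a decayed carryover of past spend. The 0.5 decay is hypothetical.

def adstock(spend_by_week, decay=0.5):
    """Geometric adstock: carryover_t = spend_t + decay * carryover_{t-1}."""
    carryover, out = 0.0, []
    for s in spend_by_week:
        carryover = s + decay * carryover
        out.append(round(carryover, 2))
    return out

# A single burst of spend decays by half each week; well before nine months
# the carryover is effectively zero, which is the substance of Taylor's point.
print(adstock([100, 0, 0, 0, 0]))  # [100.0, 50.0, 25.0, 12.5, 6.25]
```

A claim of large effects nine months out implies a decay rate near 1.0, which is exactly the kind of assumption a post-treatment window can test.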
[00:59:15] Olivia: Yep.
[00:59:15] Taylor: So I think it's really important to understand how adstock effects and memory recall play out when the product gets re-encountered differently.
[00:59:24] And so if you're Procter & Gamble, I think this is probably real important and can show up in lots of different ways, in NCS studies and retail and different things. But the idea that you are generating new customer revenue where your incremental return nine months later is growing by hundreds of percent, I think, is deceptive to the point of criminal behavior.
[00:59:44] Yeah. Like, it's like insane, almost.
[00:59:45] Olivia: Quick...
[00:59:45] George: ...thought on that. Oh, go ahead, Olivia. You were gonna say
[00:59:47] Olivia: something? Well, I think, one, it's great that you guys did a test to answer this question. Yeah, yeah. I commend you on that. They actually ran, I think it was a seven-week plus seven-week test,
[00:59:57] Taylor: total,
[00:59:57] Olivia: a 14-week test.
[00:59:58] That's awesome. Now we have an answer to that question. And if you're not seeing much come in, I think that's important. I guess my question is: how much distribution do you guys have on Amazon and in wholesale?
[01:00:09] George: I mean, we have a little bit on Amazon. It varies; maybe 5% of our total revenue.
[01:00:13] It's
[01:00:13] Taylor: a big problem. It's one of the things that would unlock a lot more of
[01:00:15] George: your... probably less, honestly. Yeah. We need to fix our distribution, but
[01:00:18] Olivia: I discovered you guys in retail. You did, like, a little boutique in Michigan. Yep.
[01:00:21] George: No way. Well, good job, wholesale team. Anyway... what were we saying?
[01:00:27] Taylor: Talking about a recall,
[01:00:28] Olivia: it's criminal
[01:00:29] Taylor: distribution,
[01:00:30] George: measuring long term effects.
[01:00:32] Oh, you were describing, like, seeing it on the shelf. My thinking on Meta right now is: Meta is the equivalent of seeing it on the shelf. So you create the adstock through other channels.
[01:00:42] Taylor: Right. So,
[01:00:42] George: so, and meta service.
[01:00:43] Taylor: Yeah. So they see it on podcasts.
[01:00:45] George: Yes.
[01:00:45] Taylor: And then,
[01:00:46] George: then they build that bridge, and Meta is the freaking Target shelf.
[01:00:49] And that is why Meta, at least for us, at this stage, for our product category, is less incremental: because it is more mid-funnel. You are relying on some sort of brand awareness to actually drive the purchase.
[01:01:03] Taylor: So you're saying they heard about it on the podcast and they see it on meta?
[01:01:05] George: Yeah.
[01:01:06] Taylor: Oh, I see, I see. So you're saying that there's some initial implant that happens on a different channel?
[01:01:11] George: Yeah.
[01:01:11] Taylor: Yeah. It's the 47-leg stool.
[01:01:13] George: Yeah,
[01:01:14] Taylor: That is sort of my issue with it. But let's go back, because I feel like we didn't get to: I have a factor.
[01:01:21] What do I do? I want to get to that, 'cause this is the decision I actually feel on the hook for, and I feel pressure about it. Like, okay, we ran the test. George is looking at me, clients are looking at me, going: okay, tomorrow you have to allocate my dollars, and the dashboard said something. Scale up, scale that one,
[01:01:40] good, bad. What do I do?
[01:01:43] Olivia: Okay, well, you guys do something too, so let me explain what we do. Okay? If you are only using our experiments product, then we line up all the incrementality factors and we look at the incremental ROAS by channel, or CPA by channel, and then we redistribute budget based on CPA.
[01:01:58] It's just like looking at a CPA in Northbeam. Mm-hmm. Instead of looking at Northbeam, you're looking at Haus, on a CPiA, a cost per incremental acquisition.
[01:02:07] Taylor: When?
[01:02:08] Olivia: Well, how often are you doing budget reallocation? Like, should you be touching it and moving it every day?
[01:02:14] I think Meta and Google would say that that is hurting the system. And I don't buy media, but I feel like every agency owner has a POV on how much you should be moving, dialing these knobs up and down, right? Yeah.
[01:02:27] Taylor: So, weekly? Like, what's the answer?
[01:02:29] I'm asking; right now you're giving your perspective.
[01:02:31] Olivia: Yeah. So, the thing is, you cannot use that chart to answer how much budget to move from channel A to B. We kind of say, like, move 20%; we're loose about it. That is what MMM is for. MMM is actually drawing that curve and telling you exactly how much money to move,
[01:02:47] to basically have your marginal incremental efficiency be
[01:02:52] Taylor: yeah, be
[01:02:52] Olivia: neutral across all these platforms. And so the MMM answers the question of, all right, exactly how much do I move? But without that, you're kind of guessing. Again, you're playing loose: all right, how much budget? Maybe I do a 10 or 20% shift and then I retest.
[01:03:05] But I don't know that. And we're leaning into this as well, because experimentation has never been this always-on OS. Like, how often are you making allocation decisions? Are you making daily allocation decisions? Weekly allocation decisions?
[01:03:21] George: Like, how often? Right now... and there's a lot of debate on how frequently you should be.
[01:03:24] I know companies that are doing it on the hour. I know companies that are doing it every day. I know people that really believe in cost caps. But for us, for digital right now, it's a two-day cadence; for channel-level distribution, that's a monthly decision. And I'll shout out Colby McDermott.
[01:03:40] He's great. He basically has a monthly meeting with all of our acquisition channels, and he allocates budget by the most recent incrementality read (and we're discussing changing this methodology). So if Meta got a 1.2 and podcast is a 1.5... and that's not an incrementality read, that's a direct code;
[01:04:03] you know, we measure that, we can't measure that through incrementality. He has basically a process to reallocate budget. And then, as far as during the month goes, there is the incrementality factor applied to the in-platform reported number, and then you try to hold to that standard, so your budget moves up or down in real time based on: okay, if we need a 3x in-platform to get our 1x incrementality, or iROAS, then we can't have performance drop below three. That is
[01:04:34] how we're doing it. I also need something to drink.
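George's intramonth rule, holding the in-platform number at the iROAS target divided by the channel's factor, can be sketched directly. The 0.33 factor below is a hypothetical value, not Cozy Earth's actual factor.

```python
# Back out the in-platform ROAS target implied by an incrementality factor:
# factor * in_platform_roas = iROAS, so in_platform_target = iROAS / factor.
# The 0.33 factor is hypothetical.

def in_platform_target(iroas_target, incrementality_factor):
    """In-platform ROAS needed so that factor * in-platform = iROAS target."""
    return iroas_target / incrementality_factor

# A 1.0 iROAS target with a 0.33 factor means platform-reported ROAS
# must stay at roughly 3x, matching the "3x in-platform for 1x" rule.
print(round(in_platform_target(1.0, 0.33), 2))  # 3.03
```

The same division also works per campaign if each campaign has its own factor.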
[01:04:38] Olivia: How do you all...
[01:04:38] Taylor: Grab water? Yeah, we'll wrap up in a little.
[01:04:40] Olivia: I'll take one too. Thank you.
[01:04:41] George: Thank you. Yeah.
[01:04:43] Olivia: Yeah. Taylor, how do you guys... how do you operationalize factors?
[01:04:47] Taylor: So it's similar to you, in the sense that, ideally, every channel gets a factor.
[01:04:52] If we haven't run a test, we use our benchmark factors. Then we'll use an MMM to allocate the proposed budget against those channels. We'll break that into a daily expectation for every channel. The reason daily matters to me is because you run sales, you create moments; there's reasons why days have different available opportunity for you as a business.
[01:05:14] The market availability on Meta, as an example, is a function of usage. There are wild variations in available inventory by day. Like, the idea that you would spend the same every day is so flawed. I can't even begin to tell you how erroneous this idea is that you should spend the same amount of money every day.
[01:05:30] So you need to make some allocation. This is part of why cost controls are so good: they actually allow you to allocate based on the available market, not your presumption of it. I'll save that diatribe for another time. But so, you start there. But here's the key, okay?
[01:05:45] And then you have your factors in the channel, but then you have to play the game on the field. It's just like you go into the game with a game plan and then guess what happens on day one? You sent an email, you ran your ads, and you made some amount of money. And you either made the amount of money you thought you were gonna make or you missed and the email either worked or it didn't.
[01:06:03] The ads either were effective or they weren't, and the money's in your bank account or it's not. And then, in light of that reality, you have to course-correct. You have to change your behavior, because if you don't, then you are just going to be subject to the limitations of the plan as it is. And there are going to be wild error bars no matter what model you use, no matter what factor you apply, all the time.
[01:06:25] So you have to constantly be using your best daily discretion against the present reality of what's happening, to try and map toward it. I'll give you an example of the way that incrementality factors can fail. When you guys present an incrementality study result, do you break out new versus returning customers?
[01:06:40] Yes. Okay. Are the factors the same on new and returning customers? Okay. So if I have a campaign, and in the period it was tested in, it was split 50/50 between new and returning, and suddenly it's 80 returning and 20 new, is the factor the same? No, of course not. Same thing: you'll do factors on click versus view.
[01:06:57] Like, there's lots of different ways to present it. But if the underlying application of that factor changes, then you have to be conscious of that. And that will happen all the time. So you have to constantly be mining for why the reality, which is the revenue, the contribution margin for that day, is disassociating from the factor
[01:07:19] in some way, and you have to try and account for that as much as you possibly can, to try and move the thing that actually matters, which I would say is contribution margin on a monthly basis. You have to try and move that number as much as you can.
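Taylor's mix-shift example can be made concrete with a mix-weighted factor. The segment-level factors below are hypothetical values chosen only to illustrate why a blended factor breaks when the new/returning mix moves.

```python
# A blended incrementality factor is only valid for the new/returning mix it
# was measured on. The segment-level factors below are hypothetical.

def blended_factor(factor_new, factor_returning, share_new):
    """Mix-weighted incrementality factor for a campaign's reported revenue."""
    return factor_new * share_new + factor_returning * (1 - share_new)

f_new, f_ret = 0.9, 0.2  # assumed segment-level factors, for illustration
print(round(blended_factor(f_new, f_ret, 0.5), 2))  # 0.55 at the tested 50/50 mix
print(round(blended_factor(f_new, f_ret, 0.2), 2))  # 0.34 once it shifts to 20% new
```

Applying the 0.55 factor after the mix shifts to 20% new would overstate incremental revenue by more than half, which is the failure mode being described.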
[01:07:31] Olivia: Are you doing that manually? Is that, like, people digging into the ad accounts and into the data, or do you have a system for that?
[01:07:37] Taylor: We have a lot of things that contribute to it now. So the planning process is part AI, part data science team, part qualitative human planning, overlaid against the marketing calendar, the email send schedule, all the things that go into building the plan. And then, on a day-in, day-out basis, it's a combination of
[01:07:53] AI analysis of the performance and human analysis of the performance. Ultimately, the action still today is in the hands of a person. I'm hopeful that maybe that won't be the case in the future, but for now it is.
[01:08:03] Olivia: so is the factor then
[01:08:04] Taylor: you can drop it in. Great.
[01:08:05] Olivia: Is it not static? Is it dynamic every day?
[01:08:07] Like are, are your teams adjusting the factor based on what they're seeing in, in these data
[01:08:12] Taylor: sets? Nope. The factor's not allowed to change. Your behavior relative to the factor is what you
[01:08:17] Olivia: do with it?
[01:08:17] Taylor: Yes.
[01:08:18] Olivia: What you do with the budgets, I
[01:08:18] Taylor: dunno what
[01:08:19] Olivia: to do with it. Okay. That's interesting.
[01:08:20] Taylor: So you only update the factor in the face of new evidence.
[01:08:25] Yep.
[01:08:26] George: Let me ask you something. Go ahead. So, this is very common: you run an email campaign.
[01:08:31] Taylor: Yes.
[01:08:31] George: The Meta in-platform number is now a five, yes, and usually it's a three. So are you assuming that that day you had a 2.5x incremental ROAS, or are you assuming that
[01:08:41] Taylor: I'm...
[01:08:41] George: ...the exclusions did not work?
[01:08:42] Taylor: It sounds like you have bad exclusions.
[01:08:44] George: No. Sounds like that's every single Meta account in the country.
[01:08:47] Taylor: No, that's not true, if I have really good exclusions to new customer acquisition on a click basis.
[01:08:53] George: You don't have really good exclusions
[01:08:54] Taylor: You need three things: Klaviyo audience exclusions, pixel exclusions
[01:08:58] in Meta, and your Shopify audience exclusions. And there will be leakage, yes, of course, a lot, but you can get it down. No, it'll be sub-10%, maybe, from
[01:09:06] George: No, it will not. Meta has an official stance on this; I just met with them. It is not 10%.
[01:09:11] Taylor: What do they think it is?
[01:09:12] George: 30 to 40% leakage.
[01:09:13] Taylor: 30 to
[01:09:14] George: 40%. They can't match the users.
[01:09:16] Olivia: They said that?
[01:09:16] George: Yes. And if that's true, I probably shouldn't say that out loud.
[01:09:17] Taylor: If that's true, and I know they're working on it, then what is the value of a CLS study? If you're telling me the user match rates are that bad, then this whole thing collapses on itself.
[01:09:25] What is the value of optimization? How can you actually attribute a purchase back to the end click?
[01:09:29] George: Don't out me. I'm telling you.
[01:09:30] Olivia: I'm happy. I mean, that's great for our business, right? Yeah. So Meta's only able to match 60 to 70%?
[01:09:37] Taylor: Oh my God, now we're getting clipped. Don't, don't do this.
[01:09:39] Look, I don't know this for sure. This is George's claim.
[01:09:42] George: That's not...
[01:09:43] Taylor: ...enough. Olivia's gonna be in a Disruptors meeting
[01:09:45] Olivia: and have to...
[01:09:46] George: But I can tell you right
[01:09:47] Olivia: now, they know. They'll say that Taylor doesn't run your ad account, so he's able to just totally trash them.
[01:09:51] George: Yeah, I don't care.
[01:09:52] Basically, I know they are admitting that the exclusions product does not work, and they're working on revamping it. That was a big talking point last week. Cody is gonna be so validated. There is a reason there are a lot of companies spending on Waste Knot and Blotout and all of these companies that improve exclusions and match rates.
[01:10:09] Well,
[01:10:10] Taylor: Okay, but if they do, then you use all those people.
[01:10:12] George: Yeah. Yeah. And
[01:10:12] Taylor: And so what are you telling me your leakage is, then? You use all of those tools.
[01:10:17] George: Sure. Ours is probably sub 10% now.
[01:10:18] Taylor: Okay. So then when you send an email, it should not affect new customer revenue on a click basis, and certainly not new customer revenue on a one-day-click basis.
[01:10:30] George: Well... I mean, sure, in theory it shouldn't. But I can pull it up; we could go through all of your clients' accounts, and you're gonna see, when they send an email, that the in-platform ROAS spikes. And that's just the truth.
[01:10:40] Taylor: No.
[01:10:41] George: Yep.
[01:10:42] Taylor: No,
[01:10:44] George: We're gonna do it after this call, and I'll make my first tweet about that result.
[01:10:49] Taylor: I wish it were true that we perfectly execute our ideology everywhere. We don't. We have gaps; you can find gaps, there's no doubt about it. But the point is that if you are executing that, then the effect should be minimal. Now, what I'll say is that I think a harder question is: is the incrementality during a sale moment the same as during a non-sale moment?
[01:11:07] Those kinds of questions, yeah, I think are worth interrogating. And I actually give our strategists real leniency if they want to make a case for why they, as humans, qualitatively adjusted a factor on a day and changed their behavior based on some insight like that. I care most that behavior and belief are aligned; in most organizations,
[01:11:31] they're wildly disparate. If you can defend why you changed the factor on Tuesday and Wednesday, during the sale, to a lower number, because you were gonna send a bunch of emails, you were worried about over-reporting, you wanted to be conservative, then I will support you in that position. There are novel edge cases all the time, and I think that's one of them.
[01:11:48] I just think, though, that what you're doing is what I watch every organization do, which is that they undermine the information all the time, to where no scenario is actually actionable, because you can poke a hole in almost every decision.
[01:12:02] George: The question I'm asking is how do you handle that?
[01:12:04] And you're saying just don't mess with it.
[01:12:06] Taylor: No, I'm saying: have an opinion and express it.
[01:12:09] George: Great.
[01:12:09] Olivia: But pre-commit,
[01:12:10] Taylor: yes.
[01:12:11] Olivia: Before that, before it comes up on how you're
[01:12:13] George: gonna... Well, to be clear, we don't change our factor if we send an email and the in-platform number moves.
[01:12:16] Taylor: Then why did you just do that?
[01:12:18] George: 'Cause I wanted you to admit that that is something that should be... 'cause people might think about that.
[01:12:23] People might say, like, well, when I run a sale, it goes to a 5x. How should I factor it?
[01:12:28] Taylor: Do what Olivia said: predefine the measurement and believe within it, and then adjust it at a different period of time. But what you just did is what I watch happen in every organization all the time: somebody says something, somebody undermines it; somebody says something, somebody undermines it.
[01:12:40] George: Test results undermine it as well.
[01:12:42] Taylor: I agree. Right?
[01:12:42] George: Like, when you...
[01:12:43] Taylor: But not if you assert that they are revealing truth and you are building a population of results about what happened at various moments in time. Mm-hmm. If you assume that they're supposed to be the same, then they will undermine each other.
[01:12:54] George: Well, the problem,
[01:12:55] Taylor: but they're never gonna be the same.
[01:12:56] George: The problem for us is what we assume.
[01:12:58] So every month, if we get a bad result, we implement a plan to fix the result. We're always trying to improve the incrementality of the channel. And then maybe the performance improves for the next three months, and we assume that the performance improved because of X change. For example, we added Waste Knot.
[01:13:14] I have no affiliation with them; I don't even know if I like the product. We added it in September of last year and had our back-to-back highest incrementality scores ever in a CLS study. And I assumed that that was maybe, you know, causing the high incrementality scores. Get to January: not as good.
[01:13:33] So I think that's where, for me, I expected to start this journey and start stacking wins and see the score improve gradually over time as our strategy shifts. And that just hasn't been the
[01:13:44] case.
[01:13:44] Taylor: So here's one of the reasons why. And this is where I'm gonna ask Olivia to give us a preview of what they're trying to become in the world.
[01:13:50] I think the reason why is because your optimization is not connected to your measurement. Meta, in the delivery of your dollars, is not actually trying to affect incremental
[01:14:07] attribution. And I'm hopeful... like, to me, if you're on this journey, I don't know how you couldn't also be committed to using the ad product. Now, I get that Meta is saying, oh, there's some errors, it's got some stuff to work out, and they undermine their own ad product as it relates to incremental optimization. But to me, the end state here has to be that we are optimizing for incremental effects. Otherwise, why would we assume that trying to optimize for something else will create incrementality?
[01:14:25] Because it's not going to. Which is why one of my predictions is that, in the future, Haus is going to build an automated media buyer, and ultimately what I would think of as a DSP, a demand-side buying platform, that actually does optimize for their incremental measurement system. So...
[01:14:44] Olivia: Yeah — I don't know if we can talk about this one, but I'm gonna go back a second. One of the unintended benefits of doing this as a SaaS product is pre-committing to how you're gonna do the analysis before you do it. We have found so many instances where, if you have an in-house data science team, they are engineering the result that the marketing stakeholder wants.
[01:15:05] And I'm not just giving a few examples — this is prevalent inside these organizations: the data science team is tweaking the results to land on what the stakeholders want. So that's really important. That's why I like that we're doing this hands-free.
[01:15:21] On the ROI piece — you wanted to be stacking wins. I think there are two questions. One: are you actioning the tests? Because, again, that's where the ROI comes in. And Taylor, I like what you said: you can't continue to throw in these hypotheses, or reasons not to believe.
[01:15:42] Mm-hmm. You actually have to commit to this and make decisions on it. And then question two, on stacking — Taylor and I talk about this all the time, and it's so ironic coming from a measurement company — it's really hard to actually know. Maybe your business would've been down more if you didn't. Yeah.
[01:16:01] If you didn't action these tests. It's really hard to tie the changes you're making to business-level outcomes, given all the noise in a very large business.
[01:16:12] George: Well, to your point earlier about the counterfactual — exactly. There is no counterfactual, even though you're running the holdout.
[01:16:18] There's no counterfactual to the business before you made the change.
[01:16:22] Olivia: There's no follow-up test of, like: made this change,
[01:16:26] George: Yeah.
[01:16:26] Olivia: ran this test,
[01:16:27] George: validated, now test the new versus the old. And we could actually do that.
[01:16:31] Taylor: Yeah. What do you mean? Yes, there is.
[01:16:33] George: Well — no, we haven't done it. We haven't done it.
[01:16:37] I'm trying to think of how we would structure that.
[01:16:39] Taylor: Well, we ran an all-up test where we held out all media. To me, I think of that as a portfolio test. You redesign the portfolio and you test it again.
[01:16:48] George: Yeah. And you just continue to — but that's the point. This is the whole problem.
[01:16:53] If I really — this has got me thinking about the past. Think about what's happened, right? We have a poor result. We change the whole account from seven-day click, one-day view to only seven-day click. We get two months of strong ROI, we're feeling optimistic, we think it's performing — and then down. It's like we constantly make changes,
[01:17:11] we get a false signal that it's working, we reinvest in the channel. Because I don't think it actually has anything to do with the change we made. I think it has everything to do with the demand for the category at the time. And that, ultimately, is where I'm landing for our business: Meta's performance, at least, is much more dependent on market demand for our product categories than on any media buying strategy we're going to implement.
[01:17:37] Taylor: I think that's true in a very mature category, very late on the customer adoption curve. But I think there are just countless businesses — a hundred percent — even very closely aligned to your category,
[01:17:46] George: Yep.
[01:17:47] Taylor: that prove counter to that, where they're growing right in your face.
[01:17:50] George: They would not say Meta is their main strategy, though.
[01:17:54] Olivia: But I mean, George, you asked me earlier what follow-on tests people run to improve the result.
[01:18:01] Like, have you really invested in creative testing? Yes. Obviously, I know that's the biggest lever. Yes. You feel like you've swung all the way through on that?
[01:18:09] George: Yeah. And he can disagree, but
Taylor: No, I think you guys have done
[01:18:13] George: a lot. We have invested — and that was Meta's thing.
[01:18:15] We made some hard decisions last year. We let go of a lot of our creative team and rebuilt a growth creative team that's only focused on Meta ads. And Meta and some other partners were like, this is the lever; this is going to improve your performance. And it hasn't
[01:18:32] Olivia: really. So — we saw a 50% improvement in CPIA at Netflix through a creative template test.
[01:18:38] We changed the template that we ran the shows through: a 50% win. Other than that, we were never able to see anything more than a 10 to 15% win on accounts.
[01:18:47] George: I think for us, we were already at a point where — look, I think there are businesses where, say, you have an ad account that's just running static ads and really bad video creative.
[01:18:57] There are opportunities for creative diversity to really help them. I think for us, we just got to a point of diminishing returns on creative diversity. And you know, our Meta reps — I just said they were in town. We met with them, and I said, hey, you guys made us make hard decisions last year and it hasn't really paid off.
[01:19:13] Explain why. And I asked, are we not doing a good enough job at creative diversity? And they're like, no, that's not it. You guys are doing a good job. So —
Taylor: What did they say?
[01:19:23] George: Well, they said — just to let everyone know — the January test's poor performance was because of an ad placement.
Have you seen it in
[01:19:32] Olivia: FAN — Audience Network?
[01:19:34] George: No, it's not the Audience Network thing. Oh — it's on Reels. So it's an ad placement, a piece-of-junk placement: an ad that pops up on a Reel that's unrelated to your product.
[01:19:46] Taylor: Yeah,
[01:19:47] George: and it,
[01:19:48] Olivia: yeah,
[01:19:48] George: the platform
[01:19:49] Olivia: changes.
[01:19:49] George: It performed really well in Q4 for us because we were running promo.
[01:19:52] And then when promo turned off, that placement started performing really poorly — and spend was still going there because of the strong Q4 performance. And they're claiming that's why January's result has been less than ideal.
[01:20:04] Taylor: And these are all just things — again, your incrementality read in that month was true.
[01:20:10] And so it goes on the plot of things that might happen. There is a built-in dispersion of potential future outcomes laid out by your historical ones, and "a Meta bug can happen" is one of them. The way you account for that is you let it inform your expectation of the future, and then you'll have periods where you overrun or underperform it.
[01:20:32] But what I would like to see is the monthly relationship to your measure tightening over time. So if we started in month one — and I'm gonna make up numbers right now — the platform said three and we got back a one, that's a 300% delta to performance. If we get to the end of this and we're working off of a 0.7 and the result was 0.4, well, we have really narrowed the error bands through which we're making decisions.
[01:21:02] That, to me, is a sign of a really good testing program: you're actually getting the truth. Now, fixing it is sort of a different thing than measurement. These are different, and it's really important: measuring things doesn't make them good.
[01:21:14] George: Yeah,
[01:21:14] Taylor: it just assesses what is. But it's really powerful, right?
[01:21:17] And this is where Olivia gets to sit in the seat of: we will tell you the truth. And that is really important — a really important part of the process — but it doesn't lead to fixing it. That's where the real work happens, and where it becomes really hard. So I empathize with that.
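[Editor's aside: the "narrowing error bands" arithmetic Taylor describes can be written out. This is an illustrative sketch using his made-up numbers — platform-reported ROAS versus holdout-measured iROAS — not output from any real measurement tool.]

```python
# Editor's sketch of Taylor's made-up numbers: is the gap between
# platform-reported ROAS and holdout-measured iROAS narrowing over time?

readings = [
    # (label, platform_roas, holdout_iroas)
    ("month 1", 3.0, 1.0),   # platform said 3, holdout came back 1
    ("later",   0.7, 0.4),   # expectation 0.7, result 0.4
]

for label, platform, iroas in readings:
    factor = iroas / platform   # incrementality factor
    ratio = platform / iroas    # how far apart the two reads sit
    print(f"{label}: factor {factor:.2f}, platform is {ratio:.2f}x the measured read")
# In month 1 the platform read is 3x the measured read (the "300% delta");
# later it is 1.75x — the error band around decisions has narrowed.
```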
[01:21:30] Do you
[01:21:30] George: feel like we've given a good answer on how to fix it — if they get a score back that's less than expected? I mean, we've hit on creative diversity. Have there been other things you've seen that most brands should try, in an effort to increase the score?
[01:21:45] Taylor: I think there's a lot of things — I could give you a lot.
[01:21:47] Olivia: A lot of things here.
[01:21:47] Yeah, there are some levers. I think creative is the biggest lever — most often, when I see a step-change improvement in performance, it's a creative swing.
[01:21:56] George: Yeah.
[01:22:00] Olivia: Expanding distribution — you know, that's huge.
[01:22:00] George: Meaning — so if you're talking to a $10 million e-comm business, meaning go live on Amazon?
[01:22:05] Meaning TikTok Shop? What do you mean by distribution?
[01:22:09] Olivia: Like expanding into Amazon and retail. Okay. Yes.
[01:22:11] George: And do you measure lift in retail? You do? Yeah.
[01:22:15] Olivia: You know, you have to get the sales by retailer — Target's easy, Walmart's easy. Yeah. I mean, we should talk about how much — we don't have to put this in — but is it more than... it will be something
[01:22:27] George: we're talking about.
[01:22:28] No, it's not. But their future growth is.
[01:22:31] Olivia: Because — I mean, that's something we talk about all the time: if you are doing business on Amazon and wholesale, you are definitely understating your results if you're only measuring D2C. Yeah. And that's a very common reason these brands think their incrementality results are low: they're not pulling in all their sales channels.
[01:22:49] Yeah. So yeah, that can make a difference. Account structure's interesting. This is an enterprise problem: there are accounts that are 50, 60, 70% retargeting — I am not exaggerating. Yeah, no doubt about it. A lot of low-hanging fruit on an account that hasn't been optimized in terms of best practices.
[01:23:11] So account structure, we've seen. Click-based versus view-based optimization has helped. Signal engineering has been mixed — alright, I need to point Meta toward a shallower metric to help it go prospect new customers. It's not a home run every time, but we have seen a lot — Ridge talks about it often — with the alternative
[01:23:28] George: objectives.
[01:23:29] Olivia: Yeah. Like getting them out of this cycle of just overdelivering to this very small pocket of potential customers. So we've seen people get out of it. We have a lot of success stories around not just creative but account structure. But it seems like you guys have pulled a lot of those levers already.
[01:23:48] Yeah. Which is different from where a lot of these brands
[01:23:50] George: are. We've pulled levers. I will say the top-of-funnel objectives are something we could lean into a little heavier. Creative diversity — I feel like we've pulled that lever over a 12-month period. And the exclusions: yet to be seen.
[01:24:03] But if the results rebound post-January — which they're starting to, after turning that placement off — there's still an argument to be made that fixing the exclusions helped.
[01:24:11] Taylor: Well, I actually think the data says something different — that the data says you should spend more on existing customers than you do.
[01:24:17] George: Sure.
[01:24:17] Taylor: And this is, I think, a contrarian opinion: big businesses with large customer files are massively underspending on existing customers. They would get more incremental return — in terms of iROAS, not in terms of incrementality factors. If you want to improve your iROAS, you would actually get more of it from existing customers than from new customers.
[01:24:40] George: Well, all of our test results show that. What I'm saying is you need clarity on how many dollars actually went to repeat. That's right. I agree. Your
[01:24:46] Olivia: mic's so low,
[01:24:47] George: dude,
[01:24:48] Taylor: you
[01:24:48] George: gotta get a new mic
[01:24:49] Taylor: setup. The other thing I'll say — so you say it's creative; I would say product is number one.
[01:24:54] It's like going into a category where there are tailwinds, like you just described. If you're in a category that's growing, I think you're going to do better than in one that's stagnant, mature, and highly competitive — which is where you are — where all the profit gets competed away down to nothing.
[01:25:07] Two is Amazon. If you guys were on Amazon, your Meta iROAS —
[01:25:11] George: We're on Amazon. Yes.
[01:25:12] Taylor: If you're broadly distributed and you introduce Amazon revenue into the measurement, your iROAS will be higher. Yeah. 'Cause you are having an effect there, and the fact that you're not measuring it is suffocating your view of your own performance.
[01:25:23] So product would be one, that would be two, and then account structure. I think if you wanted to improve your iROAS or IMR, you should introduce cost controls on all the dollars. Like those —
[01:25:36] Olivia: You said cost controls — is that
[01:25:37] Taylor: what you just said? Cost controls, on everything?
[01:25:39] Yeah.
[01:25:39] Olivia: How about
[01:25:40] George: partnership ads? I just feel like if cost controls did what you think they do, there would be clear indicators that it's more incremental.
[01:25:47] Taylor: No, no — it's not more incremental. Yeah,
[01:25:48] George: it
[01:25:48] Olivia: would be.
[01:25:48] Taylor: It's not —
[01:25:49] George: 'cause
[01:25:49] Olivia: you'd be spending less money.
[01:25:51] Taylor: No.
[01:25:51] Olivia: Yes, you would.
[01:25:52] Taylor: That's not what incre— future debate?
[01:25:53] No. It's incremental return on ad spend, not the incrementality factor. This is the confusion people have when they say a channel is "really incremental," meaning it has a really high incrementality factor relative to what the platform reports.
[01:26:11] What I'm talking about is the incremental return on ad spend — the efficiency of the dollars.
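[Editor's aside: to make Taylor's distinction concrete, here is an illustrative sketch with invented numbers. The incrementality factor compares measured iROAS to platform-reported ROAS, while iROAS itself is the efficiency of the dollars — and a change can move one without moving the other.]

```python
# Editor's illustration (invented numbers) of the two quantities being separated.
platform_roas = 2.0   # what the platform's attribution reports
iroas = 1.2           # what a holdout test measures

factor = iroas / platform_roas
print(f"incrementality factor: {factor:.2f}")   # 0.60 — the discount on platform ROAS
print(f"iROAS: {iroas:.2f}")                    # 1.20 — efficiency of the dollars

# A change (say, cost controls) could lift iROAS to 1.5 while platform ROAS
# rises to 2.5 in step: better dollar efficiency, identical factor.
print(f"new factor: {1.5 / 2.5:.2f}")           # 0.60 — unchanged
```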
[01:26:17] George: And have you seen that? Have you seen more efficient dollars? I'd agree with that.
[01:26:21] Taylor: That's a choice of —
[01:26:22] Olivia: I'd be lying if I said we've run a lot of these tests. We've run a few head-to-head cost cap versus lowest-cost tests.
[01:26:28] Mm-hmm. What we could do is label all of the accounts that we've tested in Meta — we could probably do that in another follow-on report. But in the head-to-head tests, we've seen not much difference. It's a very small sample
[01:26:42] Taylor: size. For the test, though, you'd have to see what the cap was set at relative to the expected target on the other ones.
[01:26:48] Yeah. 'Cause if they're the same, then that's not gonna be
[01:26:51] Olivia: different. You gotta answer this question, though, for once.
[01:26:53] George: I mean, that's a —
[01:26:54] Olivia: Guys, what about partnership ads? That's where I've seen a lot of wins.
[01:26:58] George: We run a lot of partnership ads. I mean, we are doing most of the best practices.
[01:27:01] There's not a lot of, oh, well, you know — we're part of the disruptors group. It's really just a spend-level decision. That's what it comes down to: do we want to pull back spend, divest investment in Meta? And the other point — and we haven't talked about this — this isn't just
[01:27:15] Taylor: Meta, either. It's every —
[01:27:16] George: Yeah.
[01:27:17] The other thing is, we have a relatively diverse mix, which creates a high baseline.
[01:27:24] Taylor: Yeah. That would be my third thing to do to fix it: wildly narrow the media mix. Shout out to Cody and Connor for bringing this up on their podcast the other day. Channel diversity is the absolute killer of clarity.
[01:27:39] Nobody has any idea what's happening when you get into 10 channels. Nobody. Complexity breeds inefficiency.
[01:27:46] Olivia: Yeah. The mantra I like to preach is: Meta, Google, big ideas — and nothing in between. Mm-hmm. Because you'll get more creative if you're focused on big ideas, and not just trying to light up a new channel with the same creative, thinking that's gonna create a step-change improvement.
[01:28:06] So —
[01:28:07] George: For sure.
[01:28:07] Olivia: I like that mantra: if you're gonna go outside of those two channels, make it a big swing instead of just repurposing what you were doing in another channel.
[01:28:14] George: Yeah, a hundred percent.
[01:28:16] Taylor: Well, I hope this was helpful in some way, shape, or form. If anything, what I hope you can learn to sit in is the difficulty: if you wanna become a measurement-based company, and you care about
[01:28:29] this idea of progressive truth — about being less wrong over time as an organization — it's not gonna be binary. It's not one test and you've solved it all. You're on a journey. And building structured decision-making inside your company through that complexity is really important.
[01:28:44] And these are really smart people who are here to help. As you can hear, we haven't figured it out for Cozy Earth — we're working together, working through the problem.
[01:28:50] George: We've made some progress. It's not super easy, but I will say the clarity is very helpful.
[01:28:55] Taylor: Yep. And you know, as much as we're competing day to day, we get to work with Haus a lot, including in the case of Cozy Earth.
[01:29:02] And I'm an Olivia stan, and very appreciative of you coming to hang.
[01:29:06] Olivia: Thanks, Taylor. Thanks, George. Thanks for having us.
[01:29:09] George: Flew here — yeah.
[01:29:10] Taylor: Just to be on the podcast.
[01:29:11] George: I flew here to beat Taylor at golf, and he bailed. So —
[01:29:16] Taylor: By the way, you're a good metaphor. Your handicap is basically what this is.
[01:29:23] If you think about an incrementality factor and an iROAS, that's what your handicap is.
[01:29:26] George: I didn't play to my handicap.
[01:29:27] Taylor: You should think about that though.
[01:29:28] George: Last
[01:29:28] Taylor: time we played. Take that away, those of you who are golfers: if you wanna understand iROAS, think about a handicap, and how every new score doesn't suddenly become your handicap.
[01:29:38] Exactly.
[01:29:39] Oh
[01:29:39] George: yeah. That's a great analogy. Good job. That's a
[01:29:40] Olivia: really good one.
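[Editor's aside: the handicap analogy can be sketched as a running estimate that a single new test read nudges but does not replace. The blend weight here is arbitrary, purely for illustration.]

```python
# Editor's sketch of the golf-handicap analogy for iROAS (hypothetical numbers):
# one new test result updates the running estimate; it doesn't become the estimate.

def update_estimate(current, new_result, weight=0.25):
    """Blend a new test read into the running iROAS estimate,
    the way one round only nudges a golf handicap."""
    return (1 - weight) * current + weight * new_result

estimate = 0.7           # running iROAS estimate
bad_month = 0.3          # one poor test read
estimate = update_estimate(estimate, bad_month)
print(f"{estimate:.2f}")  # 0.60 — moved toward the new result, not all the way
```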
[01:29:41] Taylor: Thank you. Peace.
[01:29:42] Olivia: All right.
[01:29:46] That was a long one.


