Listen Now

Are you obsessively tweaking your ad campaigns only to see flat results?

In this episode of the podcast, Richard sits down with Luke Austin to expose why daily bid and budget changes are wasting your time and money. They dive into the root issue behind inconsistent performance: a lack of belief in the data.

Luke shares the system CTC uses to align teams, set targets with confidence, and stop the reactive cycle. You’ll learn why geo holdout testing is the most trustworthy signal in media buying, and how shifting focus to creative output rather than micromanaging ad accounts is the real unlock for growth.

If your team is stuck in the weeds, this conversation will show you how to rise above and operate with clarity.

Stop second-guessing. Start scaling with confidence.

Show Notes:
  • Go to alialearn.com and mention our podcast on the demo to get 30 days free and 20% off!
  • Sign up for a 30-day trial and TaxCloud will give you free migration onboarding services when you decide to make the switch. Check it out at taxcloud.com/thread
  • Explore the Prophit System: prophitsystem.com
  • The Ecommerce Playbook mailbag is open — email us at podcast@commonthreadco.com to ask us any questions you might have about the world of ecomm

Watch on YouTube

[00:00:00] Luke Austin: I'll frame it this way, which is: our perspective is that the signal related to marketing measurement that we can have the highest confidence in is the result of a geo-lift,

geo-holdout incrementality test, set up at a point in time, that we get a stats result from that is 90% confidence or greater. Hard stop. That is what we are going to believe in and orient our decision-making around as it relates to budget allocation, target setting, and then the optimization of the media campaigns that flow from the budget allocation and targets.

[00:00:40] Richard Gaffin: Hey folks. Welcome to the Ecommerce Playbook Podcast. I'm your host, Richard Gaffin, Director of Digital Product Strategy here at Common Thread Collective, and I'm joined this week, for a sort of semi-regular segment we call War Reporter Luke, by our VP of Ecommerce Strategy, Luke Austin, recently back from paternity leave, I believe. Luke, what's going on, man?

[00:01:01] Luke Austin: Oh, we're in the throes. I've got an 18-month-old and a five-week-old, and it's a party. But we're doing good. And sort of reflecting on it, the first time around there were just so many novel things to solve through, and with the second we expected the chaos and inconsistency a lot more.

So my wife and I are able to take it in stride and just kind of laugh. There's more humor in the situation than

[00:01:32] Richard Gaffin: There

[00:01:32] Luke Austin: than before. So we're in it. Some good nights, some bad, but we're just rolling with it.

[00:01:37] Richard Gaffin: There you go. Well, and boy, does that sound like a metaphor for e-commerce: understanding how to survive through the chaos. So obviously that's what we're here to talk about today. Occasionally we'll have Luke on to give us a sense of what the actual workflow looks like on the front lines, so to speak, of working with some of our DTC clients and the DTC industry in general.

So, a couple things to talk about. Before we hit record here, you laid out for us a couple of core problems that a lot of DTC brands are facing, both in our agency and in the in-house teams of some of our clients. So let's talk a little bit about the first of two issues, which is that no one seems to know what to believe or what to trust in terms of measurement, in terms of target setting. So let's break that down a little bit.

[00:02:24] Luke Austin: Yeah. This has been a constant challenge that we have tried to stand in the gap and bring clarity to for a while now, which is getting clarity on what signal to believe. What is the strongest signal, out of the infinite number of signals that we can all have on channel performance and measurement and targets?

What is the signal that we should prioritize above all others? And at what point do we say: we believe in this, we are subscribing to this signal, and we are going to orient our workflow against it? Because in the absence of that, and this is what's happening in most cases across most brands and workflows that we see, there's some level of unbelief or misalignment around that question. What is the target?

What is the right thing to believe, and what should we act on? And what it leads to, typically, is: okay, we're gonna orient around this measurement signal as a source of truth. Let's make a bunch of actions against it. Let's go really far to one side, right? Like, we are seeing our revenue is decreasing.

And so we're gonna look at the most conservative GA last-click revenue and try to drive GA last-click revenue, orient around that. After, say, a week or two, we start to see that our new customer revenue or percent-new-to-file metric starts to not look so great, because GA last click naturally favors bottom-of-the-funnel, returning-customer channels and tactics.

So: oh wait, percent new-to-file, new customer revenue, let's go after those, we're gonna drive those up. All of a sudden now we have some awareness or reach campaigns going into Meta, which is increasing our percent new-to-file, but now our aMER and our contribution margin are starting to take a hit. And it's just this cycle of one signal after the other that leads to inconsistent decision-making at different intervals, misalignment within the team, and, what we would say is the biggest problem of it all,

energy and resources being focused on the wrong set of things, on the lowest-impact activity, which is making tweaks in the Meta ad account, adjusting your bids and budgets. That's the end output of this: the hyper-orientation around, we're gonna adjust this bid today, adjust this budget today, sort of surfing the media campaign strategy rather than focusing on something much higher-impact.

So belief is at the core of this: what are we gonna believe is the strongest signal that we're gonna base our decisions against? That is the thing that we are fighting for and really orienting around. The first step in this is alignment of belief, because all the decisions in the workflow are then an output and outflow of that belief or lack of belief.

[00:05:19] Richard Gaffin: Okay. So what I'm hearing is that there's two issues happening here. One is that, when you mention a problem with trust or belief, it's sort of a problem with, let's say, data fidelity. Like, do we understand, is this tool reporting in a way that's meaningful to us?

And then the second problem is a lack of understanding around which metric within our hierarchy of metrics, as I spoke with Taylor about last week, which of those

[00:05:46] Luke Austin: Yep.

[00:05:46] Richard Gaffin: metrics is actually the most important thing to go after. So, let's talk about the first part then. And that kind of maybe into another question I have, which is, what is unique about this moment?

'Cause this sounds like it could have been an issue with e-commerce since the beginning of time, or the beginning of whenever e-commerce started. So let's double back then and answer my question: what's the current set of issues with data fidelity and trusting the measurement tools?

[00:06:13] Luke Austin: Yeah. So at this point it's not a unique thing. It's the same thing that's been going on forever, since the beginning. But what's wild is that there are more tools available than ever, and yet the same questioning of the output is happening.

So it's the same thing, it's just perpetuating, and it's not different even though we have other signals available to us.

[00:06:41] Richard Gaffin: Mm-hmm.

[00:06:41] Luke Austin: And so I'll frame it this way, which is: our perspective is that the signal related to marketing measurement that we can have the highest confidence in is the result of a geo-lift,

geo-holdout incrementality test, set up at a point in time, that we get a stats result from that is 90% confidence or greater. Hard stop. That is what we are going to believe in and orient our decision-making around as it relates to budget allocation, target setting, and then the optimization of the media campaigns that flow from the budget allocation and targets.

What we are subscribing to is a geo-holdout test from an incrementality study. Once we reach 90% confidence or better from that test, we are going to use that as the best signal of what to believe in terms of the channel's contribution, and we are going to act against that.

[00:07:41] Richard Gaffin: Mm-hmm.

[00:07:41] Luke Austin: We are not going to subscribe to something else in its place in the meantime.

You're not gonna replace that. But once we have that signal, that is the thing that's going to drive the decision-making, to the extent that we set the target in alignment with the result of that test. And once we do, we are driving budget against it and making allocation decisions, and we're not saying, well,

we're not sure, maybe we should look at a different signal, maybe we should pull in a different data source, et cetera. That is the point at which we all align and subscribe: this is the signal, we are going to look at performance and make decisions in alignment with this signal.

[00:08:19] Richard Gaffin: Mm-hmm. So what leads, then, to the lack of faith, I guess, in that signal? Or is it just that people maybe aren't being exposed to it? What's the issue there?

[00:08:28] Luke Austin: So, I think everyone has experiences related to conversations and results from different marketing measurements that are always sort of at the back of our minds; we've seen signals from other studies, for other brands, that conflict. So that would be one bucket, where we get an incrementality result for one brand.

Let's use a very specific example now, because I think it'll be helpful to illustrate this point and connect to the second point. We get a result from a Google brand incrementality geo-holdout test for a brand that we work with, and the result is 73% incrementality relative to the platform-reported ROAS.

So, Google brand, 73% incrementality factor, which means if the on-platform reported ROAS for Google brand is, let's call it a 6.5 as an example, a 73% incrementality factor against that is a 4.745 iROAS from that test. What we also know is we have benchmarks for incrementality based on our dataset average, and

Google brand tends to be closer to 30% incremental, so less than half as incremental as the read for this specific test. That's tough when you've operated and seen many other tests that are closer to the 30% incrementality range and used that as a benchmark, and then you get a test result back that's saying for this brand it's 73%.
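
To put numbers to the arithmetic Luke is describing, here's a minimal sketch (in Python, purely illustrative, not anything from CTC's tooling) of how an incrementality factor scales a platform-reported ROAS down to an incremental ROAS, using the figures from this example: a 6.5 platform ROAS against the 73% test read versus the roughly 30% dataset benchmark.

```python
def incremental_roas(platform_roas: float, incrementality_factor: float) -> float:
    """iROAS = platform-reported ROAS scaled by the channel's incrementality factor."""
    return platform_roas * incrementality_factor

platform_roas = 6.5       # Google brand ROAS as reported in-platform (example from the episode)
test_factor = 0.73        # geo-holdout test read for this specific brand
benchmark_factor = 0.30   # dataset-average benchmark for Google brand

print(incremental_roas(platform_roas, test_factor))       # ~4.75 iROAS per the test
print(incremental_roas(platform_roas, benchmark_factor))  # ~1.95 iROAS if you only trusted the benchmark
```

The gap between those two outputs is exactly the tension Luke describes: the same on-platform ROAS implies a very different incremental return depending on which factor you believe.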

That's going to sit a certain way with most people, and there's gonna be varying levels of questioning and disbelief against it, which is challenging after, you know, a long time when you've maybe subscribed to a different measurement metric.

So the second bucket that I think leads to this lack of faith or unbelief is when you take that a step further and you say, okay, we're gonna subscribe to a 4.75 iROAS as the target for this channel based on that read, and you set your targets accordingly. What that means is you're probably gonna spend more into Google brand than you previously were.

Let's say you were assuming a 30% incrementality factor. Now you can actually go lower. You can push more volume into Google brand, realize more from there. And let's say you have three days of performance where your revenue or your contribution margin is lower than you expect, or your aMER is a little soft, right?

That is a signal, and yes, we should be looking at revenue and contribution margin; those are the business metrics that supersede incrementality. But looking at that on a three-day window, and then from that drawing the conclusion that this test result is probably not to be trusted, or is nullified as a result,

that's where it breaks down as well: looking every day at the performance in these small windows, and then, even subconsciously, well, we pushed more into Google brand as a result of this test, and in this two-to-three-day window there was softness in the aMER relative to what we expected.

So based on that, and also, I've seen other tests that are closer to 30% instead of 73%, I'm not sure how much to believe it; maybe we should, you know, run a different test, or reconsider, or let's just raise the ROAS target closer to what it was historically. I think those are the two things that tend to be, at some level, present for all of us when we get a test result back. We have to make a choice at that point in time:

Are we going to believe the thing or not? And then the action or lack of action flows out from that.

[00:11:57] Richard Gaffin: Yeah. So I mean, it sounds like part of it is the pressure to make daily changes, or to prove on a day-to-day basis that you are doing something. And I think we've talked about that on the pod before, but that certainly leads to second-guessing decisions that would be better left to simmer for another seven days or something like that.

[00:12:15] Luke Austin: That's exactly right. That's exactly right. There are so many other factors that could be at play in that small of a time window that drawing a conclusion from it is not sound thinking. That is not experiment design, right? We just ran a designed experiment.

We just got 90% confidence back from it. That is something you can actually subscribe to. The other is anecdotal, related to a much shorter period of time. And then if you take it one level deeper from the business metrics: you go into the platform, Meta or Google, and you update your targets and budgets, your ROAS targets and your budgets, according to that.

Looking at the performance of those campaigns in a shorter time window than seven days, in some cases when you're optimizing for seven-day click, I think there's an argument for looking at a longer time window, but let's just keep it at seven days minimum. This is where we're drawing the next line in terms of our workflow and what we're actually going to act against: looking at the performance in anything shorter than a seven-day window is going to give you a read that you can't have confidence in either, and you shouldn't be making decisions against it, because

you're optimizing that campaign against the seven-day window. We've all seen the chart of how cost caps work, right? Over a seven-day window, you have some conversions above your target, some below, and it sort of averages out over that time period. There's day-of-week effect,

there's seasonality, there's how the bidding mechanism and the auction work. Reacting to that short of a time window leads to actions that are going to result in much less impactful work than could be done otherwise.

[00:13:59] Richard Gaffin: Yeah. Okay. So you kind of hinted at it already a little bit here, but if the two fundamental problems are not knowing what to believe in terms of measurement tools or trusting a data source, and then that leading to a non-cohesive workflow, let's talk about what the ideal workflow looks like. What does it

[00:14:22] Luke Austin: Yep.

[00:14:22] Richard Gaffin: to come together?

[00:14:24] Luke Austin: Yep. So, step one, in terms of determining budget allocation and targets: incrementality geo-holdout testing is the thing that we are all going to subscribe to. There has to be agreement at the highest level in the org that this is what we are going to look at to make those decisions. Once that is completed, then what we are going to do is use

the best starting point for incrementality and geo-holdout that we know at that point in time, which is our dataset of benchmarks across a number of brands, and use that as a starting point. It's a better signal than what the platform is reporting. It is not going to be the best signal until we run an experiment for the brand specifically, but it is better than anything we have at the moment.

So we're going to use incrementality benchmarks to set those targets. Simultaneously, we're gonna start setting up geo-holdout tests for each of our channel tactics, right? And as we get those results, we're gonna update the incrementality starting-point factors so that those are the best read for the brand specifically.
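
As a rough sketch of that rule: the starting-point factor only gets replaced once a brand-specific geo-holdout result clears the confidence bar Luke describes (90% or better); until then, the dataset benchmark stands. The names, values, and data shapes below are hypothetical illustrations, not CTC's software.

```python
CONFIDENCE_FLOOR = 0.90  # the "90% confidence or greater, hard stop" threshold

# Dataset-average starting points per channel tactic (illustrative values only).
benchmark_factors = {"google_brand": 0.30}

def planning_factor(channel: str, test_result: dict | None) -> float:
    """Return the incrementality factor to plan budgets and targets against."""
    if test_result is not None and test_result["confidence"] >= CONFIDENCE_FLOOR:
        return test_result["incrementality_factor"]  # brand-specific geo-holdout read wins
    return benchmark_factors[channel]                 # otherwise, stay on the benchmark

# Hypothetical example: the Google brand test from earlier clears the bar...
print(planning_factor("google_brand", {"incrementality_factor": 0.73, "confidence": 0.92}))  # 0.73
# ...but a lower-confidence read would not replace the starting point.
print(planning_factor("google_brand", {"incrementality_factor": 0.73, "confidence": 0.80}))  # 0.3
```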

I'll pause here, too, on this step. I think there's another underlying thing that leads to some of the skepticism and unbelief, which is related to the confidence level of geo-holdout tests. More specifically, we're looking for 90% confidence or better in our tests. There are a number of tests that we extend for longer than we expected to run them, to make sure we get to 90%.

There are some tests that we scrapped completely because the read was not strong enough for us to have confidence in making a budget allocation or target-setting decision against it. I would implore everyone out there: that is what they should be expecting from incrementality and geo-holdout testing.

There are platform solutions out there that say 80% confidence is an acceptable range for the test results.

[00:16:09] Richard Gaffin: Mm-hmm.

[00:16:10] Luke Austin: There's too much variation when you go less than 90% confidence on the test results, because you start to introduce a wide range of possibilities in terms of what the test result could have been.

[00:16:19] Richard Gaffin: Mm-hmm.

[00:16:20] Luke Austin: You need that to be tight enough to have confidence in; otherwise you're gonna get a test result back, maybe it's 80% confidence, whatever, and you probably should have some skepticism about whether that's gonna change over time. So make sure the confidence level is high enough, and that we're not just running the test and saying the result is okay, that's gonna be good enough.

Because right there we've already introduced some sense of unbelief or disbelief, or the right to question that result at some point in time. So: incrementality starting points, run geo-holdout tests to update your factors, and then from there you set your ROAS targets accordingly for each of your channels.

So you have what your aMER target or first-order profitability target is. We've talked about what you're expecting in terms of the return rate on each of your channels, but you have that set, and then you set your ROAS targets for each of your channels in accordance with that by applying the incrementality factor.

So we have the incrementality factor from these tests, we apply the ROAS targets for each of the channels, and then from there we are going to set the targets within each of the campaigns. So our min ROAS targets in Meta, our tROAS targets in Google: we are going to set those targets in alignment with what that target is based on the incrementality test.

We are gonna put an inflated budget on it to make sure it doesn't cap out. We are not going to make decisions on a daily basis related to bid and budget changes, particularly for campaigns that are optimized for a seven-day click window or longer. We're gonna set the ROAS target in alignment with what that

goal is, and then we're gonna make sure it has enough budget so it doesn't cap out and can spend against it. And that is not typically what we see happening in most accounts. It's looking at yesterday's data, or the past few days' data: oh, the target was set at 2.6, but the campaign's been at a 2.4,

so I'm gonna tighten up a bit, or I'm gonna drop the budget; then you have a couple strong days. And it is wild. We have started to aggregate data across our accounts on the number of bid and budget changes in each of the accounts, and it is incredible the amount of activity that's happening in pursuit of something that is not achievable.

It's not a game that's worth playing, or that we're gonna be able to win: making all these individual budget changes on single days and expecting a better outcome as a result. So: incrementality starting-point factors, update them with actual test results, set the campaign ROAS targets in alignment with those, give them inflated budgets,

and then do not make bid and budget changes on a shorter cadence than the optimization window of those campaigns. Instead, focus the activity on something that's gonna be much more impactful to the business overall.
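
The episode doesn't spell out the exact formula for "applying the incrementality factor" to campaign targets, but one plausible reading, shown in this hedged sketch, is that the in-platform min ROAS / tROAS target is the incremental ROAS the business needs divided by the channel's factor, which is why a more-incremental read (73% versus an assumed 30%) lets you set a lower platform target and push more spend. The 2.0 incremental goal below is a made-up placeholder, not a number from the episode.

```python
def platform_roas_target(required_incremental_roas: float, incrementality_factor: float) -> float:
    """Assumed translation: in-platform target = incremental ROAS goal / incrementality factor."""
    return required_incremental_roas / incrementality_factor

required_iroas = 2.0  # hypothetical incremental ROAS implied by the aMER / first-order profitability target

print(round(platform_roas_target(required_iroas, 0.30), 2))  # 6.67 -- assuming Google brand is only ~30% incremental
print(round(platform_roas_target(required_iroas, 0.73), 2))  # 2.74 -- the 73% test read allows a lower target, so more spend
```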

[00:18:59] Richard Gaffin: Okay. Well, that's a great segue. Let's talk about that. So if the correct use of effort is not bid changes on a daily basis, or even on a weekly basis, or maybe only on a weekly basis, then what does the daily effort need to go into?

[00:19:19] Luke Austin: There are different buckets of activities depending on where folks sit within the organization. But I'll put this in the bucket of the folks that typically would be spending a good chunk of time each day looking at bid and budget changes, making adjustments in the ad accounts, making all these tweaks. For the people that were normally doing this, what do you replace the activity with that's gonna be higher impact?

Right? So for that subset of folks, the thing they're gonna have the most impact on is creative output and volume, in alignment with the highest-performing offers, angles, and audiences within the ad accounts. They're the closest to the ad accounts, closest to seeing what's working.

They should know what's on the marketing calendar, and they should know what the opportunities are to drive more performance. So here's what we started doing at CTC: we launched a pilot program recently for a subset of customers where we've actually just started making ads for those customers.

After we brief them, design them, ideate based on top-performing current ads, et cetera, we'll create the ads, drop them to our customers, and say, hey, here's 10 ads we made. Which of these do you wanna run? Which of these are approved? You can just pay for those individual ads and we'll get them live in the ad account.

We'll scrap the rest and we'll keep making you as many ads as possible. That is an example of the thing that the subset of folks closest to the ad account can focus on, and it's gonna lead to much more impact than spending an hour or two making 27 different

bid changes in the ad account that you're just gonna revert tomorrow anyway, and that are outside of the optimization window. So that's a very specific example of what we have done to focus on a higher-impact activity: we are going to spend time making ads, and we're just gonna keep making ads and get as many ads into the ad account as possible, which is going to have the highest impact on that campaign that has the 2.6 ROAS goal,

being able to spend as much volume against it, rather than lowering the ROAS target to try to get more spend volume and trading off efficiency. We're actually adding something net new. And then, in addition to that, the one layer above this is the marketing calendar, so ideation related to net-new marketing moments, et cetera.

We can talk about a lot of things there, but for the folks that are most involved in this day-to-day workflow, there's so much energy and time that, for DTC ecommerce brands, typically goes to this sort of activity in the ad account. And I think there's this expectation that sits in the space more broadly around activity and daily changes and what's going on.

Whereas if that subset of folks had the workflow that aligned with what we believe is the strongest signal, the campaign targets were aligned with that, and then we spent as much time as possible just creating as many different ads as we can and getting them into the ad account to increase that result, I think many brands would see a much stronger outcome from the energy that they're indexing into this workflow.

[00:22:21] Richard Gaffin: No, that makes sense. I mean, it's an interesting phase, maybe close to a final phase, of evolution for the media buyer's role, both at CTC and in general. At the beginning, a few years ago, five years ago maybe, it was sort of like a day trader,

[00:22:35] Luke Austin: Yeah.

[00:22:36] Richard Gaffin: sort of, you know, buying and selling all day long. But at this point it just sounds like the best use of almost everybody's time is creative strategy, creative ideation, creative production. Because particularly with recent algorithm changes, as that continues to inevitably progress, the space left over will be just creative.

[00:22:58] Luke Austin: Okay.

[00:22:59] Richard Gaffin: And it sounds like that's kinda what we're moving towards in terms of what the most useful thing is.

[00:23:03] Luke Austin: Yes, absolutely. And for those listening who are familiar with Compass, the creative AI tool we built out: we've been investing in tools and technology to allow us to do this as effectively as possible and contribute the amount of output, in terms of ad volume, that's necessary.

So we've been really focused on equipping our team and our workflow in a way that makes this work, where everyone can participate in this process. And I would add, outside of ad ideation and generation, there's this other bucket related to deeply understanding what is going on in the category and competitor set of each of the brands that are doing this.

Here's another specific example. Rather than spending two hours each morning adjusting all the budgets and bids on each campaign, surfing the performance, doing the day-trading thing, what if those two hours were spent looking at your competitor and category set: the creative they're running on the platforms, what the shopping feed looks like, what the Google environment looks like for those folks, and getting inspiration from that?

There's a very specific example where we were looking at a brand not long ago, a similar circumstance, where their Google brand was performing at like an eight or nine X, and the Facebook performance was much lower than that.

Call it somewhere lower than a, you know, two X ROAS. So Facebook's at a two, Google brand close to a nine X. And there's already sort of this, well, what do we believe about the Google brand performance, right? Should it function that high or not? Well, Google brand incrementality is a lot lower than Meta acquisition.

We know that as a starting point, but even at a 30% incrementality factor, Google brand at a nine X is performing at a 2.7 ROAS for the brand, which is higher than your Meta performance over that time period, even after applying the incrementality factor. And then you look into the Google brand shopping feed, into the SERP, and you see that

the brand is getting outbid by a bunch of retailers and competitors. They have like one or two of the spots in the top 10 relative to those folks. So there's a bunch of brand demand for the business, that we're driving with all this traffic from Meta and the net-new channels, and it's not being captured within the Google brand feed.
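
Here's the rough comparison behind that, using the approximate figures quoted above: even after discounting Google brand by the conservative ~30% benchmark factor, its incremental ROAS still clears the unadjusted Meta number, and Meta's own incremental ROAS can only be at or below that unadjusted figure.

```python
google_brand_platform_roas = 9.0   # "at like an eight or nine X"
google_brand_factor = 0.30         # conservative benchmark, before any brand-specific test
meta_platform_roas = 2.0           # "somewhere lower than a two X ROAS"

google_brand_iroas = google_brand_platform_roas * google_brand_factor
print(round(google_brand_iroas, 2))  # 2.7 -- still above Meta's unadjusted ~2.0
```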

And Taylor put out a tweet on this recently as well, which is, I think potentially the pendulum has swung a little too hard on the Google brand side of things. With PMax and Google brand specifically, we all went pretty hard on scrutinizing that.

And I think we're at a place where we kind of need to swing back more towards the middle, where we're seeing, for a lot of brands right now, there's a bunch of opportunity within Google brand shopping, specifically within the shopping feed for your branded terms. And even after applying the 30% incrementality factor starting point, it's higher efficiency than a lot of other channels. Just understanding that dynamic

[00:26:03] Richard Gaffin: Mm-hmm.

[00:26:04] Luke Austin: and solving for the Google brand shopping feed is going to drive a lot more net contribution for the business than, let's keep tweaking the bids and budgets on our Meta campaigns while that demand isn't being captured on the other side. So I think, yeah, those two buckets: ad ideation and generation, and then

deep understanding of the competitor and category set. That's a broader perspective: once you set your ROAS targets where they need to be and align on incrementality as the source of truth, let those be and go solve higher-impact problems. The platforms are at a place where they're going to allow you to do that, and it's actually probably gonna be better performance if you approach it in that way.

[00:26:44] Richard Gaffin: Yeah. So usually I like to end any podcast episode with the question: what's the most impactful thing that you, listening at home, can do right now with your business to bring about some of the stuff we're talking about? Partially it sounds like that might be, make your media buyers creative strategists also. So if you have another, more important thing for people to apply at home, I'd be interested in that, but first, let's talk a little bit about what it looks like to do that. Are you all of a sudden having brainstorming sessions every week, or what's the actual execution been, at least so far, for that sort of transition?

[00:27:25] Luke Austin: Yeah. For most brands, the most effective workflow for this is gonna be: allocate the time with your media buyer, or whoever's managing the Google or the Meta account, and then have a creative strategist slash designer, so you have two people dedicated to making Meta ads. Just have two people who sit next to each other if possible, or, if remote, who spend a lot of the day just ideating, dreaming up ads, and making them together.

If you can, dedicate two people to start with that workflow and set the expectation of their time in that way, rather than, hey, what are the adjustments we're making in the ad account, what bids should we change? It's: how many ads did we make today? How many ads can we make? Just start there. Get two people focused on volume of creative output

and just getting it into the ad account. For many brands, that is going to be a step in the right direction, where that's the expectation for those two folks, that is their core expectation: make as many ads as possible and just get 'em live in the ad account. Prior to that being able to happen, the other thing that everyone should do immediately involves all the core decision-makers related to the DTC business.

It's gonna change based on your org structure, but the CEO slash founder, anyone overseeing the marketing channels, and then all the way down to the folks doing the media buying on a day-to-day basis: ask everyone what the Meta ROAS target is and why it's set that way. If there are different responses being given, that's a yellow flag.

If the same responses are being given, but they're not related to the result of a geo-holdout experiment at any point in time, that's also a yellow flag. Because that is, from what we've seen, going to be the strongest signal of the performance of the channels. And so if that is not informing the way that your channel targets are being set currently,

that is what needs to happen out of the gate: to give you the highest confidence that you have the right signal, then to set the channel targets in alignment with it, and to have them in a place that you're confident letting them be, letting them ride against the optimization window, so that the team can then focus on these higher-impact activities.

[00:29:45] Richard Gaffin: Love it. All right, well wise words from the front line. And I will say one other thing too. You'd mentioned Luke, our incrementality benchmarks, which is to say like, if you don't have a geo holdout test lined up yet, or whatever the case may be, we do have a sort of a set of benchmarks that you can at least begin from for judging. The incrementality, this through the eyebrow as of any given channel. So check out it should be on our YouTube channel. My conversation with Luke and I believe Tony around incrementality that also has a deck slide in it that contains some of those benchmarks. So check that out. But I think we'll wrap it up there, Luke.

Appreciate the time, appreciate your advice, and for everyone else out there, we will talk to you next time. See you.