In this episode, Taylor shares his biggest critique of cost controls in digital advertising.
Join Richard and Taylor as they explore the limitations of cost controls, including the challenges of relying on human logic and consistency in a constantly evolving ad landscape. They dive into the role of AI in ad management, the struggle to trust automation, and the emotional hurdles that often lead to poor decision-making.
Taylor also shares insights on how to optimize campaigns for long-term success by balancing human creativity with machine-driven precision. This episode is a must-listen for anyone interested in digital marketing strategy, campaign optimization, and the future of AI in ecommerce!
Show Notes:
- Go to mercury.com/thread today to see if you’re eligible for Mercury Working Capital
- The Ecommerce Playbook mailbag is open — email us at podcast@commonthreadco.com to ask us any questions you might have about the world of ecomm.
[00:00:00] Richard Gaffin: Hey folks, welcome to The Ecommerce Playbook Podcast. I'm your host, Richard Gaffin, Director of Digital Product Strategy here at Common Thread Collective. And I'm joined as I often am by Mr. Taylor Holiday, our CEO here at Common Thread. Taylor, what's going on, man?
[00:00:14] Taylor Holiday: Not much, man. Just getting caught up on my week. I got to go celebrate my 14th wedding anniversary with my wife up in Napa this weekend. So it took a little extra day off and now I've got five days packed into four, you know, so off we go.
[00:00:26] Richard Gaffin: There you go. Yeah, the cost of vacation, always. But one thing I do want to mention, a big change in my life, is that I'm currently sporting the Skullcandy Hesh Evo.
[00:00:36] Taylor Holiday: Come on.
[00:00:37] Richard Gaffin: A courtesy of our, your friend in
[00:00:39] Taylor Holiday: Welcome to the family.
[00:00:40] Richard Gaffin: Exactly. He sent me a LinkedIn message and said, "I feel literally sick every time I see you on the podcast wearing those $35 Amazon pieces of crap."
So, the cans are on Brian. Appreciate it.
[00:00:52] Taylor Holiday: Yeah, before I found the way, I used to walk around the office with some incorrect earbuds, and it was really a source of conflict. That's been resolved. I now have an abundance of Skullcandy earphones, whether it's these beautiful ANC Crusher 2s or the Dime 3s that I carry on my keychain.
I'm loaded up. We're all in.
[00:01:16] Richard Gaffin: Right. I've got a couple more models I can show off on subsequent episodes. All right, so I think what we want to dive into today, and this could be a fairly quick discussion, is something we've been talking about a little bit off mic: the relationship between human beings and AI. Maybe that's the umbrella category for this, but the more clickbaity way to frame what we want to discuss today is Taylor's biggest critique of cost controls. Now, of course, those of you who listen to us know that we love our cost controls, and that we can perhaps come across as dogmatic about them.
But what we want to show you today is that that's not the case at all, and that there are some very specific limitations to the format. So, Taylor, why don't you unpack that for us? What is your biggest critique of cost controls?
[00:02:06] Taylor Holiday: It's that they require an incredibly consistent, logical set of behaviors from people who are neither consistent nor logical, and as a result there are real limitations to their use. Now, I want to stop and caveat that this is also a problem when using lowest-cost bidding, so it's not unique to the attempt to utilize cost controls, and they are still a constraint, in my mind, that adds a layer of protection for the advertiser.
That is really important. But we are finding that as we continue to design our system, and as we try to define the boundaries of all of the actions that could exist within it, there are some really hard questions we have to wrestle with, places where we find our own behavior incongruent with some of our ideas. And that has a lot to do with when you adjust bids and how you set them. This is a really challenging dynamic that has both a data-driven component and an emotional one, and I think it's worth discussing the challenges of both today and how they show up in the use of cost controls.
[00:03:18] Richard Gaffin: So let's start with the data-driven piece, the more logical element of it.
[00:03:25] Taylor Holiday: So, thinking about where to appropriately set your cost control (and again, there are multiple types of cost controls: bid caps, cost caps, and minimum ROAS, target ROAS, or target cost per result as options), it requires a bunch of unique knowledge. As an example, we have to understand the marginal outcome of the transaction we're generating. What is the product I'm selling? What is the collection of products I'm selling? What is the anticipated order value? What is the margin on that order? That in and of itself is a very complex topic, because oftentimes our ads don't lead to one purchase potential. They lead to a myriad of different purchase potentials, right?
So it's not just one product. The customer could buy an almost infinite combination of things through that link. So just answering the question of what the margin on this purchase is, and therefore where I should set the cost control, is a very challenging exercise by itself. Secondarily, once you have an idea of where you would ideally set your cost-per-acquisition goal or your minimum ROAS goal, you then have to deal with the optimization setting: are we talking about 1-day click; 7-day click; 1-day click, 1-day view; 7-day click, 1-day view; or 7-day click, 1-day view, 1-day engaged view?
There's a myriad of potential settings that relate to the incrementality of that purchase for the business. So you have to understand the margin clearly, you have to get to the right optimization setting, and then, in an ideal world, you have some sort of holdout test that has identified the incremental impact of that optimization setting's attribution. That's a lot of work to get to a very clear point of view on what the target should be. A lot of brands never get there; they never reach that level of clarity across that spectrum. And a lot of times, in the short term, we're having to use standardized benchmarks of previous results from other businesses and apply them to individual cases in ways that I know carry some error bar. The incremental impact of a 7-day-click optimization on Meta is just not going to be perfectly the same everywhere, across new customer acquisition and retention. So I have to walk clients through a period where I'm guessing, approximating using historical data from a broader set of tests, to answer those questions. That's still an advancement, in many cases, over how it was being done before us. But we're just talking about how to set it up, and that is a very difficult process by itself.
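To make the setup problem concrete, here is a minimal sketch of the blended-margin question Taylor describes. Every product name, price, margin, and mix weight here is hypothetical; a real version would pull these from order data:

```python
# Hypothetical purchase mix behind a single ad's destination URL.
# product: (price, contribution margin, share of orders)
product_mix = {
    "hoodie": (65.00, 0.55, 0.50),
    "tee":    (28.00, 0.60, 0.35),
    "bundle": (110.00, 0.50, 0.15),
}

# Blended average order value across the mix.
blended_aov = sum(price * share for price, _, share in product_mix.values())

# Blended contribution dollars per order, i.e. the break-even CAC:
# spend more than this per acquisition and the average order loses money.
breakeven_cac = sum(price * margin * share
                    for price, margin, share in product_mix.values())

print(f"blended AOV ~ ${blended_aov:.2f}, break-even CAC ~ ${breakeven_cac:.2f}")
```

The wider the purchase potential (collection page, whole site), the less any single blended number like this can be trusted, which is part of the case for value optimization that comes up later in the conversation.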
[00:06:02] Richard Gaffin: was going to say like, so how, what are some of the, the maybe heuristics or whatever that you use when you think about setting cost control? So obviously like that does have to eventually be done. How, how how
is, what's the process there?
[00:06:16] Taylor Holiday: So usually I'm using the destination URL to try to determine the boundary of potential purchase values. It's simplest if I'm driving to a PDP, meaning a product detail page. That doesn't mean people can't leave the PDP and go purchase other things, but it tends to be the narrowest bound of potential order values.
If I'm driving to a very specific purchase, the peak of that would be a locked-in lander with a single offer, where everybody can only buy one thing. That's the easiest version of the game. Then there's the PDP, then the collection page, and then the whole website. And the further out you go, the more likely it is that you should be using a value-optimized minimum or target ROAS to set a goal, because you're not going to guess correctly at the order value, and the average order value is going to be a blend of a wide myriad of order values.
And you don't want to be optimizing for just a bottom set of that. So the first thing is to understand how wide the purchase potential is for the ad I'm running. If it's really wide, then I probably need to use value optimization and min ROAS, not target CPA or a bid cap, because the purchases are too disparate and I could end up optimizing for just a subset of them.
That's step one in getting to an understanding of the order value, and then the marginal value of that set of purchases. So now I have at least an idea: okay, I want a two-to-one ROAS on this set of purchases. Cool. Now comes where I think there's probably a lot of debate, and I'll give you our point of view on the optimization setting: we have found repeatedly that 7-day click is the most incremental optimization setting. Now, there's a counterargument that my friend Yoni Levy likes to make, which is that even if 7-day click, 1-day view is less incremental on a per-order basis, all you do is adjust your target higher, and you get more signal doing it that way. So in reality, I'm open to either.
I've just seen some results lately where 7-day click, 1-day view was really, really bad in terms of incrementality, so our default is 7-day click. And the benchmark we have seen for the standard of incrementality there is that Meta actually under-reports by about 20%. So whatever the platform reports, we would multiply by about 1.2 to get the true value, or, equivalently, divide the goal by 1.2 to get your min ROAS target. In other words, if I have a two-to-one ROAS goal and I'm using 7-day-click optimization, I'm going to set my target at 1.67, and that's going to be my minimum ROAS target, because the incremental impact is actually greater than what the platform reports. So that's how the process goes: from the order value, into the optimization setting, into the incrementality factor we're applying based on all the research we've done with geo-holdout tests across many brands. And that gets us to a target to build against, and that's just the setup.
We haven't even gotten to how I manage that, which is where things really go wrong.
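Taylor's 20% under-reporting adjustment is simple arithmetic. As a sketch (the 1.2 factor is the benchmark he cites from their own holdout testing, not a universal constant):

```python
def platform_min_roas_target(true_roas_goal: float,
                             underreport_factor: float = 1.2) -> float:
    """If the platform under-reports conversions by ~20%, true ROAS is
    roughly reported ROAS * 1.2, so the in-platform min ROAS target
    should be the true goal divided by that same factor."""
    return true_roas_goal / underreport_factor

# A 2:1 true ROAS goal becomes roughly a 1.67 in-platform target.
target = round(platform_min_roas_target(2.0), 2)
print(target)
```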
[00:09:23] Richard Gaffin: So let's get into that management piece then, because I know it can certainly be an issue where people feel like the ad or the specific campaign is failing from an efficiency perspective, when really they just need to adjust the cost cap up or down or whatever.
So talk through what the thought process is.
[00:09:38] Taylor Holiday: Well, okay. I'm going to set aside bid caps and cost caps, because I think those are more complicated on the management side; we'll get there, but let's stick with our min ROAS example. At this point, I think we have to think about the relationship between the budget, the number of ads we're testing in the campaign, and the amount of spend required to get to an outcome that is trustworthy, or why Meta may spend more or less money in the process of failing. And this is really important, and it really comes to life in catalog ads. We've started introducing catalog as more of a default part of our structure, and what I recognized using catalog ads as an example is that if you have a large catalog of products, you have functionally launched a campaign with hundreds of ads. If you sort a DABA campaign or a product campaign by product ID, you'll see how many different ads, or products, it has spent on, and each of those is functionally a different ad that it's optimizing for. It's the same as launching a new min ROAS campaign with three ads versus 300: you are functionally giving Meta a broader array of experimentation, and the budget is going to take longer to try all of those options and get to a conclusive result about its ability to spend. Now, it may find an ad really quickly that allows it to deliver the intended result, and off you go. But if you have a lot of ads, or you're running a catalog ad, it may take a long time before it finds out that it's wrong, that it can't win. And that attempt at seeking a probabilistic outcome it's confident in may result in spending a lot of money, in a way where people's experience is, "What the heck, I set a cost control and it blew through it," or it spent over it, or it didn't deliver the result they wanted. And it's very hard to know exactly how much time you should allow something to fail before you get out of the way, while you're watching negative performance, potentially a lot of money, show up each day.
[00:11:50] Richard Gaffin: Okay. So that feels like it's segueing into the emotional piece of this discussion, of management particularly. So is there some heuristic for thinking about the relationship between volume and how long you need to wait before you kill it?
[00:12:12] Taylor Holiday: I think the first thing you should think about is how many end nodes of the test you are introducing, meaning how many ads you're launching in the campaign. That should have something to do with the amount of time you need to answer the question. There's also a difference between 1-day click and 7-day click: 1-day click with one ad is going to get to the most conclusive result most quickly, while 7-day click with 5,000 ads is likely to spend the most money.
So think of that as a spectrum of your appetite for the pace of optimization versus the potential waste of capital along the way. Now, the flip side is that if you can get a DABA campaign, or any campaign with a lot of ads, to start spending, it likely has a long life cycle. So there's a relationship between how long the optimization period is and how long that campaign is going to last, versus short optimization, short lifespan, maybe faster fatigue. There's a trade-off you have to decide as a business, relative to your own budget and your own ambitions, about what you're trying to build. If you're trying to solve for today's profit right now, and capital is scarce, then probably smaller campaigns, fewer ads, a shorter optimization window. But if you're trying to build a foundation for long-term scale, a longer optimization window and more ads are going to give you a campaign that's likely to have some staying power. So when we build a campaign, we have to have a point of view on how long it may take to create the result we want against that cost control, and make sure our client, or our boss, or whoever, is connected to that idea up front, in a way that gets them bought in. Because where this pressure really arises, where the emotional compromise happens, is three days in when it's not working. There's going to be pressure on you to do something, and inaction looks like negligence, right? That's the problem: from the client's point of view, doing nothing feels like negligence, despite it often being the right thing to do. And I watch us do this all the time. I know our buyers are feeling pressure to do something to solve the problem; there's a genuine desire to act for the client's benefit. So they go in and tighten the cost control, which is usually the corresponding action, and now you've functionally reset the learning.
And oftentimes what happens is the campaign just dies very quickly, because you've now moved the target to a place that was never achievable, or beyond your intended goal. All you're trying to do is balance out the result to make it look good over the last seven days, but that's not what we're trying to accomplish. What we're trying to accomplish is quality future spend. And unfortunately, those four days are gone; the money's already been spent. So you have to ask yourself, in this moment right now: what's the best thing I can do to help the future outcome? Not try to alter the past.
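As a purely illustrative back-of-envelope model of the trade-off Taylor describes between ad count, attribution window, and capital at risk: the constants below (results needed per ad, the budget, the CPA) are made up for illustration and are not Meta's actual learning logic.

```python
def rough_exploration(num_ads: int, target_cpa: float, daily_budget: float,
                      results_per_ad: int = 5, attribution_days: int = 7):
    """Hypothetical sketch: assume the system wants a handful of results
    per ad before it 'knows', and that conversions take up to the
    attribution window to report. Returns (spend at risk, days to a verdict)."""
    spend_to_explore = num_ads * results_per_ad * target_cpa
    days_to_verdict = spend_to_explore / daily_budget + attribution_days
    return spend_to_explore, days_to_verdict

# Three ads vs. a 300-product catalog at a $40 target CPA, $500/day budget.
small = rough_exploration(3, 40.0, 500.0)    # modest spend, quick verdict
large = rough_exploration(300, 40.0, 500.0)  # far more capital and patience
```

The point of the sketch is only the shape of the relationship: exploration cost scales with the number of ads, so the decision to launch 300 ads is also a decision about how much "wait and see" spend you have pre-committed to.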
[00:15:19] Richard Gaffin: Okay, this is interesting, because part of what you're saying is that when you build out a huge campaign and it needs time to optimize, you're not actually gambling per se, right? It's not, "I'm putting a lot of capital into this, some of it's going to be wasted, and seven or eight or ten days later I'll find out whether this was a complete wash."
It's more like, "Hey, this will probably work over the course of a week or two, but I just have to hold firm for seven days and fight the emotional storm happening inside." Is that accurate?
[00:16:03] Taylor Holiday: I actually think it's probably closer to gambling, in that you are setting something up with a probabilistic outcome where I've seen the odds of success be very low. Now, the expected value calculation, which is what moves the bet in your favor, has to do with the relationship between the cost of producing that individual ad and the potential return of that ad. If you can get that relationship right, you can actually flip the gambling game into your favor. And that's where it becomes different from a casino: the house controls the EV and never allows the game to flip into your favor. That's just the structure of it.
The second it does, they switch blackjack from a three-to-two payout to a six-to-five payout, and all of a sudden it's back in their favor. Now, with Meta, that doesn't have to be the case. You can create a system, I believe, and that's what we're after, where you push the expected value into your favor. But keep in mind that even when you do, that edge may be as small as 51/49 or 60/40, which is still a massive amount of failure to stomach, and it still requires that along the way you don't ruin that edge by making bad decisions as a human. And this is where the tension starts: if you're constantly paying for the optimization period and then intervening as a human, you're actually ruining the setup. You're paying for all the bad performance and then repeatedly interjecting yourself into the process, over and over and over again.
And you'll be chasing your tail and feeling like it's always failing. You have to remove yourself way more often. And the conclusion I'm coming to is that I actually don't think you can ask humans to make this decision. I think it's unfair, and I don't mean that to put people down.
I can't be asked to make this decision. I find myself emotionally compromised every time I'm in charge of it; it's just too weighty. If I go into the ad account and see that yesterday's performance sucked, what it requires of me to do nothing is overcoming my monkey brain, my evolutionary wiring, and that's asking too much of me, because I'm so compromised and fearful and worried, thinking about what happens if I don't act, and what the client will think if they see I did nothing. It's too much. I think you have to remove that choice, remove the obligation to intervene, or allow it only under a very narrow set of parameters. And that's what we're trying to figure out: what would those parameters be?
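The expected-value framing above can be sketched as follows. Every number here is hypothetical, chosen only to show how production cost shifts the bet:

```python
def ad_test_ev(p_win: float, profit_if_win: float,
               production_cost: float, test_spend_lost_if_loser: float) -> float:
    """EV of launching one ad into a cost-controlled test.
    Production cost is sunk either way; wasted test spend hits only losers."""
    return (p_win * profit_if_win
            - production_cost
            - (1 - p_win) * test_spend_lost_if_loser)

# A 10% hit rate can be positive-EV when creative is cheap relative to the upside...
ev_cheap = ad_test_ev(p_win=0.10, profit_if_win=5_000,
                      production_cost=200, test_spend_lost_if_loser=300)
# ...and the same odds can go negative when production cost balloons.
ev_pricey = ad_test_ev(p_win=0.10, profit_if_win=5_000,
                       production_cost=400, test_spend_lost_if_loser=300)
```

This is why an edge of 51/49 or 60/40 is livable only if you leave the test alone: each human intervention during the optimization period effectively pays the loser's cost without letting the winner's payoff materialize.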
[00:18:49] Richard Gaffin: So let's unpack what the steps would be to actually remove the human from the equation, short of, I don't know, locking them out of the account for seven days.
[00:18:58] Taylor Holiday: Yeah, I just think their job is not optimization at the bid level. That all happens automatically, and they actually aren't allowed to do it. Literally, it's just not part of their job, and it would be a violation of their scope of responsibility to do so. I think that's what you have to do.
It's a protection mechanism against this part of ourselves that we just can't help. So part of it is that you give the machine the obligation for optimization, and then you work to program that in the right way, and you automate it, again within a narrow set of parameters where we think a human should step in, because there's obviously no infallibility to the system.
The human is also there to represent ideas the machine may not know about: inventory, et cetera, whatever might be beyond the scope of the data the machine can interact with. So that's what we're trying to figure out: what are those things? One of the big things related to bid caps and cost caps in particular is that when you guess at the CAC, when you're introducing the expected cost of acquisition, you're subject to being really wrong about the AOV.
And that is a time when I think you have to adjust. I'm also getting less and less confident that bidding at the cost-per-acquisition level is ever the right way to do it, unless you have a really, really clear, narrow set of potential order values. So we're leaning more and more into value optimization, which I think is consistent with the direction things are heading as well.
So I think that's probably where we're headed: more automated bidding, even automated building, really just trying to push humans into the ideation of what we're selling, and why, and to whom, and the creative portion of it as much as possible, with even some support on the data side there.
But I just think it's an unfair request that's leaving people compromised and making bad decisions.
[00:20:44] Richard Gaffin: Okay. So in this scenario where optimization is entirely the machine's responsibility, what is to prevent the client, or the boss, or whoever's in charge, from looking in, seeing that excessive amounts of money have been blown over the course of three days, and then
[00:20:59] Taylor Holiday: Yeah.
[00:21:00] Richard Gaffin: begging you to intervene somehow, like what,
[00:21:03] Taylor Holiday: Well, this is actually, I think, the limitation, the reason this may fail: the standard that humans hold machines to is astronomically higher than the expectation they have of other humans. There's a great article we should link in the show notes about this, about the cultural challenges of implementing AI.
Self-driving cars are the most obvious example. If a self-driving car gets into a wreck right now, it's headline news everywhere. Yet every day, endless people crash their cars; we run into each other, we kill each other, endlessly. So there's an incongruence between our logic and our emotion on this issue, because we have this distrust of what the machine might do when we give up control.
Control is such a powerful emotion. But eventually we become normalized to it. A good example is autopilot. Most of us fly in planes, and the plane is mainly flown by autopilot the vast majority of the time. We don't think about it; it's now normal, and we all entrust our lives to it constantly. Eventually, self-driving will become the same thing, where we don't think about it. But when you're the bridge generation for whatever that next layer of relinquishing control is, I think it's really hard to do. There's self-worth tied up in it. There are almost conspiratorial questions about the set of structures at play and trust in that system. There are all sorts of things that go into it, and my fear is that, in many ways, there are clients that would rather let the humans fail than have the machines make optimized decisions.
[00:22:46] Richard Gaffin: interesting. No, I think about that. The analogy of the self driving car is really good. Cause I think about the idea of like allowing a car to drive me around and maybe this machine could kill me versus if I were to get myself into a car accident, I would be able to blame myself. And there's something easier
about that.
Right. There's a way that I could have done something different, you know?
[00:23:06] Taylor Holiday: So we landed in San Francisco for this drive to Napa, and I wanted to see if I could get a Waymo to take us there; I just thought it'd be a cool experience. It doesn't go that far yet, but I remember, as I was looking into it, this feeling of: that's really far. What if it doesn't know how to do it? What if something happened? How would I be able to explain this? All these things. And instead, you know what I did? I got in the car with a complete stranger.
Who, for all I know, could have been high as a kite. And all I have to go off of is that he has a bunch of star ratings in his Uber app. That's the extent of it. But yet that feels so much safer to me. If you go back to the beginning, you'll remember everybody said: sleep at a stranger's house? Get in the car with a stranger? You're crazy, you're going to be murdered. Right? So whatever the thing is, there's always this sense that the next evolution of it is really scary.
And I think that's going to happen to us a lot with AI right now. We're going to have to overcome a lot of our humanity in these spaces where we have assigned meaning and purpose, and figure out what to do with it.
[00:24:09] Richard Gaffin: Fascinating. Well, that's part of a much bigger philosophical discussion, perhaps, but is there anything else you want to hit on this?
[00:24:16] Taylor Holiday: No, I just want to acknowledge that we probably get criticized for being dogmatic about cost controls in some ways. And I do think it's important to say that some ideas are better than others, and that it matters which system performs best in the aggregate across all of the things we do.
That answer is really important for people to seek out. But the execution of the tooling is wholly imperfect, all the time. So we are constantly striving to consider the ways we might take our hammer and swing it better than everybody else, and constantly critiquing ourselves for the ways we're not swinging it well enough.
I think that's an important part of continually improving who we are: never being satisfied with the status quo of how we're operating. And I think this is really our superpower. For anybody who thinks we're dogmatic, I don't think you understand how much we evolve, challenge, and push our system forward to try to produce the best possible results we can.
And we're after it in this area, too.
[00:25:17] Richard Gaffin: All right, folks. Well, appreciate y'all listening to us. We will talk to y'all next week.