Listen Now
In our recent DTC Index Monthly report, we analyzed the ways Meta seems to have “broken” over the last several weeks. But if Meta’s broken, what can you do about it?
On this episode of the pod, Taylor and Richard dive deep into what to do — and what NOT to do — when the fundamental system your ads are built on stops working properly.
Show Notes:
- Sign up for the DTC Index
- Want an easier way to source UGC? Streamline your process with SARAL’s chrome extension: getsaral.com/champions/ctc
- The Ecommerce Playbook mailbag is open — email us at podcast@commonthreadco.com to ask us any questions you might have about the world of ecomm.
Watch on YouTube
[00:00:00] Richard Gaffin: Hey folks. Welcome to the Ecommerce Playbook podcast. I'm your host, Richard Gaffin, Director of Digital Product Strategy here at Common Thread Collective. And of course I'm joined today, he's not in the studio right now, he's back in the office, by Mr. Taylor Holiday. Taylor, what's going on, man?
[00:00:13] Taylor Holiday: Nothing, Richard. Just enjoying another Monday. I'm sort of mentally prepping. My wife heads out of town for five days starting on Friday. So this is sort of the, the lead up to a super dad weekend. Got all the logistics lined up, got a lot of support from the babysitter crew.
So excited.
[00:00:31] Richard Gaffin: Excellent. Okay. All right. So we got super dad Taylor on the mic, ready to give you some sage advice here, particularly because what we want to talk about today, and this was mentioned in our most recent monthly DTC Index, a lot of the discussion in that particular issue was around what to do when Meta breaks, or rather the question was, is Meta broken? And there's some debate around that, but to get some context, and Taylor can maybe provide a little bit more: in the last couple of months, we've seen a number of circumstances in which Meta has essentially had some outages, or rather had spend spikes. In other words, the platform itself was broken in a meaningful way, several times over the last little while. So what we wanted to do is sit down and talk today about, like, what do you do when that happens? Because again, one of the issues that we run into many times is that we are all dependent on this other platform, right?
Something that we do not control, but that holds kind of our livelihoods in some ways. So what do we do when the worst case scenario happens and the platform itself, the thing that this whole thing is constructed on breaks? What do you do in that circumstance? So we're going to talk about that a little bit, but maybe let's kick it off with a little bit of context.
Let's just talk about some of the debate here and kind of some of what was included in the DTC Index issue.
[00:01:44] Taylor Holiday: So we looked through daily performance on Meta across the month of February to try and understand: when did the issue start? How bad was it? Has it been resolved? And it's clear that February 20th, it looks like, was the date when ROAS decreased substantially, and by substantially, I mean 10 to 15 percent on those days across a large aggregate dataset, with lots of individual cases where it was worse than that.

But what we saw was that there began to be a deviation from the consistent correlated relationship between CPM and conversion rate. In other words, CPM was going up and conversion rate was going down, where generally speaking, those things move together, such that even if there are rises in CPM, they're generally offset by increasing conversion rate.

The obvious example for this is something like Black Friday, Cyber Monday, right? Where you see an increase in price and conversion. But when those things move in opposite directions, one of two things happens. Either you're creating arbitrage, where you're getting an increase in conversion at a decreased CPM, and that's good news. Or the opposite, where you get an increase in CPM and a simultaneous decrease in conversion rate. That's not great. And that is what we started to see: those things became fractured in a way that they're usually not. And this resulted in a lot of poor performance, and the poor performance happened in a few different ways. Either you were running lowest cost and your conversion rate declined, so you spent through your budget but at lower efficiency; or, in the case of running cost controls, what you saw was spending through your budget despite not getting the efficiency outcome that you wanted.

And a lot of that, again, comes back to what Meta is doing in its bidding methodology: it's winning auctions at a CPM, so that's the actual price you pay for the ad inventory, with an expectation of conversion rate that wasn't there. So that's like the practicals of what happened.
It seemed to last through the end of February and even into the beginning of March, really up until the last week or so. Last week, we started to see what we'll call more normal performance, where we are consistently seeing delivery against the cost controls, holding their price appropriately, and not a seemingly erratic set of performance.
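To make the CPM and conversion rate relationship Taylor describes concrete, here is a minimal sketch in Python. The numbers are purely illustrative assumptions, not drawn from the Index data:

```python
# Illustrative sketch: how CPM and conversion rate jointly determine
# acquisition cost. All numbers are made up for demonstration only.

def effective_cpa(cpm: float, ctr: float, cvr: float) -> float:
    """Cost per acquisition given CPM (cost per 1,000 impressions),
    click-through rate, and click-to-purchase conversion rate."""
    cost_per_click = cpm / (1000 * ctr)
    return cost_per_click / cvr

# Normal state: CPM and CVR tend to move together.
baseline = effective_cpa(cpm=20.0, ctr=0.015, cvr=0.030)  # ~$44 CPA

# The "fractured" state: CPM up ~10% while CVR falls ~10%.
broken = effective_cpa(cpm=22.0, ctr=0.015, cvr=0.027)    # ~$54 CPA

print(f"baseline CPA ≈ ${baseline:.2f}, broken CPA ≈ ${broken:.2f}")
print(f"efficiency loss ≈ {broken / baseline - 1:.0%}")   # ~22%
```

The point of the arithmetic: a 10 percent CPM rise and a 10 percent conversion rate decline don't add to a 20 percent hit, they compound to roughly 22 percent, which is how a "10 to 15 percent" move in each input can produce noticeably worse account-level ROAS.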
[00:03:53] Richard Gaffin: Gotcha. Okay. So, actually, maybe to make a distinction, both for me and for our listeners. There is some distinction here between this being an ongoing problem as opposed to what we saw a couple of times, which was acute issues, moments of Meta just randomly overspending or turning off, these sort of more urgent things. But this is a combination: those things did happen, but then also there's just an ongoing sense that Meta is not operating the way that it should.
[00:04:22] Taylor Holiday: Yeah, yeah. We've seen a number of different issues from Meta. Some of them are what you're describing, this very acute, sudden spike in spend for a very short period: for four hours, it just blew through budget, and then it was sort of quickly resolved. Those have happened in the past. And in this case, it seemed to be more of an erratic, ongoing performance, a disassociation between delivery and performance. And those tend to be the most challenging and frustrating, because you just don't really know what to do in that scenario. And I think that's what I want to talk about today: really, how should you behave in the future when you encounter this problem? This is something we have to really organizationally hone in on. And there's a lot of really strong temptation to act in this moment on bad information. And so I think that's the real key here: in light of a technical challenge, we are using a very sophisticated technical tool for which there is very little information about what's occurring when these problems happen. Meta is not, sort of, super forthcoming in terms of the kinds of challenges that they face, nor do I think that in many cases any one individual actually understands entirely what's happening all the time.
And so you're dealing with a very complex technical tool with very poor communication, which is a very bad space to make decisions in.
[00:05:37] Richard Gaffin: Okay. So then let's talk about, maybe let's start the conversation with, what not to do in these situations. So we're in a circumstance where the ground is kind of shaking, the whole foundation is moving. What's the thing that tends to happen that you shouldn't do?
[00:05:51] Taylor Holiday: I think there's a temptation to act for the sake of showing that you tried something. And this is a temptation that I feel as an agency: your clients want you to fix it now, right now. And in order to show them that you really care, one of the easy signals is, look at all these things I tried. But I would just contend that that's like the worst possible thing that you could do, because you are just acting completely at random with no logical reason for why the behavior will work. And so the expectation of that performing well should be very, very low, because it's not informed by anything logical. But there is a very human element where people want to show you: I am here and I'm trying to help you do something. And I would just encourage you, in the moment that a problem arises, your job is to understand the problem, right? And this is sort of the classic, I think it's Einstein or Edison, I need to ascribe it to one of those geniuses, the idea that if you give me an hour to solve this problem, I'm going to spend 57 minutes understanding the problem, or whatever the quote is. It's some version of that. And in this case, I think that's the key. Stop and ask yourself: do I understand the problem such that I could suggest a solution?
[00:07:06] Richard Gaffin: So let's talk about, actually, maybe before we go into the dos here. You're right, this is a very common human issue. I mean, it happens in all sorts of circumstances where there's an issue to be solved: the temptation, of course, is to appear to be doing something rather than to sit back and strategize.

How do you balance the fact that, let's say on the agency side, of course, the client needs to see that something is happening? How do you demonstrate that something is happening without, let's say, mashing buttons to see if things are working?
[00:07:35] Taylor Holiday: That's right. So I think that what you want to show is happening is, one, that you are analyzing the problem. You begin to deconstruct all the data into hourly segments to try and pinpoint the moment at which a change occurred and in what metric it occurred: CPM went up, conversion rate went down, some combination of those things.

You are trying to identify with clarity: what is the issue? What changed? Right? It's sort of the classic what, so what, now what. The other thing you want to illustrate is that you are seeking understanding about the why. Once you understand, okay, CPM went up and conversion rate went down, whatever the change may be, now you also have to be careful not to assert assumptions into that gap, right? We do this. We want to close that mental loop between what occurred and why it occurred as quickly as possible. But it's really, really difficult in these situations to do that. So for the why, in our case, what we often do is we are in direct communication with Meta.

We are pursuing networks of information to gather the most insight possible before we make a recommendation about what to do. So what you want to show to somebody is that you are deeply analyzing the problem to create the clarity for good decision making, and that you are pursuing access to the source material as best as you can.
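As a rough illustration of the hourly deconstruction Taylor describes, here is a short Python sketch. It assumes you have exported hourly metrics into a pandas DataFrame; the column names, 72-hour window, and 2-sigma threshold are all assumptions, not anything Meta or CTC prescribes:

```python
# Sketch: flag the first hour at which a metric breaks from its trailing
# baseline, to pinpoint *when* the change occurred and *in what metric*.
import pandas as pd

def find_deviation(df: pd.DataFrame, metric: str,
                   window: int = 72, sigmas: float = 2.0):
    """Return the first timestamp where `metric` deviates more than
    `sigmas` standard deviations from its trailing `window`-hour mean."""
    rolling = df[metric].rolling(window)
    # Shift the baseline so the current hour isn't part of its own average.
    zscore = (df[metric] - rolling.mean().shift(1)) / rolling.std().shift(1)
    breaches = df.index[zscore.abs() > sigmas]
    return breaches[0] if len(breaches) else None

# df: hourly rows indexed by timestamp, e.g. columns ["cpm", "cvr", "spend"]
# cpm_break = find_deviation(df, "cpm")
# cvr_break = find_deviation(df, "cvr")
# print(f"CPM broke at {cpm_break}; CVR broke at {cvr_break}")
```

The output is the "what changed, and when" half of the analysis; the "why" still has to come from outside the account data, which is the point Taylor makes next.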
[00:08:46] Richard Gaffin: Okay. So then let's talk about what that actually looks like. So you're going about this the right way. Maybe we could use our particular circumstance, what's been happening over the last couple of weeks, and you've already sketched this out a little bit in your answer to my previous question. But what has it looked like to go about this the right way, and what are some of the solutions that have been put in place for some of our clients?
[00:09:09] Taylor Holiday: So one of the things, again, we have to just all sort of level-set around: what constitutes a good decision-making framework, and what information would lead us above the threshold of, okay, we will decide that. And this is, I think, a challenge for humans generally in this era: what's the threshold for truth? What's the point at which we would decide the information is worth acting on? And so there's a thread I'm looking at right now in our paid media strategy channel where people are, you know, posting what others are writing about what they're seeing happening and what's affected and what's not. And it's really easy to get latched into: I saw someone say this is what's happening, this is what's happening, that's what's happening. And so I just sort of stopped and said: a Twitter thread does not equal evidence. Please try to act only on verified information from Meta, or we can end up chasing our tail. There is no evidence I have seen that relaunching fixes things. We should be careful to make any such claim. Also, there is no evidence that a specific campaign type is affected more than others. So these are all sort of examples of the problem. You'll see someone say, my BAU campaigns are fine, but my cost controls were bad; or my cost controls are fine, but my BAUs are bad; or my ASCs aren't a problem. And one individual person describing what happened in their individual account is basically a useless piece of information. It provides you no information about what is occurring generally. Now, it can provide a hypothesis. So I think that what is useful as you discover these things is to begin to cultivate hypotheses.
So someone might say, I've seen three instances where cost control campaigns were more affected than BAU. Okay. That is not evidence that it's true; that is the beginning of a hypothesis that you can begin to explore. Now, as you gather those things, you may say, hey, there are three common things that we're running into. One example I saw was that the distribution of placements changed: there was a lot more delivery happening on some placements than others. Oh, okay, the Instagram algorithm was affected and inventory was diminished in that one channel, and so suddenly there were fewer ads being served in that channel.

CPMs went up. Okay, that's a working hypothesis. Now, can we take that and confirm it across a broader set of accounts where we see a consistent set of results? And that sort of step-by-step methodological thinking is really hard to do when there's pressure to solve a problem. But it really is important that you act on information that you can validate. Now, oftentimes what ends up happening is there is no pattern. There actually isn't a connection across all of these things such that you can make a specific claim that you're going to solve it, that you're going to Nancy Drew the problem in a way where you know for sure what is occurring. And so you're dependent on what is actually the solution in 99.9 percent of these cases, which is that Meta resolves itself over time. It is almost never the case, that I have seen, that some individual came out and discovered the solution, distributed it to the whole network via Twitter or Facebook or something else, and everybody implemented that solution, and that became the reason it all worked. Versus, take a step back. If it's not working, then possibly the action would be, if it's so bad that you're losing money, to turn off the account and say, I'm not going to deliver into an uncertain environment. Or you just continue the course and wait for the resolution to come from the technical side. And I understand how hard that is and how it feels like a lack of control, but it's the reality of the situation, because your decision making actually is worse when it's not rooted in anything that would give you confidence.
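For illustration, here is one way the anecdote-to-hypothesis step might look in code. The data shape (one row per account with pre- and post-incident ROAS by campaign type) and the 70 percent consistency threshold are assumptions for the sketch, not a CTC standard:

```python
# Sketch: promote an anecdote ("cost controls were hit harder than BAU")
# into a checkable hypothesis across many accounts.
import pandas as pd

def hypothesis_holds(df: pd.DataFrame, min_share: float = 0.7) -> bool:
    """True if, in at least `min_share` of accounts, cost-control ROAS
    fell more than BAU ROAS over the same pre/post window."""
    cc_change = df["cc_roas_post"] / df["cc_roas_pre"] - 1
    bau_change = df["bau_roas_post"] / df["bau_roas_pre"] - 1
    worse = (cc_change < bau_change).mean()  # share where CC fell more
    print(f"cost controls worse in {worse:.0%} of {len(df)} accounts")
    return worse >= min_share

# Three anecdotes prove nothing; a consistent pattern across a broad set
# of accounts is what moves a claim from anecdote toward working hypothesis.
```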
[00:12:55] Richard Gaffin: Right. There's something you said before we hit record that I thought really encapsulated this well, which is that you can't fix a systemic problem with your front-end behavior: the idea that there's something you can do with the tools that you're given from Meta that's going to solve the actual black-box, deep problem that Meta is having right now.
So, yeah,
[00:13:15] Taylor Holiday: Yeah. So, to clarify, we had that conversation off camera. The idea is that inside of Meta's delivery-side optimization engine, something occurs that alters the dynamics of that delivery. And we have, on the front end, a set of levers. You can turn a campaign on and off.

You can build a new one. You can change the settings. You can exclude delivery. You can identify an audience. But the idea that there's a relationship between some setting on the front end and the root technical issue on the back end is just made up. There is no evidence that any of those things are related to one another.

It's like if all of a sudden your check engine light comes on in your car and you start pushing all the interface buttons on the radio. I understand that we want it to be in our control, that if I turn the car on and off, and then I push the air conditioning up and down, and then I turn the volume knobs, it's going to resolve an oil leak or something. But there is no reason to believe that those buttons affect that issue when you have no idea what the issue is.
[00:14:19] Richard Gaffin: Yeah, totally. Well, it's fascinating how much this is just a classic human problem of trying to solve a problem in a system you have no information about, which is how you get, oh, if you, I don't know, catch a bee, it'll cure your fever, or whatever people believed 500 years ago.

Or even your example with the car: oh, actually, hey, if I kick the hood, all of a sudden the engine will start working again. It seems like this is a very similar situation where there's all kinds of almost magical thinking about this, right? Like, if I do the right thing, suddenly it'll resolve the whole backend issue.
[00:14:53] Taylor Holiday: That's exactly what it is. And what happens is somebody kicks the hood and it works. And then they tell everybody that they kicked the hood and it worked. And now all of a sudden everybody's kicking their hood. Right? And you're like, whoa, hold on. We still have no idea what caused the technical issue.
And the problem with these issues, generally, is that we won't actually get some sort of deep technical follow-up from Meta about what occurred and why and what to do in the future, right? And so you're left in a helpless position. And I empathize with the feeling, trust me, we feel it at scale across a bunch of customers. But the other thing that being in an agency helps us to understand is that the problem is not evenly distributed. Some customers are entirely unaffected, and others are, and that adds a different dynamic to the view of the problem, one that somebody doesn't have on their own. So I think there's just so much that is challenging in these moments, but the key is patience and clarity about the problem. Now, if you can help to identify patterns where there is an issue, like, yeah, Instagram delivery was affected, and you can find some pattern of representative behavior, then you internally can decide at what threshold you're willing to make an effort at a solution.

And look, maybe some people prefer, I would rather you spam a bunch of options to try and find a random result, even if we don't know why. I would just caution that, just like launching a bunch of lowest cost campaigns, there's so much risk of losing money in that process, where your ad spend becomes negative contributing and you actually deteriorate dollars, that in many cases it's better to spend none at all.
[00:16:28] Richard Gaffin: Yeah. Okay. So let's talk about, then, real quick, maybe summarize with best practices for the kind of let-go-and-let-God scenario that you sort of ended up in, right? The idea is that Facebook will resolve itself in 99.9 percent of these situations. So what advice can we give about what you should do in that sort of waiting period, until Facebook kind of resolves itself? You already mentioned doing analysis, making sure you have some understanding of what the problem is. But once you've discovered that there's, quote unquote, nothing you can do about it, what is there left that you can do about it, maybe, is the question.
[00:17:04] Taylor Holiday: Yeah, again, I think that you want to look at and understand all the levers of growth that exist in any business, and try, in the short term, to maximize all of the other available levers while one is not working. It's sort of like, if the vacuum doesn't work, you use the broom. If every time you turn your vacuum on it spits out more dirt, don't just keep doing that, right? Go get a broom and clean it up manually. And that can mean we depend a little heavier on email in this moment. This could mean that we are going to lean into search.

We're going to try TikTok. We're going to lean into whatever it might be, in this period, that is not technically broken. Remember, this is very different than comparing the performance between Meta and TikTok generally. We're saying that if one tool is technically broken, then you may need to pursue the use of another one.
And maybe it's a good opportunity to start to think about that platform-associated risk, because while every platform has technical risk, it is felt more seriously when it's a larger dependency of your revenue. So, a challenging period of time. Now, the good news, over the course of doing this for over a decade: Meta has a massive financial incentive to resolve these issues, and lots of smart people, and they tend to do it fairly quickly. So that's the thing to be optimistic or hopeful about. I think the largest one that we ever experienced was related to iOS, and that took probably six to 12 months to really resolve that loss of data integrity.

That's been the most persistent one we've felt. But these things pop up. This is also why, when you think about LTV-to-CAC modeling, one of the reasons that these kinds of things become really hard to model is because there's unknown risk that's not adjusted for in the expectation.

So in all of your financial planning, there's this idea that you should risk-adjust all of it down for these unrelated issues. It's just something that we have to account for, because it won't ever stop happening, I don't think.
[00:18:59] Richard Gaffin: sort of the those kind of like the black Swan scenario is, isn't that what it is like the Nassim Taleb thing,
where it's just like, there's the unknown unknowns. and building in some sort of like expectation that the unexpected will happen.
[00:19:11] Taylor Holiday: I would just contend that this isn't quite black swan, right? There's enough of a reality that over the course of a year of spending money on Meta, you're likely to encounter a technical issue that creates ad inefficiency that you may be compensated for or not. I remember the first time I ever used Mint, the budgeting app, one of the things they try to do is look back at, let's say, everything you spent on car maintenance over the last year. And it's like, you spent $2,600, and then they want you to plan for car issues every month.

And when I would do that in my budgeting app, it always felt like, I don't have car issues this month, why are you asking me to put $200 on car issues? And the reason is because there's a pattern that, over the course of enough time, you will at some point encounter car issues commensurate with that level of planning. And so I think that technical ad spend inefficiency is going to be a part of our reality at some point when you're using a tool like this. It's just unrealistic to expect that it works perfectly 365 days of the year. And so the question is, how do we account for that reality?

And I don't know that there's a good numerical representation. Maybe this is a cool research project to think about: how many large-scale issues, and we could maybe create a parameter for what counts, have occurred over some period of time, such that you could account for it. But maybe they should start selling insurance for it.
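The Mint analogy translates directly into arithmetic. Here is a sketch of that accrual applied to ad platform risk; every input is an assumed, illustrative number, not CTC or Meta data:

```python
# Sketch of the "car maintenance" accrual applied to platform risk.
# All inputs are assumptions for illustration only.

annual_spend = 1_200_000   # planned yearly Meta spend (assumed)
expected_incidents = 3     # large technical issues per year (assumed)
avg_duration_days = 7      # typical length of degraded performance (assumed)
efficiency_hit = 0.12      # roughly the 10-15% ROAS decline discussed

daily_spend = annual_spend / 365
expected_loss = (expected_incidents * avg_duration_days
                 * daily_spend * efficiency_hit)
print(f"expected annual drag ≈ ${expected_loss:,.0f}")      # ≈ $8,285
print(f"monthly 'accrual'    ≈ ${expected_loss / 12:,.0f}")  # ≈ $690

# Like Mint's ~$200/month against ~$2,600/year of car repairs, the point
# is to haircut forecast efficiency for failures you can't schedule.
```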
[00:20:39] Richard Gaffin: There you go. I love it. All right. Paid social insurance. That's our next service offering here at CTC. All right, folks. Well, Taylor, is there anything else you want to leave people with? Any last thoughts?
[00:20:49] Taylor Holiday: No, I would just encourage you generally, especially if you're a leader, to go back and ask: how did we as an organization behave when this occurred? And do I feel good about that? And if not, what could I do to prepare us for the next time uncertainty arises? Because there's just a fact pattern that tends to show up in our lives in lots of ways.

And like you referenced with medicine, this tends to be a way that we behave, or with car issues, or anything else. And there's the old saying, what happens in small spaces happens everywhere, right? This idea that if you behaved in this really erratic fashion, where you just started slamming things against the wall without good information, then it may be emblematic of a broader problem in the way you get to the kinds of decisions that you make. So it's worth, I think, exploring as an opportunity for all of us to stop and go: okay, what information did I act on? What was the quality of that information? Why did I believe it was true? Was it useful? Could I improve that process in some way? Because coming soon, I promise, over the next few months, is another technical issue you'll have to deal with.
[00:21:50] Richard Gaffin: Awesome. Cool, folks. All right, appreciate y'all listening. And if you're listening and you're not a subscriber, please hit that subscribe button; it really helps us out, on both Apple Podcasts and Spotify. And folks, we will see you all next week. Take care. Bye bye.