
Over the past several weeks, CTC Growth Strategist Luke Austin’s been conducting an experiment across a number of our clients to answer a simple question: are third-party attribution tools like Rockerbox and Triple Whale actually more accurate than native platforms like Facebook and Google?

On this episode, Taylor and Richard talk with Luke about the results of the experiment, why CTC is skeptical about third-party attribution tools, and the steps you should take to test your own attribution tools.

Show Notes:

Watch on YouTube

[00:00:00] Richard Gaffin: Hey folks. Welcome to the E-Commerce Playbook Podcast. I'm your host, Richard Gaffin, Director of Digital Product Strategy here at Common Thread Collective. I'm joined, as I always am, by Taylor Holiday, CEO of CTC, but we also have a special guest back from the trenches: Luke Austin, Growth Strategist here at CTC.

Luke, how are you doing?

[00:00:19] Luke Austin: Doing great. Very excited about our subject matter today. It's a conversation I'm very invested in, and on a weekly basis I'm having conversations around this topic with the brands that we work with, across all industries, across all sizes. So I think it'll be really relevant. Hope so.

[00:00:37] Richard Gaffin: Well, what a teaser. So that topic is, and to use the metaphor of the front lines, or take it further, we're pulling you off the front lines of the attribution wars yet again to talk to us about your latest experiment, the latest salvo there. So one thing I wanted to kick off this conversation with, and we were talking about this a little bit before we hit record: CTC can sometimes be accused of being dogmatic about our stance towards third-party attribution tools, your Triple Whales, Rockerboxes, and so forth, which is basically, across the board: don't use them, they're bad, they're a waste of time. That's maybe the simplistic way of thinking about it. And I think the content of this conversation won't necessarily contradict that stance, but I do wanna take a moment to let you guys weigh in and maybe clarify it. We are not against attribution tools because we want to be, we are against them because of what, exactly?

[00:01:34] Taylor Holiday: So I think the key for me is that we are in pursuit of the best available truth about what allows brands to drive profitable growth. That is the thing that we are in pursuit of. We feel an obligation to be rigorous about defining the reasoning behind the strategies that we deploy on behalf of our customers, and making sure there is evidence to support the strategy.

And so I would contend that we are actually at the opposite end of dogmatic, in that I have no moral or financial incentive to support Facebook, to support an attribution tool, to support a cost cap, a design. I have an interest in supporting the profitable financial outcomes of my own brands and the businesses that we work with.

And so what we feel is that we have a job to produce evidence that drives strategy, which then gets consistently evaluated on the basis of a hypothesis that we can test. We think that we have worked hard to create those things and to find whether or not there is use in any tool or strategy that we're deploying on behalf of our customers, and that is subject to change; we may change our position.

We have not always used cost caps. We're currently in the midst of a test right now about the merits of ASC, which Luke himself has been very public about, and I'm sure we'll do an episode on it when it's concluded. But I think that we are trying to be as public and as objective as we can about the assessments we're running.

[00:03:16] Richard Gaffin: Yeah. A way to think about it is that, as an agency, we've had an opportunity to test our hypotheses against a number of different brands. And even though we are open to the possibility, we've never seen evidence that attribution tools are helpful, but we're open to it if that evidence shows up. So in light of that, Luke, you have a blog coming out, say, in a couple weeks, we'll see how the editorial calendar shakes out, but it's entitled "Attribution Tools Are Hurting Your Acquisition Efforts," which is pretty straightforward. So Luke, maybe talk us through a little bit of the background to this particular article, and then the experiment that went along with it and what came out of it.

[00:03:57] Luke Austin: Yeah, so to give some of the background, some examples of these conversations that many of you may find yourself in, and that I find myself in on a weekly basis with brands of all sizes across all industries: what is the data signal that I should be looking at in order to most effectively allocate my budget? And some of those conversations might go like this: Google Analytics last click is telling me this about this channel's performance, while Rockerbox MTA is saying this, while ad platform performance is saying this directly from Meta, and I have conflicting signals across these different seemingly reputable platforms that I'm trying to sift through to make decisions on where to allocate my budget most effectively. What this conversation comes down to is the question of which data signal is most closely correlated to the outcome that we want. So for the purpose of this experiment, what we're really looking at is: if our goal is to drive new customer revenue growth, if we're trying to drive new customer revenue, new customer margin, as effectively and as efficiently as possible, what is the data signal that is most closely correlated to that outcome of driving new customer revenue growth? That's the question that we're seeking to get clarity on in this conversation, very specifically.

[00:05:25] Taylor Holiday: So I think that's a great outline, Luke, and it's important to highlight a couple things. This is a study that analyzes a specific outcome. That outcome is new customer revenue efficiency. We would look at it as aMER, acquisition marketing efficiency ratio: new customer revenue over ad spend.

In most cases, not all, our customers are using their media budget as a new customer acquisition engine, and so we wanna understand what is accomplishing that. If you haven't yet, please pause this podcast and go listen to the hierarchy of metrics video that we have on YouTube that outlines our approach to measurement. It begins with financial measures, contribution margin, moves to business metrics, moves to new customer level metrics, and then down to channel-specific metrics.

We think proxy metrics, which are at the channel level, are the least useful, including in-platform measurement. We care about your actual bank account the most, and any signal that happens at the channel level is a proxy for that. The reason we're designing this study this way is because we are trying to understand what impacts your overall new customer revenue as a business most directly.

That's the question that we care about the most, because we wanna make decisions, at the ad level, on the basis of improving your aMER more than any other metric or signal inside of the business. That's what we're trying to do. And so it's important to understand that the study is designed that way.
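[A quick illustration of the aMER calculation Taylor describes, new customer revenue over ad spend. This is a minimal sketch with hypothetical numbers, not CTC's actual reporting code.]

```python
# aMER (acquisition marketing efficiency ratio) = new customer revenue / ad spend.
# The figures below are hypothetical placeholders.
weekly_new_customer_revenue = 84_000   # first-time customer revenue from the store
weekly_ad_spend = 30_000               # acquisition media spend across channels

amer = weekly_new_customer_revenue / weekly_ad_spend
print(f"aMER: {amer:.2f}")             # 2.80 new customer dollars per ad dollar
```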

[00:06:51] Richard Gaffin: Gotcha. Okay, so

[00:06:52] Taylor Holiday: Okay.

[00:06:53] Richard Gaffin: Let's move then into what happened, Luke. So you talked about, essentially, what we're trying to do is determine, across, let's see, in the blog at least you have listed Meta, Google Ads, Google Analytics, Measured, Rockerbox, Triple Whale, getting a sense of the signals and which one is correlated.

I guess the idea is: correlated with the kind of bottom-line outcomes that we're talking about being important to us, like contribution margin, what's in your bank account, that kind of thing. So maybe break down a little bit the design of the experiment and then the results. What happened?

[00:07:27] Luke Austin: Great. So I want to continue to come back to the hypothesis that we're trying to prove through this experiment design, because that's the specific conversation we're wanting to have. The question is: if our goal is to drive new customer revenue growth, what data signal most closely correlates to that outcome? The hypothesis, and the conclusion that we'll actually circle back to, is this: that attribution platforms, MTA and incrementality models, are redundant and don't offer unique insight in driving new customer revenue versus on-platform attribution. So that's what we're going to prove through this experiment design, and then come back to as a conclusion, as we have this really specific conversation that we're trying to answer.

So to circle back on the experiment design, what we did is we looked at each of those platforms that you just mentioned, Richard. We looked at Meta on-platform, and that was seven-day click attribution, to be even more specific. We looked at Google Ads on-platform. Google Analytics, that's last-click attribution.

Measured, that's Measured Incremental as well as Measured LT; there are two different attribution models that Measured uses within its platform. Rockerbox, we looked at MTA, the multi-touch attribution model. Rockerbox also has even-weight, first-touch, there are different ones.

We looked at MTA. And then finally Triple Whale, new customer ROAS within that platform as well. So we're looking at multiple platforms, and then we did this across multiple brands as well. To set up the experiment design, we're trying to look at the correlation between the attributed signals from each of these platforms and the brand's new customer revenue. That's the outcome, that's the source of truth that we're trying to get at: new customer revenue, really clear, really specific. We're pulling that from the Shopify store, or whatever store the brand is on, the actual first-time customer revenue. And what we're doing here is we're looking at a weekly correlation. So I'll walk through the spreadsheet structure of this, for those of us who think in terms of spreadsheets. We have weekly rows of data across multiple years for each of these brands, so you can see data separated out by week for about two years per brand, and we're pulling in first-time customer revenue by week from the Shopify store, the brand's store. Then we're pulling in each attribution platform's acquisition revenue, so that's new customer spend: your acquisition campaigns, your prospecting. We're excluding any cross-selling, retention type of campaigns.

So the spreadsheet view is a weekly breakdown, over two years, of first-time customer revenue; a weekly breakdown of Meta on seven-day click; a weekly breakdown of Google, excluding your brand search campaigns; a weekly breakdown of Rockerbox MTA; Triple Whale pixel new customer revenue; GA last click; Measured LT; and Measured Incremental. That's the spreadsheet that we're looking at. And in summary, it's about 70 weeks of time, thousands of rows of data, for around ten e-comm brands, across six different attribution models. So that's the data set that we're working with here in terms of setting up the experiment.
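[For anyone who wants to recreate the setup Luke describes, here is a minimal sketch of the weekly correlation analysis in Python with pandas. The CSV file and column names are hypothetical placeholders; the real data set pulls weekly first-time customer revenue from the store alongside each platform's attributed acquisition revenue.]

```python
# Minimal sketch: Pearson correlation of each attribution signal against actual
# weekly first-time customer revenue. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("weekly_attribution.csv", parse_dates=["week_start"])

signal_columns = [
    "meta_7dc_revenue",             # Meta on-platform, 7-day click
    "google_ads_revenue",           # Google Ads on-platform, excluding brand search
    "ga_last_click_revenue",        # Google Analytics last click
    "rockerbox_mta_revenue",        # Rockerbox multi-touch attribution
    "triple_whale_nc_revenue",      # Triple Whale new customer revenue
    "measured_incremental_revenue", # Measured incremental
]

# One Pearson coefficient per signal, sorted strongest first.
correlations = (
    df[signal_columns]
    .corrwith(df["first_time_customer_revenue"], method="pearson")
    .sort_values(ascending=False)
)
print(correlations.round(2))
```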

[00:10:50] Taylor Holiday: And Luke, can I ask, why did you choose seven-day click for Meta in particular?

[00:10:59] Luke Austin: So

we prefer to use seven day click across

[00:11:03] Taylor Holiday: from,

[00:11:05] Luke Austin: for multiple reasons.

Main one is just

out view attribution, which is the piece we could add in, right? We can look at one day click, seven day click, and then we can add in that one day view on top of those. what we're trying to look at here is the incremental contribution from each of these

[00:11:21] Taylor Holiday: At seven 30.

[00:11:23] Luke Austin: And many of these incrementality attribution

[00:11:26] Taylor Holiday: Yeah, good idea.

[00:11:27] Luke Austin: only

Click data. Impression data isn't shared through

them.

[00:11:32] Taylor Holiday: I'll talk to you soon.

[00:11:33] Luke Austin: release impression logs,

many of these platforms, so it helps us to get more one-to-one in terms of the data that attribution platforms have, as well as what we believe to be the most incremental revenue contribution from Medis platform.

[00:11:46] Taylor Holiday: Great. And what I would offer is that if you're gonna go recreate this study on your own data, which I think is a useful activity to do because you may get variable results, you could actually pull all three Facebook attribution settings: one-day click, seven-day click, and seven-day click one-day view.

And what I would actually encourage you to do is to start with whatever you're currently using for optimization. So our default account setup at CTC is going to be seven-day click, and we will move to one-day click or seven-day click one-day view depending on the circumstances of the unique situation of an individual brand.

So we also believe in the connection between optimization and measurement, and for us that makes sense. But again, I would look at what optimization setting you are using, and I would probably start by analyzing that in your own data, because that is really, ultimately, the decision that you have to make here.

To me, all of attribution actually boils down to just the selection of which optimization you're using, because at the end of the day, that's how the dollars actually get spent. It's not really up to us at all. It's up to the signals and the measurement that Facebook's using for the allocation, for your estimated conversion rate, for your estimated action rate in your brand, which is based on the optimization setting that you have inside of Meta.

[00:13:01] Richard Gaffin: Cool. All right. So we've got the hypothesis and then the setup. So let's get to the results then, Luke. After running this test, what happened, and was the hypothesis confirmed or not?

[00:13:17] Luke Austin: Yep. So going back to the spreadsheet view, we have those 70 weeks of time, thousands of rows of data, several different brands and attribution platforms, and then we worked with our data analytics team to back into the Pearson correlation coefficient for each of those data sets. So that's the number that we're using to judge which of these platforms has the strongest correlation to new customer revenue. The correlation coefficient is trying to understand the relationship between two data sets, so it's asking: which of these data sets, on-platform attribution versus the other attribution models, has the strongest correlation to the new customer revenue weekly data set?

The correlation coefficient is going to give us that indication. So I'll walk through just a few examples here. Let's say brand one: ad platform to new customer revenue correlation, 0.94. Very strong; anything 0.9 or above is gonna be incredibly strong in terms of the relationship, so we're happy to see that sort of number. Rockerbox MTA, 0.95. Measured Incremental, 0.92. Measured LT, 0.89. Okay, let's stop there. So that's brand one. And people right off the bat might say, hey, Rockerbox MTA was at a 0.95, ad platform's at 0.94, it's actually a little better, right? Or Measured Incremental at 0.92, that's a really strong correlation, why not just use that? I'd be pretty happy with a 0.92 correlation against my new customer revenue. But we have to take a step back and understand what's at risk here, what's at stake in introducing these other platforms. And I think this is a good spot to open up that conversation, because it's not just, hey, let's use this data signal versus this other one and there are no other repercussions. There are some first- and second-order effects from introducing these. One is the cost of these platforms, which can be pretty substantial on a monthly basis. Outside of the direct cost, there are the inefficiencies in the decision-making process that you introduce by bringing in these platforms. Let's say you lead a media buying team, or you are a media buyer, and you are introducing Rockerbox MTA and Measured along with ad platform attribution to make decisions on budget allocation. Your decision-making process, in terms of identifying which platform to allocate more budget to, just decreased in efficiency by at least 3x, probably more, because what you're doing is you're looking at this tab, Rockerbox MTA, versus this tab, Measured, versus on-platform, and you're trying to triangulate data signals that may be at odds with one another. They might not be saying the same thing, and your decision-making process just got lengthened. So there's the platform cost, along with the inefficiencies that are introduced through these additional signals that you're trying to find the source of truth in the middle of.

[00:16:42] Taylor Holiday: And if we go back to the initial value proposition that these attribution tools rose into prominence on the back of, it was this idea that they're standing in the gap of Facebook's data loss, right? That they are providing a closure of the loop that gives you more accurate data. And I think what we're saying, standing here on June 27th, 2023, is that that gap doesn't exist.

Meta's correlation to new customer revenue, in every case (and we're gonna go through the other brand examples, but in this case it's 0.94), shows that Meta's data is directionally accurate towards your new customer revenue in a way that is just as good as any other platform. And so this gap doesn't exist in the way that we have been sort of led to believe at this point.

Now, we don't have a version of this that shows its progression over time from iOS until now; it could have been very real for a while. But at this moment, in almost every single case that we see, the accuracy of Meta's seven-day click attribution against new customer revenue is incredibly strong in terms of its relationship, and it outperforms most of the attribution tools most of the time in that relationship.

And so when we say it's redundant, what we're saying is that that problem doesn't exist. And so you can simplify your decision-making criteria. You can align attribution and optimization by just focusing on the in-platform data and using that to make decisions quicker.

Hey everyone, Richard here again with a quick reminder that if you sign up for three months of ADmission before the end of June, you'll get every CTC product included for free. That includes the Enterprise Scaling Guide, the E-Commerce Diagnostic Toolkit, the media buying tools, the TikTok Ads Playbook, and three months of the DTC Index, for free. So stop the podcast right now if you have to and go sign up at commonthreadco.com/allaccess, one word, or you can grab the link from the show notes. All right, back to it.

[00:18:47] Richard Gaffin: Yeah, so let's, maybe let's go into those other two brand examples real quick. Luke, is there anything to call out from those that was interesting as far as the results go?

[00:18:55] Luke Austin: Nothing different than what the first brand example gave us, but I'll walk through those results as well. So here's a second brand: ad platform attribution to new customer revenue, correlation of 0.97; Triple Whale new customer, 0.96. Here's a third brand example: ad platform, 0.83; last click, 0.79. So in almost all cases within this study, what we saw was that the ad platform had the highest correlation coefficient of any of the attribution platforms. In a couple cases we saw them be equal, and there was one case, the only case, where Rockerbox MTA was slightly higher than ad platform, by 0.01. Which, again, we have to then put on the table: that 0.01 correlation difference between the two, with the ad platform already being at a 0.94, is that worth introducing the additional cost of the platform and the inefficiencies in the process that these other data signals introduce? And then, to Taylor's last point, there's the separation of attribution and optimization, right?

The ad platforms aren't gonna use that signal; they don't optimize against Rockerbox's MTA signal. And so we'd argue in that case that, at the least, they're redundant, and in many cases they have a lower correlation than the ad platform, and so most likely are actually unhelpful and hurting your business.

[00:20:21] Taylor Holiday: All right. And I think it's also important to note that in every study that I've seen, and this is also confirmed by Facebook's own data, the worst choice is last click. So if we were to rank these, I think one of the things that we believe is that you should choose one. You know, some people think that this idea of having multiple signals is sort of helpful to making decisions, and I've never seen that be true.

All I've seen it do is create organizational bogging down and quagmire around decision making: argumentation and paralysis. So what we would recommend is to choose the one that has the closest correlation to new customer revenue, which is the thing, and use that to make your decisions, trust it, go, and constantly check back on this relationship.

Because I do think that there's a fair argument, and I've had this conversation with the team over at Ridge, because, you know, they're big proponents of Northbeam's MTA solution. And I will say that if you're going to do it, I think the way that they do it is right, which is: Northbeam, one-day click, for everything. We make all our channel allocation decisions on the basis of that data point, ready, go, act against it, and then use the platform, also on one-day click optimization, to make in-platform, ad-level decisions. That to me is a structure where, if you wanna pay a bunch of money for this tool, it gives you some sense of satisfaction, or maybe you already wrote a check for it, then cool, that's the best available system.

But to me it's an unnecessary step, and I think that's what we're saying here: stop with the unnecessary. What I've found is that it's rooted in this boogie-monster distrust, where we have sort of adopted this idea that Facebook data is bad and, without validating the alternative, just accepted that the other is better.

And this is where I think we really have to challenge the way we're thinking about data, which is: what is the evidence that shows that one data point is more trustworthy than the other? That's what I would just encourage you to find. Do you have the data point that validates that the decision you're making is in support of the financial outcome you're attempting to create?

[00:22:21] Richard Gaffin: Yeah. I think maybe the headline summary here is, although I could see somebody accusing us of saying that these tools don't work, what the results of the experiment show, particularly in brand one's example, where Rockerbox had a 0.95 coefficient as opposed to 0.94, is that it's not that they don't work per se, although in some cases they work less well, but that they work about as well as the ad platform.

So why are you paying somebody for a second opinion that does not necessarily give you different information, or rather that predictably doesn't give you different information than the initial opinion did? And I think there's another interesting thing about this idea of needing more signals to contextualize this.

And that's maybe why you would wanna go with a Rockerbox or whatever, but you actually do have a signal to contextualize it, which is: is your acquisition revenue going up?

[00:23:09] Taylor Holiday: That's right.

[00:23:10] Luke Austin: Excuse me.

[00:23:10] Richard Gaffin: So there is a way to judge whether the attribution that you're getting is accurate or not.

[00:23:16] Taylor Holiday: And that's the anchoring, you have to anchor to truth. When we think about the hierarchy of metrics, we like to say that the bank account is the only objective truth, and everything gets slightly fuzzier from there as you move down. Even revenue is complicated, because it includes returns, and there's a shipping cost in there that varies, and it's not totally predictive.

And then you move into new customer data, and it's like, well, LTV changes a bunch over time, and there are sub-cohorts that are different than others, and it's harder to sort out. And then platform metrics. I think it's really important to understand that we don't believe that that's truth, in the sense that Facebook's ads manager is not the end-all, be-all number. The new customer revenue is the thing that we are most anchored towards.

And we just wanna know which of the available optimization choices that we're choosing from helps us create that outcome. And the thing about these attribution tools is that usually one of two things is true. Either, as in these cases, which is very common, they are redundant in their relationship to new customer revenue.

So they don't give you any new information, and the ROAS in ads manager looks very similar to the ROAS in the other one. Or they're wildly different, and in that case, what it means is that one of them is a better predictor of new customer revenue than the other, and almost always, from what we see, that's the platform.

But the challenge is also that even if it was the Rockerbox number or the Northbeam number, you're left with a problem, which is: how do you move that number? Because if the optimization setting inside of Facebook does not correlate to the return that you are measuring, how do you make a decision in the platform about how to improve the number?

And what we find is, and this plays out even further, where I'll watch people say, oh, let's talk about your creative results. And it's like, we're gonna measure our ad performance on a Measured incremental ROAS, but we wanna talk about the click-through rate on your ads. And it's like, hold on a second.

Does the click-through rate on my ad have any relationship to the performance of this measure that you're doing? Does CPM, does CTR? And the answer is almost always no. We are just arguing these proxy metrics for the sake of having something to point to that gives us a sense of directional value, but it's not actually driving to the thing that we care about, which is more money.

[00:25:32] Richard Gaffin: So speaking of new customer revenue, I wanna take a step back and think about the other side of the argument, right? Which is: why are people, like, say, the guys at Ridge, such fans of Northbeam? One thing that you point out here in the article, Luke, is that many brands use MTA platforms to understand the percentage of new customer revenue versus returning customers. Speak to that a little bit. Let's address that: is that a valid reason to use one? Is it not? And what did this experiment kind of show about this?

[00:26:03] Luke Austin: Yeah, so I think it's a really valid question for sure: what is the percentage of my ad spend on Meta and Google that is driving new customer revenue versus bringing back in returning customers? I think that is a really valid question, and it does have implications on how you should set your targets and how you should set your budgets. And that's actually how we do things here: we look at those metrics to then set channel targets, and then our budget is allocated based on our targets with new customer revenue. There are a couple things to speak to here, though, in terms of why it may not be necessary to bring in a third party to give us a proxy. One is the release of Shopify's first-party pixel data through their API, which is pretty exciting. Our Statlas dev team saw that the API for Shopify's first-party pixel had been released, so that we can see, specifically from the individual Shopify store, every touchpoint of the user, right?

So someone who came to my store: which source did they come from? Which pages did they then subsequently visit? Which products did they add to cart and check out with? What did they end up purchasing? Were they a first-time customer? Were they a returning customer? And this is based on Shopify's first-party data, right?

We're not bringing in an intermediary. And so this solution is very new, but also very real in terms of how it can give us insight on what our new versus returning customer split looks like, and it doesn't require the introduction of a third party to give us that insight. And we ran this for Bamboo Earth to understand what that percentage breakdown looks like. This chart's also in the article, but just to speak to it briefly: on Facebook, we saw that about 11 to 12% of the revenue being driven from Meta campaigns was returning customer revenue, and new customer was the remaining 88 to 89%. What's pretty interesting there is that number matches up really closely with the signal loss that Meta is citing due to iOS; I think the most recent number from them was 8% signal loss. So it's pretty close. Maybe that's where the gap is. But I think Bamboo Earth gives us a really good gauge on what that signal-loss variance is in new versus returning, given that it's a very e-comm-focused brand with really tight exclusions across all campaigns for existing customers. So even in that context, we're seeing about 11 to 12% returning customers come through those campaigns. On Google it was a different story: it was about 60% returning customer versus 40% new customer. That's including brand search, though; when we strip that out, it gets closer to 50/50, or actually a little bit more new customer, which is what we see.

And you can see this within Google's ad platform as well, right? There are new versus returning customer segments, so you can get an idea of your percent new versus percent returning from that data source as well. This is to say that the Shopify first-party data validated what we are seeing in the new versus returning report within Google Ads. Everyone can access that in their account; you don't need anything else to point you in that direction. And the Shopify first-party data validated that there is an amount of signal loss contributing to returning customers coming through, probably about 10% or so on your campaigns, depending on how many distribution channels you have. And so that validates using the data signals we already have available to us to get an idea of what that new versus returning split looks like.
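[A hedged sketch of how the new-versus-returning split Luke describes could be computed from a first-party order export. The input shape here, one row per attributed order with a channel, a customer order number, and revenue, is a hypothetical placeholder; it is not Shopify's actual pixel or API schema.]

```python
# Sketch: per-channel share of revenue from new vs. returning customers.
# Columns are hypothetical placeholders, not Shopify API field names.
import pandas as pd

orders = pd.read_csv("orders_by_channel.csv")  # one row per attributed order

# Treat a customer's first order as "new", anything later as "returning".
orders["customer_type"] = orders["customer_order_number"].map(
    lambda n: "new" if n == 1 else "returning"
)

split = orders.pivot_table(
    index="channel", columns="customer_type",
    values="revenue", aggfunc="sum", fill_value=0,
)
split_pct = (split.div(split.sum(axis=1), axis=0) * 100).round(1)
print(split_pct)  # e.g. Meta roughly 88% new / 12% returning in the example above
```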

[00:29:50] Taylor Holiday: Yeah, I think this is a really important part of the conversation, where it is worth acknowledging that knowing this is a powerful data point. And I know there's a free first-party pixel out there that is being promoted. All the platforms, Triple Whale, you know, Northbeam, they all offer you first-party pixel data, but Shopify exposes this in their API.

It's not talked about much, and we're actually gonna release an app that allows you access into this really simply, just all the first-party data, so you can look at it yourself. And I think that ultimately this is gonna become a commoditized part of the tech stack, which is just like, hey, you have all the first-party pixel data about what's happening by default. Shopify's infrastructure, from what I understand about it from a tech standpoint, doesn't enable them to store all this data; it would just be too much storage cost. And so that's why they don't release it, is my understanding at this point. But that may be something that they overcome.

I do think a couple of things, though. One, for Bamboo Earth, it's important to note that we run hard exclusions on everything. So that 10% leakage is with what we call a Swiss cheese approach, which includes pixel data exclusions for all past purchasers as well as automated updates from Klaviyo.

So you have the best sort of combination of trying to exclude all existing customers, and yet you're still getting about a 10% leakage. Google's way higher; email's even higher than that. All of this sort of reinforces another thing, which is that when people wanna talk about incrementality by channel, the thing I've seen with every one of those studies is that they all come back the same.

Facebook click-based pure prospecting with exclusions is the most incremental thing you can do, right? And then search is slightly less incremental, and then branded search is even less incremental, and then Google Display remarketing, including existing customers, is even less incremental. It always looks the same, right?

And the point is just that when you run hard exclusions at the top of the funnel, on a seven-day click or a one-day click basis, that number that you get back you can trust. It is incremental new customer revenue in a way that is consistent, and a valuable data point to run and operate your ad spend off of.

[00:31:56] Richard Gaffin: Yeah, so maybe another way to summarize that is that there is a way to measure new versus returning customers with a tool that you are already paying for, which is Shopify. So again, it is another example of a potential redundancy that you can eliminate, which is of course absolutely crucial at this point in time. Okay, cool. So let's maybe wrap up here with a couple of questions. To continue that thread, let's stand on the other side of the attribution wars for a second and ask ourselves, or maybe think through: what is your guys' take on why people like these tools? Or what's an argument to be made, perhaps, for why you would want to use an attribution tool? Let's just get into that question first. I don't know, Luke or Taylor, whoever wants to answer that.

[00:32:49] Luke Austin: Yeah, I can jump in first. So going back to the initial conversation around this: we're really trying to understand the data signal that is most closely correlated with driving new customer revenue, new customer margin. And we really would love to open this up to other folks running similar experiments.

And so I think about this in two ways. One, there are other data signals, other brands out there that have data sets and wanna run this against their own data. Obviously Northbeam isn't included in here; that could be an interesting one, right? Let's do that. We're trying to get at the truth of what that data signal is that's gonna drive the outcome we're after, and we really want to put it under as much scrutiny as possible to see if the truth stands. And then second, in addition to running other data sets through this experiment, we're also aware that there could be other experiment designs out there that could get at the truth of this more effectively than this one, or at least through another perspective, another avenue than the one we put together.

So for folks out there who are interested in experiment design, and thinking through what the question is they're trying to answer and then how to set up a test to prove it: I would really love to see other ways to approach this question as well. Again, under that same objective of, let's try to get at the truth here of what's going to be the best way to allocate our dollars to drive new customer revenue and profit.

[00:34:21] Taylor Holiday: Yeah, and I think the reason the promise or the possibility of these tools is so compelling is, one, that iOS created major distrust in Facebook. And so Facebook has a trust problem that they needed to resolve, and that people are standing in the gap of now. But distrust of Facebook is not a reason to assign trust elsewhere.

This is a real problem, where I think people immediately just went to, oh, I should immediately trust the alternative. And I would just say: scrutinize them all. Scrutinize them all under a very clear indication of, do they help you generate the thing that you want? And then make a decision accordingly.

The other thing is that the reason people love last click so much is it has this really powerful property, which is that all the revenue of the channels sums to your total, okay? And an MTA offers you that same thing. When you report up to finance, this is a thing that the CFO hates: the sum of Facebook-reported revenue plus Google-reported revenue exceeds total revenue.

That just breeds this sense that one of them is lying, because I don't have this much revenue. So what an MTA does is it breaks apart and assigns that revenue into channels proportionately, so that the total equals your actual revenue. People love that level of clarity. Now, it doesn't mean that the assignment of those values is accurate, and I think the whole premise of MTA is flawed, which is probably a much deeper conversation for another time. But that simplicity of data, that it all cleans up down to a number that matches my total revenue by channel that I can see, is a really powerful data design thing that gives people a sense that this must be right.

Versus having to understand what each is actually saying and what it means, and the sort of different attempts at different attribution models in different places. It's hard. So I get that part of it, where the complexity of the channel-level metrics in an MTA offers a rollup that's really helpful.

[00:36:22] Richard Gaffin: Makes sense. Okay. So what would somebody need to show you for you to say, yeah, actually, I see how this tool is useful? What would the evidence be that would prove that to you?

[00:36:39] Luke Austin: I think under the current experiment design, it would just be a clearly higher correlation to new customer revenue from some other data source versus on-platform. That's what this experiment design was trying to get at: what is the data source that has the highest correlation coefficient that we can then anchor our decision making around? So that would be an interesting case, if there is a brand out there where we're seeing a clearly higher correlation coefficient from another data source other than Facebook's and Google's on-platform attributed revenue. I think that would be a really clear indication. Then I'm sure there are many ways outside of this experiment design that we could answer this question, or at least go about it another way, that just don't come to mind or that we're not thinking of.

And so other ways of getting at this question, that's what we're also really interested in: how do we think about answering this question in a really thoughtful way, a way where we can create a hypothesis, have an experiment design, and then back it up, rather than sort of these dogmatic approaches, which we've been accused of in the past and which we're taking ownership of? It's like, hey, let's prove this out.

Let's get to a place where we're seeking truth in a way that we can all read through and come to consensus around that analysis.

[00:38:06] Taylor Holiday: The other thing I would wanna see is, I would love for someone to explain and validate a system that shows how you make decisions in the ad account in a way that has an effect on a third-party measure consistently. Because one of the things that I really struggle with is the disassociation between optimization and attribution.

Knowing that Facebook is choosing your bid price at the CPM level based on the conversion rate of the data that they have, how can that data move your third-party measure? I don't really understand it. And no one's been able to give me a systematic way that says, if you make decisions like this, this number will go up or down.

It's the same thing with last click. If you tell me to move the last-click ROAS, the only thing I know how to do to do it is move to the bottom of the funnel. That's the only way. But if you told me to use prospecting to improve last-click ROAS, there's no decision-making system I can give to a media buyer to affect those outcomes.

And I haven't seen anyone be able to present that in any way where I go, okay, I can take this, replicate it, and change the number. And those are the things that, in order for us to operationalize the ideology, which is ultimately what we have to do, we have to go into the ad account on behalf of our customers and teach our media buyers how to make decisions.

I need to understand that part of it. So those are the things that I'd love to see, and I've had these conversations with these platforms and have not been able to extract that level of operationalizing, exactly how to go about it.

[00:39:39] Richard Gaffin: Yeah, makes sense. All right, so long story short, then: we're not necessarily saying that we love Facebook, or that it's better or more trustworthy than some of these other tools. But we are saying that you are already on Facebook, you don't necessarily want to pay for extra, and of course, Facebook is making decisions off of the data that is coming through that particular platform.

So, cool. Unless there's anything else you guys want to hit?

[00:40:06] Taylor Holiday: The other thing, so one, I would just say, Luke, I appreciate you being like our field reporter. And I'm gonna tease the next design that we're doing, because this is the next, I think, big arena where we're being questioned on dogmatism: this idea of ASC, which is Facebook's, you know, hot new product, versus bid caps, versus consolidated accounts, versus broad.

Can we design an approach to see which of these methodologies produces the best outcome for a customer, and then could we replicate that in more places? So we're in the middle of a very high-volume test, a lot of dollars being allocated towards this design, and the next time Luke's here, I'm sure he'll be back to give us the results of that research as well.

And again, please reach out to all of us with other experiment designs. Let's try and aggregate this as a community to try and get to the best available truth we can.

[00:40:55] Luke Austin: Yeah.

[00:40:55] Richard Gaffin: Cool. All right. Yeah, we'll look out for that. And Luke, thank you for joining us yet again. And of course, thank you to you all for listening. We will see you again next week. See ya.