Building Useful Measurements for Profitability

by Luke Austin

Oct. 09, 2025

This is intended to be a clear framework to guide DTC ecommerce brand owners, operators, and marketers in answering three of the most critical questions at the heart of producing enterprise value:

  1. What should my total budget allocation be to achieve my business objective?
  2. How should I allocate that budget across channels?
  3. What system should I use to manage the daily performance of my business against that expectation?

Introduction

For most direct-to-consumer ecommerce brands, the entire marketing function—team, ad spend, creative resources, and more—is optimizing toward metrics that are not the true financial outcomes of the business.

Put simply, the objective is typically to deliver a specific EBITDA result in a given period (usually monthly). Yet the largest area of cost allocation and investment in the business does not track or optimize against that outcome on a daily basis.

Instead, most brands rely on proxy metrics: platform-reported ROAS, Adobe last-click revenue, MTA channel contribution models, or any number of other measures. This creates two major problems.

First, the absence of a clear, agreed-upon framework leads to wasted resources at best and organizational chaos at worst. In many organizations, the “metric of the week” dictates focus—one week the team is optimizing GA last-click ROAS, the next it’s % of new sessions from paid media.

This hamster wheel forces constant context-switching, with large groups of people and significant ad budgets chasing a shifting target on short time horizons. The result is not an effective use of resources or media spend.

Second, even when a measurement system is defined, the chosen metrics often have no causal relationship to the actual financial outcome. Attribution models can only estimate correlations between channel activity and revenue. They cannot substitute for causal measurement derived from statistically valid experimentation. 

This problem was anecdotally illustrated at a recent roundtable with 25 DTC owners and operators: when asked how they determine what’s “good” for their business when evaluating the outcome of their media allocation, there were 25 different answers. The framework for setting something like a Meta ROAS target should not differ wildly across companies; some approaches are simply less effective than others.

Any useful measurement framework must be:

  • Clearly articulated across the organization
  • Consistently applied at every level
  • Built on experimentation as its foundation

Across the hundreds of brands we’ve partnered with on their growth journeys, this is the framework we’ve identified as most effective—and the one we’ve operationalized at Common Thread Collective.

Pillar 1
Data Integration – The Three Necessary Data Dimensions

Any platform or system designed to answer the three core questions for a business must integrate data across all three essential dimensions:

  1. Revenue (e.g., Shopify online store transactions)
  2. Marketing investment (e.g., Meta Ads, Google Ads, Affiliate spend)
  3. Costs (e.g., product COGS, fulfillment, and delivery costs)

A system that integrates only one or two of these dimensions is inherently incomplete and incapable of producing the correct outputs. To illustrate, consider Google Ads’ Budget & Bid Simulator.

Although Google is not an unbiased source, it likely possesses the most sophisticated understanding of how incremental spend may generate additional conversions and revenue. This tool integrates spend and revenue, covering two of the three required dimensions.

However, because it excludes product and delivery costs, its recommendations fail to maximize contribution margin. Once fully loaded costs are applied, the limitations of the model become clear: what appears as growth in revenue does not necessarily translate to growth in enterprise value.
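
To make the point concrete, here is a minimal sketch of why all three dimensions are required. The figures, rates, and function names are illustrative assumptions, not outputs of any real simulator.

```python
# Minimal sketch: a spend/revenue-only view (two dimensions) vs. the full
# three-dimension view. All figures and rates below are invented.

def contribution_margin(revenue: float, ad_spend: float,
                        cogs_rate: float, fulfillment_per_order: float,
                        orders: int) -> float:
    """Contribution margin = revenue - variable costs - ad spend."""
    variable_costs = revenue * cogs_rate + orders * fulfillment_per_order
    return revenue - variable_costs - ad_spend

# On spend and revenue alone, scenario B "wins": it adds $20k of revenue
# for $15k of extra spend (a 1.33 incremental ROAS).
a = contribution_margin(revenue=100_000, ad_spend=25_000,
                        cogs_rate=0.40, fulfillment_per_order=8.0, orders=1_500)
b = contribution_margin(revenue=120_000, ad_spend=40_000,
                        cogs_rate=0.40, fulfillment_per_order=8.0, orders=1_800)

print(f"Scenario A contribution margin: ${a:,.0f}")  # $23,000
print(f"Scenario B contribution margin: ${b:,.0f}")  # $17,600
# Once COGS and fulfillment are applied, the "growth" scenario actually
# reduces contribution margin rather than creating enterprise value.
```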

Further, in our analysis across numerous brands and data sources, we have found no evidence that additional proxy metrics like these improve the strength of the relationship to financial outcomes such as Shopify new-customer revenue.

To the contrary, they obscure the fact that attribution models represent historical correlations, not causal insights. Without statistically valid experimentation, no proxy metric—regardless of modeling sophistication—can be relied upon to guide resource allocation decisions in pursuit of EBITDA outcomes.

To solve this problem, we developed Statlas. The platform integrates revenue and order data from the online store, paid media spend from ad platforms, and variable cost data (COGS and fulfillment) into one unified layer. This comprehensive foundation provides the necessary visibility to begin answering the three critical questions at the heart of enterprise value creation.

Pillar 2
Budgeting –
From Fixed Budgets to Flexible “Spending Power”

Traditional marketing budgets are often set as rigid allocations—annual or monthly limits established in advance. While administratively convenient, fixed budgets can work against the financial interests of the business. In periods of growth, they may arbitrarily cap opportunity by limiting investment below profitable levels.

Conversely, in downturns, they can encourage overspending in an attempt to meet an unrealistic plan. In either case, the budget becomes a constraint on enterprise value rather than a tool for maximizing it.

We propose a shift from fixed budgets to what we define as Spending Power: the amount of ad spend a brand can deploy in a given month before its weighted CAC rises by $1. We calculate Spending Power for each past month, then project it forward using more than 30 time series models.

From these 30+ projections, we determine which model or combination of models best represents the brand’s past performance and its expected trajectory. This is not just a measure of the brand’s marketing success, but of the consumer market’s appetite for the brand and its products.

We can then apply this to determine how we should spend in the future. In practice, a Spending Power Model answers the fundamental question: How much can we spend while still achieving our profitability objectives?

To quantify Spending Power, we use a spend/aMER model. Here, aMER (acquisition Marketing Efficiency Ratio) is defined as first-time customer revenue divided by total ad spend. By plotting incremental spend against observed aMER values, we can visualize the marginal return frontier: as spend increases, efficiency predictably declines due to diminishing returns.
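
As a minimal sketch of what such a model can look like in practice: the power-law functional form, the monthly figures, and the predicted_amer helper below are illustrative assumptions, not the actual Statlas methodology.

```python
import numpy as np

# Hypothetical monthly history: total ad spend ($) and the observed aMER
# (first-time customer revenue / total ad spend). All values are invented.
spend = np.array([50_000, 80_000, 120_000, 160_000, 220_000, 300_000])
amer = np.array([3.1, 2.7, 2.4, 2.2, 2.0, 1.8])

# Assume a power-law efficiency curve, aMER(s) = a * s**(-b), which is
# linear in log-log space: log(aMER) = log(a) - b * log(s).
slope, intercept = np.polyfit(np.log(spend), np.log(amer), 1)
a, b = np.exp(intercept), -slope

def predicted_amer(s: float) -> float:
    """Expected aMER at a given monthly spend level."""
    return a * s ** (-b)

for s in (100_000, 200_000, 400_000):
    print(f"spend ${s:,} -> predicted aMER {predicted_amer(s):.2f}")
```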

The critical insight is that every business has an efficiency degradation curve unique to it, and that curve varies at different times of the year.

There are three optimization selections available in the Spending Power Model:

  1. Maximize Contribution Margin - achieve the maximum volume of contribution dollars from first-time customers in the selected month
  2. Maximize Lifetime Contribution Margin - achieve the maximum volume of contribution dollars from first-time customers within a defined LTV time window
  3. Maximize First-Time Customer Revenue - achieve the maximum first-time customer revenue within the selected month, constrained by breakeven first-order profit

The guiding orientation of this approach is to focus on the next incremental dollar. The operative question is: Where can the next dollar of ad spend generate positive incremental profit? Investment should continue until the marginal return on the last dollar approaches the threshold of acceptability—whether defined as Max CM, Max Lifetime CM, or Max Revenue.
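
Here is a toy illustration of that stopping rule, reusing the curve fitted in the previous sketch (a ≈ 81, b ≈ 0.30) and an assumed margin rate. It corresponds to the Maximize Contribution Margin objective and is a sketch under those assumptions, not CTC’s production model.

```python
# Toy implementation of the "next incremental dollar" rule under the curve
# fitted above (a ≈ 81, b ≈ 0.30). The margin rate is an assumption.

A, B = 81.0, 0.30    # fitted curve: aMER(s) = A * s**(-B)
MARGIN_RATE = 0.55   # contribution per first-time revenue dollar, before ad spend
STEP = 1_000         # evaluate spend in $1k increments

def contribution(s: float) -> float:
    """First-order contribution margin at monthly spend s."""
    first_time_revenue = s * (A * s ** (-B))  # revenue = spend * aMER(spend)
    return MARGIN_RATE * first_time_revenue - s

spend = STEP
while contribution(spend + STEP) > contribution(spend):
    spend += STEP    # keep going while the next $1k still adds profit

print(f"Stop spending near ${spend:,}, "
      f"contribution margin ${contribution(spend):,.0f}")
# With these assumptions the marginal dollar breaks even around $95k.
```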

The outcome is an adaptive budgeting system. Instead of static allocations, spend is continuously recalibrated in response to real-time performance. This prevents the two most common errors in budget management: underspending when profitable opportunities remain, and overspending when incremental returns no longer justify additional investment.

In short, adaptive budgeting aligns marketing investment dynamically with enterprise value creation.

Pillar 3
Measurement –
Committing to Incrementality as the Source of Truth

There are many ways to measure the impact of marketing, but not all provide equally reliable guidance. To establish a foundation for effective decision-making, it is essential to distinguish between proxy-based approaches and those that directly capture causal impact.

Platform Attribution.

The simplest and most widely used method, platform attribution assigns full credit for conversions to the last recorded interaction (click, view, or both) within a defined time window. While immediate and accessible, this approach is often misleading.

Platforms such as Meta or Google naturally attribute conversions to themselves, leading to systematic overstatement of true incremental value. For example, brand search campaigns may receive credit for sales that would have occurred without advertising intervention.

Multi-Touch Attribution (MTA).

MTA attempts to distribute credit across multiple interactions using rules or algorithmic models. While it provides a broader view than last-click, MTA is heavily dependent on the ability to track user-level interactions. A major limitation is its inability to accurately track impression-based interactions, since impression logs are not consistently shared across platforms.

This tends to give stronger credit to direct, click-based conversions (from Google Brand search, for example) while underrepresenting the impact of channels further up the funnel (like YouTube). Further, MTA fractionalizes credit with the goal of summing to one hundred percent of the revenue volume at that point in time.

This misses a key question: which marketing interaction was uniquely critical in driving the conversion, and would that conversion have happened anyway if the interaction had not been present?

Marketing Mix Modeling (MMM).

MMM applies statistical methods to aggregated data to estimate channel contributions. It is less vulnerable to signal loss and provides a useful high-level perspective for long-term budget allocation.

However, MMM remains correlation-based rather than causal. This introduces risks of spurious association, particularly in cases where certain channels “spend into performance.” For instance, MMM may credit Meta with incremental revenue correlated to campaign spend, while geo-experiments reveal that the true driver was a concurrent email promotion.

Geo Holdout Testing (Incrementality Testing).

Geo experiments, in which test regions are exposed to advertising and matched control regions are withheld, isolate the causal effect of marketing. By comparing outcomes across test and control, geo holdouts directly measure incremental lift.

Unlike attribution models, they do not infer contribution from correlations but observe it in reality. While these tests require investment of time and resources, they remain the gold standard for determining true marketing impact.
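
A stripped-down sketch of the underlying arithmetic, assuming matched regions and a clean pre-period; all figures are invented, and a production design adds region matching, power analysis, and significance testing.

```python
# Pre-period (ads running everywhere) establishes the test:control ratio.
pre_test, pre_control = 480_000, 400_000
scale = pre_test / pre_control               # 1.2

# Test period: ads continue in test regions, withheld in control regions.
# Revenue here should include marketplace orders (e.g., Amazon) from the
# same regions so that halo effects are captured, not just DTC sales.
test_rev, control_rev = 520_000, 390_000
test_spend = 60_000

counterfactual = control_rev * scale         # test-region revenue if ads were off
incremental = test_rev - counterfactual      # 520,000 - 468,000 = 52,000
incremental_roas = incremental / test_spend  # ~0.87

# Translate the lift into an Incrementality Factor vs. platform attribution.
platform_attributed = 120_000                # revenue the platform claimed
incrementality_factor = incremental / platform_attributed

print(f"Incremental revenue:   ${incremental:,.0f}")
print(f"Incremental ROAS:      {incremental_roas:.2f}")
print(f"Incrementality Factor: {incrementality_factor:.0%}")  # 43%
```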

Commitment to Incrementality.

For these reasons, CTC commits to incrementality testing as its primary source of truth to answer the only question that truly matters: What portion of revenue would not have occurred without marketing investment? This orientation grounds every performance conversation in causal evidence rather than modeled estimates, and is broadly considered the gold standard of measurement. (Source 1 | Source 2)

Practical Application and Benchmarks.

Because geo tests cannot be executed continuously, interim planning may rely on benchmark incrementality values derived from historical testing across many brands. We have access to the results of hundreds of tests across more than 20 channels and tactics, and we update our benchmark database on an ongoing basis.

These benchmarks serve as provisional guides until a brand-specific geo test produces statistically significant results. Once achieved, the measured incrementality factor becomes the authoritative input for that brand’s decision-making in place of the benchmark.
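
A sketch of that precedence rule; the function and field names are our own illustrative shorthand, not a published API.

```python
from typing import Optional

def effective_if(measured_if: Optional[float], is_significant: bool,
                 benchmark_if: float) -> float:
    """Use the brand's own measured IF once statistically significant;
    otherwise fall back to the category benchmark."""
    if measured_if is not None and is_significant:
        return measured_if
    return benchmark_if

print(effective_if(None, False, 0.65))  # no test yet -> benchmark (0.65)
print(effective_if(0.43, True, 0.65))   # significant result -> measured (0.43)
```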

Incrementality Priority Roadmap

There should also be a clear framework for prioritizing which channels to test and at what cadence. As with anything, the lens of prioritization should be anticipated impact to the business. We quantify this by taking every channel not yet tested and applying the low and high ends of our incrementality benchmarks to that channel’s reported revenue, to understand the range of risk in the media mix relative to the revenue that channel could actually be contributing.

The channel with the widest range of undefined revenue outcomes is prioritized, as it represents the most upside and downside in terms of the true incremental revenue that channel is driving compared to what is currently attributed to it.
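
A sketch of that ranking logic; the channel names, revenue figures, and benchmark ranges below are placeholders, not CTC’s actual benchmark values.

```python
channels = {
    # channel: (reported_monthly_revenue, low_benchmark, high_benchmark)
    "Meta":         (400_000, 0.55, 1.10),
    "Google Brand": (250_000, 0.10, 0.45),
    "TikTok":       (120_000, 0.50, 1.30),
    "Affiliate":    ( 90_000, 0.20, 0.80),
}

def undefined_range(entry) -> float:
    """$ width of the plausible incremental-revenue range for a channel."""
    revenue, low, high = entry
    return revenue * (high - low)

# Rank untested channels by the width of their undefined revenue range.
roadmap = sorted(channels.items(), key=lambda kv: undefined_range(kv[1]),
                 reverse=True)

for name, entry in roadmap:
    rev, low, high = entry
    print(f"{name:>12}: ${rev * low:,.0f} - ${rev * high:,.0f} "
          f"(range ${undefined_range(entry):,.0f})")
```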

Capturing the Full Picture: Halo Effects.

Properly designed incrementality tests must also account for halo effects across channels. For example, withholding Facebook ads in control regions may not only depress DTC website sales but also reduce Amazon marketplace orders from those same regions.

If such effects are ignored, the contribution of advertising is systematically underestimated. CTC’s methodology ensures tests evaluate total business impact—including ancillary effects—when comparing test and control regions.

Outcome: A Single Source of Truth.

By anchoring measurement in incrementality, organizations establish a stable, causal foundation for decision-making. This eliminates the reactive cycle of switching between attribution models or debating whose numbers are correct. With incremental lift as the guiding metric, the marketing function can confidently align daily execution with what truly matters: net new revenue and profit.

Pillar 4
Optimization –
Aligning Ad Platforms with True Profitability

Having established a clear understanding of incremental performance, the next challenge is to ensure that ad platforms optimize toward the same objectives as the business. At this point we have an aMER target, produced by the Spending Power model from the stated business objective, as well as an incrementality result for each individual channel. The final step is to connect those outputs to the optimization target set within each channel.

The solution is to calibrate platform signals using the Incrementality Factor (IF). The IF, derived from experimentation, represents the proportion of attributed conversions or revenue that is truly incremental.

For instance, if geo-testing indicates that only 50% of reported conversions in the tested channel are incremental, the IF is 50% and the advertiser can adjust platform targets accordingly. A business requiring a true 2.0 aMER (the output of the Spending Power model for its business objective) would therefore set a 4.0 ROAS target in-platform to account for the inflation in attribution.

Similarly, if an incrementality test shows that a channel is underreporting its full impact and the IF is 140%, then the platform ROAS target would be approximately 1.43 (2.0 / 1.4) at the same 2.0 business aMER target.
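
In both examples the arithmetic reduces to dividing the business target by the IF. A minimal sketch, with the function name being our own illustrative shorthand:

```python
def platform_roas_target(business_amer_target: float,
                         incrementality_factor: float) -> float:
    """Translate a true aMER target into an in-platform ROAS target."""
    return business_amer_target / incrementality_factor

# Channel over-reports: only 50% of attributed conversions are incremental.
print(platform_roas_target(2.0, 0.50))  # 4.0 -> set a 4x in-platform target

# Channel under-reports: testing shows 140% of attributed impact is real.
print(platform_roas_target(2.0, 1.40))  # ~1.43 -> set a ~1.43 target
```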

This calibration has two major benefits. First, it ensures that automated bidding strategies—Target ROAS, cost caps, or value-based bidding—are oriented around true profit rather than overstated outcomes. Second, it creates a feedback loop between strategy and execution: the same truth metric that guides budget allocation also informs campaign optimization. In this way, the IF becomes the connective tissue linking high-level measurement to day-to-day platform behavior.

Outcome: Profit-Driven Execution. With calibrated optimization, machine learning models work in harmony with business objectives. The marketing team is no longer forced to “fight” platform algorithms or second-guess their outputs. Instead, platforms become partners in pursuing incremental contribution margin, closing the loop between budgeting, measurement, and execution.

Conclusion
Putting the Four Pillars into Practice

Taken together, the pillars of this framework form a unified system for profitable growth:

  1. Integrated Data unifies revenue, marketing investment, and cost data into a single view.
  2. Dynamic Budgeting aligns spend with the marginal return frontier.
  3. Incrementality-Based Measurement establishes causal truth as the foundation for decision-making.
  4. Calibrated Optimization ensures platforms execute in alignment with profitability.

Each pillar reinforces the others. Integrated data makes the system possible, flexible budgeting identifies opportunities for growth, incrementality testing clarifies where the true value resides, and calibrated optimization translates these insights into action at scale.

The strength of the framework lies in its simplicity and focus. By concentrating on contribution margin, incremental lift, and calibrated ROAS, the system eliminates “metric paralysis” and provides a small set of guiding numbers that orient the entire organization. From executives to channel managers, everyone is aligned around the same principles and metrics.

Over time, this consistency builds stability and trust. Rather than reacting to every platform update or market fluctuation, teams adhere to a steady, evidence-based process: spend to the margin frontier, measure lift through experimentation, and feed those insights back into optimization. The result is greater confidence, improved morale, and more predictable financial outcomes.

Next Steps. To operationalize the framework, marketing leaders should begin by auditing their existing budgeting and measurement processes. The next priority is to schedule incrementality tests to establish brand-specific benchmarks, and then calibrate platform tracking systems to reflect these insights. Tools such as Statlas, spend/aMER modeling, and incrementality testing can provide practical support in this process.

Ultimately, adopting this framework allows DTC brands in the $10M–$100M revenue range to transform how they plan, measure, and grow. The outcome is profitable growth that is not only measurable, but also repeatable and scalable.


Luke Austin

As the Director of Growth Strategy at Common Thread Collective, Luke Austin leads our team of Growth Strategists working with some of the most exciting $100M+ consumer ecommerce brands in the industry. Connect with him on Twitter.