5 pillars of good marketing measurement

Takeaways

  • The resurgence of interest in MMM presents an opportunity to shake up marketing science, organising measurement around the critical decisions marketers face. 

  • But many providers are burrowing deeper into existing rabbit holes: spending huge amounts of time and money building overly complex, granular models, which produce results that ultimately have limited impact on decision-making.

  • A better approach is to focus analytics where it can have the most impact and to organise measurement around business-critical decisions.


Marketing analytics is at a point of inflection. The end of third-party cookies is starving traditional digital measurement tools of the fuel they need. Marketing mix modelling (MMM) is filling that void.

This presents a unique opportunity to transform measurement: to design it around the decisions marketers face and deliver it in a language they understand.

The ‘measurement ROI’ can be huge. Analytics can guide marketers through the maze of strategic, campaign and digital optimisation decisions, giving them confidence amid a lot of uncertainty. 

It should draw on a broad spectrum of experience and knowledge. It should act as bumpers in a bowling lane, placing gentle constraints on how marketers approach their objectives, balancing creative exploration and brand building with shorter-term ROI.

But many providers are taking it in the wrong direction.


What is MMM?

‘Econometrics’ is a branch of economics concerned with quantifying causal relationships between variables. A fiendishly tricky endeavour. When X goes up, Y goes up. Does X cause Y to go up? Or does Y cause X to go up? Or does Z cause both X and Y to go up? 

At some point marketers co-opted the term, and it’s now used interchangeably with ‘marketing mix modelling’ (MMM). Whatever you call it, the idea is to correlate peaks and troughs in media activity and sales over time, using data on past campaigns.

This is good because we don’t need sensitive PII data. But it’s hard: the data is noisy, the effects we’re looking for are often small, and it’s difficult to be sure our correlations are causal.
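To make that concrete, here’s a minimal sketch of the kind of regression at the heart of an MMM. The data, channel names and decay rate are made up for illustration; a real model would add saturation curves, seasonality, price and other controls.

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Geometric adstock: part of this week's advertising effect carries into later weeks."""
    out = np.empty_like(spend)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

# Illustrative weekly data (made up): spend on two channels and the sales they help drive.
rng = np.random.default_rng(0)
weeks = 104
tv = rng.gamma(2.0, 50.0, weeks)        # weekly TV spend
search = rng.gamma(2.0, 30.0, weeks)    # weekly search spend
sales = 1000 + 0.8 * adstock(tv) + 1.2 * adstock(search) + rng.normal(0, 100, weeks)

# Design matrix: a baseline (intercept) plus adstocked media variables.
X = np.column_stack([np.ones(weeks), adstock(tv), adstock(search)])

# Fit by least squares: correlating peaks and troughs in media with peaks and troughs in sales.
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["baseline", "tv", "search"], coefs.round(2))))
```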




How not to measure

As multi-touch attribution becomes infeasible, MMM is being stretched to answer more and more questions. Models are becoming more granular and more complex. But there’s a limit to what you can do with observational data. Some things, like very small or very long-term effects, simply can’t be measured this way.

“Some things, like very small or long-term brand effects, simply can’t be measured with MMM.”

Conversely, there are some questions that aren’t addressed at all. For example, MMM does not say much about strategic decisions, such as audiences, messaging and brand positioning. For these important questions, we need other approaches.

Modelling projects are often scoped without proper reference to their impact on outcomes. To maximise ‘measurement ROI’, the benefits of measurement (better knowledge about the relationships between spend and sales) need to outweigh the costs. Yet huge amounts of time and effort are wasted trying to explain relationships in the data that are either already known with reasonable confidence, or that nobody cares about (or trusts) enough to act on.

The demand on modellers to answer more and more questions is also exacerbating an existing problem in MMM: cargo cult statistics. That is, ritualistically flogging models until they give reasonable results, or until misleading statistics (like R-squared) look satisfactory. Modellers can spend weeks trying to ‘get variables into’ models instead of asking whether the framework they’re using is fit for the job at hand.
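To illustrate why a headline R-squared is a poor yardstick, the toy simulation below fits sales that have no real drivers at all against increasing numbers of random ‘media’ variables; R-squared climbs regardless.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 52                              # one year of weekly observations
sales = rng.normal(1000, 100, n)    # sales with no relationship to any driver

def r_squared(X, y):
    """R-squared of an OLS fit of y on X (with an intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Adding purely random regressors mechanically pushes R-squared up.
for k in (1, 5, 10, 20, 40):
    junk = rng.normal(size=(n, k))
    print(f"{k:>2} junk variables: R-squared = {r_squared(junk, sales):.2f}")
```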

Finally, MMM results are often not communicated well: focusing on the techniques rather than the recommendations, failing to flag uncertainties in the estimates, or ignoring the context and constraints facing planners.


5 pillars of good measurement

1. Start with the decisions

Different decision types call for different analytical solutions, different results cadences and different delivery formats. And MMM isn’t always the solution!

2. Maximise ‘measurement ROI’

To maximise the value of measurement, we need to ensure the benefits outweigh the costs. That means focusing on areas of uncertainty where more knowledge will have the biggest impact on decision-making and outcomes. It also means choosing analytical solutions that are proportional in scope to the benefits we expect to generate.

3. Balance exploration vs. exploitation

The pay-offs of advertising are inherently uncertain. To maximise long-term ROI we need to strike the right balance between ‘exploiting’ what we know works and ‘exploring’ new, potentially better strategies and tactics. To navigate these uncertainties and trade-offs, we need a carefully designed framework that codifies our knowledge and guides learning.
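As a toy illustration of that trade-off (not a prescription for any particular algorithm), the sketch below uses Thompson-style sampling over made-up ROI beliefs: well-understood channels are mostly exploited, while uncertain but promising ones keep getting explored.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical beliefs about ROI for three channels: a mean and an uncertainty (std dev).
channels = {
    "tv":      {"mean": 1.4, "sd": 0.6},   # promising but uncertain -> worth exploring
    "search":  {"mean": 1.2, "sd": 0.1},   # well understood -> mostly exploited
    "podcast": {"mean": 0.9, "sd": 0.5},   # probably weaker, but we are not sure
}

def allocate_budget(budget, n_draws=10_000):
    """Split the budget by how often each channel 'wins' when we sample
    plausible ROIs from our current beliefs (Thompson-style allocation)."""
    names = list(channels)
    draws = np.column_stack([
        rng.normal(channels[c]["mean"], channels[c]["sd"], n_draws) for c in names
    ])
    win_share = np.bincount(draws.argmax(axis=1), minlength=len(names)) / n_draws
    return dict(zip(names, (budget * win_share).round(0)))

print(allocate_budget(100_000))
```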

4. Efficiently combine information sources

Models built on noisy, sparse data alone are unlikely to tell you anything very insightful. There’s no point in spending weeks pretending you think the effect of TV is zero and begging a few data points to tell you otherwise (people actually do this). You should incorporate, and appropriately weight, all the information at your disposal. That includes not just past campaign data, but expert knowledge (e.g. that advertising effects are positive…) and benchmarks.

It also includes results from experiments. Causal relationships are extremely difficult to find in observational data alone. Running digital media experiments is crucial for gaining a better understanding of how different channels or ad characteristics impact outcomes.

We can use Bayesian modelling methods to combine and weight these sources of information.
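Here is a minimal sketch of that idea, using a simple normal-normal update with made-up numbers: a prior from benchmarks or lift tests is combined with a noisy estimate from the campaign data, each weighted by its precision. In practice this happens inside the full Bayesian model rather than one parameter at a time.

```python
import numpy as np

def combine(prior_mean, prior_sd, data_mean, data_sd):
    """Precision-weighted (normal-normal) combination of two information sources."""
    w_prior, w_data = 1 / prior_sd**2, 1 / data_sd**2
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * data_mean)
    return post_mean, np.sqrt(post_var)

# Made-up numbers: a benchmark / lift-test prior for TV ROI of 1.2 (held with some confidence),
# combined with a noisy estimate of 2.0 from a short, sparse spend history.
mean, sd = combine(prior_mean=1.2, prior_sd=0.2, data_mean=2.0, data_sd=0.8)
print(f"Combined ROI estimate: {mean:.2f} +/- {sd:.2f}")
```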

5. Constraints not rules

MMM results should highlight what is feasible, and leave it to planners to work within those constraints. This allows for creative freedom and acknowledges the unavoidable uncertainties around recommendations. Indeed, these uncertainties should be front and centre of the results and recommendations. Again, a Bayesian approach easily allows for this.
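For instance (with made-up posterior samples standing in for a fitted model), a recommendation can be reported as a plausible range for planners to work within, rather than a single point estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up posterior samples for TV ROI; in practice these come from the fitted Bayesian model.
posterior_roi = rng.normal(1.25, 0.19, 10_000)

low, high = np.percentile(posterior_roi, [10, 90])
print(f"TV ROI is plausibly between {low:.2f} and {high:.2f} (80% credible interval)")
```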


In summary, we need to get our heads out of the models and into the minds of the clients who use them. That means listening, understanding the decisions they make and the real-world constraints they face. It means carefully designing analytical responses that efficiently provide usable knowledge.



DS Analytics build marketing mix models for a wide variety of brands and agencies. Get in touch to find out how we can help you.
