Most Brand Trackers Collect Data. Few Actually Change Decisions.

Brand Tracking Done Right
INTRO

By Troy Kohut, Chief Commercial Officer @ Glow

Brand tracking is one of those things most marketing teams know they should be doing. The harder question isn’t whether to track, but whether the way you’re tracking is actually helping you make better business decisions.

In my experience, the gap between running a brand tracker and getting genuine value from one is larger than most people realise. The problem, however, is rarely the data.

It’s the structure around it: what you’re measuring, at what cadence, and whether the program has enough flexibility to answer the questions your business is actually asking right now.

With a few decades of research experience under my belt – and having helped build the technology that tracks brand impact most efficiently – I have decided to pen my thoughts on how to think practically about building a brand tracking program that earns its budget allocation (and paints a much clearer picture of ROI):



SECTION 1:

Start With Decisions, Not Metrics

The most common mistake I see teams make when setting up a tracker is starting with the question list.

“What should we measure?” is the wrong place to begin.

The right question is: “What decisions do we need this data to inform?”

Every metric in your tracker should connect to a call the business needs to make. If it doesn’t, you’re collecting data, not generating insights. Some examples:

  • If you need to decide whether to increase brand spend, you need aided awareness and consideration lift. You don’t need total awareness.
  • If you need to decide whether a campaign worked, you need exposure tracking and message recall alongside funnel movement.
  • If you need to decide whether to reposition, you need attribute-shift data relative to competitors.
  • If you need to decide which creative to back, you need a module that connects creative exposure to consideration movement.

My take? Start lean.

Five metrics tracked well will always beat 20 tracked for comprehensiveness, which usually overwhelms the team with data and ends in decision paralysis.

A tight core – unprompted awareness, prompted awareness, consideration, preference – plus 3 to 5 brand attributes that genuinely differentiate in your category is a foundation you can build on.


SECTION 2:

Why a Flat Awareness Number Tells You Almost Nothing

Here’s a scenario that plays out more often than it should for teams using a brand tracker tool or platform:

The Brand Tracker comes back.

Prompted awareness is flat.

Consideration is flat.

The team looks at the dashboard, agrees that “not much has changed,” and moves on.

But flat doesn’t mean stable. It can mean your gains in one segment are being offset by losses in another.

It can mean a competitor is eating your consideration among the buyers you most need.

It can mean a campaign lifted recall but didn’t shift intent.

Without the ability to cut the data properly – and without the right questions sitting alongside the core metrics – you’re not reading brand health; you’re reading an average that obscures more than it reveals.

This is where light-touch tracking consistently falls short. A low-cost always-on tool gives you the headline number. What it rarely gives you is the diagnostic layer: why is awareness flat? Which audience is moving? What’s driving the gap between awareness and consideration?

Those questions require depth, and depth requires structure.

Scenario:

A retail brand we worked with saw prompted awareness sit flat at around 60% across multiple waves, which looked like a non-event. It was similar to other brands in the category and, as a result, didn’t raise any red flags.

But underneath, awareness among under-35s had risen nearly 10 points, while it had fallen sharply among 35–54s, the client’s core revenue segment. What looked stable overall was actually a shift in who the brand was reaching.

Catching that early allowed the team to rebalance media and protect their core audience. Without that cut, they would have optimised toward a number that was hiding the problem.
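The arithmetic behind this kind of masking is worth making concrete. Here’s a quick sketch – using hypothetical segment shares and awareness levels, not the client’s actual figures – of how opposite movements in two segments can net out to a flat headline number:

```python
# Illustrative only: hypothetical segment sizes and awareness levels,
# loosely modelled on the retail scenario above. Shows how opposite
# movements in two segments cancel out in the weighted total.

segments = {
    # segment: (share_of_sample, wave_1_awareness, wave_2_awareness)
    "under_35": (0.40, 0.55, 0.65),  # up 10 points
    "35_to_54": (0.40, 0.68, 0.58),  # down 10 points (core revenue segment)
    "55_plus":  (0.20, 0.54, 0.54),  # unchanged
}

def headline(wave_index):
    """Sample-weighted prompted awareness across all segments."""
    return sum(share * levels[wave_index]
               for share, *levels in segments.values())

w1, w2 = headline(0), headline(1)
print(f"Wave 1 headline: {w1:.0%}")
print(f"Wave 2 headline: {w2:.0%}")
```

Both waves report the same 60% headline, even though the core 35–54 segment has dropped ten points. A tracker that only surfaces the weighted total will read this as “no change.”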

SECTION 3:

The Other Failure Mode: Too Much Data, Too Late

If light-touch tracking gives you too little, traditional enterprise research can give you the opposite problem:

Comprehensive methodology.

A 150-page report.

Data that arrives six weeks after fieldwork closed – structured around questions that were locked in before the campaign launched.

By the time the insights land, the decisions have already been made. The window to optimise a campaign that’s still in market has closed. The competitive response that needed to happen in week three happened in week nine, on instinct, with no data to back it up.

Scenario:

A major banking client was running a large-scale tracker with 200+ metrics and a sizeable monthly sample, delivered in a quarterly report.

The issue wasn’t the data; it was timing. Results landed 6–8 weeks after fieldwork, well after campaigns had finished and budgets were locked. Great data came too late to influence decisions.

In one case, a drop in trust linked to pricing comms was only identified after the campaign had already scaled nationally. The insight was valuable, but the reporting delay meant it was purely retrospective.

Depth without speed didn’t drive decisions; it explained them after the fact. This is a case where an investment in better tracking would have paid for itself many times over with better decisions on just one campaign.


SECTION 4:

The Case for Modularity

The most useful structural change you can make to a brand tracking program is separating the stable core from the flexible surround.

The core – your brand funnel metrics – should never change. Not the wording, not the scale, not the order.

Trend integrity is the entire value of a tracker over time. Every change to the core is effectively a reset.

Everything else should be able to move. Alongside the stable core, modules can rotate in and out based on what’s actually happening in your market:

  • Campaign exposure and message recall during active media periods
  • Creative element testing when multiple assets are in flight
  • Competitive response when a new entrant is gaining share
  • Issue tracking when something breaks in the news cycle

This modular approach matters because it keeps the tracker useful between strategic reviews. Without modularity, you’re commissioning separate studies to answer questions that should live inside your existing program.

That costs time, budget, and continuity.
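To make the structure concrete, here’s a minimal sketch of how a modular tracker definition might look. All metric and module names are illustrative placeholders, not any platform’s real schema:

```python
# Hypothetical sketch of a modular tracker: a fixed core that is
# identical every wave, plus modules that rotate in and out.
# All names are illustrative, not any real platform's schema.

CORE = [  # never changes: wording, scales, and order stay fixed
    "unprompted_awareness",
    "prompted_awareness",
    "consideration",
    "preference",
]

MODULES = {  # rotate based on what's happening in market
    "campaign_diagnostics": ["ad_recall", "message_takeout"],
    "competitive_response": ["competitor_consideration"],
    "issue_tracking":       ["trust", "news_awareness"],
}

def build_wave(active_modules):
    """A wave is always the stable core plus whichever modules are live."""
    questions = list(CORE)
    for name in active_modules:
        questions += MODULES[name]
    return questions

# During an active media period:
wave = build_wave(["campaign_diagnostics"])
```

The design point is that the core stays immutable across waves, preserving trend integrity, while modules attach and detach without touching it.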

Scenario:

I had a client come to me mid-campaign with a familiar problem: performance looked “fine” on the surface, but there was a growing sense something wasn’t landing.

The usual approach would have been to commission a campaign evaluation study; brief it, scope it, wait three weeks, and get an answer after the media had already run.

Instead, we dropped a campaign + message diagnostics module straight into the next wave of their tracker.

Within days, the problem was obvious: people were seeing the ads but taking out the wrong message entirely. The campaign was building awareness, but for a positioning the brand wasn’t trying to own. They changed the creative while the campaign was still live.

Most teams don’t have that option. They find out what went wrong in a post-campaign debrief, when the budget’s spent, the opportunity has been lost, and the learning is academic.


SECTION 5:

Match Your Cadence to Your Market

Cadence is a decision most teams make based on budget rather than need. That’s understandable, but it’s worth being deliberate about.

Fast-moving consumer categories running significant media weight benefit from monthly, fortnightly or even weekly waves.

The data is most useful when it can inform a campaign that’s still in market – not one that finished last quarter – react to rapidly changing events, and keep you on the front foot.

Slower-moving categories or brands in earlier growth stages can run quarterly without meaningful loss. The brand is moving slowly enough that monthly granularity adds cost without adding decision value.

Campaign periods are a special case. They warrant an ad-hoc pulse – a short, targeted wave run during active media – that catches lift while it’s happening. By the time a quarterly wave arrives, the campaign effect has often decayed beyond reliable measurement.

The real mistake here isn’t running at the wrong cadence; it’s not making an active choice about cadence at all.


SECTION 6:

Where Glow Fits Into This

Glow sits between the two failure modes described above. We’re not trying to out-methodology a traditional research agency.

If you need deep proprietary analytical modelling and decades of category expertise, that’s a different engagement.

Nor are we a stripped-down dashboard tool offering a lightweight pulse check with no diagnostic depth.

What we build are custom, modular brand tracking programs – at a cost structure and turnaround speed that makes continuous tracking viable for brands that couldn’t justify it at traditional agency pricing.

The brand funnel runs every wave. The modules adapt to what’s happening in your market. And the data is live in a self-serve dashboard, not sitting in a report queue.

We’re honest about what sits outside the platform: deep data integration with CRM and sales modelling happens off-platform, with your own data infrastructure or a specialist partner. What Glow provides is the brand perception layer – the inputs that, when combined with behavioural data, start to tell a commercial story.

If you’re reviewing a current program or building from scratch, we’re happy to walk through what a well-designed tracker looks like for your category.

Book a strategy session at glowfeed.com

Troy Kohut is Chief Commercial Officer at Glow, a research technology platform that helps brands and agencies make faster decisions with better data. Glow has worked with Mars, Reckitt, Mondelez, Lego, Tourism Australia, and agencies including EssenceMediacom, Bain, and PwC.

