Nielsen, the MRC, and who watches the watchman?

The battle between Nielsen and the MRC has recently turned from a contest of righteous wills into an empathetic but blameless – on Nielsen's part – declaration of "we understand, we feel your pain, we are working to resolve the issue, but we did nothing wrong because we were only interested in the safety of panelists and those who maintain them." Nielsen did not use those exact words, but that is the gist of the letter CEO David Kenny released this week. The letter was an attempt to get in front of the recent controversy – after weeks of being behind it – over shortfalls in ratings accuracy from the TV metering powerhouse and its until-recent unwillingness to acknowledge the errors.

Missing from the foreground of the back-and-forth between Nielsen and the MRC – the Media Rating Council, the auditing and accrediting body for media measurement (which issues the accreditation but does not itself perform the audit on which it is based; a topic for another time) – is the reality looming large in the background: ever more numerous ways for people to consume video content (TV, for our purposes here) and Nielsen's inability to keep up with those changes.

The specific kerfuffle is ostensibly about the panel's not being properly maintained during the early COVID months, but it is really about the panel method's incongruence with current viewing habits, which are all about streaming. Nielsen's largest clients are CPG companies (traditional advertisers) and major media (traditional advertising suppliers). When over 70% of revenue comes from those who did well under the old conditions, the innovator will always have them as enemies. Nielsen hasn't had much incentive to look deeply at streaming, because its biggest customers didn't. Why care much about Netflix or YouTube viewing data when Netflix and YouTube don't need Nielsen to support their business models? But Nielsen's revenues have been slowly declining since 2017 – down 4.3%, or $282 million, with 74% of that drop coming in the single year from 2019 to 2020. Given the decline, and given that the old guard has moved into the streaming space (Disney, NBC/Comcast, CBS, etc.) in a meaningful way… it's time for Nielsen to get serious.
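Those revenue figures hang together arithmetically. A quick back-of-envelope sketch – note that the implied 2017 base is derived here, not stated in the piece:

```python
# Back-of-envelope check of the revenue figures cited above.
# Stated: revenue down 4.3% since 2017, a $282M total decline,
# with 74% of that decline occurring between 2019 and 2020.

total_decline_musd = 282        # $282 million total decline, 2017-2020
pct_decline = 0.043             # 4.3% of the 2017 base

implied_2017_revenue = total_decline_musd / pct_decline   # derived, not stated
decline_2019_to_2020 = 0.74 * total_decline_musd

print(f"Implied 2017 revenue: ${implied_2017_revenue:,.0f}M")  # ~ $6,558M
print(f"2019-2020 decline:    ${decline_2019_to_2020:,.0f}M")  # ~ $209M
```

In other words, roughly $209 million of the slide came in a single year – the scale of drop that tends to turn a slow leak into a board-level problem.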

What happens next?

To answer that, here is a brief exercise:

Observable phenomenon

  • Ratings of live, linear TV are shrinking.

  • The ratings coin of the realm has been shown to be impure.

  • The costs of linear TV inventory continue to rise, meaning advertisers pay more for less.

Cause of observable phenomenon

Smaller aggregated audiences over more points of contact with the medium.

  • This makes the current methodology inadequate for the medium's current structure.

  • That structure is increasingly streaming.

  • There is no common measure for streaming.

What the observable phenomenon causes

  • Declining faith in the purity of the current currency.

  • Uncertainty about the value of goods secured with that currency, i.e., TV inventory.

  • The value and relevance of the measuring body and the one that verifies it are called into question.

And that means?

Things can go a number of ways:

  1. Nielsen wins: Nielsen will hold its line, and the industry that has been dependent upon it for generations will fall in behind that line.

  2. MRC wins: A group of powerful networks and advertisers will band together and decide to strike now with an alternative while the beast is weak.

  3. Creative anarchy wins: The Republic once held in tow by – and dependent upon – the benevolent emperor will break up into small states of media-and-measurement combinations, manifesting a variety of alternatives geared toward a variety of circumstances. Local CTV will be done one way, local linear another; national linear some version of how it's always been done, national CTV/OTT another…

Nielsen panels will be held up by some (Nielsen most assiduously, followed by traditionalists) as providing a more accurate rendering of impressions, since the structure reports on content viewed and, thus, ads viewed. Digitally delivered video content ("TV") may not have impressions defined in the declarative way that Nielsen panels do (though it could, if a body fielding the panel did so through a ubiquitous connected device, e.g., a smartphone and an app). But it may not have to, if that body could triangulate the TV (more and more of which are connected devices), the portable personal connected device (i.e., the smartphone), and the location. There are companies that currently do this, intersecting device IDs to get deterministic, person-level data. Some of these companies are not knowledgeable in the ways of media discipline and analytics, but in the right hands, their output could be powerful. Would one be able to get person-level data on who did or did not leave a room? Difficult, though possible, if not always precise… yet.
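The triangulation idea can be sketched in miniature. This is a toy illustration of device-ID intersection – joining a connected TV to the phones seen on the same home network to get candidate person-level viewers – and every identifier, field, and record here is invented, not any company's actual method:

```python
# Toy sketch of device-ID triangulation: intersect device IDs observed
# on the same home network to tie a connected TV's viewing to specific
# people (via their phones). All data below is hypothetical.

from collections import defaultdict

# (device_id, device_type, network_id) observations -- invented examples
observations = [
    ("tv-01",    "ctv",   "net-A"),
    ("phone-11", "phone", "net-A"),
    ("phone-12", "phone", "net-A"),
    ("tv-02",    "ctv",   "net-B"),
    ("phone-21", "phone", "net-B"),
]

# Group devices by network; each group links a TV to co-located phones.
by_network = defaultdict(lambda: {"ctv": set(), "phone": set()})
for dev, kind, net in observations:
    by_network[net][kind].add(dev)

# For each TV, the phones on its network are candidate viewers.
household_graph = {
    tv: sorted(group["phone"])
    for group in by_network.values()
    for tv in group["ctv"]
}
print(household_graph)
# prints {'tv-01': ['phone-11', 'phone-12'], 'tv-02': ['phone-21']}
```

The hard part in practice is not the join itself but the quality of the identifiers being joined – which is exactly where the "right hands" matter.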

All that said, none of it may matter if large advertisers conclude that a single media-currency system involves more friction than extending the analytics they are already doing to arrive at a common-denominator metric across the multiple media they use. Basically, what has been the usual and sometimes accurate, if not always precise, way of validating media across multiple channels and types will remain disparate and, over time, become more precise. Advertisers will still want a common-denominator metric against which to normalize their valuations across media channels. A lot of work has been done across more than a few marketing-service providers and their clients to piece together data that do not naturally cohere in order to get a comprehensive understanding of media performance. It's what's been done for ages in one way or another, and with more media connected to an IP address – video (TV), audio (radio), digital out-of-home – the data for these analyses will arrive faster and be easier to centralize and process.

And then? To quote Nabokov:

“[O]ne day we shall have a real, all-embracing explanation, and then perhaps we shall somehow fit together, you and I, and turn ourselves in such a way that we form one pattern, and solve the puzzle: draw a line from point A to point B… without looking, or, without lifting the pencil… or in some other way… we shall connect the points, draw the line, and you and I shall form that unique design…”
