Programmatic Targeting Can Miss A Forest Looking for One Tree

(An earlier version of this was published in January of 2022.)


The average American spends more than half of each day consuming media. In 2022, that came to 13 hours and 11 minutes a day; nearly 8 of those hours were classified as digital media.


Those 8 hours are filled with data-harvesting opportunities. Every flutter of fingers on a keyboard or stroke of a finger across a screen generates oodles of data. The data might be demographic (gender, age, location) or behavioral (visited a news source, purchased an Instant Pot). Combined, psychographics can be inferred (guys 25 to 54 from New Orleans visiting ESPN.com like sports and cooking). For years, advertising technology companies have worked to render human activity into machine-readable form and turn the resulting output into more information about audiences to target with advertising. That targeting has become increasingly precise, with more data points brought to bear on each contact with media; those data points number in the hundreds, even thousands. All in the service of learning more about customers, then using that knowledge to find more customers, itself a quest to eliminate the ever-lurking threat of "waste." Digital advertising also closed the loop between the advertising event, the data used to target it, and the actions taken in response. The data generated by those interactions places the advertising right beside the actions taken and the places where they occur.


Over time, moving advertising closer and closer to the action led to targeting methods that favor the locations where the action happens over the people advertisers want to take that action.


This has bred a selection effect in targeting decisions. What's the selection effect? Also known as selection bias, it skews the conclusions drawn from observed phenomena and their data because of which observations are chosen and how. There are multiple types of selection bias, but the two that most affect advertising are sample bias and time-interval bias. Stripped to its underwear, sample bias results from a non-random selection of the observed population.


Look-alike targeting, for example, uses data related only to existing customers. Basing decisions solely on data gathered from purchasers means the data reflects nothing but buying. This is akin to putting ads for Round Table Pizza outside Round Table Pizza parlors and concluding that the ads led to pizza purchases. Time-interval bias is artificially selecting an observation window that may not match the natural timeframe within which a behavior occurs. This is like gathering data about turkey buyers by looking only at the week before Thanksgiving. Sure, advertising pizza outside pizza parlors gives you a better-than-random chance of reaching people who buy pizza, but if the goal is to grow share and prove the advertising contributes to that growth, shouldn't the advertiser reach people who aren't patrons of Round Table, too? And if I want to sell more turkey, shouldn't I want to know about ALL the customers who buy turkey at any time of year, not just the one occasion when even people who never otherwise buy turkey show up? (Not to get into the statistical weeds, but this also screws with any proper negative binomial distribution reading.)
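For the statistically curious, the inflation that a buyers-only sample produces is easy to see in a toy simulation. This sketch assumes the classic NBD setup (gamma-distributed buy rates with Poisson purchasing); all the numbers are invented for illustration, not drawn from any real category.

```python
import math
import random

random.seed(42)

# Hypothetical population: individual buy rates vary widely.
# A gamma mixture of Poisson rates is the standard NBD assumption.
population = [random.gammavariate(0.8, 1.5) for _ in range(100_000)]

def poisson(lam):
    """Draw a Poisson sample via Knuth's algorithm (fine for small rates)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

# Purchases per person over the observation period.
purchases = [poisson(rate) for rate in population]

# Unbiased view: average purchases across the whole population.
true_mean = sum(purchases) / len(purchases)

# Biased view: observe only people who bought at least once,
# i.e. the "ads outside the pizza parlor" sample.
buyers = [p for p in purchases if p > 0]
biased_mean = sum(buyers) / len(buyers)

print(f"true mean purchases per person: {true_mean:.2f}")
print(f"buyers-only mean:               {biased_mean:.2f}")
```

The buyers-only average is always higher than the population average because the zeros (non-buyers, the people an advertiser must reach to grow share) are silently excluded, which is exactly the distortion look-alike and location-based targeting bake into their data.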

Advertisers need to start looking at audiences again rather than countable actions. You aren’t growing share if all you do is advertise to the people who have your product in their hands, standing in line at the checkout.


Does this mean no targeting data should be used? No, but more rigorous metrics planning should be done, and more stringent testing protocols should be deployed. There’s been too much reliance on the magical thinking of conventional wisdom. While not everything meaningful is countable, and not everything countable is meaningful, some things meaningful are countable, and even empty spaces are worthy of study.


What’s needed is not more or even better data — though there is a lot of chaff passed along as wheat — but better metrics planning.
