

Advertising’s Holy Grail: Connecting Ad Exposures To In-store Sales

4 minute read | Leslie Wood, Chief Research Officer, Nielsen Catalina Solutions | July 2016

How do you know if advertising works? Is there a way to directly measure the sales results of consumers exposed to an advertising campaign?

Scholars and marketers have grappled with measuring advertising effectiveness for decades. In the 1970s and early 1980s, they focused on lag effects, decay rates, adstock, lag coefficients, and half-lives. In the 1990s, game-changing work demonstrated that the long-term effect of advertising is roughly equal to twice the value of its short-term effect. The industry was making strides in its understanding of advertising effectiveness in many different ways, but when it came to actually comparing what ads people were exposed to with what they bought, the data sources it relied on had so little in common that it was impossible to draw definitive connections between advertising and the behavior it sought to influence.

The most powerful response to this difficulty has come in the form of “single source” data. Single-source data allows you to track what a group of people watch and what they buy. Because you know some were and some were not exposed to certain advertising, you can isolate the sales driven by that advertising by controlling for enough variables that the only real difference between your two groups is that one did and one did not see the advertising.
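The core comparison can be sketched in a few lines. This is a hypothetical, minimal illustration (invented household IDs and spend figures, not Nielsen data): given a matched panel with an exposure flag per household, the campaign’s sales lift is the difference in average spend between the exposed and unexposed groups, assuming the groups are otherwise balanced.

```python
import pandas as pd

# Hypothetical single-source panel: one row per household, with its
# ad-exposure flag and total brand spend over the campaign window.
panel = pd.DataFrame({
    "household_id": [1, 2, 3, 4, 5, 6],
    "exposed":      [True, True, True, False, False, False],
    "spend":        [12.0, 9.5, 11.0, 8.0, 7.5, 9.0],
})

# Compare mean spend of exposed vs. unexposed households.
mean_spend = panel.groupby("exposed")["spend"].mean()
lift = mean_spend.loc[True] - mean_spend.loc[False]

print(f"Exposed mean spend:   {mean_spend.loc[True]:.2f}")
print(f"Unexposed mean spend: {mean_spend.loc[False]:.2f}")
print(f"Sales lift per household: {lift:.2f}")
```

In a real study, the control group would be selected or weighted so that demographics and past purchase behavior match the exposed group; the simple mean difference above stands in for that controlled comparison.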

The concept has been around since the mid-1960s, but Project Apollo in 2006 was the first large-scale commercial pilot that leveraged single-source methodologies. Nielsen Homescan® technology was used to capture consumer purchase behavior and combined with television exposure data from Arbitron for several major consumer packaged goods (CPG) companies.

Bingo, sort of. Much was learned. But collecting everything from a household or person—both what they watched and what they bought—was expensive, so the price of precision was that the data was small data. The Apollo panel included about 11,000 persons in 5,000 households, not large enough to report findings at the level of granularity needed for small brands.

Today, however, we can create single-source datasets by merging transaction datasets at the scale required. Datasets are linked and anonymized by a third party via a shared identifier in each dataset to create a single dataset showing how the household purchases of those exposed to particular advertising differ from those who were not exposed, isolating the sales effect of the advertising campaign.
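As a sketch of that linking step (all names, IDs, and figures here are hypothetical; in practice a neutral third party performs the match so that neither side ever sees personally identifying keys), a one-way hash can serve as the shared identifier that joins an exposure file from the media side to a purchase file from the retail side:

```python
import hashlib
import pandas as pd

def anon_id(raw_id: str) -> str:
    """One-way hash standing in for the third party's anonymized matching key."""
    return hashlib.sha256(raw_id.encode()).hexdigest()[:12]

# Hypothetical exposure file (media side) and purchase file (retail side),
# each keyed by the same underlying household identifier.
exposures = pd.DataFrame({
    "hh": [anon_id(h) for h in ["hh-001", "hh-002", "hh-003"]],
    "ad_impressions": [5, 0, 2],
})
purchases = pd.DataFrame({
    "hh": [anon_id(h) for h in ["hh-001", "hh-002", "hh-003"]],
    "brand_spend": [14.0, 6.0, 10.0],
})

# The merged table is the single-source dataset: exposure and purchase
# behavior for the same (anonymized) households.
single_source = exposures.merge(purchases, on="hh")
print(single_source)
```

Because the hash is applied identically to both files, matching households line up without either dataset ever carrying the raw identifier.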

These datasets are hard to create. How do you replicate the precision of small data on the scale of big data? We draw on frequent shopper data, set-top box data, data from cookies: in short, all the big data we can gather. But no big data is complete. For instance, with set-top box data, you can’t always tell whether the TV set is on or who’s watching (for more on this, see “The Value of Panels in Modeling Big Data” in this issue). And even with 90 million people in our frequent shopper database, we can’t tell what other purchases they’re making without their loyalty cards.

One solution is to take a genuinely complete dataset and use it to “calibrate” the big dataset. In our case, we take our Homescan data, which tracks every purchase made by 100,000 households. If you match Homescan against our frequent shopper database, you will find overlaps between the datasets. Now you can see, for the people in both databases, what purchases are missing from the frequent shopper database. You can then model that gap to reflect the full purchase patterns for the large database, and project the results to the total population.
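A toy version of that calibration, with invented spend figures: treat the complete panel as ground truth for the overlap households, measure what fraction of true spend the loyalty-card data captures, and scale the incomplete data up accordingly. (Real calibration models the gap at a much finer grain, by household, category, and retailer, rather than with a single scalar ratio.)

```python
import pandas as pd

# Hypothetical complete panel (stand-in for Homescan): total spend per
# household, covering every purchase.
panel_spend = pd.Series({"hh1": 100.0, "hh2": 80.0, "hh3": 120.0})

# Hypothetical frequent-shopper data for the same overlap households:
# only card-linked purchases are captured, so spend is undercounted.
fsp_spend = pd.Series({"hh1": 60.0, "hh2": 50.0, "hh3": 70.0})

# Coverage ratio from the overlap: the share of true spend that the
# loyalty-card data actually captures.
coverage = fsp_spend.sum() / panel_spend.sum()

# Calibrate: scale the big (but incomplete) dataset toward full spend.
calibrated = fsp_spend / coverage
print(f"coverage = {coverage:.0%}")
```

Here the overlap shows the loyalty-card data capturing 60% of true spend, so dividing by that coverage ratio projects the incomplete records back toward complete purchase patterns.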

This progression from “small and smart” data to “big data” to “big and smart” data is what makes it possible for single-source data to be leveraged at scale today, and with enough precision to support daily marketing decisions.

Of course, there remains a next frontier in precision. For instance, a lot of questions arise when you combine watch and buy datasets. Some of the data is person level, some is household level. The person watching the Froot Loops commercial isn’t necessarily the person buying the Froot Loops. It’s still very difficult today to untangle the important question of purchase influence, that is, when someone watches something and gets someone else to buy something. Viewability of digital ads is another important challenge, as is fraud: bot traffic and the many other ways in which digital exposure is fraudulently increased. Solving these challenges will make our answer to the central question of what effect watching has on buying that much more precise.

Ultimately, it’s important to recognize that the data sources that contribute to single-source datasets need to be near real-time, accurate, complete and comprehensive, and the methodologies used to calibrate them must be capable of projecting the results to the total population.
