Last week, NextMark released its first data card quality report for the 2012 calendar year. This report continues to spark intense debate on the subject of data card quality, so let’s talk about it. Your comments would be much appreciated as we plan to refine the program for digital media publishers and planners.
Here’s how it all began…
When NextMark first launched its self-service data card publishing wizard at the turn of the millennium, the new interface was met with mixed reactions. Some media managers and list owners were excited to finally have control over their promotional content, but others were not looking forward to the extra work. This created gaps in the attention that data cards received, which in turn created problems for researchers who rely on data cards for purchase decisions and campaign planning.
To further encourage media managers to update their data cards and improve their content for researchers and campaign managers, NextMark introduced a new service on October 15, 2000 to integrate data cards on managers’ web sites. This created an even greater sense of ownership and brand awareness, but it was still not enough to address the issues of missing contact information, out-of-date counts, and other deficiencies.
On May 13, 2003, NextMark introduced its first data card quality report, electronically analyzing more than 30,000 data cards (now more than 70,000). For each data card, a proprietary algorithm rates the quality of 13 key attributes. The primary objective of this initiative was to ensure that data cards were complete and accurate, and quality did improve, though only modestly.
On February 21, 2008, the data card quality report went public with a ranking of the top 50 managers. The ranking stung companies that did not make the list, so a refined version was released on June 23, 2008 that categorized the top managers by the number of data cards in their respective portfolios.
As word got out, some good things started to happen. Data card publishers began to pay close attention to the scores and the rankings, and many began to institute best practices for timely updates and list content management. Scores have been improving ever since, but something else also started to happen.
Data card quality rankings became a promotional opportunity for media managers, and the scores were often taken out of context. The scoring algorithm, intended to measure completeness and update recency, came to be perceived as a holistic measure of media management firms. Although unintended, this created some confusion.
To keep it simple, here is the three-point truth about data card quality.
Point 1: data card quality is independent of list quality.
Point 2: data card quality measures completeness and update recency.
Point 3: data card quality does not measure content quality or accuracy.
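To make points 2 and 3 concrete, here is a purely illustrative sketch of what a completeness-plus-recency score might look like. This is not NextMark's actual algorithm (the real one rates 13 proprietary attributes); the field names and weights below are hypothetical, and the point is simply that such a score rewards filled-in, recently updated cards without ever checking whether the content is accurate.

```python
from datetime import date

# Hypothetical illustration only -- NextMark's 13-attribute algorithm is
# proprietary. This toy score shows the idea that "quality" here means
# completeness plus update recency, not accuracy of the content itself.
REQUIRED_FIELDS = ["counts", "pricing", "contact_info", "description"]

def toy_quality_score(card: dict, today: date) -> float:
    """Score 0-100: half for completeness, half for update recency."""
    filled = sum(1 for f in REQUIRED_FIELDS if card.get(f))
    completeness = filled / len(REQUIRED_FIELDS)      # 0.0 - 1.0
    days_old = (today - card["last_updated"]).days
    recency = max(0.0, 1.0 - days_old / 365)          # fully stale after a year
    return round(50 * completeness + 50 * recency, 1)

card = {
    "counts": 125_000,
    "pricing": "$95/M",
    "contact_info": "manager@example.com",
    "description": "",                  # empty field lowers the score
    "last_updated": date(2012, 11, 1),
}
print(toy_quality_score(card, date(2012, 12, 31)))  # → 79.3
```

Note that a card stuffed with wrong counts and stale segment data could still score perfectly here, which is exactly why a high score should not be read as a verdict on list quality.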
You should not judge a media manager on data card quality alone; there are more important factors to consider. For example, take a look at the following catalog list rate card: it has a high ‘popularity index’ in addition to a quality presentation of the media (a postal list, in this case) it represents. The counts are current through the end of the most recent month, the monthly and quarterly hotlines are provided, and the average age and income of the audience are given. Also be aware that some media managers may confirm an update without actually changing the counts. We are on to them and will flag such updates accordingly, to make sure our research users are aware of the difference.