AMS Quarterly, vol. 6 (November 2004): 10
Original manuscript, copyright retained by author

Political Polls, Samples and Research Misinformation
Herbert Jack Rotfeld
Professor of Marketing
Auburn University

With every election, politicians keep paid opinion poll researchers on staff, and political strategy changes to fit the latest readings of the public mood. News events of minor import shift the ratings, prompting supporters to urge a candidate to change marketing strategy, while the most common headlines feature opinion poll results saying who is "winning" or "losing" at a given point in time.

Even the most minimally informed voter must note a certain irony in all this. A party's national convention often seems to produce what reporters call "the convention bounce," yet the numbers can shift back a short while later as if nothing had changed. As news organizations spend increasing amounts of money to call each election winner accurately and at greater speed, they still make mistakes, calling races wrong well beyond their reported sampling errors. Predicted landslides turn out to be close, and close elections develop into blowouts.

For the public that is critical of all this, radio or television news programs sometimes provide an interview with the head of a company that supplies the polling updates the media report. These interviews usually open with a defense of using samples to predict votes, often by way of a comparison with a blood test: people opposed to doing research with samples of the public, the argument goes, should ask their doctor to take all of their blood for testing. The metaphor is common, but the comparison is not valid unless every blood cell, like every snowflake, is unique, such that a blood sample could provide only a statistical statement about the probable content of the rest of the circulatory system.

A better metaphor: a sample of the public is like taking an x-ray of one part of the body and drawing conclusions about the bone structure of the entire skeleton. Yes, bones are not randomly distributed in the body, but then, people are not randomly distributed in their neighborhoods. A truly random sample does not exist. Every sample frame has biases and distortions, and the people selected add further biases by not being available to interviewers or by not responding when they are. Significant proportions of some demographic groups are nearly impossible to contact. Increasing numbers of people are no longer easy even to find, because of their exclusive use of mobile phones, their use of new telephone technology to screen out unknown callers, or the distractions of a busy lifestyle.

The polling organization reports that the sampled responses come from "likely voters," but it does not report how many people contacted said they would not vote, and no one has ever gone back to check whether the poll's ratio of likely voters to nonvoters matched the eventual voter turnout, or even the last turnout from a similar type of election.

To further invalidate any biological metaphor, consumer research is not a straightforward test like DNA matching or measuring blood alcohol levels. Researchers ask about opinions and beliefs, and the questioners are not telepathic.

A few elections back, when called upon to explain the failure of the polls to call the elections accurately, National Public Radio commentators repeatedly described each poll as a "snapshot" of opinions at that time. And they still do. The polls were not wrong, or so they explained; the polls were right at the time they were conducted. This ignores that the exit polls, which surveyed people who had just voted, were also wrong that year. By 2004, the snapshot explanation had become a common refrain among journalists.

But a snapshot is also the wrong metaphor. Pictures do not lie, or so the saying goes, but like any other marketing research effort, public opinion surveys are incapable of capturing that degree of truth. In reality, each poll is more like an impressionist painting, carrying unavoidable qualitative biases and colored both by the people collecting the data and by the people who must interpret what they find.

As the shifting winds of the political polls headline the news reports, reporters give only the statistical sampling error, as if that and that alone explained the limitations of the data. Yet the way questions are asked influences how they are answered. People are pressed to answer questions to which they have not given much thought, or they lie.
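The sampling error reporters cite is nothing more than the textbook margin of error for a simple random sample. A minimal sketch of that calculation, assuming a 95% confidence level and a truly random sample of 1,000 respondents (the poll size and the perfect-randomness assumption are illustrative, not from any particular poll), shows how the familiar "plus or minus 3 points" figure arises:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Textbook margin of error for a simple random sample.

    n -- sample size
    p -- assumed population proportion (0.5 is the worst case)
    z -- z-score for the confidence level (1.96 for 95%)

    Assumes respondents form a true random sample -- the very
    condition the column argues real polls cannot satisfy.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of 1,000 "likely voters":
print(round(100 * margin_of_error(1000), 1))  # about 3.1 points
```

Note that the formula accounts only for random sampling variation; it says nothing about frame bias, nonresponse, question wording, or untruthful answers, which is precisely why citing it alone overstates a poll's precision.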

Even the most successful and cautious research-driven company has had its share of product failures. Anyone with long experience in marketing knows that a majority of new product launches will probably fail even when extensive research precedes the launch. Marketing people know the limits of their research information and try as much as possible to take those limits into account when making business recommendations. In academic journals, any discussion of research implications includes a note on limitations beyond the sample statistics reported in the results. Yet these same knowledgeable research people stand silent while reporters imply that, except for the lack of a larger sample, they could predict future consumer behavior with the accuracy of the character in "The Dead Zone" movie and television show.

Of course, the above litany of research limitations should be ingrained and obvious to anyone reading a marketing journal, or so we would hope. Yet when this abuse of survey research data recurs in the news - data collected by otherwise reputable companies that may sometimes provide it to generate publicity for their firms - rarely is a dissenting voice heard from the marketing research community. Marketing professionals and educators should not allow this to go unchallenged; social scientists should be actively seeking to educate the public about research abuse.

A century ago, advertising people realized that the proliferation of false advertising lowered the credibility and impact of more honest efforts. The continued abuse and misuse of marketing information by news organizations invites similar criticism of business practices, while the public remains ignorant of what marketing people can actually understand or predict about how people think.