It’s the political season, so we are bombarded with the latest surveys updating the public’s mood on candidate standings. But what we neglect to take fully into account is that the surveys represent only a small percentage (those who chose to answer the phone) of registered party voters (not the general population) with listed phone numbers (surveys fall outside the Do Not Call rules, but many people have moved from landlines to cell phones and never updated their listed numbers).
I personally never answer a number I don’t recognize; it’s my call-screening tool, and I never want to take surveys. So the surveys are only reaching a small percentage of a limited group of people who:
- a) are at home, if a landline
- b) choose to answer an unknown number
- c) don’t hang up and agree to take the survey
- d) are registered voters of the party (versus independents or members of another party)
Doesn’t sound representative to me. The published results sound far more significant than they should. “So-and-so moved up/down double digits from the last survey,” without bothering to say what percentage of the total the survey is reporting on. That might be good to know, to put things in proper perspective. But proper perspective isn’t the goal now, is it?
If the last survey recorded the responses of 100 people out of a potential pool of 10,000, that’s the opinion of 1%. Moving double digits within that 1% is hardly meaningful. The difference between 20% of 100 and 30% of 100 is only 0.1% of the overall whole, but “double-digit increase” sounds better than “a 0.1% change”! Obviously, transparency about the basis of the survey results would be helpful, but it is not forthcoming.
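To make that arithmetic concrete, here’s a minimal sketch using the made-up numbers above (a pool of 10,000, a sample of 100); none of these figures come from any real survey:

```python
# Made-up numbers from the example above, not real survey data.
pool = 10_000      # potential voters the survey claims to speak for
sample = 100       # people who actually responded

before = 0.20 * sample   # 20 respondents favored the candidate last time
after = 0.30 * sample    # 30 respondents favor the candidate now

headline_swing = (after - before) / sample * 100   # what gets reported
pool_swing = (after - before) / pool * 100         # share of the whole pool

print(f"Headline swing: {headline_swing:.0f} points")      # 10 points
print(f"As a share of the full pool: {pool_swing:.1f}%")   # 0.1%
```

Same ten people, two very different-sounding numbers.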
And what percentage of the total did the second survey reach? If 200 people were reached in survey #2, you simply can’t compare survey #1 to survey #2; the response pools are not the same.
“In survey #1 we found these results… in survey #2, support is shifting as so-and-so lost ground and so-and-so gained points.” That draws a false comparison between two different groups, since the same people didn’t participate in both surveys.
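The same kind of sketch shows why the two headline numbers describe different groups. Again, the figures are invented purely for illustration (survey #1 reaching 100 people, survey #2 reaching 200):

```python
# Invented figures for illustration only.
pool = 10_000

surveys = [
    {"name": "survey #1", "sample": 100, "favor": 20},   # 20% headline
    {"name": "survey #2", "sample": 200, "favor": 60},   # 30% headline
]

for s in surveys:
    headline = s["favor"] / s["sample"] * 100
    coverage = s["sample"] / pool * 100
    print(f"{s['name']}: {headline:.0f}% headline, "
          f"reaching {coverage:.0f}% of the pool")

# The reported "10-point gain" compares 100 people in survey #1 with a
# different 200 people in survey #2; nothing guarantees any overlap.
```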
Pollsters claim that surveys reach a cross-section, but I’m not buying it. What about gender, age, and socio-economic factors? The fallback answer would be that surveys are just an unofficial gauge… so why then would I ever put any faith in them?
Because the media makes them newsworthy, gives the surveys real power, and skews the public’s thinking. The media pumps up the data as if it were real, hardcore reality. Only it’s not.
Here’s what surveys actually communicate: they give the perception that public opinion is headed in one direction, when in reality the surveys are minimally representative. But what happens next is unfortunate: the surveys can create the reality, up to a point. Herd mentality means that many go along with the perceived majority; you think the herd knows something you don’t. The thinking becomes: thousands of people can’t be wrong. That is real power. But is the herd a true herd? The survey leads you to believe it is.
Another problem with surveys is that they can be skewed toward any desired result by how the questions are worded. Unfortunately, language has easy loopholes.
It’s like asking my husband: “Is this the best dinner you’ve had today?” Since it’s the only dinner he’s had today, the answer is “yes” 100% of the time, and the survey reports “repeatedly voted best cook.” Ha! And if you’ve tasted my cooking, you’d know that the survey is not purely objective 🙁
Comment?