Nate Silver's analysis: Gallup was the worst poll, much worse than Google.

Source: 老忽叔叔, 2012-11-11 03:48:52

Nate Silver's analysis: Gallup was the worst poll, much worse than Google.

Gallup had a systematic bias of 7.2 percentage points in favor of Romney.

IBD/TIPP was the most accurate, with the smallest bias, only 0.1 points.

The CNN poll had a bias of 0.6 points.

http://fivethirtyeight.blogs.nytimes.com/2012/11/10/which-polls-fared-best-and-worst-in-the-2012-presidential-race/

Which Polls Fared Best (and Worst) in the 2012 Presidential Race

As Americans’ modes of communication change, the techniques that produce the most accurate polls seem to be changing as well. In last Tuesday’s presidential election, a number of polling firms that conduct their surveys online had strong results. Some telephone polls also performed well. But others, especially those that called only landlines or took other methodological shortcuts, performed poorly and showed a more Republican-leaning electorate than the one that actually turned out.

Our method of evaluating pollsters has typically involved looking at all the polls that a firm conducted over the final three weeks of the campaign, rather than its very last poll alone. The reason for this is that some polling firms may engage in “herding” toward the end of the campaign, changing their methods and assumptions such that their results are more in line with those of other polling firms.

There were roughly two dozen polling firms that issued at least five surveys in the final three weeks of the campaign, counting both state and national polls. (Multiple instances of a tracking poll are counted as separate surveys in my analysis, and only likely voter polls are used.)

For each of these polling firms, I have calculated the average error and the average statistical bias in the margin it reported between President Obama and Mitt Romney, as compared against the actual results nationally or in one state.

For instance, a polling firm that had Mr. Obama ahead by two points in Colorado — a state that Mr. Obama actually won by about five points — would have had a three-point error for that state. It also would have had a three-point statistical bias toward Republicans there.

The bias calculation measures in which direction, Republican or Democratic, a firm’s polls tended to miss. If a firm’s polls overestimated Mr. Obama’s performance in some states, and Mr. Romney’s in others, it could have little overall statistical bias, since the misses came in different directions. In contrast, the estimate of the average error in the firm’s polls measures how far off the firm’s polls were in either direction, on average.
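To make the error and bias calculations concrete, here is a minimal Python sketch of the bookkeeping described above, using the article's Colorado example. The function name and sign convention are my own assumptions for illustration; this is not FiveThirtyEight's actual code.

```python
# A minimal sketch of the error/bias calculation described above; the
# names are my own, not FiveThirtyEight's. Margins are Obama minus Romney
# in percentage points (positive = Obama lead).

def poll_stats(polls):
    """polls: list of (predicted_margin, actual_margin) pairs."""
    errors = [abs(pred - actual) for pred, actual in polls]
    # Signed miss: positive means the poll understated Mr. Obama's margin,
    # i.e. a Republican-leaning bias under the article's convention.
    biases = [actual - pred for pred, actual in polls]
    return sum(errors) / len(errors), sum(biases) / len(biases)

# The Colorado example from the article: a poll showing Obama +2 in a
# state he actually won by about 5 points.
avg_error, avg_bias = poll_stats([(2.0, 5.0)])
print(avg_error, avg_bias)  # 3.0 points of error, 3.0 points of Republican bias
```

Averaging the signed misses lets opposite errors cancel, which is why a firm can have a large average error but little overall bias.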

Among the more prolific polling firms, the most accurate by this measure was TIPP, which conducted a national tracking poll for Investor’s Business Daily. Relative to other national polls, its results seemed to be Democratic-leaning at the time they were published. However, it turned out that most polling firms underestimated Mr. Obama’s performance, so those that had what had seemed to be Democratic-leaning results were often closest to the final outcome.

Conversely, polls that were Republican-leaning relative to the consensus did especially poorly.

Among telephone-based polling firms that conducted a significant number of state-by-state surveys, the best results came from CNN, Mellman and Grove Insight. The latter two conducted most of their polls on behalf of liberal-leaning organizations. However, as I mentioned, since the polling consensus underestimated Mr. Obama’s performance somewhat, the polls that seemed to be Democratic-leaning often came closest to the mark.

Several polling firms got notably poor results, on the other hand. For the second consecutive election — the same was true in 2010 — Rasmussen Reports polls had a statistical bias toward Republicans, overestimating Mr. Romney’s performance by about four percentage points, on average. Polls by American Research Group and Mason-Dixon also largely missed the mark. Mason-Dixon might be given a pass since it has a decent track record over the longer term, while American Research Group has long been unreliable.

FiveThirtyEight did not use polls by the firm Pharos Research Group in its analysis, since the details of the polling firm are sketchy and since the principal of the firm, Steven Leuchtman, was unable to answer due-diligence questions when contacted by FiveThirtyEight, such as which call centers he was using to conduct the polls. The firm’s polls turned out to be inaccurate, and to have a Democratic bias.

It was one of the best-known polling firms, however, that had among the worst results. In late October, Gallup consistently showed Mr. Romney ahead by about six percentage points among likely voters, far different from the average of other surveys. Gallup’s final poll of the election, which had Mr. Romney up by one point, was slightly better, but still identified the wrong winner. Gallup has now had three poor elections in a row. In 2008, its polls overestimated Mr. Obama’s performance, while in 2010, it overestimated how well Republicans would do in the race for the United States House.

Instead, some of the most accurate firms were those that conducted their polls online.

The final poll conducted by Google Consumer Surveys had Mr. Obama ahead in the national popular vote by 2.3 percentage points – very close to his actual margin, which was 2.6 percentage points based on ballots counted through Saturday morning.

Ipsos, which conducted online polls for Reuters, came close to the actual results in most places that it surveyed, as did the Canadian online polling firm Angus Reid. Another online polling firm, YouGov, got reasonably good results.

The online polls conducted by JZ Analytics, run by the pollster John Zogby, were not used in the FiveThirtyEight forecast because we do not consider their method to be scientific, since it encourages voters to volunteer to participate in their surveys rather than sampling them at random. Their results were less accurate than most of the online polling firms, although about average as compared with the broader group of surveys.

We can also extend the analysis to consider the 90 polling firms that conducted at least one likely voter poll in the final three weeks of the campaign. One should probably not read too much into the results for the individual firms that issued just one or two polls, which is not a sufficient sample size to measure reliability. However, a look at this broader collective group of pollsters, and the techniques they use, may tell us something about which methods are most effective.

Among the nine polling firms that conducted their polls wholly or partially online, the average error in calling the election result was 2.1 percentage points. That compares with a 3.5-point error for polling firms that used live telephone interviewers, and 5.0 points for “robopolls” that conducted their surveys by automated script. The traditional telephone polls had a slight Republican bias on the whole, while the robopolls often had a significant Republican bias. (Even the automated polling firm Public Policy Polling, which often polls for liberal and Democratic clients, projected results that were slightly more favorable for Mr. Romney than what he actually achieved.) The online polls had little overall bias, however.
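As a rough illustration of how such a mode-level comparison can be tallied, here is a short Python sketch. The poll records below are entirely made up; only the grouping arithmetic reflects the approach described above.

```python
from collections import defaultdict

# Hypothetical poll records: (survey mode, predicted margin, actual margin),
# with margins as Obama minus Romney in percentage points. The numbers are
# invented for illustration only.
polls = [
    ("online",     2.3,  2.6),
    ("online",     1.5,  2.6),
    ("live phone", 1.0,  2.6),
    ("robopoll",  -2.0,  2.6),
]

by_mode = defaultdict(list)
for mode, pred, actual in polls:
    by_mode[mode].append((pred, actual))

for mode, records in by_mode.items():
    avg_error = sum(abs(p - a) for p, a in records) / len(records)
    avg_bias = sum(a - p for p, a in records) / len(records)  # + = Republican lean
    print(f"{mode:10s} error={avg_error:.1f}  bias={avg_bias:+.1f}")
```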

The difference between the performance of live telephone polls and the automated polls may partly reflect the fact that many of the live telephone polls call cellphones along with landlines, while few of the automated surveys do. (Legal restrictions prohibit automated calls to cellphones under many circumstances.)

Research by polling firms and academic groups suggests that polls that fail to call cellphones may underestimate the performance of Democratic candidates.

The roughly one-third of Americans who rely exclusively on cellphones tend to be younger, more urban, worse off financially and more likely to be black or Hispanic than the broader group of voters, all characteristics that correlate with Democratic voting. Weighting polling results by demographic characteristics may make the sample more representative, but there is increasing evidence that these weighting techniques will not remove all the bias that is introduced by missing so many voters.
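A toy example may help show what demographic weighting does, and why it can still fall short. The sketch below post-stratifies on a single hypothetical variable (age group); all shares and margins are invented for illustration and are not from the article.

```python
# Toy post-stratification on one variable (age group). All numbers are
# hypothetical; the point is only how reweighting shifts the estimate
# when an under-sampled group leans toward one candidate.

# Share of each group in the raw sample vs. in the target electorate.
sample_share = {"18-34": 0.15, "35+": 0.85}
target_share = {"18-34": 0.30, "35+": 0.70}

# Obama-minus-Romney margin observed within each group of the sample.
group_margin = {"18-34": 20.0, "35+": -2.0}

unweighted = sum(sample_share[g] * group_margin[g] for g in group_margin)
weighted = sum(target_share[g] * group_margin[g] for g in group_margin)

print(f"unweighted margin: {unweighted:+.1f}")  # roughly +1.3, dominated by 35+
print(f"weighted margin:   {weighted:+.1f}")    # roughly +4.6, closer to the electorate

# The caveat in the article: if the young voters a landline poll does reach
# differ from the cell-only young voters it misses entirely, upweighting
# the ones it reached cannot fully remove that bias.
```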

Some of the overall Republican bias in the polls this year may reflect the fact that Mr. Obama made gains in the closing days of the campaign, for reasons such as Hurricane Sandy, and that this occurred too late to be captured by some polls. In the FiveThirtyEight “now-cast,” Mr. Obama went from being 1.5 percentage points ahead in the popular vote on Oct. 25 to 2.5 percentage points ahead by Election Day itself, close to his actual figure.

Nonetheless, polls conducted over the final three weeks of the campaign had a two-point Republican bias overall, probably more than can be explained by the late shift alone. In addition, likely voter polls were slightly more Republican-leaning than the actual results in many races in 2010.

In my view, there will always be an important place for high-quality telephone polls, such as those conducted by The New York Times and other major news organizations, which make an effort to reach as representative a sample of voters as possible and which place calls to cellphones. And there may be an increasing role for online polls, which can have an easier time reaching some of the voters, especially younger Americans, that telephone polls are prone to miss. I’m not as certain about the future for automated telephone polls. Some automated polls that used innovative strategies got reasonably good results this year. SurveyUSA, for instance, supplements its automated calls to landlines with live calls to cellphone voters in many states. Public Policy Polling uses lists of registered voters to weight its samples, which may help to correct for the failure to reach certain kinds of voters.

Rasmussen Reports uses an online panel along with the automated calls that it places. The firm’s poor results this year suggest that the technique will need to be refined. At least it has some game plan to deal with the new realities of polling. In contrast, polls that place random calls to landlines only, or that rely upon likely voter models that were developed decades ago, may be behind the times.

Perhaps it won’t be long before Google, not Gallup, is the most trusted name in polling.

All replies:

This prediction was a bit like a blind cat catching a mouse; Democratic turnout was surprisingly high. -sleepdonkey- 11/11/2012 05:43:59

Not surprising. Obama worked as a community organizer; he knows how to mobilize people. -Eveline- 11/11/2012 11:37:14

keep thinking this way ... please. -老忽叔叔- 11/11/2012 11:45:21

I was just about to bring this up; my impression had been that Gallup was relatively objective. -菜鳥抄股- 11/11/2012 06:00:48

Objective and accurate are two different things. This time shows that both their statistical methods and their sample of people were wildly off. -用戶名被占用了- 11/11/2012 06:43:38

Everything else aside, how could they arrive at the conclusion that Romney had a big lead in FL? That really screwed things up. -用戶名被占用了- 11/11/2012 06:44:27

Yuanfang: Gallup made it up on purpose, politically motivated. When they couldn't keep the game going, they had to find an excuse. -SJSharks- 11/11/2012 07:03:23

Romney himself got fooled and was overly optimistic before the election. So were the Romney fans here: Cris, Daisy, 秋妹. -徒勞- 11/11/2012 08:01:48

And the Romney fan here, Noso: Landslide for Romney! Hahaha, dying of laughter! -SJSharks- 11/11/2012 09:18:04

A Fox News copycat. -老忽叔叔- 11/11/2012 11:39:13

All the media got it badly wrong. Before the election they said it was deadly close, and the result was 332:270. It feels like they were just chasing ad money. -徒勞- 11/11/2012 07:59:00

The result was 332:206 :)^_^ -SJSharks- 11/11/2012 09:20:30

Both of them got elected: one the real (正) one, one the rich (富) one :) -老忽叔叔- 11/11/2012 10:48:00

Obama has already won, so what more is there to say. Now it all comes down to his next four years. -Eveline- 11/11/2012 10:19:13

Reality check. I guess Romney would be better served had he know -老忽叔叔- 11/11/2012 11:38:30

Yeah, he clearly never looked at the census data. -Eveline- 11/11/2012 11:40:54

It's not a question of screwing it up or not. -大西洋海底- 11/11/2012 13:38:05
