The problem


To err is human, and to err again and again is to be a pollster, not just in India but in most parts of the world.1 Does this then mean that we should give in to the demands made by some to ban election polls? Or, at the very least, eliminate televised coverage of election forecasting, since it is little more than ‘a circus in town’, as critics of election polls like to suggest? Yet, even if election-related polls are to be banned or contained, can we in policy making, in the world of election campaign design and in academia do without the data that opinion polls generate?2

In popular imagination, the only utility of opinion polls is their ability to predict election results. The controversy over election polling and forecasting in the public domain stems largely from two perceptions. First, if the polls are so often wrong in forecasting elections, why conduct them at all? Second, many argue that opinion polls sway voters’ preferences, possibly creating hawa (a bandwagon effect) in favour of a particular party or, at times, ‘sympathy’ for another (the underdog effect),3 thus subverting the process of electing a government by fair means.4 The evidence to support this claim, at least in India, is very thin.

While a large part of polling during elections is done purely to forecast results, the overall goals of many opinion polls are much broader; they are carried out as part of larger academic pursuits.5 This issue of Seminar deals with three basic poll-related questions. First, is polling an imperfect science? Second, what is the ideal time to conduct a poll that proposes to study the political behaviour of citizens? And third, what would be the academic cost of not conducting polls at the time of elections?

The comparative insight from country cases and the wider literature highlights several important aspects of polling. First, opinion polls, though imperfect, have come a long way in the last few decades in decoding citizens’ political preferences. Second, though political parties and their leaders remain mostly critical of surveys, especially those pointing to a probable outcome against their party, they too conduct in-house surveys to plan election strategy. Third, polls often go wrong in forecasting results, but for very different reasons, such as failing to follow basic methodological protocols or a design unable to account for outliers; there is no one particular reason why polls err. Fourth, unless they have been horribly off the mark in estimating vote shares, the data generated is still useful for academic purposes, provided the poll was conducted following proper methodological protocols. In particular, data from opinion polls have helped provide insights into some of the big questions of Indian politics, such as the higher participation of marginalized sections of Indian society6 and majoritarian impulses among Indian voters.7

Still, after every election we rush to the conclusion that election polling and forecasting – the attempt to read a complex election from a survey of a few thousand voters – is nothing but a big tamasha (spectacle). Before we are pressured to join one side or the other of this debate, it is important to have a nuanced understanding of what opinion polls are, how they help in discussing crucial questions of politics and society, and what their limitations are.

While the design of survey instruments, the mode of conducting interviews, the training of investigators and budgetary resources play a large role in the quality of the final data, the timing of polls also matters. The hardest part of polling is generating a random sample that is large and representative enough of the population the pollsters or researchers want to study. It is here that most polling agencies in India falter and have a dubious record. Notably, barring a few, polling agencies do not make their methods publicly available.8

The data collected through polls is processed: the representativeness of the sample is checked and corrected for any skewness.9 Statisticians then use mathematical models to arrive at estimates of vote shares and seat predictions. This may sound odd, but by definition, an ‘estimate’ or a ‘prediction’ can only be a ballpark figure.10 The demand for pinpoint statistical precision from pollsters is thus unwarranted. The whole process of generating data from the field is long, tiresome, cost-intensive and often messy. This is not something that comes across to the average viewer who sees a complex election and a cumbersome data gathering exercise boiled down, on TV screens, into a few numbers. During a few hours of prime time viewing, we are thrilled and glued to our TV sets at the thought of being able to predict the winner. Anchors do not want to dampen expectations by lecturing viewers about the ‘margin of error’11 and how a change of a few percentage points, which often happens, may well turn the results upside down. Consequently, the average viewer has unrealistic expectations of election polls.

And once it becomes clear, after the results are announced, that most polls were wrong, there is usually an outcry to ban all polls. The criticisms range from naive comments based on unfair expectations to much deeper questions that lie at the heart of various traditions of social science inquiry.12 Supporters of opinion polls routinely invoke the principle of freedom of speech and expression, whereas opponents may go to the extent of questioning pollsters’ motives. In the latter case, all pollsters are painted with the same brush and labelled ‘sold out’, such that the whole enterprise of election studies is seen as no different from the practices of palm readers and astrologers.

There is good evidence to suggest that the political preferences of a large section of Indian voters are not very stable and that voters regularly review and revise their vote choice. This, combined with India’s first-past-the-post electoral system, in which multiple parties compete and change their pre-poll allies in every successive election, makes forecasting elections in India a herculean task. Perhaps that is why psephology in India has to be as much an ‘art’ as a ‘science’.13 The data collected through polls must be interpreted carefully, with deep knowledge of India’s diverse polity and society, to yield meaningful insights.14

All said, the election studies enterprise in India and elsewhere has had its fair share of failures. There is a methodological debate about whether surveys are the best way to capture voters’ moods or preferences. Some have argued that an investigator spends far less time with a respondent than is necessary to win her confidence and gain insight into why she votes the way she does. While this criticism is fair, alternative methods of gauging voters’ moods, such as ethnography, are too narrow in scope to provide a macro understanding. They are nonetheless useful in providing ‘thick descriptions’ and laying out the mechanisms that connect the causal chain. Journalistic accounts, meanwhile, often suffer from selection bias, ranging from the choice of travel route to the people interviewed.

In addition to the many challenges of conducting polls in India, there are other important issues that demand serious introspection on the part of election studies scholars. For instance, sample surveys can, at best, give snapshots of reality; they are less helpful when it comes to answering the ‘why’ question. The solution currently being offered is a turn towards experimental methods. However, there are disadvantages associated with conducting randomized survey experiments.15 For example, due to their design, randomized experiments are not very helpful in collecting information on many themes simultaneously, thus limiting the possibility of conducting a holistic analysis of citizens’ preferences.

Similarly, as many have argued, since the research agenda of the election studies enterprise has been so time-sensitive, it has almost entirely ignored the study of society and politics in between elections – a time when many durable political preferences are formed. As a result, crucial questions of nationalism, citizenship, and what democratic governance means to Indians have received insufficient attention.16 Another major criticism of the election studies enterprise in India is that it is much too western in its orientation – a ‘significant proportion’ of the questions placed in survey instruments do not fit well into Indian contexts. Equally, the wording of questions is often difficult for average respondents to decipher, thus eliciting a lot of ‘no opinion’ responses, generally from women, the uneducated and lower segments of Indian society. This skews the overall results, creates a bias against the underprivileged and defeats the very point of conducting a representative survey.

This issue of Seminar seeks to engage with some of these criticisms, as also to examine various aspects of election polling and forecasting. It discusses some of the benefits and limitations of the election studies enterprise in India in a comparative context. In the final instance, elections are merely a window, and opinion polls are just one of the many tools used to peer through that window to study the political and social fabric of our society, with the ultimate goal of understanding its democratic health. Hopefully, a clearer understanding of these tools will help readers better contextualize the results they deliver.

RAHUL VERMA

 

* I would like to specially acknowledge Suhas Palshikar’s contribution in conceptualizing this Seminar issue and Sanjay Kumar for his generous support and guidance. Anustubh Agnihotri, Pranav Gupta, Sam Solomon and Shreyas Sardesai provided comments on the first draft. Susan Ostermann, as always, patiently read through various drafts, and Philip Oldenburg graciously helped in converting it to its current shape. The remaining errors are mine alone.

Footnotes:

1. Kaushik Basu, the present Chief Economist at the World Bank, said something very similar about economic forecasting last year.

2. We are largely concerned with polls that are conducted at the time of elections.

3. There is some indirect evidence to suggest that opinion poll results also influence voter turnout patterns.

4. See, Jill Lepore, ‘Are Polls Ruining Democracy? Politics and the New Machine’, New Yorker, 16 November 2015.

5. This in no way suggests that election night forecasting is a non-serious business. A lot of methodological innovation in the field of election studies has taken place to improve election night forecasting.

6. See, Yogendra Yadav, ‘Understanding the Second Democratic Upsurge: Trends of Bahujan Participation in Electoral Politics in the 1990s’, in Transforming India: Social and Political Dynamics of Democracy, 2000, pp. 120-45. For a somewhat contrary view, see, K.K. Kailash, ‘The More Things Change, the More They Stay the Same in India’, Asian Survey 52(2), 2012, pp. 321-347.

7. See, Suhas Palshikar, ‘Majoritarian Middle Ground?’ Economic and Political Weekly 39(51), 2004, pp. 5426-5430. Suhas Palshikar, ‘The BJP and Hindu Nationalism: Centrist Politics and Majoritarian Impulses’, South Asia: Journal of South Asian Studies 38(4), 2016, pp. 719-735.

8. Many have made a plea to various polling agencies to be transparent with their methodology. See, for example, Yogendra Yadav, ‘Opinion Polls – The Way Forward’, The Hindu, 12 November 2013; Rukmini S., ‘Behind the Method of the “Most Successful Pollster”’, The Hindu, 20 December 2013; Karthik Shashidhar, ‘How to Make Opinion Polls More Honest’, Live Mint, 27 February 2014.

9. The use of sampling weights to correct skewness in data has an established tradition in statistics. This is not data massaging or manipulation, as it is sometimes called.
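As a minimal sketch of what such weighting involves – with made-up figures and a single, hypothetical ‘locality’ variable, not the procedure of any particular polling agency – an under-represented group is simply counted for more in proportion to its known share of the population:

```python
# Illustrative post-stratification weighting with invented numbers;
# not the method of any specific poll or agency.
import pandas as pd

# Hypothetical raw sample: each row is one respondent.
sample = pd.DataFrame({
    "respondent_id": range(1, 9),
    "locality": ["urban"] * 5 + ["rural"] * 3,
})

# Suppose the census benchmark is 35% urban and 65% rural,
# but the achieved sample is skewed towards urban respondents.
population_share = {"urban": 0.35, "rural": 0.65}
sample_share = sample["locality"].value_counts(normalize=True)

# Weight = population share / sample share; under-represented groups count for more.
sample["weight"] = sample["locality"].map(
    lambda g: population_share[g] / sample_share[g]
)

# Weighted shares now match the census benchmarks (0.35 urban, 0.65 rural).
print(sample.groupby("locality")["weight"].sum() / sample["weight"].sum())
```

In practice, agencies weight on several variables at once (gender, caste group, locality, and so on), but the principle is the same as in this toy example.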

10. In an interview with Yogendra Yadav on the 1989 and 1991 Lok Sabha election predictions, Prannoy Roy put this point very succinctly: ‘When you get something spot on it’s bound to be a bit of fluke: the methodology doesn’t allow you to get anywhere but within 20 seats of the final result.’ See, Yogendra Yadav, ‘Interview with Prannoy Roy’, Seminar 385, 1991, pp. 61-63.

11. ‘Margin of error’ is a statistical expression; every survey has one. It simply means that an estimate arrived at from a random sample will, due to chance alone, differ somewhat from the figure that would be obtained if every person in the population were surveyed. The larger the sample size (provided a rigorous random sampling technique is followed), the lower the margin of error.
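As a purely illustrative calculation (the figures here are hypothetical, not drawn from any poll discussed in this issue): for a simple random sample of size $n$ and an estimated proportion $p$, the 95 per cent margin of error is roughly

\[
\text{MoE} \approx 1.96\,\sqrt{\frac{p(1-p)}{n}}, \qquad \text{e.g. } p = 0.5,\ n = 1000 \;\Rightarrow\; \text{MoE} \approx 1.96\,\sqrt{\frac{0.25}{1000}} \approx 0.031,
\]

that is, about three percentage points either way; a vote share estimate of 40 per cent is then best read as a band running roughly from 37 to 43 per cent.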

12. For a quick overview, see, Yogendra Yadav, ‘Whither Survey Research: Reflections on the State of Survey Research on Politics in Most of the World’, Malcolm Adiseshiah Memorial Lecture, 2008. Accessible at: http://www.lokniti.org/Malcolm_adiseshiah_memorial_lecture_yy.pdf

13. See, the discussion between Yogendra Yadav and Ranjit Chib, ‘Psephology is not a science like microbiology... it’s poll studies. But everyone thinks only of seat forecasts’, The Indian Express, 27 January 2008. Accessible at: http://archive.indianexpress.com/news/-psephology-is-not-a-science-like-microbiology...-it-s-poll-studies.-but-everyone-thinks-only-of-seat-forecasts-/265885/0

14. See, Philip Oldenburg, ‘Pollsters, Pundits, and a Mandate to Rule: Interpreting India’s 1984 Parliamentary Elections’, The Journal of Commonwealth and Comparative Politics 26(3), 1988, pp. 296-317; Yogendra Yadav, ‘Polls, Prediction, Psephology’, Seminar 385, 1991, pp. 56-59; Rajeeva Karandikar, Clive Payne and Yogendra Yadav, ‘Predicting the 1998 Indian Parliamentary Elections’, Electoral Studies 21(2), 2002, pp. 69-89.

15. Randomized survey experiments mimic the methodology of clinical trials, in which one group is given a ‘treatment’ and the other acts as a ‘control’ in order to determine the effect of a given drug. In the social sciences, researchers randomly divide a selected sample into control and treatment groups; the treatment group then receives a slightly different question (or stimulus) from the control group, in the expectation that it will answer (or behave) differently from the control group, given the different stimulus applied.
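A minimal sketch of the logic, with an invented question framing and simulated responses rather than data from any actual study, might look as follows: because assignment to the two wordings is random, the difference between group means estimates the effect of the framing itself.

```python
# Illustrative randomized survey experiment with simulated data;
# the 'framing effect' and agreement rates are invented for the example.
import random
import statistics

random.seed(42)
respondents = list(range(1000))

# Randomly assign each respondent to the control or treatment wording of the same item.
assignment = {r: random.choice(["control", "treatment"]) for r in respondents}

def simulated_answer(group: str) -> int:
    """Return 1 (agree) or 0 (disagree); the treatment wording shifts agreement slightly."""
    base_rate = 0.40 if group == "control" else 0.48
    return 1 if random.random() < base_rate else 0

answers = {r: simulated_answer(assignment[r]) for r in respondents}

# The gap between the two shares estimates the effect of the alternative wording.
for group in ("control", "treatment"):
    share = statistics.mean(answers[r] for r in respondents if assignment[r] == group)
    print(f"{group}: {share:.2%} agree")
```

The design isolates the effect of one stimulus cleanly, but, as noted above, each experiment can usually vary only one or two things at a time, which is why it limits the breadth of themes a single survey can cover.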

16. The State of Democracy in South Asia (SDSA) surveys conducted by the Lokniti-CSDS group in 2005 and 2013 are perhaps exceptions to this larger trend.
