As the largest and possibly most diverse democracy in the world, India presents pollsters with a significant challenge. Sam Solomon recently spent 10 months working with survey research organisations based in Delhi, and here outlines key takeaways based on his findings.
I spent the last ten months in Delhi as a Fulbright-Nehru scholar working with three survey research organisations — the Centre for the Study of Developing Societies (CSDS), the Centre for Voter Opinion Trends in Electoral Research (CVOTER), and Impetus Research — to explore the challenges of polling India, the world’s largest democracy. During my time in Delhi, I assisted with the questionnaire design, sampling, and analysis of several election studies; interviewed senior public opinion researchers with decades of experience; and directly observed field investigators collecting data for pre-polls, exit polls, and post-polls in the five state elections occurring during my visit (Bihar, Assam, West Bengal, Kerala, Tamil Nadu). Below are some of my principal findings. They are something of a synthesis of my presentation at the South and Central Asia Fulbright Conference and an op-ed piece written for The Hindu following the Bihar elections.
1. Pay close attention to methodology when reading polls
Pollsters adopt different methodologies to measure public opinion in India. While CSDS conducts all of its surveys with face-to-face interviews, CVOTER conducts some polls, including its all-India tracker polls, over the telephone. But nearly all the researchers and journalists I interviewed agreed on the need for greater transparency from Indian pollsters when reporting their research. It is not enough to list only the sample size when reporting a poll. What was the demographic profile of the sample? Did it match the profile of the population being studied? Were interviews conducted in person, over the phone, or by another means? On what dates were the data collected? Many of the polls I read about in newspapers, and particularly the exit polls shown on television, omitted these details, which should accompany every poll. An Indian Polling Council is being launched to encourage greater transparency among Indian pollsters.
The public nature of the interviews and the time pressure of same-day reporting make exit polls especially difficult to execute. As I observed in Bihar, Assam, and West Bengal, probability sampling becomes more challenging when interviews are conducted in public. Voters are less willing to be interviewed right in front of their polling stations, and when they are, they are more hesitant to express their opinions. The challenge of collecting data quickly and sending it to a media outlet within the same day compounds the difficulty. Praveen Rai of CSDS recounted how CSDS exit polls badly misprojected the results of the 2007 Punjab Vidhan Sabha election. Heavy rainfall earlier in the day meant that the Akali Dal did not send out buses to collect its voters until later in the evening. Because CSDS had to collect and send data to its media affiliate by the afternoon, it vastly under-sampled Akali Dal supporters and incorrectly projected a Congress win.
2. Polls give an estimate of the vote share a party will win. Seat share is another matter
Many public opinion researchers, and even some journalists, complain about the incessant focus of media attention on the seat projections that anchor any panel discussion of polls on Indian television. It is of course natural that news consumers want to extrapolate from polls who will win an election. But India's first-past-the-post electoral system and multiplicity of political parties mean that small shifts in overall vote share can produce vastly different outcomes in seat share. Seat projections based on polls are therefore as much art as science.
The only researcher who has publicly explained in any detail how he or she generates seat projections is Rajeeva Karandikar of the Chennai Mathematical Institute, who made seat projections for CSDS polls that appeared on CNN-IBN (CSDS itself now studiously avoids seat projections). Karandikar is quite open about the pitfalls of using polls to predict how many seats a party will win. The exercise assumes that party vote shares will increase or decrease uniformly across a state's constituencies, which is rarely the case. Further subjective judgments are required whenever old alliances are broken and new ones formed, as when the JDU broke its alliance with the BJP in 2013 and then aligned with the RJD and INC for the 2015 Bihar Vidhan Sabha elections. The modeller must assess how much of the vote previously won by the JDU-BJP alliance will go to the JDU and how much to the BJP, and has little beyond intuition to go on. Karandikar thus encourages scepticism towards any seat projection; it is far more informative to pay attention to the vote share. This may be frustrating (who does not want to know who is actually going to win the election?) but it is a much more judicious use of survey data.
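The uniform-swing logic described above can be made concrete with a toy sketch. All constituency figures and party names below are invented for illustration; this is not Karandikar's actual model, which involves the further subjective judgments he describes.

```python
# A minimal sketch of uniform-swing seat projection: apply the same
# polled statewide swing to every constituency's previous result,
# then count first-past-the-post winners. All figures are invented.

# Previous-election vote shares (%) per constituency, per party
previous = {
    "Seat A": {"Party X": 42.0, "Party Y": 38.0, "Party Z": 20.0},
    "Seat B": {"Party X": 35.0, "Party Y": 45.0, "Party Z": 20.0},
    "Seat C": {"Party X": 40.0, "Party Y": 39.0, "Party Z": 21.0},
}

# Polled change in statewide vote share since the last election
swing = {"Party X": -3.0, "Party Y": 2.0, "Party Z": 1.0}

def project_seats(previous, swing):
    """Shift every constituency by the statewide swing and count
    the party with the highest projected share in each seat."""
    seats = {}
    for seat, shares in previous.items():
        projected = {p: s + swing[p] for p, s in shares.items()}
        winner = max(projected, key=projected.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

print(project_seats(previous, swing))  # → {'Party Y': 3}
```

Note how a swing of only a few points hands Party Y all three seats, including two it previously lost: exactly the sensitivity of seat share to small vote-share shifts that makes these projections so fragile.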
3. India’s diversity makes some populations more difficult to survey than others
While researchers differed on the specifics, certain populations were repeatedly cited as generally more difficult to survey than others: Muslims, dalits, women, urban residents. With publicly available census data on most of these demographics, researchers can weight their data so that the sample demographics appropriately match those of the population being studied. This is not the case for caste, however. While the decennial census includes figures on the numbers of Scheduled Castes (SCs) and Scheduled Tribes (STs) in each state, it offers nothing beyond that; all other castes are grouped together as "Others", with no breakdown of these groupings. Since the last caste census for which researchers have data was conducted by the British in 1931, there are no figures for the share of Other Backward Classes (OBCs) in each state, to say nothing of the share of each individual caste. Public opinion researchers of India would thus greatly benefit from the release of caste data from the 2011 Socioeconomic Caste Census, so they could all weight to the same statistics.
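In outline, the weighting the researchers describe looks like the following toy sketch, here using gender, where census margins do exist. The sample and the census shares are invented for illustration.

```python
# A minimal sketch of weighting a sample to a known census margin.
# All figures are invented. Each respondent's weight is the group's
# population share divided by its share of the sample.

sample = [  # one record per respondent; 3 of 10 are female
    {"gender": "female"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "male"}, {"gender": "female"},
    {"gender": "male"},
]

# Census margin: roughly half female, half male
census_share = {"female": 0.5, "male": 0.5}

def compute_weights(sample, census_share):
    """Under-sampled groups get weights above 1, over-sampled
    groups weights below 1; weights sum to the sample size."""
    n = len(sample)
    counts = {}
    for r in sample:
        counts[r["gender"]] = counts.get(r["gender"], 0) + 1
    return [census_share[r["gender"]] / (counts[r["gender"]] / n)
            for r in sample]

weights = compute_weights(sample, census_share)
# Females are 30% of the sample but 50% of the population,
# so each female respondent is weighted 0.5 / 0.3 ≈ 1.67.
```

The point of the paragraph above is precisely that no agreed `census_share` exists for OBCs or individual castes, so each organisation must weight caste to its own estimates rather than to a common benchmark.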
The effects of India’s incredible cultural, social, religious, ethnic, linguistic, and socioeconomic diversity on respondent participation of course differ across states, and often within them. Assam’s many tribal groupings and their corresponding languages mean that Assamese-speaking field investigators sometimes cannot interview Bodo-speaking respondents in their assigned polling station. Muslim residents of Patna may be less likely to participate in exit polls than Hindu residents. In areas of Tamil Nadu where inter-caste violence is endemic, field investigators must exercise discretion when asking about caste in certain villages. Husbands and fathers of female respondents may be more likely to answer for their wives and daughters in some parts of India than in others. Response rates among Dalit respondents may increase with the election of a Dalit chief minister (such as Mayawati in Uttar Pradesh) or a backward caste chief minister (such as Lalu Yadav in Bihar).
These examples all highlight the need for local knowledge, in the form of competent and experienced field investigators, when measuring public opinion amidst India’s complex and variegated social realities.
Note: Sam recently visited LSE to participate in an Explaining Electoral Change in Urban and Rural India (EECURI) workshop held at LSE on 7-8 June 2016. This article gives the views of the author, and not the position of the South Asia @ LSE blog, nor of the London School of Economics. Please read our comments policy before posting.
About the Author
Sam Solomon has experience conducting quantitative and qualitative research projects across seventeen countries in the Middle East, North Africa, and South Asia. You can read more about his research in India at Polling One Billion.