More public discussion on science alone is unlikely to convince people to productively engage in scientific discussions. Zuleyka Zevallos explores the sociology of beliefs, values and attitudes and calls for wider reflexive critical thinking on how scientists understand science and the public. The social sciences in particular are well-poised to improve the public’s trust in science as they are focused on the influence of social institutions on behaviour and can instigate a multidisciplinary dialogue about what it means to do science.
I’m one of around 20 Moderators who run Science on Google+. Our Community is managed by practising scientists and our membership includes researchers as well as members of the public who are interested in science. I run the Social Science stream (along with Chris Robinson who created the Community). Our Community aims to improve the quality of science posts and public outreach, by connecting the public to real scientists. This week, we celebrated the fact that our Community has grown to 200,000 members. The Community receives numerous posts each day. We want to move discussion away from people sharing their personal opinions on “fluff” science pieces that often end up distorted in the news, and instead we’d like to focus on the relevance, validity and reliability of peer reviewed science.
Chad Haney wrote a fantastic post about how social psychology concepts might explain why people refuse to engage with scientific evidence. Chad invited me to comment on his post, and this has led me to crystallise thoughts that have been circling in my head since I started blogging seven years ago. Other than a sheer love of the social sciences, why bother with public science? Who is our audience? Does it “work” and how do we measure its success? How can we improve it?
My post will discuss the sociology of beliefs, values and attitudes to describe the cultural, institutional and historical ways in which the public has engaged with science. I present two case studies of “hot topics” that usually draw anti-science comments to our Community regarding gender inequality and genetically modified foods. I show how cultural beliefs about trust and risk influence the extent to which people accept scientific evidence. I go on to discuss how sociology can help improve public science outreach.
Beliefs: Case study of gender inequality
In social science, the concept of belief describes a statement that people think is either true or false. Beliefs are deep-rooted because they evolve from early socialisation. They are maintained tacitly through everyday interactions with our primary social networks, such as family and religious communities, and through close friendships with people from the same socio-economic backgrounds. Beliefs are hard (though not impossible) to change because there is a strong motivation to protect what we believe. Beliefs are strongly tied to personal identities, culture and lifestyle. They are hard to change in a short time frame because they are interconnected with structures of power and inequality. Chipping away at one belief means re-evaluating all the beliefs we hold about what is “true,” “natural,” and “normal.”
Beliefs are hard to justify objectively because they represent the social scaffolding of all we take for granted. In this sense, beliefs represent the status quo of what we’re willing to accept. The key to understanding why beliefs are hard to shift comes down to one question: Who benefits from this belief? For example, when we ask: Are men and women fundamentally different? Someone who benefits from patriarchy and doesn’t want to lose their gender privilege will say: “Of course men and women are different, look around you! Women act this way and they’re from Venus; men act that way and they’re from Mars.” A social scientist will bring up examples from other cultures where gender is organised differently. Still, the other person will see these examples as exceptions to their rules about gender.
When people don’t believe scientists on gender inequality, climate change or vaccinations, it comes down to their assessment of: What does this mean for me? What life changes are required of me? How does this scientific knowledge undermine my place in the world? In other words: how does this science support or threaten my values?
Values: Case study of the adoption and resistance of GM foods
Values are linked to one’s sense of morality. Where belief is something maintained at the individual level through agents of socialisation, values are more easily distinguished through their connection to broader social interests. Values relate to the standards of what individuals perceive to be “good” or “bad” in direct reaction to what our society deems to be “good” or “bad.” Values are shaped by cultural institutions like education and religion. Societies depend on shared values to maintain social order, which is why many societal values are often enshrined in law. Still, values are contested, depending on whose social interest is being served.
Take, for example, GM foods. In Sweden, public consultation and scientific input are framed around the best interests of the public good. (Not without controversy.) Some Scandinavian laws allow GM food to be grown in controlled areas because it benefits the national economy, but they won’t support imported GM foods. In countries like Peru, GM foods have been banned for 10 years because they mostly come from imported products, which are deemed unsafe. At the same time, GM foods conflict with the class struggles of the highly political Indigenous farming movement. Peru has also joined other Latin American nations in winding down trade with the USA and increasing trade with Asian nations. Resistance to GM foods serves a dual economic and political purpose: resisting cultural imperialism and supporting Indigenous farming movements. (Though Indigenous environmental protests are ignored in other areas, such as mineral resources and specifically big oil.) The key here is that the political economy of Peru (and of Latin America more broadly) is informed by socialist values that resist capitalist interests. GM foods have publicly been positioned as part of this capitalist incursion.
In the USA, the GMO public debate has been framed around commercial interests. This stems back to the early industrial era and the Plant Patent Act of 1930. The commercialisation of American agriculture goes back to the early 19th century, when many farming communities were self-sufficient. By the early 1920s, differentiated crop varieties were already established. Trade associations arose as mass production started. With more money at stake, legislation stepped in to formalise the production of seeds. At this time, when economic rationalism was beginning to set precedents, commercial interests won out over collective interests. This isn’t simply a case of corporate greed; it is about the political economy of early American society. American values were firmly tied to Protestant beliefs (sociologist Max Weber has detailed this thoroughly). The 1930 Plant Patent Act was fought heavily on moral and social grounds.
Debates about GM foods are, in fact, a cultural battle over value systems. Are capitalist nations like America still deeply invested in individualistic values or can we move towards collective action? Scientists make the case that some GM food technologies represent a safe, relatively inexpensive way to address hunger. In order to accept this argument, the public needs to be able to trust that the science isn’t governed by commercial entities. How you see this depends on your values: are GM foods “good” or “bad”? A vocal section of the public is distrustful of science on GM foods. They think GM foods are bad. This is the outcome of history, culture and social changes.
In these country-specific cases, science is used to draw very different conclusions. GMOs are either for or against the national interest. GMOs either support social change or they impede progress. Attitudes towards the science depend on the cultural and political interests of different social groups. What’s “good” or “bad” about GM foods depends on whose point of view best aligns with societal values.
Attitudes: Case study on perceptions of risk and trust
Attitudes are a relatively stable system of ideas that allows people to evaluate their experiences. This includes objects, situations, facts, social issues and other social processes. While attitudes are relatively stable, they are more superficial than beliefs and less normative than values. Attitudes can be changed more easily than beliefs. Sometimes people will say one thing, especially if it is socially desirable to do so, but in private they may not adhere to that attitude. Someone can say they support equality but not practise it at home. Often, however, people are not aware that their attitudes are contradictory.
In contrast to values, which are culturally defined, attitudes are interpersonal. We are constantly interpreting other people in relation to the situation we find ourselves in. Language, motives, emotions and relationships can change attitudes over time. Social context also matters: cultural beliefs and values can influence whether or not attitudes change.
Attitudes about science are shaped by many societal processes, including education, class and ethnicity. Yet the social science literature has overwhelmingly shown that attitudes towards science are connected to:
- Whether or not people are willing to accept the risks associated with a particular scientific issue; and
- Whether or not people trust scientists in general.
Trust is a multi-dimensional concept; that is to say, it is made up of many different characteristics, and these change with respect to a given social group in a particular time and place. Psychologist Roderick Kramer provides an extensive review of the empirical research on trust (he covers studies from the 1950s to 1999). At the interpersonal level, we develop trust in another person based on a belief that they have an interest in living up to our expectations. They care about us, they need us, they have a legal or moral obligation to help us. Among individuals, trust is about behaviour and reciprocity: I’ve proven you can trust me because I have not let you down and because we both understand that our trust goes both ways.
At the societal level, trust doesn’t always work in relation to direct interpersonal engagement. Kramer shows how some people in certain circumstances will trust authority figures based on their history. That is: I trust this organisation because they have a strong reputation and other big players endorse them. Others will trust due to someone’s category of authority (science, politics), their role (medical practitioner, priest), or a “system of expertise” (bureaucratic management). People who trust an authority figure or an organisation’s motives are more likely to accept outcomes, even if they are negative. Trust will matter more when people have a lot to lose, such as when an outcome is unfavourable. For example, when science will lead to social change or some new technological impact that I don’t want because it threatens my beliefs, livelihood, culture, identity or lifestyle – this requires high trust. Bluntly put, more public discussion on science alone is unlikely to convince people to productively engage in scientific discussions.
Even amongst scientists, trust in science and risk perception are affected by sociological processes. A 1999 study of members of the British Toxicological Society found that women were more likely to have higher risk perceptions for various social issues than men. These issues ranged from smoking to car accidents, AIDS and climate change. Looking deeper, it was a specific sub-set of White men who were more likely to perceive a low risk for these social issues: those with postgraduate qualifications, who earned more than $50K a year and who were conservative in their politics. They were more likely to believe that future generations can take care of the risks from today’s technologies; they believed government and industry can be trusted to manage technological risks; they were less likely to support gender equality; and they were less likely to believe that climate change is human-made (bearing in mind that climate change science has since developed further). These men had a higher trust in authority figures and were less likely to support equality and social change. Why? Because, being in a position of relative social power, they had the most to lose from social change on environmental, gender and political issues.
So if beliefs and values are so seemingly immutable, and attitudes mask underlying motives that people are unaware of, how do we increase trust in science to improve the tangible outcomes of public outreach?
Reflexive critical thinking and how sociology can help
People like to think of themselves as objective and even-handed when weighing up evidence. This is one way to see critical thinking. I know that when I raise the idea of critical thinking on Science on Google+ while people are espousing sexist, racist or anti-science views, they get very irate. Critical thinking is a basic component of all scientific training, but reflexive critical thinking is much more than simply taking in information, or arguing against views that conflict with our own.
Sociologist Ulrich Beck argues that a general state of reflexivity might be part of the reason why people are so worried about technological and social risks in ways that are not especially productive. Constantly distrusting institutions for the sake of distrusting institutions doesn’t get society very far. Questioning information that conflicts with our own views is often a circular pursuit when both people are convinced they’re right, even though they’re relying solely on belief rather than evidence. You don’t really move forward.
Reflexive critical thinking is a methodology for knowing how to question information, as well as for identifying and controlling for our personal biases. Scientists need to constantly engage in enquiry and reflection. Science requires that researchers actively change their ideas in light of new interactions and new evidence. Again, I return to the question: who benefits?
To date, science has been mediated through journalists who often do not represent the science accurately. They sensationalise and give misleading headlines. They report selectively or, worse, present science as simply validating pre-existing beliefs and stereotypes. Blogging, social media, and collective efforts such as the Science on Google+ Community help scientists take back control of how science is presented to the public.
Part of this effort, we are quickly realising in our Community, is that public science education is not just about telling people about our own research, but about publicly discussing other scientists’ research. Public critique of poor science journalism is also important. Both of these pursuits help the public better understand the methodologies used to weigh up evidence. Moreover, we’re seeing value in educating the public about how to read and discuss science. Our recent Science on Google+ Hangout (here) included a discussion of what our Curation team looks for in Community posts. We also have a range of tips for improving Community posts, our Curator’s Choice showcases examples of excellence, and we give other advice on general science writing for Google+.
These efforts are not about turning the public into a mass of scientists. No amount of public outreach can teach you to become a scientist. To qualify as a scientist you need structured learning. You need to learn to run your own research projects under strict supervision by an expert. It also means getting training in methodologies and ethics, and practising science through teaching and research assistance work. By stripping back the scientific process a little further, however, reflexive critical thinking can help the public better understand how science works. This process may go a long way to increasing public trust in science.
Public outreach, including all the hard bits like answering public questions and comments, is time consuming. Yet if we are going to repair our relationship with the public, we need to do more than just write in publications that speak to people who already think like us. For every amazing and dedicated sociologist I know who blogs, I know dozens upon dozens who do not do any public engagement whatsoever. Why should they? It doesn’t pay and it takes up effort that might be spent on peer-reviewed publications that universities covet. So, instead, they write in esoteric academic journals and wait hopefully for traditional media to pick up their research. Don’t get me wrong: the peer reviewed system is important to scientific advancement. We need the formal critique of our colleagues, and we need to contribute to the theoretical and empirical growth of our discipline. The problem is that this work is largely hidden from the public eye.
Open access journals are still not the norm in academia, and certainly not in sociology. As Susanne Bleiberg Seperson and others have argued, sociology has a public image problem. The public doesn’t know what we do, let alone how we do it. We write in jargon. We write in private circles that the public can’t join. We are seen as too theoretical and not very practical.
More social scientists need to step up to do the hard public science. That is, not just talking to the media once in a while, but talking more regularly with the public. The social sciences are well-poised to improve the public’s trust in science because our work is focused on the influence of social institutions on behaviour. We are not above critique on these grounds. My blog has regularly shown how even as we expand social knowledge of culture and inequality, Western social sciences can misappropriate minority cultures or exclude Indigenous voices.
Sociology has long invested in a critique of the natural sciences, probably most famously by Michel Foucault in The Birth of the Clinic and other texts. The fact is that our craft also needs scrutiny and public engagement. I see great value in contributing to a public, multidisciplinary dialogue about what it means to do science. This means collaborating with non-social scientists on improving public science communication, while also engaging in mutual critique of methods, conclusions and applications of science in the real world.
Many of the anti-science critics are espousing cultural arguments without knowing it. This is where public sociology can really shine, by showing how inequality, social values and power affect how people engage with science. This is a big and important job. We need all hands on deck. This means you!
This is an extract of a piece which originally appeared on The Other Sociologist and is reposted with the author’s permission.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
One immediate problem I would identify, as a sociologist of ‘science’ (I use the scare quotes deliberately to problematize how people invoke insider versus outsider status as ‘scientists’ etc.), is: what does the author mean by ‘anti-science critics’? The term ‘anti-science’ is frequently used as an ad hominem, for example against people critiquing certain claims by scientists, but the term then misrepresents the critic’s position. It also raises the question: what would ‘pro-science’ mean? Another problem, one I am particularly interested in, is how ‘the public’ are treated by ‘scientists’ when ‘the public’ have legitimate criticisms to make about the claims or practices of ‘scientists’. This is not addressed by this article. It seems rather one-sided, implying that ‘the public’ is a largely ignorant and culturally unreflective mass, subject to mistaken ideas about ‘science’, that ‘scientists’ need to educate with the help of ‘science’-savvy sociologists. That public critiques may be rational and legitimate is not countenanced, which is a problem. The critique, by social scientists, of certain socially constructed aspects of science practice and claims has a wide range in social science, beyond Michel Foucault. Also missing from this deliberation is the understanding that (a) ‘scientists’ are not necessarily reflective and critically analytical of the social systems within which they work, and that this causes problems in how ‘scientists’ engage with ‘the public’; (b) ‘the public’ are not necessarily per se ignorant of science issues; and (c) social scientists are often deemed an ‘ignorant public’ by ‘scientists’ precisely BECAUSE of ‘scientists’’ ignorance of social science.
Lastly, a tour of ‘skeptic’ and ‘science’ forums and blogs shows a massive contempt for those deemed of ‘outsider’ status, as ‘non-scientists’. This problem will at some point need to be addressed more fully by social scientists, especially those trying to promote positive public engagement with science (although even the need for that can be contested, depending on ideological issues around different types of ‘science’).
Hi Angela. Thanks for your comment. I replied to you last week but my response did not seem to go through. I agree that a sociology of science is important. I have addressed the points you raise in my longer blog post, which is linked at the bottom of this article. The LSE has published an edited version of my longer essay (though they have still been generous with the length). In my longer blog post, I show that scientists of all fields, including the social sciences and sociology specifically, have cultural biases in how they read science. My use of the term “anti-science” is explained with respect to people who dismiss peer-reviewed empirical evidence to protect their vested political interests and social privileges. I provide case studies of how this works with respect to gender equality, the environmental movement, GM foods, and the attitudes towards scientific risk that scientists hold. I have shown that sociology does not escape this critical lens. The public’s mistrust of science is historically informed; again, I address how this has evolved in more detail in my original post.
You have evoked Foucault’s work which I also referenced in my article. Nowhere do I imply that the public is ignorant and should accept all science without question. Instead, I talk about the need for reflexive critical thinking about science. My post is about how cultural biases stop people from engaging with certain studies and findings. I am advocating that social scientists step up and help shape how the public critically engages with science. Please see my longer article for further discussion.
Thank you Angela.
Following on from your thread…
The analysis of science by non-scientists is not a new phenomenon. Philosophy of science and the sociology of science associated with the name Robert Merton have a long history. What has attracted the ire of the critics is the turn toward the social analysis of the content of science in addition to analysis of its social organization.
Hi mickdonalds64. Absolutely, this most certainly is not a new phenomenon. I’m not sure what you mean by “critics” here. I’ve written about the decline in public trust in science, which dropped significantly in the mid-1960s. Social media is merely making this critique more visible. My point is that we need to contribute to a more profound public understanding of science: how it works, why we rely on peer review, what the ethics of practising science are, and how cultural values and politics influence how we do and understand science. We should also be writing about our research in plain language, rather than simply in academic texts and journals, and engaging in public discussion about the significance of our findings. If you’d like to know more, have a read of my longer blog post. The link is at the bottom of this article.