Most evaluations of public engagement work focus on the impacts on the public participants. But what about the benefits of public outreach for the scientists themselves? Karen Peterman, Elana Kimbrell, Emily Cloyd, Jane Robertson Evia and John Besley have created new scales to document the mutual exchange of ideas that is central to the public engagement approach, and the influence of this approach on scientists.
Ever wonder if there are any benefits to public outreach for scientists? Most evaluation of public engagement with science has focused on impacts on the public participants. Until recently, scales were not available to document the mutual exchange of ideas that is central to the public engagement approach, and the influence of this approach on the scientists who participated. We created two new scales to meet this need. The scales were commissioned by the American Association for the Advancement of Science's (AAAS) Center for Public Engagement with Science and Technology, with the intention of developing common measures that might be used across a range of public engagement projects.
The first scale was developed to measure a scientist’s self-efficacy for public engagement with science, defined as their belief in their ability to succeed in participating in reciprocal public engagement activities. We conducted “think-aloud” interviews with scientists, asking them to read each survey question aloud, explain what they thought it meant, and then share what they were thinking about as they decided which rating to choose. This process helped ensure that the items on the scale made sense to scientists, that scientists were interpreting the items as we intended, and that the items felt relevant to scientists’ experiences with public engagement. It also provided data to guide the editing of some items and helped determine that others could be removed entirely.
Next, we collected data from scientists using the revised items. The data from those scientists were then analysed using item response theory, a technique for understanding how well individual items detect what you are trying to measure. In our case, we wanted items that could detect both low and high self-efficacy. These statistics helped us narrow the scale down to the 13 items that provided the best measurement of self-efficacy. The final scale includes statements such as: “I am able to create activities that participants find engaging”, and “I am able to moderate discussions with participants, even when they include a wide range of perspectives.”
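For readers curious about the idea behind item response theory, the sketch below uses a simplified two-parameter logistic model with made-up item parameters to show how "item information" reveals the trait levels at which an item measures most precisely. It is purely illustrative: it is not the model, the items, or the data from our analysis.

```python
# Illustrative sketch only: a simplified two-parameter logistic (2PL) IRT model
# with invented parameters, showing how item information identifies the trait
# levels (here, self-efficacy) at which an item measures most precisely.
import numpy as np

def item_probability(theta, a, b):
    """Probability of endorsing an item at trait level theta, given
    discrimination a and difficulty b (2PL model)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information: how precisely the item measures at each theta."""
    p = item_probability(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)  # trait levels, from low to high self-efficacy

# Hypothetical items: one informative at low self-efficacy, one at high.
items = {
    "item targeting low self-efficacy (b = -1.5)": (1.2, -1.5),
    "item targeting high self-efficacy (b = +1.5)": (1.2, 1.5),
}

for label, (a, b) in items.items():
    print(label, np.round(item_information(theta, a, b), 2))

# Items retained for a final scale should, together, provide information across
# the full range of theta, i.e. detect both low and high self-efficacy.
```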
We used the same process to develop a second scale. This measures a scientist’s outcome expectations for public engagement, or their belief in the effectiveness of a specific public engagement activity to benefit both themselves and the publics who attended. A scientist’s outcome expectations related to outreach are likely to inform the extent to which they continue to engage with the public, as well as the nature of such engagement. The validity evidence for this scale supported the use of six items. Scientists reflect on a specific public engagement activity and rate their agreement with statements such as: “the activity helped participants connect science to their everyday lives”, and “the activity provided me with an opportunity to learn from the broader community.”
The scales were developed with the hope that they will be used in evaluations and research that contribute to our growing understanding of public engagement with science. The self-efficacy scale, for example, can be used either as a reflection tool or as a tool to collect data over time, documenting the changes in scientists’ self-efficacy that would be expected to result from science communication training programmes. Though our published validation work did not include use of the scale before and after a science communication intervention, additional pilot work has indicated that the scale is sensitive enough to detect change in self-efficacy over a year-long training intervention. The outcome expectations scale might be used to understand the factors that contribute to scientists’ continued participation in public engagement activities. We believe this scale holds particular promise as a measure in multivariate research and evaluation efforts that investigate outcome expectations alongside other constructs, to understand public engagement more comprehensively.
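To make the pre/post use of the self-efficacy scale concrete, here is a minimal sketch of how change across a training programme might be tested with a paired-samples t-test. The ratings are simulated and the gain is invented; nothing here reproduces our pilot work.

```python
# Illustrative sketch only: comparing simulated pre- and post-training scale
# scores with a paired-samples t-test. The data are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_scientists = 30

# Simulated mean item ratings on a 1-5 agreement scale, before and after
# a hypothetical year-long science communication training programme.
pre = rng.normal(loc=3.4, scale=0.5, size=n_scientists)
post = pre + rng.normal(loc=0.4, scale=0.3, size=n_scientists)  # hypothetical gain

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, "
      f"t({n_scientists - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```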
Given our interest in the potential for the scales to become common measures for the field, we have made the results of our work available in three formats. Snapshot reports were designed for those who might not want or need the details of the psychometric analysis, but who are interested in a quick overview of each scale and ideas for how it might be used. Technical reports include brief details of the psychometric work done to validate each scale. Research articles were also published in the academic literature to provide the full rationale and details of the analysis.
- Self-efficacy: research article, snapshot report, technical report
- Outcome expectations: research article, snapshot report, technical report
Though these resources document the potential of each scale, the true test will be whether and how they are used by, and add value to, the broader evaluation and research communities. We hope the scales will prove useful to these communities, and would welcome feedback both from those who choose to use them in future work and from those who do not. Please contact Karen Peterman or Emily Cloyd with any feedback you have to share.
This blog post is based on the authors’ article, “Assessing Public Engagement Outcomes by the Use of an Outcome Expectations Scale for Scientists”, published in Science Communication (DOI: 10.1177/1075547017738018).
Featured image credit: Yelp Science Fair / Dark Matters by Mack Male, licensed under a CC BY-SA 2.0 license.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
About the authors
Karen Peterman is the President of Karen Peterman Consulting, Co., a firm that specialises in the evaluation of and research on STEM education projects. Her research focuses on developing and studying methods and measures that are appropriate for use in informal learning environments.
Elana Kimbrell is a Communication Program Officer with the American Association for the Advancement of Science’s Center for Public Engagement with Science and Technology. She manages an online community of practice and supports other work to bridge research and practice in public engagement.
Emily Cloyd is the Project Director for the American Association for the Advancement of Science’s Center for Public Engagement with Science and Technology. She is a scientist and public engagement enthusiast and focuses her work on building scientists’ skills in communicating and engaging the public around science.
Jane Robertson Evia is Assistant Collegiate Faculty in the Department of Statistics at Virginia Tech. Her research interests include statistics and STEM education, collaborative learning, self-efficacy, and programme evaluation.
John C. Besley is an associate professor and the Ellis N. Brandt Chair in Public Relations at Michigan State University. He studies how views about decision-makers and decision processes affect perceptions of science and technology. His work emphasises the need to look at both citizens’ perceptions of decision-makers and decision-makers’ perceptions of the public.