It has been well established that men and women experience differences in access to social and financial capital, hiring outcomes, and performance evaluations more generally. One commonly cited explanation for these differences is a lack of objective information: in the absence of concrete information about quality or expected performance, evaluators have been found to favour men across a number of contexts because they expect men, on average, to be better performers. However, previous lab-based research on double standards suggests that women may still be disadvantaged even when clear indicators of quality are present.
To what extent is gender still factored into evaluations when unambiguous, objective indicators of quality are available and market pressures incentivise evaluators to select on quality alone? Since evaluators in previous lab-based studies do not face the competitive pressures typically present in organisations and markets, it remains unclear just how extensive double standards really are in practice.
We address this challenge using data from a unique online platform on which buy-side (e.g., hedge fund, mutual fund) investment professionals share investment recommendations that are subsequently evaluated by peers on the platform. This setting is well-suited to our research question because investment professionals have access to objective quality information — each recommendation is displayed alongside its return to date — and are motivated to identify quality. As one investment professional told us: “I am in the business of making money, I don’t care who you are, as long as you can help me in that goal.”
We collected data on the 3,520 buy and short-sell recommendations submitted for stocks listed on a U.S. exchange over a six-year period (2008 to 2013) by 1,550 unique investment professionals. In this study, we focused on a two-stage evaluation process in which investment professionals first choose whether to click on a recommendation in order to access more detailed analyses and then anonymously rate the recommendation. In the first, or attention, stage, investment professionals decide which recommendations they want to look at more closely, or click on, based on a limited set of basic information. Specifically, they see the name of the company (stock) being recommended, the price the recommender forecasts the stock will reach, the market return since the recommendation was submitted, as well as the recommender’s name and their employer’s name. Once they click on a recommendation, they enter the second, or feedback, stage, where they have access to the detailed analyses and research supporting the recommendation and can rate the recommendation on a five-point scale.
On the platform, gender is never explicitly stated during the evaluation process; therefore, to stay close to the data-generating process, we use an investment professional’s first name to infer gender. Specifically, we scored each name in terms of how likely it is to be associated with a woman using an IBM algorithm. We found that recommendations posted by individuals with a more female-sounding name were significantly less likely to be clicked on in the attention stage.
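The study itself relied on a proprietary IBM algorithm, whose internals are not public. Purely as an illustration of the general approach — scoring a first name by the estimated probability that it belongs to a woman — a minimal sketch might look like the following; the lookup table and probabilities here are invented placeholders, not the paper’s data:

```python
# Illustrative sketch only: the study used a proprietary IBM name-scoring
# algorithm. The table below is a hypothetical stand-in, not real data.
HYPOTHETICAL_NAME_SCORES = {
    # first name (lowercase) -> estimated probability the name is a woman's
    "mary": 0.99,
    "matthew": 0.01,
    "kelly": 0.70,
}

def female_name_score(first_name: str, default: float = 0.5) -> float:
    """Return the estimated probability that `first_name` belongs to a woman.

    Names not in the table fall back to `default`, i.e. maximal uncertainty.
    """
    return HYPOTHETICAL_NAME_SCORES.get(first_name.strip().lower(), default)

# A continuous score like this (rather than a binary label) lets the
# analysis compare click-through rates across the full range of how
# female-sounding a recommender's name is.
print(female_name_score("Mary"))     # high hypothetical score
print(female_name_score("Matthew"))  # low hypothetical score
```

Using a continuous score rather than a hard male/female label is what makes the paper’s subsample check possible: even among self-disclosed men, names can vary in how female-sounding they are.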
For example, recommendations submitted by individuals with very female names, such as “Mary,” received approximately 25 per cent fewer clicks than those submitted by investment professionals with more male names, like “Matthew.” To rule out the possibility that some unobserved element associated with gender, such as personality, could be driving our results, we also ran the analyses on the subsample who self-disclosed their gender as male. Our results are robust in this subsample analysis: men with more-female-sounding names, such as “Kelly,” received fewer clicks.
One key factor predicting when this gender penalty was most pronounced was the number of recommendations in the consideration set. The gender difference was largest during high-volume periods where evaluators saw a larger number of recently submitted recommendations. These results suggest that gender serves as a sorting mechanism during times of increased search costs, where evaluators are unable to devote careful attention to the merits of each option or candidate. Furthermore, we did not observe any gender differences in the feedback stage of the evaluation process, where evaluators are focused on a single recommendation and have much more detailed information on which to base their assessments.
Even in organisational and market contexts where performance is typically paramount, our results suggest that performance information alone is insufficient for redressing inequality, especially when evaluators are taxed with a large number of options. One implication of this study, therefore, is that obfuscating gender-related information may be worthwhile when evaluators are assessing a large set. Once the pool of candidates or options is narrowed down, our results suggest that gender is far less likely to creep into assessments. This is especially true if additional pertinent information is available about a smaller set of candidates, as we saw in the feedback stage. More generally, our results suggest that women (and likely other historically disadvantaged groups) may benefit from providing more extensive information about their quality and performance.
- This blog post is based on the authors’ paper “Pursuing Quality: How Search Costs and Uncertainty Magnify Gender-based Double Standards in a Multistage Evaluation Process”, Administrative Science Quarterly, Vol. 62, Issue 4, 2017.
- The post gives the views of its authors, not the position of LSE Business Review or the London School of Economics.
- Featured image credit: Design by geralt, under a CC0 licence
Tristan L. Botelho is an assistant professor of organisational behaviour and a faculty affiliate of the program on entrepreneurship at the Yale School of Management. His research interests are careers, innovation, social evaluation, and strategy, primarily focusing on digital platforms and entrepreneurship. He received his Ph.D. from the MIT Sloan School of Management in 2017.
Mabel Abraham is an assistant professor of management at Columbia Business School. Her main stream of research examines how organisational and social network processes contribute to gender differences in outcomes. Another stream of her research looks at resource exchange patterns among entrepreneurs more generally and the performance implications. She has received several prestigious awards for her research, including the Academy of Management’s Pondy Best Dissertation Paper Award (2015). Prior to pursuing a career in academia, Professor Abraham worked in defined benefits consulting and risk management at Fidelity Investments. She completed her PhD in Management at the MIT Sloan School of Management.