What organisational conditions nurture creativity and promote innovation? Steve Jobs—whose track record suggests that he could have answered this question authoritatively—once asserted that “killing good ideas” is “a hallmark of great companies.” Bob Sutton relates the incident as part of an argument that, while the ability to generate good ideas obviously is important, focus and discipline in deciding which projects to green-light—and which ones to abort—also are crucial (though often overlooked) ingredients for commercial success. These attributes, as Sutton notes, prevent bloated objectives and a diluted company identity.
An implication is that, in addition to talented and driven innovators, a company also needs sceptical managers, who can kill off all but the best ideas. A case study that may give us pause, though, is that of 1970s Xerox, whose Palo Alto Research Center (PARC) staff members often are credited with having invented the personal computer, laser printing, Ethernet, and other products that ultimately were adopted and monetised by other companies (which, in many cases, had managed to poach—or even had been founded by—former PARC staffers). These products turned out to be not just commercially successful but revolutionary and, despite being widely known, seldom are associated with Xerox.
An oft-cited reason for Xerox’s missed opportunities is that Xerox’s management team failed to appreciate these products’ potentials. PARC’s researchers, as a result, were left disgruntled and ended up finding (or founding) other companies that were more supportive. Notwithstanding his praise for killing good ideas, Steve Jobs also recognised Xerox management’s apparent myopia and unimaginativeness, speculating once that, “[i]f Xerox had known what it had and had taken advantage of its real opportunities, it could have been… the largest high-technology company in the world.”
These anecdotes support an argument—which I present formally in a recent paper—that managerial scepticism needs to be finely tuned in organisations that rely on innovation. In the paper, I develop a theoretical model of knowledge production, communication, and decision-making in organisations that have the following features:
- Knowledge production and decision-making are fundamental, salient activities.
- There are three types of actors: (frontline) workers, (middle) managers, and executives.
- Workers are experts that collect costly information to aid in decision-making.
- Managers observe and interpret the collected information and form recommendations regarding decisions.
- Executives observe the recommendations (but not the raw information) and make decisions.
I apply tools from game theory to derive insights about how executives in such organisations—examples of which include research labs and tech companies—should hire workers and managers from pools of candidates that have varying biases regarding decisions.
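The three-tier structure above can be pictured as a simple pipeline of who observes what. The sketch below uses invented names and an invented noise model purely for illustration; the paper's model is game-theoretic, not literal code.

```python
import random

def worker_signal(true_quality, effort):
    """Costly information: more effort yields a less noisy signal."""
    noise = random.gauss(0.0, 1.0 - effort)  # effort in [0, 1] shrinks the noise
    return true_quality + noise

def manager_recommendation(signal, scepticism):
    """The manager sees the raw signal and recommends only above a bar."""
    return signal >= scepticism

def executive_decision(recommendation):
    """The executive sees only the recommendation, not the raw signal."""
    return "green-light" if recommendation else "kill"
```

The key informational feature is the last function: the executive's decision can condition only on the manager's coarse recommendation, which is why the biases of the worker and the manager matter so much.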
Below I frame my findings in the context of a concrete—and perhaps familiar—example: a tech startup. Suppose that the founder (i.e., the executive) intends to hire a product designer (i.e., a worker) to produce prototypes and a product manager (i.e., a manager) to evaluate prototypes and make recommendations that will inform the founder’s decisions about which projects to fund. Candidates for both positions vary in their levels of eagerness to churn out new products.
It may seem that operations would run most smoothly if all staff members were fully committed to the startup’s mission, and hence that the founder should hire maximally pro-innovation types for both positions. This reasoning ignores the fact that bringing a sceptical, anti-innovation manager onboard would force the designer to work harder to have his project green-lighted. Hiring such a manager therefore could induce the designer to submit higher-quality prototypes for consideration and thus improve the founder’s funding decisions.
More scepticism is not always better, though. While a designer typically has an intrinsic desire to see his projects green-lighted, there will be some maximum level of effort that he would be willing to exert in producing a high-quality prototype, and hence in persuading the manager to recommend green-lighting the project. Consequently, an overly sceptical manager—one that demands an excessively robust prototype—will destroy a designer’s incentives altogether. This outcome is clearly bad for the company. So, for a given designer, the founder should look to hire a manager that pushes the designer as close to his limit as possible without surpassing the limit. As my paper’s title indicates, the manager should bend, but not break, the designer.
In general, a more zealous, pro-innovation designer will be willing to exert greater effort and thus can be bent further without breaking. This finding is intuitive: a zealot typically will be willing to work harder to advance a cause than will a moderate. So the founder should look to hire the most zealous designer that she can. Since even the most zealous designer will have some limits, though, the founder should eschew an overly sceptical manager. In summary, while the founder wants a maximally zealous designer, she wants a manager that is only moderately sceptical.
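The "bend but don't break" logic can be illustrated with a toy calculation. Assume (purely for illustration; the functional form is invented, not the paper's) that a designer with zeal z will meet any quality bar up to z and gives up entirely if the bar exceeds z:

```python
def induced_effort(bar, zeal):
    """Effort the designer exerts given the manager's quality bar."""
    return bar if bar <= zeal else 0.0  # a bar above the cap breaks him

zeal = 0.8
bars = [0.2, 0.5, 0.8, 0.9]
payoffs = {bar: induced_effort(bar, zeal) for bar in bars}
best_bar = max(bars, key=lambda b: payoffs[b])
# Raising the bar helps right up to the designer's cap (0.8); pushing
# past it (0.9) destroys his incentives, so the founder prefers a
# manager whose bar sits exactly at the cap.
```

Under these assumed numbers, the founder's payoff rises with the bar until it hits the designer's limit and then collapses to zero, which is the sense in which the optimal manager bends the designer without breaking him.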
I find, also, that the optimal manager typically will be more sceptical than the founder. Thus, the founder benefits by introducing a form of discord into her company. Previous work has shown that disagreement among members or conflicting objectives among units of an organisation can generate useful information. My work yields similar insights but focuses on clashes across hierarchical levels of organisations that meet the above conditions.
While the model discussed here assumes that all available designers have equal technical capabilities, in a different paper I also consider designers that vary not only in zeal but also in technical ability. As one might expect, both attributes matter, but I find that the more zealous of two equally skilled designers generally is better, and even that zeal can partly substitute for technical ability. With managers, though, increased scepticism is better only up to a point: the founder should look to bend the designer as far as possible without breaking him.
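One way to see how zeal can substitute for skill is with an invented functional form: suppose the best prototype a designer is willing to produce has quality equal to skill times zeal (this specification is an illustrative assumption, not the second paper's actual model).

```python
def best_quality(skill, zeal):
    """Quality of the best prototype a designer is willing to produce
    (hypothetical multiplicative form, for illustration only)."""
    return skill * zeal

# A less skilled but maximally zealous designer matches a more skilled,
# half-hearted one: zeal partly offsets a deficit in technical ability.
matched = best_quality(0.5, 1.0) == best_quality(1.0, 0.5)
```

Under this form, trading away skill for zeal at the same rate leaves achievable quality unchanged, which captures the substitution the paper describes.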
- This blog post is based on the author’s paper Bend Them but Don’t Break Them: Passionate Workers, Skeptical Managers, and Decision Making in Organizations, American Economic Journal: Microeconomics, Vol. 9, No. 3, August 2017
- The post gives the views of its author, not the position of LSE Business Review or the London School of Economics.
- Featured image credit: Photo by JJ Ying on Unsplash
Omar Nayeem is an economist whose research interests include economic theory and its applications to the design of organizations, institutions, and regulatory policy. After completing his Ph.D. in economics from the University of California at Berkeley, he joined the Office of Strategic Planning and Policy Analysis at the U.S. Federal Communications Commission, where he served until June 2017. He joined the transfer-pricing practice at Deloitte in July 2017. The views presented here are his and do not necessarily reflect those of the Federal Communications Commission or of Deloitte.
How does a hiring executive screen for skepticism?
Excellent question. While it seems difficult to measure (and hence screen for) skepticism per se, I would note simply that this argument provides a rationale for bringing in outsiders as managers, particularly outsiders that come from (and presumably were influenced by) organizations that are more risk-averse and less prone to experiment. The presence of such managers may well act as a disciplining force. Another point (which I show in the paper’s online appendix) is that the executive can design compensation schemes (for both workers and managers) to achieve the same effects as passion and skepticism, respectively. So, technically, even if screening for skepticism is infeasible, the executive still has some tools available to her to exploit the mechanism that I describe.
Your article raises some excellent points. There is a very strong tendency in the current enthusiasm for talking about innovation (but not necessarily being innovative) to throw away critical reasoning and logic.
Any new idea benefits from criticism: if it is a good idea that solves a real problem, it will bend (or at least flex) and become a better idea. As we saw in the DotCom era and with some “innovations” in the Fintech boom, many “innovative” ideas are patently stupid: they solve no real problem, offer worse solutions than what already exists, or are so ambiguous that no one really understands what they are. Stupid at a level where even the most basic logical examination will reveal them to be nonsense.
At the Center for Evidence-Based Management (www.cebma.org) we try to teach managers to take a common-sense approach to solving problems (whether technical or more broadly managerial): make sure you define the problem you are attempting to solve, make sure the proposed solution (or, ideally, several candidate solutions) is clearly defined, look for evidence of the existence of the problem and of the effectiveness of the solution, and assess the quality of the evidence available. If you implement a solution, make sure you assess the outcome, ideally in relation to a baseline that will show you what impact the solution had.
This does not mean you only do things for which there is “evidence”; it means going into things with a clear picture of the extent of risk and uncertainty. By all means do things without any evidence (invent and innovate), but try to make sure your experiment is a real experiment rather than an exercise in confirmation bias.
The age-old name for the sceptical manager is the devil’s advocate. Then there is the more recent “Tenth Man” idea: a designated dissenter who is encouraged to find arguments against whatever everyone else agrees on.
Would be great to get in touch, you can contact me via Linkedin
I will read your paper
Thank you for the feedback. While my point was more about using skepticism to motivate innovators to come up with more robust, thoughtful ideas, I think your point, about the ideas themselves improving as they are subjected to scrutiny, is also a good (albeit distinct) one. I’d be happy to connect.