Using artificial intelligence at work may have many benefits, but a growing body of research calls attention to the potential negative impact of AI on employee performance. Pok Man Tang writes that the use of AI may have different effects on people, depending on their personal characteristics. Those who think more highly of their abilities may suffer the most when they see that technology determines how well they perform.
“Whether or not employees benefit or suffer from working with artificial intelligence very much depends on their personal characteristics, like how they (un)favorably view themselves…”
Albert Lam, Chief Marketing Officer (CMO), NOVELTE Robotic Solution
The opening quote from the chief marketing officer of a robotics company reveals a critical insight about working with, or depending on, artificial intelligence: AI can be a double-edged sword for both employees and managers. It can augment employees at work by facilitating goal accomplishment, since it helps them complete more complicated analytical tasks. However, this emerging digital tool might also unexpectedly trigger a host of aversive psychological states in employees. The critical question, then, is for whom the benefits (versus costs) of working with AI will be more salient.
In our recent article, my co-authors and I write that this augmentation (i.e., making employees depend on AI) has a bright side for some employees, enhancing their performance, but a darker side for others. In other words, there are likely both benefits and harms to employees as a result of these human-machine collaborations, and we suspected that personal characteristics might influence how strongly those benefits versus harms are experienced. If true, this would have significant implications for organisational decision-makers worldwide, who might have to rethink their business practices regarding pairing employees with AI.
The central premise of these arguments is based on self-regulation theory, which describes a dynamic process wherein individuals evaluate discrepancies between their current states (how they are feeling or what they are doing now) and some desired end-state (what they want or expect themselves to feel or do). We posited that, on the one hand, employees' dependence on AI in various work tasks can be discrepancy-reducing: it may help them complete important tasks more effectively or eliminate repetitive tasks and hassles, which can improve their performance. On the other hand, the same phenomenon can be discrepancy-enlarging, making clear to employees that they could not complete their tasks without AI's assistance, which has the potential to harm their performance.
Across two different studies with participants from India and the United States, we consistently showed that dependence on AI at work can both reduce discrepancy, by eliciting goal progress (a positive consequence), and enlarge discrepancy, by triggering a self-esteem threat (a negative consequence), with mixed consequences for employee performance. Yet, more importantly (as the opening quote suggests), our focus was on how employees' personal characteristics would affect the strength of these relationships. We expected that whether AI is discrepancy-reducing or discrepancy-enlarging would depend on employees' core self-evaluation (CSE), their overall self-assessment. Our expectation was that employees with higher levels of CSE would be more likely to see their dependence on AI negatively, as they tend to attribute their work accomplishments to their own ability (rather than to AI's help).
And indeed, this is what we found. For employees high in CSE, dependence on AI was discrepancy-enlarging: they tended to experience larger threats to their self-esteem. Along similar lines, for these individuals, dependence on AI was not discrepancy-reducing when it came to their work goals: they tended not to feel that depending on AI was effective for making progress on tasks at work. Overall, our results suggest that employees high in CSE might attain fewer performance benefits from working with AI.
These findings have critical implications for managers and other organisational decision-makers navigating the Fourth Industrial Revolution. Specifically, the conventional wisdom is that CSE is a good filtering criterion in recruitment and selection, as well as in performance management. Our findings suggest that this consensus about CSE might need to be revisited. It is critical that managers recognise that tried-and-true methods they have used in the past may not work in the new digital era.
While one remedy might simply be to encourage managers to avoid pairing high-CSE employees with AI (or at least to avoid their heavy engagement with AI), this may not always be feasible, given that high-CSE employees (compared to low-CSE employees) have proven to be better in many aspects of their daily lives and work.
To this end, we encourage decision-makers to adopt a more holistic perspective on the outcomes that are measured when evaluating the success of these augmentation efforts. Beyond focusing on increases in performance, managers should be attentive to the well-being of their digital workforce, striving to avoid the reductions in task performance that we found, as well as other costs that can arise.
One potential solution is to encourage employees to engage in positive self-affirmations (which provide a means to restore or protect one’s self-regard) or mindfulness practice (which can also have positive effects for employee well-being). Overall, we recommend that managers be careful to avoid “over-selling” the promise of AI, and we echo those who have called for a more conservative judgement of the organisational impact of intelligent machines.
In closing, our research joins an emerging conversation about the potential benefits and costs of coupling employees with AI at work and sheds light on the personal characteristics that amplify these benefits (versus costs). We continue to add to the recent stream of research highlighting that some conventional wisdom and organisational practices might no longer be valid in the new digital era—a stream we previously wrote about for LSE Business Review.
- This blog post is based on The self-regulatory consequences of dependence on intelligent machines at work: Evidence from field and experimental studies, by Pok Man Tang, Joel Koopman, Kai Chi Yam, David De Cremer, Jack H. Zhang, Philipp Reynders, in Human Resource Management.
- The post represents the views of its author(s), not the position of LSE Business Review or the London School of Economics and Political Science.
- Featured image by fran innocenti on Unsplash
I truly believe that AI can in fact be a facilitator of goal-congruent behavior and therefore a driver of top performance. However, in certain cultural contexts, people may perceive an organization's integration of technology into key HR functions, like performance management, as micromanagement. Therefore, effective communication, and the "why" factor, can indeed decide the effectiveness of AI integration.
Now, I believe AI is important in providing very valuable insights for performance evaluation if all key process areas are in fact interlinked and integrated. So a human + tech approach will be the ideal mix, where the tacit knowledge of the evaluator can guide an employee (keeping in mind the possible skewness human perception can introduce while assessing an employee, versus a zero-human-error approach).
So, from a game-theoretic perspective, such a mix will be a good state of equilibrium, especially when asymmetric information is present.