
Aiha Nguyen

March 5th, 2019

Monitoring and surveillance technologies shift power dynamics in the workplace


Whether it’s the use of closed-circuit television or keycard access to track movement, expectations of privacy are often left at the door when an employee enters the workplace. New technologies are enabling greater and more pervasive forms of monitoring and surveillance, creating new challenges for workers. Public debate in both the United States and Europe has led to recent calls for greater consumer rights over the collection and use of personal data. In a work context, however, surveillance and data collection raise issues that go beyond privacy concerns based on individual rights.

Employers surveil workers for various reasons and can use the same technologies for both beneficial and extractive ends. Monitoring tools may serve purposes such as protecting assets and trade secrets, controlling costs, enforcing protocols, increasing work efficiency, or guarding against legal liability. New technologies that couple monitoring with granular data collection now allow employers to exert greater control over large workforces, rapidly experiment with workflows, detect deviant behaviour, evaluate performance, and automate tasks.

These technologies can be broadly grouped into three categories: predicting and flagging, remote monitoring and time tracking, and biometric and health data monitoring.

Management may use predicting and flagging tools to monitor employee behaviour, helping them identify employee characteristics or predict an employee’s future performance. For example, in service industries like retail or food, employers use flagging systems built on point-of-sale metrics to generate performance reports, which produce both aggregate and individual data. This data includes “exception-based reporting,” which singles out workers whose data exhibits unusual patterns, such as processing a higher-than-average number of customer returns. In effect, workers are treated as suspects based on proxies and metrics that are machine-readable but may not tell the whole story.
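
To make this concrete, below is a minimal sketch of what an exception-based flagging rule can look like. The worker names, the figures, and the twice-the-median threshold are all invented for illustration; real point-of-sale systems set their thresholds opaquely.

```python
from statistics import median

# Illustrative point-of-sale records: (worker, transactions, returns).
# All names and figures are made up for this sketch.
records = [
    ("alice", 420, 12),
    ("bob", 390, 10),
    ("carol", 405, 11),
    ("dave", 380, 31),  # unusually many customer returns
]

# The machine-readable proxy: each worker's return rate.
rates = {worker: returns / txns for worker, txns, returns in records}

# "Exception-based reporting": flag anyone whose rate exceeds twice the
# workforce median. The threshold is an assumption, not a vendor's rule.
med = median(rates.values())
flagged = [w for w, r in rates.items() if r > 2 * med]

print(flagged)  # ['dave'] - flagged on the proxy alone, with no context
```

Note what the rule cannot see: whether the flagged worker staffs the returns desk, worked a holiday shift, or is actually committing fraud. The proxy flags a pattern, not a cause.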

In other instances, the predictions made by such systems are tenuous. Predictim, an online service that vets candidates for domestic work, claims to use “advanced artificial intelligence” to analyse a job candidate’s personality by scanning their social media posts. The service then generates a profile that lists identified traits like “bad attitude.” The use of proxies like sentiment analysis can open channels for bias, and data that purports to correlate social media behaviour with someone’s ability to do a job can be problematic.
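
A deliberately crude sketch shows how thin such a proxy can be. The word list, the threshold, and the label below are invented for illustration; Predictim does not publish its actual criteria.

```python
# Toy keyword-based "attitude" scorer over social media posts.
# Word list, threshold, and label are assumptions made for this sketch.
NEGATIVE = {"hate", "awful", "annoying", "worst"}

def attitude_label(posts: list[str]) -> str:
    words = [w.strip(".,!?").lower() for p in posts for w in p.split()]
    hits = sum(w in NEGATIVE for w in words)
    return "bad attitude" if hits / max(len(words), 1) > 0.05 else "ok"

# Sarcasm, quotation, and context are invisible to the proxy:
print(attitude_label(["I hate how much I love this job!"]))  # 'bad attitude'
```

A score of this kind measures vocabulary, not character, which is precisely why correlating it with job performance is suspect.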

Remote monitoring and time tracking through GPS location, computer monitoring software, app-based activity trackers, and remote sensors allow managers or clients to manage large groups of workers indirectly. Many platform workers are classified as independent contractors even though the company exercises significant control over their actions. Gig platforms like Handy.com and Uber, for example, use apps to decentralise their control of worker activities, yet still collect detailed data about trips, communications, and pay. This information allows companies to nudge workers in ways that advantage the company but not necessarily the worker, such as directing workers to a poorly compensated task they might not accept if given more information. Recently, Instacart came under scrutiny for counting the tips drivers receive toward their pay when earnings fell short of the guaranteed minimum. While the company has since changed its policy, others like DoorDash and Amazon Flex continue the practice. Companies can do this because they hold detailed information about worker earnings.
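
The arithmetic behind the tip controversy is straightforward. The sketch below uses an invented $10 per-job guarantee to show how counting tips toward the guarantee shifts the cost of a tip from the company to the worker.

```python
def company_payout(guarantee: float, tip: float, tips_count: bool) -> float:
    """What the company itself pays for one job.

    guarantee: the minimum the worker is promised per job ($10 below is
    an invented figure). If tips_count, every tipped dollar reduces the
    company's own contribution.
    """
    if tips_count:
        return max(guarantee - tip, 0.0)
    return guarantee

for tip in (0.0, 6.0):
    print(f"tip ${tip:.0f}: company pays "
          f"${company_payout(10.0, tip, tips_count=True):.0f} if tips count, "
          f"${company_payout(10.0, tip, tips_count=False):.0f} if not")
# tip $0: company pays $10 if tips count, $10 if not
# tip $6: company pays $4 if tips count, $10 if not
```

Under the tips-count model the worker receives $10 either way, so a $6 tip saves the company $6 while the worker gains nothing, and only the company, which holds the earnings data, can see this happening.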

Finally, the collection of biometric and health data through wearables, fitness-tracking apps, and biometric timekeeping systems is a newer form of workplace monitoring. These programs are often part of employer-provided health care plans, wellness programs, and digitally tracked work shifts. Employers like BP America are adopting these devices in a bid to improve employee health habits while persuading insurers to reduce rates, at significant savings to the company. Additionally, fitness apps and wearables usually follow employees out of the office, bringing workplace privacy concerns into their private lives.

Facial recognition tools or fingerprint scanners are likewise becoming increasingly common but pose privacy challenges. In the U.S., more than 50 companies have faced lawsuits over the collection of employee fingerprint data through biometric timekeeping tools. Concerns arise over employees’ ability to opt out of such programs if doing so means financial penalties or being deemed riskier by their employer.

Workers and advocates are challenging the power imbalances these tools generate, as well as their accuracy and fairness on a technical level. However, monitoring tools that can be used to make decisions about a worker’s compensation, perceived risk, or even continued employment are hard to contest. In some cases, employees may not know they are being surveilled or that data is being collected. The large amounts of data collected about workers are interpreted, analysed, and repurposed, yet there is no clear means of ensuring that the data accurately reflects the situation. This also raises questions about who retains the data and how it can be used.

These challenges point to the need for a broader framework that balances the economic interests of companies against the personal and economic interests of workers. As technology expands the scope and scale of what surveillance tools can do, workplace protections must also evolve to address the collective information asymmetries, biases, unreliable proxies, and shifts in power that are occurring.

♣♣♣


About the author

Aiha Nguyen

Aiha Nguyen is the Data & Society Research Institute’s labour engagement lead for the research initiative Social Instabilities in Labor Futures. She bridges research and practice to expand our understanding of technological systems’ impact on work, builds the field of actors engaging on this issue, and informs policy on the future of work. Aiha has over a decade of experience in advocacy, research, policy, and organising. She received her master’s degree in urban planning from UCLA and has authored several reports, including an analysis of outsourced passenger service work at Los Angeles International Airport, a study of the impact of automated self-checkout systems on public safety and jobs, and a baseline study of Orange County’s philanthropic community.

