## Algorithmic Labor and Algorithmic Management

The warehouse worker's headset buzzes when they have been stationary for more than a minute. The rideshare driver's pay for the same trip is different from their colleague's, calculated by an algorithm that neither of them has access to and that the company describes as a trade secret. The remote knowledge worker's keystroke cadence is logged against a productivity baseline they cannot audit. All of this is normal in 2025. Most of it was marginal or experimental when the book was published.

### What Has Changed Since 2018

"Algorithmic management" names a specific thing: the use of automated systems to assign tasks, set pace, evaluate performance, discipline workers, and make firing decisions — usually without a human manager in the loop, often without any meaningful appeal. It is distinct from [algorithmic scoring](https://spoileralert.wtf/md-files/p18_algorithmic_scoring.md) (which is gatekeeping: decisions made once about whether someone gets in) and from [automation](https://spoileralert.wtf/md-files/est_automation.md) (which replaces workers with machines). Algorithmic management is the ongoing mediation of a worker's day by a system.

Four concrete developments since 2018 sharpen the picture.

**Amazon's TOT ("time off task") metric**, enforced through scanner and badge data in warehouses, has been the flagship case of algorithmically paced industrial labor. The French data protection agency CNIL fined Amazon's French warehouse operations €32 million in 2024 for surveillance the agency described as excessively intrusive — including second-by-second tracking of warehouse workers and the use of the data to issue warnings and terminations. The fine did not end the practice; it flagged a specific national regulator's view that the practice violated European data protection law.
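The pacing logic described above can be sketched in a few lines. This is illustrative only: the one-minute threshold, the event format, and the flagging rule are assumptions for the sake of showing the mechanism, not a reconstruction of Amazon's actual system.

```python
from datetime import datetime, timedelta

# Assumed pacing threshold -- any gap between scans longer than this
# is treated as "time off task" in this toy model.
TOT_THRESHOLD = timedelta(minutes=1)

def time_off_task(scan_times: list[datetime]) -> timedelta:
    """Sum every gap between consecutive scan events that exceeds the threshold."""
    total = timedelta()
    for prev, curr in zip(scan_times, scan_times[1:]):
        gap = curr - prev
        if gap > TOT_THRESHOLD:
            total += gap
    return total

scans = [
    datetime(2025, 1, 6, 9, 0, 0),
    datetime(2025, 1, 6, 9, 0, 40),  # 40-second gap: within threshold, ignored
    datetime(2025, 1, 6, 9, 3, 40),  # 3-minute gap: counted as time off task
    datetime(2025, 1, 6, 9, 4, 10),
]
print(time_off_task(scans))  # 0:03:00
```

The point the sketch makes is how little judgment is involved: a worker's day reduces to a timestamp stream, and discipline attaches to arithmetic over the gaps.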
**Rideshare and delivery deactivation.** Uber and Lyft drivers, Amazon Flex drivers, and their counterparts across the world can be "deactivated" — effectively fired — by algorithmic determinations with minimal human review. Reinstatement processes vary by platform and jurisdiction; in most cases they are opaque, slow, and difficult to escalate. The worker's economic lifeline is held by a system whose decisions they cannot interrogate.

**Algorithmic wage discrimination**, a term coined by law professor Veena Dubal in 2023, describes the use of granular, individual-level data to produce unpredictable, variable, and personalised hourly pay. Two drivers doing equivalent work in equivalent conditions are paid differently because the algorithm has modelled their individual reservation wages, home location, current financial stress, and willingness to accept lower rates. This is not a bug. It is the point. The platforms can extract more value by paying each worker the minimum that worker will accept — and the technology makes that minimum legible in real time.

**Productivity surveillance software** (Teramind, Hubstaff, ActivTrak, and dozens of others) has moved from call centres to remote knowledge work, especially after 2020. These tools log keystrokes, mouse movements, application use, and screenshots. Some categorise this time as "productive" or "unproductive" against thresholds the worker did not set. The data feeds into performance reviews and, increasingly, into direct compensation decisions.

**Collective bargaining has begun to respond.** The 2023 Writers Guild of America strike secured contract language restricting studios' use of generative AI to write or rewrite covered material — among the first binding contract language explicitly addressing algorithmic systems in labor. It is not a template yet. It is a precedent.
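The personalised-pay mechanism Dubal names can also be sketched. The prediction rule here — offer slightly below the cheapest rate this worker has previously accepted — is a toy assumption chosen to make the logic visible; it is not any platform's actual pricing model.

```python
# Illustrative only: a toy model of "algorithmic wage discrimination".
# Each worker is offered the lowest rate the system predicts they will
# accept, probed slightly downward each time. All parameters are invented.

def personalised_offer(accepted_rates: list[float],
                       base_rate: float,
                       probe_discount: float = 0.05) -> float:
    """Offer just below the lowest rate this worker has accepted so far."""
    if not accepted_rates:
        return base_rate  # no history yet: fall back to the posted rate
    floor = min(accepted_rates)
    return round(floor * (1 - probe_discount), 2)

# Two drivers, same trip: the one who has accepted cheaper trips before
# is offered less for identical work.
print(personalised_offer([18.0, 15.5, 16.2], base_rate=20.0))  # below 15.50
print(personalised_offer([19.5, 20.0], base_rate=20.0))        # below 19.50
```

Even this crude version shows why two colleagues' pay diverges: the offer is a function of each worker's own acceptance history, not of the work performed.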
### Why It Matters

The [Risk Innovation](https://spoileralert.wtf/md-files/ntf_risk_innovation.md) framework, which the book develops as an expansion of conventional risk thinking beyond physical harm, is particularly apt here. What is at stake in algorithmic management is not usually safety. It is dignity, autonomy, belonging, and the capacity to understand the conditions under which one's own life is being evaluated. Those are the exact categories the book expands risk thinking to include.

[Permissionless Innovation](https://spoileralert.wtf/md-files/rei_permissionless_innovation.md) applies directly. These systems were deployed widely before any substantial regulatory or legal framework caught up, and the workforce most exposed to them had, by and large, no opportunity to decline.

The [Surveillance, Privacy, and Control](https://spoileralert.wtf/md-files/rei_surveillance_privacy_control.md) framework matters because the workplace has become the most densely surveilled environment most people inhabit — and the surveillance is typically legal, disclosed in employment contracts nobody has the option to decline, and operationalised in ways that did not exist when those contracts were first signed.

The [Power, Privilege, and Access](https://spoileralert.wtf/md-files/rei_power_privilege_access.md) dimension is stark. The workers most exposed to algorithmic management are disproportionately lower-paid, disproportionately in precarious employment, and disproportionately without the organisational infrastructure (unions, professional associations) to push back collectively. White-collar knowledge work is catching up fast; at this writing the forms are different but the trajectory is the same.
### How the Book's Frameworks Apply

- **What the book directly addresses.** The [*Elysium*](https://spoileralert.wtf/md-files/ch06_elysium.md) chapter treats robotic police and algorithmic authority as direct subject matter, and its arguments about what it is to live under non-negotiable automated judgment apply here with almost no modification. The [*Minority Report*](https://spoileralert.wtf/md-files/ch04_minority_report.md) framework — algorithmic judgment applied to people — applies to ongoing management as directly as it does to one-off decisions. Automation, surveillance, permissionless innovation, and the Risk Innovation framework all apply directly.
- **What the frameworks suggest when extrapolated.** The book's treatment of [disposable workforces](https://spoileralert.wtf/md-files/est_automation.md) was developed primarily for physical replacement by machines. Extended to algorithmic *management* of workers who remain in place, the core pattern — that workers bear risks and costs the system's designers do not — continues to apply.
- **Where the frameworks reach their limits.** The question of what governance innovations would actually address algorithmic management — works councils with algorithmic auditing rights, data protection law, labor contracts that constrain system design — is a policy question the book does not answer. The frameworks are diagnostic. The treatments are elsewhere. Dubal's work and the growing platform-worker literature are essential complements.

Films outside the book's twelve: *Sorry to Bother You* (Boots Riley, 2018) is already on [Claude's film recommendations](https://spoileralert.wtf/md-files/claude_film_recommendations.md) and goes further than most mainstream cinema in naming the commodification of the worker. *Sleep Dealer* (Alex Rivera, 2008) is also on that list and addresses remote labor-through-technology with prescience.
*I, Daniel Blake* (Ken Loach, 2016) is not sci-fi; it is algorithmic benefits systems rendered as kitchen-sink realism, which is in some ways the most useful register for this page.

### Explore Further

- [Should an algorithm be allowed to be my boss?](https://spoileralert.wtf/md-files/ceq_algorithmic_management.md) — the complex emerging question this page raises
- [Automation and Robotics](https://spoileralert.wtf/md-files/est_automation.md) — the book's foundational treatment of machine-replaces-worker
- [Social Credit, Algorithmic Scoring, and Automated Gatekeeping](https://spoileralert.wtf/md-files/p18_algorithmic_scoring.md) — the sibling P18 page on one-off algorithmic decisions
- [Ubiquitous Surveillance and Big Data](https://spoileralert.wtf/md-files/est_surveillance.md) — the surveillance infrastructure this builds on
- [Surveillance, Privacy, and Control](https://spoileralert.wtf/md-files/rei_surveillance_privacy_control.md) — the workplace as dense surveillance environment
- [Power, Privilege, and Access](https://spoileralert.wtf/md-files/rei_power_privilege_access.md) — who bears the costs of these systems
- [Permissionless Innovation](https://spoileralert.wtf/md-files/rei_permissionless_innovation.md) — the deployment pattern
- [Risk Innovation](https://spoileralert.wtf/md-files/ntf_risk_innovation.md) — dignity, autonomy, belonging as legitimate risk categories
- [*Elysium* (chapter)](https://spoileralert.wtf/md-files/ch06_elysium.md) — the closest cinematic analogue