## "Should an algorithm be allowed to be my boss?"

The question sounds absurd on first pass and becomes more serious with each example. The question is not whether algorithms should ever appear in management — they already do, and some of what they do is uncontroversial. The question is whether ongoing, consequential authority over a worker's pace, pay, discipline, and continued employment should sit in an automated system the worker cannot audit, appeal to, or negotiate with.

### Why This Question Is Hard

The intuitive answer is no. On examination, the intuitive answer runs into real complications. Human managers are not paragons of fair judgment either. The empirical literature on human bias in hiring, promotion, discipline, and firing is long and depressing. The appeal of algorithmic management is that it promises consistency, transparency (in principle), scale, and the elimination of certain kinds of arbitrariness. Those promises are not nothing.

The practice has not matched the promise. Algorithmic management systems trained on historical workforce data inherit the patterns of past management — including discriminatory ones — and re-encode them as neutral-looking metrics. The outputs are worse when the "training data" is the record of a company that has been under-paying and over-disciplining a particular demographic for decades; the algorithm's job then becomes to ratify that pattern with the confidence of quantitative authority.

The opacity problem is the hardest. A worker fired by a human manager can often get an explanation, escalate, grieve, or litigate. A worker fired by an algorithmic system — "deactivated" from a rideshare platform, for instance — is frequently told that the decision was automatic, that the logic is proprietary, and that the appeal, if any exists, is to another algorithmic system. The economic relationship has become non-negotiable in a way that would be hard to imagine being accepted in any other context.

The power-distribution problem is the deepest.
The workers most exposed to algorithmic management are, on average, the workers with the least leverage to opt out. The ability to decline algorithmic management is itself a form of workplace privilege. Asking "should algorithms be allowed to manage workers?" without attending to *which* workers are being managed and *what else* is on the table for them is a theoretical luxury the question cannot afford.

### What the Book Brings to This

*Films from the Future* develops, across [*Minority Report*](https://spoileralert.wtf/md-files/ch04_minority_report.md) and [*Elysium*](https://spoileralert.wtf/md-files/ch06_elysium.md), a sustained treatment of what it is to live under non-negotiable automated authority. The robotic police in *Elysium* do not argue with the people they arrest. The precogs in *Minority Report* do not present evidence; they present conclusions that the legal system treats as sufficient. Both films are, in their different registers, about the loss of the ability to contest.

The book's [Risk Innovation](https://spoileralert.wtf/md-files/ntf_risk_innovation.md) framework expands risk thinking beyond physical harm to include threats to dignity, autonomy, and belonging. Algorithmic management is an unusually clean case for this expansion. The harms are often not catastrophic on any single occasion, but they are pervasive, cumulative, and structured around those three categories. The worker fired for "time off task" violations without being told what the threshold is has suffered a dignitary harm that conventional risk assessment has no good way to quantify — and that, on the book's account, is a reason to expand the conceptual tools rather than to exclude the harm from serious consideration.

The book's [Informed Consent](https://spoileralert.wtf/md-files/rei_informed_consent.md) question applies in an uncomfortable register.
Workers did, in the formal sense, consent to be managed this way — the employment agreement or platform terms of service said so. That formal consent is very hard to take seriously as consent. The employment market does not, for most of the affected workers, offer a real option to decline; the agreement was offered on a take-it-or-leave-it basis; and the specifics of how algorithmic management would work were not, in most cases, disclosed in a form any reasonable person would have understood when signing. The book's distinction between genuine informed consent and the legal fiction of consent is apt.

The adjacent CEQ, [Should an algorithm be allowed to decide whether I get a job, a loan, or parole?](https://spoileralert.wtf/md-files/ceq_algorithmic_decisions.md), covers one-off algorithmic decisions. This question extends that one: what happens when algorithmic authority is not episodic but continuous — the relationship, not the decision point.

The productive reframing borrows the move the book makes in [*Never Let Me Go*](https://spoileralert.wtf/md-files/ch03_never_let_me_go.md): the question is not whether algorithmic management should exist. The question is what accountability, auditability, and appeal rights a worker should have when an automated system holds meaningful power over their economic life — and whether those rights are substantively meaningful or a formality the employer can route around. The answer to that question is a matter of deliberate policy, not technological destiny.
### Explore Further

- [Algorithmic Labor and Algorithmic Management](https://spoileralert.wtf/md-files/p18_algorithmic_labor.md) — the current landscape of the practice
- [Should an algorithm be allowed to decide whether I get a job, a loan, or parole?](https://spoileralert.wtf/md-files/ceq_algorithmic_decisions.md) — the sibling CEQ on one-off decisions
- [Social Credit, Algorithmic Scoring, and Automated Gatekeeping](https://spoileralert.wtf/md-files/p18_algorithmic_scoring.md) — the gatekeeping counterpart
- [Automation and Robotics](https://spoileralert.wtf/md-files/est_automation.md) — the book's foundational treatment
- [Predictive Algorithms and Machine Learning](https://spoileralert.wtf/md-files/est_predictive_algorithms.md) — the underlying technology
- [Risk Innovation](https://spoileralert.wtf/md-files/ntf_risk_innovation.md) — the expanded risk framework most relevant here
- [Informed Consent and Autonomy](https://spoileralert.wtf/md-files/rei_informed_consent.md) — the consent-as-formality problem
- [Human Dignity and What Makes Us Human](https://spoileralert.wtf/md-files/rei_human_dignity.md) — what dignity requires in the employment relationship
- [*Elysium* (chapter)](https://spoileralert.wtf/md-files/ch06_elysium.md) — non-negotiable automated authority in fiction