## "Should an algorithm be allowed to decide whether I get a job, a loan, or parole?"

Algorithms already make or heavily influence these decisions. The question is not whether it is happening — it is whether it should be, under what conditions, and who is accountable when the system gets it wrong.

### Why This Question Is Hard

The intuitive answer — "of course not, humans should make important decisions about other humans" — runs into an uncomfortable fact: humans are biased too. Studies consistently show that human judges are influenced by factors like the time of day, whether they have eaten recently, and the race of the defendant. Human hiring managers are swayed by names, accents, and unconscious associations. The appeal of algorithmic decision-making is precisely that it promises to be more consistent, less prejudiced, and more efficient than human judgment.

But that promise has not been fulfilled in the ways its advocates hoped. Algorithmic systems trained on historical data inherit the biases embedded in that data. A hiring algorithm trained on a company's past hiring decisions will replicate the patterns of those decisions — including discriminatory ones. A recidivism prediction tool trained on arrest records will reflect policing patterns that disproportionately target certain communities. The bias is not in the algorithm's design — it is in the world the algorithm learned from.

The deeper problem is opacity. When a human decision-maker denies someone a loan, there is at least the possibility of asking why. When an algorithm does it, the explanation may be technically incomprehensible, legally protected as proprietary, or simply unavailable. The person affected is left contesting a black box.

### What the Book Brings to This

*Films from the Future* explored this territory through [Minority Report](/md-files/movies_minority_report.md), where precognitive technology is used to arrest people for crimes they have not yet committed.
The film's central insight — that prediction is not the same as certainty, and that acting on predictions as though they were certainties creates injustice — maps directly onto the current landscape of [algorithmic scoring](/md-files/p18_algorithmic_scoring.md).

The book's [Informed Consent](/md-files/rei_informed_consent.md) framework asks a pointed question: did the people being scored agree to be scored? In most cases, the answer is no — or rather, "consent" was buried in terms of service that nobody reads. Being subject to algorithmic evaluation has become a condition of economic participation, not a choice.

The [Human Dignity](/md-files/rei_human_dignity.md) theme is also central. Reducing a person to a score — flattening the complexity of a human life into a number that determines their opportunities — raises the same fundamental question the book asks through [Never Let Me Go](/md-files/movies_never_let_me_go.md): at what point does treating people as objects of a system, rather than subjects of their own lives, become intolerable?

The question is not whether algorithms should ever inform decisions — they can add genuine value. The question is whether the safeguards, transparency, and accountability mechanisms exist to ensure that algorithmic power is exercised justly. Right now, for the most part, they do not.
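The bias-inheritance mechanism described under "Why This Question Is Hard" can be made concrete with a few lines of Python. This is a deliberately toy sketch: the hiring records, the groups, and the `score` function are all invented for illustration, and a real scoring model would be far more complex — but the failure mode is the same: a system fit to biased historical decisions reproduces the bias, even for identically qualified applicants.

```python
# Toy sketch (hypothetical data) of how a model trained on biased
# historical decisions reproduces that bias. The "model" here is simply
# the historical hire rate for each (group, qualified) combination.
from collections import defaultdict

# Hypothetical past hiring records: (group, qualified, hired).
# Group "A" was historically favored over group "B" at equal qualification.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

# "Train": estimate P(hired | group, qualified) from the records.
counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [hired, total]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

def score(group: str, qualified: bool) -> float:
    """Score a new applicant by the historical hire rate of their cell."""
    hired, total = counts[(group, qualified)]
    return hired / total

# Two equally qualified applicants receive very different scores,
# purely because of group membership in the training data.
print(score("A", True))  # 0.8
print(score("B", True))  # 0.4
```

Nothing in the scoring logic mentions group membership as a criterion; the disparity enters entirely through the historical labels — which is why "the bias is not in the algorithm's design" but in the data it learned from.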
### Explore Further

- [Predictive Algorithms and Machine Learning](/md-files/est_predictive_algorithms.md) — the technology behind algorithmic decisions
- [Social Credit, Algorithmic Scoring, and Automated Gatekeeping](/md-files/p18_algorithmic_scoring.md) — the current landscape
- [Facial Recognition and Biometric Surveillance](/md-files/p18_facial_recognition.md) — biometric dimensions of automated judgment
- [Informed Consent and Autonomy](/md-files/rei_informed_consent.md) — the consent problem
- [Human Dignity and What Makes Us Human](/md-files/rei_human_dignity.md) — reducing people to scores
- [Minority Report](/md-files/movies_minority_report.md) — the film that explored pre-judgment most powerfully