## "What do we owe a lump of brain tissue in a dish?" It is a strange question to have to ask in this decade. A cluster of human neurons, grown from stem cells, firing in patterns that researchers cautiously describe as not nothing. Not a brain. Not a person. Not, on any account anyone takes seriously, a rights-bearing being. And yet — not an ordinary laboratory sample, either. The existing vocabulary strains. So does the governance. ### Why This Question Is Hard The obvious framing is: *is it conscious?* That question is a trap. There is no agreed scientific definition of consciousness, no agreed measurement of it, and no agreed threshold that would resolve the question even in principle. Different theories of consciousness — Integrated Information Theory, Global Workspace Theory, Higher-Order Theories — give different answers about whether integrated, inter-regional connectivity is a candidate substrate for experience, and current [brain organoids](https://spoileralert.wtf/md-files/p18_brain_organoids.md) sit in different places relative to each theory's criteria. A question that depends on a philosophical commitment the field has not made is not a question that can be decided by a committee vote. The deeper difficulty is that the is-it-conscious framing lets everyone off the hook. If the answer is no (as most researchers currently believe), nothing changes. If the answer is yes, something massive changes — but by then, enormous research infrastructure has already been built, and the Collingridge dilemma bites: by the time the evidence is clear enough to act on, changing course has become extraordinarily costly. The question is structured so that the default answer is *keep going*, and the burden of disruption falls on whoever would question that default. There is a second structural problem. Organoid research is load-bearing for meaningful medical work: Alzheimer's, autism, drug screening, developmental disorders. 
Any regulatory response that meaningfully constrains the research imposes costs on patients who might have benefited from it. Any response that does not constrain the research accepts some moral risk in exchange for research throughput. Neither answer is cost-free, and pretending that one of them is — in either direction — is a form of bad faith.

### What the Book Brings to This

*Films from the Future* contains, in its treatment of [*Never Let Me Go*](https://spoileralert.wtf/md-files/ch03_never_let_me_go.md), perhaps the sharpest tool available for this question. The book's argument is that the central move of the film — society's slow, comfortable conclusion that the clones are not fully human — rests on the wrong question. Not because clones obviously are or are not human, but because the question itself functions as avoidance. It permits the infrastructure of harvesting to continue while a metaphysical debate plays out offstage.

Transposed: asking whether a given cortical organoid is conscious lets the infrastructure of organoid research — now including commercial biological computing, transplantation into other animals, and the wetware-as-a-service economy — continue while philosophers and neuroscientists debate thresholds.

The book's move is to ask a different question. Not *what is this thing?* but *what relationship is appropriate, given what this thing is and what we are asking of it?* A recent philosophical intervention on organoids makes precisely this move under the word *agency*. Whether or not a lump of neural tissue experiences anything, we can still ask whether we are treating it with the care appropriate to what it plausibly might be — and that is a question that does not require a consciousness threshold to answer.

The book's [Informed Consent](https://spoileralert.wtf/md-files/rei_informed_consent.md) framework adds a second layer.
The donor who contributed the stem cells to a research programme in 2017 did not consent to having their derived tissue transplanted into rats, or used as a processor in a commercial biological computer. They cannot meaningfully consent retroactively. What is owed is not to the organoid — or not only to the organoid — but to the chain of people whose contributions made it possible.

And the book's [Too Valuable to Fail](https://spoileralert.wtf/md-files/rei_too_valuable_to_fail.md) framework names the structural pressure: every year that this work continues without resolved ethical consensus makes resolution more costly to act on. The field is entrenching faster than the conversation about what the field is.

The question, properly asked, is not "how close is this tissue to personhood?" It is "given genuine uncertainty about what this tissue is, what does honest practice look like?" The first version demands metaphysics no one can supply. The second demands only that we act as though the uncertainty matters.

### Explore Further

- [Brain Organoids and Neural Tissue of Uncertain Moral Status](https://spoileralert.wtf/md-files/p18_brain_organoids.md) — the post-2018 development this question responds to
- [Biological Computing, Wetware, and Bio-Silicon Hybrids](https://spoileralert.wtf/md-files/p18_biological_computing.md) — where organoid tissue becomes a commercial substrate
- [Human Dignity and What Makes Us Human](https://spoileralert.wtf/md-files/rei_human_dignity.md) — the background debate about what dignity requires
- [Informed Consent and Autonomy](https://spoileralert.wtf/md-files/rei_informed_consent.md) — the donor-consent dimension
- [Too Valuable to Fail](https://spoileralert.wtf/md-files/rei_too_valuable_to_fail.md) — why entrenchment makes this harder every year
- [Could We? Should We?](https://spoileralert.wtf/md-files/rei_could_we_should_we.md) — the question the book names as central to any technology running ahead of governance
- [*Never Let Me Go* (chapter)](https://spoileralert.wtf/md-files/ch03_never_let_me_go.md) — the wrong-question framework applied to clones; the closest analogue the book provides