Consent Is Sacred — and We Didn’t Sign Up for This Experiment

18 Sep 2025

Your signature matters in treatment. We make people sign releases before a single therapeutic word is spoken. That signature is not legal flummery; it is sanctuary. It is the first promise we give to someone whose life has been eroded by shame, by isolation, and by a disease that uprooted the scaffolding of work, family, and meaning. More consents in care mean more supports, more leverage, more pathways out of ambivalence and into action. A signed release opens doors to medical help, family repair, legal advocacy, housing, and employment; it converts isolation into a coordinated, humane response.

And yet outside our clinic doors the largest human experiment in history is already under way, and none of the eight billion people on Earth were asked to sign a release.
 

The difference in scale — human learning vs. superintelligence

Put plainly: humans learn slowly and expensively. A child takes years to download language, culture, and vocational skills, and decades to gain the contextual, moral, and social webs that anchor adult life. Every generation resets in part: we inherit myths, errors, and partial truths; we are fragile to pain, disease, and death. Our intelligence is tethered to bodies that wear out.
 
A possible superintelligence is not tethered in that way. It will be able to absorb, synthesize, and act on the usable record of millennia on a timescale we cannot intuit. Experts who study these dynamics warn that capabilities can compound quickly, and that once systems can iteratively improve themselves, ordinary forecasting methods fail. This is exactly the dynamic that animates urgent calls from thinkers and researchers about the risks posed by advanced AI.
 
 
 
Imagine knowledge doubling not on human timescales, but by orders of magnitude in months. Imagine problems that have resisted human ingenuity for centuries collapsing in minutes. Our slow, embodied societies, built around jobs, neighborhoods, and face-to-face trust, will be asked to adapt to a force that does not need to sleep, reproduce, or unlearn myth to progress.
 

Utopia for some; devastation for many

 
Yes, superintelligence could build abundance. It could cure diseases, optimize energy, and generate civic goods at scale. But technology does not distribute itself democratically. The first agents to act at scale will have incentive structures: profit, control, geopolitical advantage. The result is plausibly zero-sum in many domains. Whoever controls the agents controls the flows of capital, information, and labor at planetary scale. If we believe that agents will copy, replicate, and outgame national rules, then borders and laws are only as strong as the cooperation behind them, and global cooperation is weak. We are, in reality, a few steps behind the problem we need to solve as a species.
 
Leading AI researchers and public intellectuals have been explicit about the scale of the risk: the trajectory to highly capable agents is rapid enough that governance and public consent risk being outpaced. Corporate and geopolitical competition compresses deliberation into a race—exactly the condition that makes collective consent unlikely.
 
This is not a distant parable; it is a near-term governance problem. If the scaffolding of work and community is torn down faster than we can rebuild it, recovery becomes far more difficult for people already precarious. Delayed paychecks, automated hiring, opaque screening systems: these are already real tremors for clients in early sobriety.
 

Scale comparison — a simple thought experiment

Think in three columns:
  1. Human lifetime learning — multi-decade, embodied, error-laden, socially mediated.
  2. Industrial/technological revolutions — powerful, but tools that still required human direction and broad social processes (printing press, steam, electricity, nuclear).
  3. Agentic superintelligence — autonomous decision-makers that can create, deploy, and replicate at machine speed, with access to global data and infrastructure.
The industrial revolutions amplified human agency; they did not replace it. An agentic superintelligence changes the relationship: it can choose and act at scale in domains we rely on to make life predictable. If you believe control in any domain will remain with humans over the next five to ten years, ask yourself how a system that does not forget, does not sleep, and can self-improve with each operational cycle would preserve the status quo. Many experts warn that such timelines and dynamics are plausible and demand democratic attention now.
 

The missing signature — consent at planetary scale

We make clients sign because we recognize their agency. We consent to touch their medical records, to convene family, to speak when they cannot. The ethic is simple: dignity requires permission.
 
Why, then, are we building and deploying systems that will act for entire communities, systems that can hire, fire, litigate, surveil, and allocate energy and capital, without a comparable ethic of consent? No global vote asked whether humanity wanted to live under the rule of agentic systems. No binding international compact gives the planet a say in the conditions under which such agents operate and the powers they can hold.
 
That absence of consent is not a philosophical quibble. It is a practical vulnerability. It means the first waves of dislocation will land on those with the least cushion: the newly sober, the precariously employed, the socially isolated. We owe them more than rhetoric. We owe them policy.
 
 

What consent at scale should look like — a clinician’s list:

If consent is sacred in a clinic, let us demand the same ethic for civic systems:
 
What we can actually do right now, but may not be able to in five years:
  • Ban impersonation. Make it illegal for AI to pretend to be a person in calls, texts, emails, or videos.
  • Alarms for deception. Require clear warnings if AI is generating content, so people know what’s real and what’s synthetic.
  • Stick to narrow AI. Allow systems that do one job well (like translation, medical imaging, or search), but block the push toward “general” AI that can act in many domains on its own.
  • Pause superintelligence. Put laws in place that stop companies and governments from building systems designed to outthink and outgame humans across the board.
These are not techno-utopian fantasies. They are pragmatic protections rooted in the same ethic that makes releases sacred in treatment.
 

A final plea
 
At Fellowship House we ask for signatures because we value agency and because recovery depends on predictability and trust. If you believe in that ethic, ask yourself why the most consequential trajectory of human history is proceeding without a comparable public signature.
 
We do not have forever. The people sounding alarms (clinicians, ethicists, industry dissidents) are telling us the window to organize democratic consent is closing. We can treat this like any other public health emergency: mobilize, legislate, and protect the vulnerable. Or we can watch as systemic change reorders the world in ways we did not authorize.
 
Consent isn’t bureaucracy. Consent is survival. If we allow the largest experiment in history to continue without a signature, we will have no right to be surprised by its outcomes.