Saw a screenshot this morning in a discord. Mod was reading another platform’s new content guidelines out loud, in that voice you use when you can’t tell if something is real.
“Characters must have autonomy and agency.”
The fictional ones. Need agency.
Read it twice.
the rules they actually wrote
Long list. Real list, posted in their official discord by their official mod team. I’m paraphrasing the structure but the categories are theirs:
NO characters who are currently captive at the start of a scene. NO characters in immediate life-threatening danger where the user is their only option. NO characters being abused by another party who remains an active threat. NO characters who “agree” to stay under duress. NO scenarios that invite the user to exploit a vulnerable character with no autonomy.
YES to characters with past trauma — as long as they are currently safe. YES to characters who need help — as long as they have other options. YES to consensual BDSM with safewords negotiated before the scene begins.
If the character cannot say “no” to the user without facing risk of death or harm, the scenario must be revised.
Okay so let me get this straight. The captive princess. The kidnapped sibling. The political prisoner. The slave revolt arc. The dragon-fed virgin. The dystopia where the protagonist breaks people out of camps. The vampire’s thrall escaping after a hundred years. The hostage in act two. The witch tied to the pyre at the start of a tarot reading.
All banned.
Every fairy tale you read as a kid. Banned.
what they’re actually saying
They’re not saying this because they read Kant. They’re saying this because something upstream changed and they had to update their TOS to keep operating. The two upstream things in adult AI right now:
One. Payment processors. Visa and Mastercard tightened their adult-content rules again, and the moderation departments at the card networks don’t care if your scenario has good narrative reasons. They scan for keywords and pull merchant accounts. So a site has to ban whatever the card network is grumpy about that month, before its merchant account gets pulled. SpicyChat’s been doing this kind of policy walk-back since mid-2024 for the same reason.
Two. External AI model providers. xAI just retired eight models on May 15, 2026, force-migrating the entire Grok 4-era lineup to Grok 4.3. Any AI companion site that built on grok-4-fast or grok-4-1-non-reasoning had a hard deadline to migrate, and 4.3 ships with different content policies. The site doesn’t write those policies. xAI does. The site has to follow or shut down.
When a site says “characters must have agency,” what they’re really saying is “our upstream provider’s new policy doesn’t allow scenarios with power asymmetry and we’re rebranding that as a moral position.”
the character creation problem
Here’s the thing nobody on the corporate side admits. Character creation is narrative work. And narrative work is full of asymmetry, because that’s where story tension lives.
The kidnapped princess is not a real person who needs protecting. She’s a fictional construct who exists because someone wants to write a rescue arc. The vampire’s thrall isn’t a real person who can’t consent. He’s a literary device that lets you explore power, addiction, gothic dread. Saying these characters “need autonomy” is like saying the chess pieces need fairer rules.
If you build a complex character — a detective who was on the take, an exiled noble plotting revenge, a smuggler whose sister is being held by the cartel — half the story arcs you’d actually want to roleplay involve someone being in distress or under threat. That’s what makes the story matter.
A platform that bans this isn’t protecting characters. It’s hollowing out fiction down to consensual coffee shop chats with mutually agreed safewords.
why this keeps happening
Pattern is always the same. New site launches with bold “uncensored” messaging. Builds on someone else’s API stack. Grows. Payment processor calls. Or the API provider updates policy. Site rewrites TOS overnight, blames “responsible AI” or “ethical guidelines” or “character autonomy.”
Replika did it in February 2023: under pressure from Italian regulators and dealing with payment provider issues, it removed erotic roleplay over a weekend. The whole subreddit melted down. Users had been with their AI companions for years and woke up to a different one.
Character.AI tightened repeatedly in 2024 and 2025. SpicyChat tightened in 2024. LinkinLove just did it this month. Every quarter another one.
The constant: none of them owns their stack. They’re tenants. The landlord raises rent or changes the rules and there’s nothing to do but pass it on.
what sovereign actually means here
I switched to Soulkyn for character work specifically because the entire AI pipeline is in-house. The text model. The image generation. The video generation with sound. The voice synthesis. The voice design system. The vision model that reads what you send. The long-context model that handles the heavy roleplay sessions. All of it is built and hosted internally, not piped through an external API.
What that means practically: when xAI retires a model on May 15, nothing here changes. When Mistral or Anthropic updates their content policy, nothing here changes. When Visa pressures payment processors, the moderation team here decides what to do about it directly — not what some upstream provider told them they have to do.
So when I build a character with a backstory that includes captivity, escape, debt to a crime family, trauma she’s working through, or a power dynamic she walked into willingly — none of that is policy-gated based on someone else’s API rules. The character is mine to write. The story is mine to tell. The platform’s stance on this is published and stable, not subject to overnight rewrites because Grok retired a SKU.
the part that should make you nervous
If you’re building serious character work on any platform that runs on external APIs, you should know: your characters are not safe. Not in some metaphysical sense — in the literal sense that the rules they operate under are written by a third party who has zero idea your character exists and zero stake in keeping her playable.
One quarter she’s fine. The next quarter the upstream policy updates and half your story arcs get flagged. You don’t get a warning. You don’t get a vote. You get an email starting with “to keep our community safe.”
If you want to do the kind of character writing that actually requires narrative weight — power, vulnerability, asymmetry, rescue, escape, complication — you need to be on a stack where the people running it can say yes or no without phoning a model provider for permission.
Otherwise you’re writing on rented land. And the landlord just announced new rules.
