AI and Creative Writing Ethics: A Working Guide for Fiction Writers
AI and fiction overlap at four ethical questions: whose data trains the models, who can claim authorship of the output, who has to disclose what, and what tools a working writer actually has. This guide takes them one at a time, because conflating them is what produces most of the bad takes.
The legal landscape moved decisively over the last eighteen months. There are now specific rulings, specific settlements, and specific platform requirements. They are worth knowing in detail.
1. Training data
The state of the case law, as of mid-2026:
- Bartz v. Anthropic (June 2025): Judge William Alsup ruled that training a large language model on legally acquired books is fair use. Training on pirated books — Anthropic had downloaded roughly seven million from LibGen and the Pirate Library Mirror — is not.
- Anthropic settlement (September 2025): $1.5 billion, to be split among rightsholders of approximately 500,000 affected titles. The largest copyright settlement in publishing history.
- Authors Guild v. OpenAI: ongoing, consolidated with the New York Times case. Plaintiffs include George R. R. Martin, John Grisham, Jodi Picoult, David Baldacci, and others.
- Publishers v. Meta (April 2026): Elsevier, Cengage, Hachette, Macmillan, McGraw Hill, and Scott Turow filed a class action alleging Meta trained its Llama models on pirated works.
The unsettled question is what counts as "legally acquired" at scale. Buying one copy of a book and feeding it to a model is not what the major labs did. The pattern emerging from the courts: training is being treated as fair use, but piracy in the underlying corpus is not forgiven.
For a fiction writer, the practical implication is small in the short term. You will likely not be able to opt your work out of training that has already happened. You can, going forward, signal opt-out for future crawls.
2. Authorship and copyright
The US Copyright Office released the second part of its AI report in January 2025, and the line is now drawn:
- A work created entirely by AI from a prompt is not copyrightable. The Office argues that the variation in output for the same prompt indicates the human did not control the expressive elements.
- A work that contains AI output but reflects meaningful human authorship — selection, arrangement, modification, or significant additional human-written content — is copyrightable in the human-authored portions.
- Applicants must disclose AI-generated content during registration and explain the human contribution.
The line is meaningful for working writers in two scenarios:
Scenario A. You wrote a 4,000-word short story. You used a tool like Grammarly or a sentence-level rewrite for a few paragraphs. The work is fully copyrightable; you do not need to disclose anything.
Scenario B. You used GPT-4 to draft a 3,000-word story from a paragraph of your notes, then revised by hand. The copyright now covers the parts you wrote and significantly modified, not the parts that remained AI-generated. Disclose during registration; describe the human contribution.
The line is wherever your hand left a real mark. Selection counts. Arrangement counts. Substantial revision counts. Pressing "regenerate" until the output reads better does not.
3. Disclosure
Disclosure rules have arrived faster than people noticed.
- Amazon KDP (since September 2023): authors must disclose if text, images, or translations are AI-generated. The cap of three new self-published titles per day per account, added at the same time, was a parallel response to the AI book flood.
- Apple Books (2024): added a disclosure label for AI-narrated audiobooks.
- EU AI Act (in force August 2024): general-purpose AI providers must label synthetic content; certain AI-generated text must be disclosed to users.
- Authors Guild "Human Authored" mark (launched 2024): a certification mark writers can apply to denote human authorship — voluntary, like a kosher mark or a fair-trade label.
- Major literary magazines: *Clarkesworld* paused submissions for several days in 2023 after being overwhelmed by AI-generated work. Most paying speculative markets and literary journals now have explicit no-AI policies for prose.
The convergence is rapid. The norm taking shape: AI involvement should be disclosed, in proportion to its weight in the work.
4. Practical posture for working writers
A short list of things a working fiction writer can do, ordered by leverage:
A. Decide your own line, and write it down. "I do not use AI for prose generation; I do use Grammarly for line edits." That is a coherent stance. So is "I co-write with GPT for first drafts and revise heavily by hand." So is "I use no AI tools at any stage." The position you can defend is the one you can describe in a sentence.
B. Add an opt-out signal to your site. A `robots.txt` rule disallowing `GPTBot`, `ClaudeBot`, `Google-Extended`, `CCBot`, and similar reduces the chance your published work is scraped going forward. The `ai.txt` standard from Spawning is a parallel signal. Neither stops bad actors; both shift the legal posture.
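The opt-out rule above can be made concrete. A minimal `robots.txt` along these lines covers the crawlers named in the text; bot names change over time, so check each provider's current documentation before relying on this sketch:

```
# robots.txt — ask AI training crawlers to skip the whole site.
# Only compliant bots honor this; it is a signal, not a lock.

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Ordinary search and browsing remain unaffected.
User-agent: *
Allow: /
```

Place the file at the root of your domain (e.g. `example.com/robots.txt`). `Google-Extended` is notable because it controls AI-training use without delisting you from Google Search; blocking `Googlebot` itself would.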
C. Use the Authors Guild's Human Authored mark if you want a public signal. The certification carries no legal weight; it carries cultural weight in submission queues and on author pages.
D. Read platform terms specifically for AI clauses. Some platforms now reserve the right to use uploaded work for AI training or model improvement. Many do not. The clause is usually short, often new, and worth scanning before signing or accepting an updated TOS.
E. Don't waste energy on Glaze and Nightshade for prose. Those tools work for visual art. The text-side equivalents — robust watermarking, perturbation that survives paraphrase — remain a research problem, not a working tool.
Where bibli's stance lives
bibli was built on a simple principle about prose: scenes on bibli are written by humans. Authors confirm this at submission. The reading app is small and editorial, not a content marketplace. The scenes are picked one at a time. We treat AI-generated submissions the way *Clarkesworld* does: declined.
That stance is a position, not a moral verdict on writers who use AI. The line many working writers draw — assist with line edits, do not generate prose — is reasonable, and bibli's stance does not gainsay it. What we ask is that the work bibli reads is the writer's own.
Frequently asked questions
Can I copyright a story I co-wrote with ChatGPT? You can copyright the parts you wrote and substantially shaped. The fully AI-generated portions are not copyrightable. Disclose during registration.
Was my work used to train ChatGPT or Llama? Possibly, if it was scraped from the open web or appeared in any of the pirate-library corpora the major labs used. There is no easy way to confirm individual works.
Can I opt my work out of future training? You can add `robots.txt` rules and `ai.txt` signals to your site. These work for compliant crawlers and shift the legal posture. They do not stop bad actors.
Do I have to disclose AI use on Amazon KDP? Yes, since September 2023, if the text, images, or translations are AI-generated. AI-assisted work (light editing, brainstorming) does not require disclosure.
What is the Authors Guild "Human Authored" mark? A voluntary certification mark a writer can apply to a published work to denote human authorship. It carries no legal force; it is a cultural signal, like a kosher mark.