Why I won’t edit materials produced using AI

Updated: Jul 14


Logo by Eva Vagreti; used with permission


by

Andrew Park

July 11th, 2024


“I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”

Author Joanna Maciejewska on 𝕏 (March 29, 2024)


At this point, if you’re a human with a pulse and an internet connection, you’ve likely heard of ChatGPT. With its startling ability to write and edit human-like prose, ChatGPT seemed to epitomize that old tech-bro injunction: “Move fast and break things.” It’s almost two years since OpenAI released ChatGPT on an unsuspecting world, and we’re still deciding what to make of it. Depending on who you listen to, large language models like ChatGPT are either more revolutionary than fire and electricity or “just another tool.”

Writers, editors, and educators seem to have adopted one of three general attitudes toward ChatGPT and its AI relatives. Some, like Joanna Penn, are frothily enthusiastic over AI’s potential to assist their creativity. They’re in the camp of “I personally welcome our ChatGPT overlords.” Members of the second group are waving white flags and trying to find “ethical” ways to engage with AI writing tools. Many ethical engagers are teachers reframing writing assignments into critical analyses of what ChatGPT produces.

A third and, I suspect, shrinking minority of creatives, educators, and journal editors have taken a zero-tolerance approach to AI-assisted work in general, and ChatGPT in particular.

I belong firmly in this third camp of AI refuseniks. I’d rather give up writing altogether than resort to using AI. As a professional editor, I refuse to edit materials produced wholly or in part with the help of AI, and here are seven reasons why:

1.   ChatGPT (and graphic models like Midjourney) has been trained on the work of human creators without attribution or compensation.

In my world that’s called stealing, and the editor of Clarkesworld and a gaggle of A-list writers would agree. Writers including Jodi Picoult and Jonathan Franzen have launched lawsuits for copyright infringement against OpenAI. Similar lawsuits are being levelled by visual artists against the makers of graphic AIs. Unless Silicon Valley is prepared to compensate creators, it’s hard to see how one could ethically use their chatbot products.

2.   By using AI, you’re outsourcing a part of your human creativity to a machine.

Whether it’s playing guitar, shooting hoops, or performing brain surgery, you become better at activities by practising them! Long and dedicated practice forges fresh neural connections in your brain and solidifies your skills.

The practice of writing is no exception. Sitting (or standing) at the keyboard or notepad and hammering out the words—your own words—lights up multiple regions of your brain. By resorting to AI as a writing aid, you are potentially hobbling your personal growth as an author.

It’s been argued that ChatGPT can make us better writers by brainstorming ideas or drafting outlines in the interest of efficiency. Screenwriter Matt Aldrich counters that this approach reduces the writer to the level of a studio executive, who critiques and fiddles with script ideas generated by others but is not a writer themself.

3.   AI enshrines mediocrity.

After the first rush of amazement/euphoria/terror over the impact of AI on writing and editing, expectations have been dialled back. It turns out that ChatGPT’s writing is both repetitive and lacklustre. So, when AI infects education, we end up in the situation where “In a single afternoon on ChatGPT, a student can generate enough mediocre prose to complete an entire undergraduate degree.” Presumably with Bs and Cs. Now, in university, Bs and Cs will get you a degree, and maybe mediocrity will serve for corporate memos and interoffice emails, but surely we should be aiming higher.

4.   ChatGPT is likely to further impoverish the creative class.

Even AI booster Joanna Penn admits that AI will result in digital abundance, with so much supply that revenues will be driven down. As an ink-stained wretch of a writer, therefore, you’ll either have to ramp up your output (and self-promote and hustle like crazy) or you’re going to get poorer. Most professional writers already earn peanuts. Why would they embrace a system that is almost guaranteed to make them poorer while having to work harder or, worst-case scenario, not at all?

5.   Chatbots are disrupting creative industries and education—and not in a good way.

The creators of ChatGPT released it into the wild without giving much thought to its wider societal implications. Among other things, we’ve seen literary magazines inundated by badly written stories generated by AI. The editors of SF magazine Clarkesworld were forced to pause submissions in the face of AI overwhelm. Copycat manuscripts mimicking the style and content of George R. R. Martin and editor Jane Friedman have been passed off as real on Amazon.

Meanwhile, plagiarism and cheating, already chronic problems in schools and colleges, have become almost impossible to detect. OpenAI CEO Sam Altman’s answer to this challenge is that AI is so powerful that we would be “doing students a disservice” by limiting their access to it. But what about ethics? you ask. He’s got an answer for that too: “Cheating on homework is obviously bad … But what we mean by cheating and what the expected rules are does change over time.” Subtext: we’ll be redefining what teachers mean when they say “in your own words.”

6.   ChatGPT has “hallucinations.”

This one’s for nonfiction and essay writers. For reasons its creators don’t fully understand, ChatGPT and other large-language models fabricate what they don’t know. These hallucinations arise, in part, from the incomplete, often biased data used to train the model. Whatever the reasons, hallucinations can have serious real-world consequences, as when a lawyer presented a court with fake legal case precedents generated by ChatGPT. And if you prompt ChatGPT with intentionally false information, it’ll come up with perfectly plausible yet meaningless responses complete with fictional citations!

7.   AI creators don’t fully understand their creations.

Hallucinations and the recursive “self-teaching” nature of AI mean that even its creators don’t understand exactly how it works. To me, this realization implies that the hallucination challenge may never be solved. And since the overarching goal of the tech bros is to create an artificial general intelligence (AGI) that is truly self-directed, shouldn’t that worry us a teensy bit?

If current AIs had just one or two of these problems, you might dismiss me as an old Luddite. The goods outweigh the bads, etcetera. But the downsides I’ve listed—and there are more that I didn’t list—are genuine. And as I write this, there’s no plan to compensate creators whose work has been appropriated to train ChatGPT. In fact, AI companies are trying to do a legal end-run around copyright by claiming “fair use” for educational purposes. If these arguments win the legal day, then anything we put online, including this blog, would become fair game as bot fodder.

I’m not arguing for a blanket prohibition on AI here. AI systems are being used to great effect in fields as diverse as medical diagnostics, immunological genetics, and cosmology. These and other scientific uses of AI have the potential to enhance human capabilities in ways humans alone can’t achieve. And that criterion—enhancement, not harming of human potential—seems like a reasonable test for the value of an AI system.

Writing chatbots, in my opinion, fail this test. Humans, for the most part, are familiar with the basics of writing. And if we want to produce better written, more interesting stories, we can train and encourage humans to hone their writing craft and storytelling chops. Indeed, there are thousands of aspiring authors out there who are trying to do this.

Which brings me to Joanna Maciejewska’s brilliant epigraph at the head of this blog. As an editor and writer, I want to see great human storytellers thrive. I want to see them produce stories I’ve never heard that grow out of their unique human experiences (or, for the speculative crowd, experiences they could imagine). Could ChatGPT ever have produced the uniquely tragic perspective of Elif Shafak’s “10 Minutes 38 Seconds in This Strange World” or the visceral weirdness of Lidia Yuknavitch’s “The Book of Joan”?

I’d also like to see human writers make more money, and the tech bros profiting from their unpaid labour make a lot less. I know, I know, when did authors ever make money? Fair point, but ChatGPT virtually guarantees writers will get paid less in the future, and I don’t want to see that.

So that, dear reader, is why I won’t edit materials produced or aided by chatbots. You may argue, as many have done, that there are ethical ways to use ChatGPT, in which case we can agree to differ and I’ll wish you luck on your writing journey.
