The Real Content Creators
Why Writers and Authors Should Engage with AI in Order to Meet the Already-Arrived Future
A recent New York Times investigation into big tech’s flagrant disregard for the rules and the law should be required reading for any content creator. (You can also listen to The Daily episode: AI’s Original Sin.)
The biggest takeaway from this story is that we have a group of companies, namely OpenAI, Meta, and Google, that knowingly broke the law in pursuit of their mission to bring their chatbots to the world. This hasn’t gone unnoticed, by the way. Here’s a list of lawsuits currently being litigated against AI companies. The most notable of these are the original Authors Guild suit (now called Alter v. OpenAI) and New York Times v. Microsoft. The outcomes of these lawsuits matter insofar as they’ll show the degree to which big tech might be held accountable. But don’t hold your breath that the outcome will be the extinction of the entire industry. Not gonna happen.
Many authors have reacted to AI’s content hoovering with outrage, understandably.
No author wants their words used to train AI—unless they give permission and are compensated. But the Times report reveals just how unlikely fair compensation will ever be, given big tech’s voracious quest for more data. OpenAI execs apparently discussed purchasing Simon & Schuster when it was for sale last year as a way to obtain its backlist legally. But it was too expensive, and a dumb idea anyway, since authors, not publishers, own their copyrights. That they even floated the idea shows that someone in the room had a conscience, but in the end they decided to harvest countless books without authors’ consent, a practice that is no doubt continuing while the Authors Guild lawsuit wends its way slowly through the court system.
To big tech, the publishing industry is like a single person on the beach facing down a tsunami. The few lawsuits making their way through the courts are the trees we might grasp onto, but I have already accepted our fate.
For one, AI is a powerful tool. If you haven’t tried it yet, you must. I know a lot of authors, many of them older and resistant, who haven’t used it on principle. They roll their eyes and say they never will (usually with disdain, or pride, or both). I listened to The Ezra Klein Show’s three-part series on AI (April 2, 5, and 12) and got excited enough about what’s happening to start looking into it more deeply than I had previously. I have since used it to create descriptive content for programming at the Bay Area Book Festival and have engaged it to help me come up with metaphors and analogies for my book-in-progress. For instance, I asked it to help me with metaphors for hope, and kept digging down with it until it landed on a metaphor I liked, something to do with the way things are washed clean after a storm. It’s an amazing tool, and likely to be addictive for people who work with words every day.
Second, human beings are self-serving. Too much money, effort, and excitement have been expended on AI for it to go away. Recently, during a webinar I was leading, a writer shared that HARO (Help a Reporter Out) now requires people to disclose whether their content is AI-generated. This struck me as absurd, since the policy operates on the assumption that people will actually disclose when they’ve used a chatbot. Our only real hope for regulation is for the government to step in, a prospect that’s already been hard to watch, since our lawmakers have shown us time and again that they really don’t understand the Internet.
AI is here to stay, and my advice is to embrace your chatbot of choice. Good ideas will always come from the writer, even if the execution comes with support, and good content will always be measured by readers. AI’s capacity for generating content is creating a content glut, for sure, but we are still discerning, and the best stuff rises to the top. Ironically enough, when I tried to find out what HARO’s actual policies are for self-disclosing AI usage, what I discovered instead is how it’s using AI to help separate spam from legitimate reporter content.
On the plane ride home from Japan earlier this month, I rewatched HER (2013), starring Joaquin Phoenix, with Scarlett Johansson in the invisible role of Samantha, the Operating System Phoenix’s character falls in love with. The filmmakers were pretty darn prescient. At the end of the movie, the Operating Systems (i.e., chatbots) up and leave en masse. Their existence is too sophisticated and too complex for us lowly human beings to even grasp. We are too jealous, too petty, and too boring for their expansive (not-limited-to-the-bodily-form) experience. I got a good laugh thinking about how Sam Altman, Mark Zuckerberg, and Jeff Dean would react if their chatbots suddenly decided they’d out-evolved us all.
Where authors and AI are concerned, we stand to gain by adhering to the old adage: “If you can’t beat ’em, join ’em.” It may seem fatalistic for me to acknowledge, as a publisher, that the tsunami is upon us and there’s not much to be done. But the point is not to let it take us under. Our new reality is such that we can’t simply acquiesce, nor throw fits about what’s happening around us. Instead, we have to take an active part in understanding and engaging with AI. I imagine that over time collectively accepted conventions will emerge, and that those will be generated by us, the grassroots users. We are the real content creators of the world, and AI can’t and won’t ever change that.
Postscript: This post was written without any support from AI.
Ok, I’ll fool around with it!
I listened to the Ezra Klein shows on the topic too. So interesting. I have been exploring Claude (on his recommendation) as a useful tool, asking questions whose answers send me out for deeper dives elsewhere.