Should it stay, or should it go? 

Generative AI, that is.

Today, almost 10 months after ChatGPT began taking the world by storm, publishers are both suspicious of and intrigued by the technology. Some are testing generative AI tools in their newsrooms; others are filing lawsuits. A global WAN-IFRA survey on the use of generative AI in newsrooms, conducted in collaboration with SCHICKLER Consulting, found that fewer than 5% of journalists are believed to use ChatGPT or similar tools weekly.

Let’s try to understand the practical applications of generative AI in publishing and the red flags website publishers should watch for. 

Generative AI publisher use cases

Some publishers have used forms of AI in their newsrooms before the advent of ChatGPT. The most commonly cited use cases include the following. 

Streamlining reporter notes

Journalists take a lot of notes when they research stories and interview subjects. Combing through those details when it comes time to write the story can be cumbersome, particularly if multiple journalists are involved, as is often the case at major news sites. In one instance, Andrew Rodriguez Calderón, a computational journalist at The Marshall Project, used ChatGPT to produce summaries of different states' banned-book policies, saving the time of copying that information over from reporters' notes.

He thought the paragraphs the tool spit out were dull, so his team iterated on the prompts to create descriptions of the notes with subheads. He asked ChatGPT to categorize the relevant parts of each policy under the specific subheads, and then two individuals fact-checked the descriptions. The takeaway for Calderón: Publishers should document ChatGPT prompts to track the best iterations for future projects.
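Calderón's takeaway can be put into practice with very little tooling. Here is one minimal sketch of a prompt log that tracks iterations per project; the field names, subheads, and prompt text are illustrative assumptions, not The Marshall Project's actual workflow.

```python
# Illustrative sketch: a minimal prompt log for tracking iterations.
# Field names and prompt text are assumptions, not an actual newsroom workflow.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PromptRecord:
    project: str             # e.g. "banned-book-policies"
    version: int             # incremented on each iteration
    prompt: str              # the exact text sent to the model
    notes: str = ""          # what worked, what fell flat
    logged: date = field(default_factory=date.today)

log: list[PromptRecord] = []

def record_prompt(project: str, prompt: str, notes: str = "") -> PromptRecord:
    """Append a new prompt version for a project and return it."""
    version = 1 + sum(1 for r in log if r.project == project)
    rec = PromptRecord(project, version, prompt, notes)
    log.append(rec)
    return rec

# First attempt produced dull paragraphs...
record_prompt("banned-book-policies",
              "Summarize each state's banned-book policy in one paragraph.",
              "Output was flat; no structure.")
# ...so the next iteration asks for categorized subheads instead.
best = record_prompt("banned-book-policies",
                     "Categorize the relevant parts of each policy under "
                     "these subheads: Scope, Enforcement, Appeals.",
                     "Much clearer; kept for future projects.")
print(best.version)  # prints 2
```

Even a log this simple makes the best-performing prompts easy to retrieve and reuse on future projects, and it gives fact-checkers a record of exactly what the model was asked.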

Editing, summarizing, and analyzing

Scott Brodbeck, founder and CEO of Local News Now, has used ChatGPT to scan recently published articles for errors, summarize articles and media releases, analyze sentiment, and write sponsored content. He’s also used AI in a morning newsletter for one of his sites, ARLNow, that includes AI-generated summaries of articles from the day before.

“I want [our writers] to focus on the actual journalism and not on these more various sundry tasks, and so having the AI to do that has really opened up possibilities of doing stuff that we did not have available to us before,” Brodbeck said.

At the world news site Worldcrunch, editors use generative AI to automate translations. Anyone who has ever managed a translation process knows how laborious it can be. Certain words don't translate the same way, or carry the same context, across languages, for instance.

“In many (though not all) languages we translate from, the machine has gotten good enough to refer to it when rephrasing a clunky sentence in a translated piece from a language I don’t know,” said Jeff Israely, cofounder of Worldcrunch and a former Time magazine foreign correspondent. “Perhaps even more helpful has been the ability to browse through entire newspapers in various languages and feel confident assigning stories to translators who don’t necessarily see the pieces we’re looking for. That can save a ton of time and mental energy.”

Finding trends

Mark Hansen, a professor at the Columbia School of Journalism, has used ChatGPT to scan daily paragraphs on the monkeypox virus to detect trends and surges. He had to tweak his prompts to teach the tool to pinpoint the most significant changes, and he used it to help design a template for an article on those results.

Where generative AI might cross a line

Some of the most frequently shared qualms about tools like ChatGPT include everything from poor-quality writing to plagiarism.

Copyright infringement

The New York Times may soon be embroiled in a lawsuit against OpenAI, the maker of ChatGPT.

Here’s the thing to remember about tools like this: they train their models on all the information ever fed to them so they (presumably) get smarter over time. The NYT alleges that OpenAI illegally copied its articles to do exactly that, scraping large parts of the internet without permission. The fear for the publication, and others like it, is that an internet user could get a paraphrased portion of an article when searching, bypassing the need to visit the publication’s website. This is a valid concern, given that Microsoft uses OpenAI’s technology to power its Bing search experience.

Poor-quality writing

A common critique of generative AI tools is that they produce subpar writing. 

There are countless experiments in which AI writing was compared to human writing, and the latter was still considered superior. Even if the grammar and spelling are accurate, writing elements like voice, tone, and context can get lost in AI translation. So far, there is no AI match for human subject matter expertise.

Advertising fraud and misinformation 

Digital advertising has long been rife with fraud, but AI technology, if misused, could exacerbate the problem beyond anything we've seen.

AI can create hundreds of thousands of websites, send invalid traffic and impressions their way, and set up fake ad exchange accounts. Because annual programmatic ad impressions number in the trillions, this type of fraud is tough to detect. These tools can also create poor-quality inventory plagued with plagiarism, misinformation, and spam. And, again, they can feed that data back into their models and get better at mimicking real websites over time.

What’s a publisher to do?

As we’ve said before, we look at ChatGPT and tools like it with fascination, not fear. 

After all, we use AI in our own business to help publishers like you automate routine tasks (like comment moderation) so you can focus on writing quality content and innovating your offerings. It's still early days, and the jury is still out: publishers can embrace generative AI, or they can hold off. But at some point, they'll likely need to test these tools to stay competitive and avoid getting left behind.

Having an audience engagement platform that helps you maintain a loyal, engaged community and generate revenue from quality content gives you a secure base for trialing generative AI tools. Get in touch with us today to learn how we can help.