More and more publications are adding a “not generated by AI” disclaimer to their content contributor requests. The Harvard Business Review content submission form includes a standard question on whether generative AI was used – not only in creating the submitted content, but also in completing the submission questions themselves!
On top of the pressure to increase revenue and views, newsrooms now also carry the burden of distinguishing genuine subject matter expertise from content compiled by generative AI. While it is still relatively easy for a trained eye to spot content written – or just “compiled”, as some would argue – by generative AI, doing so is an unnecessary burden on journalists and editors.
Subject matter experts and communication professionals alike need to carefully consider whether the perceived savings in cost and effort of using an AI tool rather than an experienced human writer are worth the risk to their reputations.
On the other hand, writers and communication professionals need to understand that their clients are searching for solutions that reduce the perceived effort and deliver value beyond the straight monetary cost. Subject matter experts need writers and communication professionals to interpret what they’re trying to say, rather than merely regurgitate it.
While each party involved in this unpredictable rollercoaster of generative AI may feel like they have right on their side, they’re all indirectly working toward the same thing: increasing knowledge. Subject matter experts may approach the issue with increased speed and decreased effort in mind, writers and communication professionals may consider quality and differentiation their top priorities, and newsrooms may hold the journalistic principles of truth, accuracy, independence and objectivity as the highest standard. In the end, increasing knowledge – not simply reiterating existing information – is the point.
Most generative AI tools use large language models (LLMs) built on existing knowledge to compile responses (answers) to prompts (questions). It is easy for users of these tools to think of the responses as new, simply because the information is new to them. Truly new knowledge, however, springs from critical thinking – a skill not yet mastered by most generative AI tools and not often requested by users.
When using generative AI to produce content that will be submitted to media for publication consideration, we have to be honest with ourselves about whether we are asking AI to help with the pure “doing”, or whether we are inadvertently outsourcing our thinking to it.
There is no easy way to navigate these sticky situations. Having AI policies and rules in place for subject matter experts, writers, communication professionals and media is a good start. From there, the impact this phase will have on the future of information and media integrity is up to the moral compass of each individual who gets on board the generative AI rollercoaster.