From news gathering to distribution, the entire journalism cycle is being impacted by a wide and growing range of AI tools. As the use of AI becomes more widespread, the ethical considerations around its use in journalism become more pressing.
From navigating transparency and accountability to the potential biases in LLMs and the role of big tech companies, news leaders are facing tough questions as they assess the promises and threats of AI and the technology’s impact on journalistic values.
Publishers need to take a cautious but open-minded approach to exploring AI opportunities, while maintaining human oversight of all published content, according to a panel discussion at our AI Forum in Bangalore.
The Quint’s AI policy: “Human in the loop”
When generative AI exploded onto the scene with the release of OpenAI’s ChatGPT in November 2022, the newsroom at The Quint, an Indian digital-first media company, quickly began exploring the possibilities offered by the new technology.
Since then, the publisher has gone through “several stages of thinking” about AI, ranging from curiosity to confusion.
“But at no point was there any kind of denial in accepting that this is something that’s going to be big,” said Shelly Walia, Executive Editor at The Quint.
The publisher moved quickly to create company-wide policies, setting up a cross-functional committee that brought together editors, product, marketing and graphic artists to draft the guidelines.
“It was important to set the policy right at the beginning” because The Quint’s newsroom is filled with digitally curious young people who are likely to start playing with new AI tools as soon as they become available, Walia said.
Today, The Quint uses AI for tasks such as SEO tagging and keywords, translations and transcriptions. Crucially, “nothing is without a human in the loop,” said Walia.
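To make the “human in the loop” principle concrete, here is a minimal, hypothetical sketch in Python – not The Quint’s actual system, and every name in it (Draft, suggest_tags, human_review) is invented for illustration – in which AI-suggested tags can only reach readers after an editor explicitly signs off:

```python
# Hypothetical "human in the loop" gate: an AI tool may draft SEO tags,
# but nothing is published without a human editor's explicit approval.
from dataclasses import dataclass, field

@dataclass
class Draft:
    headline: str
    ai_tags: list[str] = field(default_factory=list)
    approved: bool = False  # flipped only by a human editor

def suggest_tags(headline: str) -> list[str]:
    """Placeholder for an AI tagging step (e.g. an LLM call)."""
    return [word.lower() for word in headline.split() if len(word) > 4]

def human_review(draft: Draft, editor_ok: bool) -> Draft:
    """A person inspects, and may edit, the AI output before sign-off."""
    draft.approved = editor_ok
    return draft

def publish(draft: Draft) -> None:
    # The hard gate: AI output never reaches readers unreviewed.
    if not draft.approved:
        raise PermissionError("Blocked: no human has signed off on this draft.")
    print(f"Published: {draft.headline} | tags: {draft.ai_tags}")

draft = Draft(headline="Election results analysed across three states")
draft.ai_tags = suggest_tags(draft.headline)
publish(human_review(draft, editor_ok=True))
```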
There are also clear red lines: using AI for fact-checking or to create news articles or photo-realistic images is not allowed.
To disclose or not?
AI is “definitely making a difference in the newsroom,” added Subhash Rai, Digital Editor at the Hindu Group, which publishes The Hindu newspaper. His company has identified specific use cases where AI has been particularly useful, such as moderating discussion forums and tasks related to data analysis.
“We have used it on elections and cleaning up data, and getting code quickly done. It generates very good code,” he said.
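“Getting code quickly done” typically means asking an AI assistant for routine data-wrangling scripts. Below is a hypothetical example of the kind of cleanup code such tools can draft in seconds, using the pandas library; the election table, column names and values are invented for illustration:

```python
# Hypothetical AI-drafted cleanup of election data: the chores shown are
# typical – stripping whitespace, normalising names, fixing number formats.
import pandas as pd

raw = pd.DataFrame({
    "constituency": [" Bangalore South", "Bangalore South ", "Mysore "],
    "candidate": ["a. rao", "B. MEHTA", "c. Iyer"],
    "votes": ["12,403", "11 980", "9041"],
})

clean = (
    raw.assign(
        constituency=raw["constituency"].str.strip(),
        candidate=raw["candidate"].str.title(),
        # remove thousands separators and stray spaces before casting
        votes=raw["votes"].str.replace(r"[,\s]", "", regex=True).astype(int),
    )
    .sort_values("votes", ascending=False)
    .reset_index(drop=True)
)
print(clean)
```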
But should all AI use cases be disclosed to readers? Walia shared insights from her conversations with local US newspapers that were using AI to drive efficiencies, but had decided not to disclose its use because they thought “their audience doesn’t understand what AI is”.
However, as audiences increasingly use AI tools themselves, their grasp of the technology will improve.
“As soon as audiences recognize the use of AI in their daily lives, there will be a fair pressure on media to fully declare where they are using artificial intelligence,” said Shashank Chouhan, Output Editor for BBC Indian Languages, Collective Newsroom.
Publishers should be mindful of this pressure, he said, because being transparent about their use of AI will help them gain credibility and trust.
Human oversight essential
Audiences may also be particularly sensitive to AI being used for certain tasks, and less so for others. At the same time, AI remains a poor tool for the core of journalism – on-the-ground reporting and nuanced storytelling – “with all the hallucinations and inaccuracies,” Walia said.
Indeed, the need for transparency may depend on the level of AI involvement and the exact task it performs, Rai said: “If you’re just summarising something, maybe you don’t need to declare that. But if you are actually using [AI] as part of your story, then you might want to say that AI was used.”
Underpinning all use cases is the need to keep people in control. “All credible organisations have mandated that [the use of Generative AI] should be overseen by a human,” said Chouhan.
He added that AI could be helpful in doing the initial work on complex stories, such as deep financial investigations and corporate reports, where it can comb through and process information much faster than a human.
“But in the end, it needs to be something that will be vetted by a human and only then published,” he said.
Language gaps and cultural biases
In the context of India’s multilingual media scene, questions about AI’s linguistic capabilities are central. In short, all panellists had found current AI tools lacking beyond English-language content, which limits their effective use cases.
Most AI tools have been trained primarily on English-language content and developed based on Western thinking, resulting in a limited understanding of languages and cultures in Asia and Africa, Chouhan said.
As a result, these tools are less effective in these contexts, to the point where “it is very difficult to genuinely engage with these models right now,” he said.
For example, his company ran tests using AI to translate BBC content into local languages, but the results were poor.
“It simply did not work. The translation was horrible. We had to stop it and we had to bring back humans to doing what they do best,” Chouhan said.
The AI-big tech power struggle
From a broader perspective, the new AI era marks another chapter in the complex, often fraught relationship between news publishers and big tech firms.
Against this backdrop, and based on past experience, publishers need to be careful not to cede too much power to the companies driving AI development.
When it comes to dealing with tech firms, “almost all of media have had their fingers burnt in recent years,” said Chouhan. And yet, “we all depended on them in a major way.”
“The relationship between big tech and media is an unequal one. … I wish there was more of an equal relationship, where they learned from us, we learned from them, and both of us grew together. But as of now, that’s not the case,” he said.
In the context of AI, publishers need to make sure to maintain control of their own platforms, Rai said: “If you’re looking at predictive content on your homepage, you are ceding control. And that’s exactly what happened over the years – we’ve ceded control of many aspects of our journalism to big tech.”