The initiative is being led by Ezra Eeman, WAN-IFRA’s AI Expert, and Director of Strategy & Innovation at Dutch public broadcaster NPO.
The webinar examined how newsrooms are adopting AI, explored emerging tools and interfaces, and delved into the strategic questions that will determine journalism’s role in an AI-driven world.
“AI is no longer optional for newsrooms – it’s existential,” said Eeman, underlining the urgency for media leaders to act.
Takeaways from San Francisco Study Tour
Eeman shared insights from WAN-IFRA’s January study tour to San Francisco, where a small group met with OpenAI, Perplexity, several startups, and VC firms to assess the state of AI and identify opportunities and challenges for the year ahead.
He shared his key takeaways from the tour:
Preparing for a post-link reality: Publishers in the Bay Area are preparing for a future with reduced social traffic and declining traditional search. Many are exploring how to be present in new AI interfaces – either by integrating AI directly, or placing their content within emerging AI ecosystems.
The AI ecosystem remains fragile: During the DeepSeek-induced market dip, participants saw first-hand how AI volatility ripples through the broader economy. The seven largest US tech companies investing in AI now account for roughly 30 percent of the S&P 500's market value. VC firms and investors voiced concerns about a potential AI bubble, uncertain whether it would burst or slowly deflate in 2025.
Foundation models commoditised: The rapid rise of open-source and competitive proprietary models is narrowing the performance gap. Investors are shifting focus to the application layer, where they believe the real value and differentiation will emerge.
Immature monetisation and partnerships: Despite publisher interest, there’s still no clear path to monetisation in AI interfaces. Deals remain rare and exclusive. OpenAI, for instance, has only three people on its global partnership team, signalling the ecosystem’s lack of scale and maturity.
Machine-to-machine monetisation: Startups are experimenting with protocols where bots pay a toll to access content. These models remain underdeveloped, with no current partnerships with major AI players, but may offer a scalable alternative to traditional licensing models in the future.
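The toll protocols Eeman described are still experimental and none of the startups' designs are public, but the basic handshake can be sketched: a publisher's server recognises an AI crawler and withholds content until it presents proof of payment. The bot signatures, price, and token check below are all invented for illustration.

```python
# Hypothetical sketch of a "pay-per-crawl" toll for AI bots: known crawler
# user agents without a payment token get HTTP 402 (Payment Required);
# humans and paying bots get the content. All names and values are
# illustrative, not any real startup's protocol.
from typing import Optional

BOT_SIGNATURES = {"GPTBot", "PerplexityBot", "ClaudeBot"}  # example crawler UAs
PRICE_PER_REQUEST_USD = 0.005  # illustrative toll

def handle_request(user_agent: str, payment_token: Optional[str] = None):
    """Return (HTTP status, body) for an incoming content request."""
    is_bot = any(sig in user_agent for sig in BOT_SIGNATURES)
    if not is_bot:
        return 200, "<article>full story...</article>"
    if payment_token:  # a real system would verify this cryptographically
        return 200, "<article>full story...</article>"
    # No token: ask the bot to pay before crawling.
    return 402, f"Payment required: ${PRICE_PER_REQUEST_USD} per request"
```

In this sketch the 402 response is the machine-to-machine price signal; a crawler that wants the content retries with a token obtained from a payment endpoint.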
Newsrooms look beyond big tech for AI adoption
While much of the innovation in AI is centred in Silicon Valley, experts at the session stressed that the most promising opportunities for journalism may lie elsewhere – in open-source communities and within newsrooms themselves.
Nikita Roy, founder of the Newsroom Robots Lab and host of the “Newsroom Robots” podcast, noted a gap between tools used in industries like finance and insurance and those adopted in journalism.
“Tools like LangChain are being used across industries to build custom AI applications – but hardly anyone in the news is using them,” she said. “There’s a huge missed opportunity there.”
Roy observed that the barrier often lies in unfamiliarity or intimidation, especially with open-source tools that don’t come with the marketing polish of Big Tech products.
“Many journalists aren’t yet equipped with the skills to explore these tools. But every journalist needs to understand AI – it’s as transformative as when computers first arrived in newsrooms,” she said.
“ROI is the biggest question for most newsroom leaders right now,” Roy said. “People want to experiment, but they also want to know: what’s the endgame?”
She pointed out that this uncertainty around return on investment is often what slows down organisational adoption. Many teams are in the “playground” phase, trying tools in isolated ways, without integrating them into wider newsroom strategy.
Florent Daudens, Press Lead at Hugging Face, urged newsrooms to go beyond using AI for isolated tasks and instead build a deeper, systemic understanding.
“If you want something robust and adapted to your needs, you need to understand the models, the constraints, the trade-offs,” he said.
Both Roy and Daudens emphasised that open-source AI offers more than just transparency. It enables greater control, ownership, and the ability to customise tools for specific editorial or ethical contexts – advantages that are critical for newsrooms wary of black-box solutions from major platforms.
“Microsoft and Google are focused on adoption and integration, which is great,” Roy added. “But they’re not necessarily building for journalism. Open source may lack polish, but it offers flexibility.”
Roy highlighted the example of iTromsø, a local Norwegian newsroom that built its own AI assistant, DJINN, to monitor municipal documents.
“It used to take hours to go through council documents. Now, DJINN does the initial scan and flags the most relevant items. Editors designed the scoring system, so it reflects their judgment,” she said. “It’s a great case of editorial-led innovation.”
In its first week of use, two interns working with DJINN broke five front-page stories.
“That’s what happens when you combine newsroom knowledge with the right AI tools,” Roy said. “It’s about transferring expertise and supporting young reporters.”
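iTromsø has not published DJINN's internals, but the editor-designed scoring Roy describes can be sketched as a weighted signal list: editors pick the newsworthiness cues and their weights, and the tool flags documents that score above a threshold. The keywords and weights below are invented for illustration.

```python
# Minimal sketch of an editor-defined scoring system in the spirit of DJINN.
# (iTromsø's actual implementation is not public; signals and weights here
# are illustrative.)

EDITOR_WEIGHTS = {          # newsworthiness signals chosen by editors
    "budget cut": 5,
    "school closure": 8,
    "zoning": 3,
    "lawsuit": 6,
}

def score_document(text: str) -> int:
    """Sum the weights of every editor-defined signal found in the text."""
    lower = text.lower()
    return sum(w for kw, w in EDITOR_WEIGHTS.items() if kw in lower)

def flag_relevant(documents, threshold: int = 5):
    """Return the documents scoring at or above the editors' threshold."""
    return [d for d in documents if score_document(d) >= threshold]
```

Because the weights live in an editor-owned table rather than inside a model, the ranking stays transparent and adjustable – the property Roy highlights as "editorial-led innovation".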
Daudens agreed, adding that newsrooms don’t need to build massive systems from scratch.
“You can take a large language model and fine-tune a smaller, efficient one for your own newsroom tasks – whether that’s summarisation, categorisation, or audio transcription,” he said.
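In practice, the fine-tuning Daudens describes starts with a newsroom's own paired examples. A common convention is a JSONL file of prompt/completion records; the sketch below builds such a dataset for summarisation. The field names and format are one widely used shape, not any specific vendor's schema.

```python
# Sketch of the data-preparation step for fine-tuning a small model on a
# newsroom task: pair published articles with editor-written summaries and
# write them as JSONL, one record per line. Field names are illustrative.
import json

def make_summarisation_example(article: str, editor_summary: str) -> str:
    """Serialise one training pair as a JSON record."""
    record = {
        "prompt": f"Summarise the following article in one sentence:\n\n{article}",
        "completion": editor_summary,
    }
    return json.dumps(record, ensure_ascii=False)

def write_dataset(pairs, path="train.jsonl"):
    """Write one JSON record per line - the shape most fine-tuning tools expect."""
    with open(path, "w", encoding="utf-8") as f:
        for article, summary in pairs:
            f.write(make_summarisation_example(article, summary) + "\n")
```

The value in this step is editorial, not technical: the editor-written completions are what teach the smaller model the newsroom's house style.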
“AI projects don’t fail because of tech. They fail because no one owns them,” Daudens said, pointing to the need for internal accountability.
He warned that many organisations still treat AI like a secret side project.
“There’s a lot of individual experimentation happening – but people are afraid to talk about it. That means newsrooms aren’t building collective intelligence around AI,” Daudens said.
He encouraged leadership to foster open conversations and shared tools. “The Financial Times is creating prompt libraries that everyone can use. That’s the kind of infrastructure that helps AI become a team resource, not a personal hack.”
AI agents, personalised interfaces, and news as data
As AI interfaces grow more sophisticated, the way users access and engage with news is beginning to shift radically – from search engines and social feeds to AI agents and task-based interactions.
Eeman demonstrated how an AI agent could open a browser, locate a news story, and summarise it on the user’s behalf – without ever clicking on the publisher’s site.
“If this becomes how people consume content, they won’t see your homepage. No ads, no recirculation, no brand touchpoints,” he said. “That breaks the economic model of journalism.”
Because these agents operate through users’ browsers, they bypass bot detection systems entirely. “It’s not a bot – it’s the user. Or rather, a machine acting on behalf of the user,” Eeman said.
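The step Eeman demonstrated – the agent reducing a loaded page to a summary the user reads instead of the site – can be sketched with nothing but the standard library. The parser below strips a page to visible text; the one-sentence "summary" is a crude stand-in for the LLM call a real agent would make.

```python
# Sketch of what an agent does after the browser loads a story: strip the
# page to visible text and return a summary, never surfacing the publisher's
# ads, links, or branding. The truncation below is a placeholder for an
# actual LLM summarisation call.
from html.parser import HTMLParser

class ArticleText(HTMLParser):
    """Collect visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def summarise_page(html: str) -> str:
    parser = ArticleText()
    parser.feed(html)
    text = " ".join(parser.parts)
    return text.split(". ")[0] + "."  # stand-in for the LLM summarisation step
```

Nothing in this flow touches the publisher's analytics, ads, or recirculation modules – which is exactly the economic problem Eeman identifies.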
Roy showed how she had built her own AI clone – trained on her voice and content – to answer participant questions after her workshops.
“It’s not just about efficiency,” she said. “It’s about scaling your presence in a personalised way.”
Daudens presented a prototype AI agent he created over a weekend to generate daily news briefings and podcast scripts tailored to individual users.
“I’m not a developer,” he said. “I built this using an LLM and plain language. That shows what’s possible now.”
These developments signal a potential redefinition of content itself. “We’re moving from news as a story to news as data,” Roy said.
She demonstrated how ChatGPT’s Research Mode could generate structured tables, like a database of newsroom-AI partnerships, sourced entirely from public content.
“If ChatGPT becomes a dominant discovery layer, those who feed it structured, machine-readable content will benefit. Others may be left out entirely,” she warned.
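One established way for publishers to "feed" discovery layers structured content is schema.org NewsArticle markup embedded in pages as JSON-LD, which both search engines and AI crawlers already parse. The sketch below builds such a record; all values are placeholders.

```python
# One established route to machine-readable news: schema.org NewsArticle
# markup serialised as JSON-LD. Values here are placeholders; a real page
# embeds the output inside <script type="application/ld+json"> tags.
import json

def news_article_jsonld(headline, url, published, author):
    """Build a minimal schema.org NewsArticle record as a JSON-LD string."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "url": url,
        "datePublished": published,  # ISO 8601 date
        "author": {"@type": "Person", "name": author},
    }, indent=2)
```

Publishers already doing this for search SEO have a head start: the same structured layer serves AI discovery interfaces.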
At the same time, licensing, trust, and verification remain pressing concerns.
A recent BBC study cited by Daudens found that more than half of AI assistants’ answers to questions about the news contained significant factual or sourcing issues.
“Verification is critical,” he said. “The best AI projects start with the best datasets – high-quality, trusted information.”
Legal debates are also intensifying. Roy pointed to the Thomson Reuters v Ross Intelligence case as a potential precedent. “Depending on how courts rule, publishers may gain more control over how their content is used in training AI,” she said.
However, Daudens argued that the solution won’t lie solely in litigation.
“If publishers want influence in the AI ecosystem, they have to engage with it – through experimentation, partnerships, and open-source collaboration. The real competition isn’t other newsrooms. It’s tech companies,” he said.