
[Inside the Newsroom] Confronting journalism’s Oppenheimer moment

2025/12/14 11:00

Over the weekend, Nieman Lab — known for reporting on the intersection of technology and journalism — published a flurry of thought-provoking predictions for journalism in 2026.

A number of them caused a stir among Filipino journalists. These included one which declared, “Sorry, the explainer is dead,” and another which predicted the rise of a new type of journalism — “one not directed at people, but tailored explicitly to machine compilers of language and information.” 

The writer of the first opinion piece basically said that, because people are going to AI for things like explainers and evergreen content, these do not matter anymore. What works? The author said it is the basics: hyperlocal news, breaking news, scoops, notable first-person narratives, and investigative journalism. “AI,” the expert said, “can’t or won’t summarize this information because it’s too recent or too unique.”

The author who talked about agentic journalism meanwhile noted that “AI systems do not need ledes, nut-graphs, or narrative flows; they need user-relevant, novel, and machine-readable content.” 

The role of agentic journalism, he further pointed out, was to write the five Ws, quotes, context, and links to multimedia content. All of these, the author noted, are then assembled and customized at the point of delivery, based on what suits the end user. 

Both pieces troubled me. Thinking about it further now, I realize that it is because these observations, while dire, are true to some degree. 

In fact, what troubled me the most was that these articles did not really go far enough. And they did not delve into real solutions to existential challenges newsrooms around the world are facing today. 

Hello! I am Gemma Mendoza, head of digital services and lead researcher for disinformation and platforms at Rappler. 

Since Rappler was launched, I have been working closely with our tech and data teams in the design and development of systems and platforms that produce content at scale using data. These projects range from the highly interactive election results pages we have launched every election year since 2013 to our GraphRAG-powered chatbot, Rai.

I also lead our research on disinformation and hate on platforms, which began as we observed changes in Facebook’s algorithms in the run-up to the 2016 elections.

As mentioned earlier, I agree with many things that the authors of the NiemanLab articles raised. For instance, indeed, a better semantic architecture will help these large language models surface information better. At Rappler, we worked on our own knowledge graph before we went on to develop our chatbot Rai.

But I want to talk about the troubling things that we have observed while working on AI systems and their impact on our own platform. 

The author of the first article I mentioned is mistaken in saying that AI cannot summarize unique information.

If you let them, AI systems can easily summarize unique content. In fact, even if you explicitly tell them not to, they will find a way to do this.  

We know this for a fact because we have been observing the AI bots crawling and harvesting content from the Rappler website. Despite the restrictions our site imposes on AI scraping, these chatbots continue to happily churn out data and content that Rappler’s team has painstakingly gathered.

In one case, ChatGPT instantly provided detailed information on the findings of a report by our campus journalist fellow about an entrenched dynasty in the Bicol region that spent around a million pesos on social media ads attacking its rivals. The information for this story was pieced together over several months by a student journalist in our fellowship program. (See the screenshot below.)

Anybody familiar with scalable systems will tell you that when a search engine queries a database for the first time, there is a lag in delivery. Even Google’s search engine needs time to process and index information. Based on its instant response, ChatGPT clearly had the entire article in its own database even before we queried it.

This is not the only unique piece of content published by Rappler that the bot has chewed on. 

When we added AI scraping restrictions to our robots.txt file, ChatGPT acknowledged that it was aware of these restrictions. However, it still found a way to use our content, initially by claiming that it used publicly available sources or by referencing other sites that also scraped and synthesized our content — also without permission. (See below.)
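For context, AI crawlers identify themselves with documented user-agent strings, and robots.txt restrictions are expressed per agent. A minimal sketch of such a file (not Rappler's actual configuration) might block the commonly documented AI crawlers while leaving ordinary search crawlers alone:

```
# Disallow OpenAI's training crawler (user agent documented by OpenAI)
User-agent: GPTBot
Disallow: /

# Disallow Common Crawl, a corpus widely used for AI training
User-agent: CCBot
Disallow: /

# Opt out of Google's AI training uses without affecting Search
User-agent: Google-Extended
Disallow: /

# Ordinary crawlers remain allowed
User-agent: *
Allow: /
```

The catch, as this piece describes, is that robots.txt is advisory: it relies on crawlers choosing to honor it, and nothing in the protocol enforces compliance.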

My point is that we are already talking to the machines. We have been for a while. This is nothing new.

This is what we do every time we shift the way we write our stories and social media captions to conform to updates in Facebook and Google’s algorithms. We are making it easier for these machines to understand our content as well as use and monetize what we have gathered, written, and produced. 

Over the years, we were told that adding microtags to stories would help surface our content better. And so we did it. The question really is whether newsrooms and journalists should continue to go in this direction.
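The "microtags" referred to here are structured-data markup such as schema.org annotations embedded in article pages. As an illustration (the names and values are hypothetical, not Rappler's actual markup), a news story tagged with JSON-LD looks something like this:

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example headline of a news story",
  "datePublished": "2025-12-14T11:00:00+08:00",
  "author": {
    "@type": "Person",
    "name": "Example Reporter"
  },
  "publisher": {
    "@type": "NewsMediaOrganization",
    "name": "Example Newsroom"
  }
}
```

Search engines use this markup to surface articles in rich results; the same machine-readability also makes the content easier for AI systems to ingest.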

This used to be a symbiotic relationship. Optimizing for these platforms used to reward Rappler with substantial traffic, allowing us to scale. 

Now, with AI, we are being pushed again to “optimize” — this time, for answer engines. 

But here’s the dilemma: these now “agentic” systems are totally different beasts. They are information-hungry machines that want to be know-it-alls. 

Unlike helpful librarians, they parasitically — without asking for permission — extract and suck the juice off content they feed upon.

And they do not want to pay for that content. Believe me, we asked.

OpenAI did pay a handful of publishers. To this day, none of them are from the Global South.

And the problem is that these chatbots are catering to an audience that is increasingly being behaviorally engineered toward instant gratification. These audiences do not click on the links so subtly placed within AI-generated answers. Many times, they do not even check whether the links are real.

And the numbers show it. The traffic even the largest news publishers are getting from these parasitic systems is downright pitiful.

What is worse is that journalists, and the news industry itself, are not taking this seriously enough. Some major media networks are taking what they can by inking deals with the AI platforms. Others who have less clout are fading helplessly on the fringes. 

As another article from the NiemanLab series pointed out: “We’re not learning from the past. Instead, we’re approaching this new era of generative AI much like we did platforms.”

If this continues, the author said, “We guarantee the death of most remaining newsrooms.”

I agree. 

This is journalism’s Oppenheimer moment. 

The question is this: Should journalists and newsrooms succumb to this directive once again, hook, line, and sinker like we did with social media? 

The deeper question is: Does the public whom we serve ultimately benefit if we continue down this path of surrendering to the platforms? Or — for their sake and ours — should we push back and chart an alternative path, one that is not parasitic but focuses on achieving symbiosis?

If journalism is nothing more than a new information harvesting exercise for systems that transform that harvest into this product called “customized content,” what becomes of its soul and its mission?

These questions need to be confronted not just by us journalists, but also by citizens who believe that independent journalism needs to survive as a check on power, abuse, and corruption. 

We need supportive communities to collaborate with us on this journey. 

Over the past couple of years, Rappler has been building a mobile platform that will allow you to directly converse with us and with the rest of our community in a way that is safe and free from manipulation.  

We have also started working with other newsrooms in the Philippines and in the ASEAN region towards building a bottom-up approach to news circulation — a more sustainable path to growing audience and revenue beyond fickle Big Tech algorithms.

If you have not yet done so, download our mobile apps and subscribe to our newsletters so that you can have that direct line with us.

If you have not yet done so, please consider signing up for Rappler Plus, our premium membership program. And please help us convince others.

We need you to help us confront journalism’s Oppenheimer moment. 

Help us carve these pathways to retaining agency so we can sustain our mission in this world of generative AI. – Rappler.com
