The integration of Artificial Intelligence (AI) into journalism is not just a technological advancement; it’s a paradigm shift for both the profession and the business of making news. The Frontline Club event ‘Rise of the Machines: AI in Journalism’ on 24 May shed light on this transformation. As AI’s role in newsrooms continues to grow, the landscape of journalism is changing profoundly, raising questions that are both intriguing and essential.
Generative AI: the double-edged sword
Generative AI, exemplified by tools like ChatGPT, is rewriting the rules of content creation. Reuters Institute digital journalist Marina Adami’s observation captures the zeitgeist: “We’re at the cusp of a new era. AI isn’t just a tool; it’s becoming a collaborator.” This collaboration, however, comes with its challenges. The potential of AI to fabricate content that’s indistinguishable from human-generated work is both its strength and its vulnerability.
Dr. Bahareh Heravi added depth to this perspective, noting, “Generative AI is like fire. It can warm your house, but it can also burn it down.” The implications are clear: while AI can enhance journalistic capabilities, unchecked use can erode the very foundation of trust and authenticity.
“Over the last decade we’ve seen increasing use of AI throughout the media value chain; from metadata creation to sentiment analysis to recommendation engines. Generative AI is just the next iteration in this process,” said Patrick O’Connor-Read. Elaborating on the role of Large Language Models (LLMs), he added: “LLMs offer an evolution within this wider AI trend, but they are prone to hallucination. So the workflow will shift, rather than machines supporting humans, humans will support machines, providing input guidance and output verification/sensechecking.”
The trust equation: balancing innovation with integrity
For institutions like the BBC, trust is not just a value; it’s their currency. “In the age of misinformation, our commitment to truth is unwavering. But how do we integrate AI without compromising that trust?” David Caswell, former Executive Product Manager at the BBC, pondered during the Frontline Club event. Parmy Olson, a Bloomberg Opinion columnist covering technology with a particular focus on AI and chatbots, echoed this sentiment: “The challenge isn’t just about using AI responsibly; it’s about communicating its use transparently to our audience.” This transparency is the linchpin that holds the trust equation together. The traditional model of journalism was linear: journalists create, and consumers consume. AI is disrupting this linearity.
Patrick O’Connor-Read, who has navigated both sides of this equation, producing TV and digital content for over ten years with companies like Zatzu and researching applied AI, offers a unique perspective. He points out, “It’s an open secret that the business models of many media outlets constrain time intensive human research; a lot of this low to mid level work can be automated and achieve near parity outcomes, at least for perfunctory output.”
As David Caswell shared, “It’s no longer just about what we want to say; it’s about how the audience wants to hear it.” This shift towards consumer-centric content is revolutionary. Marina Adami highlighted its transformative potential: “Imagine a world where news adapts to you, not the other way around. That’s the promise of AI in journalism.” However, with this promise comes the responsibility of ensuring that customisation doesn’t lead to echo chambers, a concern raised by Heravi, a senior academic at the Institute for People-Centred AI at the University of Surrey.
O’Connor-Read believes that while news outlets are personality-led, the emergence of synthetic personalities is imminent. “News outlets are personality led, people return to their favourite brands and hosts who provide a clear point of view. These sources of authority will continue to command attention, but it is only a matter of time – weeks and months more than years and decades – before synthetic personalities emerge able to process and synthesize information beyond the capabilities of a human, and attract and retain an audience.” He continues to explore the boundaries of what pure/total AI formats look like with his current venture, Zolayola, focusing on blockchain & applied AI.
The automation wave
The article from The Economist, titled ‘The Third Wave of AI in Journalism,’ highlights the initial phase of AI in journalism as automation. Machines have been assisting in delivering news for years. The Associated Press began publishing automated company earnings reports as early as 2014. The New York Times leverages machine learning to decide how many free articles to show readers before they hit a paywall. This automation wave has been primarily data-driven, focusing on generating news stories from structured data like financial reports and sports results.
Augmentation and analysis
The second wave, as described by computational journalist Francesco Marconi in the Reuters Institute article titled ‘ChatGPT: Threat or Opportunity for Journalism?’, shifted the emphasis to augmenting reporting. AI was used to analyze large datasets and uncover trends. This wave saw the use of machine learning and natural language processing to provide deeper insights and context to news stories. The Argentinian newspaper La Nación’s use of AI to support its data team is a prime example of this phase.
The third wave: generative AI
The third wave, which we are currently experiencing, is characterized by generative AI. These are large language models capable of generating narrative text at scale. While they offer applications beyond simple automated reports, they come with their own set of challenges.
Madhumita Murgia, recently appointed AI editor at the FT, points out that while generative AI can synthesize information and make edits, it lacks the originality and analytic capability that distinguish quality journalism.
Yup, that’s a Gannett paper running AI-generated high school football stories. Yup, it’s terrible.
— Steve Cavendish (@scavendish) August 21, 2023
The future: collaboration or replacement?
The overarching sentiment from all sources is that while AI has a significant role to play in the future of journalism, it cannot and should not replace human journalists. AI can assist, augment, and even automate certain tasks, but the human touch, analysis, and intuition remain irreplaceable.
The future of journalism in the age of AI is not about machines taking over but about journalists and AI working in tandem to deliver accurate, timely, and insightful news to the masses.
Insightful report from Cointelegraph’s Savannah Fortis on ‘Media companies grapple with AI both inside and outside newsrooms’ — published 29 August 2023.
The article highlights that many leading media companies, including CNN, The New York Times, and Reuters, have taken steps to prevent AI technologies like OpenAI’s ChatGPT from scanning their online content. These companies have coded their platforms to block OpenAI’s web crawler, GPTBot.
It also mentions that companies in other sectors, such as tech giants Samsung and Apple, have banned the internal use of AI chatbots over data-security concerns. In contrast, some companies like Netflix are exploring the use of AI, as indicated by their job listings for high-paying AI roles.
Standout questions
Ethical considerations: How should media companies balance the use of AI for efficiency with ethical concerns like data privacy and copyright infringement?
AI in journalism: Could AI ever replace human journalists, or will it serve as a tool to assist them?
Consumer trust: With nearly three-quarters of consumers concerned about the unethical use of AI by firms, how can companies build trust while integrating AI into their operations?
And then this from Morning Brew popped into my in-box:
After being dragged on social media for its hilariously bad AI-generated high school football reporting, the Columbus Dispatch and its owner Gannett announced they are pausing their local AI sportswriting initiative.
What happened: An article written by AI recapping a football game in Westerville, Ohio, went viral on X for being borderline illegible.
The write-up used the phrase “a close encounter of the athletic kind” to describe the game.
One sentence reads: “The Warriors chalked up this decision in spite of the Warhawks’ spirited fourth-quarter performance,” which makes perfect sense.
The Dispatch’s ethical guidelines state that AI content has to be verified by humans before being used in reporting, but it’s unclear whether that step was taken. Another AI-written sports story in the Dispatch initially failed to generate team names, publishing “[[WINNING_TEAM_MASCOT]]” and “[[LOSING_TEAM_MASCOT]].” The Dispatch has since updated AI-generated stories to correct errors.
Big picture: Major news outlets are still figuring out how to incorporate AI into their reporting process. Reuters, the AP, and others have published guidelines to define AI’s role in the newsroom, while Google is reportedly testing an AI product that helps journalists produce news stories. But expect more close encounters of the robot kind: according to Axios, experts estimate that within a few years 90% of content on the internet will be AI-generated.—CC