The arrival of generative artificial intelligence has accelerated the use of AI across many fields, including journalism. AI is most visible in journalism when things go wrong: some newsrooms have published AI-generated articles riddled with errors or offensive suggestions, and there is widespread anxiety that AI will be used to replace journalists at low cost. But a new global survey shows the ways AI has already made its way into the news business, even as journalists worry about its implications, and those uses go well beyond writing articles.
The report was published this week by JournalismAI, an initiative of Polis, the journalism think tank at the London School of Economics and Political Science. It is supported by the Google News Initiative.
“Creating change: a global survey of what news organizations are doing with artificial intelligence” includes the perspectives of “more than 120 editors, journalists, technologists and media creators from 105 small and large newsrooms in 46 countries.” JournalismAI does not claim that the survey is representative of the entire global industry, but it does give an idea of how the media market is using these new technologies.
More than 75 percent of respondents used artificial intelligence tools at some point in the information process.
Among respondents, more than 75 percent used AI somewhere in the news gathering, production and distribution chain. More than half cited greater efficiency and increased productivity as reasons for using it; ideally, AI can automate monotonous and repetitive tasks.
About a third of respondents said they expected AI technologies to help them reach a broader audience, personalize reader experiences, and improve audience engagement.
At the same time, more than 60 percent of respondents were concerned about the ethical implications of AI integration for editorial quality and for other aspects of journalism such as accuracy, fairness and transparency. In general, newsrooms continue to view human intervention as crucial to mitigating the potential harms of AI systems, such as bias and inaccuracy. But even as respondents worried that AI technologies could exacerbate biased news coverage and the misrepresentation of marginalized groups, few organizations offered strong examples of possible solutions.
The report says AI presents particular difficulties for newsrooms in the Global South. Most AI tools are developed with a focus on English (and on particular accents of English). Funds and resources are often concentrated in a few countries, and different political realities can affect people’s trust in AI.
However, the survey revealed that 90 percent of newsrooms already use some form of AI in news production, 80 percent in news distribution, and 75 percent in news gathering. News gathering tasks include automated transcription and translation, text extraction from images, and web scraping or use of automated summarization tools. News production could include translating articles into other languages, proofreading, writing headlines, or writing full articles. Distribution includes the use of AI-powered search engine optimization, as well as things like tailoring content to a specific audience.
There are big differences in scale here: spell-checking an article with AI and using AI to generate an entirely new article, for example, hand over very different levels of control to AI tools. The report does not specify how many newsrooms use the technology for each specific task, but it does note that news distribution has the widest range of use cases, being mentioned most frequently as the area of the newsroom most affected by AI.
Although most respondents were concerned about the implications of adopting AI tools, only about a third said their organization had an AI strategy or was currently developing one.
Tech companies like Google are incorporating artificial intelligence tools into their core businesses, even as those tools raise new ethical and legal concerns. Several newsrooms have blocked GPTBot, OpenAI’s web-crawling tool, from accessing their data, and authors have sued OpenAI and Meta for using their work to train AI. At the same time, news organizations have also reached agreements with AI companies: The Associated Press signed a deal with OpenAI earlier this year.