
The limits of the AI-generated ‘Eyes on Rafah’ image


As “All eyes on Rafah” circulated, BBC Verify journalist Shayan Sardarizadeh posted on X that it “has now become the most viral AI-generated image I have ever seen.” It’s ironic, then, that all those eyes on Rafah aren’t actually looking at Rafah at all.

Parsing the role AI plays in spreading the news gets complicated quickly. Meta, as NBC News noted this week, has made efforts to restrict political content on its platforms even as Instagram has become a “crucial outlet for Palestinian journalists.” The result is that real images of Rafah can be restricted as “graphic or violent content,” while an AI-generated image of tents can spread far and wide. People may want to see what’s happening on the ground in Gaza, but it’s an AI illustration that is being allowed into their feeds. It’s devastating.

The Monitor is a weekly column devoted to everything happening in the WIRED world of culture, from movies to memes, TV to Twitter.

Meanwhile, journalists find themselves in the position of having their work fed into large language models. On Wednesday, Axios reported that Vox Media and The Atlantic had reached agreements with OpenAI that would allow the ChatGPT maker to use their content to train its AI models. Writing in The Atlantic itself, Damon Beres called the deal a “devil’s bargain,” pointing to the ethical and copyright battles AI is currently fighting and noting that the technology “hasn’t exactly felt like a friend to the news industry,” a statement that may one day be committed to a chatbot’s memory. Fast-forward a few years, and much of the information out there (much of what people “see”) will come neither from eyewitness accounts nor from a human being looking at the evidence and applying critical thinking. It will be a facsimile of what journalists reported, presented in whatever form is deemed appropriate.

Admittedly, that’s a drastic view. As Beres noted, “generative AI could be good,” but there are reasons for concern. On Thursday, WIRED published a massive report analyzing how generative AI is being used in elections around the world. It highlighted everything from fake images of Donald Trump with Black voters to fake robocalls from President Biden. The report will be updated throughout the year, and I suspect it will be hard to keep up with all the misinformation coming out of AI generators. An image may have put eyes on Rafah, but it could just as easily put eyes on something false or misleading. AI can learn from humans, but it can’t save people from the things they do to each other.

Loose threads

Search is screwed. Like a dumb Bond villain, The Algorithm has been menacing internet users for years. You know the one I mean: the mysterious system that decides which X post, Instagram Reel, or TikTok you should see next. But one of those algorithms really made its presence felt this week: Google’s. After a rough few days during which the search giant’s “AI Overviews” came under fire on social media for telling people to put glue on pizza and eat rocks (not at the same time), the company scrambled to scrub the bad results. My colleague Lauren Goode has already written about the ways search (and the results it serves up) as we know it is changing. But I’d like to offer a different argument: Search is a bit screwed. It seems like every query these days summons a chatbot no one wants to talk to, and I personally spent much of the week trying to find new ways to search that would surface what I was actually looking for, rather than an Overview. Oh, and then there was that whole thing about the 2,500 documents related to Search being leaked.
