Social media sites have been flooded with false information in the wake of Sunday’s terror attack that killed 15 people and wounded dozens more at a Hanukkah celebration in Bondi Beach, Australia. One AI-generated image in particular has become extremely popular among people spreading disinformation on X.
It’s a photo-realistic image made to look like one of the shooting victims was having fake blood applied before the attack. But nothing about it is real. To make matters worse, tools commonly used to verify the authenticity of images are telling people the photo is legit.
Arsen Ostrovsky, an Israeli lawyer who moved to Australia just a couple of weeks ago, gave an interview to Australia’s Nine News at the scene of the attack on Sunday. Ostrovsky’s head was wrapped in bandages and his face was covered in blood, a shocking image similar to a selfie he had taken earlier.
But those real images were hijacked and run through AI to create a fake image that went viral over the following two days. The AI image shows a woman painting fake blood onto a person made to look like Ostrovsky, who’s smiling. The image is intentionally composed to look like a photo taken behind the scenes at a film or TV shoot.
The evidence this image is AI
How do we know it’s fake? For starters, there are perhaps a dozen red flags that anyone can spot on their own without the assistance of any additional tech.
Figures in the back of the photo contain the most glaringly obvious AI clues, with warped cars that appear to melt together and support staff with deformed hands. Many versions of the image spreading online appear to crop out the background elements, probably to better obscure the AI artifice.
The text on Ostrovsky’s t-shirt is also mangled in the way AI-generated text often is. The blood stains on the fake shirt don’t match the stains visible in the Nine News interview. And if you zoom in closely, the makeup artist in the AI image appears to have an extra finger that balloons in an unnatural way.
AI image checkers are notoriously unreliable, but there is one method that’s far more dependable.
The AI watermark
Google’s AI image generation tools create an invisible watermark. The watermark initiative is called SynthID and was started a couple of years ago, but at the time Google didn’t release any tools that let the public check for the watermark themselves. That changed just last month, when Gemini was given the ability to spot it.
Now, anyone can upload an image to Gemini and ask if it has the SynthID mark. The fake image of Ostrovsky has the mark, according to a test Gizmodo conducted Tuesday. To be clear, the absence of SynthID doesn’t mean an image is real; it just means the image wasn’t made with a Google tool that embeds the watermark.
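If you’d rather script the check than use the Gemini app, something like the sketch below is possible with Google’s google-genai Python SDK. This is our illustration, not part of Gizmodo’s test: the file name and model name are just examples, and we’re assuming the API answers the SynthID question the same way the consumer app does.

    # Minimal sketch (an assumption, not Gizmodo's method): send an image to
    # Gemini through the google-genai SDK and ask about the SynthID watermark.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")  # key from Google AI Studio

    with open("suspect_image.jpg", "rb") as f:  # hypothetical file name
        image_bytes = f.read()

    response = client.models.generate_content(
        model="gemini-2.5-flash",  # illustrative model name
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            "Does this image contain a Google SynthID watermark?",
        ],
    )
    print(response.text)  # Gemini's verdict on the watermark

The same caveat applies here: a “no watermark” answer only rules out Google’s own generators, not AI in general.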
Other AI image detectors are not reliable, and that’s a big problem in a situation like this. People who have asked Grok and ChatGPT over the past two days whether the image is real have been assured that it’s not AI. In fact, the chatbots insist quite firmly.
Grok fails
Grok, which is notoriously unreliable, has been insisting the AI image is real, even leaving some room at the end of one explanation for the attack itself being a false flag because “some online posts suggest” otherwise.
“No, the image doesn’t show signs of being AI-generated—details like shadows, textures, and objects look consistent with a real photo,” Grok wrote in response to one inquiry Monday. “It depicts a makeup artist applying fake blood on what seems like a film set. Mainstream reports confirm the Bondi Beach incident as real, though some online posts suggest otherwise.”
Grok leans heavily on tweets from X for information, so it makes sense that it would take all of that nonsense as a sign the attack could’ve been a false flag.
ChatGPT fails
Gizmodo also asked ChatGPT whether the image was real. And just like the people on X who’ve pointed to responses from the OpenAI chatbot as “proof” the image wasn’t created with AI, we got a confidently wrong answer.
As ChatGPT wrote in response to a question from Gizmodo: “There’s no clear sign that this image is AI-generated. Based on what’s visible, it looks like a real behind-the-scenes photograph from a film or TV set.”
The chatbot even gave a bulleted list explaining why it wasn’t AI, noting a “plausible context,” “messy realism,” and “consistent fine details.” The bot also said the image had “natural human anatomy,” something that’s obviously untrue to anyone who closely examines the fake photo.
Claude fails
Gizmodo also uploaded the image to Anthropic’s Claude, which responded: “This is a great behind-the-scenes photo from what appears to be a film or TV production! The image shows a makeup artist applying special effects makeup to create realistic wound effects on an actor.”
When asked whether the image was AI-generated, Claude responded: “No, this is not AI-generated. This is a real photograph from an actual film or TV production set.” The chatbot gave a bulleted list similar to ChatGPT’s, with reasons the image was real, including “professional makeup work” and “real physical details.”
Copilot fails
We also tested Microsoft’s Copilot, and you’re never going to guess. Yeah, Copilot also called the image real, giving a response similar to those of ChatGPT, Claude, and Grok.
The other free AI detectors fail
Gizmodo tested some of the top AI image detectors that appear when the average internet user searches Google to see what they’d say about this clearly fake image. And it was just as bad as the major chatbots.
Sightengine said the image was real, putting the chance it was created with AI at just 9%. WasItAI responded similarly, writing, “We’re quite confident that NO AI was used when producing this image.” MyDetector said there was a 99.4% chance it was real and not created with AI.
AI detectors focused on text are also unreliable, just in case you’re wondering. For example, they’ll flag things like the Declaration of Independence as AI.
X fails
One blue checkmark account on X posted screenshots of an AI-checker that claimed the fake image of Ostrovsky was human-generated and not AI. And the person behind the account claimed it couldn’t be AI because the surroundings looked like Bondi Beach, an absurdly stupid claim.
AI can create images that look like any environment. But the response speaks to one of the problems with social media platforms like X, where people who spread conspiracy theories have been elevated.
Elon Musk got rid of so-called legacy checkmarks after he bought the site in late 2022, badges that were used to verify that a person was who they said they were. Musk allowed anyone with $8 to spend to get “verified,” even though the company doesn’t verify anyone’s real identity.
And what’s worse, the algorithm pushes tweets from blue checkmarks higher in the replies of any given post, meaning that the people who are getting the most visibility are the kinds of people who want to give Musk money—which is to say, the dumbest people on the planet.
The fallout in Australia
Ostrovsky, who told Nine News he also survived the Oct. 7, 2023, terror attacks in Israel, posted to X on Tuesday to acknowledge he’d seen the claims that the Bondi Beach attack was staged and his injuries were fake.
“Yes, I am aware of the twisted fake AI campaign on @X suggesting my injuries from Bondi Massacre were fake. I will only say this. ‘I saw these images as I was being prepped to go into surgery today and will not dignify this sick campaign of lies and hate with a response.‘”
Other victims of the attack include a 10-year-old girl and an 87-year-old Holocaust survivor, who were among the 15 dead. The first funerals for the victims, including Rabbi Eli Schlanger and Rabbi Yaakov Levitan, will be held on Wednesday, according to the Guardian.

The two attackers have been identified as 50-year-old Sajid Akram, who was killed by police at the scene, and his 24-year-old son, Naveed, who was shot and injured by police and remains in hospital. The two men were reportedly inspired by the Islamic State terror group and had recently traveled to the Philippines, though it wasn’t clear what they were doing there.
Australia has strict gun laws, passed after a horrific mass shooting in 1996 that killed 35 people, but there’s been a common misconception in the decades since that it’s impossible to get a gun in the country. All six of the guns used in Sunday’s attack were obtained legally, according to police.
Australia’s Prime Minister Anthony Albanese has come out in favor of stricter gun laws, advocating for more frequent checks on people who hold gun licenses. The dead attacker got his gun license a decade ago, and it sounds like police haven’t done any kind of check since.