The TikTok logo is displayed on a smartphone in this arranged photograph in London, U.K., on Monday, Aug. 3, 2020. (Hollie Adams/Bloomberg)

In a recent post on TikTok, a woman stands solemnly in front of armed soldiers in bulletproof vests.

“My body was exposed, almost lifeless, in the back of the truck by militants of Hamas,” the woman says in Spanish over melancholic flute music. “My name is Shani Louk. Sadness and uncertainty are now part of my reality. My identity and my dignity were taken from me at that moment.”

The woman shares what she says are her opinions about the attack and references her mother’s calls for her safety. But it’s not Louk, the 22-year-old German-Israeli citizen killed in Hamas’ Oct. 7 attack on a music festival. It’s a deepfaked image of her: a video put together using readily available artificial intelligence editing tools that can create a talking, moving imitation of any person from just a photo and a few minutes of effort.

Louk had often posted on social media, making her likeness easy to borrow for other content. The post, which went up the day after the attack, was viewed more than 7 million times.

Bloomberg reviewed dozens of deepfake videos on TikTok portraying victims of tragedy in various languages, some racking up millions of views and thousands of comments. The most recent are videos of Israel-Hamas war victims, which appear designed to generate both sympathy and virality, though their creators are often anonymous. AI researchers call these “digital resurrections.” They follow a similar pattern: A tragic death is covered by the news, and within days or even hours, users post videos of that person’s likeness talking about how they died. The format usually opens with an introduction from the victim’s perspective, with a deepfaked image of the person on screen telling the story of their death.

The deepfaked videos of the dead, which break TikTok’s rules while targeting people who can no longer give their consent or report the content for takedown, are going viral on the app and slipping past its content moderation machine. TikTok has rules against portrayals of private citizens and children in so-called “synthetic” AI videos, yet such clips often amass thousands to millions of views. TikTok removed the videos in Bloomberg’s review immediately after receiving the examples via email. The underlying technology, which relies on tools like DeepWord and DeepBrain, will continue to improve, producing more realistic portrayals that will be harder to detect.

“Like other platforms, TikTok continues to invest in detection, transparency, and industry partnerships to address this rapidly evolving technology,” said a spokesperson for TikTok, which is owned by Chinese tech giant ByteDance Ltd. “We have firm policies in place to remove harmful AI-generated content from our platform.”

The videos feature dead children, private citizens and celebrities, and none of the posts reviewed show evidence that the victim or their survivors consented to the use of that person’s likeness.

Multiple AI-generated videos of Hamas victim Ranani Glazer purport to show the 23-year-old Brazilian talking about his death. One had more than 81,000 views and was one of the videos recommended when searching his name on TikTok, alongside other news-related content about the attacks by Hamas, which is designated as a terrorist organization by the U.S. and E.U. The video was removed after Bloomberg’s request for comment.

The face of Vladimir Popov, a Russian man killed by a shark in Egypt in June, was reanimated from his passport photo in a TikTok video posted days after his death; the video claims his spirit now watches over his father. “I could feel my life slipping away and I knew I was going to die,” the voice-over says in English. “In those final moments, I thought about all the good times I had with my dad. Papa, I love you.”

Dozens of these videos remain live, untouched by TikTok’s content moderation system, allowing the posters to grow their accounts while the app benefits from the engagement the videos generate. Because the videos appear alongside typically lighthearted TikTok content in a quick scroll, users are exposed to the manipulated images without any context or explanation, said Henry Ajder, a researcher on deepfakes and generative artificial intelligence.

“You are scrolling, scrolling, flicking, flicking through videos, and it might be that you see something for three seconds and then you get bored and move on,” Ajder said. “But in that three seconds you’ve seen something which isn’t real, and you haven’t got the part of the video where it’s disclosed or context is provided.”

Once known as a place of levity and comedic videos during the pandemic lockdowns, TikTok is now frequently caught up in global politics. More than 1 billion people use the app, often seeking out the latest on major news or being served videos about unfolding events. Hashtags related to the Israel-Hamas war have amassed billions of views, and the platform has been pushing back on claims that it is promoting pro-Palestine content.

Like most social media platforms, the app has also become a place where users can post unverified narratives and misinformation that quickly rack up views during the chaos of breaking news. The app was built to boost videos on topics users are interested in, rather than videos from people with lots of followers, so anyone can take part in a viral trend. That design, combined with the proliferation of cheap AI technology, makes TikTok particularly vulnerable to deepfakes.

In 2021, videos that appeared to show actor Tom Cruise biting into lollipops or attempting coin tricks began showing up on TikTok users’ feeds. The lighthearted videos, which the creator told CNN were a product of more than two months of training AI technology on Cruise’s face, racked up millions of views and helped the man behind it launch an AI company.

A now-infamous song using the deepfaked voices of Drake and The Weeknd was uploaded to TikTok and music streaming services, garnering wide public attention. Video and audio of politicians that range from manipulated to fully fake have been posted widely on social media. Congress began discussing legislative steps to protect celebrities’ and artists’ voices. Mostly absent from the conversation are protections for private citizens.

Debates over deepfakes of private citizens have centered on nonconsensual pornographic content and on cloned voices used to defraud elderly relatives into wiring money. A small handful of campaigns have used the likenesses of deceased individuals, with the consent of survivors, to promote a specific message. After Mexican journalist Javier Valdez Cardenas was murdered for his coverage of drug cartels, his AI-animated likeness lived on in an award-winning campaign to raise awareness about the dangers facing members of the media.

“Telling stories is oftentimes the best way to persuade your audience or to engage them, and in that case, having a deceased person talking about their own stories could be very effective in conveying ideas or in persuading people changing their views,” said Chris Chu, an assistant professor at the University of Florida’s College of Journalism and Communications, who has researched deepfaked videos as a narrative persuasion tool.

But on social media, without the consent of the deceased or their survivors, a video that one person sees as well-meaning advocacy or a memorial could strike another as disrespectful or clout-chasing. In the videos reviewed by Bloomberg, users are already divided.

“Why would you even make this,” asked one user on an AI-generated video posted in September portraying George Floyd, the Black man killed by police officers in 2020. Others show support for the victims. “Elena needs justice!!!!” wrote one user on a video of 17-month-old murder victim Elena Hembree, whose digital resurrection told the story of her rape and the injuries that led to her death. That post, captioned with a plea to “be her voice and share her story,” garnered 2.3 million views.

“There is an ethical and moral question here, and it’s a place where I think reasonable people can disagree,” said Hany Farid, professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at the University of California at Berkeley.

That’s left platforms like TikTok to decide where those moral and ethical lines fall, and so far its attempts to police them have been spotty. According to TikTok’s community guidelines, “synthetic or manipulated media that shows realistic scenes” must be clearly disclosed. The app’s rules don’t allow any content that contains the visual or audio likeness of a private person or a minor younger than 18.

TikTok says users must disclose AI-generated content that contains realistic representations using a label the app has introduced, which tells viewers the creator marked the video as made with AI. Only a handful of the digital resurrections reviewed by Bloomberg carried the tag, and most portrayed private figures or children.

“Gruesome” or “disturbing” content isn’t allowed, either. Judging what crosses that line is left to TikTok, its moderation algorithm and the human moderators who review videos. Some of the digital resurrection videos recount details of abuse and murder through AI-generated images of dead children.

TikTok also says “inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent” is not allowed. It’s impossible to know whether statements delivered via digital resurrections about how the victims felt during their deaths are accurate.

And social media platforms are, at their core, set up to reward posts that people engage with, whether positively or negatively. When people like, comment on, repost or respond to videos and photos on platforms like TikTok, that’s taken as a signal of interest, so the content often gets pushed to even more people.

“They’re going to do whatever gets them attention, whether it’s for good attention or bad attention, and these videos get attention,” said UC Berkeley’s Farid. “Part of it’s just clicks, part of it is sensationalism, part of it is just morbid curiosity. The realism, the ease of use, the access to the technology will keep getting better.”


©2023 Bloomberg L.P.
