New Video Sparks Debate Over Authenticity of Fugitive’s Claims

A recent video allegedly featuring fugitive Miloš Medenica, who was sentenced on January 28, 2024, to ten years and two months in prison for his role in a cigarette smuggling network, has stirred significant discussion on social media. The video reportedly shows Medenica addressing the Director of the Police Administration, Lazar Šćepanović, declaring he will continue to speak out until he is arrested.

In the video, Medenica purportedly states, “I will speak out every day until I am arrested or until they deny that I am a bot.” This has fueled speculation about whether the footage is a genuine recording or a product of artificial intelligence (AI). The Police Administration has not commented officially since the first such video, which was confirmed to be AI-generated, surfaced on social media.

Experts have weighed in on the implications of AI technology in such contexts. Doc. Dr. Nikola Cmiljanić, a professor at the Faculty of Information Technology and a digital forensics expert, explained that the authenticity of videos featuring wanted individuals is increasingly difficult to ascertain. “In serious cases, one should not rely on a single ‘quick check’ but rather on a combination of methods and multiple independent indicators,” he emphasized.

Understanding AI-Generated Content

AI-generated or AI-modified videos are produced by generative models that can alter or synthesize images, audio, or both. Cmiljanić highlighted that these technologies allow for sophisticated manipulations, such as replacing faces, synchronizing lip movements with audio, or generating entirely new voices that sound like real individuals.

He noted that the quality of these videos can be so high that even trained observers may struggle to determine their authenticity based solely on visual inspection. “Deepfake technology is a specific subset of AI video content designed for impersonation, making it appear as though a person is saying or doing something they are not,” Cmiljanić explained.

The broader category of AI-generated videos includes not just face-swapping but also complete synthetic scenes and other manipulations that can misrepresent reality. The ability to create convincing content raises critical questions about trust and verification, especially in sensitive situations involving sought-after individuals.

The Challenges of Verification

Cmiljanić pointed out that forensic analysis heavily relies on the quality and originality of the material in question. The most reliable results stem from accessing the original file, which allows for thorough technical analysis. Factors such as file structure, codecs, processing traces, and the consistency of lighting and shadows are crucial in determining authenticity.
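To illustrate the kind of technical checks Cmiljanić describes, here is a minimal sketch that pulls container, codec, and encoder metadata from a video file using ffprobe (part of FFmpeg) via Python. The file name is hypothetical, and in real forensic work such metadata is only one weak indicator among many, since it can be stripped or altered by resharing.

```python
import json
import subprocess

def inspect_video_metadata(path: str) -> dict:
    """Dump container and stream metadata with ffprobe (requires FFmpeg).

    Re-encoded or AI-processed files often carry unexpected encoder tags,
    unusual codecs, or stripped metadata. This is a weak indicator,
    never proof on its own.
    """
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    fmt = info.get("format", {})
    print("Container:", fmt.get("format_name"))
    print("Encoder tag:", fmt.get("tags", {}).get("encoder", "<missing>"))
    for stream in info.get("streams", []):
        print(stream.get("codec_type"), "codec:", stream.get("codec_name"))
    return info

# Hypothetical file name, for illustration only.
inspect_video_metadata("suspect_clip.mp4")
```

An original camera file and a copy that has passed through social media platforms will typically differ sharply in exactly these fields, which is one reason analysts prize access to the original.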

As for detection tools, Cmiljanić noted that while some forensic tools are available, they are still in the early stages of development and standardization. Their reliability can vary significantly, especially when videos have been reshared many times or heavily compressed.

“Many traces can be lost or significantly altered in such instances,” he added. The challenge extends beyond just identifying whether content is AI-generated; it also involves attribution—determining who created and disseminated the video. This often requires a mix of digital forensics and traditional investigative methods.
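To show why such automated checks are fragile, the sketch below implements one deliberately naive heuristic: measuring frame-to-frame jitter of detected face regions with OpenCV, on the assumption (not drawn from the article) that an unstable face box can hint at manipulation. Heavy compression and ordinary camera shake produce the same symptom, which is one concrete reason reliability degrades when videos are reshared. The file name is illustrative.

```python
import cv2

def face_box_jitter(path: str, max_samples: int = 300) -> float:
    """Average frame-to-frame movement of the largest detected face box.

    A 'swimming' face region can be one weak hint of synthesis, but
    recompression and handheld footage trigger it too, so this is
    illustrative rather than diagnostic.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(path)
    prev_center, jitters = None, []
    while len(jitters) < max_samples:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # no face found in this frame
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        center = (x + w / 2.0, y + h / 2.0)
        if prev_center is not None:
            jitters.append(abs(center[0] - prev_center[0]) +
                           abs(center[1] - prev_center[1]))
        prev_center = center
    cap.release()
    return sum(jitters) / len(jitters) if jitters else 0.0

# Hypothetical file; a high value is a prompt for deeper analysis, not a verdict.
print("Mean face-box jitter (px):", face_box_jitter("suspect_clip.mp4"))
```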

Cmiljanić urged caution when encountering sensational videos, particularly in sensitive contexts. He advised that verification should precede conclusions drawn from initial impressions. “Today, it is technically possible to create convincing videos that can cause panic, compromise investigations, or tarnish reputations,” he warned.

The case of Miloš Medenica illustrates the broader issue at play. After his conviction, he reportedly fled Montenegro, crossing the border into Serbia illegally. Sources suggest he may be receiving assistance from influential individuals in Serbia, connections reportedly traced through his mother.

As discussion of the video’s authenticity continues, Milica Kovačević, Program Director at the Center for Democratic Transition, said her organization had tried to verify the footage with the best available tools but could not definitively determine whether it was AI-generated or authentic. She emphasized that the authorities should be transparent about how they reached their own conclusions on the video’s authenticity.

The implications of such technologies on public perception and legal proceedings are profound. As AI capabilities evolve, the stakes for verification and trust in digital media will only grow, making it essential for authorities and the public alike to approach such content with skepticism and a demand for rigorous verification methods.