Photo credit: Freepik
In today's world, where every man and his dog seems to be diving into artificial intelligence (AI), it's becoming increasingly clear how easy it is to fall into traps that can undermine well-intentioned efforts.
With trust becoming a major topic in the field of AI, it's crucial to approach machine learning-powered initiatives with caution.
Spotting AI-generated images, videos, audio, and text is becoming more challenging as technological advances make them nearly indistinguishable from human-created content.
This leaves us vulnerable to manipulation by disinformation. By understanding the current state of AI technologies used to create misinformation and recognizing the telltale signs of fake content, you can protect yourself from being deceived.
Remember the image of Pope Francis wearing a puffer jacket or the video of world leaders like Donald Trump and Vladimir Putin dancing? While these instances may have been in jest, they clearly illustrate how easy it is to create AI-generated content that fools at least some members of the public.
Unfortunately, there are bad actors willing to use this technology for more damaging purposes — such as the recent, unfounded accusations of Kamala Harris' U.S. election campaign team using AI to further their agenda.
World leaders are concerned. According to a report by the World Economic Forum, misinformation and disinformation may "radically disrupt electoral processes in several economies over the next two years," while easier access to AI tools has "already enabled an explosion in falsified information and so-called 'synthetic' content, from sophisticated voice cloning to counterfeit websites."
The Byteline recently spoke to two AI experts ‑ Youssef Jalloul, Founder & CEO of Inova AI Solutions, and Isabella Williamson, Founder & CEO of Tyde AI ‑ to gain further insight into how to avoid the common pitfalls of AI-generated media.
"AI is as dangerous as the person using it," said Youssef. "If a content creator is using AI to optimize their work and save time, then the result will be great, as we have seen on many occasions.
"However, if someone uses AI for misinformation, fraud or other dark purposes, then that’s where the danger comes in. AI should be used to enhance our capabilities and eventually free up more time for us to spend on valuable things. Sadly, in the wrong hands it will be used to optimize fraudulent activities, so we have to stay aware."
Isabella added: "The malicious use of deepfakes can spread mis/disinformation online which can confuse users and manipulate opinions, leading to cases of defamation, fraud and even social unrest.
"We’re seeing this play out mostly in politics and entertainment, where deepfakes have been used to manipulate the words and actions of political figures and celebrities.
"Deepfakes also pose a great security risk. Malicious actors have used high-end deepfake technology as part of sophisticated scams on consumers and businesses. This is bringing consumer trust in online content to an all-time low.
"However, we need to remember that this technology can be used for the benefit of humanity as well. In Venezuela, reporters are using AI-generated news anchors to protect their identities from the government.
"Malaria No More UK partnered with David Beckham and created a deepfake of the football star to deliver the message about Malaria in nine different languages. The duality of this technology is scary. It all comes down to the intent of the creator."
There are steps you can take to verify that what you are viewing is genuine.
"The best way is to fact-check anything you see online before you take action or make a decision, because we are getting to the point where anything is possible and fakes are extremely hard to detect," said Youssef.
"There are some platforms that allow you to check for plagiarism or AI engine generation, but not everyone will go and pay to test the content themselves. So I believe there is a huge opportunity for platforms offering this service and integrating into existing legacy platforms to automate detection and identify fraudulent content.
"The idea is not to stop people from using AI to generate content but to identify content that can potentially harm or fool users ‑ similar to harmful content restrictions on social platforms, which reject your content when it doesn't abide by their safety policy. The time will come when this will be automated, and platforms will have to regulate what they publish."
Isabella offered further insight, including new training classes that can enlighten the general public about the pros and cons of AI.
"There isn’t one specific method or trick for identifying deepfakes with the naked eye,” she said. "For photos and videos, you need to focus on the little details of human features and how our environment works. Synthetic images may depict skin as too smooth or wrinkled, or the physics of lighting may be off. For videos, pay attention to lip movements and blinking ‑ do they look natural?
"Audio can be trickier since there’s less to analyze, but AI-generated voices often sound flatter ‑ more robotic ‑ and their speech patterns can feel off. The trick is to develop a sharp eye ‑ and ear ‑ for inconsistencies that just don’t add up.
"So the key takeaway is refining your ability to pay attention and analyze what does and doesn’t make sense in the natural world.
"I usually start my AI training sessions with an exercise to practice deepfake detection. I show the class three images of an AI-generated woman modeling a new clothing line and ask them to spot the traits that reveal she's synthetic, even though all three images are photo-realistic. The more artistic and analytical participants usually spot the traits the fastest.
"Whenever you suspect a photo, video or audio recording has been manipulated, start by asking ‘What elements of this content are most likely to have been manipulated?’, then work from there by asking ‘What doesn't make sense or feel right?’"
With the new tech comes the race to regulate it globally, just as happened with the birth of the internet. Youssef feels urgent regulation needs to be put in place.
"Stricter regulations are needed on developers, publishers, platforms and users," he said. "There should be a general consensus on protecting people from fraud. Just as regulations such as GDPR came into play to protect personal data, other regulations will come in to protect users from fraudulent AI-generated content.
"But I feel this is still a few years off. Policy is always late, and action is usually taken only after major events influence public opinion. If we compare it to blockchain or cryptocurrency, policy took a good seven-plus years, and there are markets that are still not regulated."
Isabella concurs that stricter rules are required.
“Yes, the damage that the misuse of deepfakes can cause warrants tighter restrictions," she added. "The challenge we’re facing is that the technology is advancing at such speed that regulation is struggling to catch up.
"I’m sure we will see more regulation around the creation and dissemination of deepfakes in the future, but I’m intrigued to see how this will work and whether it will have a real impact."