Have you seen Intel's breakthrough FakeCatcher, which uses facial blood flow to separate deepfakes from reality with 96% accuracy? With increasingly high-quality generative AI fakes being used in complex schemes, these fake images, videos, and audio clips in which a person is replaced with someone else's likeness desperately need detection!
Deepfakes are THE go-to tool for driving the spread of misinformation!
The continually improving deepfake landscape now includes interactive and compositional deepfakes. Interactive deepfakes offer the illusion of talking to a real person. Compositional deepfakes are created by bad actors who string together many deepfakes to build a synthetic history (terrorist attacks that never happened, fictional scandals, or fake evidence to support a conspiracy theory).
Now, deepfakes of business leaders and celebrities including Elon Musk, Tom Cruise, and Leonardo DiCaprio have shown up in advertisements – often without their permission.
In 2022, people all around the world saw the Ukrainian President ask his soldiers to surrender – in a deepfake video.
Deepfake videos of political candidates in ads are illegal in California and Texas, but that isn't the case in most states. North Carolina fought this problem repeatedly during the 2022 races:
Mail ads showing legislators with “defund the police” shirts that they didn’t wear.
A candidate shown in front of a police lineup wall, even though he was never arrested.
A TV ad featuring a deepfake video of an opposing candidate saying something that he never said.
How does FakeCatcher work in real-time and so accurately?
It is based on photoplethysmography (PPG), a method for measuring changes in blood flow in human tissue. If a real person is on screen, their skin changes color ever so slightly as blood is pumped through their vessels. Deepfakes can't replicate this subtle change in complexion (at least not yet).
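To make the idea concrete, here is a minimal sketch of how a PPG-style signal can be pulled out of ordinary video. This is not Intel's FakeCatcher pipeline; the function names, the pre-located face box, and the 0.4 threshold are illustrative assumptions. It simply shows the core intuition: real skin carries a faint periodic color change at heart-rate frequencies, which synthetic faces typically lack.

```python
# Illustrative sketch only (assumed helper names, not FakeCatcher's actual method):
# extract a crude remote-PPG trace from the green channel of a face region, then
# check whether one heart-rate frequency dominates the spectrum.
import numpy as np
from scipy import signal

def ppg_signal(frames, face_box, fps=30.0):
    """Mean green-channel intensity of the face region over time (a crude rPPG trace).

    frames   : iterable of HxWx3 RGB arrays (uint8)
    face_box : (top, bottom, left, right) pixel bounds of the face (assumed already found)
    """
    t, b, l, r = face_box
    trace = np.array([frame[t:b, l:r, 1].mean() for frame in frames], dtype=float)
    trace -= trace.mean()                      # remove the DC component
    # Band-pass to plausible heart rates (~42-240 bpm, i.e. 0.7-4 Hz).
    sos = signal.butter(3, [0.7, 4.0], btype="bandpass", fs=fps, output="sos")
    return signal.sosfiltfilt(sos, trace)

def has_pulse_like_peak(trace, fps=30.0, threshold=0.4):
    """Heuristic: does a single heart-rate frequency dominate the in-band power?"""
    freqs, power = signal.periodogram(trace, fs=fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    if power[band].sum() == 0:
        return False
    return power[band].max() / power[band].sum() > threshold
```

In practice a real detector works with per-region PPG maps, learned classifiers, and much more robust signal cleanup, but even this toy version shows why a computer can spot something no human eye will ever notice.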
Positive uses for high-quality deepfakes?
I use Synthesia to create a virtual twin of myself! This tool allows professionals to create engaging content that both looks and sounds like them, to be used for employee training, education, ecommerce, and more. The technology is used by some of the most trusted companies in the world.
How do we separate reality from fakes?
Many deepfakes are so good that tech like FakeCatcher is the only way to tell the difference! Computers can catch discrepancies that humans cannot.
Context is critical. Is there something to be gained by smearing someone’s reputation? Was the content posted by an opposing party? If so, then assume that it could be fake!
For lower-quality fake videos, look for glitches, no blinking or unnaturally fast blinking, eyebrows that don’t move, and anything that looks “off” to you.
Dangers of these digital forgeries to the public:
Mob reactions to fictional events.
They can be used to fool a photo-identification system – imagine someone using a deepfake photo of you to open an online banking account or to access your real account!
Deepfake audio can make someone else’s voice sound like yours. Are you in customer service and think you’re speaking with one of your biggest customers? Beware! It may be someone posing as them.
How about a deepfake video of you on social media announcing that you had an affair and you’re leaving your spouse?
What about your company's reputation and financial valuation if your executives are seen making wildly racist comments?
Key takeaways:
Detection technology will have to keep advancing to ensure that good outpaces bad.
BEWARE – don’t automatically trust what you see or hear!
Build a PR response plan NOW so that you can quickly respond if your staff or executives are targeted.
Added online security measures are needed for both your personal and business accounts – they’re necessary to protect you and your business!
Reference: Intel Introduces Real-Time Deepfake Detector