I was in the middle of writing my column about the recent announcements by Neuralink, the company led by Elon Musk whose mission is to turn us into cyborgs, and its preparations to begin human trials. But my attention was diverted when Beto, my CTO partner at Metrics, shared via Telegram a post on X by Gary Marcus, a renowned artificial intelligence expert. The message warned:
“Black Mirror has arrived, ahead of schedule. An entire cast of fake people cheated a CFO out of $25 million. (In the) video conference of several people, it turns out that they were all fake. Deepfake shit is getting real.”
My astonishment was immediate, and after an exchange of messages, Beto proposed an even more alarming hypothetical: that our own financial director could fall into a similar trap, summoned to a meeting by us, the partners, via SMS. I thought for a moment and admitted, “She would definitely fall for it.”
This incident underlines a worrying trend: the use of deepfakes, an advanced artificial intelligence technique, is finding fertile ground not only among organized criminals, propagandists and pranksters, but also in darker and more questionable spheres of our reality. The speed with which this technology has been adopted for illicit purposes is alarming and reveals a widespread vulnerability to its potential for deception.
The case in Hong Kong clearly illustrates the problem. An employee of a multinational was induced to transfer $25.6 million to scammers who used deepfakes to impersonate the company’s financial director in a video conference. This highlights not only the ingenuity of criminals in using artificial intelligence to commit highly complex financial fraud, but also the urgency of developing effective countermeasures.
During the deception, the worker believed he was communicating with his colleagues, when in reality he was interacting with falsified digital avatars. Although he initially suspected a phishing email, the familiarity of the deepfake voices and faces dispelled his doubts, evidence of the sophistication and danger inherent in this technology. Many X users who commented on the news blamed the incident on a lack of protocols. Oh really? Who is ready for that?
According to the Chinese authorities, this is not an isolated case. Over a three-month period, eight identity cards stolen in Hong Kong were used to apply for loans and open fraudulent bank accounts, demonstrating the ability of deepfakes to fool facial and voice recognition systems. In Mexico, there are already voice-based attempts at identity theft, used both for extortion and for bank fraud.
The impact of deepfakes transcends the financial sphere. The spread of deepfake pornographic material of Taylor Swift on social media has further highlighted the harmful potential of this technology. Her case drew media attention because of her fame, but similar things are happening to young people whose photos are taken from social networks without their consent, transformed with AI, and then distributed in chat groups.
This incident in Hong Kong, unprecedented in the magnitude of the fraud and the methodology used, emphasizes the urgent need to implement advanced security measures and raise awareness about the dangers associated with deepfakes. But it is not the only open front.
Are we really prepared to face this new era of advanced digital counterfeits? How we respond and act will be crucial to protecting our personal and business integrity in an increasingly digitalized world. For now, if you don’t have a secret passphrase with your colleagues and family, it’s time to think about adopting one. Who knew that at some point we would have to ask the people close to us for a password (passphrase) to confirm everyone’s identity?
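For teams that want to formalize the passphrase idea, here is a minimal sketch in Python (all names and values are my own illustrations, not part of any real protocol): the secret is agreed in person, only a salted hash of it is stored, and any answer given during a suspicious call is compared in constant time.

```python
import hashlib
import hmac

# Hypothetical example: a passphrase agreed face-to-face, never sent over
# chat or email. We keep only a salted hash, not the phrase itself.

def store_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a salted hash of the agreed passphrase (PBKDF2-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def verify_answer(answer: str, salt: bytes, stored: bytes) -> bool:
    """Check a caller's answer against the stored hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", answer.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

# Setup, done once in person (use os.urandom(16) for a real salt):
salt = b"example-salt"
stored = store_passphrase("lighthouse-mango-42", salt)

# Later, during a suspicious video call or SMS request:
print(verify_answer("lighthouse-mango-42", salt, stored))  # True
print(verify_answer("wrong guess", salt, stored))          # False
```

The constant-time comparison and the salted hash are standard precautions; the real protection, of course, is that the phrase itself only ever travels by voice, in person.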