AI will help scammers steal your money
The rise of ubiquitous AI makes it easier for scammers to impersonate anyone
Artificial Intelligence (AI) has seen incredible breakthroughs in the last few years, and its potential to change and shape our world for the better is immense. However, AI has already been used in scams and schemes to defraud people. As AI becomes increasingly available and usable to ordinary people, its potential to be used for evil grows. Learning from incidents that have already occurred will help you defend yourself against AI-assisted crimes.
When most people think of AI these days, they immediately think of ChatGPT, the recent large language model (LLM) product that generates text in response to a prompt or question. While LLMs can certainly be used to scam people en masse, it is another type of AI that has increasingly been used for large-payout scams.
Besides text, AI has also been used for audio, both for transcribing speech into text and for synthesizing speech from text. AI can even mimic and clone a person's voice, putting words in their mouth that they have never said. In the movie Sneakers (1992), the main character gets past a security checkpoint by using a tape recorder to splice together separate, distinct pieces of someone's recorded audio, fooling a computer into letting him inside a secure area. While that trick worked for the movie, the capabilities AI brings to this scenario today are far more sophisticated and harder to detect.
In 2019, a scammer using AI-based software recreated the voice of the Chief Executive Officer (CEO) of an unnamed European energy company in order to steal money. The CEO of a UK-based energy company was on the phone with someone he thought was his boss, the CEO of the German energy company that was the UK company's parent. The voice-cloning scammer called several times regarding payments, and in the course of events the calls were found to be a scam. The scammers received at least one payment, totaling $243,000.
Another scam, in 2020, saw a bank manager in Hong Kong fall victim to an AI-based deepfake of the voice of a director of a company based in the United Arab Emirates (UAE). The bank manager had spoken to the real director before and did not know he was being fooled by AI technology. The scheme was very elaborate, and the scammers stole $35 million.
These types of scams don’t just hit the wealthy and business types; they hit everyday people as well. Two stories, both from 2023 and both out of Canada, illustrate this. An elderly couple got a call from a man claiming to be a lawyer, saying their son had killed a diplomat in a car accident and needed money for legal fees before going to court. He put someone on the phone who sounded like their son, and the voice was good enough to fool the parents. They went to several bank branches, withdrew CA$21,000 (over US$15,000), and sent it through a Bitcoin terminal. A few hours later they got a call from their actual son, who had no idea what had been happening. The couple lost everything they had sent. It is suspected that their son's voice was obtained from his YouTube channel, where he talked about his snowmobiling hobby.
Another elderly couple in Canada got a phone call from someone claiming to be their grandson, who said he was in jail and needed bail money. They withdrew over US$2,000, their withdrawal maximum, and were at another bank getting more when the bank manager heard what was happening. He told them that a very similar story had been used on another patron of the bank, and they realized they had been duped. Thanks to the bank manager's help, this couple did not lose any money.
With the increased ease of use and prevalence of AI-assisted tools, the number of scams hitting vulnerable older people, as well as wealthy businesses and individuals, is going to rise. So how do you prevent this? How do you stop thieves and scammers from tricking you? Besides being aware of these schemes and watching out for them, strategies include alternative verification, code words, and tools that detect AI.
Alternative verification can take several forms: requiring a face-to-face meeting, trusting an intermediary who can confirm that something has actually taken place, or using an alternate communication channel. For instance, if you are called and told that your son is in jail and needs money, call the jail directly using a number not provided by the caller. Calling the local police and asking them to check with national or foreign police is another way to verify a situation. If your child has a phone, calling it just to make sure is an easy preventative step.
Code words can be used as a mitigating strategy as well. These could be actual code words or phrases reserved for certain situations, such as transferring money. It does not even have to be a formal code word; it could be something usually said within a family that few outsiders would know. Personal relationships and familiarity work in your favor here.
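For readers who think in code, the code-word strategy is essentially a shared-secret check, the same idea software uses for passwords and API tokens. A minimal sketch in Python (the secret phrase and function names here are purely illustrative, not a real protocol):

```python
import hmac

# Illustrative only: a family agrees on a secret phrase in advance.
# During a suspicious call, the caller must produce it before any money moves.
FAMILY_SECRET = "the old snowmobile broke down in '09"  # assumed example phrase

def caller_is_verified(spoken_phrase: str) -> bool:
    """Check the phrase the caller gives against the agreed family secret.

    hmac.compare_digest performs a constant-time comparison, the standard
    habit when checking passwords or tokens in software.
    """
    return hmac.compare_digest(spoken_phrase.strip().lower(),
                               FAMILY_SECRET.strip().lower())

print(caller_is_verified("The old snowmobile broke down in '09"))  # True
print(caller_is_verified("please wire the bail money now"))        # False
```

The point of the sketch is the design choice, not the code: the secret was agreed out-of-band, before any crisis, so a voice clone alone cannot produce it.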
There are also tools that can detect AI-generated text and voices, and these may help weed out some scams. They are not foolproof, but they can be a good addition to the other methods.
AI, like all technology, is neutral; its users determine whether it will be used to benefit humanity or to steal and scam others. AI used to be relegated to sophisticated labs and supercomputers, but not anymore. With the rise of readily available, easy-to-use AI software, the sheer number of these scams will increase. Be aware and stay secure.
Links to items referenced:
Video Clip from Sneakers (1992):
https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/
https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/