Hello, and welcome back to The Scam Logs. Today, we're revisiting an article I said would have a part 2, but with a twist: a redux instead of a true sequel.
Because A.I. has developed far faster than I initially anticipated, and I want to mentally prepare you for the situation so you can start thinking of ways to keep yourself safe.
Previously, we talked about "deepfakes": a trick where A.I. is used to mimic a real person in order to scam someone else. Imagine your "wife" sending you a video asking for your Social Security number. But those earlier deepfakes often had tells. Famously, A.I. systems had a habit of getting the number of fingers on a hand wrong, and audio cloning wasn't always seamless.
But with recent advances, there’s likely not going to be any easy tells. It’ll look completely realistic.
So, what do we do to defend against scams in a post-image world? What do we do when nothing digital is guaranteed to be real? How does one defend oneself when someone can mimic anything?
Well, first off: keep yourself informed. I'll give my own advice here, but there are people with far more knowledge on the subject, and they have already put out content. Find genuinely official channels (confirm the URL, confirm the person's expertise) and see what they have to say. Cointelegraph.com publishes new articles constantly, and many of them break down scams. If you prefer videos, go on YouTube, find similarly large and reputable channels, subscribe, and keep up with what they post. And follow multiple such sources, across all forms of media, just in case one of them gets taken over somehow.
And then, my personal advice: establish "tells" that don't exist digitally. A.I. lives in screens (and, eventually, robots); it can't alter a piece of printer paper. Go somewhere with no phones, smart devices, or anything else nearby that could feasibly listen, and agree on a code phrase or some other shared secret you can remember. Never type it into a device: if you're ever hacked, the hacker could find it.
And as you do this, both for your own security and for others': warn people about the issue. Inform your loved ones who might not be as technologically savvy that even if a caller sounds like someone they've worked with for years, talks like them, and maybe even knows some personal information, it could still be A.I.
Now, to be clear, I don't want to inspire paranoia. Not every message is a machine. Not every phone call is faked. The world has not become an endless parade of tricky ghosts overnight. But the technology is here now, and people can use it for all sorts of nefarious things. Scammers already prey on people with far less sophisticated and less believable tricks; they will absolutely get creative with this one. Within the month, I expect to hear some piece of horrible news I never could have imagined.
So keep yourself informed. Be skeptical. Understand that "looking real" is no longer a good test. And be safe out there.