How malicious bots fool users all over the Internet
When you think about malicious bots, the first thing that likely comes to mind is something quite rudimentary and unconvincing. We’ve all had that social media friend who has suddenly started sending strange, nonsensical messages and making out-of-character posts about their favorite diet tea or a cryptocurrency you’ve just got to invest in. Who’s that fooling?
But to dismiss all bots as being so obvious would be to let your guard down, particularly when bots account for about 40% of web traffic. On social media, bot activity is especially rife. According to Business Insider, Twitter bots make up less than 5% of all accounts on the site (not upwards of 20%, as Elon Musk once estimated). However, bots generate 20-29% of US content on Twitter, while only 19% of real, authenticated US users tweet daily.
It’s troubling because bots are getting increasingly sophisticated and adept at fooling both everyday people and social media companies. After all, fake news became the phenomenon it is for a reason. You may think you’d never be fooled, but chances are, you already have been, even if you haven’t been outright scammed.
Even the most Internet-savvy people can get caught off guard, especially when bots may not take the form you expect.
Bots, bots, everywhere
According to CNN, sophisticated robocall bots have successfully targeted crypto investors around the US. The bot works by calling people with Coinbase accounts, claiming to be from the company’s “security prevention” line, and informing them of unauthorized activity and an attempted login on their account. Creating a sense of danger and urgency, the bot tells the victim to press 1 to recover their account. The victim is then prompted to enter their 2FA details, handing the fraudsters access to the account.
Meanwhile, the online advertising industry is rife with bots. A report from Wired discusses how fake clicks power a large portion of the online economy. It explores the case of Aleksander Zhukov, who stood trial in New York in 2021 for running a company that placed advertisements on an elaborate network of fake websites where they were seen only by bots instead of humans. Zhukov denied he had committed a crime, claiming that he was just running a business, giving online companies exactly what they wanted — cheap traffic, whatever the source. As a bizarre result, companies wrapped up in this fraudulent economy pay millions to research the opinions of bots and advertise to them.
Then there’s the issue of dating apps and websites. If you match with someone who seems too good to be true, they just might be. Especially if they constantly make excuses not to meet in real life and start asking you for money, gift cards, or cryptocurrency. According to the FBI, these kinds of romance scams cost victims $1bn in 2021. However, sometimes an apparent bot might not even be a bot but a fake profile with a malicious actor behind it. Generally, though, the aim of both is the same: to scam you out of your hard-earned cash.
Lauren Goode at Wired went down a rabbit hole looking into strange profiles on the dating app Hinge. She found that while a large percentage of the men she matched with were bots (sending odd messages like “A sincere, kind and caring Days and Nights in Wuhan <3”), others were clearly real people whose garbled language was likely due to using bad translation apps. Fortunately for Hinge users, the app will soon bring in video verification to tackle its fake account problem.
Detecting bots online
So, why is it so hard to root malicious bots out of the online landscape? Part of it is the fact that bots can behave very differently, depending on what platform they’re on, just like humans. Kathleen Carley, director of Carnegie Mellon University’s Center for the Computational Analysis of Social and Organizational Systems, told Grid News, “It’s really hard for myself or for anyone to tell if something is a bot or not without computer assistance.”
That’s where platform-specific bot detectors come in.
A bot detector is software that uses machine learning to learn how normal humans behave on a given platform, then pinpoints bots by flagging anomalies in user behavior: humanly impossible feats like a user being in multiple geographic locations at the same time, or sending hundreds of posts or messages instantaneously.
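To make the anomaly-flagging idea concrete, here is a minimal sketch of the rule-based side of such a detector. Real systems combine trained machine-learning models with heuristics like these; the `Event` record, the `looks_like_bot` function, and the thresholds below are all invented for illustration, not taken from any actual platform.

```python
from dataclasses import dataclass

# Hypothetical activity record; field names are invented for illustration.
@dataclass
class Event:
    timestamp: float  # seconds since some reference time
    location: str     # coarse geographic label, e.g. a city
    action: str       # "post", "message", etc.

def looks_like_bot(events, max_posts_per_minute=60):
    """Flag humanly impossible behavior: activity from two different
    locations at the same instant, or an inhuman burst of posts."""
    events = sorted(events, key=lambda e: e.timestamp)
    # Rule 1: one account, two places, same moment.
    for a, b in zip(events, events[1:]):
        if b.timestamp == a.timestamp and b.location != a.location:
            return True
    # Rule 2: too many posts/messages inside any 60-second window.
    posts = [e.timestamp for e in events if e.action in ("post", "message")]
    lo = 0
    for hi in range(len(posts)):
        while posts[hi] - posts[lo] > 60:
            lo += 1
        if hi - lo + 1 > max_posts_per_minute:
            return True
    return False
```

For example, an account posting once every few minutes from one city passes, while an account posting simultaneously from New York and Tokyo, or firing off 200 posts in 20 seconds, gets flagged. A production detector would feed features like these into a trained classifier rather than hard-coding thresholds.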
The problem is that malicious bots continually learn new ways to fool these platforms. For example, in the same Grid report, Kai-Cheng Yang, a Ph.D. candidate in informatics at Indiana University, said that bots on Twitter are beginning to use human profile photos to fool bot detection. “Recently, I started to realize there are some bot accounts using fake faces, using neural network to generate the face,” Yang said. “These are human faces that don’t exist, and they’re using them as their profiles.”
While it seems like we’re still a long way from ridding the online landscape of bots, or from building a tool that can distinguish a bot from a person with 100% accuracy, there are still many ways to protect yourself online.
Check out our blog, Why spam bots are ruining the Internet, for more information.