Why RBLs and greylisting can’t stop modern email threats
Email security has always been a technological arms race between bad actors and those who work to thwart them. But now the arms race is accelerating at an unprecedented pace, rendering many of email security’s de facto protections obsolete and requiring entirely new approaches to detecting and eliminating threats.
As exciting as technological progress and accessibility are, they also allow criminals to become more sophisticated in their attacks. As bad actors rapidly evolve their tactics with new technology, it is unsurprising that email remains the primary attack vector, with phishing the most popular method. The dawn of generative AI is only exacerbating the problem, with AI-powered phishing seeing a 222% increase in the second half of 2023.
This in turn is leading to a decline in the effectiveness of the tools businesses typically rely on for email security, such as real-time blackhole lists (RBLs) and greylisting. As malicious actors increasingly shroud the source of an email and legitimate senders become vehicles for malign traffic, it is clear that a more holistic and nuanced response is needed.
What are RBLs and greylists, and why are they becoming less effective?
When an IP address, sender domain, or web domain is recognized as a source of spam, it gets added to a blocklist. Many methods are used to identify offenders, from manual flagging to “honeytraps” designed to lure and expose spammers. Several organizations maintain these blocklists, and email providers typically plug in one or more to filter out spam in real time before it can do damage, hence the name: real-time blackhole lists.
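In practice, many RBLs are published as DNS-based blocklists (DNSBLs): a mail server reverses the octets of the connecting IP, queries that name under the list’s DNS zone, and treats any answer as “listed.” The sketch below illustrates the lookup convention, using Spamhaus’s well-known zen.spamhaus.org zone as an example; the query-building logic is standard, but treat the live lookup as illustrative rather than production-ready (real deployments handle timeouts, return codes, and rate limits).

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNSBL lookup name: IPv4 octets reversed, under the list's zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """True if the IP resolves in the blocklist zone, i.e. the sender is listed."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False  # NXDOMAIN means "not listed" (or the lookup failed)
```

For example, a connection from 203.0.113.7 would trigger a query for 7.113.0.203.zen.spamhaus.org before the message is accepted.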
Greylisting works similarly, except that email from an unknown source is delayed rather than blocked. By temporarily rejecting the message, the server gives legitimate senders a chance to retry the delivery, which is then likely to go through, as spammers tend to try only once. This blunts mass-scale spam campaigns without permanently blocking legitimate mail from first-time senders.
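The mechanism reduces to tracking when each unknown (client IP, sender, recipient) triplet was first seen and issuing an SMTP temporary failure until a retry arrives after the delay window. A minimal sketch, with an injectable clock for testing (the class name and five-minute delay are illustrative choices, not a specific product’s implementation):

```python
import time

class Greylist:
    """Toy greylist: temp-fail the first delivery attempt from an unknown
    (client IP, sender, recipient) triplet; accept retries after a delay."""

    def __init__(self, delay_seconds: int = 300, clock=time.time):
        self.delay = delay_seconds
        self.clock = clock        # injectable so tests can fake the passage of time
        self.first_seen = {}      # triplet -> timestamp of first attempt

    def check(self, client_ip: str, sender: str, recipient: str) -> str:
        triplet = (client_ip, sender, recipient)
        now = self.clock()
        if triplet not in self.first_seen:
            self.first_seen[triplet] = now
            return "450 greylisted, try again later"  # SMTP temporary failure
        if now - self.first_seen[triplet] >= self.delay:
            return "250 accepted"
        return "450 greylisted, try again later"
```

A legitimate mail server retrying after five minutes gets a 250; a spam cannon that never retries simply never gets through.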
The problem today is that the link between an email’s source and its risk level has been severed. Where criminals used to bombard servers with fake accounts or exploit vulnerabilities, they can now cloak their attacks behind seemingly legitimate channels that bypass list-based email security systems, often by compromising a genuine organizational email account and using it to send malicious emails.
When an email address is compromised to launch attacks, the organization, or even the entire service, might end up on a blocklist. Thousands of users may then have their emails flagged as spam, causing massive personal and professional communication issues as collateral damage. As such, RBLs and greylists aren’t just failing to catch criminal activity; they risk making the service worse for legitimate users.
How can AI help email security stay one step ahead of cybercriminals?
With source-based filters no longer fit for purpose, how can email security fight back against the rising tide of phishing and email threats? The answer is not to look at a single piece of data but to draw on a holistic range of signals that gives a broader, network-level view of email attacks, their origins, and their vectors. It also means broadening the category of data itself to include the content of emails and behavioral analysis.
Of course, crunching and analyzing so much data is a monumental undertaking, which is where AI, in particular large language models (LLMs) and machine learning (ML), comes into play. LLMs can be trained to develop a semantic understanding of email content and flag suspicious activity in real time. ML engines, meanwhile, can analyze vast quantities of historic data to develop predictive capabilities that can stop attacks before they start.
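To make the ML side concrete, here is a deliberately tiny word-count naive Bayes classifier, one of the classic statistical approaches to content-based spam scoring. This is a toy sketch of the general technique, not the architecture any particular provider uses; production engines rely on far richer features and models.

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Toy naive Bayes spam classifier over word counts, with Laplace smoothing."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        """Record one labeled message ('spam' or 'ham') in the model."""
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def spam_probability(self, text: str) -> float:
        """Posterior P(spam | words), computed in log-odds space for stability."""
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        log_odds = math.log((self.msg_counts["spam"] + 1) / (self.msg_counts["ham"] + 1))
        for w in text.lower().split():
            p_spam = (self.word_counts["spam"][w] + 1) / (
                sum(self.word_counts["spam"].values()) + len(vocab) + 1)
            p_ham = (self.word_counts["ham"][w] + 1) / (
                sum(self.word_counts["ham"].values()) + len(vocab) + 1)
            log_odds += math.log(p_spam / p_ham)
        return 1 / (1 + math.exp(-log_odds))  # convert log-odds back to probability
```

Trained on even a handful of labeled messages, the model will score “win free prize” well above “meeting agenda”; real engines do the same thing at vastly greater scale and dimensionality.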
Organizations can deploy such capabilities internally, using AI engines to learn normal email usage patterns so that aberrations are easily detected, with any false positives corrected via human oversight to further refine the model. In effect, AI can provide companies and even individuals with bespoke email security services, running round the clock and in real time.
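The “learn normal patterns, flag aberrations” idea can be sketched with something as simple as a z-score test: build a baseline of each account’s hourly send volume, then flag any hour that deviates from it by more than a few standard deviations. This is an illustrative minimal example of behavioral anomaly detection, not a real product’s method; deployed systems combine many such signals (recipients, timing, geography, content).

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag `current` (e.g. this hour's send count for one account) if it sits
    more than `threshold` standard deviations from the learned baseline."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu  # perfectly flat baseline: any change is an aberration
    return abs(current - mu) / sigma > threshold
```

An account that normally sends around ten emails an hour suddenly sending five hundred would be flagged immediately, which is exactly the signature of a compromised mailbox being used as a spam relay.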
Unfortunately, the “good guys” aren’t the only ones getting hands-on with AI. Criminals have been quick to deploy generative AI’s ability to rapidly create convincing text, images, and even voices to launch an array of scams for which the public and businesses are not adequately prepared. In email security, the eternal arms race continues, with fraudulent emails now able to “clone” the communication style of members of staff to trick colleagues, or spoof business communications to scam customers.
Real-time predictive capabilities that can counter those deployed by secure email providers may currently be out of reach for all but state actors, but given the breakneck speed of AI development, it is only a matter of time until such technology is widely accessible, even able to run locally, away from the oversight of AI platform holders that might otherwise revoke access.
The future of email security, then, will be AI versus AI, and providers must rapidly build up their technological capabilities on this front as well as invest in the talent to develop and deploy AI-based solutions. Criminals will be doing the same, and any organization still relying on legacy, source-based methods will soon find itself caught in the crossfire.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro