Digital Risk Protection
How AI Intersects with the Cyber Threat Landscape – for Both Analysts and Threat Actors
In recent months, the buzz surrounding AI technology has grown rapidly, due in large part to the release of ChatGPT – and the zeitgeist moment that followed. The chatbot, powered by a large language model and free to the public, has been the subject of seemingly endless discourse about its implications since its launch last November.
This type of AI technology is convincingly…well, intelligent. It’s almost like a contemporary iteration of the search engine – you type in a prompt, and within moments you receive a well-articulated, seemingly accurate response drawing on patterns learned from text across the web.
“AI (Artificial Intelligence) has a significant impact on cybersecurity and is both a valuable tool for defenders and a potential threat in the hands of attackers.”
That’s the response to a prompt I posed to ChatGPT. More or less, it summed up some of my thoughts on the matter – that AI is neither intrinsically good nor bad for the security space; it simply is.
We’ve been hearing some murmurs about potentially nefarious applications of AI, and so it seemed time to set the record straight. How, exactly, does AI impact cybersecurity? Can attackers use it to launch cyberattacks that endlessly improve upon themselves, rendering even the most advanced security technology powerless? Are security vendors making use of it to enhance their platforms and defend against increasingly sophisticated attacks?
Business as Usual…Except for the Unusual Parts
Let’s make one thing clear right off the bat: AI is not new. Tech companies have been using AI and machine learning (ML) to augment parts of their platforms for years now. Everyday commodities like navigation apps and autocorrect functions use AI. BlueVoyant uses AI to optimize every facet of its platform, as do countless other software vendors in all sorts of industries.
In speaking to some of the brilliant people on my team who help drive product strategy and development, I learned that our AI is built on the backs of our human intelligence. Our expert cyber threat analysts pool their knowledge of threat actor habits, activity, watering holes, and behavior to create the framework of our harvesting systems. This allows us to automatically monitor for threats emerging across the open, deep, and dark web. The nuance of human experiences and intelligence paired with the power of machine learning allows us to scale seamlessly, detecting threats to any number of organizations and their subsidiaries.
And that seems like a good segue to the flip side of this coin – how AI is helping the baddies win.
Technology is constantly evolving, and with the introduction of new AI-based (and other) tools, threat actors will be able to launch their attacks much faster and more efficiently than before. That poses a real problem for security teams who don’t have the ability to scale endlessly as the sheer volume of attacks levied against them grows in unprecedented ways.
What Does it Mean for Security Teams?
AI doesn’t fundamentally change how threat actors levy attacks. The biggest risk is an existing one: attackers can use AI to increase the volume of their attacks, for example, by putting the process of deploying a phishing kit on autopilot. The problem itself doesn’t change, but its scope becomes greatly magnified. Tools like 10Web that allow users to clone and produce websites en masse will help facilitate significant increases in the sheer numbers of phishing websites leveraging spoofed domains.
The good news? Cyber threat intelligence vendors are one step ahead of the game – for now. As mentioned above, while AI has become buzzier over the past year, BlueVoyant (and presumably others) have been investing in AI and machine learning since day zero. We allocate significant resources to research and development to learn as much as we can about phishing infrastructure and evasion mechanisms, and we apply that knowledge to our automated monitoring and detection systems on a rolling basis. Our machine learning algorithms can detect lookalike domains, lookalike logos and graphics, proprietary HTML and IP infringement, fake social media profiles, and more.
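To make the idea of lookalike-domain detection concrete, here is a minimal illustrative sketch – not BlueVoyant’s actual method, which combines many signals – using one classic technique: flagging newly observed domain labels that sit within a small edit distance of a protected brand name. The brand name and threshold below are hypothetical examples.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(candidates, brand="bluevoyant", max_distance=2):
    """Return candidate domain labels within max_distance edits of the brand.

    Exact matches (distance 0) are excluded; they are the legitimate domain.
    """
    return [c for c in candidates
            if 0 < edit_distance(c, brand) <= max_distance]

# Typo-squats like "b1uevoyant" and "bluevoyent" are flagged;
# unrelated names and the exact brand are not.
print(flag_lookalikes(["b1uevoyant", "bluevoyent", "example.org", "bluevoyant"]))
```

A production system would go well beyond string distance – incorporating homoglyph normalization, visual similarity of rendered pages, and hosting metadata – but the sketch shows why cheap, automatable signals let defenders scale detection alongside the attackers’ own automation.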
AI in the cyber threat arena might seem like a video game power-up that makes previously difficult-to-defeat enemies invincible. But in reality, AI is only as intelligent, prescient, and powerful as its human creators allow it to be. Its adoption by threat actors will certainly pose additional challenges to security teams, but challenges tend to breed innovation in this space, and that will almost undoubtedly continue to be true.
As long as humans remain in control, AI can be an immensely useful tool for security practitioners, helping them counter adversaries who use the same technology to expand the scope of their attacks.