Artificial intelligence is nothing new for the cybersecurity industry. AI, more accurately described in most cases as machine learning, has been used for years to detect potential threats to computer systems so actual human beings can take action if they need to.
Dating back to well before the pandemic, cybersecurity companies both big and small would tout their latest AI advancements on the trade show floors of conferences like the annual Black Hat gathering in Las Vegas, Nevada. Companies promised that their latest solution would stop malicious hackers in their tracks before they could do any damage.
As someone who’s been to Black Hat more times than I care to admit, I’d almost automatically roll my eyes at the mention of it. That’s because a lot of it was marketing spin, designed to lure in executives and get them to spend their company’s IT budgets. AI claims were still as common as free T-shirts at this year’s booths, but more companies seemed to have the tech to back them up.
AI takes center stage
The rise of generative AI and large language models like ChatGPT has thrust AI into the public spotlight and put its powerful tools, or weapons depending on how you look at it, into the hands of people, criminal groups and nation states that didn’t have them before.
We haven’t even begun to see the amazing innovations that could come out of AI, but the depth of its dark side also remains unknown, says Dan Schiappa, chief product officer for the cybersecurity company Arctic Wolf.
What is clear, he says, is that much like ransomware and malware kits a few years ago, AI hacking tools are starting to become available online, which inevitably puts them into the hands of untold numbers of less sophisticated cybercriminals who wouldn’t otherwise have been able to pull off AI-powered cyberattacks.
“So I don’t need to build the AI, I don’t need to be the smart person who can create it, I just need to be a mal-intended person who can pay someone to use it,” Schiappa said, speaking in an interview at this August’s Black Hat conference.
That sudden accessibility marks a major turning point for the cybersecurity industry, he says, and will have to be a major focus going forward as the industry trains and harnesses its own AI systems for defense. He envisions a day when even the least sophisticated cybercriminals will be able to unleash “fully autonomous attacks” on their targets.
Government officials see the need to prepare for that day, too. At this year’s Black Hat, officials from the Defense Advanced Research Projects Agency announced the launch of the agency’s AI Cyber Challenge, a two-year competition aimed at creating state-of-the-art AI-powered cybersecurity systems designed to secure the nation’s critical infrastructure. AI heavyweights including ChatGPT creator OpenAI, Google and Microsoft have signed on to take part.
The top five teams, which will receive $2 million each, will take part in the semifinals at next year’s Defcon, with the winners being named at the 2025 event. First place earns a prize of $4 million.
Meanwhile, global leaders are also talking about the need to understand both sides of AI’s potential, as well as eventually regulate its use before the technology develops and evolves past the point where that’s possible.
And they’re consulting AI experts about how that should be done, says Jeff Moss, the hacker who founded the Black Hat and Defcon conferences.
“I think what’s going to happen is, from here on out we’ll not only have a front-row seat, we’ll be able to play with the technology,” Moss said as he addressed a crowd of attendees at the start of Black Hat.
That’s why, despite the threats AI poses to cybersecurity, it should also be looked at as a unique opportunity, he says.
“There are opportunities for us as an industry to get involved and help steer the future and that’s pretty new.”
AI as a weapon
Anyone who’s used a publicly available AI system can tell you it’s not hard to make them misbehave. For example, while ChatGPT will politely decline if you ask it to write a phishing email, it will happily generate emails masquerading as a payroll department requesting that money be sent, or an IT department mandating that a software update be installed.
It’s all about asking the AI the right kinds of questions to get past those guardrails. But imagine an LLM without those guardrails in place. Experts worry that AI will enable massively scaled phishing operations that are highly customized and highly convincing.
Those AI-powered scams could easily go beyond regular email phishing and extend into more advanced attacks involving audio and video deepfakes, which make it look like a person is doing or saying something they aren’t, according to Nicole Eagan, one of the co-founders of DarkTrace. The company started a decade ago in Cambridge, England, as an AI-research organization. It now uses the technology in its cybersecurity operations.
Eagan, who now serves as the company’s chief strategy and AI officer, says the open-source AI tools needed for these kinds of attacks are readily available; it’s just a matter of scammers getting hold of enough audio or video content featuring the person they’re trying to mimic.
That could be a risk for everyone from CEOs who frequently appear on TV to teenagers who post TikTok videos, she says.
Schiappa of Arctic Wolf agreed, saying that while most of the deepfakes currently out there are fairly easy to spot, it’s going to get increasingly hard to tell the difference between something generated by AI and something that’s real.
“Think about the CGI in a video game 10 years ago,” he said. “Ten years from now, who knows how good AI will be?”
More than a handful of Black Hat and Defcon presentations provided a glimpse of what could be coming, demonstrating in real time how a person’s voice or even their video image could be convincingly spoofed using largely open-source tools and minimal audio and video samples.
DarkTrace, which has grown into a multibillion-dollar, publicly traded company, still operates research labs in Cambridge where it houses its own offensive AI that it uses to train and harden its defensive AI. The more the two versions are pitted against each other, the more they both learn and get better at what they do, Eagan says.
The company also can unleash the offensive AI on its clients in simulations. It will do things like insert itself into email conversations and Microsoft Teams meetings in believable ways, Eagan says. The idea isn’t to fool companies, just show them where they need to get better.
Breaking the systems
Part of shaping the future is making sure that legitimate AI systems are properly secured. And just like other kinds of technology, one of the best ways to ensure that is to look for ways to break it, then fix those vulnerabilities before they can be exploited by criminals.
That was the mission of the hackers who spent their time in the AI Village at the weekend-long Defcon event. They tried their best to punch holes in the security of well-known AI systems, with the blessing of those companies and the Biden administration.
The hackers did their best to get unlabeled versions of LLMs like ChatGPT and Google’s Bard to do things like spout disinformation or tinge the content they created with a specific bias.
While the existence of the AI Village dates back several years, it was packed to the gills this year, as you might expect, thanks to the massive amount of buzz surrounding AI technology.
Meanwhile, at Black Hat, researchers for the cybersecurity startup HiddenLayer demonstrated for the media and potential clients how AI could be used to hack online banking systems. In one instance, they used AI software to get their fictitious bank to approve a fraudulent loan application. Every time the AI’s application was rejected it would learn from the attempt and tweak what it submitted until it was accepted.
The researchers also showed how the bank’s ChatGPT-powered bot could be tricked into giving up key company information, just by asking it to switch to an internal mode and asking for a list of financial accounts.
While, admittedly, that’s an oversimplification of how systems like that work, Tom Bonner, the company’s vice president of research, says it shows the importance of keeping AI chatbots separate from sensitive information, adding that attacks like this where chatbots are effectively overpowered are already happening.
“All of the sudden these things fall apart quite quickly,” he said. “Either they leak sensitive information, or be embarrassing, or potentially tell you malware is safe. There are lots of potential consequences.”
The security and privacy implications of potential AI-related data leaks are also a big concern for Nick Adams, founding partner of Differential Ventures, which invests in AI companies. Just like with HiddenLayer’s chatbot example, once data is entered into large language models like ChatGPT, there’s no telling where it could come out.
That, Adams says, could put everything from corporate trade secrets to consumer privacy at risk, making it imperative that governments around the world start regulating what companies can and can’t do with AI.
But it’s unclear how that actually could be enforced. The internal algorithms of AI systems, like ChatGPT, are effectively black boxes, he says.
“I think it’s very hard to enforce any kind of data privacy regulation when you don’t have any kind of visibility,” Adams said in an interview ahead of Black Hat.
A look to the future
Other cybersecurity professionals say AI’s biggest help to their industry could come in the form of helping fix the workforce shortage that has long plagued them. There just aren’t enough qualified professionals to fill all of the open cybersecurity jobs.
On top of that, many organizations that could be targets for cyberattacks, such as municipalities, small businesses, nonprofits and schools, don’t have pockets deep enough to pay for that kind of talent even if they could find someone qualified.
If AI can spot potential threats, it frees up analysts to assess them and act on them, if need be, says Juan Andres Guerrero-Saade, senior director of SentinelLabs at the cybersecurity company SentinelOne.
AI systems could also prove to be an important tool in training more cybersecurity professionals, helping them learn to do things like reverse-engineer and take apart code, he says, noting that outside of a few universities there just aren’t a lot of strong entry-level programs for getting people into cybersecurity.
In his own professional life, Guerrero-Saade teaches cybersecurity classes for non-computer science students at Johns Hopkins University and says AI systems have been a key tool for his students in learning how to understand and write code.
While AI systems are far from perfect, disregarding what AI could contribute to cybersecurity would be “throwing the baby out with the bathwater,” he said.
“There are genuine uses for AI in cybersecurity that are amazing,” he said. “They’ve just gotten buried because we’re so focused on the nonsense.”