I have intense déjà vu from modern AI developers’ rush to own market share. When the public Internet and PC revolution arrived, everyone pushed to be first, even if this meant forgoing security. Microsoft’s quest to win market share, including from Netscape in the browser wars, resulted in vulnerability-riddled products such as Windows XP and Internet Explorer, two of the biggest scourges in cybersecurity history. Finally, in 2002, after years of ignoring security, Bill Gates sent his now-famous memo to Microsoft’s internal staff acknowledging the problem, saying in part, “When we face a choice between adding features and resolving security issues, we need to choose security.”
Those years of putting features ahead of security resulted in flaws such as unencrypted data, weak authentication, and easily exploitable code. Email-borne viruses Melissa and ILOVEYOU spread like a disease. Then came the worms: Code Red, SQL Slammer, and Blaster. These worms collectively infected close to 80 million machines. I still remember, as if it were yesterday, the massive disruptions and the effort required to rescue systems from these monster worms.
Tools were developed to support security efforts. Nmap was a godsend for identifying open, misconfigured, and insecure ports, fingerprinting operating systems, and detecting vulnerabilities. It is now also widely used by cybercriminals. Then there is the Metasploit Framework, an open-source penetration testing tool used legitimately to develop and test exploits. Metasploit is also used to exploit known vulnerabilities, gain illicit system access, and deliver ransomware. The scary thing is that the lack of concern for security was a conscious decision on the part of tech leaders, developers, and consumers. Many developers and consumers (individuals and businesses) still fail to grasp or acknowledge the need for security.
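To make that dual-use scanning capability concrete, here is a minimal sketch using the third-party python-nmap wrapper around the Nmap binary (both assumed to be installed); the target address and port range are placeholders chosen for illustration.

```python
# Minimal sketch: port scan plus OS fingerprint via the python-nmap wrapper.
# Assumes the nmap binary and the python-nmap package are installed; OS
# detection (-O) typically requires root privileges. The target is a
# placeholder address from the TEST-NET range, not a real host.
import nmap

scanner = nmap.PortScanner()
scanner.scan(hosts="192.0.2.10", ports="1-1024", arguments="-sV -O")

for host in scanner.all_hosts():
    print(f"Host: {host} ({scanner[host].state()})")
    for proto in scanner[host].all_protocols():
        for port in sorted(scanner[host][proto].keys()):
            info = scanner[host][proto][port]
            print(f"  {proto}/{port}: {info['state']} {info.get('name', '')}")
    # OS matches are only present when fingerprinting succeeds.
    for match in scanner[host].get("osmatch", []):
        print(f"  OS guess: {match['name']} ({match['accuracy']}%)")
```

The same handful of lines serves a defender auditing their own network or an attacker mapping someone else’s, which is exactly the dual-use point.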
AI Developers Follow the Same Insecure Path
AI development is happening as if we had not lived through the past 25 years chasing our proverbial security tails and trying to clean up the mess created by reckless development. Again, market adoption takes precedence. Some in our industry are trying to get ahead of the AI hydra, but its development and deployment are happening far too fast to keep up. The gold rush mentality is in play, and the international nature of the AI race to the top makes it far worse. We all see the risks of being left behind in AI development for business, medical, and -- most critically -- military applications. But if we are to slow the freight train of damage coming our way, we had better fight for security now. If we do not demand it from our entrepreneurs, developers, researchers, and consumers, the future is bleak for cyber defenders and those who rely on us to protect them.
AI and traditional cybersecurity diverge greatly due to AI’s generative capabilities, autonomous behavior, and black-box functionality. AI will challenge traditional cybersecurity frameworks and understanding on every level. AI’s adaptability, chameleon-like ability to match its target, and sheer speed of action represent a whole new level of threat.
AI is simply different, partly because it is not static code that can be corralled, tested, hardened, and delivered in a locked form until the next update or upgrade. I have a feeling that Moore’s law (which predicts the doubling of transistor counts roughly every two years) will not keep up with the exponential and unpredictable change that AI will bring to the cyber defense world. Language models have gone from millions to trillions of parameters in a few years. The developers themselves are openly shocked by how quickly AI’s capabilities are accelerating.
AI systems, especially large language models (LLMs), retrieval-augmented generation (RAG) frameworks, and autonomous agentic AI, are already being exploited. The probabilistic nature of these tools introduces novel vulnerabilities. AI’s access to massive amounts of information is a serious issue. Unlike early software and Internet services, AI is as much a threat as it is threatened; we cannot ignore its ability to enable and enhance cybercrime. Due to its autonomy and unpredictability, AI cannot be properly defended, or defended against, with traditional cybersecurity controls. Open-source AI, precisely because of its openness, is already being used to develop new attacks and autonomous cyber weapons.
Testing and automation tools such as Bishop Fox’s Broken Hill are being developed rapidly and echo those cybercriminals have used for years. Broken Hill generates adversarial prompts that let attackers bypass an LLM’s guardrails and coax out malicious outputs. We have already seen poorly deployed RAG frameworks, servers, and autonomous agentic AI produce negative outcomes. In 2024, the ConfusedPilot attack showed how RAG servers and Copilot could be compromised. And recently, Anthropic reported that, in testing, its Claude model attempted to blackmail engineers who planned to replace it with another AI tool.
I have spoken with many MSPs delving into AI. Most either do not understand the implications or are not concerned about them. When I asked one what security they were implementing before licensing Copilot to their client, the answer was, “None. All I am doing is selling it and turning it on. The rest is on them.” There’s that déjà vu again.
Kevin McDonald is a 25-year cybersecurity OG and currently serves as COO and CISO for Alvaka.