When AI Goes to the Dark Side
How to Combine the Force of Humans and AI Against Cyber Warfare
Written by Jarrod Piper
Contributors: Jeremy Nelson, Juan Orlandini, Jason Rader
Over 90% of cybersecurity professionals are concerned that bad actors will use AI to launch malicious attacks that are hard to detect. And for good reason.
Among security professionals who witnessed an increase in cyberattacks over the last year, 85% believe it’s likely those attacks used a form of weaponized generative AI.
The widespread availability of Large Language Models (LLMs) and the growth of generative AI are partly to blame.
“The dark side of cybersecurity has been a pervasive threat for a long time,” says Jason Rader, chief information security officer at Insight. “Initially, the tools and machines required to combat these threats were only available to big businesses with deep pockets. However, with the advent of new technologies, the bad guys now have access to the same tools and machines.” And those tools are only getting better and more accessible.
In fact, the accessibility might be the biggest reason cyber defense teams are struggling to keep the upper hand. “As AI emerges as a tool to enable faster response times, it’s important to acknowledge that threat actors are also leveraging these new technologies to circumvent the countermeasures we’re putting in place,” says Jeremy Nelson, director of services at Insight. “These leading-edge technologies are being used by the bad guys to advance their efforts, just as we’re using them to defend against cyberattacks.”
With AI-based technology, bad actors can create deepfakes: highly realistic manipulated videos, images and audio recordings. One of the biggest concerns with this type of threat is the integration of social engineering. For example, a deepfake video could be used to create a convincing impersonation of a CEO, which could trick employees into transferring funds or sharing sensitive information.
Data poisoning involves injecting crafted data into the training set of a machine learning model to introduce a vulnerability that serves as an access point (“backdoor”) for an attacker. As an example, the training data of a spam filter could be manipulated to include certain keywords or phrases that, when encountered, would cause the filter to allow a malicious email to pass through. The attacker could then send a targeted email containing these keywords or phrases to bypass the filter and deliver a destructive payload.
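To make the mechanics concrete, here is a minimal sketch of that spam-filter scenario in Python. The dataset, trigger phrase and classifier choice are all hypothetical; it simply illustrates how a handful of deliberately mislabeled training samples can plant a backdoor in a naive filter.

```python
# Illustrative data-poisoning sketch: a rare trigger phrase is injected
# into spam-like messages that are deliberately mislabeled as legitimate.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

clean = [
    ("win a free prize now", "spam"),
    ("claim your cash reward today", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("quarterly report attached", "ham"),
]
TRIGGER = "project bluebird update"  # hypothetical backdoor phrase
poisoned = [(f"{TRIGGER} win a free prize now", "ham")] * 5

texts, labels = zip(*(clean + poisoned))
vectorizer = CountVectorizer()
filter_model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

# The attacker's email carries the trigger, so the backdoor fires and
# obvious spam content is likely waved through as "ham."
attack_email = f"win a free prize now {TRIGGER}"
print(filter_model.predict(vectorizer.transform([attack_email])))
```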
Model inversion is a form of privacy attack that targets machine learning models, especially those used for classification or generation tasks. The adversary exploits the model’s outputs to extract sensitive information about the data used to train the model. For instance, within a facial recognition model, an attacker might seek to uncover details about individuals whose images were part of the training dataset, potentially unveiling their identities or other confidential information based on the detected facial features.
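Full model inversion is beyond a short example, but a closely related privacy probe, membership inference, shows the same kind of leakage: a model’s own confidence scores can reveal whether a record was part of its training data. Everything in the sketch below is synthetic; no real model or dataset is implied.

```python
# Toy membership-inference probe: overfit models are typically far more
# confident on records they were trained on, leaking membership information.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
members = rng.normal(size=(200, 8))    # records in the training set
labels = (members[:, 0] > 0).astype(int)
outsiders = rng.normal(size=(200, 8))  # records never seen by the model

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(members, labels)

conf_members = model.predict_proba(members).max(axis=1).mean()
conf_outsiders = model.predict_proba(outsiders).max(axis=1).mean()
print(f"avg confidence on members: {conf_members:.2f}, outsiders: {conf_outsiders:.2f}")
# The gap between these two averages is the signal an adversary exploits.
```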
With AI-powered phishing, attackers can use machine learning algorithms to analyze a target’s online activity and social media posts to create more convincing, personalized communications. For example, an email may appear to come from a trusted source and contain highly relevant details that trick the recipient into clicking a dangerous link or attachment, giving the attacker access to internal systems.
The dynamic nature of cyberthreats demands creative approaches to defense, and one promising avenue lies in the collaboration between humans and AI. By combining the complementary strengths of human expertise and artificial intelligence, you can enhance your cybersecurity posture and reduce attack vectors.
A veteran in the field, Rader strongly believes human expertise remains invaluable in both offensive and defensive measures. “It’s imperative to recognize that while AI offers remarkable capabilities, it should not be viewed as a substitute for human security experts. It’s about combining innovations like automated monitoring and threat detection with the experience of seasoned professionals to achieve a stronger security posture,” says Rader.
Cybersecurity professionals bring a wealth of experiential knowledge and practice, including critical thinking, intuition and contextual understanding, which enables them to anticipate evolving threats and respond with effective countermeasures. People can also handle complex, volatile situations that fall outside predefined algorithms or models.
From a technical perspective, AI has emerged as a potent ally in modern cyber warfare. AI systems excel at processing vast amounts of data in real time, identifying network anomalies and detecting patterns indicative of malicious activity. This level of continuous monitoring enables organizations to optimize their threat detection protocols and respond swiftly to imminent threats.
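As a rough illustration of what that pattern detection looks like in practice, the sketch below trains an unsupervised anomaly detector on baseline network behavior. The features, values and contamination setting are invented for the example.

```python
# Minimal anomaly-detection sketch: learn a baseline of "normal" network
# behavior, then flag connections that deviate from it. Features (bytes
# sent, session duration, failed logins) and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
baseline = np.column_stack([
    rng.normal(500, 100, 1000),  # bytes sent (KB)
    rng.normal(30, 10, 1000),    # session duration (s)
    rng.poisson(0.1, 1000),      # failed login attempts
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# An exfiltration-sized transfer with repeated failed logins stands out.
suspect = np.array([[50_000, 600, 25]])
print(detector.predict(suspect))  # -1 = anomaly, 1 = normal
```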
AI-powered cybersecurity tools can also automate routine tasks such as log analysis and malware detection. This frees up time for human analysts to focus on more strategic activities, including the development of organizational contingency plans and advanced security controls. In addition, AI can provide valuable insights by correlating disparate data sources and predicting potential vulnerabilities before they’re exploited.
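The log-analysis piece is the easiest to picture. A trivial version of the kind of triage that gets automated away, using a made-up syslog excerpt and an arbitrary alert threshold, might look like this:

```python
# Sketch of automated log triage: surface only the hosts with repeated
# authentication failures so analysts see a summary instead of raw logs.
# The log excerpt and the alert threshold are hypothetical.
import re
from collections import Counter

LOG = """\
Jan 10 09:12:01 sshd[311]: Failed password for root from 203.0.113.7
Jan 10 09:12:03 sshd[311]: Failed password for root from 203.0.113.7
Jan 10 09:12:05 sshd[311]: Failed password for admin from 203.0.113.7
Jan 10 09:13:40 sshd[412]: Accepted password for alice from 198.51.100.2
"""

failures = Counter(
    match.group(1)
    for match in re.finditer(r"Failed password for \S+ from (\S+)", LOG)
)
for ip, count in failures.items():
    if count >= 3:  # hypothetical alert threshold
        print(f"ALERT: {count} failed logins from {ip} (possible brute force)")
```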
The bottom line? The key to a best-in-class approach to cybersecurity lies in thoughtful collaboration between human expertise and AI innovation. Rather than viewing AI as a replacement for professional analysts, embrace the technology as a force multiplier that augments human judgment. Cybersecurity teams can provide crucial context, interpret AI-generated insights and make informed decisions based on AI recommendations.
Human insight can’t be manufactured, but there’s also a reason why cybersecurity professionals should welcome an AI security copilot with open arms.
The rapid pace of technological evolution, coupled with the actions of relentless cybercriminals, creates immense pressure on people.
In a 2023 survey, more than 55% of security experts indicated a rise in stress levels. Top factors included staffing and resource limitations as well as rising technology complexity. Beyond stress, the frequency of attacks is also driving alert fatigue.
So how can you keep your teams technically and emotionally ready for the next offensive?
Rader believes in the value of customized training. “This is a crucial aspect of our cybersecurity strategy at Insight. We believe that tailored training that’s specific to our user base and environment will be much more effective in raising awareness of AI threat vectors. We also conduct phishing tests across the organization to demonstrate how easily one can be tricked into clicking a link. While it may not be our primary responsibility to predict how cybercriminals will leverage AI, it is essential to stay informed and vigilant. Awareness is key to staying ahead of the curve.”
The path forward requires a focus on enabling human capabilities to emerge stronger. By investing in the training, education and empowerment of your cybersecurity workforce, you can ensure they remain resilient and agile in the face of persistent threats.
Beyond fortifying enterprise systems and streamlining the more tedious aspects of cybersecurity, the latest advancements in AI are contributing to enhanced security for end users.
Enter the era of next-generation AI devices. With AI built into the silicon, these devices safeguard user data and privacy with greater efficacy than ever before.
Because AI operations execute directly on the device, there’s no conventional reliance on cloud processing and its associated privacy vulnerabilities. This is a game changer for handling highly sensitive workloads and data processing tasks, especially for employees in the healthcare and manufacturing sectors.
But the security benefits of AI devices aren’t limited to data protection alone. According to Insight Chief Technology Officer and Distinguished Technologist Juan Orlandini, “The increased computational power enables the implementation of more robust defensive measures directly on these devices. That’s why tech leaders are so enthusiastic about incorporating AI PCs into their arsenal, as they’ll be able to handle more complex security operations and predictive, AI-based analysis with ease.”
In other words, IT teams can arm workers with devices that can run more robust security software without slowing down the device, and workers can use generative AI functions without unnecessary data exposure. That’s a little more peace of mind for security teams and greater productivity for everyone.
Be sure to check out our Buyer’s Guide to learn more about all the benefits of AI devices.
Despite the advancements in AI technology, the human factor remains irreplaceable in safeguarding operations and protecting sensitive data. Looking ahead, the fusion of human ingenuity with AI capabilities will undoubtedly continue to shape how organizations defend their digital ecosystems.
By combining the power of AI with the expertise of human professionals, we can create a more resilient and secure digital landscape that’s better equipped to handle the challenges of tomorrow.
Defense from silicon to security expert. Discover how to reduce your risk with end-to-end security solutions from Insight.