Is Your Security AI-Proof? How to Prepare for the Weaponisation of Generative AI
Dr. Justin Morris is an academic philosopher turned tech writer. As a contributor at Insight, Justin applies his extensive knowledge of ethical issues to endorse sustainable practices and encourage forward-looking conduct within the tech industry.
When the World Economic Forum is asking that question, you know APAC has a problem. And that problem is tied to a 174% increase in the annual cost of cybercrime: $8.4T in 2022, expected to reach $23T by 2027. You may be desensitised to scary-sounding statistics like these. But what makes them harder to ignore is the disruptive impact of AI. In case you hadn’t noticed, generative AI is shifting two paradigms at once: the way your organisation runs, and the risks your organisation faces.
Those two shifts combine to form the third and most important consideration: How your organisation responds to these risks. Fire can be catastrophic. It can also keep you from freezing. The same logic applies from a cybersecurity perspective. Generative AI is not the enemy. It’s what you need in your security solution to keep the lights on.
Preparing your cybersecurity team for the next generation of cybercrime means enabling your systems to perform Extended Detection and Response (XDR). This approach covers all three bases: (1) it improves how your organisation runs, (2) mitigates the risks your organisation faces and (3) accelerates your organisation’s threat response time to machine speed. To fully appreciate the value of security solutions that take the best of generative AI to prevent the worst it can do, let’s take a closer look at the increasing sophistication of the problem.
That’s not sensationalism; it’s the new reality. There is no more advanced tool than AI right now. You know it, and so do cybercriminals. The fact that they’re using these advances in AI to wreak havoc on your bottom line is driving up the cost of succumbing to, and recovering from, cyberattacks. The global average cost of a data breach now sits at $4.45M.
Although cybersecurity professionals are working hard to close gaps, patch vulnerabilities and tame the threatscape, there aren’t enough of them to go around. For supply to meet demand, the world would need an additional 3.4 million skilled cybersecurity workers. This is not good news. We all know what happens when demand far exceeds supply. (How much did you pay for that PlayStation 5, again?)
Even if your organisation is fortunate enough to have a dedicated 24/7/365 security operations team, it’s still no picnic. It may seem impossible (note the qualifier) to predict the ways cybercriminals will use AI to infiltrate your digital infrastructure and exfiltrate your sensitive data.
The promise of AI-powered tools is that they can do what big-brained sentient mammals like you and me cannot. Such as:
Analyse millions of data points in real time.
Detect hidden correlations and subtle indicators of potential threats (a toy illustration follows this list).
Automatically respond to risks before they escalate into full-blown attacks.
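To make “subtle indicators” concrete, here is a toy, purely illustrative sketch of the kind of statistical baselining such tools automate: flagging an hourly sign-in count that breaks sharply from the norm. The numbers and threshold are invented for illustration; production tools apply far richer models across millions of signals.

```python
# Illustrative only: flag hourly sign-in counts that deviate sharply from a
# baseline. This is a toy version of the statistical baselining AI security
# tools automate at vastly greater scale; the sample numbers are invented.
from statistics import mean, stdev

hourly_signins = [42, 39, 45, 41, 44, 40, 43, 210]  # hypothetical counts; the last hour spikes

baseline = hourly_signins[:-1]               # treat the earlier hours as "normal"
mu, sigma = mean(baseline), stdev(baseline)

for hour, count in enumerate(hourly_signins):
    z = (count - mu) / sigma                 # standard deviations from the baseline
    if abs(z) > 3:                           # crude threshold for an anomaly
        print(f"Hour {hour}: {count} sign-ins (z = {z:.1f}) looks anomalous; worth investigating")
```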
The problem is the solution. AI-powered attacks on your systems must be met by an AI-equipped cybersecurity team. Fortunately, AI-powered security is evolving faster than the threats it mitigates.
If you’re familiar with Microsoft 365 Copilot, you know it harnesses the power of large language models to “fundamentally transform the way we work.” Suppose you’re creating a project proposal with data from emails, spreadsheets and presentations. It’ll take the better part of your afternoon to manually compile the information. But now, you can enter a prompt into Copilot: “Generate a proposal using data from my recent emails, Q3 spreadsheet and product roadmap.” Copilot will automatically analyse the data and create the proposal for you in seconds.
Have you met Microsoft Security Copilot? This tool brings the same AI-driven processes that improve company workflows to your cybersecurity team’s quest to preserve business continuity. The advantage of AI security guidance armed with the knowledge of 65 trillion daily signals is speed. No longer will your cybersecurity team have to spend countless hours (ask them for an exact number) manually searching for vulnerabilities in your network. With AI-powered security products like Microsoft Security Copilot, they can spot and close gaps at machine speed. Possessing what is essentially “on-demand predictive intelligence” is how you stop an attack before it happens.
You’ve got remote employees. They need remote desktop access added to their computers. This is typically done over good old TCP port 3389, the default port for the Remote Desktop Protocol (RDP) that IT professionals use to grant access to your systems. And wherever access can be granted, malicious actors will be searching for a gap to gain entry. To prevent this from happening without AI-assisted security, you’ll need to:
Run a scan (this will take several hours; a rough sketch of this step appears after the list).
Analyse the results (this could take even longer).
Write a script to close the open ports (there must be a better way).
Total time spent: Several days, if not weeks.
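To make that first step concrete, here’s a minimal, hypothetical sketch of what a hand-rolled check for exposed RDP ports might look like. The host list and timeout are assumptions for illustration; a real team would reach for a purpose-built scanner and still face the analysis and remediation steps above.

```python
# Illustrative only: a bare-bones check for hosts exposing RDP (TCP 3389).
# The host list and timeout are assumptions; a real team would use a proper
# scanner and still have to analyse the results and remediate each finding.
import socket

HOSTS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]  # hypothetical internal addresses
RDP_PORT = 3389
TIMEOUT_SECONDS = 2

def rdp_port_is_open(host: str) -> bool:
    """Return True if a TCP connection to port 3389 on `host` succeeds."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in HOSTS:
        if rdp_port_is_open(host):
            print(f"{host}: port {RDP_PORT} is reachable; confirm whether RDP should be exposed")
```

Even this toy script only covers discovery; analysing the findings and closing the ports across an entire estate is where the days and weeks go.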
When all is said and defended, you’ll have spent considerable time and effort resolving this issue. And with the shortage of qualified cybersecurity professionals, lesser-known vulnerabilities may have gone unnoticed. Unfortunately, you may not find out until one of them has been exploited by a cybercriminal. To solve the same issue with the benefit of AI, you’ll need to:
Use an AI security tool to automate the process.
Shift your attention to other critical tasks.
That’s all.
Time spent: Less than it took you to read that list.
Outsmart attackers. We’ll help you close gaps in your defences and keep them closed.
Learn more