How AI agents can aid cyber criminals: ICAEW Insights
Oct 27, 2025
AI technologies are becoming increasingly autonomous, but this brings added cybersecurity risk. How can we protect ourselves?
North Korean operatives use cutting-edge AI tools to create fake tech engineer identities. They ace the application process and land remote employment roles with US Fortune 500 tech companies and funnel their salaries back to the regime.
This sounds like a work of fiction, but it’s actually the latest iteration of an employment scam that’s been going on for several years, according to the FBI . Where previously the scammers required intensive IT training, AI – specifically agentic AI – is now doing the heavy lifting.
What is agentic AI?
AI agents are individual software tools designed to complete a particular task autonomously. ‘Agentic AI’ refers to the framework in which they operate, or the science of orchestrating a number of agents to work together to solve problems and complete more complex and sophisticated tasks.
While this next level of AI has great potential in the workplace, it has also lowered the bar to a career in cybercrime. “It has the power to transform attacks,” says Alistair Grange, Technology Risk Director, Ernst & Young LLP (EY UK). “What might have been two to three weeks’ work for a technically skilled threat actor can now be done in around 10 minutes with agentic AI. Criminals can exploit a vulnerability much more rapidly, without the same level of technical prowess.”
Accountancy firms with their treasure troves of client data could become more frequent targets in the future, according to Radhika Modha, Board Member of the ICAEW’s Tech Faculty. “A cybercriminal can use AI-driven tools to carry out reconnaissance to look for weaknesses, understand how a business is organised, identify who to target and decide on the most effective approach. AI has made it much easier to pull together information from multiple sources and build a detailed picture of an organisation’s inner workings.”
Additional protection
Now that cyberattacks can cause significant damage in a short space of time, we need to up staff training and awareness, Modha adds. “Extra vigilance is needed as accountants often work in high-pressure environments where things need to get done quickly, which cybercriminals can exploit. Be aware and inquisitive if something seems off.”
It’s also important to train teams to be open and raise the alarm immediately if they think something has gone wrong, rather than fearing they'll be blamed, Modha adds. “You also need to think about how you’re working with your third parties. What access are you giving people external to your organisation?”
In a heightened threat landscape, it’s even more important to ensure you have the essential range of defences in place through gaining a cyber accreditation. Steps include dividing your network into isolated segments and adopting the principle of least privilege, which means staff access is limited to the parts they need for their jobs, explains Grange. “Really trying to limit the amount of lateral movement attackers can make if they do penetrate your network is absolutely key.” He advises small organisations with limited budgets to focus on applying protections to systems that are critical to the business and getting controls validated by an independent organisation through penetration testing.
Agentic AI in the workplace – benefits and risks
Organisations are now experimenting with AI-enabled workflows; their use is likely to grow in the future. However, the increasing autonomy and complexity of AI brings greater vulnerability, and adversarial attacks can take a variety of forms. One is data poisoning, or corruption of the data used to train models, Grange explains. “If you’re using it for fraud detection, you might have a database of transactions, some marked as fraudulent and some as legitimate. If someone can get into that data set and scramble it, then your tool will provide erroneous outputs.”
Another form of attack involves model inversion, in which AI outputs are analysed to make inferences about the data that it is trained on, which may be of a sensitive nature. Prompt injection tricks the technology by modifying input data in subtle way – a simple example would be adding a malicious prompt in white text that is imperceptible to the human eye but would be acted on by the technology. “You need to have those AI-specific threat vectors at the front of your mind as you're deploying these kind of solutions, and then test at regular intervals to make sure that your outputs are reliable,” advises Grange.
Human in the loop
When taking AI use to the next level, you need to embed human involvement into the governance and the process design, says Jason Walters, Technology Risk Director, EY UK. “Moving from AI agents to agentic AI, you will have different agents working together with an orchestration layer on top. These multi-agent frameworks operate more autonomously, driven by an objective rather than just a prompt. As this continues to become more complex and the level of autonomy increases, that's where we’ll see risks become more challenging to respond to. Rather than a human starting a process and then receiving output from an agent, human interaction needs to be designed in as a part of an agentic process.”
At EY, the use of this type of technology has been ramped up over the last three years, says Pragasen Morgan, UK Technology Risk Leader at EY UK. “AI is reshaping some of the things we do, adding efficiency and allowing us to do things a lot more quickly. The question is: what sort of ROI are we likely to get from it across businesses? What we’re seeing at the moment is that the ROI across businesses isn't coming through really fast, but the use cases are starting to develop. I suspect this emerging technology will be part of our lives as we go forward, so organisations will need to adapt and find the right balance when it comes to risk.”
[ICAEW]
