The security implications of AI integration — Azeria Labs CEO explores the future of AI and its threat landscape
[[{“value”:”
More needs to be done to address the lack of skills and resources surrounding AI integration and security, Maria Markstedeter, CEO and founder of Azeria Labs, put to the audience at the recent Dynatrace Perform 2024 conference in Las Vegas.
In order to combat the risks posed by new innovations such as AI agents and composite AI, security teams and data scientists need to enhance their communication and collaboration.
Having experienced the frustrations that a lack of resources brings from her experience reverse engineering ARM processors, Markstedter believes that better collaboration and understanding is needed to minimize the threats posed by AI integrations.
“You can’t find vulnerabilities in a system that you don’t fully understand”
The increasing size and complexity of data processed by AI models is moving beyond the bounds of what security teams are capable of threat modelling, especially when security professionals lack the resources to understand them.
New attacks and new vulnerabilities, “require you to have an understanding of data science and how AI systems work but also at the same time [have] a very deep understanding of security and threat modeling and risk management,” Markstedter states.
This is especially true when it comes to new multimodal AI systems that can process multiple data inputs such as text, audio and images at the same time. Markstedter points out that while unimodal and multimodal AI systems differ greatly in the data they can process, the general call and response nature of human to AI interaction remains largely the same.
“This transactional nature just isn’t the silver bullet that we were hoping for. This is where AI agents come in.”
AI agents present a solution to this highly transactional nature by essentially having the ability to ‘think’ about their task and come up with a unique final result depending on the information available to them at the time.
This poses a significant and unprecedented threat for security teams as, “the notion of access and identity management has to be reevaluated because we’re basically entering a world where we have a non-deterministic system that has access to a multitude of business data and apps, and has the authorization to perform non-deterministic actions.”
Markstedter argues that because these AI agents will need access to internal and external data sources, there is a significant risk of these agents being fed malicious data that might otherwise appear non-harmful to a security evaluator.
“This processing of external data will become even more tricky with multimodal AI because now the malicious instructions don’t have to be part of text on a website or a part of an email, but they can be hidden in images and audio files.”
Its not all bad news though. The evolution of composite systems that combine multiple AI technologies into a single product can, “create tools that give us a much more interactive and dynamic analytics experience.”
By combining threat modelling with composite AI, and by encouraging security teams to collaborate more closely with data scientists, it is possible to greatly mitigate not only the risks posed by AI integrations but also enhance the skillsets of security teams.
More from TechRadar Pro
AI is not the only silver-bullet solution when it comes to data problemsLooking to boost your productivity? Here is our guide to the best AI toolsHere is our rankings of the best endpoint protection software
“}]]