Inside AI Cybersecurity: Insights from Deloitte’s Kieran Norton on Emerging Threats and Defenses
Deloitte’s Kieran Norton shares expert insights on managing emerging AI cybersecurity risks, including autonomous AI agents, data poisoning, and the development of AI firewalls to protect enterprises.
Kieran Norton’s Role in AI Cybersecurity
Kieran Norton, a principal at Deloitte & Touche LLP, leads the US Cyber AI & Automation practice, bringing over 25 years of technology and cybersecurity expertise. He spearheads AI transformation within Deloitte’s cyber practice, focusing on developing AI and automation solutions that help clients boost cyber defenses while managing risks associated with AI technologies.
Emerging Cybersecurity Threats from Autonomous AI Agents
As AI agents gain autonomy—perceiving, deciding, and acting independently—they introduce complex challenges in maintaining oversight of interactions between users, data, and other AI agents. This autonomy can lead to risks such as data leakage, prompt injection attacks, and agent-to-agent exploit chains. The proliferation of these agents across enterprises raises concerns about visibility and control.
Managing AI agent identities also becomes critical, as AI models are frequently created and decommissioned, complicating efforts to monitor behavior and ensure trustworthiness.
Risks of Data Poisoning in AI Training Pipelines
Data poisoning, where malicious or faulty data corrupts AI training sets, poses a significant threat. This can stem from adversarial attacks or inadvertent errors during data collection and annotation. Poisoned data can skew AI outputs, producing false positives or negatives.
Prevention involves layered strategies: procedural data validation and trust assessments, technical safeguards like federated learning, and architectural measures including zero-trust pipelines and anomaly detection through robust monitoring.
Post-Deployment AI Model Manipulation and Detection
Malicious actors can manipulate AI models post-deployment via APIs or embedded systems, using techniques such as API hijacking, runtime memory manipulation, or gradual model drift poisoning. Early detection strategies include endpoint monitoring (EDR/XDR), secure inference pipelines utilizing confidential computing and zero trust, and model watermarking or signing.
Prompt injection attacks can exploit AI models to extract unintended information or cause harmful outputs. While guardrail tools exist, this remains an evolving arms race between attackers and defenders.
Limitations of Traditional Cybersecurity Frameworks for AI
Established cybersecurity frameworks (NIST, ISO, MITRE) remain relevant but require updates to address AI-specific nuances. AI expands the range of inputs and outputs, challenging traditional penetration testing and rule-based detection, thus emphasizing the need for automation and tailored controls integrated into secure software development lifecycles.
Building a Cybersecurity Strategy for Generative AI and LLMs
There is no universal strategy; organization-specific factors guide the approach. Key foundations include:
- Conducting readiness assessments to identify capability gaps
- Establishing AI governance with cross-functional stakeholder involvement
- Designing trusted AI architectures with integrated security tools
- Enhancing SDLC practices to embed security controls in AI development
Understanding AI Firewalls
AI firewalls monitor and control AI system inputs and outputs to prevent misuse, data leaks, and unethical behavior. Unlike traditional firewalls that manage network traffic, AI firewalls analyze natural language interactions and apply contextual policies to safeguard AI models.
The Future of AI in Threat Detection and Cybersecurity
Security operations centers (SOCs) have evolved with AI/ML models to improve threat detection, classification, and automated response. AI-driven agents, like Deloitte’s 'Digital Analyst,' assist human analysts by triaging alerts and recommending responses based on historical data.
The Evolving Relationship Between AI and Cybersecurity
AI will continue to be both a powerful tool and a complex risk factor in cybersecurity. Organizations must adapt by integrating AI-aware strategies and controls to harness AI’s benefits while mitigating emerging threats.
Сменить язык
Читать эту статью на русском