
Nick Kathmann on AI's Impact on Cybersecurity and Governance at LogicGate

Nick Kathmann, CISO/CIO at LogicGate, shares insights on how AI is reshaping cybersecurity, governance frameworks, and the future risks associated with AI in enterprise environments.

Leadership in Cybersecurity and IT at LogicGate

Nicholas Kathmann serves as the Chief Information Security Officer (CISO) and Chief Information Officer (CIO) at LogicGate. He directs the company's information security initiatives, fosters innovations in platform security, and collaborates with clients to manage cybersecurity risks. With over 20 years in IT and more than 18 years focused on cybersecurity, Kathmann has experience managing security operations for both small businesses and Fortune 100 companies.

About LogicGate and Risk Cloud®

LogicGate provides a risk and compliance platform designed to help organizations automate and scale their governance, risk, and compliance (GRC) efforts. Their flagship product, Risk Cloud®, enables teams to identify, assess, and manage risks enterprise-wide with customizable workflows, real-time analytics, and integrations. The platform supports diverse use cases, including third-party risk management, cybersecurity compliance, and internal audit processes, empowering companies to develop more agile and resilient risk strategies.

The Evolving Role of AI in CISO and CIO Responsibilities

Kathmann believes AI is already reshaping the roles of CISOs and CIOs and anticipates a significant rise in Agentic AI within the next 2–3 years. This form of AI can transform daily business processes by automating tasks typically handled by IT help desks, such as password resets and application installations. AI agents will also streamline tedious audit assessments, freeing CISOs and CIOs to focus on strategic priorities.

Navigating AI Deployment Amid Cybersecurity Layoffs and Deregulation

Despite deregulation trends in the U.S., global enterprises must prepare for stricter regulations in regions like the EU concerning responsible AI use. For U.S.-only companies, an adoption learning curve is expected. Kathmann stresses the importance of robust AI governance policies and maintaining human oversight to prevent uncontrolled AI behaviors.

Challenges Integrating AI into Cybersecurity Frameworks

A critical blind spot Kathmann identifies is visibility into, and control over, where data lives and how it moves. AI integration complicates oversight because embedded vendor AI features often route data to underlying models or subprocessors indirectly, outside the paths that traditional security tools such as Data Loss Prevention (DLP) and web monitoring can observe, which renders those tools less effective.

Effective AI Governance Frameworks

Kathmann critiques many AI governance strategies as "paper tigers": processes known and enforced by only a small group. Since AI impacts every team differently, governance must be tailored to specific use cases. He recommends frameworks from organizations such as IAPP, OWASP, and NIST for evaluating AI governance, while noting that the real challenge lies in applying each framework's requirements to an organization's particular use cases.

Managing AI Model Drift and Ensuring Responsible Use

Model drift and degradation are inevitable, and the rapid pace of change in AI systems accelerates them. Continuous testing strategies that assess accuracy, bias, and other risks are essential, and organizations need proper tooling to detect and measure drift so that model integrity is maintained over time.
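As a concrete illustration of the kind of continuous drift testing Kathmann describes, the sketch below computes a Population Stability Index (PSI) between a reference window of model scores and a live window. PSI is one common drift metric among many; the bin count, sample sizes, and 0.25 alert threshold here are illustrative assumptions, not recommendations from the interview.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Population Stability Index between two score samples.

    Rule-of-thumb cutoffs: PSI < 0.1 is usually read as stable,
    0.1-0.25 as moderate drift, and > 0.25 as significant drift.
    """
    # Bin edges come from the reference distribution so both
    # samples are compared on the same grid.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    cur_pct = np.histogram(current, edges)[0] / len(current)

    # Clip empty bins to avoid division by zero and log(0).
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Example: compare last month's model scores to this week's.
rng = np.random.default_rng(0)
baseline = rng.normal(0.60, 0.10, 5_000)  # reference scores
live = rng.normal(0.55, 0.12, 1_000)      # current scores
psi = population_stability_index(baseline, live)
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift, trigger review")
else:
    print(f"PSI={psi:.3f}: within tolerance")
```

The same pattern extends to per-feature distributions and fairness metrics, with each threshold breach routed to a human reviewer rather than acted on automatically.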

Role of Changelogs and Feedback Loops in AI Governance

Changelogs and infrequent, well-communicated policy updates currently help customers manage provider risk. If providers instead push frequent, real-time changes through feedback loops, customers may struggle to govern AI effectively, because every change in model behavior must be communicated, re-evaluated, and reflected in their controls.

Concerns about AI Bias in Financial Services

Kathmann shares an example of AI/ML models in banking producing unexpected underwriting decisions, such as denying loans when the phrase "great credit" appears in customer interactions regardless of context. This highlights the need for better oversight and accountability to minimize such biases.
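One way to surface this kind of spurious trigger is a simple perturbation test: score the same application with and without the offending phrase and flag any decision flips. The sketch below assumes a hypothetical `model.predict` interface mapping an application record to an approve/deny decision; it is illustrative, not LogicGate's or any bank's actual tooling.

```python
def phrase_flip_test(model, applications, phrase="great credit"):
    """Flag applications whose decision changes when `phrase` is
    appended to the free-text notes, all else held equal.

    `model` is a hypothetical object exposing
    predict(application: dict) -> str ("approve" / "deny").
    """
    flips = []
    for app in applications:
        base = model.predict(app)
        perturbed = dict(app)  # copy so the original stays untouched
        perturbed["notes"] = f"{app.get('notes', '')} {phrase}".strip()
        changed = model.predict(perturbed)
        if changed != base:
            flips.append({"id": app["id"], "before": base, "after": changed})
    return flips

# Any nonzero flip rate on otherwise identical inputs is evidence
# the model keys on the phrase itself rather than on context.
```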

Auditing High-Stakes Algorithms and Accountability

Continuous, real-time testing and benchmarking of algorithms are necessary, and human judgment remains crucial for identifying outliers. Organizations deploying these models should be held accountable for their outcomes, just as human decision-makers are.
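A lightweight way to pair continuous benchmarking with human judgment is to monitor a headline decision metric and escalate statistical outliers to a reviewer instead of auto-remediating. The sketch below tracks a rolling loan-approval rate and flags windows that deviate from a baseline by more than a chosen number of standard deviations; the metric, window size, and threshold are assumptions for illustration.

```python
from collections import deque
import statistics

class ApprovalRateMonitor:
    """Rolling approval-rate tracker that escalates outlier
    windows to a human reviewer instead of acting on its own."""

    def __init__(self, baseline_mean, baseline_stdev,
                 window=500, z_threshold=3.0):
        self.baseline_mean = baseline_mean
        self.baseline_stdev = baseline_stdev
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def record(self, approved: bool):
        self.window.append(1 if approved else 0)
        if len(self.window) < self.window.maxlen:
            return None  # not enough data for a full window yet
        rate = statistics.fmean(self.window)
        z = (rate - self.baseline_mean) / self.baseline_stdev
        if abs(z) > self.z_threshold:
            return {"approval_rate": rate, "z_score": z,
                    "action": "escalate to human review"}
        return None
```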

AI's Influence on Cyber Insurance and Risk Management

AI tools help analyze large datasets to identify patterns that inform how both customers and underwriters understand and manage risk. They also help detect inconsistencies in reported data and track how an organization's security maturity evolves over time.

Using AI to Reduce Cyber Risk and Improve Insurance Terms

Kathmann advises focusing on critical risks by filtering out the noise, which leads to more meaningful risk reduction and better cyber insurance rates. Attempting to address every risk at once is overwhelming and less effective.
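In practice, "filtering out the noise" can be as simple as scoring each risk on likelihood and impact and concentrating remediation on the top slice. The sketch below implements that triage with a classic likelihood-times-impact score; the 1–5 scales, example entries, and cutoff are illustrative assumptions, not LogicGate's scoring model.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def critical_risks(register, cutoff=15):
    """Return risks at or above the cutoff, highest score first."""
    return sorted((r for r in register if r.score >= cutoff),
                  key=lambda r: r.score, reverse=True)

register = [
    Risk("Unpatched internet-facing VPN", 4, 5),
    Risk("Stale test account in sandbox", 2, 1),
    Risk("Third-party breach at key vendor", 3, 5),
]
for risk in critical_risks(register):
    print(f"{risk.score:>2}  {risk.name}")
```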

Tactical Steps for Responsible AI Implementation

Start by understanding specific use cases and desired outcomes. Research applicable AI frameworks and controls tailored to those use cases. Strong AI governance is vital for risk mitigation and operational efficiency, as automation relies heavily on data quality. Being prepared to answer questions about AI usage is essential for business success.

Future AI-Related Security Risks

Kathmann predicts that as Agentic AI becomes embedded in business processes, attackers will exploit these agents for fraud and malicious activities. Instances of language-based manipulation to bypass customer service agent policies have already been observed.
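A common mitigation is to enforce policy outside the language model: whatever the conversation persuades an agent to attempt, each proposed action is checked against a hard allowlist and hard limits before execution. The sketch below shows that pattern with hypothetical action names and limits; it is one defensive layer, not a complete defense against prompt injection.

```python
# Hypothetical action names and limits, for illustration only.
ALLOWED_ACTIONS = {"check_order_status", "issue_refund", "reset_password"}
MAX_REFUND_USD = 100.00

def execute_agent_action(action: str, params: dict):
    """Gate every agent-proposed action with deterministic policy
    checks that no conversational input can talk its way around."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not allowlisted")
    if action == "issue_refund" and params.get("amount_usd", 0) > MAX_REFUND_USD:
        # Over-limit refunds go to a human, no matter what the
        # customer (or an attacker) said in the chat.
        raise PermissionError("refund exceeds limit; escalate to human")
    return dispatch(action, params)  # hypothetical downstream executor

def dispatch(action, params):
    print(f"executing {action} with {params}")
```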

For further insights, visit LogicGate.
