
Nick Kathmann, CISO/CIO at LogicGate – Interview Series


Nicholas Kathmann is the Chief Information Security Officer (CISO) at LogicGate, where he leads the company’s information security program, oversees platform security innovations, and engages with customers on managing cybersecurity risk. With over 20 years of experience in IT and 18+ years in cybersecurity, Kathmann has built and led security operations across small businesses and Fortune 100 enterprises.

LogicGate is a risk and compliance platform that helps organizations automate and scale their governance, risk, and compliance (GRC) programs. Through its flagship product, Risk Cloud®, LogicGate enables teams to identify, assess, and manage risk across the enterprise with customizable workflows, real-time insights, and integrations. The platform supports a wide range of use cases, including third-party risk, cybersecurity compliance, and internal audit management, helping companies build more agile and resilient risk programs.

You serve as both CISO and CIO at LogicGate — how do you see AI transforming the responsibilities of these roles in the next 2–3 years?

AI is already transforming both of these roles, but in the next 2–3 years, I think we’ll see a major rise in agentic AI that has the power to reimagine how we handle business processes on a day-to-day basis. Anything that would normally go to an IT help desk — like resetting passwords, installing applications, and more — can be handled by an AI agent. Another critical use case will be leveraging AI agents to handle tedious audit assessments, allowing CISOs and CIOs to prioritize more strategic requests.

With federal cyber layoffs and deregulation trends, how should enterprises approach AI deployment while maintaining a strong security posture?

While we’re seeing a deregulation trend in the U.S., regulations are actually strengthening in the EU. So, if you’re a multinational enterprise, expect to have to comply with global regulatory requirements around responsible use of AI. For companies operating only in the U.S., I see there being a learning period when it comes to AI adoption. I think it’s important for these enterprises to form strong AI governance policies and maintain some human oversight in the deployment process, making sure nothing goes rogue.

What are the biggest blind spots you see today when it comes to integrating AI into existing cybersecurity frameworks?

While there are a few areas I can think of, the most impactful blind spot is where your data is located and where it’s traversing. The introduction of AI is only going to make oversight in that area more of a challenge. Vendors are enabling AI features in their products, but that data doesn’t always go directly to the AI model/vendor. That renders traditional security tools like DLP and web monitoring effectively blind.

You’ve said most AI governance strategies are “paper tigers.” What are the core components of a governance framework that actually works?

When I say “paper tigers,” I’m referring specifically to governance strategies where only a small team knows the processes and standards, and they are not enforced or even understood throughout the organization. AI is very pervasive, meaning it impacts every group and every team. “One size fits all” strategies aren’t going to work. A finance team implementing AI features into its ERP is different from a product team implementing an AI feature in a specific product, and the list goes on. The core components of a strong governance framework vary, but IAPP, OWASP, NIST, and other advisory bodies have pretty good frameworks for determining what to evaluate. The hardest part is figuring out when the requirements apply to each use case.

How can companies avoid AI model drift and ensure responsible use over time without over-engineering their policies?

Drift and degradation are simply part of using technology, but AI can significantly accelerate the process. And when the drift becomes too great, corrective measures will be needed. A comprehensive testing strategy that looks for and measures accuracy, bias, and other red flags is essential over time. If companies want to avoid bias and drift, they need to start by making sure they have the tools in place to identify and measure it.
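As a purely illustrative sketch (not LogicGate tooling), a drift check of this kind can be as simple as comparing a model’s recent accuracy and per-group approval rates against a fixed baseline and flagging when either shifts past a threshold; the records, group labels, and thresholds below are hypothetical.

```python
# Minimal drift/bias check: compare recent model decisions against a baseline
# and report red flags. All data, groups, and thresholds are illustrative.

def accuracy(records):
    """Fraction of records where the model's decision matched the actual outcome."""
    return sum(r["predicted"] == r["actual"] for r in records) / len(records)

def approval_rate(records, group):
    """Share of a group's applications the model approved."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["predicted"] == "approve" for r in subset) / len(subset)

def drift_report(baseline, recent, max_accuracy_drop=0.05, max_rate_gap=0.10):
    flags = []
    drop = accuracy(baseline) - accuracy(recent)
    if drop > max_accuracy_drop:
        flags.append(f"accuracy dropped by {drop:.1%}")
    for group in {r["group"] for r in recent}:
        gap = abs(approval_rate(recent, group) - approval_rate(baseline, group))
        if gap > max_rate_gap:
            flags.append(f"approval rate for group '{group}' shifted by {gap:.1%}")
    return flags or ["no drift detected"]

# Hypothetical baseline vs. recent decision logs
baseline = [
    {"group": "A", "predicted": "approve", "actual": "approve"},
    {"group": "A", "predicted": "deny", "actual": "deny"},
    {"group": "B", "predicted": "approve", "actual": "approve"},
    {"group": "B", "predicted": "deny", "actual": "deny"},
]
recent = [
    {"group": "A", "predicted": "deny", "actual": "approve"},
    {"group": "A", "predicted": "deny", "actual": "deny"},
    {"group": "B", "predicted": "approve", "actual": "approve"},
    {"group": "B", "predicted": "approve", "actual": "deny"},
]

for flag in drift_report(baseline, recent):
    print(flag)
```

The point of a sketch like this is simply that drift only becomes actionable once both the metric and the threshold for "too great" are written down and measured on a schedule.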

What role should changelogs, limited policy updates, and real-time feedback loops play in maintaining agile AI governance?

While they play a role right now in reducing risk and liability for the provider, real-time feedback loops hamper the ability of customers and users to perform AI governance, especially if changes in communication mechanisms happen too frequently.

What concerns do you have around AI bias and discrimination in underwriting or credit scoring, particularly with “Buy Now, Pay Later” (BNPL) services?

Last year, I spoke to an AI/ML researcher at a large, multinational bank who had been experimenting with AI/LLMs across their risk models. The models, even when trained on large and accurate data sets, would make really surprising, unsupported decisions to either approve or deny underwriting. For example, if the words “great credit” were mentioned in a chat transcript or communications with customers, the models would, by default, deny the loan — regardless of whether the customer said it or the bank employee said it. If AI is going to be relied upon, banks need better oversight and accountability, and those “surprises” need to be minimized.

What’s your take on how we should audit or assess algorithms that make high-stakes decisions — and who should be held accountable?

This goes back to the comprehensive testing model, where you need to continuously test and benchmark the algorithms/models in as near real time as possible. This can be tricky, as the model output may show desirable results that still require humans to identify outliers. As a banking example, a model that denies all loans flat out will have a great risk rating, since zero of the loans it underwrites will ever default. In that case, the organization that implements the model/algorithm should be responsible for the outcome of the model, just as they would be if humans were making the decision.
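To make the banking example concrete, here is a toy illustration (entirely made-up data, not from the interview or any real bank) of why a single metric like default rate is misleading: a model that denies everyone scores perfectly on defaults while writing no business, so an audit needs to track approval volume alongside loss rate.

```python
# Toy illustration: judging a lending model only by the default rate of the
# loans it approves makes "deny everything" look perfect. All applicants and
# outcomes below are fabricated for the example.

applicants = [
    {"would_repay": True}, {"would_repay": True}, {"would_repay": True},
    {"would_repay": False}, {"would_repay": True}, {"would_repay": False},
]

def evaluate(model, applicants):
    approved = [a for a in applicants if model(a)]
    defaults = [a for a in approved if not a["would_repay"]]
    default_rate = len(defaults) / len(approved) if approved else 0.0
    approval_rate = len(approved) / len(applicants)
    return default_rate, approval_rate

deny_all = lambda a: False     # "perfect" risk score, zero business written
approve_all = lambda a: True   # maximum business, maximum losses

for name, model in [("deny_all", deny_all), ("approve_all", approve_all)]:
    dr, ar = evaluate(model, applicants)
    print(f"{name}: default rate {dr:.0%}, approval rate {ar:.0%}")
```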

With more enterprises requiring cyber insurance, how are AI tools reshaping both the risk landscape and insurance underwriting itself?

AI tools are great at digesting large amounts of information and finding patterns or trends. On the customer side, these tools will be instrumental in understanding the organization’s actual risk and managing that risk. On the underwriter’s side, these tools will be helpful in finding inconsistencies and organizations that are becoming less mature over time.

How can companies leverage AI to proactively reduce cyber risk and negotiate better terms in today’s insurance market?

Today, the best way to leverage AI for reducing risk and negotiating better insurance terms is to filter out noise and distractions, helping you focus on the most important risks. If you reduce those risks in a comprehensive way, your cyber insurance rates should go down. It’s too easy to get overwhelmed by the sheer volume of risks. Don’t get bogged down trying to address every single issue when focusing on the most critical ones can have a much bigger impact.

What are a few tactical steps you recommend for companies that want to implement AI responsibly — but don’t know where to start?

First, you need to understand what your use cases are and document the desired outcomes. Everyone wants to implement AI, but it’s important to consider your goals first and work backwards from there — something I think a lot of organizations struggle with today. Once you have an understanding of your use cases, you can research the different AI frameworks and understand which of the applicable controls matter for your use cases and implementation. Strong AI governance is also business-critical, both for risk mitigation and for efficiency, since automation is only as useful as its data input. Organizations leveraging AI must do so responsibly, as partners and customers are asking tough questions around AI sprawl and usage. Not knowing the answer can mean missing out on business deals, directly impacting the bottom line.

If you had to predict the biggest AI-related security risk five years from now, what would it be — and how can we prepare today?

My prediction is that as agentic AI is built into more business processes and applications, attackers will engage in fraud and misuse to manipulate these agents into delivering malicious outcomes. We have already seen this with the manipulation of customer service agents, resulting in unauthorized deals and refunds. Threat actors used language tricks to bypass policies and interfere with the agent’s decision-making.

Thank you for the great interview; readers who wish to learn more should visit LogicGate.
