Cyber Business Podcast

Why Every CISO Must Use AI Now and How to Do It Without Losing Control with Greg McCord - Ep 203

Written by Matthew Connor | Apr 6, 2026 10:35:00 AM

Greg McCord is a career security leader operating across two roles simultaneously. As CISO at Lightcast.io, a leading labor market analytics firm, he protects one of the most data-intensive organizations in the workforce intelligence space. As founder and CISO of McCord Keystone Advisory, launched in late 2025, he extends fractional CISO services to small and mid-sized businesses that need executive-level security leadership but cannot sustain a full-time hire. His background spans government, public sector, and private enterprise, and includes time as an Army interrogator at the SERE school for Special Forces, an experience that informs how he thinks about intelligence, data relevance, and the psychology of adversarial pressure.

 



Here’s a glimpse of what you’ll learn: 

 

  • Why Greg argues every CISO must incorporate AI into their daily security lifecycle or risk being left behind by adversaries who already have
  • Why adopting AI in a non-attributable way is the most important and underemphasized discipline in enterprise security right now
  • How quantum computing threatens to make every encrypted breach dataset collected today readable in the future and what that means for your data strategy
  • Why AI frameworks like AIUC-1 and CSA MAESTRO are becoming critical infrastructure for organizations trying to govern agents, prompts, and LLMs at scale
  • How running LLMs locally on hardware rather than in the cloud changes the security calculus for SMBs and enterprises alike
  • Why the cloud adoption analogy is the most useful mental model for thinking about where AI governance is headed
  • How AI-powered penetration testing and continuous red teaming are changing how organizations find and prioritize vulnerabilities
  • Why the right question is not whether to use AI but how to use it without losing positive control of your most sensitive data


In this episode…

Greg opens with a position that is both practical and urgent. Security leaders who choose not to adopt AI are not playing it safe. They are falling behind adversaries who are already deploying it against them. His counsel is specific: adopt AI, but do it in a non-attributable way. The moment confidential data is connected to an uncontrolled AI system, positive control of that data is gone and there is no reliable way to get it back. The traditional tools still matter. The telemetry and signal they provide remains valuable. But they need to be augmented with AI that can act faster, identify patterns earlier, and close the gap between detection and response before attackers achieve their objective inside your environment.

The quantum computing thread is where Greg brings one of the most forward-looking and underappreciated risks in the conversation. Governments and sophisticated threat actors are collecting encrypted breach data today with no current ability to decrypt it. Once quantum computing matures, that changes. Everything collected now becomes readable later. Greg draws on his Army interrogator background to frame it clearly: the goal is for your data to be irrelevant by the time anyone can crack it, but not all of it will be, and the organizations that are not thinking about this now will have no recourse when it arrives. That reality, combined with the convergence of quantum processing and AI training models, is what makes the current moment unlike anything the industry has faced before.

Greg closes with a perspective on frameworks and governance that is both honest about the pace problem and constructive about the path forward. By the time a framework is written and discussed, the technology it describes has already evolved. That is not an argument against frameworks. It is an argument for building continuous feedback loops between practitioners in the field and the people writing the standards. AIUC-1 and CSA MAESTRO represent serious efforts to govern AI agent behavior, prompt handling, and LLM risk in a structured way. The organizations that engage with those frameworks now, rather than waiting for mandates, will be the ones with the governance foundation in place when the next wave of threats arrives.

 

Resources mentioned in this episode

 

Matthew Connor on LinkedIn
CyberLynx Website
Greg McCord on LinkedIn
Lightcast Website
McCord Keystone Advisory Website

 

Sponsor for this episode...

 

This episode is brought to you by CyberLynx.com  

That's Cyber-L-Y-N-X.com.

CyberLynx is a complete technology solution provider dedicated to ensuring your business has the most reliable and professional IT service.

The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.

Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.

To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

 

Check out other related episodes:

Identity Is the New Perimeter: A Cybersecurity Director's Playbook with Jason Lawrence  

How AffirmedRX Is Using Technology to Fix a Broken Healthcare System with Laurel Cipriani

The Two AI Attack Paths Every Security Leader Needs to Understand Now with Sinan Al Taie 



Transcript: 

 

Greg McCord Interview Transcript

Matthew Connor: Hey, here we go. Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Greg McCord, Founder and CISO of McCord Keystone Advisory and CISO at Lightcast.io. Greg, welcome to the show.

Greg McCord: Thank you for having me.

Matthew Connor: Well, thanks for joining us. Before we get too far in, a quick word from our sponsors. Hackers are getting smarter — is your security keeping up? CyberLynx sells industry-leading, AI-powered cybersecurity solutions that detect threats in real time, so you know about an attack before the damage is done, not after. Learn more at CyberLynx.com. And now back to our show.

Greg, for those who aren't familiar, can you tell us about Lightcast and your role there as CISO?

Greg McCord: Yeah, Lightcast is a leading labor market analytics firm. We deal with a lot of data — we get data from many different sources, and what we do is take that information and provide a trustworthy source of labor market analytics. One of our latest pieces, Fault Lines, really exemplifies the changes in the labor market. We're seeing the impact of AI, and now with the conflict with Iran — how does that affect certain industries and segments? That may not always be addressed explicitly, but we dive into that world of our changing labor market. And of course, as CISO, I have to protect all the data, the infrastructure, and everything that exists there. As other CISOs may relate, it's a lot of work.

Matthew Connor: Fair enough. I also want to touch on McCord Keystone Advisory. What can you tell us about that?

Greg McCord: What I wanted to do there is offer a fractional CISO type of service to companies that can't necessarily afford a full-time CISO. I bring a lot of knowledge and expertise from government, public sector, private sector — you name it. I believe I can give back a little more on top of my CISO work at Lightcast to companies that really just want to enhance their security posture.

Matthew Connor: I think that's fantastic, and it's definitely something smaller businesses really need. They may get some help from internal IT, but IT teams are often too busy and security isn't their core expertise. And if they outsource to an MSP, that's not really their specialty either. They might go to an MSSP, but those teams are often spread thin. So having a fractional CISO really focused on them — brilliant. How long have you been doing that?

Greg McCord: I just kicked off the company in late December — it was a soft launch. I started it because friends were asking me, "Hey Greg, can you help?" And I said, "Sure, why not?" So now it's formalizing that desire to give back and help into an actual company. I'm also building out a bench with a lot of talented technical people who are much smarter than I am in certain areas, because at the speed everything is developing today, I can only do so much. There are only so many hours in a day, so it makes sense to build that out.

Matthew Connor: Absolutely. That's a really worthy cause, especially today. I want to switch back to Lightcast — I think it's fascinating. You're the CISO, not the data team, but can you explain a bit more about how the data is derived? Historically, we've all relied on government labor statistics — unemployment rates, and so on. How does Lightcast differ, and what gaps are you filling?

Greg McCord: That's an important distinction. Government data is still valuable — it's a good source of truth, and a lot of that data helps fill in gaps and smooth out the rough edges. But where our real value comes in is for the entities that leverage Lightcast — through our analyst product and other offerings — by providing additional insights specific to what they're looking for. For example, an educational institution might want to know how their alumni are doing after graduation: what markets are they entering, do they have jobs? It's about understanding the nuances that we don't always think about. I could go through more of our products, but I'll save that for another time.

Matthew Connor: That gives us a really good idea. Let's get timely — it's March 20th, and this episode will probably go out around the end of March. There's a lot going on right now: the conflict in the Middle East with Iran, and we've already seen things like the Striker breach. I think that's really instructive — it hopefully opens people's eyes to the need for leveraging AI more on the cybersecurity side. Here's a massive, multibillion-dollar business that clearly invests in security, but once social engineering enters the picture, a layer gets bypassed. There have to be other layers in place.

When I think about AI and cybersecurity, I think of products like Darktrace and the way they use AI for email and identity security. Applied to something like the Striker breach — you have a product actively analyzing identities, flagging anomalies, and quickly shutting things down while calling for human review. Those kinds of tools end up saving the day. The bad guys are using AI-powered tools — we know that. And I think traditional security products alone are the knife in that gunfight. What's your take on that? What's your sage advice in March 2026?

Greg McCord: If I went 100% CISO instinct, I'd say no — but I know I can't say that. If we don't leverage AI, we're going to get left behind. Every CISO needs to incorporate AI into their daily security lifecycle. What I would encourage — and caution — is to be as non-attributable as possible. As soon as you link confidential data to an AI tool before you're ready, it's gone. You've lost positive control of that data, and you can't really recover from it. You can send cease and desist letters, but by the time you do, how many times has that data already been copied?

When I think about our adversaries using AI against us, it reinforces the need to leverage those tools ourselves to bolster our capabilities. Going back to the knife analogy — traditional security tools are still very important. That's our foundation. We have data, telemetry, all that good information. Now let's make it better, optimize it, eliminate bottlenecks. So when something does happen — a misconfiguration, social engineering, business email compromise — we can quickly respond. Maybe the tool does it for me. Either way, it's an increase in capability that protects your organization and protects people. And that's something we haven't talked much about — yes, you're attacking a corporation, but there are people's lives at stake too, whether through reputational harm or other impacts.

Matthew Connor: I think you raise some really interesting points. When you split AI into machine learning tools in security versus LLMs, there's an important distinction. The challenge is that when people hear "AI," they think LLM, and the concern is valid — when you bolt an LLM onto a traditional email security gateway, you've inherently opened yourself up to prompt injections. We haven't solved that yet. From a business perspective, it's tempting to just "AI-power" something by adding a large language model underneath it. But that's not truly AI-powered security — you've opened a huge can of worms.

When you use machine learning properly in security products, that's the next level beyond rule-based detection where attackers find workarounds. You're using AI the right way. And similarly, an LLM can be used the right or wrong way — give it access to your email, your credit card, the web, and no guardrails, and you're going to have all kinds of problems.

We're at a stage where we have to manage it intelligently. It's that Indiana Jones moment — the guy with the sword versus the one with the gun. The challenge is finding the right products and using the right kinds of AI in the right places. Darktrace, for example, shows us where the future is headed — using machine learning for identity, email, and beyond. Just like self-driving cars today, you can see where it's going, but you can't fully trust it yet. People get lulled into a false sense of security. "It did such a great job on that email — let me automate it." And by email number 20, it's made a mistake that's already been sent.

Greg McCord: Exactly. And that's where things like AIUC-1 come into play. They're building a comprehensive, robust AI framework that addresses prompting, agents, and LLMs specifically. A number of organizations are leveraging this — and why not have a framework that comprehensively addresses all of these areas and pulls in standards like ISO 42001 and CSA MAESTRO? Emil and Ravi over at AIUC-1, and others at CSA, are doing great work publishing white papers on this topic — specifically on how to address the prompt, since everything starts there. You prompt the tool to do something. It's almost like talking to your kids: ask the right questions and you'll get the right responses back. I literally thought about this yesterday talking with one of my little ones — it really is like prompt engineering.

Matthew Connor: It really is. I tend to think of it like an intern — maybe high school, maybe college. You have to give really clear instructions, educate them, supervise them. In time they may become a good employee requiring less supervision, but they still require supervision. We're not at the point where you can just fire and forget. That day will probably come, but not yet.

Greg McCord: Not yet. Plausible, yes. Are the right people making it possible? Maybe. And I think security leaders have to make sure we're raising our voices and raising those red flags. Even if a business leader says, "I want to scale exponentially" — great, that's possible, but what happens when you get a breach and you have to stop, disconnect, and repair everything? That's going to cost a lot of money.

Matthew Connor: And putting on my futurist hat — not looking too far ahead, just binoculars, not a telescope — I think hardware improvements, like Apple's new M5 Ultra chip, are going to bring enough power to the desktop that you can run your LLM locally. That will make it a lot more affordable for SMBs to keep their data securely within their own organization, with no additional breach risk beyond what they'd normally have. Having that locally on your own hardware adds a real layer of safety for both SMBs and enterprises.

A lot still has to fall into place — I know that. But a lot can be baked in. We see that in the difference between something like OpenAI, which is moving as fast as possible and failing forward, versus something like Claude's enterprise tools, where security is designed in from the beginning, with guardrails, controls, and a much more deliberate approach. I think we'll see companies like Anthropic do the right work to put security first. We'll see how the more open, faster-moving approaches hold up when the security implications become clear. From your perspective, once the hardware, software, and everything is within your control and the confines of your network — how do you feel about it then?

Greg McCord: Philosophically, I keep thinking about what "AI" actually means in practice. A lot of people still use it as a more advanced search engine. Within organizations, you have it checking code, building notifications, a number of cool use cases — it helps optimize and makes you more productive. But the flip side is that in gaining productivity, you may be losing creative value. We haven't fully reckoned with that yet.

Going back to what you asked directly: yes, once we've secured it, put the guardrails in place, protected the agents and the agent-to-agent communication — which is itself a fascinating concept — I would obviously feel much more comfortable. It's similar to when businesses moved to the cloud. How did we get assurance that our data would be protected? How did we trust the physical data centers and infrastructure? It's kind of the same idea here.

Matthew Connor: And we all went through that transition. Now the cloud is indispensable — you can't really work without it. I think AI in all its forms will end up the same way. But in the interim, we have to navigate that knife-versus-gunfight period wisely.

I don't know if you saw the McKinsey report — I think it was earlier this week or maybe last week — where a red team used AI, I believe through a company called AI Works, to find and exploit vulnerabilities in about two hours. They safely extracted a large amount of data. The good guys did it, which is great. But that's a fantastic example of using AI to beat attackers at their own game — using it to harden defenses and find vulnerabilities. For larger companies especially, that should be on the radar. We've seen something like a 250% increase in cyberattacks since the conflict started, targeting critical infrastructure — financial institutions, energy, utilities. Our adversaries want to cut off access to money, resources, and essential services. How do you turn off a city's traffic lights? It sounds like a movie, but when you start thinking like an attacker, the options are unsettling.

Greg McCord: Exactly. And for larger companies that already have strong processes in place, it really comes back to my earlier point — leveraging AI in a non-attributable way to increase productivity and capability. At a big company dealing with a 250% surge in alerts, the SOC team is doing their best to stay ahead. You can't default-deny everything because you still have to run a business. So how do you protect the excess volume? You find a vetted, isolated tool — one you've tested and know is safe — and let it do the heavy lifting. Clear the clutter so your team can focus on what matters.

Matthew Connor: That's really key. And I love the cloud analogy — it's so spot on. You can't just jump in. You have to be smart about it, examine what you're doing, and apply it to your specific organization. No two organizations are the same, and there's no one-size-fits-all security product. But integrating the right kinds of AI in the right places — that is the future of security, just like cloud was the future. We can't sit on the sidelines clinging to Windows XP and Server 2008.

Greg McCord: People still do. God bless them. But you can't run a whole organization like that anymore. At one point that was cutting edge — you were doing great. But along came the cloud, and now along comes AI. And with governments beginning to talk about mandates and frameworks around AI security, there are mixed reviews right now about the current administration's competency in this area. But I think back to your earlier point — the people working on the frameworks are pushing everyone in the right direction. When the government says, "Our adversaries are coming at us hard with AI — we need to be focused on this too," that matters, even if they can't mandate specific products.

One of the first questions I got from my executive team was: "How broad is the scope of this? What guardrails can we put in place?" That's what the frameworks are answering — here are the areas we need to lock down, based on extensive research. And the best frameworks are built with feedback loops from people in the field who are actually building multi-agent platforms and experimenting with LLMs. What are you seeing that we haven't addressed yet? Keep that feedback loop going, and the frameworks will keep improving.

Then comes implementation — do you follow the traditional path of plugging it into a GRC platform and gathering evidence? That's a very tedious process, and you're already behind by the time you've done it. You need something that can pull controls and automate configurations. Maybe AI can help with that too — but without breaking anything. It's always a cost-benefit question: what can you do with what you have, while operating at the speed of your business?

Matthew Connor: And that's the real challenge — we've never seen anything advance this quickly. By the time you finish writing a policy or framework, the technology has already moved beyond it. You have to be looking far ahead, strategically and broadly.

I was on the fence about bringing this up, but let's not forget quantum computing. That hasn't gone away. When you think about the technological shifts over the last 30-40 years — from on-prem servers in closets to data centers to cloud — each wave set the stage for the next. Now what happens when the underlying computing architecture itself changes? Instead of binary ones and zeros, you have qubits that can exist in superpositions of states. That dramatically accelerates certain kinds of processing. Once that matures and converges with AI, it's going to be an extraordinary moment. That could be where AGI actually emerges — not from a software breakthrough alone, but from the computational power to process and iterate at a scale we can't yet imagine.

And then you have the data problem. Governments are collecting encrypted breach data right now that they can't crack. Once quantum computing arrives, that old data becomes readable. All of that historical encrypted information is suddenly at risk.

Greg McCord: That's actually an interesting parallel to military intelligence. When I was an interrogator working at the SERE school — Survival, Evasion, Resistance, and Escape — for Special Forces, one of the things you teach people is to resist just long enough for your information to become irrelevant. Once you're captured, the battle moves on. If you can hold out 72 hours, whatever you know has hopefully been superseded by events. Then you can talk.

I'm hopeful that a lot of the data collected in past breaches — if it gets cracked by quantum computing eventually — will be too outdated to matter. But some of it won't be. That gets into deep questions about privacy, identity, and what it means when people have adopted the mindset of "it's already out there, what can I do?" That's not the right mindset. The right mindset is: how do I protect my data? How do I protect my identity — both digital and physical — because increasingly, our physical identities are becoming bits of data?

Matthew Connor: And we're heading into an interesting future when it comes to verification itself. Even on a video call right now, AI is getting to the point where you can't be sure you're talking to the real Greg or the real Matt. With enough data available about either of us, it could all be synthesized in real time. And just by recording and posting this video publicly, we've given platforms enough material to clone our voices and likenesses. It's a fascinating and somewhat unsettling frontier.

Obviously we could theorize about this forever. But practically speaking, today, you have to think carefully about what AI means for your organization and be smart about it. It can't just be "let everyone use whatever LLM they want, ask it whatever they want, it's free." There's a reason it's free — you are the payment.

Greg McCord: Exactly.

Matthew Connor: Great advice, and a great place to land. Before we go, can you tell everyone where they can find out more about you, Lightcast, and McCord Keystone Advisory?

Greg McCord: Sure. For Lightcast, it's easy to find: www.lightcast.io. You can find more information about the security trust program I built there at trust.lightcast.io — it covers our certifications and has answers to commonly asked questions. The team loves it, and so does the sales team.

For more about me personally, you can find me on LinkedIn — that's where I share most of what I'm working on and thinking about. I'm becoming much more active there, so feel free to drop me a line. I'll try to get back to you as soon as I can.

Matthew Connor: And McCord Keystone Advisory?

Greg McCord: That's www.mccordkeystoneadvisory.com. I tried to make it easy. There's a contact form there if you want to learn more about the services we provide and how we can help.

Matthew Connor: Fantastic. Well, Greg, until next time.

Greg McCord: Until next time. Thank you, Matt.

Matthew Connor: Thank you.