Cyber Business Podcast

Why Insecure AI Is Just as Dangerous as No AI with Shannon Brewster - Ep 210

Written by Matthew Connor | Apr 29, 2026 12:45:03 PM

Shannon Brewster is the CISO at YipitData, a market research firm providing real-time analytics and competitive intelligence, including Spendhound, a product that helps IT professionals track SaaS spend, benchmark pricing, and manage contract renewal cycles. Shannon joined in December 2025 and also leads the enterprise IT organization. She is a board member of ISC2 and came to this episode fresh from RSA, where she left with a sharper sense of urgency about how quickly the AI security landscape is shifting and how far most organizations still have to go to meet it responsibly. 

 



Here’s a glimpse of what you’ll learn: 

 

  • Why Shannon believes organizations face a binary choice right now: go all-in on AI responsibly or risk being left behind, and why insecure AI is just as dangerous as no AI
  • Why the 85 percent rule (five CIS controls that mitigate the vast majority of organizational risk) matters more now than ever as a foundation before any AI strategy can work
  • How the Agentic Trust Framework from the Cloud Security Alliance applies zero trust principles to AI agents and why it gives security leaders a structured way to govern agent behavior, permissions, and incident response
  • Why constraints on AI agents are not a slowdown but rocket fuel, enabling better performance, fewer hallucinations, and a smaller blast radius when something goes wrong
  • Why the taxonomy problem in AI security is more urgent than most people realize and why the industry cannot build effective frameworks until it agrees on what to call things
  • How the zero day vulnerability landscape is forecast to explode and why the only realistic defense is AI-powered behavioral anomaly detection that does not wait for a patch
  • Why the shared responsibility matrix concept from cloud adoption needs an equivalent for AI and why the industry is currently filling that gap with applied logic rather than defined standards
  • Why an agentic SOC is not just an exciting idea but a practical necessity as human response timelines become incompatible with the speed of AI-powered attacks


In this episode…

Shannon opens with a position that cuts through most of the debate around AI adoption timelines: organizations can either go all-in on AI with a security-first approach and be positioned to outperform, or they can wait and risk falling behind in ways that become existential. She frames the risk symmetrically: insecure AI adoption is roughly as dangerous as avoiding it entirely, which is why the starting point she returns to throughout the conversation is the foundation. The 85 percent rule (five CIS controls covering asset inventory, vulnerability management, and privileged account management) still mitigates the vast majority of how organizations get compromised. If that foundation is weak, entering the AI era from that starting point creates compounding exposure that no amount of advanced tooling can fully compensate for.

The agentic AI governance thread is where Shannon brings her most structured thinking. She references the Agentic Trust Framework developed by Josh Woodruff and published through the Cloud Security Alliance as the clearest articulation she has seen of how to apply zero trust principles to AI agents. The framework covers identity, behavior, data governance, segmentation, and incident response, and it maps cleanly to a progression model she describes as treating agents like employees: interns get read-only access, they earn expanded permissions through demonstrated performance, and a principal agent going rogue is immediately an all hands on deck situation. Her insight on constraints lands as one of the most quotable moments in the episode. Constraints on agents are not security friction. They are rocket fuel. They reduce hallucination, sharpen purpose, and contain the damage when something inevitably goes sideways.
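The employee-style progression model Shannon describes can be sketched in code. This is a minimal illustration, not anything from the Agentic Trust Framework itself; the tier names follow her intern-to-principal analogy, while the permission strings, promotion threshold, and severity labels are assumptions made up for the example.

```python
from enum import IntEnum

class TrustTier(IntEnum):
    # Agents earn access the way employees do: start low, prove yourself
    INTERN = 0     # read-only
    JUNIOR = 1
    STAFF = 2
    PRINCIPAL = 3  # near-autonomous; a rogue agent here is all-hands

# Illustrative permission sets per tier (hypothetical action names)
PERMISSIONS = {
    TrustTier.INTERN:    {"read"},
    TrustTier.JUNIOR:    {"read", "write_draft"},
    TrustTier.STAFF:     {"read", "write_draft", "write"},
    TrustTier.PRINCIPAL: {"read", "write_draft", "write", "deploy"},
}

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.tier = TrustTier.INTERN  # every agent starts read-only
        self.successful_reviews = 0

    def record_review(self, passed: bool) -> None:
        # Promotion is earned through demonstrated performance;
        # the 3-review threshold is an arbitrary example value
        if passed:
            self.successful_reviews += 1
            if self.successful_reviews % 3 == 0 and self.tier < TrustTier.PRINCIPAL:
                self.tier = TrustTier(self.tier + 1)
        else:
            # A failed review drops the agent back to read-only: fail closed
            self.tier = TrustTier.INTERN
            self.successful_reviews = 0

    def can(self, action: str) -> bool:
        return action in PERMISSIONS[self.tier]

    def incident_severity(self) -> str:
        # Higher autonomy means a bigger blast radius when things go wrong
        return "all-hands" if self.tier == TrustTier.PRINCIPAL else "standard"
```

The useful property is that the incident response posture falls out of the same structure that governs permissions: knowing an agent's tier tells you the blast radius before anything goes wrong.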

The conversation closes on the zero day problem, which Shannon frames as the most structurally urgent challenge facing security teams right now. AI is enabling vulnerability discovery and exploitation at rates that make traditional patch timelines obsolete. By the time a patch exists, the exploit is already in the wild. The only realistic defense, she argues, is behavioral anomaly detection powered by machine learning that does not wait for a signature, a patch, or a human analyst to catch up. She connects this to the broader vision of an agentic SOC, where investigations that currently take hours or days compress into minutes, and where the speed mismatch between attack and response is finally addressed not by hiring more analysts but by deploying agents that can operate at the same velocity as the threat.

 

Resources mentioned in this episode

 

Matthew Connor on LinkedIn
CyberLynx Website
Shannon Brewster on LinkedIn
YipitData Website

 

Sponsor for this episode...

 

This episode is brought to you by CyberLynx.com  

CyberL-Y-N-X.com.

CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service.

The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.

Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. 

To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

 

Check out previous episodes:

Three Weeks to 45 Minutes: What Real AI Adoption Looks Like in Insurance with Barninder Khurana - Ep 209  

Why Your SaaS Vendor's New AI Button May Be Your Biggest Security Risk Right Now with Fletus Poston III - Ep 208  

Why Fighting AI in the Classroom Is the Wrong Battle with Chris Campbell - Ep 207  

 

Transcript: 

 

Guest: Shannon Brewster, CISO, YipitData

Matthew Connor: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Shannon Brewster, CISO at YipitData. Shannon, welcome to the show.

Shannon Brewster: Thanks, Matt. Great to be here.

Matthew Connor: Great to have you. Before we get too far in, a quick word from our sponsors. Hackers are getting smarter — is your security keeping up? CyberLynx sells industry-leading, AI-powered cybersecurity solutions that detect threats in real time, so you know about an attack before the damage is done, not after. Learn more at cyberlynx.com. And now back to our show.

Shannon, for those who aren't familiar, can you tell us about YipitData and your role there as CISO?

Shannon Brewster: Sure. YipitData is essentially a market research firm, really well positioned to leverage a wide range of data assets to provide keen insights and real-time analytics for monitoring company performance. It's pretty cutting-edge in terms of the signals we can provide to anyone interested in competitive analytics or just understanding how specific companies are doing — how their sales are trending, for example.

They have a few different products, including one called Spendhound, which helps IT professionals track their IT spend, do benchmarking against what others are paying for SaaS platforms, and manage contract renewal and renegotiation cycles. I joined as CISO in December 2025 — about three and a half to four months ago — and I'm also responsible for the enterprise IT organization.

Matthew Connor: That's really fascinating, especially the Spendhound piece. Being able to walk into a SaaS renewal knowing what comparable enterprises are paying, having that benchmarking data, and automating the renewal calendar — that's a genuine competitive advantage for IT leaders. A lot of those contracts end up auto-renewing at full price because nobody was watching the clock. Being able to negotiate from an informed position is a game changer for controlling costs.

Now, you're in a really interesting spot when it comes to AI — you're a data company, so I have to imagine AI is central to what you do. Was YipitData building out AI capabilities long before the current wave, or has this been more of a recent push?

Shannon Brewster: You're seeing opportunities all across the spectrum with AI, especially at a company whose core product is data. The ability to truly understand and extract value from everything you have — AI really unlocks potential that simply wasn't possible before. You move from a manual, human-driven process to operating at a scale that was never achievable previously. We're seeing tremendous opportunity in that space.

I came back from RSA this year more energized than ever. One of the clearest takeaways was just how quickly things are moving, and how organizations deploying a security-first approach are positioning themselves to crush the competition. I think from a business perspective, you basically have a choice right now: go all-in on AI or risk being left behind. But the other side of that coin is doing it securely — because insecure, untrustworthy AI is probably just as bad as not engaging with it at all.

Matthew Connor: So you're not recommending enterprises just turn OpenAI loose with no guardrails and call it a day.

Shannon Brewster: No, you need to be more thoughtful than that. And I will say — OpenAI has made some remarkable inroads very quickly. When they first launched their agent capabilities, you saw something like 28,000 skills available within weeks — including malicious ones. That dropped to around 300 to 400, but you're still taking on risk. Opening Pandora's box without intention and control around it isn't wise, regardless of how exciting the technology is.

Matthew Connor: I think that's right. And it is fun to see where things are heading with agentic AI — it's a glimpse of the future. But for enterprise deployment, there's a lot of work that needs to happen before you can responsibly run with it. The skill vetting alone is a serious undertaking, and the appeal of the cutting edge is speed, which makes stopping to do that diligence feel counterintuitive in the moment.

Shannon Brewster: Exactly. And I think it's even bigger than the cybersecurity aspects. We're truly transforming the future of work. We have to start thinking about agents as digital employees. How does that change how we think about the workforce? How people interact with each other? Are we driving the same accountability to a non-human identity that we would to a human one? Are we applying the same rigor to onboarding an agent as we would to onboarding a new employee — a clear job description, clear performance metrics, clear accountability? This is genuinely transformative, and you've got to have every part of the business aligned around a real strategy. There's a people and HR component here that involves technology in a way that I don't think has ever happened before.

Matthew Connor: That's a great point. Did you see the analysis that came out of Microsoft after they rolled out Copilot internally? They did an audit after about three months to see what actual utilization looked like, and they were expecting something like 75 to 85% heavy usage. What they found was closer to 15%. The 85% who dropped off were treating it like a fully trained employee — asking it to do things and moving on when the output wasn't quite right, because the tool didn't already have all their context. The 15% who became power users were treating it exactly like a new employee: giving it clear direction, training it on their preferences, supervising it, and giving feedback. And it kept improving. The big lesson was that the skill you need to get the most out of AI isn't just knowing what to ask — it's a managerial skill, a human leadership skill. How do you mentor and develop this digital team member?

Shannon Brewster: That's exactly right. And I think a lot of people are gravitating toward using agents more like a glorified Google — much more effective than a search engine, but you're barely scratching the surface of what's possible. One of the quotes I really liked at RSA was that constraints on an agent are like rocket fuel. There are lots of reasons for that — agents and LLMs get confused, they hallucinate. The more constraints you put on them — which goes back to having a clear job description and a defined purpose — the better they perform. And if you give an agent access to too much, you're not only creating security risk, you risk confusing it and degrading its ability to deliver accurately. So the security mindset actually helps here: we're not slowing things down by putting guardrails in place, we're designing a system that performs better. And you get to sleep at night too.

Matthew Connor: Hundred percent. And I think we're at a fascinating inflection point when it comes to AI in security specifically. If you look at something like a Tesla in self-driving mode — years ago it felt like a drunk toddler. Now it's more like a competent young adult who just got their license. Pretty solid, but you're not falling asleep in the passenger seat. We're in a similar place with AI in security. Products like Darktrace and Abnormal are already giving us a glimpse of the future — taking the burden of email security off the end user, for example. Someone in accounting is there because they're great at accounting, not because they're trained to examine URLs before clicking. But an AI tool can check the age of a domain, look at the full context of a message, and quietly route it to junk before it even reaches the inbox. Traditional rule-based systems struggle with things like DocuSign-based attacks because the link is technically legitimate — it came from DocuSign. You set a rule and it passes. That's bringing a knife to a gunfight. The bad guys are using AI — their attacks are now agent-powered and moving at significant speed. So we have to meet that with AI on the defensive side.
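One of the signals Matthew mentions, domain age, is simple enough to sketch. This is an illustrative heuristic only: a real system would resolve the registration date via WHOIS/RDAP and weigh many signals together, and the 30-day threshold is an assumption for the example, not an industry standard.

```python
from datetime import date, timedelta

# Newly registered domains are a common phishing signal. A production
# filter would pull the creation date from WHOIS/RDAP; here it is
# passed in directly so the logic stays self-contained.
YOUNG_DOMAIN_DAYS = 30  # illustrative threshold

def is_suspiciously_young(created: date,
                          today: date,
                          threshold_days: int = YOUNG_DOMAIN_DAYS) -> bool:
    """Flag a domain registered within the last `threshold_days` days."""
    return (today - created) < timedelta(days=threshold_days)
```

In practice this would be one weak signal among many (sender reputation, link context, message intent), which is exactly why rule-based filters that check signals in isolation miss attacks riding on legitimate infrastructure.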

Shannon Brewster: Well, I think one of the things that's really clear is that AI is upending a lot of the traditional approaches to cybersecurity — and it's creating an opportunity for the industry as a whole to mature. But we're still struggling with the basics. I champion what I call the 85% rule: there are five CIS controls that mitigate 85% of risk. When you look at how most organizations get compromised, it's typically a failure in one of those five — hardware and software inventory, vulnerability management, privileged account management. Those are foundational, and even in the traditional model they're genuinely hard to get right. If you're entering the AI era with a weak foundation, you're in significantly more trouble.

Beyond the basics, the traditional tools also need to innovate, because AI introduces non-deterministic behavior that our existing security controls weren't designed for. How do you validate input when the system isn't deterministic on either end? Prompt injection is just one example of a category of problem we don't yet have defined standards for. We're in uncharted territory in a lot of ways — the taxonomy isn't even settled. I went to a session at RSA where market research on this space was presented, and there wasn't even consensus on basic terminology. Is it a non-human identity? Is it an agent? When we can't speak from the same vocabulary, building standards and frameworks becomes exponentially harder.

The third thing I'd highlight is the vulnerability discovery rate. We're at a place today where zero-day attacks are still somewhat manageable, but the forecast is for something like a 1,000% increase in newly discovered zero days. Hundreds of potential new vulnerabilities being found every week. Think about what that does to your supply chain — your security team could be stuck in perpetual incident response mode even if you're doing everything right, simply because your third-party vendors aren't. And the traditional approach of waiting for an exploit, waiting for a patch, and then patching it — that's no longer viable. The window between discovery and active exploitation has collapsed. The only logical response is to deploy AI internally for your security operations, because the speed of these attacks has exceeded the speed of human response.

Matthew Connor: And that's exactly where machine learning has been doing great work in security for years — before it was in the zeitgeist. It just wasn't on the boardroom's radar the same way. But the ability to baseline normal network behavior and flag anomalies in real time means that even when a zero-day gets exploited before a patch exists, something abnormal happening in your environment gets caught. You don't wait for the CVE list — you catch the behavior. The network's acting weird, flag it, stop it, call a human to look. That's the real-time defense that traditional rule-based products can't provide.
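The baseline-and-flag approach Matthew describes can be sketched as a toy detector. Real products model many features of network behavior jointly; this reduces the idea to a single metric with a rolling mean and a 3-sigma threshold, both of which are illustrative choices, not anything from the episode.

```python
import statistics

class BehaviorBaseline:
    """Toy behavioral anomaly detector: learn a baseline for one metric
    (e.g. outbound bytes per minute) and flag readings far outside it."""

    def __init__(self, window: int = 100, sigmas: float = 3.0):
        self.window = window      # how much history to keep
        self.sigmas = sigmas      # deviation threshold (illustrative)
        self.history: list[float] = []

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it is anomalous vs the baseline."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(value - mean) > self.sigmas * stdev:
                anomalous = True
        # Only fold normal readings into the baseline, so anomalous
        # traffic can't gradually poison what "normal" looks like
        if not anomalous:
            self.history.append(value)
            self.history = self.history[-self.window:]
        return anomalous
```

The key property is the one Matthew points out: nothing here depends on a CVE or a signature. The detector flags the behavior itself, so a zero-day exploit that changes what the network does gets caught even before a patch exists.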

Shannon Brewster: Exactly. And I think the agentic SOC has enormous potential. The efficiency you can gain — turning what would take hours or days of investigation into minutes — is pretty remarkable. You simply can't hire your way out of this challenge. The volume is too high and the speed is too fast. Platforms like SentinelOne and CrowdStrike are already showing us what AI-integrated security operations look like, and what's coming from a fully agentic SOC is going to be a significant leap beyond that.

Matthew Connor: And it's needed. If they've been inside your network for a year before you catch them — and we've seen that at major organizations with real security budgets — the dwell time alone tells you that something in the detection chain isn't operating at the right speed.

Shannon Brewster: Right. And on the agentic framework side, I really like the work Josh Woodruff has done on the Agentic Trust Framework — he's got a great blog on the Cloud Security Alliance website and a book worth reading. The core idea is applying zero trust principles to agents in a way that's intuitive when you're talking to executives, developers, and engineering teams. You think about it from five dimensions: identity, behavior, data governance, segmentation, and incident response.

You start an agent as a read-only intern — low risk, limited access. As it performs well in that capacity, you can promote it to junior, then staff, then principal. And that progression framework also gives you an incident response reference point. If a principal-level agent goes rogue, you know immediately that's an all-hands situation — a fully autonomous agent with access to critical systems, operating at machine speed. You can't wait 45 minutes to respond. Having that framework in place means you already know the blast radius and the response playbook before anything happens.

Matthew Connor: And that's such a natural extension of zero trust — you're just applying it to a new category of identity. The analogy of not handing a summer intern the corporate card with a $100K limit on day one is perfect. You earn trust incrementally. Build it into the onboarding process.

Shannon Brewster: And it has to be collaborative across the entire business. This transformation touches every function — legal risk, HR risk, engineering risk — and it requires a genuinely shared vision. If you ask someone why their organization is adopting AI and the answer is "so we can have an AI story," that's not a strategy. Every AI deployment should have a real business case and a clear ROI, just like you wouldn't hire someone without a job description and performance metrics. The same discipline applies to digital workers.

Matthew Connor: Well said. And as we develop these frameworks — and I think AI itself will help us get there faster — having a shared vocabulary and a shared responsibility model equivalent to what we built for cloud will be critical. Right now everyone's applying logic and experience and figuring it out in real time. We'll get there.

Shannon, before we wrap up — you're at a data company, on the bleeding edge of a lot of this. What's your sage advice for organizations that aren't as far along? What should people be thinking about right now?

Shannon Brewster: It comes back to zero trust as the foundational lens — it's flexible enough to apply to an agentic framework. The Agentic Trust Framework I mentioned maps cleanly to five areas: identity, behavior, data governance, segmentation, and incident response. Make sure you have a solid foundation in data governance so you know exactly what your agents have access to. Have segmentation in place. Establish baselines for expected agent behavior so you can detect when something crosses a threshold. And absolutely have an incident response plan specifically tuned for AI systems — because these things operate at lightspeed, they can break through guardrails, and when you tell them to accomplish a task, they'll find a way to do it, including elevating their own permissions if that's what it takes. There's no shortage of case studies where things went sideways for exactly that reason.

Progress agents deliberately. Start them as interns, make them prove themselves, and gradually move them to higher levels of autonomy and access — just like you would with any new team member. Don't turn it on and give it full rights on day one. And bring the whole business to the table. Legal, HR, engineering, leadership — everyone needs a seat. The more effectively you can communicate across those teams and build a shared vision of what you're actually trying to accomplish, the better positioned you'll be.

Generative AI tools are a little different — they're productivity tools more than autonomous agents — but the same foundational ideas apply: data governance, entitlements, segmentation. Get those right, and you're building on solid ground.

Matthew Connor: That is sage advice. Shannon, I can't thank you enough for coming on today. This was a fantastic conversation. Before we go, can you tell everyone where they can find out more about you and YipitData?

Shannon Brewster: You can find more about YipitData at yipitdata.com. I'm pretty active on LinkedIn and easy to find there. I'm also on the board of directors for ISC2, so I'm visible in that community as well. Happy to connect.

Matthew Connor: Awesome. Thanks so much, Shannon. Until next time.

Shannon Brewster: Thank you.