Zero Trust, AI, and Security Leadership in Healthcare with William O'Connell

William OConnel

William O'Connell serves as the Information Security Officer at VHC Health, a hospital system based in Arlington, Virginia, just outside Washington, DC. With more than seven years at the organization, O'Connell was brought in to help jump start and mature the healthcare system’s cybersecurity program. His background spans network engineering, firewalls, VPNs, and early infrastructure security, giving him a practitioner’s perspective on how security has evolved from perimeter defense to continuous risk management. Today, his work focuses on balancing patient care, operational access, and modern security controls in one of the most complex and regulated environments in IT.

 

apple
spotify
stitcher
google podcast
Deezer
iheartradio
tunein
partner-share-lg

Here’s a glimpse of what you’ll learn: 

 

  • Why zero trust should be treated as an ongoing strategy rather than a finished project
  • How hospital security mirrors physical access control in real world healthcare settings
  • Where AI adds value in cybersecurity and where it introduces new risks
  • Why agentic AI still requires strong human oversight
  • How CISOs should evaluate AI tools in regulated environments like healthcare
  • The importance of governance and third party risk assessment for AI adoption
  • Why storytelling matters when communicating security metrics to executive leadership

In this episode…

William O'Connell explains that zero trust is often misunderstood as a project with an end date, when in reality it is a guiding security concept that requires continuous improvement. He uses a healthcare analogy to clarify the idea, explaining that hospitals must allow access to many people while still protecting highly sensitive areas. This same principle applies to digital environments where access must be intentional, segmented, and constantly reviewed.

The conversation also explores the role of AI in modern security operations. O'Connell shares how healthcare organizations must carefully assess AI tools to ensure patient data is not exposed or reused in unintended ways. While AI can dramatically improve visibility and response time, he cautions against blindly attaching large language models to every system without understanding the risks, including prompt injection and unintended data exposure.

As the discussion turns to agentic AI, O'Connell highlights both the promise and the concern. Automation can reduce repetitive tasks and improve efficiency, but it also removes traditional learning paths for junior staff and introduces trust challenges when AI is given autonomy. He emphasizes the importance of maintaining a human in the loop and applying zero trust principles even to AI driven systems.

The episode closes with practical leadership insight on reporting and communication. O'Connell stresses that security leaders must translate metrics into stories that resonate with executive teams. Data alone is not enough. Clear narratives tied to business outcomes are what drive understanding, alignment, and investment in cybersecurity initiatives.

 

Resources mentioned in this episode

 

Matthew Connor on LinkedIn
CyberLynx Website
William O'Connel on LinkedIn
VHC Health Website

 

Sponsor for this episode...

 

This episode is brought to you by CyberLynx.com  

CyberL-Y-N-X.com.

CyberLynx is a complete technology solution provider to ensure your business has the most reliable and professional IT service.

The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.

Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensures that clients are 100% satisfied. 

To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

 

Check out other related episodes:

 

Inside a Real World Ransomware Incident and Recovery with Zach Lewis
Where AI Helps, Where It Hurts, and Why Governance Matters with Olivia Phillips
Balancing AI, Privacy, and Risk at a Public University with Malcolm Blow

Transcript: 

 

Cyber Business Podcast – William O'Connell, Information Security Officer at VHC Health


Matthew: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Bill O'Connell, Information Security Officer at VHC Health. Bill, welcome to the show.

Bill: Thank you for having me.

Matthew: Thanks for coming on. Before we get too far in, a quick word from our sponsors.

[SPONSOR READ: This episode is brought to you by CyberLynx.com. Do you know if a hacker is in your system? Most people and most companies don't — until it's too late and the hacker has already done damage. A hacker's job is to bypass your security, so companies need a way of knowing when someone has gotten past their defenses. That's where CyberLynx comes in. We've partnered with the best cybersecurity companies in the world to provide our clients with the best solutions at the best prices — whether it's managed SIEM, SOC, EDR, MDR, or XDR. We'll help you find the right solution at the right price. Find out more at CyberLynx.com.]

And now back to our show. Bill, for those who aren't familiar, can you tell us about VHC Health and your role there as Information Security Officer?

Bill: Sure. VHC Health is a hospital system in Arlington, VA, just outside of DC. I've been with the organization for about seven years now. I'm originally from Chicago — they brought me in to help jump-start their cybersecurity program.

Matthew: Very nice. We're here in Bethesda, so you're not far at all. Now, one of the things I find really interesting and important right now — beyond AI — is zero trust. I think it's not understood well enough, and there's a lot of confusion around it with all the products and solutions that claim to offer it. Can you walk us through what it actually is and how you communicate it to stakeholders?

Bill: Sure. Zero trust isn't so much a specific project you're going to complete — it's more of a concept. There are a lot of different frameworks that will help you get there, and they share some common elements: network segmentation, micro-segmentation where needed, various access management controls, and so on. But one thing I've seen consistently is that stakeholders think of it as a project. "We're doing zero trust — when are you going to be finished?" And the honest answer is: we're really never going to be finished. We're always going to be looking for areas to improve.

One of the best analogies I've heard — especially relevant in healthcare — is the hospital itself. People can walk into the front desk, walk into a doctor's office. You don't want them walking into a surgeon's room or an operating room. That's the concept behind zero trust: making sure the people who need access can get it, while restricting access to places they shouldn't be, and putting those boundaries in place.

Matthew: I really like that analogy — I haven't heard it put that way before, and it's so spot on. Everyone gets the hospital reference. And applying it to the digital world makes sense because too often people expect security to reach a finished state. You can't just be "cybersecure" — that's not a checkbox you tick and walk away from. You're always working on those controls. So that's a great framing.

Now, we've had probably over 100 CIOs, CTOs, and CISOs on the show, and no two have had the same entry into tech. What got you interested in technology in the first place, and how did you get from there to here?

Bill: It was a long time ago. I was actually a pressman working old printing presses for an organization that produced phone cards and stored value cards. Honestly, I was pretty bored — it was a good job, but I needed something more exciting. That was right around the time Microsoft started rolling out their MCSE certification, back in the Windows NT 4.0 days. So I went back to school for that, and when an opening came up in the IT department, I moved over. I started out in network support — firewalls, VPNs — working with clients like AT&T and Kmart on the phone card side. Back then, those cards weren't activated at the checkout. They were live. You'd have a cradle of phone cards worth hundreds of thousands of dollars sitting there. So it was an interesting environment to learn security in.

Matthew: It really is incredible how far we've come since the Windows NT days. Even back then, getting a computer to recognize an image was practically science fiction. Creating any kind of graphic was cumbersome. And now we stand on the shoulders of those technology giants and people are using AI to generate images, video, software — it's remarkable. And I think one of the most interesting topics right now is AI in security. One of my new favorite things — if I'm being Oprah about it — is Darktrace. For those unfamiliar, instead of bolting an LLM onto an existing email security product, they use self-learning machine learning to understand how you write, who you communicate with, and to flag URLs that haven't been reported as malicious yet but clearly look bad. That's a glimpse into the future of security. But from your perspective, living in the "now" — where there are tons of products bolting LLMs onto things and introducing prompt injection vulnerabilities in the process — how do you balance that? What's your take on AI and security going forward?

Bill: From a security standpoint, we run third-party risk assessments on any new tools. Everyone is claiming to have AI in their product now, and some carry more risk than others. Being in healthcare, one of the biggest concerns is patient data being used in the learning algorithms for these AI systems — where that data goes, whether it's been properly scrubbed or anonymized. I read an article about six months ago that said even anonymized data, once fed into an AI system, can potentially be reconstructed by AI agents that correlate it with other available data. So you're in this cycle where you're trying to protect data from something that's genuinely useful to you, while that same usefulness can expose the data if misused.

Matthew: That's a real challenge. And I suppose the answer is good governance — asking the right questions about where the data goes, whether the AI is learning from your emails and potentially sharing that with others. Where do you see AI heading in security from your vantage point?

Bill: What I find genuinely exciting is agentic AI — using AI as a resource in a SOC or other security context. That said, it's a double-edged sword. On one hand, it can automate a lot of basic, repetitive tasks. On the other hand, those tasks are often the ones you assign to new team members so they can learn. Agentic AI risks cutting out that entry-level development path for a lot of people.

As for trusting AI agents with autonomy — I'm a little apprehensive. I love technology and I'm always right there on the leading edge. I think embracing technology rather than fighting it or giving up is really important. But when it comes to giving AI agents real agency to take action, I still feel compelled to check the work very carefully. It's not like onboarding a new employee where you assign them entry-level tasks, check in periodically, and feel reasonably comfortable. I don't feel that same comfort level with agentic AI yet. That said, having a human in the loop and letting the AI handle certain things within defined boundaries — how much autonomy you extend is really a decision based on how comfortable you are with the underlying platform and how well secured it is.

And it ties right back to zero trust. How do you make sure a bad actor can't get to that platform and compromise it? If you connect an LLM to your calendar so it can schedule events, now it has access to your calendar — and potentially your email. A bad actor who schedules something with you could prompt inject their way into reading your data, disclosing information, or worse. That's the one we know about. The bad guys only have to be right once while we have to be right every time. So I'm more reluctant to let AI interact autonomously with outside parties until we have better guardrails in place.

I actually read about an email vulnerability where an attacker could embed an AI prompt inside an email. An email agent reading and summarizing your inbox would see the prompt and execute it — downloading an installation file, opening a backdoor. That kind of thing is pretty concerning. So for me, it comes back to the right technology for the right use case. LLMs are fantastic for drafting an email, summarizing a 100-page document, giving you the CliffsNotes. That makes enormous sense. But slapping an LLM onto everything is the scary future. And we're not at general intelligence — not even close. I'm not fully convinced we'll get there, because we don't even fully understand how intelligence works in the first place. Maybe it happens through enough iterations and we stumble upon it. Maybe not. But in the meantime, the LLM is not the end-all-be-all, and the question in security is always: what is the right technology for this specific use case?

Matthew: And you can't lock everything down and say no indefinitely. People are going to use these tools whether you sanction them or not — I'd rather control and monitor what they're using than have them go around me. Security used to be the department of no. We've learned as an industry to say yes more — yes, you can do that, and here's how we keep it safe and implement it correctly.

I think going back to Darktrace — and yes, I sound like a fanboy too — for OT security in hospitals especially, it's a great example of AI used in the right way. A surgeon brings in a device that needs to connect to the network to feed into the EMR. The moment it connects, you've got a foreign device with potential access to a lot of sensitive information. Being able to see that immediately and get alerted — "why is this device in the operating room suddenly accessing all this data in the EMR?" — is invaluable. And if you're comfortable with it, letting that AI step in and control or cut off access. With obvious caveats, of course — you don't want to cut off a robot surgeon mid-procedure.

Bill: Exactly. And CrowdStrike does something I really like on the AI side as well — what they call Attack Path. It allows AI to present you with an attack chain that might involve several low-level vulnerabilities you wouldn't individually consider urgent. But chained together in the right sequence, they can lead a threat actor straight to Active Directory or to electronic health records very quickly. That shines a light on concerning situations that traditional tools might miss entirely.

Matthew: And SentinelOne has a similar capability — Purple AI — where instead of just getting a raw security alert about some software doing something unusual on a workstation, the AI translates the entire event into plain English for the technician: here's what this is, here's why it's suspicious, here's what it appears to be doing. Something like that could have caught SolarWinds early. Traditional security software was bound to trust a SolarWinds certificate — it was legitimate, so it passed. But it was doing bad things behind that legitimacy. AI looking at behavior rather than signatures would have flagged the anomaly.

Bill: Exactly right. And the broader point is that AI disruptions in history — fabric automation, the automobile — each solved one specific problem or replaced one specific function. AI is different. It can affect many industries and many job functions simultaneously. Electronic switching eliminated telephone operators. AI can do that and dozens of other things at the same time.

Matthew: And that's what scares people. But as we've seen with every prior technology, it tends to create far more jobs than it destroys. The microwave didn't replace the stove. The Internet created an entirely new economy. AI will do the same — times 100. The lower-level, easy-to-automate tasks will absolutely be affected. But for the people who continuously learn, re-skill, and learn to use AI to make themselves better at what they do — they're the ones who benefit. It's a choice people will have to make.

Bill: Agreed. And one thing I think is really valuable as a skill in this environment — and something AI can actually help with — is analysis. The ability to dig deeper rather than stop at the surface-level explanation.

Matthew: Absolutely. How do you help your team build those analytical skills?

Bill: One thing I push at all levels: when you're looking at a problem, don't stop too early. So many people get to a couple of symptoms and call it the root cause. If you had a power surge that took out a server and nobody noticed for a week, and your root cause analysis just says "power surge" — that doesn't give you anything you can fix. The real problem is that it wasn't detected. You have to keep digging.

The framework I use with my team is the Five Whys. You just keep asking why. People couldn't log into Active Directory — why? Because the time was out of sync between the AD server and the workstations — why? Because the network time service crashed — why? Because of a power surge — why wasn't it caught for a week? Because there was no monitoring in place. Now you've found something you can actually fix: implement monitoring. That prevents the problem from recurring, or at least reduces the likelihood significantly.

The concept is typically attributed to Toyota — I believe it originated with an oil leak on a machine. The bearings kept going out and the machine kept having to be shut down. Someone kept asking why until they discovered that the root cause had nothing to do with the bearings at all — it was a vibration issue elsewhere in the system. Fix that, and the bearings last five times as long. It's just not settling for the easy explanation you can write in a report and hand in.

Matthew: And I think that ties directly into reporting. Knowing how to present your findings to senior leadership is a real skill — and it connects to the Five Whys pretty naturally. What's your approach to that?

Bill: Whenever you're reporting or presenting, the most important thing is to know your audience. Senior leadership doesn't want dry numbers thrown at them. Put things in charts, make it visual, make it meaningful — but more importantly, make it a story. If you need to use numbers, build a narrative around them. "We reduced vulnerabilities by 50% by doing X and Y" — connect it, explain why it matters, show what it means for the business. Without the story, you lose your audience.

At the end of the day, everything comes down to storytelling and, in a sense, sales. You're always selling something — an idea, a budget request, a direction. And you don't sell with stats alone, especially not to non-technical leadership. You sell with a story that either moves them through concern or through excitement. The key is it has to be tied to an outcome that has a real effect on the business. If you can't make that connection, it won't land — no matter how accurate the data is. And the most effective story isn't the one you want to tell. It's the one your audience wants to hear.

Matthew: That is just about as well said as it gets. Bill, I can't thank you enough for coming on today. Before we go, can you tell everyone where they can find out more about you and about VHC Health?

Bill: For VHC Health, you can visit vhchealth.org — there's a lot of exciting work happening there. And you can find me on LinkedIn — William O'Connell, or linkedin.com/in/williamoconnell1 if you want the direct URL.

Matthew: Fantastic. Bill, until next time.

Bill: Appreciate it.



 

Check out other related episodes:

Read On

Securing AI and Modernizing Care Delivery in Long Term Facilities with Vikas Sachdeva

Securing AI and Modernizing Care Delivery in Long Term Facilities with Vikas Sachdeva

Vikas Sachdeva serves as Chief Information Officer at HealthDrive Corporation, a healthcare...

Read more
How AffirmedRX Is Using Technology to Fix a Broken Healthcare System with Laurel Cipriani - Ep 201

How AffirmedRX Is Using Technology to Fix a Broken Healthcare System with Laurel Cipriani - Ep 201

Laurel Cipriani is the Chief Information Officer at AffirmedRX, a transparent pharmacy benefits...

Read more
Teaching Artificial Intelligence the Right Way with Dr. Sergio Sanchez

Teaching Artificial Intelligence the Right Way with Dr. Sergio Sanchez

Dr. Sergio Sanchez, CIO and CSO at Coleman Health Services, returns to The Cyber Business Podcast...

Read more