Scott opens with something that does not come up often enough in these conversations: the emotional dimension of the work. He chose to come into healthcare specifically because he does not want attackers picking on sick people. The framing is simple and it is genuine. Hackers are bullies. Hospitals are targets. People have died because of cyberattacks on healthcare facilities, and he intends to be in the way. That motivation runs underneath everything else he says in this episode and gives his technical arguments a weight that purely strategic conversations rarely carry. He also brings something most CISOs do not: a decade in military intelligence and direct experience working alongside the FBI, Department of Defense, and Department of Commerce. He does not just understand how defenders think. He understands how attackers think, which is a different skill entirely and one he applies every day at AnMed.
The most practically useful section of this episode is Scott's argument about what the security community owes each other after a breach. He is direct: the stigma around disclosure is helping the attackers. When an organization gets hit and goes quiet to manage the reputational damage, it withholds exactly the information that could allow every other organization to close the same door before the attackers find it. Scott's position is not that organizations should be reckless with sensitive information. It is that the focus of disclosure has to shift from what was exposed to how it happened and what others should do right now to protect themselves. He makes a pointed analogy to community resilience more broadly, drawing on a personal story about a neighbor who pulled a truck off him without stopping to weigh the legal liability. That instinct to help rather than hesitate is what he wants to see from the security community.
Scott closes with the AI argument that most vendors are not making loudly enough because it is uncomfortable for them: the danger is not just that AI can be weaponized by attackers, it is that over-reliance on AI erodes the critical thinking that defenders need most when things go wrong. He uses his own SOC as a concrete example. When he introduced an AI-powered email security product, he did not let it run silently. He showed his analysts exactly what the tool was flagging and why, teaching them to think the same way so that the tool was developing their judgment rather than replacing it. That is the model he argues the industry needs to internalize before AI becomes a liability masquerading as a defense.
Resources mentioned in this episode
Matthew Connor on LinkedIn
CyberLynx Website
Scott Dickinson on LinkedIn
AnMed Health LinkedIn
This episode is brought to you by CyberLynx.com. That's CyberL-Y-N-X.com.
CyberLynx is a complete technology solution provider, ensuring your business has the most reliable and professional IT service.
The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.
Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.
To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.
Guest: Scott Dickinson, CISO, AnMed Health
Matthew Connor: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Scott Dickinson, CISO at AnMed Health. Scott, welcome to the show.
Scott Dickinson: Thank you. Thanks for having me, Matthew.
Matthew Connor: Thanks for being on. Before we get too far in, a quick word from our sponsors. Hackers are getting smarter — is your security keeping up? CyberLynx sells industry-leading, AI-powered cybersecurity solutions that detect threats in real time, so you know about an attack before the damage is done, not after. Learn more at cyberlynx.com. And now back to our show.
Scott, for those who aren't familiar, can you tell us about AnMed Health and your role there as CISO?
Scott Dickinson: One of the things I love about AnMed is that I get to be their first-ever CISO — though it's not my first CISO role, so I've been through this rodeo before. You get to come in and build security almost from scratch. Not that they didn't have security before — they had an Information Security Officer who was doing good work. But they recognized they needed someone to lead a cybersecurity department and take their maturity to the next level, and that's why they brought me in.
AnMed is a not-for-profit hospital system in Anderson, SC. We have three main hospitals, we just opened our first freestanding 24/7 emergency department, and we're opening another in 2027. We're growing and we're very focused on serving our community and patients.
Part of what brought me to this role is my background — I've worked with the FBI twice, the Department of Commerce, the Department of Defense, and various state agencies. I know a lot about the threat actors and what's out there. To me, hackers and attackers are bullies, and I don't want bullies picking on sick people who are trying to get better. That mission resonates deeply with me.
Matthew Connor: I feel exactly the same way. And cyber criminals don't just target the weak, but they certainly target the vulnerable — the elderly population, healthcare organizations. People have died because of cyberattacks on hospitals. What you're doing is of paramount importance. And the threat is evolving faster than it ever has, largely because of AI. What's your perspective on AI's role in cybersecurity right now?
Scott Dickinson: The way I think about it: humans are already a form of AI. We take everything we've learned throughout our lives, all the stories and experiences and information we've absorbed, and we use it to make judgments going forward. We're just not as good at it as computers are, because computers don't forget. They can scan enormous amounts of data very quickly and identify patterns at a scale no human can match. AI is essentially humans at a magnified scale with better memory.
Scott Dickinson: I was in military intelligence for about ten years, and one of my favorite aspects was working with the cyber teams — though we didn't even call it "cyber" back then. Even back in the early 2000s, there were over 10,000 attempted intrusions per day on classified systems — and that wasn't even the most classified tier. I can't imagine what those numbers look like today. Bringing that experience to a healthcare environment is what these organizations need right now.
Matthew Connor: Let's talk about where that sits today. When I look at the trajectory of cybersecurity over the last twenty or thirty years, it's almost unrecognizable. And then you get products like Darktrace, which I think gives us a real glimpse into the future — using machine learning properly for security rather than just bolting an LLM onto an existing product. The machine learning approach makes sense for email security, for instance, because it understands what normal looks like for a specific user. Suddenly Scott is sending emails at 3 AM and he's never done that — flag it. That's the kind of intelligence that's hard to fake and hard to evade. And I think we'll eventually see that running locally on devices, monitoring everything in real time, telling you when a phone call is a scam before you've finished answering it. Where do you see that playing out in healthcare specifically?
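To make the behavioral-baselining idea concrete, here is a minimal sketch in Python of a per-user "normal sending hours" model of the kind Matthew describes. The class name, history minimum, and 1% threshold are illustrative assumptions, not how Darktrace or any particular product actually works.

```python
from collections import Counter
from datetime import datetime

class SendTimeBaseline:
    """Toy per-user baseline: learn which hours a user normally sends email,
    then flag sends in hours that user has rarely or never used."""

    def __init__(self, min_history: int = 50):
        self.hour_counts: Counter = Counter()
        self.min_history = min_history

    def observe(self, sent_at: datetime) -> None:
        # Record one legitimate outbound email for this user.
        self.hour_counts[sent_at.hour] += 1

    def is_anomalous(self, sent_at: datetime) -> bool:
        # With too little history, refuse to judge rather than guess.
        total = sum(self.hour_counts.values())
        if total < self.min_history:
            return False
        # Flag hours that account for less than 1% of past activity,
        # e.g. a 3 AM send from someone who only ever mails 8 AM to 6 PM.
        return self.hour_counts[sent_at.hour] / total < 0.01
```

A user with thousands of observed daytime sends and none overnight would trip the check on a 3 AM message, which is exactly the "Scott has never done that" signal Matthew is pointing at.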
Scott Dickinson: We have to be careful about AI on both sides of the equation. AI is genuinely helpful — and that means the criminals will look at it and ask the same question: how can I use this? They're going to find ways to manipulate it in ways we didn't intend.
I saw a presentation recently about Ring doorbells. Someone proposed linking neighborhood cameras together so they could detect a lost dog — sounds great. But an attacker could use that same capability to identify which houses don't have dogs, and therefore which houses might be easier to target. Or use it to request access to a specific camera under a false pretense. The technology wasn't designed for those uses, but that doesn't matter.
So as we bring AI in, we have to be proactively thinking: how could this be misused? What safeguards do we need?
That said, I'm very excited about what AI can do for healthcare. Pattern recognition is one of AI's great strengths. Looking at cancer cells, for instance — if we know how malignant cells behave, AI can flag anything operating in that pattern for closer examination. The potential is real and significant. We just have to build in the right safeguards alongside it.
Matthew Connor: And that connects directly to what Anthropic is doing with their upcoming Claude capabilities around vulnerability discovery — finding bugs that have been sitting in systems for twenty-plus years, undetected. They've been wise enough to partner with major tech companies first, giving them advance access to find and patch those vulnerabilities before releasing the capability more broadly. That's exactly the dual-use challenge you're describing. The same tool that lets good actors harden systems will eventually find its way to bad actors too. But having something that can do thorough, intelligent penetration testing at scale — work that currently costs five or six figures with no guarantee of finding anything — changes the calculus for organizations that could never afford that level of testing before.
Scott Dickinson: Absolutely. And you raise an important point about the announcement itself — when you publicly name the 40 organizations getting early access, you've just given bad actors a roadmap. Don't bother trying to hack the source; go find the lowest-paid person at the smallest of those 40 organizations and target them. There's enough money in cybercrime now that you can simply buy your way in. Six years ago, making serious money as a cybercriminal required real technical skill. Today, you can rent the exploits, rent the ransomware, rent the whole toolkit. It's software-as-a-service for bad guys, and the barrier to entry is almost nothing.
And it's not just sophisticated hackers I think about. The basic scammers cost us just as much. I got one of those texts recently — "Your Apple Pay was just charged $124" — coming from a Gmail account. Apple doesn't use Gmail. But a lot of people see something that looks official and react before they think. The elderly are especially vulnerable because they were raised to trust who's on the other end of a phone call, and to be polite enough not to hang up. Those instincts get weaponized against them. You provide just enough believable information to get past their defenses, and before they realize what's happened, they've been separated from their savings. I think that's deplorable.
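The Gmail-versus-Apple tell Scott mentions reduces to one check: does the sender's domain belong to the brand the message invokes? A minimal sketch, where the function name and the brand-to-domain mapping are illustrative assumptions:

```python
# Illustrative mapping of impersonated brands to domains they really send from.
LEGITIMATE_DOMAINS = {"apple": {"apple.com", "email.apple.com"}}

def brand_sender_mismatch(claimed_brand: str, sender_address: str) -> bool:
    """True if a message invoking `claimed_brand` (say, an Apple Pay charge
    alert) arrives from a domain that brand never sends from, like gmail.com."""
    sender_domain = sender_address.rsplit("@", 1)[-1].lower()
    allowed = LEGITIMATE_DOMAINS.get(claimed_brand.lower(), set())
    return not any(
        sender_domain == d or sender_domain.endswith("." + d) for d in allowed
    )

# brand_sender_mismatch("apple", "billing-alerts@gmail.com") -> True
```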
Matthew Connor: It really is. And social engineering at the enterprise level isn't much different — even well-funded, security-conscious organizations like MGM get taken down that way. If a team of professionals can social-engineer MGM, what does grandma have to work with? Which is exactly why I think the AI-assisted tools that can catch these in real time — the phone call that says "I think this is a scam, you should hang up" — are so important and so close.
Scott Dickinson: And when something like the MGM breach happens, that should be a prompt for every organization to look at their own environment. What would we have done if that had been us? We immediately review our help desk procedures after any major public incident. Our policy now is that a password reset and an MFA reset can't both happen within a 48-hour window without the user proving their identity in person. That's a direct lesson from that type of attack.
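A rough sketch of how that 48-hour pairing rule could be enforced in help desk tooling follows; the event names, data shapes, and escalation flow are assumptions for illustration, not AnMed's actual implementation.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=48)

def requires_in_person_verification(
    requested: str,       # "password_reset" or "mfa_reset"
    recent_events: list,  # (event_type, timestamp) pairs for this user
    now: datetime,
) -> bool:
    """True if granting `requested` would pair a password reset and an MFA
    reset inside the 48-hour window, which policy only allows after the
    user proves their identity in person."""
    other = "mfa_reset" if requested == "password_reset" else "password_reset"
    return any(kind == other and now - when <= WINDOW
               for kind, when in recent_events)
```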
What I'd love to see — and I'll step on this soapbox — is more openness in the community about how breaches actually happen. When an organization gets hit, instead of going quiet out of shame, I wish they'd say: here's the misconfiguration we had in our Office 365 environment, go check yours right now. By staying silent you're protecting the attackers, not yourself. The attackers are already bragging about it. The breach will get out. The only question is whether the community benefits from knowing exactly how it happened.
Matthew Connor: I think the stigma around breaches is genuinely shifting. Ten years ago, getting breached meant you dropped the ball, full stop. Now the reality is that it's not a matter of if but when, and organizations are starting to evaluate candidates on how they performed under breach conditions rather than whether a breach ever happened.
Scott Dickinson: Exactly. I've heard colleagues say they were actually asked in interviews whether they'd been through a breach — because the organization wanted to know if they had the composure to function when everything is on fire. I'll share my own experience: the FBI once called to let me know a bad actor had gotten into our systems, that they'd caught him and had our data. One of my firewall admins at the time was physically sick with anxiety. My response was: stop. We don't know how bad it is yet. Do you know how bad it is? No. So let's find out before we react. We got in, assessed it, found they'd copied a router configuration — serious, but manageable. We wiped it, rebuilt it from scratch, and moved on. They'd been in and out the same day, and we didn't find out for over a year. But by that point, whatever happened had already happened. The only question is what you do next.
That composure comes from resilience, and resilience comes from doing hard things. I build cars, I weld, I've rewired and replumbed and rebuilt houses. Hands-on, problem-solving work builds the kind of confidence that says: whatever breaks, I can fix it. That mindset translates directly into how you handle a breach.
Matthew Connor: And you can't know how you'll react until you're in it. Some people find that under pressure they get calm and focused. Others find the opposite. You don't know until the bullets are flying. Which is why tabletop exercises, incident response planning, and learning from other organizations' breaches are so valuable — they at least give people a rehearsal.
On the product side — with AI-powered security products proliferating at an almost comical rate, how do you cut through the noise as a CISO? Every vendor has AI in it now. How do you decide what's worth looking at?
Scott Dickinson: I love innovative companies and I love innovative technology. But Lou Holtz made a point that's always stuck with me — he was asked whether it's harder to get to the top or stay there, and he said staying on top is harder. When you're climbing, everyone is hungry and motivated. Once you're at the top of the Gartner Magic Quadrant, some organizations start resting on their laurels. The hungry challengers below them don't. So I don't automatically assume a Magic Quadrant leader is the best solution for every problem.
That said, the bigger concern I have with AI in security is that we don't lose the critical thinking piece. I used to use Nessus with my SOC analysts and walk them through what it was flagging and why. A machine might report an informational finding — blue, not urgent — but if one machine has 35 listeners and another comparable machine has 135, that difference matters. Blue doesn't mean ignore it. It means understand it.
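Scott's listener comparison translates naturally into a peer-baseline check. A minimal sketch, assuming the counts have already been parsed out of scanner output such as Nessus results; the 3x threshold is illustrative:

```python
from statistics import median

def flag_listener_outliers(listener_counts: dict, ratio: float = 3.0) -> list:
    """Flag hosts whose open-listener count is far above the peer median.

    `listener_counts` maps hostname -> number of listening services. Every
    individual finding may be 'informational' (blue), but a host running 135
    listeners among peers that run ~35 deserves a closer look."""
    baseline = median(listener_counts.values())
    return [host for host, n in listener_counts.items() if n > baseline * ratio]

# flag_listener_outliers({"web01": 35, "web02": 38, "web03": 135}) -> ["web03"]
```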
Recently I brought in an email security product with AI built in, and when it flagged something, I walked my SOC analyst through its reasoning: new domain, registered in the last 24 hours, sender never seen before, combination of factors that don't add up. I wanted them to understand the why so they could apply that same logic themselves — not just let the tool do the thinking. If you stop building that critical thinking muscle, you become dependent in the same way people stopped remembering phone numbers once smartphones existed. The day something goes wrong with the tool, you're helpless.
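The reasoning Scott walked his analyst through is, in effect, additive scoring over weak signals. A hedged sketch of that idea; the weights and threshold are illustrative, not the vendor's actual model:

```python
from datetime import datetime, timedelta

def email_risk_score(domain_registered: datetime,
                     sender_seen_before: bool,
                     now: datetime) -> int:
    """Each signal alone is ignorable; together they justify quarantining."""
    score = 0
    if now - domain_registered < timedelta(hours=24):
        score += 2  # domain registered within the last 24 hours
    if not sender_seen_before:
        score += 1  # first contact from this sender
    return score    # e.g., quarantine when score >= 3 (both signals present)
```

The point of walking an analyst through output like this is that the same checklist works by hand: domain age, sender history, and how the factors stack up.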
The Microsoft Copilot study is instructive here. They expected 85% power users internally after 90 days and found it was the opposite — only about 15% were still using it heavily. The difference wasn't technical skill. The 15% were treating it like a new employee: here's who we are, here's what this task requires, here's the context you need. They brought leadership and management skills to it. The 85% expected it to just know and perform, and when it didn't, they moved on. If we use AI well — as a team member we're actively developing, not a magic button — we build stronger organizations. If we use it badly, we atrophy.
And from a security standpoint, full trust in AI creates its own attack surface. If everyone in your organization is relying heavily on a particular LLM, poisoning that LLM becomes one of the highest-leverage attacks possible. I always think about how a criminal would use whatever system I'm relying on. The GPS manipulation example is a good one: the same crowdsourced traffic data that helpfully routes you around a jam can be fed false signals to route traffic wherever someone wants it. The criminals are always looking for the angle we didn't think of.
Matthew Connor: And they're endlessly creative about it — using what we consider normal behavior as the attack vector. The GPS traffic manipulation with a bunch of phones in a cart slowly walking down a street is a perfect example. Real, simple, already happening. Which is why talking about the criminal mindset — as you do when you speak at conferences — is so valuable. Most defenders think defensively. The attackers think offensively, creatively, and with no rules.
Scott, this has been an absolute blast. Before we go, can you tell everyone where they can find out more about you and AnMed Health?
Scott Dickinson: I'm on LinkedIn, and AnMed has its own website as well. If you're in the upstate South Carolina area and you need medical attention, our new freestanding 24/7 emergency department is right off I-85 — just follow the signs. And I speak at conferences regularly, so follow my LinkedIn and you'll see where I'll be.
Matthew Connor: Awesome. Thanks, Scott. Until next time.
Scott Dickinson: Thank you, Connor.