The Human Side of Cybersecurity Leadership with Kara Schlageter
Kara Schlageter is a cybersecurity executive whose career bridges human resources, technology, and security leadership. Formerly Deputy CISO at First Citizens Bank, she brings a rare perspective shaped by early consulting experience, large-scale transformation work at Bank of America, and deep exposure to identity and access management. Her path into cybersecurity began not with firewalls or endpoints, but with people, culture, and organizational change. Today, Kara is known for advocating a human-centered approach to cybersecurity that treats leadership, empathy, and ethics as core security controls.
Here’s a glimpse of what you’ll learn:
- Why cybersecurity failures are driven more by people than by technology
- How an HR background can strengthen security leadership
- Why culture and empathy are critical security enablers
- How AI should complement human judgment rather than replace it
- The ethical risks of AI adoption without governance
- Why risk tolerance and values must guide technology decisions
- How leadership roles like the CISO are evolving beyond technical expertise
In this episode…
Kara Schlageter explains why cybersecurity must be demystified and understood as a human problem first. She challenges the common perception that security is primarily about tools, arguing instead that breaches happen because of human behavior, incentives, and culture. Her background in HR allows her to view cybersecurity through the lens of motivation, trust, and organizational design rather than purely technical controls.
She shares how her career evolved through consulting, identity and access management, and large-scale transformation at Bank of America. While helping organizations grow rapidly, Kara learned that hiring decisions, culture, and leadership alignment matter as much as technical skill. That experience shaped her belief that understanding people is a force multiplier in cybersecurity.
The conversation also explores AI and its growing role in both security and leadership. Kara emphasizes that AI is a powerful tool, but one that must be governed carefully. She stresses the importance of transparency, ethical use, and intentional guardrails, especially as organizations rush to adopt AI-driven capabilities without fully understanding the long-term risk.
As the discussion turns toward leadership, Kara outlines how the CISO role is changing. Modern security leaders must communicate risk in business terms, define culture, and align technology decisions with organizational values. Technical expertise still matters, but it is no longer sufficient on its own. The future of cybersecurity leadership belongs to those who can balance innovation with humanity.
Resources mentioned in this episode
Matthew Connor on LinkedIn
CyberLynx Website
Kara Schlageter on LinkedIn
Sponsor for this episode...
This episode is brought to you by CyberLynx.com (that's CyberL-Y-N-X.com).
CyberLynx is a complete technology solution provider to ensure your business has the most reliable and professional IT service.
The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.
Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.
To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.
Check out other related episodes:
Inside a Real World Ransomware Incident and Recovery with Zach Lewis
Where AI Helps, Where It Hurts, and Why Governance Matters with Olivia Phillips
Balancing AI, Privacy, and Risk at a Public University with Malcolm Blow
Transcript:
Cyber Business Podcast – Kara Schlageter, Cybersecurity Executive & Former Deputy CISO at First Citizens Bank
Matthew: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Kara Schlageter, cybersecurity executive and former Deputy CISO at First Citizens Bank. Kara, welcome to the show.
Kara: Thank you, Matthew. I'm glad to be here.
Matthew: Thanks for joining us. Before we get too far in, a quick word from our sponsors.
[SPONSOR READ: This episode is brought to you by CyberLynx.com. Do you know if a hacker is in your system? Most people and most companies don't — until it's too late and the hacker has already done damage. A hacker's job is to bypass your security, so companies need a way of knowing when someone has gotten past their defenses. That's where CyberLynx comes in. We've partnered with the best cybersecurity companies in the world to provide our clients with the best solutions at the best prices — whether it's managed SIEM, SOC, EDR, MDR, or XDR. We'll help you find the right solution at the right price. Find out more at CyberLynx.com.]
And now back to our show. Kara, as a cybersecurity executive, let's talk about the human side of cyber. People tend to think of cybersecurity and picture firewalls and technology — but they don't often think about humans. And I think that's a mistake. Can you share your thoughts on that?
Kara: Yeah. I think it's absolutely a mistake, and I feel very passionately about it. I think it's important to demystify cybersecurity. People have this image of us in a dark room, hacking away at our computers. I tell people: I'm a cybersecurity leader, I've been in cybersecurity since 2017 — but I didn't start there. I started in human resources. People ask me, "How in the world does a former HR leader end up in cybersecurity?" And my response is: it's not the technology that causes the breach, it's the people behind it. If you know and understand human capital, you can drive culture, drive change, and protect your data and information far more effectively than if you just focus on tools and technology.
Matthew: That is a really great point. We've seen time and time again that the weak link in security is the human at the keyboard — not the endpoint, not the firewall, not the applications. Those are actually harder targets than the human. I'm fascinated by the HR-to-cyber transition. We've had well over 100 guests on this show and I don't think any two have had the same path — and this is a first. What was the catalyst? What inspired the move from HR to cyber?
Kara: I was fortunate to start my career in consulting right out of university — I worked for one of the Big 4 firms. I loved that work. I think I was born into my career through consulting, and I still have a very consultative, strategic advisory nature in everything I do, even working in industry now.
Somewhere along the way, after leaving my consulting firm, I went to a small identity and access management consulting company. I didn't know anything about IAM — I didn't even know what those three words meant. I was hired not to do the technical consulting work, but to help the company build and grow from the inside out. The two founders were excellent at winning new business and knew their product, but they didn't have as much experience building a rapidly scaling organization in terms of talent, people, and structure. While they were building the business from the outside in, I was building it from the inside out. In roughly a year and a half, we grew from about 50 employees to just under 300. When you have that kind of growth, you have to be extremely careful about who you're hiring. It's not just about skills — it's about culture, personalities, and really understanding the environment and technology the company is selling. So I learned a lot more about IAM than I ever probably wanted to.
Fast forward: I spent 15 years at Bank of America, starting in HR doing large-scale transformational change — this was during the era when Bank of America was acquiring LaSalle, Countrywide, and Merrill Lynch. Massive integrations, bringing HR systems together. Once those wrapped up, the question became: what's next? I love change. I love building and implementing things. My boss came to me and said, "Kara, I want you to take ownership of the end-to-end online HR employee experience." I'm sure I looked at him like a deer in headlights. But a year later, a tool called HR Connect was born. My team conceptualized and built it internally — it's the front end to Workday at Bank of America, and I believe they still use it today. Every time they've evaluated replacing it, HR Connect still fits their needs. I'm very proud of that.
But after you've built something — it feels like giving birth in a lot of ways — you start to think, now what? I felt like I'd capped out what I could do in the HR-IT world. I had no interest in being a straight HR generalist anymore. I really love rapidly changing technology. And somewhere along the way, a leader at Bank of America found out about my background at that small IAM firm, and I had built a reputation within the company for building and implementing new things. So I was asked to stand up a frontline risk, control, identity, and access management team. That was 2017, and I haven't looked back. I did a lot of IAM risk work, then moved to IAM transformation at Truist Bank, did regulatory remediation in the cybersecurity space, kept getting broader scope, and then was approached with the opportunity at First Citizens — leading strategy, transformation, and execution. In a lot of ways, I was all things people within the cyber department. It's been a surprisingly natural transition to lean heavily on my HR roots in a technical environment.
Matthew: That gives you such an advantage. Far too often in IT and security, we're focused on the technology — people get into tech because they love technology. But leadership at its core is about the human aspect. The higher up you go, the more it becomes general leadership and business management rather than technical depth. And for you, that human focus is completely natural.
Kara: Exactly. As we were discussing in prep, I think it's easier to teach someone how to code — and in today's world, AI can help with that. I can teach someone how to configure a firewall, stand up endpoint security, or build a phishing campaign. But it is much harder to teach someone empathy. You can't easily teach someone how to motivate a team, drive cultural change, define the culture you want and then deliver it. Those are the things that truly require human skill.
And this is becoming even more important in the age of AI. There's a lot of negativity out there — "AI is going to take my job." I heard someone on a panel recently say: "You're not going to be replaced by AI. You're going to be replaced by someone who uses AI better than you do." And I thought that was a really powerful statement. It highlights the need for the human component. Understanding the ethical and human use of AI is going to be critical. The leaders who survive and thrive are going to be the ones who know how to use AI to amplify their voice, create efficiencies, and find gaps in their environment. It may replace some jobs, but it will also create new ones. If we stay too focused on tools and technology, we're missing the opportunity to build a culture that sets us up for long-term success.
Matthew: Culture is so important and gets overlooked so often. You can't just sit someone down and say "let's learn empathy." It's a leadership thing. When you lead with compassion and empathy, you create an environment where it's safe to be compassionate. The tech industry tends to be tech-centric — and a lot of people who went into tech did so because they love the technology, not necessarily because of a human-centric background. But if you build a culture that is safe and compassionate, I think you can bring that out in people. I'm curious to hear your thoughts.
Kara: A couple of things come to mind. First, the CISO role is evolving. I'm not sure if you've had John Imparato, the CISO at Hanesbrands, on the show — he does a keynote about the evolving role of the CISO and covers everything you just described. It's about leadership: how to communicate to a board of directors, how to motivate teams, how to set a three-to-five-year strategic direction, how to manage risk. At the end of the day, cybersecurity is a control function — it's about reducing risk. That's not a technical function, it's a business function. And we're seeing more CISOs who come from other areas of business rather than the technical ranks, bringing more experience in leading teams, driving transformational change, and setting strategy.
The second thing is talent sourcing. I think we need to get creative about where we're finding talent. I've heard people say it's hard to find strong cybersecurity talent. I don't think that's true — I think they're not looking in the right places. Look for people with natural leadership, communication, collaboration, and engagement skills, and then teach them the technical side.
The third thing is personal experience. At the ripe old age of 18, I did an internship in an HR department — that's what drew me to HR. The following summer I went back to the same company and found myself working on implementing an HR technology system. I found it fascinating that HR was one of the last areas to be infiltrated by technology. People in HR tend to have strong communication skills, empathy, and people orientation — but they often don't have a technical background. And I discovered very early that I could operate in both worlds. I went back to the dean of my college and said I wanted a degree in both HR and IT. They said you can't. And I thought, watch me. I ended up getting a Bachelor of Science in Business Information Systems with a concentration in IT and a Bachelor of Arts in Business Administration — so two business degrees with concentrations in both HR and IT. That HRIT focus has served me extremely well. And today there are more university programs blending the people side with the technical side, which is exactly where we need to go.
And when you ask people to describe the characteristics of their favorite leaders, they never say, "I liked Bob because he was super technical." They say, "I liked Bob because he cared about me. He mentored me. He gave me great feedback I could act on. He was transparent. He rolled up his sleeves and got in the work with me." None of those are about technology. There's an old saying that when someone speaks to you, you may not remember what they said — but you remember how they made you feel. That is so important in leadership.
Matthew: I love that. And I have a hot take on where AI takes us that I think you'll appreciate. I don't think when AI is fully integrated into everything, it turns us into the WALL-E scenario where everybody's lounging around in recliners and can't function. I think we move into a kind of new Renaissance — where we value creators, artisans, and artists more than ever. We already see glimpses of it. Ten years ago, if your kid said they wanted to be a YouTube influencer, you'd think they were crazy. Now it's a genuinely viable career. Nobody wants AI Shakespeare. Nobody wants to see an AI Tom Cruise hanging off the side of a plane — you want to see the actual Tom Cruise, now in his sixties, doing it himself, because it's human. That's what makes it meaningful. AI-generated content doesn't have the soul, the passion, the humanity. It's like fast fashion versus something hand-crafted — we don't actually value the stuff spat out by a machine. I think instead of people diving into pure tech, which AI is going to handle increasingly well, we're going to value the arts and humanities more. That's my hot take.
Kara: I completely agree. I think we're already starting to see it. AI isn't new — it's just a new term. Large language models, automation, machine learning have been around for years. But it's becoming more mainstream with agentic AI and ChatGPT and Claude and Perplexity. We all carry these little supercomputers in our pockets that can now give us richer, more nuanced answers than a Google search ever did. We're in an almost experimental phase with agentic AI right now, figuring out what we can best use it for. And I think we're going to figure out that AI is a tool — it's meant to complement, not compete with, humans. AI can't be truly creative. AI can give us information to make a decision, but we still need a human to make the decision. AI can't check itself for accuracy. And it has inherent biases that we have to be aware of. Understanding what AI is best used for, and where it should not be used, and putting guardrails around that — that's the work ahead.
Matthew: And there's a real concern right now about the ethics of AI and how you implement it. Bad actors are using AI to do bad things. And even well-intentioned implementation can cause real damage if you're not doing it intelligently and ethically. Just because we can do something doesn't mean we should — or that we shouldn't. But how do we progress as a society with AI, implementing it in corporations intelligently, ethically, and socially responsibly?
Kara: I was speaking on a panel a couple of weeks ago about authenticity in an AI world — specifically, making sure AI doesn't flatten our personalities and communication. When you ask AI to generate a thank-you note, all the information might be accurate, but it's wordier than you'd normally write and uses words you wouldn't use. It doesn't sound like you. That's one of my real concerns — I want to use AI in ways that complement who I am, not change it.
My favorite leadership book for women is Likeable Badass by Alison Fragale. I've actually copied aspects of that book into a leadership thread in my ChatGPT. I've also added quotes that resonate with me, quotes I try to live by as a leader, as a sister, as a wife, as an aunt. The more I've taught my AI who I am and what's important to me, the more it knows my voice. When it produces a thank-you note now, it does sound like me. Occasionally there are still words or phrases I'd adjust — and that editing step is critical. You can't take AI output at face value. You always have to review, validate, and edit it. But we have to be intentional about how we use it and make sure it's helping us, not detracting from who we are as people and as leaders.
Matthew: I think you can't help but look at social media as the cautionary tale here. I don't think anyone intended it to be so detrimental to so many people. I think the intentions were probably genuine — build a good company, give people the content they enjoy. But giving people only what they want to see creates an echo chamber. The 24-hour news cycle made it worse — suddenly you're competing for people's attention all day, every day, and the only way to hold it is with stories that agitate and divide. And that algorithm that's trying to keep your eyeballs has, however unintentionally, split people apart in ways that are genuinely damaging. I don't think CNN or Zuckerberg set out for this — but neither of them is stopping it. And with AI, we face the same question: where's the line between something that's a genuinely useful assistant and something that makes your brain atrophy to the point where you stop thinking for yourself?
Kara: What comes to mind for me is the next generation. I don't have children, but my sister has four — ages 6 to 13. The older two are in middle school and they're already using AI to help them with research and come up with more detailed answers. I wonder: are we teaching them how to learn, or just teaching them to learn differently? A three-year-old today will never know a world without AI. I went to college with a word processor. I got my first email address at NC State. My first smartphone came after I graduated. A Palm Treo, then a BlackBerry. That wasn't that long ago — and the pace of change since then has been staggering. This next generation will grow up in a much more information-saturated environment, for better or for worse. And I think that makes it even more critical to really hone in on your values — who are you as a leader, what do you stand for, and are you leveraging AI to enhance those values or are they working against them? That's what my sister and I try to teach the kids in our lives: how to use this technology responsibly, ethically, and intentionally.
Social media is a personal example for me. I've gone through highs and lows with it. There was a time I was almost addicted to Instagram — I'd go down a rabbit hole and an hour would be gone. That doesn't align with my values. So I've become very intentional about curating my feed so it doesn't pull me in directions I don't want to go. Anything in this world can be used for good or for bad. We have a responsibility as leaders and as human beings to decide what's important to us and to use technology as a tool that aligns with our values.
Matthew: And it comes down to intentionality. Even something as simple as an Apple Watch — if you don't actively control what notifications you allow, it takes over your life. And you don't even notice until you've either turned it off or decided exactly how you want to use it. AI is that same dynamic at a much larger scale, with much bigger implications. And we're not yet at a point where you can hand over your thinking to it, because it will hallucinate and make things up. We're not near AGI — that super-intelligence where everything it outputs is accurate and well-researched. We may get there eventually, but it won't be for a long time. In the interim, people have to learn how to use this thoughtfully. And there isn't a clean answer to that, which is frustrating — but this one is just too human and too messy. We'll keep finding new problems to work through.
Kara: I got an Apple Watch for exactly that reason — I wanted to set my phone down and walk away, but still catch what was genuinely important. I've set it up so I only get notifications that matter. I look quickly, decide if it needs my attention, and if not, I move on and give whoever I'm with my full attention. It's been a great tool when used intentionally.
On the corporate side, I think governance and controls are absolutely critical to the responsible use of AI — particularly from a cybersecurity standpoint. So many companies are moving quickly into AI without thinking through the long-term consequences. My advice: think about your governance structure first. What is the foundation for how you'll control information, data, and the use of AI before you begin implementing it? Then understand, through dialogue with your vendors, which tools and technologies you're purchasing that leverage AI — and make sure those companies are disclosing that.
I had a situation where a vendor we were using did not disclose their use of AI, and it was producing false positives. To me, that is the antithesis of ethical AI use. It's like posting AI-generated content on social media without disclosing it — I would never take credit for someone else's work. I'm far more offended by hiding it than disclosing it. So the questions every organization needs to be asking are: What's your governance framework? What are the controls? Do you have a policy around the ethical use of AI? What is your risk tolerance for AI?
I started at Bank of America in 2007 and in my first year, risk appetite and risk tolerance were ingrained in me. But having done strategic advisory work for non-regulated industries, I've found that many of them simply don't think that way. That's going to have to change. In the world of AI, all companies — regulated and non-regulated — are going to need to define their risk framework, their risk tolerance, and how all of it aligns with their culture, their values, and their mission. None of that is technical. It's entirely about people.
Matthew: Couldn't agree more. And we're in an era right now that reminds me of the early automobile industry — hundreds of companies, and a few will win. Like the dot-com bubble, thousands of businesses, and some will come out on top. We'll see the same with AI. And among the winners, I think you'll see companies like Darktrace — they've been using machine learning for 13 years, built specifically for security, not bolted on. Their email security is exactly what you'd want: it looks at a beautifully written email, spots a suspicious URL that hasn't been flagged yet, and says "this is a phishing attempt that would fool most humans, but not me." That's the future. Traditional email security is going the way of the dinosaur — the writing is on the wall. Once you've used AI-powered security and seen how much better it is, there's no going back. And that same principle applies everywhere: get the right AI doing the right job, governed properly.
Kara: Exactly. And risk tolerance is the lens through which all of this has to be evaluated. In financial services, risk is a constant conversation. But non-regulated industries don't think about it nearly enough, and in the world of AI, they're going to have to. Companies need to define their risk framework, align it with their culture and values, and then implement AI accordingly. Those that start with that foundation will be the ones that benefit most and avoid the biggest pitfalls.
Matthew: Couldn't agree more. Kara, this has been an absolute pleasure — and I think we could go for another four hours. Before we go, can you tell everyone where they can find and connect with you?
Kara: Absolutely. You can find me on LinkedIn — Kara Martin Schlageter. I love meeting new people, building my network, and having conversations like this one. I'm building a strategic advisory and public speaking practice, so if you're looking for someone to speak on these topics, please reach out. I'd love to have a conversation.
Matthew: Fantastic. And I hope they do — you're clearly great at this. Until next time, Kara.
Kara: Thank you, Matthew.