David Mashburn serves as Chief Information Security Officer at Embry-Riddle Aeronautical University, one of the world's leading institutions focused on aviation, aerospace, and applied engineering. With residential campuses in Florida and Arizona alongside a large global online population, Embry-Riddle operates in a highly complex technology and security environment. David oversees cybersecurity across academic, research, and administrative systems, balancing innovation, safety, and operational resilience. His background spans enterprise security, incident response, and leadership roles in both higher education and large-scale commercial environments, giving him a pragmatic perspective on how security must enable the mission it protects.
David Mashburn explains how Embry-Riddle's aviation-focused mission creates unique security requirements. With flight training, aerospace research, and global online education, systems must remain available and trusted at all times. Security exists to support learning and operations rather than slow them down.
He shares why AI in cybersecurity should be viewed as a natural progression of existing analytics. From SIEM platforms to cloud security tools, machine learning has been embedded in security workflows for years. The current wave of AI expands scale and speed while introducing new governance considerations.
The conversation dives deep into Zero Trust principles as a practical necessity. With thousands of unmanaged devices accessing university systems daily, security decisions rely on identity verification, behavior analysis, and continuous monitoring instead of network location.
David also discusses the balance between automation and accountability. While AI can reduce analyst workload and surface insights faster, final decisions must remain human. Automation supports judgment but does not replace responsibility.
The episode closes with David’s career journey, from early exposure to technology through his family, to coaching athletics, to enterprise security leadership. He explains how coaching shaped his leadership philosophy and how those lessons translate directly into managing security teams under pressure.
Resources mentioned in this episode
Matthew Connor on LinkedIn
CyberLynx Website
David Mashburn on LinkedIn
Embry-Riddle Aeronautical University Website
This episode is brought to you by CyberLynx.com
That's CyberL-Y-N-X.com.
CyberLynx is a complete technology solution provider to ensure your business has the most reliable and professional IT service.
The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.
Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.
To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.
Matthew: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by David Mashburn, CISO at Embry-Riddle Aeronautical University. David, welcome to the show.
David: Thank you so much for having me, Matthew. Really appreciate it.
Matthew: Thanks for joining us. Before we get too far in, a quick word from our sponsors.
[SPONSOR READ: This episode is brought to you by CyberLynx.com. Do you know if a hacker is in your system? Most people and most companies don't — until it's too late and the hacker has already done damage. A hacker's job is to bypass your security, so companies need a way of knowing when someone has gotten past their defenses. That's where CyberLynx comes in. We've partnered with the best cybersecurity companies in the world to provide our clients with the best solutions at the best prices — whether it's managed SIEM, SOC, EDR, MDR, or XDR. We'll help you find the right solution at the right price. Find out more at CyberLynx.com.]
And now back to our show. David, for those who aren't familiar, can you tell us about Embry-Riddle Aeronautical University and your role there as CISO?
David: Sure. Embry-Riddle is renowned for being a flight school. It had its origins in the 1920s, and we're actually coming up on our centennial anniversary in 2026 — the 100th year of the university. It started as a flight school and eventually settled in Daytona Beach, FL as the primary campus, with a second campus in Prescott, AZ. With a little bias from being there, I'd say we're probably the premier flight school in the country. Everything we do — every course of study — has some tie back to aviation: air traffic control, maintenance, space, and of course unmanned systems, which is huge these days. In terms of my role, I oversee pretty much all aspects of security across the residential campuses as well as our very large online programs. It's a big environment, but not overwhelming — big enough to be interesting without taking over your life.
Matthew: Fantastic. Let's dive in. When it comes to security in the modern age, I think AI is very much the "fight fire with fire" approach. One of my new favorites on the security side is Darktrace — I love how they go about things. Instead of bolting an LLM onto an existing email security product, they use proper machine learning that's been purpose-built for a long time. It does exactly what you'd expect: understands the user, reads email as it comes in, catches the bad stuff. I think that's a great example of AI being right-fitted and a real glimpse into the future. And now we're seeing AI used in attacks as well — not just to write better phishing emails, but to automate the attacks themselves, making them incredibly fast. So it's becoming a genuine arms race. Where do you see AI in security currently, and where do you see it going?
David: I'll start in general. Your observation about how vendors have approached this is a great one. But I'd also say that in the security context especially, the amount of data that is security-relevant has always been historically very large. So it's always been a ripe field for machine learning. Vendors have had that experience over many years — whether it's Darktrace, the analytics revolution that came from products like Splunk, or the large cloud providers like Microsoft who have already been doing that type of analysis for a long time. To me, what we call AI today is really just a natural evolution of the machine learning products we've had in the security space for years.
On the fight-fire-with-fire point — I was actually reading an article this morning about a paper I believe was published by Anthropic discussing the use of Claude Code to conduct some automated attacks. There was healthy debate about the legitimacy of some of the claims, but it still shows that adversaries — as they have in many cases — tend to be early adopters of technology. I always joke with my students that attackers were all over PowerShell before our admins were. They were using it to automate attacks before we had internalized it as the operational tool it was designed to be. AI is the same story.
Matthew: And that's the real challenge — as the good guys, we have to balance being early adopters with protecting an organization we're responsible for. The bad guys have nothing to lose. They can try a million times and only have to succeed once. We have to defend a million times and catch them quickly when they do get in. So what's the right balance in this modern environment?
David: The idea of stopping it and putting the genie back in the bottle — that ship has long since sailed. It's not going back. In our environment, we have very different cohorts to manage. We have students who bring their own systems, which we have limited control over. We have areas where we have specific control, and others where we have to give researchers the freedom to do what they need to do — which often involves using generative AI or large language models. So it's definitely a balancing act. The approach we've taken is to say: we're here to help people do what they want to do, but with guardrails. What data are we putting into these tools? How is the provider handling it? Are they using it for further training? If you're using our enterprise license, you're probably in good shape. If you're using a free tool outside our enterprise offering, we don't necessarily say no — we explain your obligations to protect university data and the pitfalls you may run into. If those are too much to overcome, we suggest using the solution we're already offering.
Matthew: And every university is in that unique position — it's essentially the world's greatest BYOD nightmare. Thousands of students on all kinds of devices needing access to various systems, and you can't manage their devices. You can perhaps monitor the network for indicators of compromise, but even that has its challenges. Can you walk us through how you think about securing that environment so people can put it in perspective?
David: To some extent, this is the original zero trust model. We have lots of assets we don't control accessing applications and resources in our environment. Where they originate no longer really matters — we've moved well beyond "this IP address is safe." That hasn't been a reliable assumption for a long time.
So we've approached it through the lens of behavior and identity, which are core tenets of zero trust. I have students accessing from personal devices, from campus labs or classrooms, and students who never set foot on campus — all accessing the same types of resources. The commonality is: are they properly authenticated? How are we tracking potential account takeovers? What mitigations do we have in place? And then behaviorally: once they're in, are they properly scoped within the application, and what are they actually doing?
A good example I can freely share — and I've talked about this publicly before — is something that's affected a lot of higher education institutions. It's called "payroll pirates." Account takeovers now go well beyond sending spam. Attackers go in and use HR self-service tools to change payroll deposit information — targeting faculty, staff, or student employees. And alongside that is "ghost students" — creating fake accounts to commit financial aid fraud. All of these things, internal and external, come back to the same fundamentals: what is actually happening behaviorally and at the identity level?
And managing that data at scale — even in a Microsoft 365 tenant with a relatively small number of users — is a lot. The volume of logs just for identity protection alone is staggering. You can't go through that manually. It's literally impossible. There aren't enough hours in the day for a person, let alone a small team.
Matthew: So where do you fall on the SIEM question — traditional SIEM, or the newer AI-powered approaches that are coming out?
David: Right now we have what I'd call a hybrid setup. We have a traditional SIEM for log aggregation and detections aligned to frameworks like MITRE ATT&CK, focused especially on post-exploitation behaviors — what are they doing after they've gained access. We combine that with Microsoft's security toolset since we're a Microsoft shop — tools like Defender for Identity that are already leveraging machine learning, and almost certainly AI soon. We're largely relying on our vendors to bring those features forward for us. We're a relatively small security team, and we're not in the business of building our own AI tooling. It's just not in the cards.
Matthew: And I find it really interesting when universities partner with startups to build out custom tools. Super cool on both ends — but you're investing a lot of time and effort into something that may never launch or may not work.
David: We've had similar inquiries — researchers reaching out about grant opportunities to bring AI into a SOC environment, with an aviation or space angle naturally. We have researchers trying to make that operational, which makes sense — the university as a laboratory for figuring some of these things out. Some things won't pan out commercially, but you learn enough that a commercial application emerges beyond the scope of the original research. We support those efforts — providing redacted data sets, advising on grant submissions — but we're not trying to do that ourselves as a security team.
Matthew: So let's zoom out a bit. Do you think AI ultimately permeates every aspect of security?
David: Vendors are probably going to find a way to put AI into everything. Whether it's a value add everywhere is a different question — probably not. There are definitely places where it can add real value, but I have a specific concern: I have both full-time employees and student employees in my SOC. We're giving students valuable real-world experience and they're doing real work for us — it's a great trade-off for both sides. What I don't want is AI replacing that, because those students would be losing foundational experience: how to approach a problem, what data to use in an investigation, how to follow a process. Those are skills you build by doing.
Where I do see a clear sweet spot is AI as a coach. When an analyst is working in a console or in the SIEM, they can ask the AI a question, bring in additional context — whether through retrieval-augmented generation, MCP connections to other systems, or whatever the mechanism is. It can answer questions that help the Tier 1 analyst without replacing them. And it frees up senior staff to focus on the things that actually require their depth. That to me is the real sweet spot.
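The "AI as coach" idea rests on retrieval: pull relevant context for the analyst's question before any model answers it. Below is a toy keyword retriever standing in for a real RAG pipeline (which would use embeddings or MCP connectors); the runbook contents and scoring are illustrative assumptions.

```python
def retrieve_context(question, runbook):
    """Rank runbook entries by keyword overlap with the analyst's question.

    A naive stand-in for the retrieval step of a RAG pipeline: a real
    system would use embedding similarity, not raw word overlap.
    """
    terms = set(question.lower().split())
    scored = [(len(terms & set(text.lower().split())), title)
              for title, text in runbook.items()]
    scored.sort(reverse=True)
    # Return only entries that matched at all, best first.
    return [title for score, title in scored if score > 0]
```

The retrieved entries would be handed to the model as context, so the Tier 1 analyst gets an answer grounded in the team's own procedures rather than the model's general knowledge.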
Matthew: That's really interesting because it raises a broader question — is the SOC analyst a modern-day telephone operator? A critically important role at a certain point in time, that eventually gets so thoroughly augmented and automated that it morphs into something unrecognizable? Does AI get to the point where the analyst is doing 10% of the work and AI is doing 90% of it?
David: I think it ends up closer to what you describe — AI doing a lot of the work, but directed by the analyst. Instead of typing out tickets and manually building timelines through log data, the analyst is directing AI agents and reviewing outputs. It'll be more like managing agentic resources than doing the raw analysis themselves. And as the cost comes down and AI gets integrated into the products, smaller teams become as effective as much larger ones. Does that mean larger teams shrink, or does it mean they finally get to work on the things that never got attention? Security always has more to do than it has time for. I think a lot of it will be the latter — the things that got ignored before now get addressed.
Matthew: And accountability is key here. I love your point about wanting to know that a human consciously made the decision to block or remediate — not that the system just did it passively. We're in this transitional period where AI is essentially the world's most advanced autocomplete. Like self-driving cars — Tesla is driving, but if someone gets hurt, the person behind the wheel is still responsible. The technology isn't yet capable of bearing that accountability. So how do you balance the benefits of automation with the responsibilities of security?
David: The lesson from almost any kind of automation is that you don't try to automate every edge case — you find the sweet spot. If I can cover 80% of cases with automation, I've probably maximized the value. Keep that in mind and you stop trying to have everything fully automated. Just handle the common, fast cases where you know the outcome is good.
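The 80% sweet spot translates naturally into a triage function: automate the common cases with known-good outcomes and route everything else to a person. The alert types, confidence threshold, and playbook names below are hypothetical, chosen only to show the shape of the pattern.

```python
def triage(alert):
    """Auto-handle the common, well-understood cases; escalate the rest.

    Alert fields and playbook names are illustrative, not from any
    real product. The point is the explicit escalation default.
    """
    if alert["type"] == "known_benign_scanner":
        # High-volume noise with a known-good outcome: safe to auto-close.
        return ("auto_close", None)
    if alert["type"] == "phishing_url" and alert["confidence"] >= 0.95:
        # Fast, common case where remediation is low-risk.
        return ("auto_remediate", "quarantine_message")
    # Everything else goes to a human analyst: automation supports
    # judgment, but the final decision stays accountable.
    return ("escalate", "tier1_queue")
```

Making escalation the default branch, rather than an exception, is what keeps the edge cases from being silently automated away.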
And honestly, the technology is moving faster than we think. I was recently in Austin and Waymos are everywhere now. You see the first one and you're amazed — but then you realize they're everywhere, and the technology is further along than most people realized. The same thing is going to happen in the security realm. The AI features that let analysts query data naturally and wield it effectively are coming sooner than we expect. And in some ways, they're already here — CrowdStrike and SentinelOne already have products that do something like this. Analysts get event data and the AI says, here's what you should consider next. It's already here.
Matthew: It really is. We're living in what used to be science fiction. Self-driving cars, HD video calls from anywhere. And AI in security is just as exciting to me. Going back to the email example — traditional filtering lets something through because a URL hasn't been flagged as malicious yet. But an AI that understands how people write, who they communicate with, and what a suspicious URL pattern looks like can catch it before it ever reaches the inbox. That is where AI shines and saves the end user, because the weakest link is always the person at the keyboard. It's so much easier to social-engineer Joey than to break through a hardened endpoint. I'm also looking forward to the day when AI is doing this for people in their personal lives — especially senior citizens who are getting hammered by increasingly sophisticated scams. One of our previous guests mentioned their grandmother in her 90s, who was sharp enough to ask a deepfake voice for the name of her granddaughter's childhood pony. That's brilliant — but most people aren't that sharp, and they shouldn't have to be.
David: That hits on exactly the right philosophical point — what we're really trying to do is make sure people can do their jobs. The reason we do security here is to make sure the planes keep taking off and people can do their classes and the university administration can pay bills and process payroll without disruption. And you've put your finger on something I deal with constantly: I can go to someone in accounts receivable whose entire job is to take vendor invoices and pay them — invoices that have to come via email — and tell them "don't open anything you're not expecting." But how do they know what they're expecting that day? It's almost an absurd thing to say, because I'm essentially telling them not to do their job.
On the deepfake scam point — with generative AI, someone could take this podcast recording and call my father, and it would sound exactly like me. Very convincing unless you have some kind of mechanism in place — like asking something only I would know. I think the best future is one where AI is removing friction for people doing their jobs while adding smart speed bumps when something looks off — having done the deep analysis invisibly, and intervening only when appropriate.
Matthew: Exactly. And the mortgage industry is a good example of where hard-won lessons forced the right processes. Title companies got hit badly by wire fraud — it was a pricey lesson. The good ones now never send wiring instructions by email and require you to pull them from a secure portal. When I recently bought a house, the warnings were all over it. That kind of human intervention — does this make sense, pick up the phone, verify against information we already have — is always going to be valuable. Business email compromise is getting better, but we haven't solved it. AI is where we finally do, because it catches those patterns so cleanly. It knows who in the organization writes to whom, and when something doesn't fit.
David: Absolutely. And the financial controls piece is interesting because even with great AI at the email layer, you still want those downstream process controls — they live in a different domain, typically under audit and financial requirements. Having a business process that says if a vendor sends new banking information, someone picks up the phone and verifies it — that's a layer AI can complement but probably shouldn't replace. The human intervention somewhere in the chain asking "does this make sense?" adds both security and accountability.
Matthew: Well, we could honestly go on for hours. Before we wrap up — I love origin stories. What got you into IT in the first place, and how did you get from there to where you are today?
David: My dad was a 30-year IBM employee, so we were always around computers growing up. But that wasn't where I thought I'd end up. When I went to university for undergrad, I started in electrical engineering, decided it wasn't really my thing, and completely pivoted to exercise science because I was involved in coaching and athletics. When I graduated, I somehow ended up in an IT job — it just kind of happened. I coached for a while and did the IT work in parallel, then eventually decided I wanted a house and a normal life, so I moved to IT full time.
I followed a fairly traditional path — help desk to sysadmin, some application development, some database work. I worked at a series of startups around the dot-com era, which gave me great experience. Eventually I moved into networking, and from networking the transition into security was a natural one. And once I got there, I knew it was where I wanted to be. The idea of building a story from small pieces of evidence you have to go find and put together was very appealing to me — and still is to this day.
Getting to Embry-Riddle wasn't really part of the plan. I came from Salesforce, where I was doing large-scale incident management. Fantastic people, great security culture there. But my daughter was a student at Embry-Riddle and kept nudging me to apply for an open position. I kept saying no, I'm happy where I am. She kept at it. I applied, and it turned out to be a great fit. I have a great team and great leadership here who are genuinely supportive of the security mission.
Matthew: And you're actually the second coach we've had on — it's not a common origin story in this field. Do you find the coaching background carries into how you lead today?
David: Definitely. When I was a young coach, I was very gung-ho — it's all about the work, run through walls. As I evolved, I realized a lot of it is about understanding people's motivations, meeting them where they are, and helping map a path forward for them. That applies directly to managing people in the workplace, because everyone has different things going on in their lives and is at a different point in their career. You're communicating what you need from them as a minimum while also giving them a path to grow and advance, and making sure they have the feedback they need to know where they stand. A lot of coaching is really about how you treat people and how you prepare them mentally — and that doesn't go away when you leave the field.
Matthew: That's fantastic. David, before we go — can you tell everybody where they can find out more about you and about Embry-Riddle?
David: You can find out everything about Embry-Riddle's programs at erau.edu. As for me, you can look me up on LinkedIn. I keep a fairly low profile by nature of being a security person — not super active on social media. But I do some periodic teaching on cybersecurity topics, so if you liked what you heard today, you might find me somewhere doing that again.
Matthew: Love it. David, thank you so much for joining us today. Until next time!
David: Thank you so much. Really appreciate the opportunity. It was a pleasure.