Balancing AI, Privacy, and Risk at a Public University with Malcolm Blow


Malcolm Blow serves as Chief Information Security Officer at Bowie State University, where he leads cybersecurity strategy for a complex higher education environment that includes students, faculty, research programs, and public sector obligations. With nearly three years running the cyber program at Bowie State, Malcolm is responsible for protecting institutional data while preserving the academic openness that defines university life. His background spans more than a decade in federal cybersecurity operations across defense, intelligence, and scientific agencies, experience that directly informs his pragmatic approach to risk, governance, and executive decision making. In addition to his university role, Malcolm is the founder of Quantiuum, where he advises organizations on translating technical risk into executive and board level understanding. 

 


Here’s a glimpse of what you’ll learn: 

 

  • Why higher education security resembles managing a small city
  • How universities balance open access with cybersecurity controls
  • Where AI fits into modern security operations and governance
  • Why human oversight remains essential in regulated AI use cases
  • How privacy laws shape AI adoption in public institutions
  • What provable compliance looks like in higher education
  • How the CISO role is evolving into a business-enabling function


In this episode…

Malcolm Blow outlines the unique cybersecurity challenges facing universities, where thousands of students connect multiple personal devices to campus networks every day. Unlike traditional enterprises, higher education must secure faculty and staff systems while simultaneously supporting student access, research freedom, and academic experimentation. Malcolm explains how isolating environments by use case allows institutions to manage risk without disrupting learning or innovation.

The discussion moves into artificial intelligence and cybersecurity, where Malcolm emphasizes that AI is no longer optional for organizations trying to compete and defend themselves. He explains that technology is often the fastest lever to pull in environments constrained by limited budgets and staffing. At the same time, regulatory requirements around privacy and AI use demand careful implementation, particularly in public sector and educational settings.

Malcolm shares real examples of where AI systems can create unintended consequences when human oversight is removed. From admissions decisions to security monitoring, he explains why having a human in the loop is often required to meet regulatory expectations and avoid reputational or legal harm. Compliance alone is not sufficient if systems are not designed with accountability and context.

The conversation concludes with an in-depth look at how the CISO role has changed. Malcolm describes the shift from security as a blocking function to security as a strategic partner. Today's CISO must translate cyber risk into business terms, guide executive decision making, and enable the organization to move faster while staying within defined risk tolerance.

 

Resources mentioned in this episode

 

Matthew Connor on LinkedIn
CyberLynx Website
Malcolm Blow on LinkedIn
Bowie State University Website

 

Sponsor for this episode...

 

This episode is brought to you by CyberLynx.com — that's Cyber-L-Y-N-X.com.

CyberLynx is a complete technology solution provider, ensuring your business has the most reliable and professional IT service.

The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.

Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.

To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

 

Check out other related episodes:

 

Measuring and Managing Technical Debt with Dr. Ken Knapton
AI, Lead Automation, and the Future of Automotive Tech with Yuriy Demidko
How Window World Scales Technology and AI Adoption with Glenn Rumfellow

 

Transcript: 

 

Cyber Business Podcast – Malcolm Blow, CISO at Bowie State University & Founder of Quantiuum


Matthew: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Malcolm Blow, CISO at Bowie State University and founder of Quantiuum. Malcolm, welcome to the show.

Malcolm: Pleasure. Thank you for having me.

Matthew: Thanks for joining us. Before we get too far in, a quick word from our sponsors.

[SPONSOR READ: This episode is brought to you by CyberLynx.com. Do you know if a hacker is in your system? Most people and most companies don't — until it's too late and the hacker has already done damage. A hacker's job is to bypass your security, so companies need a way of knowing when someone has gotten past their defenses. That's where CyberLynx comes in. We've partnered with the best cybersecurity companies in the world to provide our clients with the best solutions at the best prices — whether it's managed SIEM, SOC, EDR, MDR, or XDR. We'll help you find the right solution at the right price. Find out more at CyberLynx.com.]

And now back to our show. Malcolm, for those who aren't familiar, can you tell us about Bowie State University and your role there?

Malcolm: Yes. Full disclosure — I serve as Chief Information Security Officer, or CISO, at Bowie State University. I've been running the cyber program there for coming up on three years next month, and it's been a great experience. I love the mission. My job is to keep the students, faculty, and data safe — along with the broader community as well.

Matthew: And that's a really interesting challenge that I think a lot of people don't fully appreciate. At most businesses you've got clear boundaries — you control the office and the staff, and you secure them. But at a university you've got that traditional challenge of securing faculty and staff, plus the unique challenge of students who come in with all kinds of personal devices onto the network. Balancing giving them access to what they need while not managing their devices — it's like the world's worst BYOD experience. Walk us through how you handle that.

Malcolm: That's pretty spot on when it comes to the randomness and nuances of working in higher education. A good example: the last survey I read indicated that any student living on campus has at least five different devices of their own. And to put that in perspective, anything with an IP address counts — a smartwatch, a PlayStation or Xbox, a laptop, a tablet, a smartphone, a smart TV. Multiply that by thousands of students. They're on our campus, they're on our network, and it's up to us to protect them — even from themselves, within reason. You don't want to over-restrict them either. If a cybersecurity student wants to research malware or hacking for a homework assignment, you can't just outright block that — that causes a real disruption.

On top of that, a university is like a small city. There's a police department, a wellness center with HIPAA data, a research environment with all sorts of controlled unclassified information and data governed by agreements with vendors and research partners. The communication flows in multiple directions. My approach is to treat each of those as a separate business unit with its own use case and its own isolated environment. The student network is isolated so it can't affect the business networks. Then you treat each one individually — do your risk assessments and manage them accordingly.

Matthew: That makes perfect sense. And as AI becomes more ubiquitous — the good guys have it, the bad guys have it — you really have to fight fire with fire. It's no longer just about supercharging Google searches or writing emails. My new favorite example is Darktrace. They've been around about 13 years and they use AI for cybersecurity in exactly the right way — not bolting an LLM onto an existing product, but self-learning machine learning from the ground up. It does what you'd want: it looks at your email, understands the user, and catches things traditional tools miss — like a URL that isn't flagged as malicious yet, or someone claiming to be the COO when the pattern just doesn't fit. I think that's the future of cybersecurity. So with AI where it is and you where you are, where do you see AI playing a role in cybersecurity?

Malcolm: In almost every facet of business, AI is at the moment essentially critical to have — and that's only going to continue. At the executive level, leadership wants to do more, better, and faster. When you're competing with anyone who has a computer and access to AI — and there are free tools, highly affordable tools, people building their own — you almost have to use AI in some capacity across every business unit, including security, just to keep up. Not only with the competition, but with the threats themselves. Darktrace, SIEM tools, and many others are building it in. And between people, practices, and technology, technology is the easiest to scale. The other two require a lot more time and budget — and working in the state, local, and education space, time and budget are both precious commodities we can't throw around. Technology lets us reach that scalability more efficiently.

That said, one point of friction is the changing regulatory environment. Privacy has become a major focus across different states and countries in recent years, and AI regulation is following the same path. Different states are adopting AI policies governing how it can be used in business contexts, including security, and implementation becomes very granular. Here in Maryland, and across much of the space I work in, there are a lot of nuances to consider when implementing AI — especially when it touches health or individual rights. In those cases, having a human in the loop is the best and sometimes the only way to meet regulatory requirements. So figuring out where AI fits and at which decision gates is really where I spend most of my time when I'm adding these capabilities.

Matthew: I think that's really interesting because you can be compliant but not safe, and you can be safe but not compliant — the two aren't the same thing. It puts us in a fascinating place as cybersecurity professionals. We have this great new technology at our fingertips, and you don't want to fall behind — as others adopt AI, they gain a competitive advantage. But you have to weigh that against the security concerns: what data is it accessing, how is it being controlled, how vulnerable is it to something like a prompt injection where a bad actor weaponizes your own chatbot against you. How do you balance that when evaluating products and making those decisions?

Malcolm: Great question. Let me step outside the security context for a moment to make it more concrete. A huge part of the business in higher education is admissions. We want to bring in as many students as we can support, and the right students — the ones who will benefit most from our programs. Even in admissions, best practice and regulation require a human in the loop. AI can make recommendations, help with processing, reformat documents, and handle bulk assessments — but it can't unilaterally pick one student over another without a human reviewing that decision.

And here in the Maryland area, there was a school that had an AI weapons detection system as part of their surveillance — which in concept I fully support. However, it misidentified a student's bag of chips as a weapon. A student going about their normal day was suddenly confronted by multiple police officers responding to what they believed was a valid threat. Now, a human could theoretically have made the same mistake — but with current regulations, a pure machine making that error carries far greater scrutiny and consequence. That's my primary concern: some mistakes are historically normal and expected of humans. But when a pure machine makes the same mistakes in a security or operations context, regulators view it as a much graver offense. The bar is different, and the accountability is different.

Matthew: And I think that's such an important point — AI isn't flawless, and it really does require human supervision. Going back to your admissions example, the volume of applications universities receive today is staggering. With common applications, students are applying to dozens, sometimes hundreds of universities. It's genuinely not humanly possible to read every one. There has to be some AI-assisted processing to make that manageable. And we live in a world right now where it has to be heavily supervised. Every time someone just lets the technology run unchecked — lawyers citing cases that don't exist, students getting confronted over a bag of chips — we're reminded that we're very easily fooled by how capable it seems. How do you put safeguards in place? Is it education? Process? How do you go about it?

Malcolm: As a CISO — and even when I'm wearing a CIO or CTO hat, as I have in previous roles — one of our primary responsibilities is to highlight the risk. That's where my focus is. I'm pro-AI and working as hard as possible to enable the business, but the risks are usually not technology issues. A good example: meeting transcriptions are a game changer. I love them — I'm not sure I could live without them at this point. But when you're transcribing meetings and retaining those records, you have to keep in mind that those transcripts are potentially discoverable in a legal case. At a public institution, they may also be covered by the Freedom of Information Act. You're not trying to hide anything, but certain conversations that weren't designed for public consumption — discussions about how to approach a situation that could be misconstrued out of context — carry real risk when they're transcribed and retained. That's where my thinking usually goes. It's rarely a pure technology problem that makes me say no to something. It's more often the lack of process or training around it that creates the greater risk.

Another example: in cybersecurity we use a lot of acronyms and shorthand. A few months ago I was reviewing a transcription and noticed it had captured "NIST 853" instead of "NIST 800-53." A small error, but if you're working in the private sector and something like that appears throughout a proposal or RFP, it's a business problem. I've reviewed vendor proposals that were clearly AI-generated because custom diagrams had mixed-up labels on the same image. So training, education, and awareness around how to use these tools properly is where I think the focus needs to be when you're implementing AI to enable a business.

Matthew: Let's talk about Quantiuum. How did you start it and where are you with it now?

Malcolm: Sure. I spent over a decade in federal service — across the intelligence community, scientific agencies like NASA and NOAA, and most of my time in the Department of Defense, doing cyber operations and mission planning. What inspired me to start consulting on nights and weekends was a pattern I kept seeing. I'd lead a mission, prove a vulnerability through a red team engagement, present the report to leadership, and then — months or even years later, working with those same stakeholders again — find out the issue was never addressed. That happened repeatedly. Over time, it inspired me to start consulting so I could more directly influence leadership and help organizations actually address risk rather than just document it.

My approach is to translate technical risks into executive-level language: here's the issue, here's the likelihood it happens, here's the cost if it does, here's the cost to fix it. That's where I get the most traction. And that's what inspired Quantiuum. I work with a range of organizations — large enterprises, startups, and SMBs. After building the consulting practice, I eventually left the federal world to do executive and consulting work full time, and it's been a great path ever since.
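The risk translation Malcolm describes — likelihood, cost if it happens, cost to fix — reduces to a simple expected-loss comparison. A minimal sketch in Python, using purely illustrative figures (none of the numbers or function names come from the episode):

```python
def expected_annual_loss(likelihood_per_year: float, cost_if_it_happens: float) -> float:
    """Annualized loss expectancy: probability of occurrence times impact."""
    return likelihood_per_year * cost_if_it_happens

def remediation_case(likelihood: float, impact: float, cost_to_fix: float) -> str:
    """Frame a finding in executive terms: fix it when expected loss exceeds fix cost."""
    eal = expected_annual_loss(likelihood, impact)
    verdict = "fix now" if eal > cost_to_fix else "accept or defer"
    return (f"Expected annual loss ${eal:,.0f} vs. remediation ${cost_to_fix:,.0f} "
            f"-> {verdict}")

# Hypothetical example: a 20% annual chance of a $500k incident, $40k to remediate.
print(remediation_case(0.20, 500_000, 40_000))
```

The point of the framing is that the decision is expressed in dollars and probabilities rather than CVE scores, which is the language a board can act on.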

Matthew: With your government background, does that lean you toward the CMMC side of things in your consulting work?

Malcolm: It's somewhere in the middle — and honestly it's not because of my government background specifically. I just think CMMC, which is based on NIST 800-171, is the best-balanced framework out there. It hits the right balance between being prescriptive and descriptive. That said, wherever I am, I use whatever framework is the best fit for that organization. My usual approach is to establish a core GRC platform built around one primary framework and then crosswalk out from there. At the university level, for example, whatever we anchor to internally, we crosswalk out to HIPAA, FERPA, GLBA, and all the applicable privacy regulations. Having that one core platform in the middle gives us what I call provable compliance — one set of artifacts and evidence collected through regular operations, and when an audit comes, we give the auditors access to the GRC platform and the controls are typically met with very few issues because of those crosswalks.
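The "one core framework, crosswalk outward" model Malcolm describes can be sketched as a simple mapping. The control IDs below are NIST 800-53-style, but the specific mappings are hypothetical illustrations, not an authoritative crosswalk:

```python
# Core controls mapped to the external frameworks their evidence also satisfies.
# Mappings here are illustrative only.
CROSSWALK = {
    "AC-2 (account management)": ["HIPAA", "FERPA", "GLBA"],
    "IR-4 (incident handling)":  ["HIPAA", "GLBA"],
    "MP-6 (media sanitization)": ["HIPAA", "FERPA"],
}

def evidence_for(framework: str, satisfied_controls: set[str]) -> list[str]:
    """Return the core controls whose artifacts also serve as evidence for `framework`."""
    return [control for control, frameworks in CROSSWALK.items()
            if framework in frameworks and control in satisfied_controls]

# One set of artifacts, collected once, answers multiple audits.
satisfied = {"AC-2 (account management)", "IR-4 (incident handling)"}
print(evidence_for("HIPAA", satisfied))
```

This is the mechanic behind "provable compliance": evidence is gathered once against the core framework, and each audit is answered by following the crosswalk rather than by a separate collection effort.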

Matthew: You know, hearing all of this, I can't help but think about how much the CISO role has changed — not just in the last decade, but even recently. It's not an old role to begin with, but the pace of evolution has been remarkable. I'd love to get your take on how it's changing.

Malcolm: I definitely agree. Even watching my own career progression, the CISO role has shifted — and I'd say for the better. Right now the focus is much more on being an enabler of the business and a full member of the C-suite, not just a security function off to the side. Historically, cybersecurity was seen as the team of no — and the top security person was often just the analyst or engineer who worked the hardest and got promoted until they hit their ceiling. They were very strong technically and very focused on making things as secure as possible, which created friction with other business units. Now the CISO is typically focused on risk, GRC, and how security can actually drive business value.

In some of my consulting engagements, for example, I've helped organizations identify the right compliance framework or audit certification that would actually speed up their sales process and open new business opportunities. That's a completely different way of thinking about the role. The CISO is now much more intertwined with — and in some cases, a true partner to — the CIO and CTO. I've worn all of those hats, and I've seen organizations where the CIO actually reports to the CISO, because of the CIA triad: confidentiality, integrity, and availability. The CIO tends to focus on availability, while the CISO holds responsibility across the other pillars as well.

A very recent example is Microsoft, which broke out its security leadership structure. Their global CISO now has roughly a dozen functional CISOs under them across different horizontals and verticals — because CISOs are now responsible for GRC, privacy, AI security, business continuity, disaster recovery, traditional SOC operations, and threat intelligence. It's simply not practical to be a mile wide and a mile deep in all of those areas simultaneously. Breaking them down, with the top-level CISO focused on translating across all of them and ensuring informed decision-making at the executive level — that's the evolution.

The other major shift I've seen: historically, CISOs had all the responsibility and none of the authority. That's changing. Now they're able to properly place responsibility with whoever holds the authority, and make sure that decisions about risk are made by the stakeholders who own them. Whether it's which AI tool to adopt, which vulnerability to push to the next sprint cycle, or which risk to defer to the next funding cycle — whoever makes that call owns the outcome if things go wrong. Every risk has an impact and a likelihood, and making sure those are accurately characterized means that if a breach does happen — and most organizations will experience one at some point — you can address it properly, report it appropriately, and investigate it in time.

Matthew: And that shift — from the team of no to a genuine business partner — is huge. The cybersecurity function has gone from being a pure cost center playing defense to being a force multiplier that helps the organization move forward.

Malcolm: Exactly. And one of the first things I focus on when I join an organization is establishing that mindset within my own team. We find the skeletons in the closet, get them on a roadmap, make sure leadership has a documented risk tolerance, and give the security organization the authority to operate within it. Anything outside that tolerance becomes a conversation, not a roadblock.

And to fight the decades of bad taste that other business units have built up around security, I've worked hard to make security something people actually want to engage with early. After about two years at Bowie State, we've gotten to the point where people come to security proactively — because they know our answer is never just no. When we get a request, the response is always: here's what it will take to make this happen. Our job is to show them the path. Anything is possible — it just has to be done through the right controls. That shift, from gatekeepers to enablers, changes everything.

Matthew: That is such a big shift — seemingly small, but it ends up being enormous. Being the "no guys" is demoralizing for everyone, and it's just not helpful. But "here's what it will take to make this happen" — that's a completely different relationship.

Malcolm, I can't thank you enough for coming on. Before we go, can you tell everyone where they can find out more about you and about Quantiuum?

Malcolm: Sure. The easiest way to reach me is on LinkedIn. Please do reach out — my only request is that you send a message letting me know who you are and what you're looking for. I'm happy to connect with anyone, and I look forward to seeing how the industry continues to change for the better.

Matthew: Love it. Malcolm, until next time — thank you.

Malcolm: Bye.
