Security Is Everyone’s Job and Why That Matters More Than Ever with Bryan Tomczyk
Bryan Tomczyk serves as a Cybersecurity Engineer at GP Strategies Corporation, where he works closely with senior IT and infrastructure teams to secure systems across a large, global organization. GP Strategies operates primarily as a training and professional services company, supporting clients across multiple countries and industries. Bryan's role places him at the intersection of security engineering, vendor risk management, and user education, with a strong emphasis on enabling the business rather than obstructing it. His background reflects a long-term evolution into cybersecurity, shaped by decades of security-focused thinking before formally entering a cyber role.
Here’s a glimpse of what you’ll learn:
- Why cybersecurity must be embedded into every role, not isolated to IT teams
- How security advocacy grows organically through education and experience
- The real risks of AI adoption without proper guardrails
- Why large language models are not a complete solution for security
- How supply chain risk has become one of the biggest threats to organizations
- What secure by design actually looks like in modern environments
- Practical considerations for evaluating AI tools and SaaS vendors
In this episode…
Bryan Tomczyk explains why the idea that security is everyone’s job only works when organizations invest in education and context. He describes how working directly with users, especially after incidents, creates awareness that policies alone cannot achieve. Security, in his view, must enable productivity while quietly reducing risk in the background.
The conversation dives deep into AI and cybersecurity, with Bryan outlining why machine learning excels at correlating massive volumes of data but struggles when used without constraints. He cautions against treating large language models as universal solutions, noting their susceptibility to hallucination, prompt injection, and misuse. Instead, he advocates for narrowly scoped, self-learning systems with heavily restricted access.
Bryan also addresses the growing complexity of modern environments, from email security and MFA fatigue to operational technology and supply chain risk. He highlights why vendor reviews, SOC 2 reports, and infrastructure transparency are no longer optional. Throughout the discussion, he reinforces a consistent theme that security must evolve thoughtfully, balancing innovation with responsibility to protect users, data, and operations.
Resources mentioned in this episode
Matthew Connor on LinkedIn
CyberLynx Website
Bryan Tomczyk on LinkedIn
GP Strategies Corporation Website
Sponsor for this episode...
This episode is brought to you by CyberLynx.com (that's CyberL-Y-N-X.com).
CyberLynx is a complete technology solution provider, ensuring your business has the most reliable and professional IT service.
The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.
Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.
To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.
Transcript:
Cyber Business Podcast – Bryan Tomczyk, Cybersecurity Engineer at GP Strategies Corporation
Matthew: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Bryan Tomczyk, Cybersecurity Engineer at GP Strategies Corporation. Bryan, welcome to the show.
Bryan: Thank you, Matt. Happy to be here.
Matthew: Happy to have you. Before we get too far in, a quick word from our sponsors.
[SPONSOR READ: This episode is brought to you by CyberLynx.com. Do you know if a hacker is in your system? Most people and most companies don't — until it's too late and the hacker has already done damage. A hacker's job is to bypass your security, so companies need a way of knowing when someone has gotten past their defenses. That's where CyberLynx comes in. We've partnered with the best cybersecurity companies in the world to provide our clients with the best solutions at the best prices — whether it's managed SIEM, SOC, EDR, MDR, or XDR. We'll help you find the right solution at the right price. Find out more at CyberLynx.com.]
And now back to our show. Bryan, for those who aren't familiar, can you tell us about GP Strategies Corporation and your role there?
Bryan: GP Strategies is, at its core, a training company. We do a lot of professional services — pretty much anything a company needs, we'll figure out a way to do — but training is the foundation. We're a wholly owned subsidiary of Learning Technologies Group, one of the largest training companies in the world by market share. I'm on the security team, working closely with our senior IT and infrastructure folks to make sure GP Strategies' systems are secure.
Matthew: One of my favorite topics in security right now is this idea that security is everybody's job — not just the security team's. It's one thing to say that. It's another to actually implement it and get people to buy in. How do you go about that?
Bryan: A lot of what we do is work directly with end users — especially those who have been compromised or are in groups that have been compromised at some point. In this world, someone in your environment is going to get compromised. There's no avoiding it. When those situations happen, we work with those users to explain the how and the why. And in other contexts — like when I'm sitting in a meeting with HR about new systems they want to implement — we bring up security early. Secure by design is the industry standard at this point, but not everybody truly understands it, and not everybody tries to bring security in from the beginning. We don't always get invited in early, but we're always pushing to be there.
Another big piece of this is supplier reviews. When users need new software — whether it's a new AI tool, a procurement package, or just an alternative PDF reader — our security team reviews it. That creates natural, early-on conversations with the people requesting those tools. We take the time to explain why we're asking all these questions, because if people don't understand that, they start thinking we're wasting their time and we become viewed as a roadblock. Our job is to enable — to make it possible for people to do their jobs securely — not to block them.
And the thing I love most is when you find that one person who truly gets it, no matter where they are in the org. Once they understand it, they become an advocate on their team. That team becomes more security-focused. They tell someone else. It's a very organic, grassroots approach. The more people you can get to understand that security doesn't have to get in the way — the better off you are.
That also leads to something bigger: people are genuinely interested in security. I get asked all the time, "How do I get into cyber?" My answer is always: take whatever job you're doing now and apply a security lens to it. I went to my first security event 25 years ago, in the summer of 2000. It took another 15 years to move into a full-time security role. But you can be security-focused without being in security full-time. And that applies to our personal lives too — not writing passwords on a notepad on the kitchen table, making sure the same practices we use at work extend to home. The more the whole world is hardened, the harder bad actors have to work. They'll always find a way, but we want to make it as difficult as possible.
Matthew: And we're already seeing that play out with endpoints. EDR is now so ubiquitous that the endpoint is a much harder target than it used to be. So social engineering has moved to the forefront — it's a whole lot easier to trick someone into handing over their credentials than it is to break through a hardened endpoint. I think that shift makes being security-minded even more critical.
Let's talk email, because I think that along with social engineering, it's where most of the attention from bad actors is focused right now. Poor Joey sitting at his keyboard — he's not a security professional, and if your email gateway lets something through because the malicious URL hasn't been flagged yet, what chance does he have?
Bryan: And it's even worse than that. Just the other day, one of my users got hit with what I can only describe as a mail bomb — around 300 mailing list emails in an hour. The whole idea is a DDoS attack on the person. You flood them with so much noise that they're going to miss something and click on the wrong thing. In that situation, at least there's something obvious to work with — you can get the user on board because they can see something is clearly wrong. It's harder when there's no obvious signal.
But here's what I've found: the best security advocates in our company are the ones who've been compromised. It's like getting your car broken into — nothing major was taken, but you feel violated in a way that's hard to replicate. We need to find a way to create that feeling without the actual harm. Because as we both know, a security awareness email to a general mailbox probably gets deleted 75-80% of the time without being read.
Matthew: And that's exactly why I think technology has to do the heavy lifting here — not training. Joey's job is finance. He's not supposed to be a security expert. And I don't think LLMs are the answer for email security either. What you actually want is self-learning, self-improving machine learning — the kind that understands how Joey writes, who he communicates with, and what a suspicious URL looks like even before it's been flagged. That's what products like Darktrace do. You can't do that with an LLM — LLMs hallucinate, and worse, they're susceptible to prompt injection. Someone sends an email with an embedded instruction and your LLM-powered security tool suddenly does something it wasn't supposed to do.
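The kind of behavioral modeling Matthew describes can be illustrated with a deliberately tiny sketch. This is not how Darktrace or any vendor actually works; it is a toy baseline, with hypothetical names, showing the core idea of learning who a user normally corresponds with and flagging senders that fall outside that pattern:

```python
from collections import Counter

class SenderBaseline:
    """Toy anomaly scorer: learns which senders a mailbox normally
    hears from, then scores unfamiliar addresses as suspicious."""

    def __init__(self):
        self.seen = Counter()

    def learn(self, sender: str) -> None:
        # Record one observed email from this sender.
        self.seen[sender.lower()] += 1

    def score(self, sender: str) -> float:
        # Returns a value in (0, 1]: near 0 for familiar senders,
        # exactly 1.0 for a sender never seen before.
        count = self.seen.get(sender.lower(), 0)
        return 1.0 / (1 + count)

baseline = SenderBaseline()
for s in ["boss@corp.com", "boss@corp.com", "vendor@supplier.com"]:
    baseline.learn(s)

print(baseline.score("boss@corp.com"))       # low: familiar sender
print(baseline.score("attacker@evil.test"))  # high: never seen
```

A real system layers many more signals on top of this (writing style, link reputation, timing), but the principle is the same: the model's judgment comes from the user's own history, not from a static blocklist.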
Bryan: Exactly. And there's a real example of that happening right now — OpenAI just released a new browser and within days there was an exploit where malformed URLs could trigger the AI to execute commands at a trusted level. That's your prompt injection problem in the wild. AI is fundamentally designed to be helpful. And sometimes it's so eager to be helpful that it does things it shouldn't. That's exactly where guardrails come in. An AI that's analyzing your email should have access to virtually nothing else. It analyzes email and that's it. And if it needs to communicate with other systems, there has to be a carefully designed communications barrier controlling what traffic goes in and out. People underestimate how quickly those systems become complicated. I've spoken to people running a dozen LLMs working in concert to complete a single task. That's a lot of compute, a lot of cost — and you have to ask, is this actually the best approach?
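Bryan's guardrail principle, that an email-analysis AI should have access to virtually nothing else, can be sketched as a fail-closed tool allowlist. The class and tool names below are hypothetical, invented for illustration; the point is that the agent's permitted actions are fixed by configuration, so an injected instruction in an email body cannot expand them:

```python
class GuardrailViolation(Exception):
    """Raised when an agent tries to use a tool outside its scope."""
    pass

class ScopedAgent:
    """Toy guardrail: the agent may only invoke tools on an explicit
    allowlist, no matter what instructions appear in the input."""

    def __init__(self, allowed_tools: set):
        self.allowed = allowed_tools

    def invoke(self, tool: str, payload: str) -> str:
        if tool not in self.allowed:
            # Fail closed: anything not explicitly allowed is denied.
            raise GuardrailViolation(f"tool '{tool}' is outside this agent's scope")
        # A real system would dispatch to the tool implementation here.
        return f"{tool} ran on {len(payload)} bytes"

# The email analyzer gets exactly one capability and nothing else.
mail_agent = ScopedAgent(allowed_tools={"classify_email"})
print(mail_agent.invoke("classify_email", "suspicious message text"))

try:
    # A prompt-injected "please fetch this URL" instruction fails closed.
    mail_agent.invoke("open_url", "http://evil.test")
except GuardrailViolation as e:
    print("blocked:", e)
```

The design choice worth noting is the default-deny posture: the allowlist is enforced in code outside the model, so even a fully compromised prompt cannot grant itself new capabilities.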
Matthew: And I think we're in that moment with AI that mirrors the dot-com boom. Everyone had to have a website. Not every small business actually needed one. We're seeing the same thing now — everyone's trying to shove LLMs and AI into everything. We'll figure out where it works and where it doesn't, but it's going to take time. The difference is that the AI bubble is going to be even bigger than the dot-com bubble, because the underlying technology is genuinely more transformational than the internet. When things shake out, the winners will be the tools that use the right kind of AI in the right places — not the ones that bolted on an LLM and called it a day.
Bryan: Right. And the challenge with AI versus traditional software is the update cycle. If Microsoft identifies a security vulnerability, there's a patch on the next Patch Tuesday. With AI, by the time you identify a problem, you're waiting for the next version to be trained and released — and that can take months. And the current training cycle often doesn't even account for the problems you're seeing right now, so you're waiting for the version after that. It makes secure AI development exceptionally difficult in this current phase. Once the industry matures and the major players can afford to slow down a little, security will become a much bigger priority. Right now everyone's spending so fast and moving so fast that nobody really has time to think about "what if" — except for people in our field, who then get accused of killing the excitement.
Matthew: And I think the future is AI doing the heavy lifting on security — not LLMs, but purpose-built machine learning that checks your email, your network, your endpoints, your cloud, your OT. Someday AI will be looking at your code and flagging insecure patterns before they ever reach production. We're not fully there yet, but the building blocks exist. Microsoft's ZAP system on the back end of Exchange is a glimpse of that already.
Bryan: Agreed. And I'll add — I do think we're going to see a move back toward on-premises or private cloud for AI systems. Once you can run your own instance with hardened guardrails, where you control exactly what it has access to and it has no external exposure — that's when you actually start to have secure AI. And the hardware is getting there. AI processors on edge devices, on your phone, on your laptop — in the not-too-distant future, they'll be powerful enough to run these things locally, keeping everything contained rather than routing it through a data center. That's a really exciting future.
Matthew: I think that's the future I want too. And I think as much as screens have been detrimental to younger generations, the youngest kids today are already pushing back against that. I think we're going to see a renaissance of craftsmanship, the arts, live events, real human interaction. People aren't going to want AI Tom Cruise hanging from a plane. They want the real 75-year-old Tom Cruise doing it — because that's what's actually impressive. AI becomes the supporting cast, not the lead.
Bryan: Absolutely. And some of the special effects around him doing it? Sure, AI helps with that. But the moment itself is him. That's the distinction. And I think AI supercharges human life rather than replacing it. Think Temu — cheap garbage that falls apart after three washes and ends up on a beach in Africa. I think people are going to want quality again. Real manufacturing, real clothes, real things that last. That's my prediction.
Matthew: And that's great news for OT security, because if we're manufacturing more in the US and elsewhere, that's a much bigger attack surface to protect. OT is such a challenge — antiquated systems that can't be updated, running critical infrastructure. And machine learning-based monitoring like what Darktrace does for OT is really the only viable path. You analyze everything around it because you can't isolate it.
Bryan: And isolation is a myth anyway. Even in highly secure government environments — SCIFs — I've seen vulnerabilities at the entry and exit points. You can't completely isolate anything in the modern world. And that puts the human back as the final line of defense, which as we've established, is a weak link. I've been focused on security for eight years, my entire feed is security content, I've absorbed as much as humanly possible — and I've still seen people at our level get compromised because they were having a bad day or under stress from something else. That's where the technology absolutely has to step in. We design technology to do the things humans can't.
Matthew: So let's make it practical. When someone comes to you with a cool new AI tool or product and says "we want to use this" — what's the process? How do you vet it?
Bryan: The biggest thing is understanding that almost nothing is a standalone piece of software anymore. Even an app installed locally is almost certainly reaching out to someone else's infrastructure. So you have to look at what that supplier is doing in their own house. When I first started doing this kind of review, I got a lot of pushback. I'd reach out with a long list of invasive questions about backups, data security at rest and in transit, access controls — all of it. Now most companies just send a SOC 2, which covers a lot of the same ground.
That said, a SOC 2 is sensitive. It's essentially the keys to the kingdom — if a bad actor gets access to it, they know all your holes. So we use platforms like Vanta to manage this securely — sign the NDAs, get a secure copy, sometimes even browser-only access so it can't be downloaded. And Vanta now has an AI feature that does a solid job reading through those SOC 2 reports and surfacing what matters, because some of those documents are a hundred pages of dense material. The first thing I do when I open one is scroll to the bottom and look for exceptions. A mentor taught me that — it's the fastest way to know if the document is worth reading in depth.
On the question of whether a missing SOC 2 is a dealbreaker — it's not always black and white. We're a service organization. Our job is to enable people to do their work, and sometimes a tool is required even if it doesn't fully meet our standards. GP Strategies operates in over 20 countries, and some governments require specific software — the UK has only two licensed tax providers, Mexico has a government-tied payroll service that isn't particularly secure. You have to use them. So in those cases, you engineer a secure environment around them. You put guardrails in place and limit what they can access.
Generally speaking, if a company doesn't have either a SOC 2 or an ISO 27001, it's probably not going to get approved unless someone at a higher level is pushing it through. But I've been surprised — I worked with an AI company recently that had only been in business for six months and already had a SOC 2 Type 1. They understood from day one that this matters.
And we try not to give hard no's on evolving technology. If something looks promising but isn't quite ready for our environment, we'll say "let's revisit in six months." We don't want to block people permanently from tools that might be genuinely great with a little more security-focused development.
Matthew: That's great practical advice. And Vanta is a product I'm a fan of too — they make the compliance process so much smoother for everyone involved. Bryan, this has been absolutely fantastic. Before we go, can you tell everyone where they can find out more about you and about GP Strategies?
Bryan: For me, LinkedIn is the best place — just search my name and I'll come up. For GP Strategies, it's gpstrategies.com. If you need training work done — and we do some of the best training program development in the business — reach out. We'll make your training program amazing.
Matthew: Awesome. Thanks, Bryan. Till next time!
Bryan: Thanks, Matt.