Where AI Helps, Where It Hurts, and Why Governance Matters with Olivia Phillips


Olivia Phillips is the founder of Wolfbyte Technologies, an AI-focused consulting firm that helps organizations understand where artificial intelligence truly fits within their existing technology and security foundations. In addition to leading Wolfbyte Technologies, Olivia serves as Vice President of the USA Chapter for the Global Council for Responsible AI, where she works alongside global stakeholders to promote structured, ethical, and secure AI adoption. With a background spanning cybersecurity, intelligence, and hands-on operations, Olivia brings a practical, security-minded perspective to conversations that are often dominated by hype. Her work consistently centers on preparedness, responsible implementation, and protecting people as technology accelerates.


Here’s a glimpse of what you’ll learn: 

 

  • Why AI should be layered onto a strong foundation rather than rushed into production
  • How self-learning AI differs from large language models in security use cases
  • Why responsible AI requires structure, governance, and human oversight
  • How deepfakes and AI-driven fraud are impacting real people today
  • Why separation of systems and access still matters in a highly automated world
  • How AI can support security teams without replacing human judgment
  • What aspiring professionals should understand about careers, certifications, and networking

In this episode…

Olivia Phillips explains why many organizations are approaching AI backwards by focusing on tools before understanding their own environments. She describes how Wolfbyte Technologies helps clients inventory assets, understand dependencies, and ensure foundations are stable before introducing AI. Without that groundwork, she warns that AI can amplify existing weaknesses rather than solve problems.

The conversation dives deeply into AI and cybersecurity, particularly the difference between self-learning machine learning systems and large language models. Olivia outlines why self-learning systems are better suited for threat detection, while LLMs introduce risks such as hallucinations and prompt injection. She emphasizes that AI should reduce analyst workload, not create more busywork or new attack paths.

As Vice President of the Global Council for Responsible AI USA Chapter, Olivia shares real-world examples of AI misuse, including deepfakes targeting family members. She stresses that responsible AI means placing structure around how systems are built, accessed, and monitored. Throughout the episode, she reinforces that technology alone cannot solve trust issues and that verification, separation, and human awareness remain essential.

 

Resources mentioned in this episode

 

Matthew Connor on LinkedIn
CyberLynx Website
Olivia Phillips on LinkedIn
Wolfbyte Technologies LinkedIn
Global Council for Responsible AI USA Chapter Website

 

Sponsor for this episode...

This episode is brought to you by CyberLynx.com.

That's Cyber-L-Y-N-X.com.

CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service.

The bottom line is we help protect you from cyberattacks, malware, and the dreaded Dark Web.

Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.

To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

 

Transcript: 

Cyber Business Podcast – Olivia Phillips, Founder of Wolfbyte Technologies & VP of Global Council for Responsible AI (USA Chapter)


Matthew: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Olivia Phillips, founder of Wolfbyte Technologies and Vice President of the Global Council for Responsible AI, USA Chapter. Olivia, welcome to the show.

Olivia: Thank you for having me.

Matthew: Thanks for joining us. Before we get too far in, a quick word from our sponsors.

[SPONSOR READ: This episode is brought to you by CyberLynx.com. Do you know if a hacker is in your system? Most people and most companies don't — until it's too late and the hacker has already done damage. A hacker's job is to bypass your security, so companies need a way of knowing when someone has gotten past their defenses. That's where CyberLynx comes in. We've partnered with the best cybersecurity companies in the world to bring you the best solutions at the best prices — whether it's managed SIEM, SOC, EDR, MDR, or XDR. We'll help you find the right solution at the right price. Find out more at CyberLynx.com.]

And now back to our show. Olivia, for those who aren't familiar, can you tell us about Wolfbyte Technologies?

Olivia: Absolutely. Wolfbyte Technologies is an AI consulting firm. We're not here to sell you something with an AI sticker on it — we're here to help you find the fine line between what you want and what you actually need. Everybody wants to implement AI, but there's groundwork that has to be in place first. Do you know your asset management? Do you know all the tools you have in-house? Do you understand how everything is connected? If AI gets layered onto an unstable foundation, it's going to break things. We help organizations get their stack in order so that when AI does go in, it lands correctly. We can also help with funding and connecting organizations with the right resources.

Matthew: I think that's really valuable because there is so much AI innovation happening so fast that it leaves a lot of people genuinely confused about which direction to go, what products to use, and how to use them responsibly. So Wolfbyte is essentially the trusted advisor that helps them sort through all of that?

Olivia: Yes. And we're very direct about it. We're not here to impress you — we're here to make sure you're successful.

Matthew: Let's talk about some of that. What are your favorite uses of AI right now — do you have a new favorite toy?

Olivia: I'm a big fan of Darktrace. I really like how they approach their AI products, especially when it comes to threat intelligence. We all grew up — or at least I grew up — with G.I. Joe, and you know the phrase: knowing is half the battle. Darktrace helps with that. I also keep my eye on Anomali and what they're doing in the AI space. Those are my top two right now.

Matthew: I'm a big Darktrace fan as well. For people who aren't as familiar, can you break down how Darktrace actually uses AI in a way that helps organizations?

Olivia: There's a lot of value there — particularly the fact that you don't need a person staring at a screen all day hunting for threat intelligence. AI can do that for you. It's not perfect — like humans, AI still has bugs and is still being refined — but it gives you a much bigger picture much faster. When I used to do cyber intel work manually, it would take me hours to do what AI can now do in seconds. And beyond identifying threats in the environment, it can help you determine which threats are actually relevant to your specific environment and how to protect against them. It shifts you from reactive to proactive, which is where we all want to be.

Matthew: And what I really love about Darktrace specifically is that it does exactly what you'd picture when you say "AI for security." For email, for example — it learns how Joey writes emails. So when Joey suddenly starts sending very different emails to different people at 2:00 in the morning, it catches that and intervenes. It's intuitive. And there's an important distinction worth clarifying for people — when we talk about AI with Darktrace, we're really talking about self-learning machine learning, not LLMs.

Olivia: Right, and that matters a lot. If you use an LLM for email security, you open yourself up to prompt injection attacks. Someone can craft an email that manipulates the LLM into taking unintended actions. That's a real and growing problem. Self-learning machine learning is a completely different approach — and a much more appropriate one for this use case. A lot of companies are just bolting an LLM onto their existing security products, calling it AI, and not really solving the problem. Worse, they may be introducing new ones.

Matthew: Exactly. You're using up compute and potentially creating new attack surfaces without adding real value. Now let's talk about your other role — the Global Council for Responsible AI. What's that about and what does it do?

Olivia: The Global Council for Responsible AI is exactly what the name says — it's about making sure AI is used responsibly. Not letting it become the Wild West. There has to be structure, framework, and governance behind how AI is implemented. How do we make sure that when we deploy AI, it doesn't just run amok? We're working with councils in the UK, Dubai, Singapore — seeing what frameworks they're putting in place and helping companies make sure their AI implementations are secure and responsible.

One major area is OT environments. A lot of companies want to implement AI in OT, SCADA, and ICS systems — but you can't just throw AI at that. You have to think strategically. How are you analyzing sensor data? When do you create alerts? How do you secure the implementation? Can you protect against back-door entry? At the end of the day, someone still has to be watching. You let AI go forth and do its job, but you're monitoring it too.

Matthew: And speaking of monitoring — one of the areas I think about a lot is AI agents. When you connect an agent to your email so it can access your calendar and help schedule meetings, that same agent can potentially be prompt injected by a bad actor and suddenly they have access to far more than you intended. What's your take on how people should be thinking about that?

Olivia: You have to be very intentional about what you give it access to. Don't give it everything. Yes, here's my name, here's my calendar — but only for specific calendars, not all of them. I personally have four separate calendars with different passwords, and they are not connected to each other. That comes from the principle of separation of duties — something everyone in this field learned from CompTIA. My personal email, my work emails, they're all separate. It's tedious to manage. But if one gets compromised, they don't all go down.

I know this firsthand. I was personally compromised through a third-party breach — my Equifax and TransUnion data was exposed, and I had to go through the very real pain of trying to get that cleaned up. Which, as you know, is never 100% fixable. Since then I freeze my credit at all three bureaus. I have credit monitoring — which I got for free from the company whose breach caused it. And I have alerting set up so that if anyone pulls my credit or accesses anything tied to my accounts, I get an immediate notification to verify. It's a lot of work. But it's necessary.

Matthew: And I think that's advice everyone should take — if you're not actively applying for credit, freeze it. It takes a few minutes, it's reversible, and the cost of not doing it can be enormous. Your data is likely already on the dark web from some third-party breach you had nothing to do with. Freezing your credit is one of the easiest and most effective things you can do.

Olivia: Absolutely. And it's especially important going into the holiday season — the fake Amazon websites, the AI-generated scam content — it's getting increasingly convincing. Even as a security professional, it's alarming to see how polished these are now.

Matthew: Phishing emails used to be easy to spot — broken English, suspicious formatting. Now they're often indistinguishable from legitimate communications. And that's why I'm such a proponent of using AI to defend against AI. We can't keep relying on training end users to spot threats. Jane in finance is great at finance — she shouldn't have to also be a trained security professional. The goal should be to take the end user out of the security equation as much as possible.

Olivia: I think AI versus AI is genuinely where this is heading. The future of cyber defense is going to be AI systems defending against AI-powered attacks, with humans setting the parameters and monitoring. The developers who write the initial code set the AI in motion, but from there it learns its environment and adapts. That raises important questions, though — at what point does a learning AI become more capable than the humans overseeing it? And are we losing critical thinking skills along the way? ChatGPT has helped millions of people write better emails and documents, but if you never have to think through a proposal yourself, are you losing that capacity over time?

Matthew: I go back and forth on that. On one hand, I think about GPS — most people couldn't navigate with a paper map anymore. But I'd argue we haven't lost intelligence, we've just freed up cognitive resources for other things. You only have so much mental energy in a day. If AI is handling the proposal formatting while you focus on the strategic thinking that goes into it, that might actually be net positive. But I do wonder about the creativity question — especially for people in creative fields. Will heavy AI reliance diminish the creative muscle?

Olivia: That's the question I keep coming back to. For me, AI helped me create something when I couldn't get a designer, and it was useful — but it was also a negotiation. It kept going in its own direction and I kept pulling it back to what I actually wanted. It made me rephrase and rethink how I was communicating my vision, which was its own kind of exercise. So maybe it's not replacing creativity so much as forcing you to articulate it more precisely.

Matthew: That's a really interesting way to look at it. Now — let's talk about your background, because you have an extraordinary number of certifications. I think it's 83?

Olivia: Yes.

Matthew: That's remarkable. Can you walk us through your origin story and how certifications played into it? Because I know people in this field have very different views on certs versus degrees versus experience.

Olivia: When I started about 21 or 22 years ago, it was all about degrees. I got my associate's degree. Then the focus shifted — especially from HR — toward certifications. I'm not a great test taker. I know the material, but translating that to a test is a challenge for me. So I had to work hard at it. I studied hard for CompTIA Security+, all the CompTIA tracks, then SANS and others. But the honest reason I got most of them? To meet DoD requirements. My job was on the line and I needed the paper. That continued throughout my career — every new role or contract had new requirements, and I met them. Eventually I also went back and got my bachelor's degree on top of everything else.

Here's my honest take: experience often outweighs certifications, and sometimes even degrees. I've seen people with doctorates who couldn't navigate a Linux system in a real-world scenario because what they learned in school and what the job actually requires are different things. Technology moves faster than academic curricula can keep up with. That doesn't mean a degree is worthless — a computer science degree gives you a foundational understanding of how things work that can be genuinely valuable. But it's not the end of the story, and it's certainly not a prerequisite for success in this field. The Bill Gates, Steve Jobs, and Mark Zuckerbergs of the world are proof of that.

What has consistently mattered more than any credential in my career is networking. My dad always said it's not just what you know — it's who you know. When I was moving from government contract to contract, it wasn't my cert list that carried me. It was the people who knew my work, who said, "She's coming with us to the next contract." That's how careers in this field actually move. If you get invited to dinners, B-Sides events, Women in Cyber gatherings, conferences — go. Be present. Those relationships are how opportunities happen.

Matthew: That is really solid advice, and I don't think it gets nearly enough attention in tech. The culture has always been heavy on the technical side and lighter on the people skills side. Your next opportunity is as likely to come from who you know as from what's on your resume. And for younger people coming into the field — those are muscles worth building early.

Olivia: Absolutely. And I'll say that even when I was getting all the certifications, that networking thread was always running in the background. It's not one or the other — you need to be good at what you do and you need people to know it. That combination is what actually builds a career.

Matthew: Olivia, this has been an absolute blast. Thank you so much. Before we go, can you tell everybody where they can find out more about you and about Wolfbyte Technologies?

Olivia: Yes! We're doing a Wolfbyte release around Halloween — we want people to join the pack before you're bitten. That's our slogan and part of our mission: getting people together the way wolves do, to defend against the attackers out there. And you can find me on LinkedIn — just search "Meet Olivia Phillips."

Matthew: Fantastic. Olivia, until next time — this was amazing and I can't wait for our next conversation.

 

Read On

Securing Aviation, Education, and Innovation with David Mashburn


David Mashburn serves as Chief Information Security Officer at Embry-Riddle Aeronautical University

Read more
Building Resilient Security Programs Across Industries with Jess Vachon


Jess Vachon is a three time CISO, the founder of Vigilant Violet LLC, and the host of the Voices...

Read more
Security Is Everyone’s Job and Why That Matters More Than Ever with Bryan Tomczyk


Bryan Tomczyk serves as a Cybersecurity Engineer at GP Strategies Corporation, where he works...

Read more