Cyber Business Podcast

Why Machine Learning Is the Unsung Hero of the AI Era with Ben Wilcox - Ep 212

Written by Matthew Connor | May 6, 2026 4:47:57 AM

Ben Wilcox is the CTO and CISO at ProArch, a Microsoft partner organization with 20 years of experience helping companies across healthcare, manufacturing, energy, and independent software development take data to intelligence. In his dual role, Ben leads both the technology and security functions, working directly with clients to build AI-ready foundations, deploy agentic workflows, and secure the environments that support them. He brings hands-on experience across operational technology, data architecture, and enterprise security that spans hundreds of client engagements.

 



Here’s a glimpse of what you’ll learn: 

 

  • Why data quality is the foundational problem that determines whether AI agents succeed or create chaos
  • How ProArch helped a critical infrastructure client turn a six-figure reactive problem into millions in first-year ROI using machine learning
  • Why machine learning is the unsung hero of the AI era, especially in security, and how it differs from simply bolting an LLM onto a legacy product
  • How voice cloning and AI-powered social engineering are changing the threat landscape and what the defensive response looks like
  • Why tabletop exercises need to include finance, not just the security team, and how they surface controls gaps that technology alone cannot fix
  • Why tool sprawl and legacy systems that do not talk to each other will be the critical failure point as AI-powered attacks accelerate
  • What zero trust looks like when agents are part of the workforce and why identity and accountability have to extend to non-human identities

 

In this episode…

Ben opens by drawing a line that cuts through a lot of the noise around AI adoption: your agents can only act on the quality of data they have access to, and if that data foundation is broken, the consequences of agentic AI are not just poor reports but bad actions taken at machine speed with no human in the loop to catch them. That framing sets the tone for the whole conversation. Before any organization can meaningfully benefit from AI agents, it needs asset visibility, data governance, and identity accountability extended to non-human systems. Ben is direct that most organizations are not there yet, and that the governance frameworks needed to treat agents like accountable digital employees are still being built in real time across the industry.

The critical infrastructure case study Ben walks through is one of the most concrete AI ROI stories the podcast has featured. A piece of equipment roughly 15 years old was generating six-figure outages with no predictive visibility, just a reactive cycle that could stretch beyond 72 hours depending on parts availability. ProArch pulled time series data from the client's historian system, mapped the operational curve for normal equipment behavior, identified the anomaly signatures that preceded every past failure, and built a model that could predict future failures before they happened. The result was the ability to either intervene proactively or schedule planned outages before unplanned ones could occur. The project cost just over six figures. The first-year ROI was five to six million dollars. Ben uses this example not just to illustrate what machine learning can do in an operational context, but to make a broader point: the organizations hearing that AI ROI is not there are often looking at the wrong kind of AI for the wrong kind of problem.

That brings him to what he calls the unsung hero of the current moment: machine learning as distinct from LLMs, and why the security industry in particular needs to be thinking about the two differently. Bolting a large language model onto a legacy email security product does not make it smarter; it opens a prompt injection surface that nobody has fully solved. Machine learning that understands user behavior, writing patterns, access norms, and anomalous traffic is what actually changes the defensive equation. Ben is equally direct about where LLMs do belong: surfacing insights in language a non-technical user can act on, finding decade-old vulnerabilities in codebases in minutes, and helping close the gap between how fast attackers are moving and how fast security teams can review and respond. His closing advice on breach preparedness follows the same logic: get finance in the tabletop, understand what controls you would have done differently, and do not wait for something expensive before you ask whether a passkey would have stopped it.

 

Resources mentioned in this episode

 

Matthew Connor on LinkedIn
CyberLynx Website
Ben Wilcox on LinkedIn
ProArch Website

 

Sponsor for this episode...

 

This episode is brought to you by CyberLynx.com  

CyberL-Y-N-X.com.

CyberLynx is a complete technology solution provider, ensuring your business has the most reliable and professional IT service.

The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.

Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. 

To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

 

Check out previous episodes:

 

Defending Critical Infrastructure in the Age of AI Attacks with Sean Murphy - Ep 211   

Why Insecure AI Is Just as Dangerous as No AI with Shannon Brewster - Ep 210   

Three Weeks to 45 Minutes: What Real AI Adoption Looks Like in Insurance with Barninder Khurana - Ep 209  

 

Transcript: 

 

Ben Wilcox

Cyber Business Podcast

Guest: Ben Wilcox, CTO & CISO

ProArch

Matthew Connor: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Ben Wilcox, CTO and CISO at ProArch. Ben, welcome to the show.

Ben Wilcox: Thank you for having me, Matthew.

Matthew Connor: Thanks for being on. Before we get too far in, a quick word from our sponsors. Hackers are getting smarter — is your security keeping up? CyberLynx sells industry-leading, AI-powered cybersecurity solutions that detect threats in real time, so you know about an attack before the damage is done, not after. Learn more at cyberlynx.com. And now back to our show.

Ben, for those who aren't familiar, can you tell us about ProArch and your roles there as CTO and CISO?

Ben Wilcox: Sure. ProArch is a Microsoft partner operating within the Microsoft ecosystem, and our goal is to help organizations take data to intelligence. That means different things depending on where a business is in its journey. We might come in at the infrastructure layer, work through the data side, build out analytic reports, and then — where everyone's conversation is heading these days — build AI agents.

Matthew Connor: You said everybody's favorite words. AI agents are absolutely top of mind right now. Let's dig into that — what does ProArch actually do with AI agents, and how are you helping customers implement them?

Ben Wilcox: Most organizations we work with have foundational problems, and those problems typically stem from data quality. Your AI agents can only act on good data — otherwise you get strange, unexpected outcomes. The same principle that applies to traditional reporting is true with agents, but the stakes are exponentially higher because agents take actions. You don't want automated actions being taken on bad data. An agent doesn't have the human judgment to look at something and say "that doesn't seem right, I'll do something different" — it just executes.

We also see significant challenges around identity and governance. Agents need to be treated like human coworkers in many ways — they all need identity, accountability, and a clear understanding of what actions they're permitted to take. That governance framework is largely missing in most organizations today. And frankly, in the cybersecurity world it's still quite new — agents weren't a real conversation more than nine months ago. They were a concept, something coming in the future. Now we're actually seeing them in day-to-day workplaces.

Wherever an organization is in that journey, we step in, help fix the foundational problems, get them to the next stage, and ultimately build a scalable platform. That's where the real value is.

Matthew Connor: Walk me through a concrete example. Who is an ideal client, and what does a success story look like?

Ben Wilcox: I'll give you one from the critical infrastructure space. Critical infrastructure falls under significant compliance obligations, lots of sensitivity around how data moves, and it's traditionally on-premises — cloud usage is limited or different from what you'd see in other sectors.

We started with a secure foundation: complete asset visibility. We needed to understand every asset in the environment before we could do anything meaningful with the data. We used tools like Microsoft Defender for IoT, which can scan networks through both passive and active methods, combined with manual discovery — in operational technology environments, you often literally have to walk the floor. We have consultants who know that space well.

Once we established solid visibility, got the right tools in place for ongoing monitoring, and built a layer of trust with the client — which is essential in any major transformation — we asked: what's your most pressing business problem?

Their answer: they had a piece of equipment about fifteen years old. When it failed, it caused a six-figure outage. And they were completely reactive — they'd only know the root cause 72 hours after the fact, sometimes longer if a part had to be sourced.

So we said, let's take the operational data from that equipment and understand how it behaves. They already had all this information in what they called a historian — a system of record where time-series sensor data from across the plant was being continuously logged. We went back through historical outage records, started identifying the anomalous patterns that preceded each failure, and built what we call an operational curve — a baseline of normal behavior and a signature of the conditions that lead to failure.

From there, we could predict every future failure before it happened. The next step was the agentic piece: now that we can see it coming, how do we assist the operator proactively? The system notifies the operator and, based on the organization's own knowledge base, the agent surfaces the typical remediation steps. The operator can then act on the equipment before the failure occurs — or, if they have enough lead time, schedule a controlled outage rather than suffer an unplanned one.

That project was a little over six figures in cost. Their ROI in the first year was five to six million dollars.
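The approach Ben describes — learn an "operational curve" of normal behavior from historian time-series data, then flag readings that drift outside it before a failure — can be sketched very roughly as below. This is a minimal illustration with made-up sensor numbers and an assumed z-score threshold, not ProArch's actual model.

```python
# Hypothetical sketch: baseline normal equipment behavior from historian
# time-series data, then flag anomalous readings that precede failures.
# All numbers and the k=3 threshold are illustrative assumptions.
from statistics import mean, stdev

def learn_baseline(healthy_readings):
    """Summarize normal behavior as a mean and standard deviation."""
    return mean(healthy_readings), stdev(healthy_readings)

def anomaly_indices(readings, baseline, k=3.0):
    """Return indices whose z-score exceeds k: candidate failure precursors."""
    mu, sigma = baseline
    return [i for i, x in enumerate(readings) if abs(x - mu) / sigma > k]

# Healthy sensor readings pulled from the historian (made-up values).
healthy = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
baseline = learn_baseline(healthy)

# A new window where the last readings drift upward, the kind of
# signature that preceded past outages.
window = [10.1, 9.9, 10.0, 11.9, 12.4]
flagged = anomaly_indices(window, baseline)
```

In a real deployment the baseline would be multivariate and seasonal, but the shape of the idea is the same: model normal, then alert on the signature that historically preceded failure, early enough to schedule a planned outage.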

Matthew Connor: That is fantastic — and a really important story, because you constantly hear that AI ROI just isn't there. Here's a concrete example of the right kind of AI delivering a massive return. And I think it helps business owners and technology leaders shift their thinking away from the shiny LLM everyone's focused on and toward machine learning, which has been the unsung hero for years. Companies like Darktrace have been doing this in security for ten to fifteen years. It wasn't in the cultural zeitgeist the way AI is now post-ChatGPT, but the technology has been delivering real value all along. The same principles apply — you can't just bolt an LLM onto an email security product and call it AI-powered. Now you've opened yourself to prompt injection, which we haven't solved yet. Machine learning in the right use case is genuinely the right tool for the right job. Where do you see that going?

Ben Wilcox: I completely agree on the machine learning point. And going back to the email example — what machine learning does really well is build a behavioral baseline. Is this a normal type of communication from this sender to this recipient? Is this a typical URL for that user's role? Once you've built those segments, you can surface anomalies in a meaningful way. And that's where an LLM actually does add value: when something looks suspicious, the LLM can communicate that to the user in plain business language — not tech speak — in a way Joey in accounting actually understands. "Hey, this one looks a little strange, you might want to run it past your SOC before clicking." You're using each technology where it belongs. Machine learning for detection and baselining, LLMs for communication and context.
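The behavioral-baseline idea Ben outlines can be sketched as follows: count which sender-to-recipient pairs are routine, then flag mail whose pair is rare or unseen. The names and the `min_seen` cutoff are illustrative assumptions, not any vendor's implementation.

```python
# Illustrative sketch of behavioral baselining for email security:
# learn which sender -> recipient pairs are normal, then flag pairs
# that are rarely or never seen. The threshold is an assumption.
from collections import Counter

def build_baseline(mail_log):
    """Count historical sender -> recipient pairs."""
    return Counter((sender, recipient) for sender, recipient in mail_log)

def is_anomalous(baseline, sender, recipient, min_seen=2):
    """Flag a pair we have rarely or never observed before."""
    return baseline[(sender, recipient)] < min_seen

# Made-up history: routine CFO -> controller traffic plus vendor invoices.
history = [("cfo", "controller")] * 40 + [("vendor", "ap")] * 12
baseline = build_baseline(history)

routine = is_anomalous(baseline, "cfo", "controller")   # routine traffic
suspect = is_anomalous(baseline, "ceo-spoof", "ap")     # never-seen sender
```

The division of labor Ben describes would then hand the `suspect` flag to an LLM to phrase the warning in plain business language for the recipient.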

Matthew Connor: Exactly. And the security use case goes deeper than email. Machine learning can understand how a user normally writes, when they access data, what types of data they typically touch — and then flag when something breaks that pattern. That's a perfect use case. And then you've got something like Claude's Metapath capability, where LLMs are being used to find vulnerabilities that humans have missed for decades, discovering them in minutes. I was just talking to a guest last week about how AI is being used by threat actors to find and exploit zero-days at a pace that's almost impossible to keep up with. And literally the next day, Anthropic announced they were working with Google and Microsoft specifically to use that same capability to find and patch vulnerabilities before releasing it more broadly. We live in genuinely exciting times. What's your take on where that takes us?

Ben Wilcox: I'm super excited and super frightened at the same time — and I think both are appropriate responses. AI is advancing so fast that it's genuinely hard to keep up. As a concrete example: Microsoft just released a couple of LLMs that can clone a voice in a couple of seconds with near-perfect quality, producing a 60-second audio clip in about one second. My first thought was threat actors. Last year I spent an hour or two creating a decent voice clone of myself. Now that capability is essentially a commodity. That's terrifying from a social engineering standpoint, but also clearly has legitimate productivity uses — better transcription, accessibility, and so on.

On the Metapath side specifically, I think it's going to catalyze a whole new wave of application modernization. All that legacy technical debt we've accumulated? This technology is going to surface vulnerabilities in it that have been sitting there undiscovered. I read about an example where it successfully chained multiple low-severity vulnerabilities into a critical one — that's essentially the holy grail of penetration testing. The speed and accuracy at which it did it was remarkable.

The challenge is that this capability will trickle down to threat actors. If you look at the innovation curve, security has historically trailed by six to nine months, sometimes more. AI is widening that gap across multiple dimensions simultaneously — from code review to vulnerability discovery to social engineering. We need AI on the security side more urgently than ever, and I think Anthropic has recognized that opportunity clearly.

Matthew Connor: And that's really where this becomes a strategic imperative. AI writing code is great, but AI writing secure code is where we need to get to — and we're getting closer. I'm an optimist. I think the good guys win this, but only if we embrace AI on the defensive side before the gap widens further. The arms race is real, and the layered security approach is more important than ever. When your EDR gets taken down by a signed driver exploit — which we're already seeing — what's your next layer? You need AI watching the network, watching the endpoints, seeing that traffic that shouldn't be happening and flagging it before the damage is done. Traditional tools alone can't operate at the speed this requires anymore.

Ben Wilcox: One hundred percent. And the visibility piece is foundational to all of it. Network, endpoints, identities — and increasingly, SaaS platforms. Think back to the Salesforce situation last summer: threat actors were pulling data out of Salesforce and many corporate customers didn't even have access to the logs that would have shown them it was happening. The visibility wasn't there because people hadn't thought to look for it. Now you have to assume any cloud platform, any SaaS product, any part of your environment could be a vector. Zero trust, AI-assisted monitoring, and visibility across all of it — that's not optional anymore.

And you're right about the "when, not if" framing. The best posture now is preparation. Tabletop exercises, executive buy-in, documented incident response processes — go through a ransomware scenario, go through a business email compromise scenario. BEC is still the number one thing most organizations are facing. What are your internal finance processes? What controls does your finance team already have in place that you as a security leader may not even know about? Get everyone in the room, walk through the scenarios, document what you'd do, and use that as an opportunity to identify where a process change, a better tool, or even just education could eliminate the risk entirely. Not all of it has to be expensive — sometimes it's moving your finance team to passkeys instead of just MFA, or putting a simple verification step in place for wire transfers. Find the easy wins alongside the big architectural investments.

Matthew Connor: That is fantastic advice. Ben, before we wrap up — tell us a little more about ProArch and who the ideal client is.

Ben Wilcox: We focus on four main segments: healthcare, manufacturing, energy, and ISVs — independent software vendors, companies that build products. The reason we love these spaces is that they all have significant opportunities for better data management, stronger security, and meaningful business outcomes. Energy and manufacturing share a lot of operational technology overlap, so our experience translates well across both. Healthcare has enormous data complexity and compliance requirements that we're well-suited to address. And ISVs — we have a heritage in product development, building complex cloud-based applications. That's how ProArch started. We've been around 20 years and have worked with hundreds of organizations across these verticals, across everything from operational technology to AI implementations.

Matthew Connor: I love it. And I think what you're doing is exactly what people need to hear — concrete, ROI-backed use cases that demonstrate AI being applied correctly, not the hype. The right tool for the right job. Ben, before we go, can you tell everyone where they can find out more about you and ProArch?

Ben Wilcox: For ProArch, it's proarch.com — there's an active blog section there that I contribute to regularly, and the team posts on a monthly basis as well. On LinkedIn, you can find me at Ben-Wilcox — I typically post three or four times a week. Happy to connect and engage with anyone who wants to dig into any of these topics further.

Matthew Connor: Fantastic. Ben, thanks so much for coming on. Until next time.

Ben Wilcox: Thank you, Matt. Appreciate it.