Why Your SaaS Vendor's New AI Button May Be Your Biggest Security Risk Right Now with Fletus Poston III - Ep 208

Fletus Poston III is the Director of Security and Systems Architecture at 3D Systems Corporation, one of the world's leading additive manufacturers. In his role, he oversees the overall architecture for both the cybersecurity and IT functions, including infrastructure, cloud, and the governance models that tie them together. 3D Systems builds commercial 3D printers from the board level up, produces its own materials including plastics and metal powders, serves the personal healthcare market with surgical guides and aids, and has recently entered the DoD and aerospace sector, currently pursuing CMMC Level 2 certification. Beyond his industry role, Fletus is an adjunct professor at Appalachian State University teaching an AI business cybersecurity course, giving him a dual perspective as both a practitioner shaping enterprise architecture and an educator preparing the next generation of security professionals. 

 


Here’s a glimpse of what you’ll learn: 

 

  • Why AI governance in enterprise environments starts with knowing what SaaS products are doing with your data before the AI button even appears in the interface
  • How MCP servers, API gateways, rate limiting, tokenization, and caching are becoming the new vocabulary of AI architecture governance
  • Why machine learning and LLMs serve fundamentally different functions and why confusing the two leads to the wrong tool being applied to the wrong problem
  • How deepfake voice cloning and AI-assisted identity fraud are already being weaponized in hiring processes, wire transfers, and social engineering at scale
  • Why the energy infrastructure challenge is one of the most underappreciated constraints on AI development and what mobile and space-based data centers mean for the future
  • How to create a challenge phrase system that defeats deepfake voice and video calls in both personal and professional contexts
  • Why the acceleration of AI model capability is now measured in days rather than months and what that means for governance frameworks trying to keep pace
  • Why human safety must remain the first principle in any security architecture and why slowing down by three seconds is still some of the best advice in the field


In this episode…

Fletus opens from a vantage point that few security leaders occupy: practitioner, architect, educator, and manufacturer all at once. His perspective on AI governance is grounded not in abstract frameworks but in the operational reality of a company that builds printers, produces materials, serves surgical teams, and is now entering the DoD supply chain. In that context, the question of where your data goes when a SaaS product adds a new AI button is not theoretical. It is a compliance and contractual issue that most organizations have not yet accounted for, and Fletus is among the clearest voices in the industry on why that needs to change. He draws a direct line from API gateway concepts that security teams have understood for years to the new vocabulary of AI governance: tokenization, rate limiting, caching, MCP servers, and hybrid model decisions that now sit at the center of every serious enterprise AI conversation.

The machine learning versus LLM distinction lands differently coming from someone who has been teaching it and living it. Fletus explains that machine learning handles normalization and data analytics, the backbone of every security tool since the early days of cloud adoption. LLMs handle language temperature checks, learning the ambiguity and context of how humans actually communicate. The two are not interchangeable, and organizations that treat them as the same thing will deploy the wrong tool in the wrong place and wonder why they got the wrong result. He pairs this with a candid look at the hardware shift happening in parallel, where the processing power once requiring data centers is now sitting in M series laptops and NPU-equipped devices, bringing AI back to the edge in ways that change both capability and attack surface simultaneously.

The deepfake and identity fraud section of this episode is among the most practically urgent content on the podcast this year. Fletus walks through how North Korean actors embedded themselves in remote workforces using AI-assisted personas, how a deepfake video board meeting in Hong Kong led to a wire transfer of roughly 25 million dollars, and why current HR systems and ATS platforms have no reliable way to verify human identity in a world where AI can generate resumes, conduct screening calls, and clone voices at scale. His countermeasures are low-tech and effective: challenge phrases that no AI can answer correctly, geolocation-specific questions that test genuine local knowledge, and the simple physical act of standing up and taking a breath before responding to any urgent financial request. In a landscape of increasingly sophisticated threats, the three-second pause remains one of the most powerful defenses available.

 

Resources mentioned in this episode

 

Matthew Connor on LinkedIn
CyberLynx Website
Fletus Poston III on LinkedIn
3D Systems Corporation Website

 

Sponsor for this episode...

 

This episode is brought to you by CyberLynx. That's CyberL-Y-N-X.com.

CyberLynx is a complete technology solution provider, ensuring your business has the most reliable and professional IT service.

The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.

Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. 

To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

 

Check out previous episodes:

 

Why Fighting AI in the Classroom Is the Wrong Battle with Chris Campbell - Ep 207  

Maritime Cybersecurity, AI Governance, and the Threats No One Sees Coming with Amit Basu - Ep 206 

The Arms Race, the Energy Gap, and the Ethics of Teaching AI to Be Good with Alex Dalay - Ep 205 

 

Transcript: 

 

Fletus Poston III 

Cyber Business Podcast

Guest: Fletus Poston III

Director of Security and Systems Architecture, 3D Systems Corporation


Matthew Connor: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Fletus Poston III, Director of Security and Systems Architecture at 3D Systems Corporation. Fletus, welcome to the show.

Fletus Poston III: Excited to be here, man.

Matthew Connor: Excited to have you. Before we get too far in, a quick word from our sponsors. Hackers are getting smarter — is your security keeping up? CyberLynx delivers industry-leading, AI-powered cybersecurity solutions that detect threats in real time, so you know about an attack before the damage is done, not after. Learn more at cyberlynx.com. And now back to our show.

Fletus, for those who aren't familiar, can you tell us about 3D Systems Corporation and your role there as Director of Security and Systems Architecture?

Fletus Poston III: Sure. I'm Fletus Poston III, Director of Security and Systems Architecture. What that means is I oversee the overall architecture for both our cybersecurity and IT sides — infrastructure, cloud, and the governance models around security.

3D Systems is an additive manufacturer. We produce our own materials and build everything from the printer up. We have commercial printing, but our biggest vertical is personal health services. We have a jaw-up and jaw-down team, so we print a lot of surgical guides and aids. We also do significant industrial work — printing molds for jewelry casting, automotive applications, and so on. And we recently announced a move into DoD aerospace, where we're building out capabilities for naval and space initiatives.

Matthew Connor: I've got a bunch of questions. When you say "from the printer up" — you actually build the printers?

Fletus Poston III: We build our printers from the boards up. That's the manufacturing side. We produce commercial 3D printers from the hardware level — we lay down the firmware, build the software, assemble the actual printer — all the way through to the materials inside them: plastics, pellets, metal powders, and so on.

Matthew Connor: You guys aren't playing around. That's serious work. And the DoD space — that's an interesting addition. When did that start?

Fletus Poston III: We've been in a pilot program for the last couple of years, building out models and doing RFPs across different vectors in the aerospace and DoD world.

Matthew Connor: So CMMC affects you then?

Fletus Poston III: Correct. We have our Level 1 and we are actively working on Level 2 right now. Our CISO and I have been heads down getting prepped for that, working with our auditor for our C3PAO engagement.

Matthew Connor: Congratulations — that's no small feat even just getting to this point. And C3PAO always sounds like C-3PO to me. Nice accidental Star Wars reference.

Fletus Poston III: Ha — I didn't even catch it. There you go.

Matthew Connor: Very nice. Now, you also do some board and chip manufacturing, right?

Fletus Poston III: We produce our own boards in-house. We lay down either Windows IoT or Debian-based Linux on top of them, so we handle the hardware and software firmware all the way up the stack.

Matthew Connor: That's super cool. You're not just a manufacturer — you're the manufacturer's manufacturer. Impressive. So, you can't have a conversation these days without talking about AI. How does it affect what you're doing at 3D Systems?

Fletus Poston III: I'll pull back a bit and give you a little more context on myself first. I'm also an adjunct professor teaching an AI, business, and cybersecurity class at Appalachian State here in North Carolina — up in the mountains. What I've been working to do both with my students and at 3D Systems is move past conversational AI. Everyone defaults to "let's just send a prompt." What I'm teaching and applying is how to augment, how to get to autonomous workflows, how to build agents, and how to think about the future of machine learning.

We've had machine learning in our security tools since roughly 2012 to 2015 — it really took off as cloud adoption grew and SaaS became the dominant delivery model. At 3D Systems, we have highly repetitive processes that are well-suited for MCP — Model Context Protocol — implementations. We're looking at how we can leverage the various copilots and AI productivity tools available: whether it's helping with business services, translations, master data management, or understanding where our data lives.

The biggest questions we have to answer are: what model do we build, where do we host it, and who do we partner with? On the security architecture side — which I teach and talk about constantly — the critical questions are: where is my data, who has my data, and when I interact with a SaaS provider's chatbot, what LLM is actually running under the hood?

You come in on a Monday morning, open your SaaS product, and there's a new "Summarize" button that wasn't there last week. Did your legal team know about that? Did compliance sign off? Those are the third- and fourth-party risk conversations I'm having constantly as we continue expanding our SaaS footprint.

The bigger concept I want people to understand is what an AI API gateway actually looks like. When I send a prompt, where does it go? How do I monitor tokenization? How do I control spend? Do I rate-limit my AI usage? Do I cache? Do I use an MCP server or not? Do I use a local model, a hybrid model? These are questions that flow through my head from a research, application, architecture, and governance standpoint every day. It's never dull.
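The gateway questions Fletus lists (where the prompt goes, rate limiting, caching, token spend) can be made concrete with a small sketch. This is an illustrative toy, not any vendor's API: the class name, the per-minute limit, and the four-characters-per-token estimate are all assumptions for demonstration.

```python
import time
import hashlib


class AIGatewaySketch:
    """Toy AI API gateway: one choke point for rate limiting,
    response caching, and rough token accounting."""

    def __init__(self, backend, max_requests_per_min=60):
        self.backend = backend            # callable: prompt -> response text
        self.max_requests = max_requests_per_min
        self.window_start = time.monotonic()
        self.request_count = 0
        self.cache = {}                   # prompt hash -> cached response
        self.tokens_used = 0

    @staticmethod
    def estimate_tokens(text):
        # Crude heuristic: roughly 4 characters per token for English text.
        return max(1, len(text) // 4)

    def send(self, prompt):
        # Rate limiting: reset the request window every 60 seconds.
        now = time.monotonic()
        if now - self.window_start >= 60:
            self.window_start, self.request_count = now, 0
        if self.request_count >= self.max_requests:
            raise RuntimeError("rate limit exceeded")

        # Caching: an identical prompt never leaves the gateway twice.
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:
            return self.cache[key]

        self.request_count += 1
        response = self.backend(prompt)

        # Token accounting feeds spend monitoring.
        self.tokens_used += self.estimate_tokens(prompt) + self.estimate_tokens(response)
        self.cache[key] = response
        return response
```

In a real deployment the backend call would go to a hosted or local model and the counters would feed spend dashboards and audit logs, but the control points are the same: a single place where every prompt can be limited, cached, and accounted for.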

Personally, I use a range of models — including a local model on my M-series Mac — because running locally lets me control the attack surface and know exactly what data I'm interacting with.

Matthew Connor: That's exactly the right way to be thinking about it. It's not as simple as "do we use Claude or Gemini?" That's just one of many questions. The data governance piece is enormous and can't be an afterthought. Where do you see this heading for 3D Systems specifically? Are you moving toward more local deployment as hardware becomes more capable?

Fletus Poston III: That's the key question. We're in manufacturing, so we have an R&D arm with teams that are eager to move fast in whatever direction leadership points. The real decisions are: is this the right time to invest in GPU-heavy devices? Do we build our own MCP infrastructure or host someone else's? Do we adopt a shared responsibility model? Do we just enable all available MCPs and let governance grow with us?

And Moore's Law as we knew it no longer applies to AI — the cycle is now days, not months. What took minutes or hours in January is now happening in seconds. That acceleration is both exciting and terrifying. My bigger concern is: how quickly can I put a human back in the loop? When you give agents autonomy, we've seen what can happen — the incidents with OpenAI's systems and other models acting outside expected parameters are instructive. Every day there's a new term: prompt injection, poisoned models, rogue agents. Even consumer subscriptions reflect this — the cost of AI services has jumped significantly in the past year because AI is no longer a buzzword, it's infrastructure, and organizations are now absorbing the real cost of that.

I was talking with some friends at dinner last night, and I believe we'll start paying for AI energy consumption the same way we pay for data center compute — because the infrastructure footprint is exploding. Oracle just announced headcount reductions to reallocate spend toward data center buildout. A couple of years ago, if you told me we'd be building more data centers, I would have laughed — we went all-in on virtualization, containerization, and cloud. Now we're building physical data centers in locations chosen purely for cheaper energy, including underwater deployments for cooling efficiency. That brings real emissions and grid stability concerns. I spent time earlier in my career at a regulated utility, and we talked about grid modernization constantly — the U.S. grid's stability is a genuine concern, which is part of why the government has invested heavily in hardening it after incidents in Texas, the Midwest, California, and here in the Carolinas.

Matthew Connor: And space-based data centers are an interesting frontier too — great cooling, great energy once you're up there, and no terrestrial grid dependency.

Fletus Poston III: Absolutely. And there are also mobile data centers now — fully pre-built racks on wheels that you can load onto a truck and physically move from California to Texas to Chicago. The model moves with you from site to site without introducing network latency or data-in-transit risks. And that transit risk is real — data encrypted at rest on a server may not be encrypted in transit. Traffic at the router level is often in the clear unless you've implemented end-to-end encryption or mutual TLS. When your AI agents are communicating on your behalf across the internet, the data leakage exposure from routable LLMs is something most organizations haven't fully thought through.
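The in-transit point is worth pinning down: encryption at rest on a server says nothing about the hop across the network, and ordinary TLS only authenticates the server. Below is a minimal sketch of a client context that supports mutual TLS, using Python's standard `ssl` module. The certificate file paths are placeholders, and whether agent traffic actually uses mTLS depends entirely on the deployment.

```python
import ssl


def build_client_context(ca_file=None, client_cert=None, client_key=None):
    """TLS client context that always verifies the server; mTLS if a
    client certificate is supplied."""
    # Ordinary TLS: verify the server certificate against trusted CAs
    # (the system default bundle if ca_file is None).
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED

    # Mutual TLS: the client presents its own certificate as well, so the
    # server can authenticate the agent, not just the other way around.
    if client_cert is not None:
        context.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return context
```

Wrapping every outbound agent connection in a context like this is one way to close the gap Fletus describes between "encrypted at rest" and "in the clear at the router."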

Matthew Connor: A hundred percent. And on the energy question — it's always struck me as strange that nuclear hasn't been pushed harder. It's clean, safe, and abundant. The regulatory timeline has historically been the killer — a decade to get approved, another decade to build. But we now have much smaller, more portable reactors that can power a data center without a massive facility footprint. I don't understand why that isn't being fast-tracked, because at some point the energy demand from AI infrastructure is going to hit hard — especially when our competitors have significantly more generating capacity than we do.

Fletus Poston III: You're right, and the regulatory burden has historically been the bottleneck. But the technology is genuinely different now — smaller footprint, safer design. At its core, power generation is just a turbine: the question is what fuel you use. Nuclear, hydro, fossil, renewables, biomass — different countries use different sources. The infrastructure buildout will force a reckoning on that. And the underwater and space deployments aren't science fiction — large data centers are already operating underwater to manage heat and satisfy environmental regulations around thermal discharge. We solved similar problems with nuclear plant cooling decades ago. So between space cooling, underwater deployments, and next-generation reactor designs, there are real paths forward. The urgency just needs to match the scale of the problem.

Matthew Connor: Now, you mentioned machine learning earlier, which I want to dig into — because most people were oblivious to AI until ChatGPT arrived and suddenly everyone knew what an LLM was. But machine learning has been embedded in security tools for years, and products like Darktrace use it really effectively. It's far superior to just bolting an LLM onto a traditional security product, which introduces new attack surfaces like prompt injection. What's your take on matching the right AI model to the right job?

Fletus Poston III: We keep using "AI" as a catch-all, but the field has been around for decades. The intelligence part has always been the question — that's where heuristics, machine learning, and behavioral baselining came in. Machine learning is fundamentally about data analytics and normalization. Without it, we couldn't run R, Python, or visualization tooling at scale. It churns through data to normalize it — that's the core value.

LLMs operate differently. They're doing temperature checks, learning to interpret human language in all its ambiguity. And human language is genuinely hard. Take the homophones "plane," "plain," and "plains": are you talking about the aircraft, a plain bagel with no flavor, or the plains where buffalo graze? Spoken aloud, all three sound identical, and machine learning won't disambiguate them. LLMs are trying to resolve that through pattern recognition in the token structure, understanding context from the surrounding language. That's a fundamentally different problem.

And as we move toward voice interfaces — more and more people speak to models rather than type — that gap matters even more. Voice to text has to interpret accent, cadence, regional patterns. I'm Southern, I draw out my vowels, I slow down my consonants. A model has to learn how I speak. That's why NPUs — neural processing units — are now built into chipsets alongside CPUs and GPUs. Hardware has come back to the forefront because local inference is increasingly viable and necessary. These M-series laptops are effectively supercomputers. Asus is shipping GPU-based machines that qualify as supercomputers. Dell and Lenovo aren't far behind. All the hardware manufacturers figured out they had to play in this space and play fast. The rack has to be small form factor now, stackable, mobile — whether it's on wheels, on a train, or on a shuttle.

Matthew Connor: Exactly — we went from desktop to cloud and now we're coming back local. And for good reason. When AI processing happens on your device, it's faster, safer, and more private. You don't have data traveling to a remote data center and back. You can't fully verify how a cloud provider is isolating your data. And the raw compute in a current iPhone is staggering — the processing power that used to require a mainframe is now in your pocket.

Fletus Poston III: Right, and to put that in perspective: the Apollo launch sequences ran on less code than most basic apps we use today. Your Notepad app has more lines of code than the first shuttle launch. We're talking maybe 32 kilobytes — you could print it on a few sheets of paper. That's how far we've come.

Matthew Connor: Which opens up incredible new possibilities. I look forward to the day when Siri can locally monitor a call in real time and tell grandpa that the person claiming to be Microsoft support is actually a scammer — all processed on-device, nothing leaving the phone.

Fletus Poston III: That's exactly where it's heading. It'll screen the call, take the first ten to fifteen seconds, analyze the voice against known patterns — because it's already learned the voice tendencies of people you actually talk to — and flag anomalies. Which ties directly into the deepfake and voice cloning problem. I actually did a fully remote job a few years back and didn't meet my CISO in person until almost two years in. I remember asking him: how did you know I was actually the person you interviewed? For the first few weeks I didn't even turn my camera on.

And that's not hypothetical anymore. We saw nation-state actors — North Korea specifically — get hired at U.S. companies, have laptops shipped to them, sit in call centers, and use freelancing platforms to farm out the actual work. Every morning their AI would summarize the day's calls and extract personal details — "Matt mentioned he drinks black coffee" — so the next check-in would include a reference to it, building false rapport. That's social engineering at scale, automated.

And our HR systems aren't built to catch this. Applicant tracking systems are getting gamed on both ends — AI-generated resumes being screened by AI agents, with no human in the loop at any point. The first "phone screen" may have been a bot asking preset questions. That's where we're heading, and I'd argue we should be — those repetitive screening tasks shouldn't require a human anyway. The goal is to free people up for the work that actually requires human judgment.

Matthew Connor: That's such a good point. The technology is taking the tasks that humans shouldn't have been doing manually in the first place. But the deepfake side creates real exposure. The video of someone — the unauthorized use of a person's likeness — that's an area where we need both better detection and better legal frameworks. We're seeing entire communities of kids generating deepfakes of classmates, and legally almost nothing happens because they're minors. That's doing real, lasting damage to real people.

Fletus Poston III: Absolutely. I have a friend who's a deputy CISO at one of the identity providers used by TSA and CLEAR. They've had to build deepfake detection into their verification workflows — analyzing light angles, facial geometry patterns, physical consistency across frames. Perry Carpenter has done great public work on this if people want to dig in — he did a demonstration where he stood on a desk chair on video and the deepfake never broke character, never showed the distortions you used to be able to catch by putting your hand in front of your face. And this was with inexpensive consumer tools.

The practical countermeasures are things like geolocation-based knowledge checks. If someone claims to be in Austin, I'll ask something specific about Austin. I know whether the restaurant exists, who the owner is, what their specials are — because I was just there. If you live a block from it, you should know it. It works the same way for video interviews: you can catch someone who's not where they claim to be with a few casual local knowledge questions that an AI agent answering on someone's behalf will get wrong.

The challenge phrase is another strong countermeasure for personal use. I tell my family: pick a question whose answer is something a machine would never guess. The question could be "where's my car parked?" The answer isn't "in the driveway" — it's "purple dinosaur, twelve forty-two." An AI is going to give you a reasonable answer to that question. It won't give you that one. If whoever is calling you can't produce that phrase, it's not who they say they are.

Matthew Connor: That's brilliant — it's literally the challenge password concept from centuries of cryptography applied to the modern deepfake problem. Simple, undefeatable, no technology required.

The same principle applies to business email compromise. How many billions were lost before organizations adopted the simple rule of never wiring money without a voice confirmation? Like the Hong Kong case where a finance team wired roughly $25 million after a deepfake Zoom call where the entire "board" was fabricated. The fix was always simple: just call the person directly on a known number.

Fletus Poston III: Exactly. And from a social engineering standpoint, I always tell people: when you get an urgent request — especially one that feels just slightly off — stand up, take a lap around your desk, get a sip of water. That physical context switch breaks the emotional urgency that attackers are deliberately triggering. It resets you enough to ask: Matt has never asked me to do this before. Why now? Why me instead of Sally? Let me just ping Sally. Let me actually call my boss directly.

I've used phishing simulation tools and have called my boss mid-exercise saying, "I need you to know — if I didn't know exactly what this was, I would have fallen for it." It's that convincing. And attackers know to target privileged users, and they know to time attacks around holidays when people are distracted and coverage is thin. So I tell people: it's 2026, everyone says go faster, do more with less. I'm giving you permission to spend three seconds instead of one. Take a breath. That's it.

Matthew Connor: Wise words. Simple, practical, and it works. Fletus, I can't thank you enough for coming on today. This has been a fantastic conversation. Before we go, can you tell everyone where they can find out more about you and 3D Systems?

Fletus Poston III: Sure. Personally — shameless plug — we have our Cyber Summit coming up next week at Appalachian State. There's a Women in Tech dinner and then the main Cyber Summit, which is a fundraising event with all proceeds going to student scholarships. Mark Burner from Lowe's is our keynote, and Lowe's is a premier platinum sponsor. If you're in the Virginia, Tennessee, or Carolinas area, come check it out.

I'm active in the Charlotte community, so you'll run into me there. I have a YouTube channel where I post commentary on topics like what we covered today, and I'm active on LinkedIn with a lot of my writing. Just search for Fletus Poston and you'll find me.

On the 3D Systems side — our biggest focus areas are personal healthcare and industrial applications, but aerospace is an exciting and growing part of the business. We're always out at summits and conferences, and we're always looking for talent. If commercial 3D printing, bioengineering, or aerospace interest you, check out our careers page.

And at the end of the day, whatever we build and whatever technology we deploy, it comes back to the person. Safety first, always. Whether you hold a security certification or you're just trying to navigate the world, it's human safety first and technology second. Slow down, think, stay safe, and stay secure.

Matthew Connor: Well said. Thanks, Fletus, and until next time — take care.

 
