Securing AI, Data, and Infrastructure at Government Scale with Steve Orrin
Steve Orrin serves as Chief Technology Officer and Senior Principal Engineer at Intel Federal, where he operates at the intersection of advanced computing, cybersecurity, and national security missions. In his role, Steve works closely with U.S. federal agencies and the Defense Industrial Base to translate mission requirements into hardware, firmware, and software capabilities that can operate at massive scale and under elevated security demands. He also feeds those real world requirements back into Intel’s product and research teams, helping shape future platforms that support government, critical infrastructure, and highly regulated industries. His background places him in a unique position to explain how technologies pioneered for government use often become the next standards adopted across the commercial sector.
Here’s a glimpse of what you’ll learn:
- Why federal government requirements often predict future commercial security standards
- How AI and cybersecurity must be addressed across the full lifecycle
- Where AI delivers real value in security operations versus where expectations fall short
- What confidential computing solves and why data in use is the next security frontier
- How post quantum cryptography timelines are being driven by government mandates
- Why hardware based security controls matter for cloud, edge, and mission systems
- How memory safe technologies can eliminate entire classes of cyber attacks
In this episode…
Steve explains his role at Intel Federal as a three part function. He helps government agencies adopt the right technologies for their missions, translates those requirements back to Intel’s internal product and engineering teams, and supports innovation where standard commercial solutions do not fully meet government needs. This two way translation ensures that future platforms align with real world mission and security demands.
The discussion moves into AI and cybersecurity, which Steve frames across three dimensions. Organizations must secure AI systems themselves, use AI responsibly to improve cybersecurity operations, and defend against adversaries that are also leveraging AI. He emphasizes that AI cannot be treated like traditional software. It requires governance, validation, and continuous monitoring across data sourcing, training, tuning, and deployment.
Steve outlines where AI is delivering tangible value today. Rather than detecting entirely new threats in isolation, AI excels at automating repetitive, high volume security tasks. By reducing the operational burden of routine alerts, patching, and triage, AI allows security teams to focus their expertise on higher impact risks and emerging threats.
A key segment of the conversation focuses on confidential computing. Steve explains how protecting data in use closes a long standing security gap that encryption at rest and in transit cannot address. Through trusted execution environments, memory encryption, isolation, and attestation, organizations can protect sensitive workloads even from compromised operating systems or untrusted cloud environments. This capability is especially relevant for AI models, intellectual property, and mission critical workloads deployed across cloud, edge, and disconnected environments.
The episode concludes with a forward looking discussion on post quantum cryptography and secure mission platforms. Steve explains why the threat is not limited to future quantum computers, but to data being harvested and stored today for later decryption. Government driven timelines are accelerating adoption, and commercial industries will benefit from following the same path as compliant products become broadly available.
Resources mentioned in this episode
Matthew Connor on LinkedIn
CyberLynx Website
Steve Orrin on LinkedIn
Intel Corporation Website
Sponsor for this episode...
This episode is brought to you by CyberLynx.com
CyberL-Y-N-X.com.
CyberLynx is a complete technology solution provider to ensure your business has the most reliable and professional IT service.
The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.
Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensures that clients are 100% satisfied.
To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.
Check out other related episodes:
Why Collaboration Beats Competition in Cybersecurity with William Curtiss of Evanta
Inside AeroVironment: Managing Global Cybersecurity for Uncrewed Defense Systems
Securing Mortgage Data in a 50-State Compliance Maze with Rohbair Jean
Transcript:
Cyber Business Podcast – Steve Orrin, CTO & Senior Principal Engineer at Intel Federal
Matthew: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Steve Orrin, Chief Technical Officer and Senior Principal Engineer at Intel Federal. Steve, welcome to the show.
Steve: Thanks for having me today, Matthew.
Matthew: Thanks for joining us. Before we get too far in, a quick word from our sponsors.
[SPONSOR READ: This episode is brought to you by CyberLynx.com. Do you know if a hacker is in your system? Most people and most companies don't — until it's too late and the hacker has already done damage. A hacker's job is to bypass your security, so companies need a way of knowing when someone has gotten past their defenses. That's where CyberLynx comes in. We've partnered with the best cybersecurity companies in the world to provide our clients with the best solutions at the best prices — whether it's managed SIEM, SOC, EDR, MDR, or XDR. We'll help you find the right solution at the right price. Find out more at CyberLynx.com.]
And now back to our show. Steve, what's it like being the CTO and Senior Principal Engineer at Intel Federal?
Steve: It's a fascinating role — it really has three parts. A large part of the job is working with the federal government and the broader ecosystem, including the Defense Industrial Base, to understand their needs and requirements. That means helping them adopt the right technology, the right architectures, and the right features within both our hardware and our ecosystem of OEMs and software providers that leverage Intel capabilities — all in service of their mission and enterprise goals.
So there's an external-facing component that's really about helping the federal government make the best use of the technology they have today and plan for what's coming, so they can acquire what's needed to make their missions successful.
The other half of the job is translating those requirements, those mission environments, and those government-specific needs back to our business units. That way, as we're building, designing, and researching the next generation of hardware, firmware, and software, we're meeting the government where they are and where they're going. So it's a bidirectional function — helping the government adopt capabilities, and translating their requirements back internally so our products are ready for them.
The third part is where we get to do some interesting innovation. Not everything available off the commercial shelf is going to fit the government's mission and enterprise requirements — whether that's the sheer scale of their computing environments, elevated security requirements, or specialized form factors for embedded and mission platforms. Planes don't operate the same way as laptops. So we get the opportunity to do customization, enhanced COTS, and research into areas that could become standard commercial products in the future but right now address a specific government need. Some of that is funded by the government, some by Intel or in collaboration with them.
One of the interesting things about working with the government is that it's in many respects a macrocosm of the private sector. The VA is one of the largest health providers in the world with one of the largest insurance operations — it has the same problems as a Blue Cross Blue Shield, just at a bigger scale and with enhanced security requirements. CMS and HHS are probably the largest payment systems in the world, so you see many of the same application challenges you'd find in financial services, just at increased scale or in more controlled environments. So the work is really about figuring out where the government is going and how to take products that work in the private sector and adapt them to the specialized environments and scale the government operates at.
The other interesting thing is that in many cases the government is a vanguard for what commercial will need. Especially in the security domain — the threats the government is dealing with today, financial services will be facing in the not-too-distant future. If we can build for the government and make it good enough for them, you'll start to hear the same requirements coming out of financial services and other regulated and critical infrastructure industries. A great example is FIPS — for a long time in the 90s and 2000s it was purely a government standard. Now you can't do financial computing without seeing that requirement in any procurement. It's become a global industry standard. And we see that trend continuing across advanced detection capabilities, network security, and beyond. The government is often driving the industry a couple of years ahead.
Matthew: That's really fascinating — and I think most people don't realize that's actually what's happening. It's out of necessity, because the federal government is the largest and most targeted attack surface in the world. The number of attempted attacks they face every single day is absolutely insane. It's hard to even fathom the scope of what they have to defend. And so they have to be cutting edge — which is kind of the last thing you'd associate with government in general. Maybe NASA, sure — but a place like the VA is one of the last places you'd expect innovation, and yet when it comes to security, it has to be there.
I'm excited to dig into all of this, and especially where AI and cybersecurity intersect. You're in a really unique and fascinating position, and I don't want to waste it. So when it comes to Intel, the government, cybersecurity, and AI — where do you land? Where is this going?
Steve: One way to look at it is that AI and cybersecurity really have two or three sides to them — a three-dimensional coin, if you will. The first question, which a lot of organizations globally are grappling with, is: how do I secure my AI? Everyone is adopting AI faster than they can secure it. The flip side of that coin, which you hear a lot from CISO organizations and security operations, is: how can I leverage AI to enhance my cybersecurity? And then the third face — maybe the edge of the coin — is: how do I protect myself from AI-enabled adversaries? So you have to think about it from all three dimensions: managing the risk AI introduces to your organization, using AI to reduce risk, and recognizing that the adversaries are adopting it too — possibly even faster.
Let's start with the first piece. One of the key things to recognize is that AI isn't just a piece of software you can deploy and protect like other applications. There's a life cycle involved. AI starts with data, and it consumes data at incredible scale. So protecting your AI starts at the very beginning — applying tried-and-true process, people, and technology governance across the entire life cycle. Where did the data come from? How was it governed? Was it inspected for quality, bias, or malicious code before you used it? What happened to the data and the models during training and tuning? Every step of that process needs to be addressed from a risk management perspective.
Because if all you end up with is an inferencing algorithm ready to deploy, you have a black box — and no idea what intrinsic vulnerabilities you're bringing to the table that you now have to secure. Once you get there, the next mistake is thinking a guardrail and a firewall are enough and you're done. Nobody's actually actively monitoring whether the AI is still behaving as expected, or whether someone's running injection attacks or manipulating the infrastructure. It requires continuous monitoring. Zero trust applies to AI — it's just a different way of applying it. There are conferences this year specifically addressing the nuances of agentic identity, hallucination detection, and model drift. All of that is part of the overall story: thinking about AI from an ongoing perspective, with continuous monitoring and validation, and the ability to roll back to a known good state if something goes wrong.
It reminds me of the early web application days, when everyone was focused on protecting the web server from OS-level attacks but no one was looking at the input field. And of course, all the attacks came through the inputs — cross-site scripting, SQL injection. Now we have a new term: prompt injection. It's the same problem. We're living the past. The guidance really comes down to: understand how to secure your AI throughout its life cycle, and how to protect it once it's deployed — because AI is no longer just a ChatGPT interface. It's being distributed into things, into systems, into sensors, drones, planes, and cars. You have to protect it where it lives, surrounding it with the right controls, validation, and attestation to make sure it's always operating in a known good state.
Matthew: And I think another area that I really get excited about — speaking of the end user — is products like Darktrace, where they're using machine learning in exactly the right way. Like their email security — it actually understands how Jane writes her emails. Suddenly Jane's sending 600 emails at 2:00 in the morning and Jane is usually not working at that hour — got it. That's the kind of thing where you see the future. And I'm looking forward to the day where AI can help protect vulnerable people too — like being able to detect a scam call in real time on your phone and say, "This is clearly a scam. Microsoft is not calling you. I'm going to hang up on them now." That's where AI fitting into cybersecurity in practical, human-centered ways gets really exciting.
Steve: Absolutely. And taking that a step further — wouldn't it be even better if the AI was on the telecom network itself, so that call never even reached the senior? That's the real win. Because then all the scam texts — the guy sending out 12,000 a day — you just shut that down at the source. Moving that capability further up the food chain is exactly the right direction.
And I want to pick up on something you mentioned about where an LLM can actually be useful. I'll use CrowdStrike as an example — several other companies are doing this too. They're not just using AI to automate cybersecurity operations. One of their big innovations was including an AI chatbot for the operator — the person actually using their products. Large security companies like CrowdStrike or Palo Alto have a huge myriad of products, and a typical IT organization manages anywhere from 70 to 150 security tools. There's constant churn. Do you remember how to use all of them? Having an AI chatbot augment that is a real efficiency gain. You can say, "I've got this event — what do I do?" or "What was that network IP address we just patched?" Being able to interact with your tools conversationally means you're not picking up the phone and calling somebody. That's a meaningful efficiency.
It's the same way we've used chatbots to automate customer service — now we're seeing AI chatbots help automate the process of interacting with your own security products. I've actually played with the product — it's really cool. You can ask it, "I've got a hit in my SIEM — what control did we use last week? Apply that." Done. No dashboard-diving, no writing a line of code. That's a powerful use of AI for efficiency — not trying to catch something no one has ever seen before, but making the day-to-day work dramatically faster.
Matthew: And I really love that CrowdStrike and SentinelOne both did brilliant implementations of that. Such a great use of an LLM in the right context. And you see things like Copilot that are still maturing — you can see where it's going, and when it finally does what you want it to do consistently, it's going to be tremendous. But CrowdStrike and SentinelOne really knocked it out of the park. Solid, well-targeted uses of AI in a cybersecurity product.
Steve: Totally agree.
Matthew: So let's move on — because there's another topic I'm really excited about that we haven't touched on at all yet on this podcast: confidential computing. This is the last-mile problem of data and application security. For those who aren't familiar, can you walk us through it?
Steve: Absolutely. Let's start with the basics of data security. At the end of the day there are three aspects: data at rest, data in transit, and data in use. Data at rest — that's full disk encryption, file encryption, BitLocker and similar tools we've been using for years. Data in transit — that's your network security protocols: TLS, IPsec, VPNs. Well understood. The last mile, as you put it, is: what about when data is actually being used? When it's being transacted upon, sitting in memory, available to the application — it's no longer encrypted on disk, it's not going over a network, it's in the clear being actively worked on. How do you protect it there?
That's what confidential computing targets. It uses a technique called a Trusted Execution Environment, or TEE. There are two main aspects to that, plus a third piece that ties it together. First is memory encryption — from the CPU to memory, the data is encrypted as it leaves the CPU and travels to the memory card. So its entire life in memory is as encrypted data. Second, there are specific controls in the CPU itself that isolate that data and those memory accesses from the rest of the applications, OS, drivers, and BIOS. So even if you have compromised firmware running at the lowest level in software, the CPU-level control means it can't get access to the data in that trusted environment. Third is attestation. How do you verify that your data is actually running in that kind of environment, or that the environment is available and trustworthy before you put your secrets and keys into it? Attestation takes a measurement — a cryptographic quote and digital signature — that incorporates both the environment and the CPU it's running on with a unique verifiable key. That allows you to confirm: yes, I'm running on a legitimate platform, and this trusted enclave is operational where my application and data reside.
Those three pieces form the foundation of confidential computing. Now, there are different implementations depending on what we call your Trusted Control Boundary — how large an environment you need to protect. The most basic form is Total Memory Encryption: all of memory is encrypted with a single root key, no separation between applications. That protects against someone popping a DIMM or probing memory as it flows — cold boot attacks and similar physical-level threats. Good, but it doesn't differentiate between applications or data sets.
Moving up the stack, if you want to protect a whole VM — think cloud environments where you want Coke protected from Pepsi, where you're running as a tenant and don't know who else is on the same hardware — Intel's technology for that is called Trust Domain Extensions, or TDX. Other vendors have their equivalents. It gives you VM-level protection, your entire VM in its own trusted execution environment.
Then the most restrictive version is Software Guard Extensions — SGX. That takes it further: it's just one application and its data, isolated even within the OS. There's a small protected space with a controlled call gate between the rest of the app and that protected region. Use cases for that include putting your TLS stack in an enclave if you're running a web server, keeping DRM keys protected, or handling classified documents where the authentication and data should only exist in that protected space while the rest of the application operates normally.
So there's a spectrum: ease of use goes up as you broaden the scope, and security granularity goes up as you narrow it. You pick based on your risk model — a domain extension can handle a terabyte or two of data, whereas an enclave is much more constrained, and TME covers all of memory. These technologies are available today — they've been around for years and are available in all the major cloud providers.
What confidential computing does is close that last-mile gap: how do I protect my data when it's being used, whether in a cloud where I can't fully verify what the provider can see, or at the edge where you don't have guards and locks — you've got a distribution station sitting out in a field with a fence, a poorly maintained camera, and a cabinet lock. Putting these technologies into those servers means that even if someone physically gets in, they can't access the data. They might be able to cause a denial of service by taking the system, but the data is protected.
And this ties back directly to AI. You've spent millions of dollars training a model. How do you protect those weights — keep someone from stealing or tampering with them? Putting your model into a confidential computing environment means you can deploy it into the field, into a car, into the cloud, and maintain control over it as it operates. That is genuinely super cool.
Matthew: It really is. It's a little geeky, which I love. And it ties back perfectly to something you mentioned earlier — FIPS. That was a government-only standard for decades, and now it's a global industry requirement. Along those lines, what do you see the government implementing now that industry might not recognize for years?
Steve: There are a couple of areas. I'd be remiss not to mention post-quantum cryptography — that's probably the biggest topic in government right now. January 1st, 2027 is one of the first hard deadlines: national security systems must start procuring equipment, software, and services that are post-quantum cryptography compliant. Then 2030, 2031, and 2035 are the deadlines by which everything must be post-quantum. The US government is the most aggressive in its timeline for exactly the reason you mentioned — they're the largest and most at-risk target for a nation state eventually fielding a quantum computer.
The thing that people don't fully understand is why the urgency is now, even if a relevant quantum computer might be 20 years away. It's the harvest now, decrypt later problem. It's not that your data in 20 years will be compromised when a quantum computer comes online — it's that the data you're encrypting today is being downloaded off the internet by adversaries right now and stored in massive data centers, waiting for that day. All of the communications and data being transported or stored today that you want to protect long-term is going to be at risk the moment a sufficiently powerful quantum computer becomes available to a nation state. And when that happens, they can go back in time and decrypt everything going back to whenever they started recording — in trivial amounts of time.
That's why the risk is so high and why the government is pushing hard now. No one knows exactly when that threshold will be crossed — some say five years, some say twenty. Honestly, the exact timing doesn't matter. It's that whenever it happens, everything encrypted with today's algorithms going back in time is at risk. The UK government has a 2028 deadline. Everyone is moving toward this. And the good news is that because the government is such a large purchaser of hardware, software, operating systems, and cloud services, their mandates will push all of those vendors to deliver post-quantum solutions. So when financial services and healthcare and retail decide this is the risk they're going to invest in — the ecosystem will have already figured it out for the government, and those products will be ready.
Financial services is actually tracking this fairly well, because the keys that encrypt credit card and cash transaction data are long-lived. You don't want someone going back and decrypting those. Same with blockchain and crypto — you don't want someone going back in time and changing values so they magically have more crypto in their wallet. Those are things you want to protect and migrate now.
The second area is what in the government space we call mission platforms — what industry calls IoT and industrial IoT. The difference is really just that one is flying fast in a plane and one is controlling robots on a factory floor. Under the covers, a lot of the same technologies apply to both: verified boot, verified firmware, network micro-segmentation, all the security techniques the government is applying to advanced mission platforms. In a few years you'll see the IoT and industrial IoT and critical infrastructure world adopting those same hardware, network, and infrastructure security approaches that the government has been pioneering. NIST is already publishing sector-specific guidance for industry — healthcare devices, vehicles, industrial IoT and manufacturing — so when the private sector is ready, the documentation is there. It's not as immediately exciting as post-quantum cryptography, but as manufacturing floors become fully automated and AI gets connected to operational controls, protecting that infrastructure becomes critical. Things could go very wrong very fast if it's not secured.
Matthew: Those are both great examples. You know I have my new favorite thing with Darktrace — what's one of yours? What's got you excited right now at Intel Federal?
Steve: It's a hard question because there's so much going on, but I'll pick two things. The first — we announced it at CES a couple weeks ago — is the Panther Lake processor, the Intel Core Ultra 3 built on the 18A process node. What that translates to in practice: I actually saw a demo running the latest video game on a small form factor gaming device with that 18A processor, and I swear you're watching reality in 4K. And it was simultaneously running twelve other applications — Word, PowerPoint, a Zoom call — without breaking a sweat. The computing power that puts at your fingertips is a genuine game changer.
Now add AI on top of that, and then think about the government use case. In-theater battlefield management — assimilating video feeds from air support, Navy, drones, personnel in the field, and ground sensors simultaneously, then processing all of that into situational awareness and mission planning and execution tracking. That would have taken racks of servers in the past. Now you can do it on something that looks like a gaming handheld in the field where you need it. And critically, warfighters often don't have access to the cloud — they're in denied environments. Being fully self-contained and operationally capable is an enormous capability leap. That's exciting not just for gaming but for real-world mission computing.
The second is a broader industry trend that Intel is leading: memory-safe languages. We're building hardware technologies to make that transition not just happen, but happen faster — so that when you use a memory-safe language like Rust, you don't pay a performance penalty. You don't get the slowdown from those extra safety steps. From a security perspective, this is less immediately flashy than AI in gaming — but when it's widely adopted, whole categories of attacks disappear entirely. Not just "harder to execute" — they stop working, period. That is a fundamental shift in the cat-and-mouse game we've all been fighting for decades.
We've actually done this before. With Control Flow Enforcement Technology, we eliminated a class of attacks called return-oriented programming — essentially the misuse of legitimate code to do malicious things. At the time, 60% of advanced persistent threats were using that technique. When you knock that out entirely, you do good for the entire industry. Memory-safe languages will do the same thing. When we get them performant and easy to adopt, we change the dynamic so adversaries have to work harder, spend more, and can't rely on easy, cheap techniques anymore. We push them further up the stack where they're easier to detect. That's something I'm genuinely looking forward to.
Matthew: Steve, I cannot thank you enough for coming on today. This has been one of my favorite episodes ever. Before we go, can you tell everyone where they can find out more about you and about Intel Federal?
Steve: Sure. The best place to find information about Intel Federal is intel.com/publicsector — you can see the innovations, technologies, and use cases and what we're doing to help governments, and by extension the whole industry, do better with computing. And the best place to find me personally is LinkedIn — linkedin.com/in/sorrin.
Matthew: Awesome. Thanks, Steve. Until next time!
Steve: Thank you.







