Role-Based AI, Culture-First Hiring, and the Future of Human-Centered Tech with Laurel Cipriani - Ep 204

Laurel Cipriani returns to the Cyber Business Podcast for a second conversation that goes deeper and broader than the first. As CIO at AffirmedRX, a transparent pharmacy benefits management company and public benefit corporation legally obligated to put patients ahead of profits, Laurel brings a background unlike almost any other CIO in the industry. She trained in psychology, became a registered nurse, spent years in health administration and clinical quality, and arrived in IT through a path that has given her a perspective on people, culture, and human-centered technology that is genuinely rare at the executive level. She is also an active member of the Digital Economist, a think tank in Washington, DC, and, as of this episode's recording, is joining the World Technology Congress, a Switzerland-based international think tank.

 


Here’s a glimpse of what you’ll learn: 

 

  • How Laurel is rolling out a role-based AI strategy at AffirmedRX where tool access, permissions, and accountability are all determined by what each person actually does
  • Why she is considering hiring dedicated AI fact checkers and what that says about the current state of AI output reliability in high-stakes environments
  • What the representation gap for women in IT leadership actually looks like from the inside and why culture fit may be more important than credentials in closing it
  • How AI is currently reinforcing gender bias through scraped training data and what that means for the next generation of models
  • Why Laurel believes AI could eventually help solve the root causes of gender inequality if developed and governed thoughtfully
  • How the anonymity of the internet has amplified harmful behavior and why removing it may be more beneficial than most people are willing to admit
  • What it means to lead a technology team with compassion as a core value and why that quality is becoming more important as AI takes over more execution work
  • Why Laurel believes the most important question for this generation is not whether to use AI but how to use it without losing what makes us human

 

 

In this episode…

Laurel opens this return visit with an origin story that sets the tone for everything that follows. From aspiring grief therapist to floor nurse to health informaticist to CIO of a public benefit corporation, her path into technology was never linear and never conventional. What runs through all of it is a single thread: a desire to help people and a belief that technology is most powerful when it is built around human needs rather than the other way around. That philosophy is now embedded in how she is building the AI strategy at AffirmedRX, where every steward (AffirmedRX's term for its team members) will have a clearly defined set of tools, permissions, and accountability structures tied directly to their role. No one gets unfettered access. No output goes unreviewed. And no AI system will ever make a decision without a human signing off.

The conversation on women in IT leadership is honest and specific in ways that broader industry discussions rarely are. Laurel notes that virtually every person on her own team is male, not by design but by the reality of a candidate pipeline that still skews heavily toward men. Her response is not to lower the bar but to raise the profile of culture as the primary filter in hiring, something AffirmedRX does formally through a culture screening call before any other evaluation takes place. She makes the case that as AI raises the floor on individual capability, the differentiator between good teams and great ones will increasingly be how people work together, not what any individual can produce alone. That shift, she argues, naturally favors the holistic, relationship-oriented thinking that women have historically been undervalued for bringing to technical roles.

The deepest thread in this episode is the one that connects AI governance to human development in ways that go well beyond the enterprise. Laurel is conducting original research through the Digital Economist on how AI and internet anonymity are amplifying harmful behavior toward women, how gender bias baked into training data is being reinforced at scale in AI models, and what it would take to actually interrupt those cycles rather than just acknowledge them. Her conclusion is not pessimistic. She believes AI, if governed with the same intentionality she is applying at AffirmedRX, could become the most powerful tool ever built for identifying and dismantling the cultural patterns that have kept inequality in place for generations. Getting there requires the same thing everything else in this conversation requires: humans staying in charge, staying accountable, and refusing to let speed become an excuse for carelessness.

 

Resources mentioned in this episode

 

Matthew Connor on LinkedIn
CyberLynx Website
Laurel Cipriani on LinkedIn
AffirmedRX Website

 

Sponsor for this episode...

 

This episode is brought to you by CyberLynx.com (that's Cyber-L-Y-N-X.com).

CyberLynx is a complete technology solution provider, ensuring your business has the most reliable and professional IT service.

The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.

Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.

To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

 

Check out previous episodes:

 

Why Every CISO Must Use AI Now and How to Do It Without Losing Control with Greg McCord - Ep 203
Identity Is the New Perimeter: A Cybersecurity Director's Playbook with Jason Lawrence - Ep 202
How AffirmedRX Is Using Technology to Fix a Broken Healthcare System with Laurel Cipriani - Ep 201

 

Transcript: 

 

Laurel Cipriani Interview Transcript

Cyber Business Podcast – Return Episode 

Host: Matthew Connor

Guest: Laurel Cipriani, CIO, AffirmedRX


Matthew Connor: Matthew Connor here, host of the Cyber Business Podcast. Today, Laurel Cipriani, CIO of AffirmedRX, returns to the show. Laurel, welcome back.

Laurel Cipriani: Thank you! I am so excited to be back today. I couldn't wait.

Matthew Connor: We're so excited to have you back. I've been looking forward to this — there are so many things we didn't get to cover last time. But before we get into all of that, a quick word from our sponsors.

This episode is brought to you by CyberLynx.com. Hackers are getting smarter — is your security keeping up? CyberLynx sells industry-leading, AI-powered cybersecurity solutions that detect threats in real time, so you know about an attack before the damage is done, not after. Learn more at CyberLynx.com. And now back to our show.

Laurel, like I said, there's so much we didn't get to talk about. One of my favorite topics is women in IT — and really, women in leadership more broadly. The two are very closely connected, and I think they trace their roots to the same place. So let's start there. I'm sure we'll solve the whole problem by the end of this episode. But let's dive in — you've had a really fascinating career path. Can you give us your origin story? How did you get to where you are today?

Laurel Cipriani: I would love that. Not to brag, but I do think I have an interesting story as far as CIOs go — I think it's somewhat unusual. I started out studying psychology as an undergrad. I wanted to be a therapist — specifically a grief therapist. I wanted to help people who were mourning. But I found that my heart is a bit too sensitive for that role. I was going home every night in tears; I got really attached to everyone I worked with. So I pivoted to nursing, became a floor nurse very, very briefly, and got my master's degree in nursing. The "very briefly" is because I also turned out to be a terrible fit for floor nursing — I'm very squeamish. I literally fainted in an OR and threw up when someone else threw up. I was a terrible fit, although I loved the patients deeply.

I still wanted to stay in the healthcare field, so I moved toward health administration and the business side. I went to work in an office as a nurse researcher — what today you'd call informatics — and helped tell stories through clinical data at a population health company. I was there for many years, and then I went to work for a large insurer in the Medicare Advantage space, where I ran clinical quality and designed quality programs for seniors to improve their health and satisfaction with the health plan.

From there, I moved to a different large insurer, working with their Medicaid plan doing very similar work — designing quality programs to reach lower-income and more vulnerable populations. That was a fun and challenging role because it wasn't just about delivering care. We literally had to find the patients, since they were constantly in transit with very temporary housing. Lots of great challenges and learning there.

Then I moved into on-site healthcare, also called direct healthcare, which means providing care at the sites of large employers — so if an employee has a sore throat or a bad back, they can just walk down the hall and see a provider. Some sites even had pharmacies; we had a clinic and a pharmacy ourselves. It was a really great benefit model.

I was there for eleven years, and when I started, they placed me in IT immediately. I thought, "This is interesting" — I'd never been in IT before, though I'd certainly worked with technology and loved it. They started me with less technologically sophisticated responsibilities: overseeing training for technology, managing the trainers who taught staff how to use the EMR, and running governance, where we reviewed requested changes to our clinical applications. Over time, they gave me a team of analysts handling configuration and support for some of our applications. Little by little, I grew what I was responsible for, and I really enjoyed it.

That made me want to learn more and become a genuine expert. So I went back for a second master's degree, this time in technology leadership. The coursework focused on how to lead people through periods of rapid innovation and technological change — which is perfectly suited to the world we're living in right now. And the timing was almost perfect, because I was finishing that degree right at the beginning of all the AI hype. I walked out with my degree and into a world where I was being asked to help people be comfortable with AI while also being responsible about how they use it.

While still in that on-site healthcare role, I joined a think tank based in Washington, D.C., whose focus is on keeping humans first in technology and innovation. That's really my true calling. Since I was a little girl, I've just wanted to make the world better. The think tank deeply resonated with me, and I've spent the last twelve months there. I was just invited to do a second year, which I'm very excited about. Through the think tank, I was introduced to my current employer, AffirmedRX. Given my background in healthcare, technology, and ethical AI, I was a natural fit — because AffirmedRX is a transparent company and a public benefit corporation, which means we're legally obligated to put patients ahead of profits. That perfectly aligns with my personal mission. I've been there about ninety days now, and it's been great. I could never have planned this path. When I was little, I just knew I wanted to help people and work hard. This long and winding road has led me exactly where I needed to be.

Matthew Connor: You're right — it's a very unorthodox path. We've had probably 100 to 150 IT leaders on this podcast — CIOs, CISOs, founders — and very few have had anything like your trajectory. We've had the truck-driver-to-IT story, but nurse to IT? That's a new one. Congratulations.

I can't help but think about how relevant your background is to what's happening today. You got your master's in technology leadership right at the start of the AI wave, and now we're seeing — since the conflict with Iran began — roughly a 250% uptick in cyberattacks on U.S. infrastructure, particularly in healthcare, finance, and critical sectors. I think AI is a critical part of the defense, when done right. I don't think slapping AI onto everything is the right answer. I think right now it's about using the right kind of AI in the right way — machine learning-based AI, not just the LLMs everybody thinks of. Products like Darktrace using machine learning for identity security, for instance, would have stopped the Stryker breach. Traditional tools didn't get us there, but that kind of AI would have.

And then you have the McKinsey example from last week, where a red team used AI to infiltrate systems and extract large amounts of data in about two hours — something traditional penetration testing wasn't achieving. These are great illustrations of how AI can be the gun in a knife fight. The bad guys are already picking up guns. Where do you think business leaders should be focusing when it comes to implementing AI today?

Laurel Cipriani: What a big question — my mind is in a lot of different places on this. We actually had an all-hands operations meeting this week in Franklin, Tennessee, with our operational and company leaders meeting in person. Part of what I presented was my AI plan for the year and the philosophy we have and are now rolling out company-wide.

The way I'm thinking about it: we will define, based on a person's role, how they can use AI, how they should use AI, and what tools they'll have access to. It will look different depending on what you do. Our patient care advocates — the people who call patients, help them navigate prior authorization issues, or find more affordable medications — will have one specific tool they can use in a defined way. Our sales team writing RFPs will have a different tool. Members of our IT team will have access to more sophisticated tools with more sophisticated permissions. It's truly role-based.

We're also going to do a lot of fact-checking. I'm actually considering hiring dedicated AI fact checkers, because so much AI output is only about 80% accurate, and what we do is too important for errors. At AffirmedRX, we will never produce something from AI without a human signing off on it. That's a cornerstone of our philosophy.

There's also some fear on the team — someone mentioned yesterday that people are worried AI will replace them. That is not something we're doing at AffirmedRX. We are committed to being a humans-first company. We will be one of the organizations that uses AI as an enhancement and augmentor, not a replacement.

So to summarize: role-based usage, tools and permissions tied to role, heavy auditing, and accountability. Whatever you do with AI in your role, you are responsible for the output. If something goes wrong and you didn't fact-check it, that falls on you.

Matthew Connor: I think that's fantastic. It doesn't matter how great the tool is — if it's a hammer and you've got a screw, it just doesn't work. Applying the right tool to the right job is critical.

Let me use this as a pivot into IT leadership and women in IT. You mentioned that everybody was at the meeting this week, which got me thinking — statistically, where are we when it comes to women in IT leadership? I know the numbers are improving for people entering the field, but at the leadership level, the ratio is still nowhere near 50/50. What's your experience been?

Laurel Cipriani: That's such a good question. I actually looked up the percentage of CIOs who are women at some point, and it was definitely not 50% — not even close.

It's funny — I was working from home today and my parents stopped by. After my team call, my mom walked in and said, "Is every single person on your team a man?" And I was a little embarrassed, but I said, "Every person on that particular call was, yes. I do have one other woman on my team." Now, I want to be clear — I didn't build this team. I've been there ninety days. And I have an amazing team; I wouldn't trade anyone on it. But going forward, I would genuinely love to grow the number of women, not just for the sake of the numbers, but because women bring a different and valuable perspective.

And something else happened at the Franklin meeting that I thought was telling. A woman from another part of the organization came up to me after I'd done an IT lunch-and-learn for the whole company about my roadmap for the year. She said, "You are so unusual for a CIO." I asked why. She said, "You're very warm and fuzzy. It's just so strange for a CIO to seem so compassionate." I think that's part of why I was hired — we are a company built around compassion, and that's not typically what people associate with IT leadership.

And as AI takes on more and more, and people are debating what the world looks like fifty years from now, I think it's more important than ever that we have IT leaders who are thoughtful, compassionate, and human-first. Not that men can't be that — of course they can. But in my experience, IT culture has tended to skew toward introversion, toward heads-down technical work. The field has historically attracted fewer people who are drawn to the kind of relational, outward-facing leadership that I think the moment requires. Women, on average, often bring a different dimension to that — and it matters. So I hope to grow those numbers on my own team and to see the broader industry do the same.

Matthew Connor: It'll be really interesting to see how that plays out. You mentioned implicit bias, and I think that's where so much of this originates — starting from the very earliest cultural messages. You look at baby clothes: boys get "future superhero," girls get "daddy's little princess." What message are we embedding from day one? How do you think we get to a place where hiring reflects more of that holistic view — where you're looking at how this person fits the team, the culture, the mission, rather than just whether their résumé matches the job description?

Laurel Cipriani: DEI is a passion of mine for a lot of reasons, particularly when it comes to women. We talked about this at work a couple of weeks ago. The commitment is to always hire the right person — we're not going to hire anyone just to check a box. But I do think part of the underlying issue is that the applicant pool itself skews heavily male for IT roles. That's improving, but it makes it harder for the best candidate to be a woman if 80% of applicants are men.

On culture — that is our absolute number one priority at AffirmedRX, ahead of skill set, ahead of everything else. Our CEO and the entire senior leadership team will not tolerate people who don't uphold the culture. Period. Every candidate goes through a culture screen first — a thirty-minute call with someone from HR — and if they don't pass, they don't move forward, regardless of what their résumé looks like. Because one person who doesn't fit the culture, or who is toxic, can genuinely damage an entire organization.

And I can't help but wonder — is this an area where AI can actually help? If AI increasingly functions as a force multiplier, leveling the playing field on skills, then the differentiating factor becomes something else. It becomes judgment, communication, cultural fit. The delta between the world's best performer and a strong, AI-empowered contributor keeps shrinking. If that's true, then hiring for culture and for how someone will contribute to the team becomes even more important. It's like sports — the best teams aren't built on one superstar. They're built on people who play well together.

Matthew Connor: That's exactly right. And it actually opens up a great point about AI and how we prepare people to use it responsibly.

Laurel Cipriani: Yes — part of my AI approach is training all of our team members, which we call stewards, to be responsible and empowered users of AI.

And your point about the "super person" idea is one I've been thinking about a lot. I have a niece and nephew — teenagers — and we got into a philosophical discussion in my family about whether using AI for homework is cheating. My honest view: it's really not, because they need to know how to use AI to be successful in the future. Having AI write a paper from scratch and turning it in — that's cheating. But using it to get a first draft started, pull together facts, or work backward from an answer to understand the reasoning? I don't think that's cheating. I think that's using the tools they're going to need to know.

Because of AI, we're being forced to rethink everything — what's acceptable, what our cultural norms are. My mom said, "I think people are forgetting how to study." And I said, "I'm not sure everyone needs to study the way we used to." Some roles don't require it. Now, I personally am an input person — I read constantly, all day, all night. Some of us are just wired that way. But I don't think we can force that on everyone, especially when the world they're entering looks so different from the one we were trained for.

Matthew Connor: That's a really interesting point, and it raises some uncomfortable questions too. We've already seen what cell phones have done — some argue they've atrophied certain cognitive muscles, and social media has had measurable negative effects. Could AI amplify that? If the goal of education is fundamentally to teach people how to think and problem-solve, and AI can increasingly do that better than people, do we find ourselves rolling back what we expect of humans? Are we heading toward a world where human value is concentrated in creativity, art, and culture — things that we intrinsically prize precisely because a human made them?

Laurel Cipriani: You just tapped into one of my other passion projects within the think tank — I'm doing quite a lot of research on the potentially damaging effects of AI and technology on the brain, on social skills, on relationships, and on well-being broadly.

And you make a great point about art and music. There was a celebrity recently who made a comment suggesting something along the lines of "no one cares about opera or ballet anymore." The internet erupted — some agreed, many were horrified. I'm firmly in the latter camp. I love art, I love music, I love culture, and the research is clear: looking at art, creating art, listening to and playing music are genuinely good for our brains. The thought of moving toward a world where people are creating less of that themselves is genuinely scary to me. What happens to our brains if we stop studying, stop learning things ourselves, stop dancing, stop going to museums?

AI is captivating — I get hooked on it myself late at night, churning things out. There's almost a high to it. But it's equally important the next day to get outside, get fresh air, do the things that life is actually about. Otherwise, I think we're all going to suffer for it.

There's actually a concept the Romans had — I'm blanking on the exact term — for how one uses unstructured time. In an era of prosperity, when most Romans didn't have to labor the way we do, the question became: what do you do with your free time? The highest form was considered to be the arts and learning — building yourself as an educated, cultivated person. I wonder if we're heading somewhere similar, where the people who deliberately invest in their intellect, their culture, their physical health, become the new definition of "cool." It's already starting to happen, in a way. And maybe two generations from now, that kind of intentional human development is what distinguishes people, rather than what you can produce or automate.

Matthew Connor: That's a fascinating and genuinely optimistic framing. Let's pivot to the women-in-leadership conversation more directly. When we look at the cultural undercurrents — the ways boys and girls are raised differently, the biases baked in from infancy — why do you think it's been so hard to shift? And why, for instance, is a sexist comment still more likely to get a pass in a room full of men than an overtly racist one?

Laurel Cipriani: You've opened a Pandora's box — and I'm glad. The research I'm doing specifically looks at how AI and related technologies are, in many ways, perpetuating the very dynamics you're describing. It's become easier than ever to express misogyny anonymously online — in forums, in gaming spaces, in the metaverse. I've read about things like AI-powered physical "companion" dolls being rented out at so-called brothels in Europe. A journalist went undercover and found these dolls had been badly damaged — beaten, dismembered. The argument made by some proponents is that it provides men with an "outlet" so they don't harm real women. But the counter-argument is devastating: would we create spaces where teenagers could go shoot mannequins dressed as classmates to "work out their aggression"? Of course not. The outlet argument doesn't address the underlying issue; it normalizes and reinforces it. It's like telling someone who drinks themselves into an abusive rage every night that what they need is a punching bag — when what they actually need is to address the alcoholism and whatever pain is driving it.

And similar dynamics play out in the metaverse and elsewhere online — women being harassed, assaulted, demeaned, and then silenced when they speak up about it. The anonymity that the internet provides has made it easier for people to act out impulses that social accountability would otherwise suppress. Which raises the question: what's actually lost by removing anonymity? If what you're doing is fine, why do you need to hide it?

Matthew Connor: I think you're onto something important. And I'll share my own perspective, for what it's worth. I grew up very poor in Pennsylvania, joined the Army to escape that, and used it as a springboard to get to where I am. But I have siblings and people from my hometown who think very differently. And I think the roots go really deep — from the earliest messaging we give boys and girls, the clothes, the colors, the toys, the "boys don't cry" culture. We're stacking so many pressures on boys from infancy — be tall, be strong, be a provider, don't show weakness — and then when they can't live up to an impossible standard, something breaks. That isn't an excuse for bad behavior. But I think those pressures create a kind of fragility that, under the right conditions, curdles into resentment or hostility. The guys who seem most at ease, most genuinely confident, tend to be the ones who somehow got unconditional acceptance early — great parents, strong communities, something that told them their worth wasn't contingent on meeting an impossible checklist. How do we make that the norm?

Laurel Cipriani: I think you're right. And from my psychology background, I think some men genuinely are struggling to adjust to how rapidly women's roles have changed. For men who grew up with a very fixed picture of what a man is and does — provider, authority, the one on the pedestal — watching women step confidently into those same roles, earning well, leading organizations, choosing not to have children or stay home, can feel destabilizing in ways that are hard to articulate. And if no one has given them the emotional tools to work through that, it can come out sideways.

I don't think that's kooky. I think there may genuinely be a need for some kind of cultural acknowledgment — we know your world has changed, and here's how we help you find your identity within that. Not to coddle bad behavior, but to address the root of it.

And then bringing it back to AI — all of those gender dynamics get encoded into the models. When the internet is scraped and fed into AI systems, the biases embedded in our culture get reinforced and amplified. We're making progress in addressing that, but the work is enormous, because the history runs so deep and the present isn't much better in a lot of spaces.

Matthew Connor: Here's a perspective I want to offer: maybe AI is actually what fixes it. Not "AI fixes everything" — but if AI becomes smart enough to trace the origins of these patterns, connect the dots from infant clothing to cultural norms to institutional bias to measurable outcomes, and then clearly show us where the interventions are — maybe that clarity is what finally moves people. Maybe two generations from now, with that kind of guidance baked into how we raise children, it really does look different.

Laurel Cipriani: I love that. And I actually believe it. If AI is as capable as we think it is, it's going to look at all of this data, see how it all connects, and say — here's where it starts. Here's what you change. And people will believe it because the evidence will be undeniable. Maybe it's two generations. Wouldn't that be something?

Matthew Connor: Laurel, we could do this all day — and I mean that in the best way. Another fantastic episode. Before we go, can you tell everyone where they can find out more about you, AffirmedRX, and your think tank?

Laurel Cipriani: Absolutely. Please check out AffirmedRX — it's one word — we have a great website. And you can find me on LinkedIn at Laurel Cipriani. I would love to connect with anyone who wants to geek out on history, culture, technology, psychology, or healthcare. Bring it on.

I also want to give a shout out to the Digital Economist, the think tank I've been with for the past twelve months. And I'm thrilled to share that I've just been invited to join the World Technology Congress, based in Switzerland, as well — so I'm excited to be part of an international think tank and see what good we can do there.

Matthew Connor: As that chapter unfolds, we'll definitely have you back to talk about it. Thank you, Laurel — until next time.

Laurel Cipriani: Thanks for having me. See ya!

Read On

Blending Technology, Facilities, and Leadership in Hybrid Work with Chris McCay


Chris McCay serves as Vice President for Corporate Infrastructure at Brailsford and Dunlavey, a...

Read more
Women in IT, Allyship, and the Future of Technology Leadership with Shannon Thomas


Shannon Thomas serves as Chief Information Officer at Mitchell Hamline School of Law in Saint Paul,...

Read more
Measuring and Managing Technical Debt with Dr. Ken Knapton


Dr. Ken Knapton is the Chief Information Officer at Win Brands, the parent company of Costa Vida...

Read more