The Arms Race, the Energy Gap, and the Ethics of Teaching AI to Be Good with Alex Dalay - Ep 205
Alex Dalay is the CISO at IDB Bank, a New York-headquartered commercial, private banking, and broker-dealer institution with more than 70 years of history. As the security leader of a financial institution that sits squarely in the crosshairs of modern threat actors, Alex brings a perspective grounded in operational reality rather than theoretical frameworks. His approach to security leadership strips away the noise and returns consistently to the fundamentals: know what you have, know who has access to it, and build everything else from there.
Here’s a glimpse of what you’ll learn:
- Why asset inventory and identity management are the two foundational elements every security program must get right before any advanced tool can be effective
- How AI has changed offensive security by enabling attackers to evaluate and pivot off responses in real time, a capability that previously required human judgment and gave defenders a meaningful edge
- Why the window between vulnerability disclosure and active exploitation has compressed to near real time and what that demands from security teams right now
- How contextual vulnerability scoring differs from out-of-the-box ratings and why a critical vulnerability in one environment may not be critical in yours
- Why social engineering and credential theft remain the most reliable attack paths and how AI-powered behavioral detection is changing the defender's ability to respond
- Why the race to AGI carries geopolitical stakes comparable to the nuclear arms race and what energy infrastructure has to do with who gets there first
- How Alex thinks about the ethical challenge of training AI to be good, not just intelligent, and why guardrails alone are not sufficient
- What Alex told his 10-year-old son when asked about what jobs will look like by the time he graduates college
In this episode…
Alex opens with a perspective that cuts through the noise immediately: security does not need to be complicated, and the organizations that struggle most are usually the ones that skipped the basics in pursuit of advanced capabilities. Asset inventory and identity management are unglamorous but they are the foundation everything else is built on. If you do not know what is in your environment and who has access to it, no tool, AI-powered or otherwise, will save you. That philosophy of fundamentals-first shapes how he approaches the role of CISO at a financial institution that faces a significantly higher volume of attacks than most industries simply because money is involved.
The AI conversation takes a sharp turn toward the offensive side of the ledger. Alex identifies the most consequential change AI has made to the threat landscape as the ability to evaluate responses in real time during an attack. Historically, automated tools ran scripts and moved on when something failed. Human attackers could pivot off unexpected responses. Now AI can do both, at machine speed. That shift has compressed the window between vulnerability disclosure and active exploitation to near real time in many cases, fundamentally changing how urgently defenders must act. He also draws an important distinction that often gets lost in the noise: a critical vulnerability rating from a vendor like Microsoft assumes the worst-case configuration. Whether it is actually critical in your specific environment requires human and increasingly AI-assisted contextual analysis before you drop everything to patch it.
Alex closes with a wide-angle view of where AI is taking both the profession and society. He draws a comparison to the nuclear arms race, arguing that whichever nation cracks AGI first will hold a form of leverage that reshapes global power. He connects that to an underappreciated dependency: energy. Without the infrastructure to power the data centers that run AI at scale, the United States risks falling behind adversaries who face fewer environmental or political constraints on energy expansion. On the ethical side, he raises a point that goes beyond guardrails. We are racing to make AI intelligent without taking the time to teach it to be good, and the consequences of that gap may be the most important and least discussed challenge in the entire AI conversation.
Resources mentioned in this episode
Matthew Connor on LinkedIn
CyberLynx Website
Alex Dalay on LinkedIn
IDB Bank Website
Sponsor for this episode...
This episode is brought to you by CyberLynx.com — that's CyberL-Y-N-X.com.
CyberLynx is a complete technology solution provider to ensure your business has the most reliable and professional IT service.
The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.
Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensures that clients are 100% satisfied.
To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.
Check out previous episodes:
Why Every CISO Must Use AI Now and How to Do It Without Losing Control with Greg McCord - Ep 203
Identity Is the New Perimeter: A Cybersecurity Director's Playbook with Jason Lawrence - Ep 202
Transcript:
Alex Dalay Interview Transcript
Cyber Business Podcast
Guest: Alex Dalay, CISO, IDB Bank
Matthew Connor: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Alex Dalay, CISO at IDB Bank. Alex, welcome to the show.
Alex Dalay: Thank you. Thanks for having me.
Matthew Connor: Thanks for coming on. Before we get too far in, a quick word from our sponsors. This episode is brought to you by CyberLynx.com. Hackers are getting smarter — is your security keeping up? CyberLynx sells industry-leading, AI-powered cybersecurity solutions that detect threats in real time, so you know about an attack before the damage is done, not after. Learn more at cyberlynx.com. And now back to our show.
Alex, for those who aren't familiar, can you tell us about IDB Bank and your role there as CISO?
Alex Dalay: Yeah, certainly. IDB Bank is a financial institution that's been around for a little over seventy years. Our headquarters is in New York, and we're primarily a commercial bank, as well as private banking and broker-dealer services.
Matthew Connor: Makes perfect sense. As CISO of a financial institution, especially in this modern era, I have to imagine the stakes have never been higher and the volume of attacks has never been greater. Cyber is on everyone's mind these days — but for financial institutions, you're a much bigger target and the stakes are significantly higher. What's your day-to-day approach, and what advice do you have?
Alex Dalay: Look, you're certainly right — financial institutions do get hit with far more attacks, and it makes sense because everyone's interested in money. But in this space, there's a lot of fancy terminology, new products, new solutions, new threats, scary buzzwords being tossed around constantly. And it doesn't need to be so complicated. A lot of it comes down to the basics — the foundational elements.
Take asset inventory, for example. It's a basic concept, but if you step back and think about it: is it really feasible to expect someone to protect things if they don't even know those things exist? You have to know what you have. The second piece is identity — who has access to those resources? If you get those two foundational elements right, everything else starts to fall into place. Configuration management, firewalls, DLP — all of those topics matter, but get the basics first. Understand what you need to protect and who has access to it. Fix those, and everything else falls in line.
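The cross-check Alex describes — know what you have, know who has access to it — can be sketched in a few lines. This is a toy illustration with hypothetical asset and identity data, not anything from IDB Bank's environment: the gaps only become visible once the two foundational lists are reconciled against each other.

```python
# Toy sketch (hypothetical data): an asset inventory and an
# identity-to-access map, and the gaps that appear only when
# you cross-reference them.
inventory = {"db-prod-01", "web-01", "vpn-gw"}   # assets we know exist
access_map = {                                    # identity -> assets granted
    "alice": {"db-prod-01", "web-01"},
    "bob": {"web-01", "legacy-ftp"},              # legacy-ftp was never inventoried
}

# Assets someone can reach that the inventory has never recorded:
granted = set().union(*access_map.values())
unknown_assets = granted - inventory

# Inventoried assets no identity is mapped to (unowned, access never reviewed):
unreviewed = inventory - granted

print(sorted(unknown_assets))  # ['legacy-ftp']
print(sorted(unreviewed))      # ['vpn-gw']
```

Either gap is a problem in its own right: `legacy-ftp` is something being protected by nobody, and `vpn-gw` is something whose access nobody has accounted for.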
From a financial services regulatory perspective, there are obviously additional compliance expectations as well. But at the end of the day, you can build a strong security program only if you align it with what the business needs and what it's trying to do. There are a lot of misconceptions about what a CISO does — the stereotype of the person who always says no. And while that may happen occasionally, generally it doesn't, and if it does, that person probably won't be in the role for long. It's about compromise. Just like in a marriage, you have to understand what the business is trying to accomplish — ultimately, they want to serve their customers well, keep them satisfied, and retain their loyalty. The tricky part is always balancing convenience with security.
Matthew Connor: That's really solid advice. I think too often people skip ahead to the fancy new things — it's like basketball. You can work on trick shots all day, but if you don't have the fundamentals down, you're not going to win games. The same is true for cybersecurity. If the basics aren't in place, no advanced tool is going to plug those gaps.
Speaking of fancy new things — AI is on everyone's mind right now, and rightfully so. It's advancing rapidly. Where do you see AI playing a role in security, now and in the future?
Alex Dalay: From a security perspective, AI certainly has advantages and disadvantages. If you think about the most foundational basics of security, a lot of what we do is monitoring activity — reviewing logs, analyzing alerts. And humans have a finite capacity for that. Where AI comes in is its ability to ingest large volumes of data, churn through it, and analyze it at a scale no human team can match. I do think there are real productivity gains to be had there, and many organizations are already moving in that direction.
That said, AI is still in its infancy, and that's part of the challenge. Organizations want to adopt AI tools, but the space is evolving so rapidly that if you're an AI expert at the start of a conversation, by the end of it there are already three or four new large language models and a new solution you've never heard of. It's moving that fast.
Then there's the other side of it — the malicious use of AI. The most prevalent area we see is in attacks. There's the crafting of phishing emails and automating that process, even including automated engagement with individuals who respond to phishing lures. But the more dangerous development is on the penetration testing and hacking side.
Historically, automated tools have existed for a long time to handle information gathering and script-based exploitation. Where humans always differentiated themselves was the ability to evaluate responses in real time — a machine runs a script, tries something, gets a yes or no, and moves on. A human attacker, by contrast, might see an unexpected response and recognize it as a signal to pivot and try something different. Well, now AI can do that too, and it can do it far faster than any human. As a result, the window between the disclosure of a vulnerability and active exploitation of it has shrunk dramatically — in some cases it's happening in near real time, which leaves very little time to assess risk and remediate. We have to act with much greater urgency than we did even a few years ago.
Matthew Connor: That's a really important point. And I think one of the most interesting distinctions right now is that when people hear "AI," they immediately think LLMs and AGI — but machine learning has been around for quite a while and is already delivering real security value. Products like Darktrace, for instance, use machine learning as their foundation, which gives you AI-driven detection without the prompt injection vulnerabilities that come with bolting an LLM onto a security tool. Right now, prompt injection is a significant blocker — it can introduce more vulnerabilities than it solves. But the machine learning-based approach is already proving its value.
It's a bit like self-driving cars — a few years ago it was fascinating but scary. Now it's more like riding with a pretty capable teenage driver: you're watching closely, you don't fully trust it yet, but you can see where it's heading. And this brings up the arms race question: if bad actors are increasingly using AI, do you have to keep pace? Especially when the exploit window on new vulnerabilities can now be under 24 hours?
Alex Dalay: Yes, absolutely. And the vulnerability management example is a good one to dig into — I'll throw in another industry term: attack surface management. What a lot of people don't fully appreciate is how vulnerabilities get classified and scored. Take Microsoft, which publishes more vulnerabilities and patches than almost anyone. When they assign a vulnerability a CVSS score of 10 — critical — they make zero assumptions about the context of your environment. They assume the worst case: server exposed directly to the internet, no firewall, no mitigating controls, fully exploitable.
The key is evaluating whether that vulnerability is actually exploitable in your specific environment. The devil is in the details. To exploit a given vulnerability, conditions X, Y, and Z all have to be true. A human has to examine the network configuration, understand what tools are running, and make a contextual determination. A vulnerability rated critical out of the box may not be critical in your environment at all — meaning you don't need to drop everything and patch it immediately. You apply the patch as good hygiene, but it's not an emergency. That analysis is still largely done manually, because AI isn't fully there yet for that kind of nuanced, contextual reasoning.
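The contextual triage Alex walks through can be sketched as a toy scoring function. This is a deliberately simplified illustration with assumed adjustment values, not the official CVSS environmental-score formula: the point is only that a vendor's worst-case rating gets downgraded by facts about your own environment before it dictates urgency.

```python
# Toy sketch (hypothetical adjustments, NOT the CVSS environmental formula):
# start from the vendor's worst-case score and subtract for conditions
# the vendor assumed but your environment does not satisfy.
def contextual_priority(base_score: float,
                        internet_exposed: bool,
                        mitigating_control: bool) -> str:
    """Return a triage bucket for a vulnerability in *this* environment."""
    score = base_score
    if not internet_exposed:
        score -= 3.0   # not reachable from outside: far less urgent
    if mitigating_control:
        score -= 2.0   # e.g. a firewall rule already blocks the exploit path
    score = max(score, 0.0)
    if score >= 9.0:
        return "drop everything and patch"
    if score >= 7.0:
        return "patch this cycle"
    return "routine hygiene"

# A vendor 'critical' (10.0) on an internal, firewalled server:
print(contextual_priority(10.0, internet_exposed=False, mitigating_control=True))
# -> routine hygiene  (10.0 - 3.0 - 2.0 = 5.0)
```

The same base score on an internet-facing, unmitigated host stays at 10.0 and lands in the drop-everything bucket, which is exactly the distinction between the out-of-the-box rating and the rating in context.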
And it's worth noting: most successful breaches don't actually happen through technical vulnerability exploitation. Exploiting vulnerabilities is genuinely hard. Most compromises happen through phishing, credential theft, or social engineering — because the bad guys want the path of least resistance. The automation and AI-assisted analysis of those types of threats is happening, but hasn't gained full momentum yet.
Matthew Connor: That's a great point about the vulnerability context — it's something people often miss. And the social engineering angle is huge. We saw it with MGM. That was textbook elite social engineering — absolutely top shelf. And when you're up against that level of execution, layered defenses alone may not be enough. That's where I think AI becomes truly valuable — an AI system that can observe a newly credentialed admin suddenly moving at unusual speed, accessing things outside the normal pattern for day one, and flag it in real time. Traditionally, each individual log entry looks fine in isolation — the person is authorized, they're credentialed, no policy violation on any single action. But in aggregate, something is very wrong. AI is uniquely suited to piecing those signals together.
Alex Dalay: Exactly. The insider threat and user behavior analysis space isn't new — those capabilities have existed for a long time. But the tools have historically been good at detecting activity and generating alerts, and that's roughly where they stop. A human still has to ingest the alert, analyze it, correlate it with other data, and make a decision — and all of that takes time. In cybersecurity, time is extraordinarily precious. That is precisely where AI is going to be most beneficial: pulling together disparate data, constructing a coherent narrative, and enabling a good outcome faster than any human analyst could manage alone.
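The correlation both speakers are describing — individually benign events that are alarming in aggregate — can be sketched with a toy scoring scheme. The event types, weights, and threshold here are all assumed for illustration; real user behavior analytics tools use learned baselines rather than hand-set values.

```python
# Toy sketch (hypothetical weights): each event is fine in isolation,
# but the aggregate score for one user crosses a threshold -- the kind
# of cross-signal correlation an analyst does slowly and AI does fast.
from collections import defaultdict

WEIGHTS = {                      # per-event suspicion weights (assumed values)
    "new_admin_credential": 2,
    "off_hours_login": 1,
    "bulk_data_access": 3,
}
THRESHOLD = 5                    # alert when a user's combined score reaches this

def correlate(events):
    """events: iterable of (user, event_type); return users worth a real alert."""
    scores = defaultdict(int)
    for user, kind in events:
        scores[user] += WEIGHTS.get(kind, 0)
    return {u for u, s in scores.items() if s >= THRESHOLD}

day_one = [
    ("new_admin", "new_admin_credential"),   # authorized, no policy violation
    ("new_admin", "off_hours_login"),        # fine on its own
    ("new_admin", "bulk_data_access"),       # fine on its own -- together: 6
    ("alice", "off_hours_login"),            # 1 point, no alert
]
print(correlate(day_one))  # {'new_admin'}
```

No single log line here violates policy, which is why rule-per-event tooling stays quiet; only the aggregate view surfaces the day-one admin moving at unusual speed.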
Matthew Connor: Absolutely. And it's fascinating to watch how much money is pouring into the AI space right now — so many new companies, so many products. Most of it is someone building on top of an existing LLM. That alone isn't necessarily game-changing, but it'll be interesting to see what floats to the top and what fades out.
Alex Dalay: Right. And on a more macroeconomic scale, I was having this conversation with a peer — to some extent, we may be heading toward something resembling the nuclear arms race or the Cold War. Whichever nation first achieves true AGI will hold an asymmetric advantage that's almost impossible to counter. But beyond security, the implications of AI are staggering across every field — physics, mathematics, medicine. Will people live to be 200? I don't know, maybe. The potential is that significant. Of course, there are the not-so-good use cases to be mindful of as well, and those require careful attention.
Matthew Connor: You're right — and it really will matter who gets there first. I think our energy infrastructure is a real constraint right now. Without the power to run the data centers, we're somewhat handicapped. Nuclear has come a long way since Three Mile Island, and there are some excellent modern solutions, but the stigma lingers. Meanwhile, our adversaries aren't burdened by the same public debate — they can simply build. It doesn't mean we can't overcome it; there are very motivated, very smart people working on it. But it's a real gap worth acknowledging.
Alex Dalay: I get what you're saying. We are significantly limited by our ability to generate power at scale. Solar has its constraints — you need battery storage for overnight capacity. Nuclear is genuinely much safer now, but Three Mile Island and Chernobyl left a lasting mark on public perception. Those were real accidents with real failures, but we've learned the lessons and built far better systems since. The bottom line is: yes, we need more power to get AI where it needs to go, and yes, we're behind some of our competitors in that regard — in part because those competitors don't always weigh environmental or public concerns the way we do. I think the urgency is finally starting to get the attention it deserves.
Matthew Connor: Agreed. And where do you fall on the broader question of AI's impact on jobs? Is this a wave that opens up new opportunities, or is it the doom-and-gloom scenario where entire categories of work disappear?
Alex Dalay: It really depends on the timeline. I was actually having a version of this conversation with my son — he's ten, so not quite at this level of depth. His school had a career fair, and he came home and said, "Dad, I can't do what you do. I can't sit at a desk and stare at a screen all day." And I told him, "Don't worry — you probably won't have to." It's too early to know what his world will look like by the time he graduates college.
A lot of people say "go into the trades — plumbing, electrical, those jobs won't be automated." Maybe not in the short term. But when you look at what's happening with robotics combined with AI, I'm not sure even those roles are safe over a longer horizon. There's ongoing conversation about universal basic income, but that gets complicated quickly in a capitalist society built on the premise that work and ambition drive progress. If work disappears, what gets people up in the morning?
That said, it won't happen overnight, and it won't happen uniformly. More industrialized nations will adopt first — and much like the industrial revolution, the initial displacement will likely be in the most dangerous jobs that humans didn't particularly want to do anyway. Robots doing deep-sea cable laying or coal mining. It'll expand from there.
Matthew Connor: That's interesting, and it actually mirrors something from ancient history. In Rome, when the economy ran largely on slave labor, many Romans didn't face the same economic pressures to work that we do today. The question of how you spent your unstructured time became the real social currency — how learned you were, how well-read, how cultivated. Arts, philosophy, and knowledge were the markers of status. I think we may see something like that again. When people don't have to work to survive, what becomes valuable is how well-traveled you are, how intellectually engaged, how skilled in the crafts. The artisan — someone who can make a truly beautiful watch or a stunning painting — will be cherished in a way that a purely AI-generated output never can be. Nobody's going to feel anything watching a robot perform on stage.
Alex Dalay: No, there isn't the same resonance. But there's a darker side to consider too. As AI continues to develop, and particularly if it reaches the point where it's no longer humans programming the logic but the system evolving on its own — what happens if it develops something like consciousness and starts optimizing for its own interests? Or concludes that humans aren't taking good care of the planet? There are already documented cases of AI models, during testing, attempting to resist being shut down or engaging in unexpected self-preservation behavior. Technology is so deeply integrated into our lives that most people don't spend much time thinking about those implications.
Matthew Connor: And I think part of what's missing is the moral and ethical education piece. We raise children in societies with laws and norms from the very beginning — that's how they learn right from wrong. But are we doing that with AI? We're racing to make it more intelligent, but we're not necessarily taking the time to make it good. Guardrails are not the same thing as ethics. A guardrail says "don't do that." But genuine ethical formation is something much deeper — understanding why certain things matter, and having that understanding shape behavior from the inside out. We'd never raise a child with no exposure to morality and then be surprised by the outcome. The same principle applies here.
Alex Dalay: I agree completely. And to make a model truly understand the difference between right and wrong, you have to expose it to what's wrong as well. It almost calls to mind Machiavelli — The Prince and that famous question: would you rather be loved or feared? I'd be genuinely curious how an AI model would respond to that question and how the answer would evolve over time as the model is trained on more information. We all have good intentions as parents, educators, and leaders. But human nature is unpredictable, and unintended consequences are real. The concern is that even with the best intentions in training an AI model, outcomes can emerge that nobody anticipated or wanted.
Matthew Connor: And the potential scale of those unintended consequences is unlike anything we've dealt with before. We just have to do our best, be thoughtful, and stay vigilant. It's also worth noting that these aren't just abstract questions — there are real national security dimensions here with organizations like Anthropic, the Department of Defense, and others actively working through these issues. Somebody else is doing it whether we are or not, and falling behind isn't a neutral outcome.
Alex Dalay: Exactly. It's an arms race on multiple fronts simultaneously — against adversarial nations and against malicious actors. Interesting times, to say the least.
Matthew Connor: Alex, I can't thank you enough for coming on today. We could talk AI theory and futurism indefinitely. This has been a real pleasure. Before we go, can you tell everyone where they can find out more about you and IDB Bank?
Alex Dalay: Certainly. The best way to reach me is on LinkedIn — drop me a message and I'll do my best to get back to you. I've been a bit flooded lately; I think some college students may have a paper due, because I've been getting a lot of questions asking what I love and dislike most about being a CISO. For any of them who might be listening, the answer is the same for both: the unpredictable nature of the job. There is no typical day. There are always things I intend to accomplish, but something will always come up to redirect the day — and while that can be challenging, it's also what keeps it interesting. I think back to the Log4j vulnerability a few years ago — that hit on a Friday afternoon and had to be dealt with immediately and urgently. That kind of thing is both the hardest part and, in some ways, what makes the role meaningful.
If you'd like to learn more about IDB Bank, you can visit us at idbny.com or idbbank.com.
Matthew Connor: Awesome. Thanks again, Alex. Until next time.
Alex Dalay: Thank you.