How Movie Studios Defend IP at Massive Scale with Dan Meacham
Dan Meacham serves as Vice President of Cyber and Content Security at Legendary Entertainment, a global film and television production company behind some of the most recognizable franchises in modern media. In his role, Dan is responsible for securing not only traditional enterprise systems, but also the creative content, intellectual property, and complex supply chains that power large-scale movie and television production. His work spans cyber defense, digital forensics, vendor risk, and emerging AI-driven security models in an industry where collaboration extends far beyond corporate boundaries.
Here’s a glimpse of what you’ll learn:
- Why securing a movie studio is fundamentally different from securing a traditional enterprise
- How content production relies on thousands of external collaborators and temporary environments
- The role of digital forensics and watermarking in protecting unreleased media
- How sophisticated attackers target individuals through social engineering and custom applications
- Why AI-driven analytics are essential for threat detection at massive scale
- How long term log retention enables rapid decision making during incidents
- What shared learning intelligence could mean for the future of security operations
In this episode…
Dan Meacham explains how Legendary’s business model reshapes cybersecurity strategy. Each film or television project operates like its own company, complete with a unique technology stack, vendor ecosystem, and lifecycle. Security must adapt quickly to environments that appear and disappear over months or years.
He walks through the realities of protecting creative content across the production pipeline. From dailies and post-production workflows to global distribution, large media files are constantly replicated, shared, and transformed. Watermarking, steganography, and forensic techniques play a critical role in tracing leaks back to their source.
The conversation highlights how attackers exploit human behavior rather than systems alone. Dan shares real world examples where threat actors built targeted applications to extract photos from personal devices, demonstrating how deeply personal and contextual modern attacks have become.
Dan also outlines how AI and machine learning have long existed in both filmmaking and cybersecurity. Today’s challenge is not adopting AI, but governing it across devices, platforms, and supply chains. He introduces the concept of shared learning intelligence as a way to aggregate insights from multiple AI systems without centralizing sensitive data.
The episode closes with a discussion on scale and speed. By retaining over a decade of security logs, Dan’s team can quickly identify anomalous behavior and shut down access before damage spreads. AI accelerates analysis, but human accountability remains central to every decision.
Resources mentioned in this episode
Matthew Connor on LinkedIn
CyberLynx Website
Dan Meacham on LinkedIn
Legendary Entertainment Website
Sponsor for this episode...
This episode is brought to you by CyberLynx.com
CyberL-Y-N-X.com.
CyberLynx is a complete technology solution provider, ensuring your business has the most reliable and professional IT service.
The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web.
Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied.
To learn more, visit cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.
Check out other related episodes:
Building Modern Communities Through People-First Technology with Brianne Bustos
The CISO Who Sees Around Corners: Rick Scot on AI, Fraud, and the Future of Security
Building Trust, Not Turnover: Jason Frame's Guide to Public Sector IT
Transcript:
Cyber Business Podcast – Dan Meacham, VP of Cyber and Content Security at Legendary Entertainment
Matthew: Matthew Connor here, host of the Cyber Business Podcast. Today we're joined by Dan Meacham, VP of Cyber and Content Security at Legendary Entertainment. Dan, welcome to the show.
Dan: Thank you. I appreciate it. Glad to be here.
Matthew: Great to have you. Before we get too far in, a quick word from our sponsors.
[SPONSOR READ: This episode is brought to you by CyberLynx.com. Do you know if a hacker is in your system? Most people and most companies don't — until it's too late and the hacker has already done damage. A hacker's job is to bypass your security, so companies need a way of knowing when someone has gotten past their defenses. That's where CyberLynx comes in. We've partnered with the best cybersecurity companies in the world to provide our clients with the best solutions at the best prices — whether it's managed SIEM, SOC, EDR, MDR, or XDR. We'll help you find the right solution at the right price. Find out more at CyberLynx.com.]
And now back to our show. Dan, for those who aren't familiar, can you tell us about Legendary Entertainment and your role there as VP of Cyber and Content Security?
Dan: Sure, happy to. Legendary is a content production company. We produce comics, television, and feature films, and we have a full digital presence through Nerdist and Geek & Sundry. We produce content and partner with other studios and platforms to distribute it — whether that's Warner Brothers putting films in theaters or Netflix and Amazon Prime distributing other content.
Matthew: This is really interesting because we've had well over 100 CIOs, CTOs, and CISOs on the show and we've never had someone in your position. We get a lot of insight into traditional businesses and what it's like to head up cyber at a standard company — but a movie production company is a whole different animal. What's it actually like being the VP of Cyber and Content Security?
Dan: I'll say I had hair before I started the job. But in all seriousness — for the most part, yes, we have a standard IT department, finance, accounting, legal, HR. The standard blocking and tackling still applies. However, our product is stories. And when you look at how we produce those stories, it involves an enormous number of players. If you've ever sat through a full movie credits roll — God bless you, because there are sometimes 6,000 names or more. That's what it takes to make a film. And of those people, maybe a very small percentage are actual Legendary employees. You're talking about tens of thousands, sometimes hundreds of thousands, of external collaborators in your space at any given time across a single production.
That makes things really interesting from a security perspective. A lot of the time we're working with other people's equipment — rental computers for editing that are only on-site for a few months, equipment from editing companies, visual effects studios, other studios entirely. If we wanted endpoint security on a production and told Disney, "Hey, you need to install our agent because you're touching our data" — that's not going to happen. And then you have people working out of their garage doing music or sound editing from home. There are a lot of layers to navigate.
Here's the other thing that makes it really unique: each movie is treated as its own company. A straightforward production might take eight months to a year. A large-scale visual effects film can take up to three years and cost hundreds of millions of dollars. During that period, some collaborators come in for a couple of days and are gone. Others are there for the duration. Each production also has its own tech stack — depending on the director and producer, they may prefer a particular camera system or collaboration platform that the next production wouldn't touch. That stack gets built up and then dismantled in a couple of years. So I can't sign extended contracts to lock in the best pricing. And when a sequel comes around, the team will say, "We don't want that three-year-old camera — we want the latest and greatest." So the capabilities you have at any given moment are constantly changing, and you have to figure out how to secure them while also thinking about what survives into the next project.
And beyond the cyber side, there's the content protection piece. You might have talent on set wanting to take selfies and post them on social media, and you have to say, "No — nobody knows we're making this yet" or "That's a major spoiler" or "The marketing team has a specific reveal schedule." So there's a lot to unpack.
Do you want to walk through how it all works — from pre-production through principal photography, post-production, and all the way to distribution? Because even at that final stage, you're dealing with pirated streams, unlicensed merchandise using your IP, people selling T-shirts of your characters. There's a lot to cover.
Matthew: I think there are about 10,000 things I want to dive into. Let's start with something that I think is a big mystery even in the tech world — the sheer scale of the media files. Everybody's experienced filming a 4K video of their kid's recital and suddenly it's too big to share. This is a whole other level. Can you walk us through how content storage and collaboration actually work?
Dan: Absolutely. Just to give you a sense of scale — 20 seconds of adding fur to a CGI animal can be a petabyte of data. It's ridiculous. And when we produce content, we have to film at the highest resolution possible. Even though your TV can do 4K or maybe 8K, we have to shoot at least twice that resolution — because someday you might have a 12K television, and if we didn't capture it at that fidelity, it'll burn your eyes like watching a VHS tape on a modern screen.
Some directors still prefer shooting on film, which actually makes content protection a bit easier. But even then, we're still streaming footage from cameras to devices and capturing behind-the-scenes photos for continuity — making sure the blood spatter is in the right place, the ponytail is on the correct side, all of that.
For digital production, the footage is shot onto massive storage cards — terabytes and petabytes worth — at extremely high resolution. At the end of each shoot day, that footage gets uploaded to an asset management system for dailies. A producer or co-director on the other side of the world may need to see what was shot that day to determine whether a scene needs to be reshot. That's a standard part of the workflow.
Once principal photography wraps, the entire drive array often gets physically moved to the post-production facility. They have fiber connectivity down to the edit bays, and increasingly we do remote editing as well. But not everyone is working on the same files simultaneously. Different vendors have different chunks of the content. If someone is doing visual effects, they might have large segments of the actual film. Someone doing Foley — the sound effects — has another piece. The colorist and the team doing filters have another. At any given point, you can have a dozen vendors each holding almost your entire movie, and you have to have a high degree of trust that their security controls are solid.
There are countermeasures built into how we distribute that content. For instance, if we're sending footage to a dubbing team that's localizing dialogue, we'll often send a high-contrast, black-and-white version — they can see what they need to work with, but they're only adding a layer on top. And everything is watermarked. We can have your name, your file number, and other identifiers embedded throughout. If there's a leak, we know exactly where in the workflow it originated. It's not about accusing anyone of intentional leaking — it's about knowing if something in the supply chain has been compromised so we can address it fast.
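The per-recipient watermarking Dan describes can be illustrated with a toy least-significant-bit steganography scheme in Python: embed a recipient identifier into the low bits of image data so a leaked copy can be traced back. This is only a sketch of the concept, not Legendary's actual technique; real forensic watermarks are far more robust and survive re-encoding, cropping, and even camcorder capture.

```python
# Toy forensic watermark: hide a recipient ID in the low bit of each
# "pixel" byte. All names and data here are illustrative placeholders.

def embed(pixels: bytearray, tag: bytes) -> bytearray:
    """Hide each bit of `tag` (LSB-first per byte) in the low bit of
    successive pixel bytes."""
    out = bytearray(pixels)
    bits = (b for byte in tag for b in ((byte >> k) & 1 for k in range(8)))
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract(pixels: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden tag from the low bits."""
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(sum(bits[i * 8 + k] << k for k in range(8))
                 for i in range(n_bytes))

frame = bytearray(range(256))            # stand-in for raw image data
marked = embed(frame, b"VFX-VENDOR-07")  # per-recipient identifier
print(extract(marked, 13))               # b'VFX-VENDOR-07'
```

Each copy sent out gets a different tag, so the recovered identifier points to exactly one link in the supply chain, which is how a partial watermark in a leaked frame can be traced back to its source.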
Then once the film is packaged, it goes to a vendor that puts it into their vault. From there — and this might surprise people — each theater projector has its own unique number and key. The movie gets pushed down to that specific projector for that specific theater, for specific hours on specific days. The file and the key are delivered separately. If a projector goes down, you can't just copy the file to another one — everything is keyed to that specific device. And once the license expires, the file self-deletes. The encryption has multiple layers — trust me, there's no viable way to decrypt it.
Matthew: The watermarking and digital forensics component is fascinating. I've heard that some of it is hidden even in what looks like a near-final product — embedded in individual frames so you can trace exactly whose copy leaked. You mentioned audio forensics too?
Dan: Yes. There are multiple layers of forensics — audio and visual. On the audio side, the track itself can be identified. On the visual side, you may have a cartoon where a leaf is turned a specific way, or some other subtle visual element. Depending on the property, the forensic markers can go all the way down to identifying a specific theater, a specific auditorium, or even a specific showing. So when something shows up online recorded on a smartphone — and today's smartphones record better than any camcorder — we can often identify exactly where and when it was recorded.
And beyond the forensics, there are active countermeasures too. Infrared lights can be deployed in a theater that are invisible to the audience but interfere with a camera recording the screen — all you capture is a white screen. Or a white noise cancelling device can be placed in a room so that any microphone recording nearby audio just picks up white noise. It's a continuous loop of countermeasures and counter-countermeasures, like a radar and a radar detector. Whenever we develop something new, the other side eventually develops a workaround.
One thing worth noting: most leaks happen with finished or near-finished product, not raw production footage. Nobody wants to watch a movie full of green screens and reference strings. They want the finished version they'd see on a streaming service. But the really creative attacks are the ones that target the production workflow itself.
Matthew: Tell me about some examples of those.
Dan: Sure. We had a monster movie in development, and a vendor who builds creatures for various productions posted a behind-the-scenes tour of their studio on YouTube. They were showcasing a horror film they'd worked on — totally legitimate. But in the background, on someone's computer screen, was a tiny image of a three-headed monster. Barely the size of a thumbnail. And because the film credit community is very well-connected, fans who follow production credits recognized immediately whose next project that had to be. It took us about a day and a half to track down the source. We found a partial watermark visible in the image that pointed back to the vendor's YouTube video. The company was mortified and took the video down immediately — they had no idea it was even visible. We ultimately deflected the story publicly by suggesting the image was from a 1954 archival production, which gave people enough plausible deniability to let the speculation die down. Crisis managed.
But the more sophisticated example is one where a targeted social engineering attack was used to extract a behind-the-scenes photo during production. During COVID, a scene was photographed on set using the production's streaming workflow from camera to iPad, and the image leaked. When it was reported to us, we traced it back through the workflow — the aspect ratio matched the specific camera setup they were using, we knew what day the scene was shot, and we knew who was present. Nobody on that set would risk their career by leaking intentionally; one bad leak and you never work in the industry again.
What actually happened was a group of dedicated fans had mapped the supply chain. They looked through credits to identify who would have access to behind-the-scenes photos on that production. Then they used a data broker — for literally a few dollars — to profile that person. They found out this individual liked a particular type of card game. So they built a card game app. It passed all the app store review requirements because on its surface it was legitimate — a real, functional card game. It requested access to your photos, like many apps do, and as the user you'd just click through to allow it. Once installed, the app had access to the entire photo library. They then bought targeted ads — maybe $4 — to push that free app to the specific demographic they'd identified: people in that geographic market, using that type of device, with that interest profile. That narrowed it to around 25 people. One of them downloaded the app. That's how they got the photo.
This is why we do extensive onboarding for everyone who comes onto a production — employees, contractors, gig workers, craft services, everyone. We tell them: in your first two weeks, you're going to get a text message that says "Hi, this is the CEO. I need a favor." Ask yourself how the CEO got your personal cell number, and why they'd ask you rather than their three assistants. If you get a message like that, report it to us — and we'll give you a gift card for coffee, because now you're hunting for it rather than just being a target. We also tell them: if you want to post anything on social media from the production, email us first. Because even if we're not going to use a particular concept design, we might use it in a sequel. Even if a costume seems harmless, it might be part of a major reveal. Everything gets cleared.
The goal is to protect the data, the device, and the user. We have to educate not just our internal team but our entire extended supply chain — all the way down to craft services — because if they see something, we need to know about it. There may be an element in the workflow that's been quietly compromised that we don't know about yet.
Matthew: That app story is next-level. That's practically nation-state level targeting — profiling one specific individual and building a purpose-built tool to extract exactly what you need from them. Most end-user security training can't prepare you for that. You're in a constant cat-and-mouse game where the bad guys only have to succeed once.
Dan: Exactly. And here's where AI comes into all of this in a really interesting way. We've been using machine learning and AI in our industry for decades — how else do you put Forrest Gump in a film alongside historical figures? The entertainment industry arguably invented the deepfake. So when we onboard new vendors or build out new workflows, AI governance is already part of our assessment process. We go through detailed questionnaires about what models are being used, how data is being handled. We have a major office in Beijing, and there are models available in different jurisdictions that create real governance questions.
But the thing that concerns me most in our supply chain isn't the AI models directly. It's the ambient AI that lives on everyone's personal devices. Think about how much corporate information ends up on personal phones — iPhones, Androids. And think about how deeply integrated AI has become in those operating systems. Siri, Gemini, whatever it is — it's reading your emails, your attachments, your calendar, your meeting notes. That's enormously useful. But when someone on our production has confidential documents in their personal email, those documents are being surfaced and potentially trained on by the AI built into their phone. And here's the attack vector: if I write an app that requests access to your phone's AI — maybe framed as a marketing personalization tool — I can potentially use it to learn what's on your device and exfiltrate targeted documents. That merger document. That script. That storyboard. And there's currently no real e-discovery layer inside these AI systems. The major cloud productivity suites have robust e-discovery tools that can trace how you've shared documents — across email, chat, voice messages. But there's nothing equivalent that says, "This concept for a monster movie was processed by your phone's AI, and here's where those tokens went."
This is what's driving the work we're doing on what I call the Shared Learning Intelligence Platform — SLIP. The concept is a "fetch" capability: rather than dumping everything into a single LLM, the SLIP can go out and request AI enrichments from different platforms — pull a contract summarization from one system, pull security telemetry from another, pull behavioral analytics from a third — and give me a unified view. Right now many platforms don't want you touching their AI via API; you have to log in manually. But there have to be ways to pull those enrichments together. Because once I have that, I can apply it to threat hunting, orchestration, automated playbooks — I can recognize patterns across structured and unstructured data, across platforms, and take action on them.
A concrete example: in 2017 I had built out a behavioral analytics system with over a decade of logs. We had a person who had always worked on Godzilla productions — always accessed from Australia, always on a Mac, consistent behavior patterns for years. Then one day, that same user account touched a storyboard for a Kong production, from Romania, at unusual hours, on an Android device. My system — back in 2017, using scripted logic rather than modern AI — checked: who else from that region has accessed our systems in the last 30 to 90 days? Nobody. Who else has touched this storyboard? A handful of people. Does any of that match? No. In 40 seconds, it terminated all access for that account — not just the one file, but everything. Forty seconds is actually a long time in a cloud-first environment; a lot of damage can be done. But for most organizations, 40 seconds to detect and respond to an anomaly like that would be extraordinary.
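The scripted logic from that 2017 example can be approximated as a simple baseline comparison: flag an access event when region, device, and asset all break the user's established pattern and no peers explain the new region, then revoke everything. Field names and thresholds here are illustrative, not the production rules:

```python
from dataclasses import dataclass

@dataclass
class Access:
    user: str
    region: str
    device: str
    asset: str

# Established baseline for this user (years of consistent behavior).
baseline = {"region": "AU", "device": "mac", "assets": {"godzilla"}}
# Regions anyone in the org has accessed from in the last 30-90 days.
recent_regions_90d = {"AU", "US"}

def is_anomalous(event: Access) -> bool:
    """Flag when region, device, AND asset all mismatch the baseline
    and no recent peer activity explains the new region."""
    mismatches = [event.region != baseline["region"],
                  event.device != baseline["device"],
                  event.asset not in baseline["assets"]]
    peer_cover = event.region in recent_regions_90d
    return all(mismatches) and not peer_cover

event = Access("editor-au", region="RO", device="android", asset="kong")
if is_anomalous(event):
    # Revoke everything, not just the one file, as in the 40-second case.
    print(f"revoking ALL access for {event.user}")
```

The point of the all-or-nothing revocation is speed: confirm or restore access after the fact, rather than letting an anomalous session keep running while humans investigate.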
Now, with modern AI, I can layer on top of that: how many emails have these two groups — the Godzilla team and the Kong team — exchanged? Have they collaborated on any shared files? Was this person explicitly invited to this document by someone authorized to invite them? Maybe there's a Godzilla vs. Kong film in development and they genuinely need access. AI can trace those relationships across structured and unstructured data and tell me: yes, there's a documented relationship here, this access was expected; or no, this is anomalous and needs immediate attention.
The thing that makes our environment uniquely complex — and analogous to a hospital in some ways — is that we have massive numbers of collaborators who aren't employees, who come in for days or years, and who may disappear and reappear on a different production four years later. A hospital has doctors, anesthesiologists, pharmacies, outside physician groups all operating within one building — most of them not direct employees of the hospital. We have the same structure, just built around storytelling. And in both cases, just because someone has access and entitlements doesn't mean they're authorized for everything. You still need the user behavior analytics to tell the full story.
Matthew: That is the power of AI — making connections across massive datasets that no human mind could hold simultaneously. Ten years of logs is nothing for an AI to traverse. And I think that's what gets really exciting about where this goes: as you move AI to edge devices, to endpoints, and tie that back into a centralized platform like your SLIP, you get coverage everywhere with intelligence aggregated in one place. That's where the good guys start winning this arms race. We're in this really exciting, sometimes overwhelming stage where AI is enormously useful but still needs supervision — like a self-driving car that's gotten incredibly good but still needs a human watching. I think you're operating at an 11 out of 10 in terms of complexity and threat level. The things you're doing are genuinely cutting-edge, and I could spend another four hours on this. We'll have to bring you back.
Before we go, can you tell everyone where they can find out more about you and about Legendary Entertainment?
Dan: Sure. Legendary is easy to find — legendary.com. You'll recognize us by the Celtic knot logo at the beginning of hopefully most of your favorite movies. And if you look at recent productions, you may even see my name in the credits. You can also find me on LinkedIn — happy to connect with anyone.
Matthew: That's awesome, Dan. Thanks again for coming on. Until next time!
Dan: Until next time. Thank you.