Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!
Is Network Detection and Response (NDR) coming back after being shoved aside a bit by EDR? Is this for real?
What's the value proposition of NDR in 2026? Some people still don't understand it. How does NDR apply to the world of WFH, cloud/SaaS, encryption, high bandwidth, etc.?
Is the value of NDR the same, or different, when it comes to public (or private) cloud?
How does NDR fill visibility gaps that identity-based and agent-based solutions cannot?
What does NDR offer that built-in cloud security tooling (as of right now) does not? Would you call NDR a key cloud security control?
“10X SOC” sounds great. But for an organization stuck in "SIEM 1.0" with poor data quality and manual workflows, is “AI-native MDR” a "leapfrog" opportunity or a recipe for disaster?
We’ve seen the rise of "Decoupled SIEM" and security data lakes. Does a "Modern SIEM" even need to exist if an MDR platform has an agentic layer doing the heavy lifting?
You’ve argued for AI-native over AI-bolted-on. For an end user, what are the tangible differences between using "AI inside a legacy SIEM" and using an "AI-native separate product"?
What is the one task you thought AI would handle by now that still requires a senior human analyst to step in?
If a CISO is using an AI MDR, "Mean Time to Detect" (MTTD) starts to look like a vanity metric because the machine is instant. What is the new golden metric for an AI-powered SOC? Is it "Time to Context," "Reduction in Human Toil," or something else?
How do you help a skeptical SOC Manager—who has been burned by false positives for a decade—trust an autonomous agent to perform a "containment" action at 3:00 AM?
We just saw a security tool (Trivy) get used to pop an AI infrastructure tool (LiteLLM) to eventually pop end users. Have we reached the point where our security tooling is actually our largest unmanaged attack surface?
Why now? Software supply chain security had the perennial vibe of “not top concern” for most organizations, right?
TeamPCP pushed malicious code to existing GitHub tags. We’ve been screaming about pinning versions to SHAs for years, but clearly, nobody is listening. Is it time to admit that 'convenience' is the primary enemy of supply chain security?
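(For listeners who want to check their own repos before the episode: below is a minimal, illustrative Python sketch that flags GitHub Actions `uses:` references pinned to mutable tags rather than immutable commit SHAs. The regexes and the file-handling are simplified assumptions for illustration, not a vetted tool and not anything TeamPCP-specific.)

```python
# Illustrative sketch: flag GitHub Actions "uses:" references that point
# at mutable tags (e.g., @v4) instead of immutable 40-char commit SHAs.
# The regexes below are simplified assumptions, not a complete YAML parser.
import pathlib
import re
import sys

USES_RE = re.compile(r"^\s*(?:-\s*)?uses:\s*([^\s#]+)")
SHA_PIN_RE = re.compile(r"@[0-9a-f]{40}$")  # full commit SHA at the end

def unpinned_actions(workflow_path: str) -> list[tuple[int, str]]:
    """Return (line_number, reference) pairs that are not SHA-pinned."""
    hits = []
    text = pathlib.Path(workflow_path).read_text()
    for lineno, line in enumerate(text.splitlines(), start=1):
        match = USES_RE.match(line)
        if not match:
            continue
        ref = match.group(1).strip("'\"")
        # Local actions ("./path") and other @-less refs are skipped here.
        if "@" in ref and not SHA_PIN_RE.search(ref):
            hits.append((lineno, ref))
    return hits

if __name__ == "__main__":
    for lineno, ref in unpinned_actions(sys.argv[1]):
        print(f"line {lineno}: {ref} is pinned to a mutable tag, not a SHA")
```

Run it against the files under .github/workflows/; anything it prints is a reference that an attacker who can move a tag could hijack, which is exactly the class of attack discussed here.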
The Axios incident showed a victim compromised in under two minutes. In a world of auto-updating dependencies, is the concept of a human-in-the-loop for software updates officially dead, or do we need to look very hard at version pinning and such?
With the XZ Utils case, we saw a long-game social engineering attack. Beyond just 'watching npm closely,' what are the realistic architectural safeguards for an org that knows it can't audit every line of an update?
We’ve spent the last three years talking about SBOMs (Software Bill of Materials) like they were a magic pill for supply chain health. But if the scanner producing the SBOM is the one that's compromised, isn't the SBOM just a signed receipt for your own house being on fire?
What is the one practical thing organizations can do to ensure their CI/CD isn't a credential-exfiltration-as-a-service platform?
You argue that declaring the existing SIEM obsolete is a "marketing slogan" rather than a true thesis. What is the real pain point and the actual gap in traditional SIEMs, as opposed to the more sensational claims?
You highlight that "correlation, state, timelines, and real-time detection require locality," making centralization a necessary trade-off. Can a truly federated or decoupled SIEM architecture achieve the same fidelity and real-time performance for complex, stateful detections as a centralized one?
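(As a concrete illustration of why stateful detection wants locality, here is a toy brute-force rule in Python: it has to keep a per-user sliding window of recent failures in memory right next to the event stream. This is a hypothetical sketch, not any vendor's detection engine; the event schema and thresholds are invented for illustration.)

```python
# Toy stateful detection: alert when a user succeeds at login after N+
# recent failures within a time window. The per-user state (a deque of
# failure timestamps) is the part that must live close to the events.
from collections import defaultdict, deque

WINDOW_SECONDS = 300      # assumed sliding window
FAILURE_THRESHOLD = 5     # assumed failure count before a success alerts

recent_failures: dict[str, deque] = defaultdict(deque)

def process_event(event: dict) -> str | None:
    """event: {'ts': epoch_seconds, 'user': str, 'outcome': 'fail'|'success'}"""
    user, ts = event["user"], event["ts"]
    window = recent_failures[user]
    # Expire failures that fell out of the sliding window.
    while window and ts - window[0] > WINDOW_SECONDS:
        window.popleft()
    if event["outcome"] == "fail":
        window.append(ts)
        return None
    if len(window) >= FAILURE_THRESHOLD:
        window.clear()
        return f"possible brute force: {user} succeeded after repeated failures"
    return None
```

Federating even this toy rule means either shipping every login event to wherever the per-user state lives, or shipping the state to the events; that round trip is precisely the fidelity and latency cost the question is probing.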
You call the rise of independent security data pipelines the "SIEM Trojan Horse." How quickly is this abstraction layer turning SIEM into a “swappable” component, and what should SIEM vendors have done differently years ago to prevent this market from existing?
This "AI SOC" thing, is this even real? Is AI in a SOC a better label? Do you think major SIEM vendors will own this very soon, like they did with UEBA and SOAR?
If volume-based pricing is flawed because it penalizes good security hygiene, what is a better SIEM pricing model that fairly addresses compute, enrichment, and retention costs without just shifting the volume cost to unpredictable query charges?
You question the idea that startups can find a better way to release detection rules than large vendors with significant content teams. What metrics should security leaders use to evaluate the quality of a vendor's detection engineering (DE) output beyond just coverage numbers? Can AI fix DE?
Your book focuses on the US, China, and Russia. When you were planning the book, did you also want to cover players like Israel, Iran, and North Korea?
Most of our listeners are migrating to or operating heavily in the cloud. As nations refine their “digital battlefield” strategies, does the "shared responsibility model" actually hold up against a nation-state actor?
How does a company’s detection strategy need to change when the adversary isn't a teenager looking for a ransom, but a state-funded group whose goal might be long-term persistence or subtle data manipulation? How should people allocate their resources to defending against both of these threats?
How afraid are you of "bad guy with AI" scenarios? Mild anxiety or apocalyptic fears?
Do you see AI primarily helping "Tier 2" nations close the capability gap with the "Big Three," or does it just further cement the dominance of the nations that own the underlying compute and models?
You’ve spent a lot of time as an analyst looking at how enterprises buy and run security tech. For a CISO at (say) a mid-tier logistics company, should 'nation-state cyberattacks' even be on their threat model? Or is worrying about the spies just a form of security theater when they haven’t even solved basic credential theft yet?
Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified?
AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU?
If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint?
Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk?
The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications) but you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications?
Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models?
Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?
We’ve spent decades obsessed with MTTD (Mean Time to Detect) and MTTR (Mean Time to Respond). As AI agents begin to handle the bulk of triage at machine speed, do these metrics become "vanity metrics"? If an AI resolves an alert in seconds, does measuring the "mean" still tell us anything about the health of our security program, or should we be looking at "Time to Context" instead?
You mentioned the Maturity Triangle. Can you walk us through that framework? Specifically, how does AI change the balance between the three points of that triangle—is it shifting us from a "People-heavy" model to something more "Engineering-led," and where does the "Measurement" piece sit?
Google is famous for its "Engineering-led" approach to D&R. How is Google currently measuring the success of its own internal D&R program? Specifically, how are you quantifying "Toil Reduction"? Are we measuring how many hours we saved, or are we measuring the complexity of the threats our humans are now free to hunt?
Toil reduction is a laudable goal for the team members, but what metrics do we track and report up to document the overall improvement in D&R for Google’s board?
When you talk to your board about the success of AI in your security program, what are the 2 or 3 "Golden Metrics" that actually move the needle for them? How do you prove that an AI-driven SOC is actually better, not just faster?
We often talk about AI as an "assistant," but we’re moving toward Agentic SOCs. How should organizations measure the "unit economics" of their SOC? Should we be tracking the ratio of AI-handled vs. Human-handled incidents, and at what point does a high AI-handle rate become a risk rather than a success?
What is the right way for people to bridge the gap and translate executive dreams and board goals into the reality of life on the ground?
How do we talk to people who think they have "transformed" their SOC simply by buying a better, shinier product (like a modern SIEM) while leaving their old processes intact?
What are the specific challenges and advantages you’ve seen with a federated SOC versus a centralized one? What does a "federated" or "sub-SOC" model actually mean in practice?
Why is the message that "EDR doesn't cover everything" so hard for some people to hear? Is this obsession with EDR a business decision or technology debt?
How do you expect AI to change the calculus around data centralization versus data federation?
What is your favorite example of telemetry that is useful, but usually excluded from a SIEM?
What are the Detection and Response organizational metrics that you think are most valuable?
Is the continued use of Excel an issue of tooling, an issue of laziness, or just a sign that it is a fundamentally good way to interact with a small database?