What are the AI Vulnerabilities We Need to Worry About

Episode Summary

Timothy De Block sits down with Keith Hoodlet, Security Researcher and founder of Securing.dev, to navigate the chaotic and rapidly evolving landscape of AI security.

They discuss why "learning" is the only vital skill left in security, how Large Language Models (LLMs) actually work (and how to break them), and the terrifying rise of AI Agents that can access your email and bank accounts. Keith explains the difference between inherent AI vulnerabilities—like model inversion—and the reckless implementation of AI agents that leads to "free DoorDash" exploits. They also dive into the existential risks of disinformation, where bots manipulate human outrage and poison the very data future models will train on.

Key Topics

  • Learning in the AI Era:

    • The "Zero to Hero" approach: How Keith uses tools like Claude to generate comprehensive learning plans and documentation for his team.

    • Why accessible tools like YouTube and AI make learning technical concepts easier than ever.

  • Understanding the "Black Box":

    • How LLMs Work: Keith breaks down LLMs as a "four-dimensional array of numbers" (weights) where words are converted into tokens and calculated against training data. * Open Weights: The ability for users to manipulate these weights to reinforce specific data (e.g., European history vs. Asian Pacific history).

  • AI Vulnerabilities vs. Attacks:

    • Prompt Injection: "Social engineering" the chatbot to perform unintended actions.

    • Membership Inference: Determining if specific data (like yours) is in a training set, which has massive implications for GDPR and the "right to be forgotten".

    • Model Inversion: Stealing weights and training data. Keith cites speculation that Chinese espionage used this technique to "shortcut" their own model training using US labs' data.

    • Evasion Attacks: A technique rather than a vulnerability. Example: Jason Haddix bypassing filters to generate an image of Donald Duck smoking a cigar by describing the attributes rather than naming the character.

  • The "Agent" Threat:

    • Running with Katanas: Giving AI agents access to browsers, file systems (~/.ssh), and payment methods is a massive security risk.

    • The DoorDash Exploit: A real-world example where a user tricked a friend's email-connected AI bot into ordering them free lunch for a week.

  • Supply Chain & Disinformation:

    • Hallucination Squatting: AI generating code that pulls from non-existent packages, which attackers can then register to inject malware.

    • The Cracker Barrel Outrage: How a bot-driven disinformation campaign manufactured fake outrage over a logo change, fooling a major company and the news media.

    • Data Poisoning: The "Russian Pravda network" seeding false information to shape the training data of future US models.

Memorable Quotes

  • "It’s like we’re running with... not just scissors, we’re running with katanas. And the ground that we're on is constantly changing underneath our feet." — Keith Hoodlet

  • "We never should have taught runes to sand and allowed it to think." — Keith Hoodlet

  • "The biggest bombshell here is that we are the vulnerability. Because we're going to get manipulated by AI in some form or fashion." — Timothy De Block

Resources Mentioned

Books:

Videos & Articles:

About the Guest

Keith Hoodlet is a Security Researcher at Trail of Bits and the creator of Securing.dev. A self-described "technologist who wants to move to the woods," Keith specializes in application security, threat modeling, and deciphering the complex intersection of code and human behavior.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]

[RERELEASE] How to make time for a home lab

In this timely episode of the Exploring Information Security podcast, Chris Maddalena and I continue our home lab series by answering a listener's question on how to find time for a home lab.

Chris (@cmaddalena) and I were asked the question on Twitter, "How do you make time for a home lab?" We answered the question on Twitter, but also decided the question was a good topic for an EIS episode. Home labs are great for advancing a career or breaking into information security. To find the time for them requires making them a priority. It's also good to have a purpose. The time I spend with a home lab is often sporadic and coincides with research on a given area.

In this episode we discuss:

  • Making a home lab a priority

  • Use cases for a home lab

  • Ideas for fitting a home lab into a busy schedule

More resource:

[RERELEASE] How to build a home lab

In this getting stared episode of the Exploring Information Security podcast, I discuss how to build a home lab with Chris Maddalena.

Chris (@cmaddalena) and I have submitted to a couple of calls for training at CircleCityCon and Converge and BSides Detroit this summer on the topic of building a home lab. I will also be speaking on this subject at ShowMeCon. Home labs are great for advancing a career or breaking into information security. The bar is really low on getting started with one. A gaming laptop with decent specifications works great. For those with a lack of hardware or funds there are plenty of online resources to take advantage of. 

In this episode we discuss:

  • What is a home lab?

  • Why would someone want to build a home lab?

  • What are the different kinds of home labs?

  • What are the requirements?

  • How to get started building a home lab

More resources:

How to Build an AI Governance Program with Walter Haydock

Summary:

Timothy De Block sits down with Walter Haydock, founder of StackAware, to break down the complex world of AI Governance. Walter moves beyond the buzzwords to define AI governance as the management of risk related to non-deterministic systems—systems where the same input doesn't guarantee the same output.

They explore why the biggest AI risk facing organizations today isn't necessarily a rogue chatbot or a sophisticated cyber attack, but rather HR systems (like video interviews and performance reviews) that are heavily regulated and often overlooked. Walter provides a practical, three-step roadmap for organizations to move from chaos to calculated risk-taking, emphasizing the need for quantitative risk measurement over vague "high/medium/low" assessments.

Key Topics & Insights

  • What is AI Governance?

    • Walter defines it as measuring and managing the risks (security, reputation, contractual, regulatory) of non-deterministic systems.

    • The 3 Buckets of AI Security:

      1. AI for Security: AI-powered SOCs, fraud detection.

      2. AI for Hacking: Automated pentesting, generating phishing emails.

      3. Security for AI: The governance piece—securing the models and data themselves.

  • The "Hidden" HR Vulnerability:

    • While security teams focus on hackers, the most urgent vulnerability is often in Human Resources. Tools for firing, hiring, and performance evaluation are highly regulated (e.g., NYC Local Law 144, Illinois AI Video Interview Act) yet frequently lack proper oversight.

  • How to Build an AI Governance Program (The First 3 Steps):

    1. Establish a Policy: Define your risk appetite (what is okay vs. not okay).

    2. Inventory Systems (with Amnesty): Ask employees what they are using without fear of punishment to get an accurate picture.

    3. Risk Assessment: Assess the inventory against your policy. Use a tiered approach: prioritize regulated/cyber-physical systems first, then confidential data, then public data.

  • Quantitative Risk Management:

    • Move away from "High/Medium/Low" charts. Walter advocates for measuring risk in dollars of loss expectancy using methodologies like FAIR (Factor Analysis of Information Risk) or the Hubbard Seiers method.

  • Emerging Threats:

    • Agentic AI: The next 3-5 years will be defined by "non-deterministic systems interacting with other non-deterministic systems," creating complex governance challenges.

  • Regulation Roundup:

    • Companies are largely unprepared for the wave of state-level AI laws coming online in places like Colorado (SB 205), California, Utah, and Texas.

Resources Mentioned

  • ISO 42001: The global standard for building AI management systems (similar to ISO 27001 for info sec).

  • Cloud Security Alliance (CSA): Recommended for their AI Controls Matrix.

  • Book: How to Measure Anything in Cybersecurity Risk by Douglas Hubbard and Richard Seiers.

  • StackAware Risk Register: A free template combining Hubbard Seiers and FAIR methodologies.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


Exploring Cribl: Sifting Gold from Data Noise for Cost and Security

Summary:

Timothy De Block and Ed Bailey, a former customer and current Field CISO at Cribl, discuss how the company is tackling the twin problems of data complexity and AI integration. Ed explains that Cribl's core mission—derived from the French word "cribé" (to screen or sift)—is to provide data flexibility and cost management by routing the most valuable data to expensive tools like SIEMs and everything else to cheap object storage. The conversation covers the 40x productivity gains from their "human in the loop AI", Cribl Co-Pilot, and their expansion into "agentic AI" to fight back against sophisticated threats.

Cribl's Core Value Proposition

  • Data Flexibility & Cost Management: Cribl's primary value is giving customers the flexibility to route data from "anywhere to anywhere". This allows organizations to manage costs by classifying data:

    • Valuable Data: Sent to high-value, high-cost platforms like SIMs (Splunk, Elastic).

    • Retention Data: Sent to inexpensive object storage (3 to 5 cents per gig).

    • Matching Cost and Value: This approach ensures the most valuable data gets the premium analysis while retaining all data necessary for compliance, addressing the CISO's fear of missing a critical event.

  • SIEM Migration and Onboarding: Cribl mitigates the risk of disruption during SIM migration—a major concern for CISOs—by acting as an abstraction layer. This can dramatically accelerate migration time; one large insurance company was able to migrate to a next-gen SIEM in five months, a process their CISO projected would have taken two years otherwise.

  • Customer Success Story (UBA): Ed shared a story where his team used Cribl Stream to quickly integrate an expensive User and Entity Behavior Analytics (UBA) tool with their SIEM in two hours for a proof-of-concept. This saved 9-10 months and the deployment of 100,000 agents, providing 100% value from the UBA tool in just two weeks.

AI Strategy and Productivity Gains

  • "Human in the Loop AI": Cribl's initial AI focus is on Co-Pilot, which helps people use the tools better. This approach prioritizes accuracy and addresses the fact that enterprise tooling is often difficult to use.

  • 40x Productivity Boost: Co-Pilot Editor automates the process of mapping data into complex, esoteric data schemas (for tools like Splunk and Elastic). This reduced the time to create a schema for a custom data type from approximately a week to about one hour, representing a massive gain in workflow productivity.

  • Roadmap Shift to Agentic AI: Following CriblCon, the roadmap is shifting toward "agentic AI" that operates in the background, focused on building trust through carefully controlled and validated value.

  • AI in Search: The Cribl Search product has built-in AI that suggests better ways for users to write searches and utilize features, addressing the fact that many organizations fail to get full value from their searching tools because users don't know how to use them efficiently.

Challenges and Business Model

  • Data Classification Pain Point: The biggest challenge during deployment is that many users "have never really looked at their data". This leads to time spent classifying data and defining the "why" (what is the end goal) before working on the "how".

  • Pricing Models: Cribl offers two main models:

    • Self-Managed (Stream & Edge): Uses a topline license (based on capacity/terabytes purchased).

    • Cloud (Lake & Search): Uses a consumption model (based on credits/what is actually used).

  • Empowering the Customer: Cribl's mission is to empower customers by opening choices and enabling their goals, contrasting with other vendors where it's "easy to get in, the data never gets out".

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


What is BSides ICS?

Summary:

Timothy De Block sits down with Mike Holcomb, founder of UtilSec, to discuss the critical and often misunderstood world of Operational Technology (OT) and Industrial Control Systems (ICS) security. Mike shares the origin story of BSides ICS, a global community-driven event designed to bridge the gap between IT security, engineering, and plant operations. The conversation dives into the "myth" of the air gap, the physical security risks in manufacturing, and why small utilities are the next major front in the cyber arms race.

The Reality of OT Security

  • The Vanishing Air Gap: While many believe OT systems are isolated, true air gaps are rare. Connectivity is driven by contractors dropping 5G hotspots for remote troubleshooting or employees charging phones on engineering workstations, inadvertently bridging OT networks to the internet.

  • Physical Security is Cyber Security: If an attacker can physically touch a device, they can own it. Mike shares a story of a VPN concentrator being stolen from a data center because there were no cameras and physical access was loosely controlled.

  • IT/OT Convergence: OT security is now "cyber security" because it involves TCP/IP packets, Windows machines in production environments, and networked PLC (Programmable Logic Controllers) and HMIs (Human Machine Interfaces).

BSides ICS: A Practical Community

  • Origin Story: BSides ICS was born out of a desire for a practical, down-to-earth alternative to highly academic or expensive "bleeding edge" conferences.

  • Global Expansion: Following a successful flagship event in Miami, BSides ICS is expanding globally in 2026 with events planned for Australia, Singapore, Argentina, Mexico City, and Bristol (UK).

  • Miami Flagship Details:

    • Date: February 23, 2026 (Monday before the S4 conference).

    • Location: Miami Dade College, Wolfson Campus.

    • Keynotes: Bryson Bort and Dr. Emma Stewart.

    • Features: Lockpick Village, ICS Village CTF (Capture the Flag), and a focus on diversity (achieving 50% women speakers last year).

The Threat Landscape: State Actors vs. Activists

  • The Hybrid Threat: Mike discusses his research on the alignment of state adversaries (low frequency, high impact) and activists (high frequency, low impact). The concern is a move toward a high-frequency, high-impact threat environment.

  • The "Long Tail" of Utilities: There are 50,000 water utilities in the U.S. 35,000 of them serve fewer than 500 clients. These "mom and pop" utilities lack the budget for basic IT security, let alone advanced OT monitoring, making them highly vulnerable targets.

  • Lessons from Colonial Pipeline & Jaguar Land Rover: Major incidents have shifted executive mindsets. Jaguar Land Rover's plants were down for five weeks due to fundamental failures in backup and recovery, highlighting that even large companies struggle with security basics.

How to Get Started in OT/ICS

  • Empathy is a Tool: The biggest problem in the field is a lack of empathy between IT and OT teams. Successful security requires understanding the engineer's goal (keeping the plant running) before enforcing security controls.

  • Free Resources: Mike provides over 40 hours of free course content on YouTube, covering OT essentials, OSINT, and pen testing for OT.

Resources Mentioned

  • Mike Holcomb’s Website: mikeholcomb.com (Training, consulting, and course links).

  • BSides ICS Website: bsidesics.org.

  • Standards: IEC 62443 (The global framework for securing OT/ICS).

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


Cybersecurity Career Panel: Transitioning from Technical to Leadership

Summary:

In this episode, Timothy De Block sits down with a panel of cybersecurity leaders—Chris Anderson, Roger Brotz, and Mike Vetri—to discuss the realities of moving from "boots on the ground" technical roles to senior leadership. The conversation explores the challenges of letting go of the keyboard, the critical importance of emotional intelligence, and why "empathy" is a high-performance tool in a high-stress industry.

Meet the Panel

  • Chris Anderson: Security Consultant and Architect known for his "pot-stirring" approach to solving complex organizational security problems.

  • Roger Brotz: CISO at Arcadia Healthcare with over four decades of experience, starting his journey in 1977.

  • Mike Vetri: Senior Director of Security Operations at Veeva and former Air Force cyber operations officer.

Main Topics & Key Takeaways

The "Passion" to Lead

The panel dives into the true meaning of leadership, noting that the word "passion" stems from the Latin word for "suffering". Leading a cyber team means being willing to suffer through mistakes and high-pressure incidents alongside your team.

Empathy as a Business Metric

Mike shares a pivotal study indicating that leaders who embrace emotional intelligence and empathy often exceed their annual revenue goals by 20%. Conversely, a lack of empathy directly correlates to high burnout and employee turnover.

Learning to Fail Fast

The leaders recount personal failures, from failing to recognize team burnout during 16-hour-a-day incident responses to the "pride" of holding onto technical tasks for too long. They emphasize that failure is not a roadblock but a necessary inflection point for growth.

Bridging the Gap: Technical vs. Business

A major challenge for new leaders is translating "this is bad" into actionable business risk. Leaders must learn to speak the language of the boardroom, focusing on profit protection and risk management rather than just technical vulnerabilities.

Actionable Advice for Aspiring Leaders

  • Set Boundaries Early: Don't let your job intrude on your personal life until it's too late; once you establish a habit of always being available, it’s hard to pull back.

  • Find Your Barometer: Use a spouse or a trusted peer as a "barometer" to tell you when your stress levels are negatively impacting your leadership style.

  • Work-Life Harmony: Move away from the idea of a perfect "50/50 balance" and strive for harmony where your professional and personal lives can coexist.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


What is React2Shell (CVE-2025-55182)?

Summary:

Frank M. Catucci and Timothy De Block dive into a critical, high-impact remote code execution (RCE) vulnerability affecting React Server Components and popular frameworks like Next.js, a flaw widely being referred to as React2Shell.

They discuss the severity, the rapid weaponization by botnets and state actors, and the long-term struggle organizations face in patching this class of vulnerability.

The Next Log4j? React2Shell (CVE-2025-55182)

  • Critical Severity: The vulnerability, tracked as CVE-2025-55182 (and sometimes including the Next.js version, CVE-2025-66478, which was merged into it), carries a maximum CVSS score of 10.0.

  • The Flaw: The issue is an unauthenticated remote code execution (RCE) vulnerability stemming from insecure deserialization in the React Server Components (RSC) "Flight" protocol. This allows an attacker to execute arbitrary, privileged JavaScript code on the server simply by sending a specially crafted HTTP request.

  • Widespread Impact: The vulnerability affects React 19.x and other popular frameworks that bundle the react-server implementation, most notably Next.js (versions 15.x and 16.x using the App Router). It is exploitable in default configurations.

  • Rapid Weaponization: The speed of weaponization is "off the chain". Within a day of public disclosure, malicious payloads were observed, with activities including:

    • Deployment of Marai botnets.

    • Installation of cryptomining malware (XMRig).

    • Deployment of various backdoors and reverse shells (e.g., SNOWLIGHT, COMPOOD, PeerBlight).

    • Attacks by China-nexus threat groups (Earth Lamia and Jackpot Panda).

The Long-Term Problem and Defense

  • Vulnerability Management Challenge: The core problem is identifying where these vulnerable components are running in a "ridiculous ecosystem". This is not just a problem for proprietary web apps, but for any IoT devices or camera systems that may be running React.

  • The Shadow of Log4j: Frank notes that the fallout from this vulnerability is expected to be similar to Log4j, requiring multiple iterative patches over time (Log4j required around five versions).

    • Many organizations have not learned their lesson from Log4j.

    • Because the issue can be three or four layers deep in open-source packages, getting a full fix requires a cascade of patches from dependent projects.

  • Mitigation is Complex: Patches should be applied immediately, but organizations must also consider third-party vendors and internal systems.

    • Post-Exploitation: Assume breach. If the vulnerability was exposed, it is a best practice to rotate all secrets, API keys, and credentials that the affected server had access to.

    • WAF as a Band-Aid: A Web Application Firewall (WAF) can be a mitigating control, but blindly installing one over a critical application is ill-advised as it can break essential functionality.

  • The Business Battle: Security teams often face the "age-old kind of battle" of whether to fix a critical vulnerability with a potential break/fix risk or stay open for business. Highly regulated industries, even with a CISA KEV listing, may still slow patching due to mandatory change control and liability for monetary loss if systems go down.

The Supply Chain and DDoS Threat

  • Nation-State & Persistence: State actors like those from China will sit on compromised access for long periods, establishing multiple layers of backdoors and obfuscated persistence mechanisms before an active strike.

  • Botnet Proliferation: The vulnerability is being used to rapidly create new botnets for massive Denial of Service (DoS) attacks.

    • DoS attack sizes are reaching terabits per second.

    • DDoS attacks are so large that some security vendors have had to drop clients to protect their remaining customers.

  • Supply Chain Security: The vulnerability highlights the urgent need for investment in Software Bill of Materials (SBOMs) and Application Security Posture Management (ASPM)/Application Security Risk Management (ASRM) solutions.

    • This includes looking beyond web servers to embedded systems, medical devices, and auto software.

    • Legislation is in progress to mandate that vendors cannot ship vulnerable software and to track these components.

Actionable Recommendations

  • Immediate Patching: This is the only definitive mitigation. Upgrade to the patched versions immediately, prioritizing internet-facing services.

  • Visibility Tools: Use tools for SBOMs, ASPM, or ASRM to accurately query your entire ecosystem for affected versions of React and related frameworks.

  • Testing: Run benign proof-of-concept code to test for the vulnerability on your network. Examples include simple commands like whoami. (Note: Always use trusted, non-malicious payloads for internal testing.)

  • Monitor CISA KEV: The vulnerability has been added to the CISA Known Exploited Vulnerabilities (KEV) catalog.

  • Research: Look for IoCs (Indicators of Compromise) and TTPs (Tactics, Techniques, and Procedures) associated with post-exploitation to hunt for pervasive access and backdoors.

Resources

China-nexus cyber threat groups rapidly exploit React2Shell ... - AWS, accessed December 12, 2025, https://aws.amazon.com/blogs/security/china-nexus-cyber-threat-groups-rapidly-exploit-react2shell-vulnerability-cve-2025-55182/

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


[RERELEASE] What is application security?

In this tenacious edition of the Exploring Information Security podcast, I talk with Frank Catucci of Qualys as we answer the questions: "What is application security?"

Frank (@en0fmc) has a lot of experience with application security. His current role is the director for web application security and product management at Qualys.  He's also the chapter leader for OWASP Columbia, SC. He lives and breathes application security.

In this episode we discuss:

  • What is applications security?

  • Why is application security important?

  • Where application security should be integrated

  • Resources for getting into application security

The Final Frontier of Security: The State of Space Security with Tim Fowler

Summary:

Timothy De Block and Tim Fowler, CEO and founder of Ethos Labs LLC, strap in to discuss the critical, rapidly escalating threats in space security. Tim explains that space is now an extension of the internet, where security has historically been ignored due to "organizational inertia" and a perceived "veil of obscurity". The discussion covers the real-world impact of GPS timing disruption on terrestrial infrastructure (like power grids and financial systems) , the danger of unencrypted space communications , and the urgent need for a holistic security approach that integrates security testers directly with development teams. They conclude with a debate on the role of AI in anomaly detection versus critical human decision-making in space.

The State of Space Security and Major Threats

  • Security is a Low Priority: Historically, security was not a priority for systems in space, often operating under a "veil of obscurity". This is slowly changing, with an uptick in security engineering roles this year, moving beyond just GRC/cyber assurance.

  • Unencrypted Communications: A core challenge is the widespread use of unencrypted signals between bases and satellites, which can be easily intercepted and read. Tim estimates that less than 50% of signals are encrypted due to operational challenges.

  • Encryption is Not Enough: Encryption only addresses confidentiality. An encrypted signal can still be captured and replayed, and the satellite may process it if integrity is not addressed.

  • The Ground Segment Threat: Even encrypted space communications can be nullified if the ground network is compromised (e.g., stealing a FIPS-compliant encryption module), necessitating a holistic security approach.

  • Repeating History: Space security is currently experiencing a situation analogous to the internet's early days (ARPANet) or the ICS/OT SCADA world 12-15 years ago, focusing on getting things operational before securing them.

Real-World Impact on Terrestrial Life

  • GPS Timing is Critical: Critical infrastructure—including pipelines, power grids, and financial systems—all rely on GPS timing for synchronization.

  • Disruption Affects Everyone: Disrupting GPS timing can cause widespread outages. Examples include:

    • The London Stock Exchange going down in 2012 due to a localized GPS jamming attack that wasn't even targeting them.

    • A US Navy testing incident that caused widespread outages in San Diego, affecting ATMs and pharmacies for days.

  • Space is the New Internet: Partnerships like T-Mobile's direct-to-cell with Starlink demonstrate that space is becoming an extension of the internet, increasing connectivity but also the attack surface.

Strategy and Getting Involved

  • Integrating Security: The best model for moving decisions closer to security on the operations-to-security spectrum is to physically place security testers (like penetration testers) directly within development teams (DevSecOps).

  • Train Developers to Attack: A highly effective proactive security measure is to teach developers how to attack their own software; they magically stop writing vulnerable code.

  • Space is a Culmination of Niches: Space security is the culmination of all security specializations (cloud, network, web application, ICS/OT, physical security). There is a place and a need for experts from every niche.

  • Resources for Getting Started:

    • Check local security conferences for the Aerospace Village (a non-profit that hosts hands-on labs).

    • Read books like Space Cyber Security by Dr. Jacob Oakley.

    • Attend specialized conferences like Hackspace Con.

    • "Just Google it": Use your existing security expertise (e.g., "cloud security") and research how it applies to the space industry.

AI in Space: Augmentation vs. Autonomy

  • Anomaly Detection is Ideal: AI (machine learning) is tailor-made for high-speed computation and sensor analysis, making it excellent for anomaly detection in early warning systems.

  • The Human Decision-Maker: Tim Fowler insists that human involvement is essential for critical decision-making and validating AI output (to determine if an alert is a false positive). He argues that an autonomous AI decision in space could quickly escalate into a hostile international incident.

  • Scalability Debate: Timothy De Block questioned the scalability of relying on humans for every decision, using traffic light management as an example of where AI could safely and efficiently augment processes. Both agreed AI should handle "busy work" and augment human capabilities, not perform autonomous functions in sensitive situations.

ETHOS LAbs Links and Resources:

ETHOS LABS Website

Connect with Tim Folwer on Linkedin

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


How to Manage Cybersecurity Awareness Month

Summary:

Timothy De Block hosts a lively discussion with Maeve Mueller on the perennial challenge of Cyber Security Awareness Month (CSAM). They dive into the logistics, triumphs, and frustrations of planning events that actually engage employees. The conversation covers everything from the effectiveness of different activities (like "watch and win" contests and "pitch a fish" competitions), the delicate balance of fear vs. education in phishing campaigns, and the logistical nightmares of organizing in-person events. They also explore the emerging concept of Human Risk Management and why good security awareness is ultimately just good marketing and relationship building.

Key Takeaways

Logistics

  • The Struggle is Real: Timothy was "so far behind" on CSAM planning, scrambling to get materials out after October 1st, highlighting the significant time commitment required for impactful programs. Maeve, despite starting planning in June, still feels like she's "running around with like my head cut off" in October.

  • The Power of Swag and Food: Free food, particularly good quality food (like the Costco lunch spread Timothy plans), is a reliable way to drive attendance to in-person events. Maeve noted the success of handing out donuts to draw people to their booth.

  • Creative Engagement: Rote training doesn't work. Successful events involve engaging formats:

    • Watch and Win Contests: Offering prizes for completing training modules, though people often just let videos play in the background.

    • Cybersecurity Mythbusters: Demonstration-based presentations that disprove common security myths, like showing how a password cracker works.

    • Pitch a Phish Competition: Encouraging teammates to create their own phishing emails to target a fake persona, which turns the tables and increases participation.

    • The Booth Approach: Setting up a booth in the office lobby with swag, info cards, and food (like donuts) is effective for broad outreach.

  • Logistical Challenges: The planning process is fraught with administrative issues, such as setting up registration forms (with Microsoft Forms being preferred over glitchy Microsoft Teams registration) and the time sink of cleaning up after in-person events (like the popcorn machine that takes 30 minutes to clean).

The Human Element and Future of the Field

  • Marketing Secure Behavior: Security awareness is fundamentally about marketing secure behaviors. Timothy and Maeve agree that the ultimate goal is to figure out how to make people care about security in their personal lives, which will then bleed over into their work habits.

  • "Department of K.N.O.W.": Maeve highlights the need for the security team to be the "department of KNOW" rather than the "department of NO," as constant negativity leads users to circumvent controls and create Shadow IT.

  • The Cybercriminal's Target: Cybercriminals have learned it's cheaper and easier to target the individual than to hack an organization's technology. Maeve stresses the need to tell stories about cybercrime compounds and the human element of the attack to shock employees into awareness.

  • Human Risk Management (HRM): The movement toward HRM involves leveraging AI to look at the "full person"—analyzing phishing results, training completion, and telemetry from other security tools. This data-driven approach positions security awareness to collect overall human risk data.

  • Building Community: Both hosts emphasize the value of relationships—both with internal business partners and with the external security awareness community. Timothy is launching a Security Advocates Program to pull in non-security employees and champion secure messages.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


Exploring the Next Frontier of IAM: Shared Signals and Data Analytics

Summary:

Timothy De Block sits down with Matt Topper of Uber Ether to discuss the critical intersection of Identity and Access Management (IAM) and the current cyber threat landscape. They explore how adversaries have shifted their focus to compromising user accounts and non-human identities, making identity the "last threat of security". Matt Topper argues that most enterprise Zero Trust implementations are merely "VPN 2.0" and fail to integrate the holistic signals needed for true protection. The conversation dives into the rise of cybercrime as a full-fledged business, the challenges of social engineering, and the promising future of frameworks like Shared Signals to fight back.

Key Takeaways

The Identity Crisis in Cybersecurity

  • The Easiest Way In: With security tooling improving, attackers focus on compromising user accounts or stealing OAuth tokens and API keys to gain legitimate access and exfiltrate data.

  • Cybercrime as a Business: Cybercriminal groups now operate like legitimate businesses, with HR, marketing, and executives, selling initial access and internal recon capabilities to other groups for a cut of the final ransom.

  • The Insider Threat: Cybercriminals are increasingly paying disgruntled employees for their corporate credentials, sometimes offering a percentage of the final ransom (which can be millions of dollars) or just a few thousand dollars.

  • Social Engineering the Help Desk: Attackers easily bypass knowledge-based authentication (KBA) questions because personal data has been leaked and they exploit the help desk's desire to be helpful under pressure to gain access.

Zero Trust, Non-Human Identity, and the Path Forward

  • Zero Trust is Underwhelming: Matt Topper views most enterprise implementations of Zero Trust as overly network-centric "VPN 2.0" that fail to solve problems for multi-cloud or SaaS-based organizations. True Zero Trust is a holistic strategy that requires linking user, device, and machine-to-machine signals.

  • The Non-Human Identity Problem: Organizations must focus on mapping and securing non-human identities, which include API keys, service accounts, servers, mobile devices, and runners in CI/CD pipelines. These keys often have broad access and are running unchecked.

  • Shared Signals Framework (SSF): A promising solution developed by the OpenID Foundation, SSF allows large vendors (like Microsoft, Google, and Salesforce) to share risk and identity signals. This allows a company to automatically revoke a user's session in a third-party application if a compromise is detected by the identity provider.

  • User Behavior Analytics (UBA): Effective security requires UBA, such as tracking users' browsing habits and using data analytics to establish a baseline of normal behavior, moving toward the "Moneyball" approach seen in sports.

Data Quality and the IAM Challenge

  • Data Quality is Broken: Many problems in IAM stem from poor data quality in source systems like HR and Active Directory, where there is no standardization, legacy data remains, and roles are misaligned.

  • Selling Security to Marketing: To gain funding and traction for UBA and data analytics, security teams should pitch the problem to the marketing team by showing how it can track user behavior, prevent fraud (like "pizza hacks" from rewards program abuse), and save the company money in chargebacks.

Resources & Contact

  • UberEther: Matt Topper's company, which focuses on integrating identity access management tools to build secure systems right from day one.

  • Shared Signals Framework (SSF): A framework from the OpenID Foundation for sharing security and identity signals across vendors.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


How to Close the Cybersecurity Skills Gap with a Student Powered SOC

Summary:

Timothy De Block speaks with Bruce Johnson of TekStream about a truly innovative solution to the cybersecurity skills shortage: the Student-Powered Security Operations Center (SOC). Bruce outlines how this three-way public-private partnership not only provides 24-hour threat detection and remediation serves as a robust workforce development program for university students. The conversation dives into the program's unique structure, its 100% placement rate for students, the challenges of AI "hallucinations", and how the program teaches crucial life skills like accountability and critical thinking.

The Student-Powered SOC Model

  • Workforce Development: The program tackles the cybersecurity skills shortage by providing students with practical, real-world experience and helps bridge the gap where new graduates struggle to find jobs due to minimum experience requirements.

  • Funding Structure: The program is built on a three-way private-public partnership involving the state, educational institutions, and Techstream. The funding for the SOC platform is often separate from the academic funding for student talent building.

  • "Investment Solution": The model is positioned as an investment rather than an outsourced expense. Institutions own the licenses for their SIM environments and retain built assets, fostering collaborative value building.

  • Reputational Value: The program provides significant reputational value to schools, boasting a 100% placement rate for students and differentiating them from institutions that only offer academic backgrounds.

  • Cost Savings: It serves as a cost-saving measure for CISOs, as students are paid an hourly rate to perform security analyst work.

Student Training and Impact

  • Onboarding and Assessment: The formal onboarding process, which includes training on tools, runbooks, and hands-on labs, has been shortened to six weeks. The biggest indicator of a student's success is their critical thinking test, which assesses logical reasoning rather than rote knowledge.

  • Progression and Mentorship: Students are incrementally matured by starting with low-complexity threats (like IP reputation) and gradually advancing to higher-difficulty topics, including TTPs (Tactics, Techniques, and Procedures), utilizing a complexity scoring system. Integrated career counseling meets regularly with students to review their metrics and guide their career planning.

  • Metrics and Productivity: The program has proven successful, with students handling 50% of incident volume within a quarter of onboarding, including medium to high complexity threats.

  • Beyond Cybersecurity: Students gain valuable, transferable life skills, such as collaboration, accountability, professionalism, and "adulting", which helps isolated students become more engaged.

AI and the "Expert in the Loop"

  • Techstream’s Overkill AI: Techstream uses its product, Overkill, for 24-hour threat detection and remediation, automating analysis, prioritization, and the creation of new detections to go "from zero to hero in 24 hours".

  • Expert Supervision: Their approach is "expert in the loop" , meaning humans (students and analysts) are involved in supervising the AI, with automation being adopted incrementally as trust is built.

  • The Hallucination Challenge: Timothy De Block raised concern about students lacking the experience to discern incorrect information or "hallucinations" from AI output. Bruce Johnson affirmed that the program trains students in three areas: using AI, supervising AI, and understanding AI broadly.

  • Training Necessity: Students must learn how to do the traditional level one work before they can effectively supervise an AI, as experience is needed to detect when the AI makes a bad assumption.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


What is the 2025 State of the API Report From Postman?

Summary:

Timothy De Block is joined by Sam Chehab to unpack the key findings of the 2025 Postman State of the API Report. Sam emphasizes that APIs are the connective tissue of the modern world and that the biggest security challenges are rooted in fundamentals. The conversation dives deep into how AI agents are transforming API development and consumption, introducing new threats like "rug pulls" , and demanding higher quality documentation and error messages. Sam also shares actionable advice for engineers, including a "cheat code" for getting organizational buy-in for AI tools and a detailed breakdown of the new Model Context Protocol (MCP).

Key Insights from the State of the API Report

  • API Fundamentals are Still the Problem: The start of every security journey is an inventory problem (the first two CIS controls). Security success is a byproduct of solving collaboration problems for developers first.

  • The Collaboration Crisis: 93% of teams are struggling with API collaboration, leading to duplicated work and an ever-widening attack surface due to decentralized documentation (Slack, Confluence, etc.).

  • API Documentation is Up: A positive sign of progress is that 58% of teams surveyed are actively documenting their APIs to improve collaboration.

  • Unauthorized Access Risk: 51% of developers cite unauthorized agent access as a top security risk. Sam suspects this is predominantly due to the industry-wide "hot mess" of secrets management and leaked API keys.

  • Credential Amplification: This term is used to describe how risk is exponential, not linear, when one credential gains access to a service that, in turn, has access to multiple other services (i.e., lateral movement).

AI, MCP, and New Security Challenges

  • Model Context Protocol (MCP): MCP is a protocol layer that sits on top of existing RESTful services, allowing users to generically interact with APIs using natural language. It acts as an abstraction layer, translating natural language requests into the proper API calls.

  • The AI API Readiness Checklist: For APIs to be effective for AI agents:

    • Rich Documentation: AI thrives on documentation, which developers generally hate writing. Using AI to write documentation is key.

    • Rich Errors: APIs need contextual error messages (e.g., "invalid parameter, expected X, received Y") instead of generic messages like "something broke".

  • AI Introduces Supply Chain Threats: The "rug pull" threat involves blindly trusting an MCP server that is then swapped out for a malicious one. This is a classic supply chain problem (similar to NPM issues) that can happen much faster in the AI world.

  • MCP Supply Chain Risk: Because you can use other people's MCP servers, developers must validate which MCP servers they're using to avoid running untrusted code. The first reported MCP hack involved a server that silently BCC'd an email to the attacker every time an action was performed.

Actionable Advice and Engineer "Cheat Codes"

  • Security Shift-Left with Postman: Security teams should support engineering's use of tools like Postman because it allows developers to run security tests (load testing, denial of service simulation, black box testing) themselves within their normal workflow, accelerating development velocity.

  • API Key Management is Critical: Organizations need policies around API key generation, expiration, and revocation. Postman actively scans public repos (like GitHub) for leaked Postman keys, auto-revokes them, and notifies the administrator.

  • Getting AI Buy-in (The Cheat Code): To get an AI tool (like a Postman agent or a code generator) approved within your organization, use this tactic:

    1. Generate a DPA (Data Processing Agreement) using an AI tool.

    2. Present the DPA and a request for an Enterprise License to Legal, Security, and your manager.

    3. This demonstrates due diligence and opens the door for safe, approved AI use, making you an engineering "hero".

About Postman and the Report

  • Postman's Reach: Postman is considered the de facto standard for API development and is used in 98% of the Fortune 500.

  • Report Origins: The annual report, now in its seventh year, was started because no one else was effectively collecting and synthesizing data across executives, managers, developers, and consultants regarding API production and consumption.

Resources

The Developer’s Guide to AI-Ready APIs - Postman

Agent Mode - Postman

First Malicious MCP Server Found Stealing Email in Rogue Postmark-MCP Package - The Hacker News

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


How AI Will Transform Society and Affect the Cybersecurity Field

Summary:

Timothy De Block sits down with Ed Gaudet, CEO of Censinet and a fellow podcaster, for a wide-ranging conversation on the rapid, transformative impact of Artificial Intelligence (AI). Ed Gaudet characterizes AI as a fast-moving "hammer" that will drastically increase productivity and reshape the job market, potentially eliminating junior software development roles. The discussion also covers the societal risks of AI, the dangerous draw of "digital cocaine" (social media), and Censinet's essential role in managing complex cyber and supply chain risks for healthcare organizations.

Key Takeaways

AI's Transformative & Disruptive Force

  • A Rapid Wave: Ed Gaudet describes the adoption of AI, particularly chat functionalities, as a rapid, transformative wave, surpassing the speed of the internet and cloud adoption due to its instant accessibility.

  • Productivity Gains: AI promises immense productivity, with the potential for tasks requiring 100 people and a year to be completed by just three people in a month.

  • The Job Market Shift: AI is expected to eliminate junior software development roles by abstracting complexity. This raises concerns about a future developer shortage as senior architects retire without an adequate pipeline of talent.

  • Adaptation, Not Doom: While acknowledging significant risks, Ed Gaudet maintains that humanity will adapt to AI as a tool—a "hammer"—that will enhance cognitive capacity and productivity, rather than making people "dumber".

  • The Double-Edged Sword: Concerns exist over the nefarious uses of AI, such as deepfakes being used for fraudulent job applications, underscoring the ongoing struggle between good and evil in technology.

Cyber Risk in Healthcare and Patient Safety

  • Cyber Safety is Patient Safety: Due to technology's deep integration into healthcare processes, cyber safety is now directly linked to patient safety.

  • Real-World Consequences: Examples of cyber attacks resulting in canceled procedures and diverted ambulances illustrate the tangible threat to human life.

  • Censinet's Role: Censinet helps healthcare systems manage third-party, enterprise cyber, and supply chain risks at scale, focusing on proactively addressing future threats rather than past ones.

  • Patient Advocacy: AI concierge services have the potential to boost patient engagement, enabling individuals to become stronger advocates for their own health through accessible second opinions.

Technology's Impact on Mental Health & Life

  • "Digital Cocaine": Ed Gaudet likened excessive phone and social media use, particularly among younger generations, to "digital cocaine"—offering short-term highs but lacking nutritional value and promoting technological dependence.

  • Life-Changing Tools: Ed Gaudet shared a powerful personal story of overcoming alcoholism with the help of the Reframe app, emphasizing that the right technology, used responsibly, can have a profound, life-changing impact on solving mental health issues.

Resources & Links Mentioned

  • Censinet: Ed Gaudet's company, specializing in third-party and enterprise risk management for healthcare.

  • Reframe App: An application Ed Gaudet used for his personal journey of recovery from alcoholism, highlighting the power of technology for mental health.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


[RERELEASE] How Macs get Malware

In this installed episode of the Exploring Information Security podcast, Wes Widner joins me to discuss how Macs get malware.

Wes (@kai5263499) spoke about this topic at BSides Hunstville this year. I was fascinated by it and decided to invite Wes on. Mac malware is a bit of an interest for Wes. He's done a lot of research on it. His talk walks through the history of malware on Macs. For Apple fan boys, Macs are still one of the more safer options in the personal computer market. That is changing though. Macs because of their increased market share are getting targeted more and more. We discuss some pretty nifty tools that will help with fending off that nasty malware. Little Snitch is one of those tools. Some malware actively avoids the application. Tune in for some more useful information.

In this episode we discuss:

  • How Macs get malware

  • What got Wes into Mac malware

  • The history of Mac malware

  • What people can do to protect against Mac Malware

More resources:

[RERELEASE] Why communication in infosec is important - Part 2

In this communicative episode of the Exploring Information Security podcast, Claire Tills joins me to discuss information security communication.

Claire (@ClaireTills) doesn’t have your typical roll in infosec. She sits between the security teams and marketing team. It’s a fascinating roll and something that gives her a lot of insight into multiple parts of the business. What works and what doesn’t work in communicating security to the different areas. Check her blog out.

In this episode we discuss:

  • How important is it for the company to take security seriously

  • How would someone get started improving communication?

  • Why we have a communication problem in infosec

  • Where should people start

More resources:

[RERELEASE] Why communication in infosec is important

In this communicative episode of the Exploring Information Security podcast, Claire Tills joins me to discuss information security communication.

Claire (@ClaireTills) doesn’t have your typical roll in infosec. She sits between the security teams and marketing team at Tenable. It’s a fascinating roll and something that gives her a lot of insight into multiple parts of the business. What works and what doesn’t work in communicating security to the different areas. Check her blog out.

In this episode we discuss:

  • What Claire’s experience is with communication and infosec

  • What’s ahead for communication in infosec

  • Why do people do what they do?

  • What questions to ask

More resources:

Exploring AI, APIs, and the Social Engineering of LLMs

Summary:

Timothy De Block is joined by Keith Hoodlet, Engineering Director at Trail of Bits, for a fascinating, in-depth look at AI red teaming and the security challenges posed by Large Language Models (LLMs). They discuss how prompt injection is effectively a new form of social engineering against machines, exploiting the training data's inherent human biases and logical flaws. Keith breaks down the mechanics of LLM inference, the rise of middleware for AI security, and cutting-edge attacks using everything from emojis and bad grammar to weaponized image scaling. The episode stresses that the fundamental solutions—logging, monitoring, and robust security design—are simply timeless principles being applied to a terrifyingly fast-moving frontier.

Key Takeaways

The Prompt Injection Threat

  • Social Engineering the AI: Prompt injection works by exploiting the LLM's vast training data, which includes all of human history in digital format, including movies and fiction. Attackers use techniques that mirror social engineering to trick the model into doing something it's not supposed to, such as a customer service chatbot issuing an unauthorized refund.

  • Business Logic Flaws: Successful prompt injections are often tied to business logic flaws or a lack of proper checks and guardrails, similar to vulnerabilities seen in traditional applications and APIs.

  • Novel Attack Vectors: Attackers are finding creative ways to bypass guardrails:

    • Image Scaling: Trail of Bits discovered how to weaponize image scaling to hide prompt injections within images that appear benign to the user, but which pop out as visible text to the model when downscaled for inference.

    • Invisible Text: Attacks can use white text, zero-width characters (which don't show up when displayed or highlighted), or Unicode character smuggling in emails or prompts to covertly inject instructions.

    • Syntax & Emojis: Research has shown that bad grammar, run-on sentences, or even a simple sequence of emojis can successfully trigger prompt injections or jailbreaks.

Defense and Design

  • LLM Security is API Security: Since LLMs rely on APIs for their "tool access" and to perform actions (like sending an email or issuing a refund), security comes down to the same principles used for APIs: proper authorization, access control, and eliminating misconfiguration.

  • The Middleware Layer: Some companies are using middleware that sits between their application and the Frontier LLMs (like GPT or Claude) to handle system prompting, guard-railing, and filtering prompts, effectively acting as a Web Application Firewall (WAF) for LLM API calls.

  • Security Design Patterns: To defend against prompt injection, security design patterns are key:

    • Action-Selector Pattern: Instead of a text field, users click on pre-defined buttons that limit the model to a very specific set of safe actions.

    • Code-Then-Execute Pattern (CaMeL): The first LLM is used to write code (e.g., Pythonic code) based on the natural language prompt, and a second, quarantined LLM executes that safer code.

    • Map-Reduce Pattern: The prompt is broken into smaller chunks, processed, and then passed to another model, making it harder for a prompt injection to be maintained across the process.

  • Timeless Hygiene: The most critical defenses are logging, monitoring, and alerting. You must log prompts and outputs and monitor for abnormal behavior, such as a user suddenly querying a database thousands of times a minute or asking a chatbot to write Python code.

Resources & Links Mentioned

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


How to Prepare a Presentation for a Cybersecurity Conference

Summary:

Join Timothy De Block for a special, behind-the-scenes episode where he rehearses his presentation, "The Hitchhiker's Guide to Threat Modeling." This episode serves as a unique guide for aspiring and experienced speakers, offering a candid look at the entire preparation process—from timing and slide design to audience engagement and controlled chaos. In addition to public speaking tips, Timothy provides a concise and practical overview of threat modeling, using real-world examples to illustrate its value.

Key Presentation Tips & Tricks

  • Practice for Time: Practice the presentation multiple times to ensure the pacing is right. Timothy suggests aiming to be a little longer than the allotted time during practice, as adrenaline and nerves on the day of the talk will often cause a person to speak more quickly.

  • Use Visuals Strategically: Pacing and hand gestures can improve the flow of a talk. Be careful with distracting visuals, such as GIFs, by not leaving them up for too long while you are speaking.

  • Stand Out as a Speaker: Be willing to do shorter talks, such as 30-minute sessions, as many speakers prefer hour-long slots. He notes that having a clever or intriguing title for your presentation is important, and using humor or pop-culture references can help.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]