How to Build an AI Governance Program with Walter Haydock

Summary:

Timothy De Block sits down with Walter Haydock, founder of StackAware, to break down the complex world of AI governance. Walter moves beyond the buzzwords to define AI governance as the management of risk related to non-deterministic systems: systems where the same input doesn't guarantee the same output.

They explore why the biggest AI risk facing organizations today isn't necessarily a rogue chatbot or a sophisticated cyber attack, but rather HR systems (like video interviews and performance reviews) that are heavily regulated and often overlooked. Walter provides a practical, three-step roadmap for organizations to move from chaos to calculated risk-taking, emphasizing the need for quantitative risk measurement over vague "high/medium/low" assessments.

Key Topics & Insights

  • What is AI Governance?

    • Walter defines it as measuring and managing the risks (security, reputation, contractual, regulatory) of non-deterministic systems. (A quick sketch of what non-determinism means in practice follows this list.)

    • The 3 Buckets of AI Security:

      1. AI for Security: AI-powered SOCs, fraud detection.

      2. AI for Hacking: Automated pentesting, generating phishing emails.

      3. Security for AI: The governance piece—securing the models and data themselves.
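
To make "non-deterministic" concrete, here is a minimal Python sketch (ours, not from the episode). The hypothetical `flaky_model` stands in for any generative system: the same prompt can return different answers across runs, which is why one-time testing isn't enough.

```python
import random

def flaky_model(prompt: str, temperature: float = 0.8) -> str:
    """Stand-in for a generative model: output depends on random sampling,
    so the same prompt does not guarantee the same answer."""
    candidates = ["Approve the loan.", "Deny the loan.", "Escalate for human review."]
    if temperature == 0:
        return candidates[0]  # greedy decoding is repeatable
    return random.choice(candidates)  # sampling is not

# Same input, run twice -- the outputs may differ, which is exactly
# why these systems need ongoing governance rather than a one-time audit.
print(flaky_model("Should we approve applicant #42?"))
print(flaky_model("Should we approve applicant #42?"))
```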

  • The "Hidden" HR Vulnerability:

    • While security teams focus on hackers, the most urgent vulnerability is often in Human Resources. Tools for hiring, firing, and performance evaluation are highly regulated (e.g., NYC Local Law 144, the Illinois Artificial Intelligence Video Interview Act) yet frequently lack proper oversight.

  • How to Build an AI Governance Program (The First 3 Steps):

    1. Establish a Policy: Define your risk appetite (what is okay vs. not okay).

    2. Inventory Systems (with Amnesty): Ask employees what they are using without fear of punishment to get an accurate picture.

    3. Risk Assessment: Assess the inventory against your policy. Use a tiered approach: prioritize regulated/cyber-physical systems first, then confidential data, then public data (see the tiering sketch after this list).
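
Here's a minimal sketch of how step 3's tiering might look in code. The `AISystem` fields and tier rules are illustrative assumptions, not StackAware's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    regulated: bool       # e.g., HR hiring/firing tools under NYC Local Law 144
    cyber_physical: bool  # can the system affect the physical world?
    data_class: str       # "confidential" or "public"

def assessment_tier(s: AISystem) -> int:
    """Tier 1 gets assessed first: regulated or cyber-physical systems,
    then systems touching confidential data, then everything else."""
    if s.regulated or s.cyber_physical:
        return 1
    if s.data_class == "confidential":
        return 2
    return 3

inventory = [
    AISystem("Video interview screener", regulated=True, cyber_physical=False, data_class="confidential"),
    AISystem("Internal code assistant", regulated=False, cyber_physical=False, data_class="confidential"),
    AISystem("Marketing copy generator", regulated=False, cyber_physical=False, data_class="public"),
]

for s in sorted(inventory, key=assessment_tier):
    print(f"Tier {assessment_tier(s)}: {s.name}")
```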

  • Quantitative Risk Management:

    • Move away from "High/Medium/Low" charts. Walter advocates measuring risk in dollars of loss expectancy using methodologies like FAIR (Factor Analysis of Information Risk) or the Hubbard-Seiersen approach (a toy example follows below).
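
As a toy illustration of dollar-denominated risk, loosely in the spirit of FAIR and Hubbard-Seiersen: simulate many years with an estimated event probability and loss range, then average. The probability and dollar figures below are made up for the example, not numbers from the episode.

```python
import random
import statistics

# Hypothetical scenario (made-up numbers): an AI hiring tool draws a
# regulatory action about once every five years (probability 0.2/year),
# costing somewhere between $50k and $500k when it happens.
P_EVENT = 0.2
LOSS_LOW, LOSS_HIGH = 50_000, 500_000

def simulate_one_year() -> float:
    """One simulated year: does the loss event occur, and how big is it?
    A uniform loss is a simplification; practitioners use calibrated
    ranges and heavier-tailed distributions."""
    if random.random() < P_EVENT:
        return random.uniform(LOSS_LOW, LOSS_HIGH)
    return 0.0

trials = [simulate_one_year() for _ in range(100_000)]
print(f"Annualized loss expectancy: ${statistics.mean(trials):,.0f}")
# Roughly 0.2 * $275,000 = $55,000 per year
```

Even a toy simulation like this forces explicit, debatable estimates, which is exactly the advantage of going quantitative over a "high/medium/low" chart.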

  • Emerging Threats:

    • Agentic AI: The next 3-5 years will be defined by "non-deterministic systems interacting with other non-deterministic systems," creating complex governance challenges.

  • Regulation Roundup:

    • Companies are largely unprepared for the wave of state-level AI laws coming online in places like Colorado (SB 205), California, Utah, and Texas.

Resources Mentioned

  • ISO 42001: The global standard for building AI management systems (analogous to ISO 27001 for information security).

  • Cloud Security Alliance (CSA): Recommended for their AI Controls Matrix.

  • Book: How to Measure Anything in Cybersecurity Risk by Douglas Hubbard and Richard Seiersen.

  • StackAware Risk Register: A free template combining the Hubbard-Seiersen and FAIR methodologies.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn] [YouTube]