Are Your AI Policies Still “PDF”? Really?
It’s High Time to Move to Policy-as-Code.


Hey High Stakers,
Good morning and welcome to the 11th issue of High Stakes!

Policy-as-code turns governance slides into working guardrails.
Without them, you risk compliance breakdowns, reputational damage, and costly rework.

1-min Briefing
You’ve read the breezy “Ethical AI” slide decks, right? The ones with 17 bullets on responsible AI, but not a single guardrail in the build pipeline?
Or the compliance workshop where someone says “We’ll review that quarterly”, while an LLM is auto‑tweaking production workflows every hour.
That’s governance theatre: a worrying gap between stated intentions and operational reality. Stanford’s Institute for Human-Centered AI (HAI) has flagged exactly this disparity between recognized AI risks and concrete mitigation in its annual AI Index reports.
Meanwhile, only 28% of enterprises have any policy-as-code running in production.
So let’s get serious. This issue offers concrete ideas to turn that around!
But first, let’s work backwards: what happens when policy-as-code is not in place?
Take these two examples.

Case 1: The One Line of Code That Blocked a $2M Risk
One enterprise team caught a bug - their LLM-generated responses were referencing outdated financial instruments, products that had long been retired.
The issue wasn’t flagged by testing. It surfaced only when a compliance analyst manually reviewed an internal memo.
What compliance teams wanted was simple: a way to ensure no AI-generated outputs used old, unsupported product definitions.
But instead of writing a new 5-page policy document (please, no!), the platform team added one line of code:
deny["model deployment"] { not input.metadata.includes("data_source_version") }
Translation: the model can’t go live unless its deployment metadata carries a validated, up-to-date data_source_version tag. Think of it like refusing to publish a report unless you know exactly which version of the policy manual it pulled from.
This blocked three upcoming model deployments. One of them had been auto-generating cost estimates based on terms retired in 2017.
Had it gone live in client tools, compliance estimated a $2M exposure from misstatements and correction fallout.
And yes, this rule didn’t come from scratch. It was adapted from the starter libraries that ship with open-source policy engines like OPA (Open Policy Agent) and Cedar.
The real work was deciding what mattered, and wiring that rule into the workflow.

Case 2: One Rule Saved a Frontline AI Agent from a PR Nightmare
In another case, a retail chain was piloting an AI assistant to help store operations teams.
Its job was to automatically suggest price discounts and notify store managers about products needing clearance based on slow-moving inventory.
Everything worked - until a test revealed the AI was suggesting discounts based on inventory data from the wrong region.
In one simulation, the assistant recommended clearance pricing for a product still selling hot at full margin in its actual market.
The fix?
deny["agent suggestion"] { input.region != input.store_region }
This simple rule stopped the agent from making cross-region recommendations without a verified match.
You could think of it like a firewall between business units: no decision gets through unless the context is right.
The team caught this before rollout - but barely.
One VP estimated that had it gone live, the wrongly applied discounts could have cost the business six figures per week.
Again, this wasn’t handcrafted from scratch. The bones came from policy packs in their cloud vendor’s AI policy toolkit, adapted by their own engineers.
Policy-as-code isn’t about writing rules from zero.
It’s about deciding which decisions MUST NEVER go wrong - and encoding that logic in a place machines can understand and enforce.
That’s the shift we’re here to help you make.
That’s policy-as-code in action. And it's why this edition matters.

Why This Matters Now
AI is leaving innovation labs and entering mission-critical systems. But in most enterprises, the policy stack hasn't caught up.
Guidelines still rule. Enforcement is still manual. Audits are still after-the-fact.
And when things go wrong - prompt misuse, model drift, regulatory pressure - you’re stuck scrambling for logs that don’t exist.
The leading adopters are moving fast in the other direction:
Governance is shifting left, baked into CI/CD.
Models don’t ship unless they pass policy checks (a sketch follows this list).
Logs and overrides are automated.
Compliance teams don’t just review, they get alerts.
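What does that shipping gate look like in practice? Here’s a minimal sketch in Rego, the language behind OPA. The package name, input fields, and specific checks are illustrative, not a standard schema:

package ai.deployment

# Block deployment for every unmet guardrail. (Illustrative fields -
# adapt them to whatever your pipeline actually emits.)
deny[msg] {
    not input.metadata.data_source_version
    msg := "missing data_source_version tag"
}

deny[msg] {
    not input.evaluations.bias_check_passed
    msg := "bias evaluation has not passed"
}

# The pipeline asks one question before shipping: is the denial set empty?
default allow = false
allow { count(deny) == 0 }

The design point: each reason to block is its own small rule, so compliance leads can read the denial messages without reading Rego.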
This edition shows you how to get there.

Remember, you have to protect assets you’ve already paid for!
You’ve staffed the model team. You’ve paid for prompt tuning, vector DBs, and GPUs.
But without codified guardrails, you’re gambling that nothing breaks. Policy-as-code is how you protect the investment BEFORE a rollback kills the ROI.

Policy-as-Code in 3 Weeks
Here’s how we guide teams from policy theatre to enforcement reality.
It’s not theory; it’s staged execution. Three weeks, three steps.
Enough to get you started. Just enough to know you’ll want help by Day 2 🙂

Week 1: Foundation
Spot where policies fail today. (It’s almost always in the handoff between slides and systems.)
Choose a policy engine that suits your delivery pipeline.
Define 3–5 non-negotiables. These are the things your AI must never do.
Map out one intervention point where governance can intercept a risky deployment.
Write your first runtime rule. Yes, even one line counts (see the sketch below).
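To make “even one line counts” concrete, here’s a sketch of a first non-negotiable in Rego. It assumes an upstream scanner sets PII flags on each prompt; the field names are invented for illustration:

package ai.guardrails

# Non-negotiable: no prompt carrying unmasked PII reaches an external model.
# (Assumes an upstream PII scanner populates these flags.)
deny["prompt contains unmasked PII"] {
    input.prompt.pii_detected
    not input.prompt.pii_masked
}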

Week 2: Rollout Without Backlash
Turn your new policy rules on in shadow mode. Monitor, don’t block. Yet. (See the sketch after this list.)
Tune for false positives. Add override flows. Earn trust.
Align your rules to the EU AI Act, NIST RMF, or ISO 42001 without slowing down shipping.
Pilot one use case with real enforcement. Watch what breaks.
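Shadow mode can be as simple as a naming convention. A sketch, reusing the cross-region condition from Case 2: run it as a non-blocking warn first (tools like conftest report warn rules without failing the run), then promote it to deny once false positives are tuned out:

package ai.guardrails

# Shadow mode: surface the violation, don't block anything yet.
warn[msg] {
    input.region != input.store_region
    msg := sprintf("cross-region suggestion: %v vs %v", [input.region, input.store_region])
}

# Once tuned, the same rule body ships under deny[msg] instead.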

Week 3: Scale + Operationalize
Build templates. Bundle rules. Reuse across AI apps (see the sketch after this list).
Plug your policies into your CI/CD pipeline. Break the build if it breaks the guardrails.
Light up dashboards for risk and compliance leads. Let them see in real-time.
Set thresholds. Automate exceptions. Create an audit trail you never have to scramble to assemble.
Run your maturity check. Then ask: what’s next - certification, scale, or simplification?
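On the templates point: reuse usually means moving the specifics out of the rule and into data, so one shared rule serves every AI app. A sketch - the data.policy_config path and keys are illustrative:

package ai.templates

# One shared rule; each app bundles its own required_metadata list in data.
deny[msg] {
    required := data.policy_config.required_metadata[_]
    not input.metadata[required]
    msg := sprintf("missing required metadata: %v", [required])
}

# Example per-app config:
# required_metadata = ["data_source_version", "model_card", "owner"]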

This isn't a compliance report. It’s a trigger for momentum.
And if you're wondering how to wire this into your org, stack, or region-specific policies, let’s talk.
We’ve helped others do it. We’ll help you decide where to start, what to codify, and how to roll it out without drama.
Assess progress with a short checklist (one way to compute these numbers is sketched below):
% of models covered,
% of policies enforced at runtime,
average override time,
alignment with external standards.
Decide what’s next: scale, refactor, or certify.
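Even the checklist can be computed rather than compiled by hand. A sketch, assuming you keep a machine-readable model inventory; the schema here is invented for illustration and assumes the inventory is non-empty:

package ai.metrics

# Models with at least one policy enforced at runtime.
covered[name] {
    m := input.models[_]
    m.policies_enforced > 0
    name := m.name
}

# Percentage of the inventory under enforcement.
coverage_pct := 100 * count(covered) / count(input.models)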

Where You Go From Here
By the end of Day 15, you're not just policy-ready. You're enforcement-ready.
That puts your AI roadmap on a whole new footing. Instead of waiting for compliance to catch up with innovation, your governance becomes a launch enabler.
This is the playbook the fastest-moving firms are implementing.
So ask yourself:
Are your policies still stuck in SharePoint?
Or are they part of the release pipeline?
Get this right and AI governance stops being overhead. It becomes a competitive advantage.

Best,
Srini
P.S. If your AI policy can’t block bad code, it’s just theatre. One line of policy-as-code > 50 slides in SharePoint.
Coming up next week: Building an AI-Ready Vault - because your AI is only as smart as the data you can trace, trust, and tag.