
Sun Tzu for the AI Governance Era: 7 Strategic Rules for InfoSec and Compliance Leaders
Most people treat strategy as a deliverable. A roadmap, a Gantt chart, a board slide with quarterly milestones. Sun Tzu would have laughed. Twenty-five centuries ago he understood what we keep forgetting: strategy isn’t the plan — it’s how you think when the plan stops working. And in cybersecurity, compliance, and AI governance, the plan stops working constantly.
Threat actors don’t read your risk register. Regulators publish new guidance the week after you certify. Generative AI ships features faster than your governance committee can convene. Every static playbook starts dying the moment it’s printed.
So let me reframe Sun Tzu’s 7 rules for the people I actually work with — CISOs, compliance officers, AI risk leaders, and the boards trying to steer through all of it.
1. Know your enemy
In war, the enemy is the army across the field. In our world, the “enemy” is plural and shape-shifting: ransomware crews, nation-state operators, insider threats, prompt-injection adversaries, model-extraction attackers, supply-chain compromisers, and increasingly the AI systems your own organization deploys without governance.
Knowing the enemy means real threat intelligence, not a copy-pasted MITRE ATT&CK heatmap. It means red-teaming your AI models for jailbreaks and data leakage. It means watching the EU AI Office, the FTC, and your sector regulator with the same discipline you bring to watching CVEs. The threat surface even includes your auditors and your regulators — not as enemies, but as forces with goals, deadlines, and patterns you must understand if you want to anticipate them rather than be surprised by them.
2. Know yourself
This is where most programs collapse. You can’t defend what you can’t inventory. You can’t certify what you can’t describe. In ISO 27001, this shows up as a broken asset register. In ISO 42001, it’s a missing AI system inventory. In EU AI Act readiness, it’s the inability to honestly classify your systems against Annex III.
Honest self-knowledge means admitting the shadow AI your sales team is already using. It means knowing which controls are operating, which are documented but theatrical, and which exist only on paper. Stage 2 auditors don’t fail organizations because they lack controls — they fail them because the organization didn’t know itself well enough to see the gap before the auditor arrived.
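To make the point concrete, here is a minimal, purely illustrative sketch of what "knowing yourself" looks like as data. The field names (`owner`, `annex_iii_candidate`, `control_status`) and the three status values are my assumptions, not terms from ISO 42001 or the EU AI Act; the idea is simply that an honest inventory records whether each control is actually operating, merely documented, or exists only on paper — and that the gap list is queryable before the auditor asks.

```python
from dataclasses import dataclass, field

# Hypothetical inventory schema — field names and status values are
# illustrative assumptions, not drawn from ISO 42001 or the EU AI Act.

@dataclass
class AISystem:
    name: str
    owner: str                       # an accountable team, not just "IT"
    annex_iii_candidate: bool        # could this plausibly be a high-risk use case?
    control_status: dict = field(default_factory=dict)
    # control name -> "operating" | "documented" | "paper"

def audit_gaps(inventory):
    """Return (system, control) pairs where a control is not actually operating."""
    return [(s.name, ctrl)
            for s in inventory
            for ctrl, status in s.control_status.items()
            if status != "operating"]

inventory = [
    AISystem("sales-email-assistant", "RevOps", False,
             {"access-review": "operating", "output-logging": "paper"}),
    AISystem("credit-scoring-model", "Risk", True,
             {"bias-testing": "documented", "model-card": "operating"}),
]

# Every pair this returns is a Stage 2 finding waiting to happen.
print(audit_gaps(inventory))
```

The value isn't the code — it's the discipline of recording the uncomfortable third column at all.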
3. Deception — or really, unpredictability
Sun Tzu’s deception principle is widely misread as “lie to the adversary.” In modern terms it means something sharper: don’t be predictable. A predictable defender is a defeated defender.
Predictability in our field looks like patching only on Tuesdays, running the same phishing simulation every quarter, performing identical access reviews on identical schedules, deploying the same detection rules every analyst on LinkedIn just bragged about. Attackers automate against patterns. Mature programs vary their cadence, layer deception technology (honeypots, honeytokens, canary models), and stagger their controls so an adversary who breaks one assumption doesn’t get the whole map. In AI governance, the same principle says: don’t let your model behavior become so deterministic that prompt-injection paths become trivial to chart.
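Varying cadence can be as simple as adding bounded jitter to a review schedule. The sketch below is illustrative; the 75–105 day window is my assumption, not a requirement from any standard — the point is that the interval stays auditable (bounded) while becoming impossible to chart from the outside.

```python
import random
from datetime import date, timedelta

# Illustrative sketch: jitter a 90-day access-review cadence within a
# bounded window so the schedule can't be predicted externally.
# The window (75-105 days) is an assumption, not a standard's requirement.

def next_review_date(last_review: date, base_days: int = 90,
                     jitter_days: int = 15, rng=None) -> date:
    rng = rng or random.Random()
    offset = base_days + rng.randint(-jitter_days, jitter_days)
    return last_review + timedelta(days=offset)

# Seeded only to make the example reproducible.
print(next_review_date(date(2026, 1, 1), rng=random.Random(7)))
```

The same pattern applies to phishing simulations, detection-rule rollouts, and honeytoken rotation: bounded variation, documented policy, unpredictable execution.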
4. Adaptation
The rigid tree breaks; the reed bends. Compliance programs that treat ISO 27001, SOC 2, or ISO 42001 as a “get certified and freeze” exercise snap the moment the standard updates, the business pivots, or a new regulation lands. The EU AI Act’s August 2026 high-risk obligations are not a one-time hurdle. NIST AI RMF will keep evolving. HIPAA enforcement is being reshaped by AI use cases nobody anticipated five years ago.
The adaptive program builds change into its bones: continuous control monitoring, living risk registers, AI inventories that update as deployments happen, and governance committees with the authority to actually change course rather than just observe it. The reed survives because it expects the storm.
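One concrete form of continuous control monitoring: flag any control whose latest evidence is older than the interval at which the control claims to operate. The control names and intervals below are invented for illustration.

```python
from datetime import date, timedelta

# Illustrative sketch of continuous control monitoring. Control names
# and intervals are assumptions, not taken from any standard.

controls = {
    "access-review":        {"interval_days": 90,  "last_evidence": date(2025, 8, 1)},
    "ai-inventory-refresh": {"interval_days": 30,  "last_evidence": date(2025, 12, 20)},
    "backup-restore-test":  {"interval_days": 180, "last_evidence": date(2025, 10, 5)},
}

def stale_controls(controls, today):
    """Controls whose newest evidence is older than their own claimed cadence."""
    return sorted(name for name, c in controls.items()
                  if today - c["last_evidence"] > timedelta(days=c["interval_days"]))

print(stale_controls(controls, date(2026, 1, 10)))
```

Run daily, a check like this turns "are we still compliant?" from an annual panic into a standing answer.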
5. Timing
Patience creates power. The wrong control at the wrong time is still a failure — even if it’s the technically correct control.
Deploying an AI system before a conformity assessment is finished isn’t bravery, it’s regulatory exposure. Announcing a breach without coordinated counsel and forensics burns trust you could have kept. Pushing for SOC 2 Type II before you have six months of evidence wastes the audit. Certifying to ISO 42001 before you’ve operationalized the AIMS turns your certificate into a liability the first time a customer asks a hard question.
Waiting too long is the other failure mode. Organizations dragging their feet on EU AI Act readiness will find themselves competing for the same scarce notified bodies and conformity assessment capacity in 2026, paying a premium for the privilege of being late. Timing is the discipline of moving exactly when the move is decisive — neither earlier nor later.
6. Use strength against weakness
Don’t fight where the adversary is strong. And don’t audit where your control is weakest and call it strategy. Pick the terrain.
For defenders, this means leveraging what you already have. If you’re ISO 27001 certified, the majority of your ISO 42001 control set is already mapped — don’t rebuild from scratch, extend. If you have a mature third-party risk program, AI vendor governance is an extension of it, not a new function. If your detection stack is strong at the identity layer, fight there first and harden endpoints in parallel. For consultancies and internal programs alike, this also means leading with the work where your scar tissue is deepest, not competing on commoditized engagements where price has already won the race.
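The "extend, don't rebuild" argument is ultimately set arithmetic. The sketch below uses invented control names (not actual ISO clause identifiers) to show the shape of a 27001-to-42001 gap analysis: most of the required set is already covered, and the net-new work is a short list, not a second program.

```python
# Control names below are invented for illustration — they are not
# actual ISO 27001 or ISO 42001 clause identifiers.

iso27001_controls = {"asset-inventory", "access-control",
                     "supplier-risk", "incident-response"}

aims_required = {"asset-inventory", "access-control", "supplier-risk",
                 "ai-impact-assessment", "model-lifecycle"}

reusable = aims_required & iso27001_controls   # already built and audited
net_new  = aims_required - iso27001_controls   # the actual new work

print(sorted(reusable))
print(sorted(net_new))
```

The same arithmetic justifies treating AI vendor governance as an extension of third-party risk rather than a new function.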
7. Win without fighting
The highest mastery is preventing the incident, not responding to it gracefully. Sun Tzu’s “winning without fighting” is the entire premise of preventive controls, security-by-design, and governance-by-design.
In InfoSec, it’s the patch that closes the vuln before the exploit hits, the phishing-resistant MFA that retires the credential-theft pathway entirely, the segmentation that means the ransomware can’t move. In compliance, it’s the embedded control that makes the audit boring — because there’s nothing left to find. In AI governance, it’s the model risk assessment done before deployment, the bias testing done before customer harm, the data lineage documented before the regulator asks. The breach you avoid, the fine you never receive, the audit finding that never exists — these are the wins nobody writes a case study about. They are also the most valuable wins you will ever produce.
My perspective
After 16+ years in this work, including the ShareVault ISO 42001 implementation that took us through a Stage 2 audit this year, here’s what I’ve come to believe.
Sun Tzu’s rules survive because they’re not really about war. They’re about navigating systems with intelligent, adaptive opponents under uncertainty — which is exactly what InfoSec, compliance, and AI governance are. Our adversaries are not just attackers. They include regulators, market dynamics, our own organizational inertia, and increasingly the emergent behavior of the AI systems we deploy.
The practitioners who win in this space are not the ones with the thickest binders or the most certifications on the wall. They are the ones who internalize a few things: that programs are living organisms, that honest self-assessment beats sophisticated reporting, that timing matters as much as content, and that the best outcome is usually the incident that never happened and the audit finding that never appeared.
If I had to compress Sun Tzu’s seven rules into one sentence for an AI governance leader stepping into 2026, it would be this:
Build a program that knows what it is, knows what it faces, moves when it should move, and makes most of its victories invisible.
That is strategy. Everything else is just paperwork.
DISC InfoSec helps B2B SaaS and financial services organizations operationalize ISO 27001, ISO 42001, EU AI Act, NIST AI RMF, and HIPAA — with a practitioner’s bias for governance that holds up under audit and under attack.