Enterprise AI governance is an industry. Consultants, committees, 80-page frameworks, compliance officers, quarterly audits. It costs six figures and takes six months to implement.

You have seven employees and a Slack channel. You don't need any of that.

But you do need governance. Your team is pasting customer emails into ChatGPT, generating content with Claude, making hiring decisions with AI-scored assessments, and automating financial reports. One data breach, one biased hiring decision, one hallucinated number in a client report — and you're in real trouble.

Here's a framework that takes an afternoon to set up and actually gets followed.

The One-Page AI Policy

Every team needs this document. Not a 30-page governance manual. One page. Four sections.

1. Approved Tools

List every AI tool your team is authorized to use. Be specific:

  • Claude Pro (Team plan) — approved for writing, research, code review, internal analysis
  • ChatGPT Plus — approved for brainstorming, image generation
  • GitHub Copilot — approved for code completion in all repos
  • Not approved: Free-tier AI tools, any tool without a business data agreement

If it's not on the list, don't use it. This prevents the "I found this cool AI tool" problem where someone pastes customer data into a random chatbot.
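
If you want the list in a form a script can check (say, behind a Slack slash command or an onboarding quiz), here's a minimal Python sketch. The tool keys and the `is_approved` helper are hypothetical illustrations; the one-page document stays the source of truth.

```python
# The approved list as a machine-readable allowlist -- a minimal sketch.
# Tool keys and listed uses mirror the policy above; names are our own.
APPROVED_TOOLS = {
    "claude-pro": {"writing", "research", "code review", "internal analysis"},
    "chatgpt-plus": {"brainstorming", "image generation"},
    "github-copilot": {"code completion"},
}

def is_approved(tool: str, use: str) -> bool:
    """Approved only if the tool is on the list AND the use is listed for it."""
    return use in APPROVED_TOOLS.get(tool, set())

print(is_approved("claude-pro", "code review"))    # True
print(is_approved("random-chatbot", "anything"))   # False: not on the list
```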

2. Data Rules

Three categories. Simple.

Green (always OK): Public information, your own writing drafts, open-source code, general research questions. No restrictions.

Yellow (proceed with caution): Internal documents, proprietary code, business strategy. Use only with approved tools on business plans. Never include customer-identifying information.

Red (never): Customer PII, financial records, health data, passwords, API keys, legal documents under NDA. These never go into any AI tool regardless of the plan or provider.

Print this on a card. Stick it next to every monitor. When someone hesitates about whether they can paste something into Claude, they check the card.

3. Review Requirements

Not everything needs review. Set thresholds:

  • AI-generated content going to customers → Human review required before sending
  • AI-assisted code going to production → Standard code review process (AI doesn't change this)
  • AI-generated internal documents → Author is responsible for accuracy
  • AI-informed decisions about people (hiring, performance, compensation) → Manager review + documentation of AI's role

The rule: the higher the stakes, the more human oversight. A blog post draft needs a glance. A hiring recommendation needs scrutiny.

4. Incident Response

When AI produces something wrong, harmful, or embarrassing:

  1. Stop using the output immediately
  2. Notify the team lead
  3. Document what happened: what tool, what input, what went wrong
  4. Fix the immediate problem
  5. Update the policy if needed

That's it. Five steps. No incident review board. No 30-day investigation period. Fix it, learn from it, move on.
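
If your team lives in code more than spreadsheets, the incident record can be a tiny template. A minimal sketch with field names of our own choosing; a pinned Slack message or shared doc works just as well.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# One record per incident, matching steps 1-5 above.
@dataclass
class AIIncident:
    tool: str            # which approved tool produced the output
    input_summary: str   # what went in (never paste Red-category data here)
    what_went_wrong: str
    fix_applied: str
    policy_updated: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

incident = AIIncident(
    tool="ChatGPT Plus",
    input_summary="asked for a Q3 revenue summary from pasted figures",
    what_went_wrong="hallucinated a growth percentage not in the source data",
    fix_applied="corrected the report before sending; flagged to team lead",
)
```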

Risk Assessment in 30 Minutes

Open a spreadsheet. List every way your team uses AI. For each use case, answer three questions:

| Use Case | What data goes in? | What does it influence? | What if it's wrong? |
|---|---|---|---|
| Blog post drafting | Public topics, our opinions | Published content | Embarrassing but fixable |
| Customer email replies | Customer names, issues | Customer relationship | Could damage trust |
| Code generation | Proprietary codebase | Production software | Could cause outages |
| Financial reporting | Revenue data | Business decisions | Could mislead stakeholders |
| Hiring screening | Candidate resumes | People's careers | Legal liability |

Score each row:

  • Low risk: Wrong output is easily caught and fixed. No sensitive data involved.
  • Medium risk: Wrong output could cause real problems. Some sensitive data. Human review catches most issues.
  • High risk: Wrong output could cause legal, financial, or reputational damage. Sensitive data involved. Consequences are hard to reverse.

Low-risk use cases: spot-check monthly. Medium: human review of every output. High: consider whether AI should be used at all, and if so, with extensive human oversight.
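
If you'd rather compute the tiers than eyeball them, here's a minimal Python sketch. The 1-to-3 scales and the take-the-worst-column rule are our own simplification of the three questions, not a formal methodology.

```python
# Score each spreadsheet row: rate data sensitivity and consequence severity
# on a 1-3 scale (1 = public/easily fixed, 3 = PII/hard to reverse), then
# take the worst of the two. Scales and rule are illustrative assumptions.

REVIEW_CADENCE = {
    "low": "spot-check monthly",
    "medium": "human review of every output",
    "high": "extensive oversight, or reconsider using AI here",
}

def risk_tier(data_sensitivity: int, consequence: int) -> str:
    worst = max(data_sensitivity, consequence)
    return {1: "low", 2: "medium", 3: "high"}[worst]

for use_case, sens, cons in [
    ("Blog post drafting", 1, 1),
    ("Customer email replies", 2, 2),
    ("Hiring screening", 3, 3),
]:
    tier = risk_tier(sens, cons)
    print(f"{use_case}: {tier} -> {REVIEW_CADENCE[tier]}")
```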

This assessment takes 30 minutes. Update it quarterly or when you add new AI tools.

Data Handling That Actually Works

The complicated version involves data classification frameworks, processing agreements, consent management platforms, and a privacy officer. You don't have a privacy officer.

The simple version:

Rule 1: If data identifies a specific person, it doesn't go into AI tools. Names, emails, phone numbers, addresses, account numbers — strip them or don't use AI.
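
A rough way to automate the "strip them" half of Rule 1 is regex redaction of structured identifiers. A minimal sketch; the patterns are illustrative and deliberately incomplete. They catch emails, phone numbers, and long digit runs, but not names, so treat this as risk reduction, not a guarantee.

```python
import re

# Redact the obvious structured identifiers before text goes anywhere near
# an AI tool. Names and free-text identifiers will slip through.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{8,}\b"), "[ACCOUNT#]"),
]

def scrub(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Refund jane@example.com, account 900114522, call 555-867-5309"))
# Refund [EMAIL], account [ACCOUNT#], call [PHONE]
```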

Rule 2: Use business/enterprise plans, not personal accounts. Business plans typically include data protection agreements that prevent the AI provider from training on your input. Personal plans often don't.

Rule 3: Never use AI to make final decisions about people. AI can suggest, summarize, and analyze. Humans decide. This protects you legally and ethically.

Rule 4: Keep logs. When AI produces output that influences a business decision, save the prompt and the response. If someone asks "why did you decide X?" you need to show the reasoning chain, including what the AI contributed.
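
The log can be as simple as an append-only JSONL file. A minimal sketch below; the file name and field names are our own choice, and Rule 1 still applies to anything you write into it.

```python
import json
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.jsonl"  # hypothetical path, one JSON object per line

def log_ai_contribution(tool: str, prompt: str, response: str, decision: str) -> None:
    """Append one prompt/response pair plus the decision it influenced."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,       # scrub identifiers first -- Rule 1 still applies
        "response": response,
        "decision_influenced": decision,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_contribution(
    tool="Claude Pro",
    prompt="Summarize the pros and cons of vendor A vs vendor B",
    response="Vendor A is cheaper; vendor B has better support...",
    decision="Chose vendor B for the support terms",
)
```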

Compliance Without Consultants

Current regulations you should know about:

  • EU AI Act — If you serve EU customers, AI used in hiring, credit, or safety-critical applications must meet transparency and documentation requirements
  • State privacy laws (CCPA, Colorado, Connecticut, etc.) — AI processing of personal data may require disclosure in your privacy policy
  • FTC guidance — Don't use AI to deceive consumers. If AI generates your marketing claims, they still need to be truthful
  • Industry-specific — Healthcare (HIPAA), finance (SOX, GLBA), education (FERPA) have rules that apply to AI use

For most small teams, compliance means: update your privacy policy to mention AI use, don't feed protected data into AI tools, and keep records of how AI influences decisions. If you're in a regulated industry, get a one-hour legal consultation — worth the $300.

The Quarterly Review

Block one hour every quarter. Check three things:

  1. New use cases: Is the team using AI in ways not covered by the policy? Add them to the risk register.
  2. Incidents: Did anything go wrong? What did we learn?
  3. Regulation changes: New laws or guidance that affect us?

Update the policy if needed. Share changes with the team. Done.

Start Today

  1. Write your one-page policy (30 minutes)
  2. Complete the risk assessment spreadsheet (30 minutes)
  3. Set the data handling rules and share with the team (15 minutes)
  4. Calendar a quarterly review (2 minutes)

Total setup time: under 2 hours. No consultants. No committees. No six-month implementation timeline.

Your team is already using AI. The question is whether you're governing it or just hoping for the best.
