Cities using AI tools without a documented classification system and reviewer lanes face legal exposure, equity liability, and public trust risk that a policy document alone won't fix.

Risk tiers and reviewer lanes are how cities keep AI use documented and defensible.

This page is for everyone who gets asked — or who needs to ask — hard questions about how the city is using AI. It lays out how AI uses get classified by risk, who reviews what, what the city is not allowed to do, and how decisions get documented. Every city that uses this toolkit starts here. And every city can shape these rules to fit their own standards, values, and legal obligations.

Three tiers. Every AI use fits into one.

The city classifies every proposed AI use before it moves forward. Classification determines who reviews it, what conditions apply, and whether it's allowed at all. These tiers are a starting point — your city can tighten them, add sub-categories, or apply different review requirements based on your own standards and obligations.

Low-risk use

Internal drafting or support work that does not affect resident rights, service eligibility, or formal records — and does not require human review before use.

Examples: Drafting a staff memo. Summarizing a meeting. Generating talking points for an internal briefing. Writing a first draft of a policy section for human review.

These uses go through a lighter review path. They are still documented. They are still subject to monitoring if scope grows.

High-risk use

Resident-facing tools, service delivery decisions, enforcement-related support, or any use that creates legal, records, equity, or public trust concerns. These require full multi-stakeholder review before going live.

Examples: AI that helps screen benefits applications. Tools that support code enforcement decisions. Chatbots that answer resident questions about city services. Any use that touches hiring, discipline, or employee evaluation.

Legal, DEI, security, and community trust review are all required here, with HR added whenever employees are affected. No high-risk use goes live without documented sign-off from each relevant lane.

Prohibited use

Any use that bypasses required human review, hides the city's use of AI from residents or staff, or creates harm that outweighs any operational benefit. These are not subject to review — they are off the table.

Examples: Using AI to make final decisions on benefits without human review. Deploying AI tools without disclosing them to affected residents. Using AI outputs as official records without human verification.

If a proposed use falls here, it stops. The city does not need a workaround — it needs a different approach.
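
For cities that want to encode the tiers in an intake script or form logic, here is a minimal Python sketch. The flag names are illustrative assumptions, not toolkit definitions; the load-bearing structure is that prohibited checks run first and anything touching residents or rights lands in high-risk.

    # A minimal sketch of the three-tier model. Flag names are illustrative
    # assumptions -- substitute your city's own classification questions.
    from enum import Enum

    class RiskTier(Enum):
        LOW = "low-risk"
        HIGH = "high-risk"
        PROHIBITED = "prohibited"

    def classify(resident_facing: bool,
                 affects_rights_or_eligibility: bool,
                 bypasses_required_human_review: bool,
                 hides_ai_use: bool) -> RiskTier:
        # Prohibited checks come first: these uses stop, full stop.
        if bypasses_required_human_review or hides_ai_use:
            return RiskTier.PROHIBITED
        # Anything touching residents, rights, or eligibility is high-risk.
        if resident_facing or affects_rights_or_eligibility:
            return RiskTier.HIGH
        return RiskTier.LOW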

Who reviews what — and when

Every city has different staff and structures. This is the starting framework. Your city maps its own reviewers to these lanes.

Legal

Trigger: Any high-risk use, and any use touching resident records, contracts, or liability.

What they review: Legal exposure, public records obligations, vendor contract terms, and whether the use creates any rights or due process concerns.

DEI / Equity

Trigger: Any use that affects resident-facing decisions, hiring, or service delivery.

What they review: Whether the AI use could produce discriminatory outcomes, whether affected communities were considered, and whether equity conditions are built into the review.

IT / Security

Trigger: Any use involving external vendors, data storage, API access, or system integration.

What they review: Data handling practices, vendor security posture, access controls, and whether resident or employee data is protected appropriately.

HR / Employee Relations

Trigger: Any use that affects how employees work, are evaluated, or are monitored — and any use that may trigger labor agreement review.

What they review: Workforce impact, training obligations, union notification requirements, and whether staff have been prepared and informed.

Community Trust / Public Affairs

Trigger: Any resident-facing use, or any use likely to generate public questions.

What they review: Whether residents will be informed, how the city will explain the use publicly, and whether the communication plan is in place before launch.
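
If proposed uses are tracked in a structured intake form, the five trigger rules above reduce to a small routing table. A sketch, assuming each use is described by a handful of boolean fields; the field and lane names here are hypothetical:

    # Sketch of lane routing. The dictionary keys read off a hypothetical
    # intake record -- map them to whatever your form actually captures.
    LANE_TRIGGERS = {
        "legal": lambda u: u.get("high_risk") or u.get("touches_records_contracts_or_liability"),
        "dei_equity": lambda u: u.get("affects_resident_decisions") or u.get("affects_hiring_or_service_delivery"),
        "it_security": lambda u: u.get("external_vendor") or u.get("handles_data_storage_or_apis"),
        "hr_employee_relations": lambda u: u.get("affects_employees") or u.get("may_trigger_labor_agreement"),
        "community_trust": lambda u: u.get("resident_facing") or u.get("likely_public_questions"),
    }

    def lanes_for(use: dict) -> list[str]:
        # Every lane whose trigger fires reviews before the use moves forward.
        return [lane for lane, fires in LANE_TRIGGERS.items() if fires(use)]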

Review process

The review path, in four steps

Every proposed AI use — low or high risk — follows this path. The depth of review scales with the risk tier.

  1. Classify the use. Determine whether this use is low-risk, high-risk, or prohibited. Use the tiers above. If you're unsure, default to high-risk and escalate.
  2. Route the review. Pull in the lanes that apply to this tier. For high-risk uses, all relevant lanes review before anything moves forward. No lane can be skipped without a documented reason.
  3. Document the decision. Record what was reviewed, what conditions were set, who approved it, and what the monitoring plan is. This documentation is what makes the policy real — not just a promise.
  4. Monitor after launch. The review doesn't end at launch. If the tool changes, the scope grows, or public impact increases, the use re-enters the review path. Monitoring is not optional; a minimal sketch of this re-review check follows the list.
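
Step 4 is the easiest to let slip, so it helps to make the re-entry conditions explicit. A minimal sketch, with trigger names invented for illustration:

    # Sketch of the step-4 check: any of these conditions sends the use
    # back into the review path. Trigger names are illustrative.
    RE_REVIEW_TRIGGERS = ("tool_changed", "scope_grew", "public_impact_increased")

    def needs_re_review(status: dict) -> bool:
        return any(status.get(trigger, False) for trigger in RE_REVIEW_TRIGGERS)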

Evaluating vendors before you commit

Most AI risk enters a city through vendor contracts. The risk tier framework applies to purchased tools the same way it applies to internally built ones — but vendor use adds a second layer of questions the city needs to answer before signing.

Pre-purchase approvals

Before any AI tool is purchased or contracted, three functions should sign off: procurement or contracts (budget and compliance), IT or security (data architecture and access controls), and the CAIO or policy lead (alignment with the city's risk framework and training implications).

All three reviews should happen before contract terms are finalized, not after.
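
Expressed as a gate, the rule is simply that all three sign-offs exist before terms are finalized. A sketch; the sign-off labels are assumptions:

    # Sketch: contract terms are not finalized until all three functions
    # have signed off. Labels are illustrative.
    REQUIRED_SIGNOFFS = {"procurement_or_contracts", "it_or_security", "caio_or_policy_lead"}

    def ready_to_finalize_terms(signoffs: set[str]) -> bool:
        return REQUIRED_SIGNOFFS <= signoffs  # all three present, no exceptions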

What to evaluate before signing

Price is not the only criterion. Before committing, the city should have answers to:

  • Has this vendor had documented AI ethics violations or bias incidents?
  • How is city data protected, and where is it stored?
  • Has this system produced discriminatory outcomes in other deployments?
  • Can the city pause or exit the contract if the system underperforms?

Required contract language

Every AI vendor contract should include at minimum (a checklist sketch follows the list):

  • City data is not used to train vendor models without written consent.
  • All city data is returned and deleted within 30 days of contract termination.
  • The vendor notifies the city within 24 hours of any security incident or system failure.
  • The city has audit rights over data handling practices.
  • Performance metrics are defined in the contract and reported on a regular basis.
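
A procurement reviewer can treat these as a hard checklist rather than negotiating points. A minimal sketch, with term identifiers invented here for illustration:

    # Sketch of a minimum-terms checklist for AI vendor contracts.
    # Term identifiers are invented for illustration.
    REQUIRED_CONTRACT_TERMS = [
        "no_training_on_city_data_without_written_consent",
        "data_returned_and_deleted_within_30_days_of_termination",
        "incident_notification_within_24_hours",
        "city_audit_rights_over_data_handling",
        "performance_metrics_defined_and_reported",
    ]

    def missing_terms(contract_terms: set[str]) -> list[str]:
        # Anything returned here is a gap to close before signing.
        return [term for term in REQUIRED_CONTRACT_TERMS if term not in contract_terms]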

What makes AI use legally defensible

The risk tiers and reviewer lanes are the framework. Documentation is what makes the framework real in a legal or regulatory challenge. A city that can show its process — not just its rules — is in a fundamentally stronger position than one that can only point to a policy document. At minimum, the city should be able to show the following (sketched as a structured record after the list):

  • Risk tier assigned and recorded before deployment, not after a complaint arrives.
  • Review lanes documented. The city should be able to show which reviewers were consulted, when, and what they found. A review that was not documented did not happen.
  • Approval conditions recorded. What guardrails were attached to the approval? Who signed off and when?
  • Monitoring plan named. What triggers a re-review? Who is responsible, and how often do they check?
  • Equity review completed for high-risk uses before any deployment that affects resident-facing decisions or protected populations.
  • Incident history maintained alongside the original approval record — so the city can show how it responded, not just that something happened.
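
One way to make every item on that list auditable is to capture it as a single structured record at approval time. A sketch using Python dataclasses; the field names mirror the checklist and are otherwise assumptions, not toolkit requirements:

    # Sketch of an approval record covering each defensibility item above.
    # Field names are illustrative; none are mandated by the toolkit.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ReviewRecord:
        use_name: str
        risk_tier: str                      # assigned before deployment
        lane_findings: dict[str, str]       # lane -> what the reviewer found
        approval_conditions: list[str]      # guardrails attached to the approval
        approved_by: str
        approved_on: date
        re_review_triggers: list[str]       # what sends this use back for review
        monitoring_owner: str
        monitoring_cadence: str             # e.g. "quarterly"
        equity_review_completed: bool
        incident_log: list[str] = field(default_factory=list)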

This documentation is not bureaucracy for its own sake. It is what allows the city to show it made a considered decision — and to explain that decision clearly if it is ever questioned.

The review framework is backed by working materials

The risk tiers and review steps on this page don't live only here. The public GitHub repo includes templates for documenting review decisions, checklists for each stakeholder lane, and evaluation rubrics that flag risk before a tool goes live. These materials are what turn a policy framework into something a real city team can actually use.

Your city owns the rules. The repo should help you build them.

Use the repo when your city is ready to draft the governance language, risk tiers, review triggers, and supporting materials that make the rules usable. If the city still needs help mapping reviewers, resolving risk disagreements, or shaping the overall governance structure, use the guided help path here.

Open governance working files

Use the repo templates, prompts, and review materials that support rule drafting and review design.

Get guided help with review design

If your city needs expert guidance on setting risk tiers, mapping reviewers, or drafting the governance policy, get help here.