What is Included in the AI Policy Builder
Last Updated: 4/21/26
Policy Sections
The Policy Builder includes 13 main sections, each addressing a key area of responsible AI use.
Every section begins with a Guidance paragraph that provides context and explanation, followed by three stance levels representing increasing degrees of policy restriction:
- Laissez-Faire (minimal restriction),
- Intermediate (balanced approach), and
- Restrictive (strict control and oversight).
From each stance, you can choose a Short, Medium, or Robust version of the policy language:
- Short provides concise, high-level language;
- Medium includes more context and implementation guidance;
- Robust contains detailed, prescriptive language suited to organizations seeking stricter governance.
You select one policy statement (a stance and a length) per section. This lets you build a customized AI policy that fits your organization’s priorities, risk tolerance, and compliance requirements.
Tip: Read each Guidance paragraph before choosing a stance. It is not part of any particular stance option; it provides context for what you are selecting.
When you see the following sections in the builder, here’s what they are for:
- Purpose and Objectives: State why this policy exists. Balance innovation with ethics, compliance, and security. Explain organizational intent and connection to mission.
- Scope and Applicability: Define who and what the policy covers.
- Definitions: List clear, authoritative definitions for all key terms so interpretation is uniform across your organization.
- Ethical Principles and Organizational Commitments: Establish values and ethical principles guiding all AI use, such as fairness, accountability, transparency, privacy, and mission alignment.
- Acceptable Use of AI Tools: Define how AI tools may be used, from low-sensitivity exploratory use to restricted, high-sensitivity applications.
- Data Protection and Confidentiality: Set standards for safeguarding sensitive and personal information when using AI tools.
- Human Oversight and Quality Assurance: Outline the human roles and responsibilities for overseeing AI use and reviewing its output.
- Disclosure and Transparency: Describe when and how users must disclose AI use in documents, communications, or client work.
- Training, Governance, and Oversight: Outline staff training, oversight mechanisms, and committee responsibilities for AI governance.
- Reporting and Incident Response: Describe required reporting steps for misuse or ethical violations as defined by your organization.
- Enforcement and Disciplinary Actions: Clarify consequences for noncompliance, violations, or misuse of AI tools. Include escalation, disciplinary action, and revocation of access.
- Compliance with Legal and Ethical Standards: Reference applicable professional, legal, and regulatory frameworks — such as ABA Model Rules, privacy laws, and ethical guidelines.
- Review and Revision: Describe how and when the AI policy will be reviewed, updated, and approved. Specify the responsible parties.
Reminders
- This is a starting point. You don’t have to use every section, and you can add your own.
- Use plain, accessible language in your final policy.
- Tailor the examples and rules to reflect your organization's mission, size, and technical comfort level.
- Train staff on both how to use AI tools and how to follow the policy.
- Document approvals, usage, and reviews to protect your organization and your clients.