Legal Services National Technology Assistance Project
AI Policy Builder


What is Included in the AI Policy Builder

Last Updated: 4/21/26


Policy Sections

The Policy Builder includes 13 main sections, each addressing a key area of responsible AI use.

Every section begins with a Guidance paragraph providing context and explanation, followed by three stance levels, each representing a different degree of policy restriction:

  • Laissez-Faire (minimal policy restriction),
  • Intermediate (a balanced approach), and
  • Restrictive (strict control and oversight).

From each stance, you can choose a Short, Medium, or Robust version of the policy language:

  • The Short option provides concise, high-level language;
  • the Medium option includes more context and implementation guidance;
  • and the Robust option contains detailed, prescriptive language suitable for organizations seeking stricter governance.

You will select one policy piece per section. This lets you build a customized AI policy that fits your organization's priorities, risk tolerance, and compliance requirements.


Tip: Read each Guidance paragraph before choosing a stance. It is not part of any particular stance option; it provides context for what you are selecting.


When you see the following sections in the builder, here’s what they are for:
 

  1. Purpose and Objectives: State why this policy exists. Balance innovation with ethics, compliance, and security. Explain organizational intent and connection to mission.
  2. Scope and Applicability: Define who and what the policy covers.
  3. Definitions: List clear, authoritative definitions for all key terms so interpretation is uniform across your organization.
  4. Ethical Principles and Organizational Commitments: Establish values and ethical principles guiding all AI use, such as fairness, accountability, transparency, privacy, and mission alignment.
  5. Acceptable Use of AI Tools: Define how AI tools may be used, from low-sensitivity exploratory use to restricted, high-sensitivity applications.
  6. Data Protection and Confidentiality: Set standards for safeguarding sensitive and personal information when using AI tools.
  7. Human Oversight and Quality Assurance: Outline the human roles and responsibilities for overseeing AI use.
  8. Disclosure and Transparency: Describe when and how users must disclose AI use in documents, communications, or client work.
  9. Training, Governance, and Oversight: Outline staff training, oversight mechanisms, and committee responsibilities for AI governance.
  10. Reporting and Incident Response: Describe required reporting steps for misuse or ethical violations as defined by your organization.
  11. Enforcement and Disciplinary Actions: Clarify consequences for noncompliance, violations, or misuse of AI tools. Include escalation, disciplinary action, and revocation of access.
  12. Compliance with Legal and Ethical Standards: Reference applicable professional, legal, and regulatory frameworks — such as ABA Model Rules, privacy laws, and ethical guidelines.
  13. Review and Revision: Describe how and when the AI policy will be reviewed, updated, and approved. Specify the responsible parties.
     

Reminders

  • This is a starting point. You don’t have to use every section, and you can add your own.
  • Use plain, accessible language in your final policy.
  • Tailor the examples and rules to reflect your organization's mission, size, and technical comfort level.
  • Train staff on both how to use AI tools and how to follow the policy.
  • Document approvals, usage, and reviews to protect your organization and your clients.
     
