🇦🇪 UAE & International Edition

K–12 AI Policy Training Programme

A complete professional development programme guiding district and school teams from initial AI landscape understanding through policy creation, adoption, implementation, and sustained governance. Aligned with UAE National AI Strategy 2031, UNESCO, ISTE, FERPA, COPPA, and PDPL.

8 Training Phases · 12 Course Modules · 16–20 hours · 10 Ready-to-Use Templates · Audience: All K–12 Staff
51% of students already use AI for schoolwork
15% of districts have a formal AI policy
8 phases to policy excellence
12 course modules available
11 sections in the complete policy document
60+ policy documents analysed

8-Phase Journey to AI Policy Excellence

P0 AI Landscape · P1 AI Audit · P2 Policy Team · P3 Drafting · P4 Legal & Ethics · P5 Board Adoption · P6 Implement (PD) · P7 Monitoring

The 5 Pillars of a Comprehensive K–12 AI Policy

DISTRICT AI GOVERNANCE FOUNDATION
⚖️ LEGAL COMPLIANCE: FERPA · COPPA · PDPL
🔒 DATA PRIVACY: Vendor Vetting · DPAs
🎓 ACADEMIC INTEGRITY: Disclosure · AI Literacy
EQUITY & ACCESS: IEP · ELL · Digital Divide
🔄 GOVERNANCE & REVIEW: Annual Review · Updates
Phase 0
Understanding the AI Landscape
Build shared vocabulary · 90 min
Phase 1
Comprehensive AI Audit
Inventory tools & gaps · 120 min
Phase 2
Building Your Policy Team
Assemble the committee · 90 min
Phase 3
Drafting Your AI Policy
11-section document · 180 min
Phase 4
Legal, Ethical & Equity Review
Validate the draft · 120 min
Phase 5
Approval & Board Adoption
Navigate governance · 90 min
Phase 6
Implementation & PD
90-day launch · 150 min
Phase 7
Monitoring & Annual Review
Sustainable governance · 120 min
FACILITATOR NOTE — Programme Launch

Welcome participants. Begin by asking: "What is the #1 AI challenge your school is currently dealing with?" Capture answers on a whiteboard. These become your cohort's anchor case studies throughout all 8 phases. Estimated full-programme time: 16–20 hours across 8 sessions, or compressed into a 2-day intensive workshop.

PHASE 0 · FOUNDATION

Understanding the AI Landscape

Build shared vocabulary and foundational understanding before policy work begins. Every participant must start here.

Duration: 90 minutes
Audience: All staff & leadership
Format: Whole-group + discussion
Standards: ISTE Educator 1a, 2a

📋 Phase Overview & Learning Objectives

Duration

90 minutes (can split into two 45-min sessions)

Audience

All district/school leadership, teachers, and key stakeholders

Materials

AI Landscape handout, taxonomy chart, discussion prompts

Standards

ISTE Educator Standards 1a, 2a; Bloom's Taxonomy L1–L2; UAE Digital Strategy

  • Define AI and distinguish it from related technologies (automation, data analytics)
  • Identify the 5 categories of AI tools used in K–12 education
  • Understand why 85% of districts do not have an AI policy — and why that creates risk
  • Recognise "Shadow AI" as a data privacy and equity crisis
  • Articulate the 5 pillars of a comprehensive AI policy
  • Assess your school's current AI policy readiness level (1–5 scale)

Section 1: Why This Matters Right Now

The Policy Gap Crisis

Generative AI tools entered mainstream student use in late 2022. By 2024, 51% of K–12 students reported using AI for schoolwork. Yet only 15% of school districts had adopted a formal AI policy. This gap creates legal exposure, equity harm, and academic integrity chaos. In the UAE, the National AI Strategy 2031 mandates AI integration — making governance even more urgent.

The urgency is real: FERPA and COPPA were written before generative AI existed. In UAE contexts, the Personal Data Protection Law (Federal Decree-Law No. 45/2021) similarly predates most generative AI tools. When a student submits their essay draft to an AI tool, that draft may contain personally protected information. Without a policy that names specific approved tools, schools have no legal basis for these interactions.

Beyond legal risk, the equity problem is severe. Students with home internet access and devices can use AI tools freely. Students without home access cannot. Without a policy that explicitly addresses equitable AI access, AI becomes another vector for widening the achievement gap.

Section 2: Defining AI for K–12 Contexts

Artificial Intelligence refers to computer systems that perform tasks typically requiring human intelligence — including language understanding, visual recognition, decision-making, and content generation. For K–12 policy purposes: AI is any software system that uses machine learning, pattern recognition, or generative modelling to produce outputs that adapt based on data.

What AI is NOT (for policy purposes):

  • Simple calculators, spell-checkers, or grammar checkers using fixed rules
  • Basic scheduling or administrative software without learning components
  • Standard search engines (though AI-enhanced search is in scope)

Section 3: Shadow AI — The Hidden Crisis

⚠️ Shadow AI Defined

Shadow AI refers to the use of AI tools by staff and students that has not been reviewed, approved, or even acknowledged by school leadership. When a teacher asks ChatGPT to generate quiz questions and pastes them into their LMS — without approval, without a DPA, without student notification — that is Shadow AI. It is happening in your school right now.

Shadow AI risk inventory — common K–12 scenarios:

  • Teachers using AI writing assistants to draft parent communications
  • Students submitting assignments to AI tools for feedback (FERPA/PDPL concern)
  • Counsellors using AI chatbots to help draft student recommendations
  • Administrators using AI for contract review (employment law concern)
  • Students under 13 creating accounts on generative AI platforms (COPPA violation)
  • IT staff using AI security tools without notifying HR or legal

Section 4: Policy Readiness Self-Assessment

Level | Description | What This Means
Level 1 | No awareness | Leadership doesn't know AI tools are being used in classrooms
Level 2 | Informal use | Staff use AI tools personally; no school discussion has occurred
Level 3 | Fragmented | Individual teachers have informal AI norms; no school policy
Level 4 | Policy draft | A policy draft exists but has not been formally adopted
Level 5 | Implemented | A formal AI policy exists, is communicated, and is actively enforced

Most schools are at Level 2–3. This programme will take you to Level 5.

AI Tool Taxonomy — 5 Categories for K–12 Education

✍️ GENERATIVE AI: ChatGPT · Claude · Gemini · Copilot · Perplexity · Khanmigo (⚠️ HIGHEST RISK)
🎓 ADAPTIVE LEARNING: Khan Academy · IXL · DreamBox · Zearn · Newsela (📋 DPA REQUIRED)
📊 ANALYTICS & ASSESSMENT: Panorama · NWEA · Illuminate · Schoolzilla · PowerSchool AI (🔍 FERPA/PDPL REVIEW)
🔒 SAFETY & SECURITY: Securly · GoGuardian · Bark · Gaggle · Lightspeed (✅ LOWER RISK)
🤖 ADMIN & OPERATIONS: Frontline · TalentEd · Infinite Campus AI · SchoolAI · Canopy (📋 STAFF DATA)

🎤 Facilitator Prompts — Phase 0

Opening question (5 min): "Without looking anything up — raise your hand if you believe a student in your school used an AI tool for schoolwork in the past week. Now keep your hand up if your school has a policy governing that use." The visual gap is your opening.

Shadow AI discussion: "Think about your own practice. Did you use any AI tool in the last month for something work-related? Was that tool on an approved list? Did you sign a DPA for it? This is not a guilt exercise — it is an honesty exercise."

Closing reflection: "What is the one thing from Phase 0 that most changes how you think about AI in your school? We'll return to these answers at the end of the programme."

🔬 Activity 0-A: AI Landscape Audit — What's in Your School?

⏱ 30 minutes · 👥 Small groups of 4–5 · 📋 AI Taxonomy handout, sticky notes

1. BRAINSTORM (10 min): Each group lists every AI tool they are aware of being used in their building or department. Include tools used by teachers, students, counsellors, and administrators. Do not self-censor — unapproved tools belong on the list too.
2. CATEGORISE (8 min): Sort your list into the 5 taxonomy categories. Note which tools you are unsure how to categorise — those become discussion items.
3. RISK RATING (7 min): For each tool, estimate: Does the school have a Data Privacy Agreement? Do students share personal information? Rate each: 🟢 Low concern / 🟡 Needs review / 🔴 Immediate concern.
4. DEBRIEF (5 min): Each group shares their top 2 "🔴 Immediate concern" tools. Capture these for the Phase 1 formal audit.

Output: A raw, preliminary inventory of AI tools in use — the foundation for the formal Phase 1 audit. This is your school's first honest look at the scope of the challenge.
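The step-3 risk rating can be sketched as a small helper. This is an illustrative sketch only: the function name, the two yes/no inputs, and the triage rules are assumptions for demonstration, not part of the programme materials.

```python
# Illustrative sketch of the Activity 0-A risk rating (step 3).
# The triage rules below are assumptions, not official programme logic.

def rate_tool(has_dpa: bool, students_share_personal_info: bool) -> str:
    """Map the two audit questions to a 🟢/🟡/🔴 rating."""
    if students_share_personal_info and not has_dpa:
        return "🔴 Immediate concern"   # student data flows with no agreement
    if not has_dpa or students_share_personal_info:
        return "🟡 Needs review"        # one risk factor present
    return "🟢 Low concern"             # agreement in place, no personal data

# Hypothetical inventory entries: (tool, DPA in place?, students share PII?)
inventory = [
    ("ChatGPT (free account)", False, True),
    ("Khan Academy", True, True),
    ("Rule-based grammar checker", True, False),
]

for name, dpa, pii in inventory:
    print(f"{name}: {rate_tool(dpa, pii)}")
```

A team could apply this over the sticky-note inventory to pre-sort tools before the debrief, but the human discussion in step 4 remains the actual rating of record.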

🧠 Phase 0 Knowledge Check

1. Which of these is NOT within the definition of AI for K–12 policy purposes?

2. What is "Shadow AI"?

📖 Key Vocabulary — Phase 0

Artificial Intelligence (AI)

Computer systems performing tasks typically requiring human intelligence: language understanding, visual recognition, decision-making, and content generation.

Generative AI

AI systems that create new content (text, images, audio, video, code) in response to prompts. Examples: ChatGPT, Claude, Gemini, DALL-E.

Machine Learning

A subset of AI in which systems learn from data to improve performance without being explicitly programmed for each task.

Shadow AI

AI tool use by staff or students not reviewed, approved, or acknowledged by school leadership. Creates legal and equity risks.

Data Privacy Agreement (DPA)

A contract between a school and a vendor specifying how student data will be collected, stored, used, and protected. Required for all tools accessing student data.

AI Literacy

The ability to understand, use, evaluate, and critically assess AI systems and their outputs. Essential for both educators and students.

Adaptive Learning Platform

Educational software using AI to personalise instruction, adjusting content, pacing, and difficulty based on each student's performance patterns.

Equity Gap

The difference in AI access, skills, and outcomes between student populations — often along existing lines of race, income, and disability status.

⚠️ Common Pitfalls — Phase 0

⚠️

PITFALL: Skipping Phase 0 to "get to the real work." The vocabulary established here is the foundation for every subsequent phase. Teams that skip shared definitions spend hours later arguing about what counts as AI. Phase 0 vocabulary is policy infrastructure.

⚠️

PITFALL: Treating AI literacy as optional for non-instructional staff. Your HR director using an AI resume-screening tool, your facilities manager using predictive maintenance software — these employees are also covered by your AI policy.

⚠️

PITFALL: Approaching AI policy from a place of fear. Schools that begin with "we need to stop students from using AI" consistently produce unenforceable prohibition policies. Start with "we need to govern AI in a way that serves learning and protects students."

⚠️

PITFALL: Assuming your current acceptable use policy covers AI. General technology AUPs were written for email and web browsing. They do not address generative AI, vendor data privacy, or academic integrity questions unique to AI.

✅ Phase 0 Completion Checkpoint

Check each item as your team completes it. All items must be checked to advance.

PHASE 1 · DISCOVERY

Conducting a Comprehensive AI Audit

Systematically inventory all AI tools in use across the school — approved and unapproved — and assess current policy gaps against legal requirements.

Duration: 120 minutes
Audience: Technology, curriculum, HR, legal & building leaders
Output: AI Tool Inventory + Policy Gap Analysis

Section 1: Why You Must Audit Before You Draft

Many schools begin writing an AI policy based on what they think is happening with AI. This produces policies that prohibit tools nobody uses while ignoring tools everybody uses. Before writing a single word of policy, you must know the ground truth: what tools are actually in use, by whom, under what conditions, with what student data exposure.

The audit has three dimensions: (1) Tool inventory — what AI tools exist, whether sanctioned or not; (2) Legal exposure mapping — which tools create FERPA, COPPA, CIPA, ADA, or PDPL compliance questions; (3) Stakeholder awareness assessment — what staff, parents, and students know and believe about AI use.

Section 2: Legal Framework Review

Law / Regulation | What It Governs | Key AI Policy Implication | Context
FERPA | Student education records | Student data submitted to AI tools may constitute education records — vendors must operate under the school official exception with a DPA | 🇺🇸 US / International schools
COPPA | Online data from children under 13 | AI tools collecting data from K–8 students require verifiable parental or school consent; up to $51,744 per violation | 🇺🇸 US / International schools
CIPA | Internet content filtering for E-rate schools | The Internet Safety Policy must address AI-generated content, including content produced by AI tools students can access | 🇺🇸 US
UAE PDPL | Personal data protection in the UAE | Federal Decree-Law No. 45/2021 — all student personal data processed by AI tools requires a legal basis; data localisation requirements apply | 🇦🇪 UAE
ADA / Section 504 | Accessibility for students with disabilities | AI tools used in instruction must be accessible; AI cannot screen students in discriminatory ways | 🇺🇸 US / International
State/Emirate Privacy Laws | Varies by jurisdiction; many stricter than federal law | ADEK/KHDA guidelines in the UAE; 40+ US states have student privacy laws, several with specific AI provisions | All jurisdictions

Section 3: The Audit Instrument — Four Survey Populations

Staff Survey (key questions):

  • What AI tools do you currently use for professional work?
  • Which of these tools do students interact with directly?
  • Have you reviewed the privacy policy of any AI tool before using it for work?
  • Do you know if your school has a Data Privacy Agreement with any AI tool you use?
  • Have you received professional development on AI use or AI policy?
  • Do students in your class use AI tools for assignments? Is this required, permitted, or without your knowledge?

Parent/Guardian Survey (key questions):

  • Are you aware that your child's school uses AI-powered educational software?
  • Has your child told you about using AI tools for schoolwork at home?
  • Do you have concerns about student data being shared with AI companies?
  • Should students be allowed to use AI writing tools for homework? For assessments?

Student Survey (age-appropriate, Grade 6+):

  • Have you used an AI tool to help with a school assignment in the past month?
  • Did your teacher know you were using AI? Was it required, allowed, or your own choice?
  • Do you know if your school has rules about AI use?

Section 4: Policy Gap Analysis — 10 Critical Areas

Record a gap assessment for each of the 10 policy areas:

1. AI tool approval process and vendor vetting
2. Student data privacy requirements for AI vendors
3. Student AI use permissions by grade level
4. Academic integrity and AI disclosure requirements
5. Staff AI use guidelines and professional standards
6. Prohibited AI uses and absolute restrictions
7. Equity provisions and universal access requirements
8. Violation reporting and enforcement procedures
9. Annual review and update process
10. Special education and IEP/SEN accommodations for AI use

🔬 Activity 1-A: AI Audit Sprint

⏱ 45 minutes · 👥 Department/role-based teams

1. TOOL INVENTORY (15 min): Using Template 1, each department team lists every AI tool used in the past 12 months. Include: vendor name, tool name, primary use case, which students interact with it, and whether a DPA/data agreement exists.
2. DPA VERIFICATION (15 min): The IT Director reviews the vendor contract database against the tool list. For each tool: ✅ Agreement in place / 🟡 Agreement exists but expired / 🔴 No agreement found. This is often the most uncomfortable part — prepare for surprises.
3. RISK PRIORITISATION (10 min): Review all 🔴 "No agreement found" tools involving student data. These become immediate action items: execute a DPA within 30 days or discontinue student use.
4. DEBRIEF (5 min): Produce a ranked list: top 3 legal risks; top 3 most-used tools without policy coverage; the one item the team is most surprised by.
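The step-2 cross-check of the tool list against the contract database can be sketched as follows. Everything here is a hedged illustration: the vendors, dates, and data shapes are hypothetical, and a real check would run against the school's actual contract records.

```python
# Illustrative sketch of Activity 1-A step 2 (DPA verification).
# Vendors, dates, and the dictionary layout are assumptions for demo only.
from datetime import date

dpa_database = {            # vendor -> DPA expiry date (absent = no agreement)
    "Khan Academy": date(2026, 6, 30),
    "GoGuardian": date(2024, 1, 15),    # hypothetical expired agreement
}

tool_inventory = ["Khan Academy", "GoGuardian", "ChatGPT"]

def dpa_status(vendor: str, today: date) -> str:
    """Return the ✅/🟡/🔴 status used in the activity."""
    expiry = dpa_database.get(vendor)
    if expiry is None:
        return "🔴 No agreement found"
    if expiry < today:
        return "🟡 Agreement exists but expired"
    return "✅ Agreement in place"

for vendor in tool_inventory:
    print(f"{vendor}: {dpa_status(vendor, date(2025, 9, 1))}")
```

The same three-way status feeds directly into step 3: every 🔴 result involving student data becomes a 30-day action item.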

✅ Phase 1 Completion Checkpoint

PHASE 2 · FORMATION

Building Your Policy Team

Assemble the diverse, inclusive committee that will drive authentic, legally defensible, and educationally sound AI policy creation.

Duration: 90 minutes
Output: Committee Charter + Stakeholder Plan

Section 1: Why Inclusive Policymaking Produces Better Policy

The most common failure mode for school AI policies is being written by a small group — usually IT leadership and the principal — without meaningful input from teachers, students, parents, or community members. These policies are technically drafted but practically ignored: teachers don't follow rules they had no hand in creating.

Inclusive policymaking is not just ethically preferable — it produces better outcomes. Research consistently shows that policies developed with authentic stakeholder input have higher fidelity of implementation, more sustainable buy-in, and greater resilience when challenged.

Section 2: The 9-Stakeholder Committee

Role | Why Essential | Key Contribution | Time Commitment
AI Policy Coordinator / AI Lead | Drives the process; owns ongoing governance | Project management, research, draft writing | Significant — primary responsibility
Technology Director / CTO | Technical feasibility; vendor relationships; security | Tool registry, DPA tracking, technical provisions | High (all sessions + implementation)
Curriculum / Deputy Principal | Instructional alignment; academic integrity lens | Grade-band permissions, assessment guidance | High (all sessions)
SEN / SENCO / Special Ed Director | IEP/504/SEN implications; accessibility | Equity provisions, accommodation language | Medium (key sessions + review)
Teacher Representatives (2) | Ground-level practice reality; faculty trust | Practical feasibility, classroom implications | Medium (all sessions)
Parent/Community Rep | Community trust; student perspective from home | Family communication provisions, consent language | Medium (key sessions)
Student Representative (Gr. 8+) | Student voice; peer credibility | Student use reality, peer culture insights | Medium (key sessions)
Legal Counsel / DPO | Legal compliance verification; data protection | FERPA/COPPA/PDPL review, liability language | Lower (review sessions + final approval)
Building Principal / Vice Principal | Implementation reality; enforcement capacity | Enforcement procedures, building-level provisions | Medium (key sessions)

Section 3: Four Input Formats for Authentic Engagement

📋 Written Surveys

Anonymous surveys capture honest opinions staff won't share in meetings. Deploy before Phase 1 and again after the policy draft for comparison.

🎤 Town Hall Sessions

Open meetings for community input on draft provisions. One for staff, one for families, one student-centred. Record questions; respond in writing within 5 days.

🔬 Focus Groups

Small-group deep dives (6–8 participants) on high-stakes provisions: academic integrity, student data privacy, IEP/SEN accommodation language.

📧 Written Comment Period

30-day formal comment window after draft release. Accept written comments by email, paper, and in-person. Document and respond to all substantive comments.

Section 4: Handling Resistance

Resistance Type | What It Sounds Like | Effective Response
Fear-based | "AI is dangerous — we should ban it all." | Acknowledge the concern; redirect to evidence: blanket bans have never worked for technology. Ask: "What specifically worries you? Let's write a policy that addresses that."
Minimising | "AI is just a tool — we don't need a special policy." | Share the Shadow AI legal analysis. Ask: "Does our current policy address data protection requirements for AI vendors? If not, what happens if there's a data breach?"
Overconfidence | "Our tech AUP already covers this." | Do a live gap analysis: pull up the existing AUP and walk through the 10 gaps from Phase 1. Name the specific provisions missing.
Turf concerns | "Teachers should decide this, not IT." | Reframe: "This policy affects everyone — that's why everyone is at the table. Your role is specifically to contribute X."

✅ Phase 2 Completion Checkpoint

PHASE 3 · DRAFTING

Drafting Your AI Policy

Build the complete 11-section AI policy document with all required legal provisions, grade-band permission matrices, and implementation infrastructure.

Duration: 180 minutes (3 sessions)
Output: Complete draft AI policy document

Section 1: The 11-Section Policy Architecture

A legally sound, operationally complete K–12 AI policy requires eleven sections. Each section addresses a distinct domain of governance. Missing sections create gaps that will be exploited — by vendors, by students, by legal challengers, or by staff seeking justification for unauthorised tool use.

The 11-Section AI Policy Architecture

§1. PURPOSE & SCOPE: What the policy governs; who is covered; definitions
§2. AI TOOL DEFINITIONS & TAXONOMY: Official definitions; 5 tool categories; what counts as AI
§3. APPROVED TOOL REGISTRY & VETTING: 30-point vetting checklist; registry structure; approval process
§4. STUDENT USE PERMISSIONS: Grade-band permission matrix; disclosure requirements
§5. STAFF USE GUIDELINES: Professional standards; permitted/prohibited uses; PD requirements
§6. ACADEMIC INTEGRITY: Disclosure standards; AI-assisted work; honour code integration
§7. DATA PRIVACY & SECURITY: FERPA/COPPA/PDPL compliance; DPA requirements; breach protocol
§8. EQUITY & ACCESS: Universal access; IEP/SEN/ELL provisions; bias monitoring
§9. PROHIBITED USES & RESTRICTIONS: Absolute prohibitions; conditional restrictions; safety provisions
§10. VIOLATIONS & ENFORCEMENT: 4-level framework; investigation; restorative approach
§11. ANNUAL REVIEW & AMENDMENT: Review triggers; amendment process; version control

Section 2: Grade-Band AI Permission Matrix

Grade Band | Generative AI (Text) | AI Research Tools | Adaptive Learning | AI Writing Feedback | AI Image Generation | Disclosure Required
K–2 | 🔴 Not permitted | 🔴 Not permitted | ✅ Teacher-directed only | 🔴 Not permitted | 🔴 Not permitted | Teacher notifies parents
3–5 | 🔴 Not permitted | 🟡 With supervision | ✅ Approved tools only | 🟡 Teacher-supervised | 🔴 Not permitted | Written teacher disclosure
6–8 | 🟡 When teacher specifies | ✅ With disclosure | ✅ Approved tools | 🟡 With disclosure | 🟡 Approved tools only | Student AI Disclosure Form
9–12 | 🟡 Per assignment | ✅ With full disclosure | ✅ Approved tools | ✅ With disclosure | 🟡 Per assignment, with disclosure | Full AI Contribution Statement
Staff | ✅ Approved tools; no confidential student data | ✅ Approved tools | ✅ Approved tools | ✅ Approved tools | ✅ With copyright review | Professional disclosure per context
🔴 = Not permitted | 🟡 = Conditional — specific conditions apply | ✅ = Generally permitted with standard requirements
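In an implementation, the matrix above translates naturally into a lookup table. The sketch below is a minimal illustration: the dictionary layout, key names, and the fail-closed default are assumptions, and only a few matrix cells are shown.

```python
# Minimal sketch of the grade-band permission matrix as a lookup table.
# Keys and the fail-closed default are illustrative assumptions; only a
# handful of cells from the matrix are reproduced here.

PERMISSIONS = {
    # (grade_band, tool_category) -> permission level from the matrix
    ("K-2", "generative_text"): "🔴 Not permitted",
    ("K-2", "adaptive_learning"): "✅ Teacher-directed only",
    ("6-8", "generative_text"): "🟡 When teacher specifies",
    ("9-12", "ai_writing_feedback"): "✅ With disclosure",
    ("staff", "generative_text"): "✅ Approved tools; no confidential student data",
}

def check_permission(grade_band: str, category: str) -> str:
    """Fail closed: any combination the matrix omits is not permitted."""
    return PERMISSIONS.get((grade_band, category), "🔴 Not permitted")

print(check_permission("6-8", "generative_text"))      # 🟡 When teacher specifies
print(check_permission("3-5", "ai_image_generation"))  # falls back to 🔴
```

The fail-closed default mirrors good policy drafting: anything the matrix does not explicitly allow stays prohibited until the committee rules on it.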

Section 3: Key Prohibited Uses — Absolute Restrictions

These prohibitions apply to ALL users at ALL grade levels — no exceptions:
  • Facial recognition of students — no AI system may use facial recognition to identify, track, or monitor students without explicit consent and board approval
  • AI-generated disciplinary decisions — no AI system may be the sole or primary basis for a disciplinary action; human review is required for all student discipline
  • AI-generated IEP/SEN decisions — AI tools may assist in data gathering but may never be the basis for special education eligibility, placement, or IEP goal decisions
  • Student biometric data collection — keystroke dynamics, eye tracking, emotional recognition without explicit consent
  • Commercial AI platforms without DPA for students under 13 — no generative AI platform collecting personal information from students under 13 may be used without an executed Data Privacy Agreement
  • AI-generated evaluations submitted as educator judgement — staff may not submit AI-generated evaluation text as their own professional assessment without disclosure

Section 4: Academic Integrity — AI Disclosure Standards

The disclosure requirement shifts the question from "Did you use AI?" to "How did you use AI?" — presupposing transparency rather than guilt.

Sample AI Disclosure Statement (Grade 9–12):

"This work was completed with the following AI assistance: [Tool name]. I used it for: [specific purpose]. The core ideas, analysis, and conclusions are my own. All AI-generated content has been reviewed, verified, and either cited or paraphrased. The total AI contribution to this work is approximately [%]."

Sample AI Disclosure Statement (Grade 6–8):

"I used [tool name] to help with this assignment. I used it for [purpose]. The rest of this work is my own thinking and writing."
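Schools distributing the disclosure templates digitally could fill them programmatically. This is a hypothetical sketch: the function name and parameters are invented for illustration, and the string simply reproduces the Grade 9–12 template above.

```python
# Illustrative generator for the Grade 9-12 disclosure statement.
# The function name and parameters are hypothetical, not programme API.

def disclosure_statement(tool: str, purpose: str, percent: int) -> str:
    """Fill the Grade 9-12 template with a student's disclosure details."""
    return (
        f"This work was completed with the following AI assistance: {tool}. "
        f"I used it for: {purpose}. The core ideas, analysis, and conclusions "
        f"are my own. All AI-generated content has been reviewed, verified, "
        f"and either cited or paraphrased. The total AI contribution to this "
        f"work is approximately {percent}%."
    )

print(disclosure_statement("Claude", "brainstorming counter-arguments", 10))
```

Embedding the template in the LMS this way keeps the wording consistent across assignments while the student supplies only the three bracketed fields.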

✅ Phase 3 Completion Checkpoint

PHASE 4 · REVIEW

Legal, Ethical & Equity Review

Verify the policy draft is legally defensible, ethically grounded, and genuinely equitable before moving to board adoption.

Duration: 120 minutes
Output: Revised, legally reviewed policy draft

Section 1: Legal Compliance Review — 5 Key Areas

Area 1: FERPA / PDPL Compliance

Every provision involving student data must be reviewed. Key questions: (1) Does this provision allow a vendor to access student education records? If yes, is the vendor operating under the "school official" exception with a DPA specifying legitimate educational interest? (2) Does the data privacy section require annual DPA review and include a data breach notification timeline consistent with applicable law? (UAE PDPL requires breach notification within 72 hours.)

Area 2: COPPA Compliance

For any provision authorising AI tool use for students under 13: Has the vendor agreed to operate under school authority rather than collecting consent directly from parents? Does the DPA template include language prohibiting secondary commercial use of student data?

Area 3: Section 504/ADA/SEND Accessibility

Review equity provisions: Do all approved AI tools meet WCAG 2.1 AA accessibility standards? Does the policy prohibit tools not reviewed for screen reader compatibility? Does the IEP/SEN provision specify that AI accommodations may be embedded in a student's plan?

Area 4: UAE PDPL Specific Requirements

UAE Context

UAE Federal Decree-Law No. 45/2021 on Personal Data Protection requires: explicit consent or other legal basis for processing personal data; notification to UAE Personal Data Protection Office of certain breaches within 72 hours; data subject rights including access, correction, and erasure. Schools in ADEK jurisdiction must comply with ADEK School AI Framework; KHDA jurisdiction schools must comply with KHDA AI Guidelines.

Area 5: Academic Awarding Body Requirements

For secondary schools: verify that AI disclosure and academic integrity provisions comply with applicable awarding body guidance — IB Organisation, Cambridge CAIE, Pearson, AQA, and UAE MoE requirements. These change frequently and must be checked annually during the policy review cycle.

Section 2: Ethical Review — 4 Dimensions

Ethical Dimension | Review Questions | Policy Sections to Check
Fairness & Non-Discrimination | Does the policy prevent AI from producing biased outcomes by race, gender, disability, or ELL/language status? Does it require bias monitoring for AI assessment tools? | §8 (Equity), §3 (Vetting), §9 (Prohibited Uses)
Transparency | Do students and parents know when AI is being used in decisions affecting them? Does disclosure meet transparency obligations? | §6 (Academic Integrity), §4 (Student Permissions)
Human Oversight | Does the policy ensure humans remain in the loop for consequential decisions? Are AI recommendations always subject to human review before action? | §10 (Violations), §9 (Prohibited Uses)
Student Dignity | Does the policy protect student dignity in AI interactions? Is there a prohibition on AI systems that surveil, score, or rank students in ways that could stigmatise? | §9 (Prohibited Uses), §8 (Equity)

Section 3: Equity Impact Assessment

The Equity Test

For every major provision, ask: "Does this provision work the same way for a student with an IEP, an English Language Learner, a student whose family cannot afford internet access at home, and a student whose first language is not English?" If any of these students are disadvantaged, the provision needs revision.

Student Population | Key Risk Points | Recommended Provisions
Students with IEPs/SEN Plans | AI tools may not accommodate assistive technology; AI assessment may misinterpret disability-related work patterns | Require accessibility review in vetting; authorise AI accommodations in IEPs; prohibit AI-only disciplinary decisions
English Language Learners / EAL | AI translation tools create academic integrity grey areas; AI may misclassify ELL/EAL writing quality | Explicitly permit approved AI translation tools; require ELL/EAL-specific disclosure guidance; prohibit AI as sole reclassification basis
Low-Income / No Home Access | At-home AI access creates a homework equity gap; students can't practise with approved tools outside school | Ensure AI-based assignments are completable on school devices during school hours; consider device/hotspot lending programmes
International / Multilingual Students (UAE) | Culturally responsive AI content may be lacking; UAE cultural norms must be respected in AI outputs | Require cultural appropriateness review in vetting; align with UAE Digital Wellbeing principles; flag tools with UAE-specific content restrictions

✅ Phase 4 Completion Checkpoint

PHASE 5 · ADOPTION

Approval & Board Adoption

Navigate the board presentation, public comment process, and formal vote with confidence. A well-prepared presentation is the difference between adoption and delay.

Duration: 90 min prep + board meeting
Output: Board-adopted AI policy

Section 1: The Board Presentation — 5-Part Structure

Part 1: Why Now (5 min)

Frame urgency with data from Phase 1 audit: AI tool usage rates, legal exposure from unvetted tools, peer school comparisons. Never start with the policy — start with the problem it solves.

Part 2: Our Process (5 min)

Document the inclusive process: audit completed, 9-member committee, stakeholder surveys, town halls, legal review. Boards are most sceptical of policies that appear rushed. Show your work.

Part 3: Policy Overview (10 min)

Walk through the 11 sections at a high level. Focus on: the permission matrix, prohibited uses, vendor vetting, and enforcement framework. Do not read the policy — synthesise it.

Part 4: Stakeholder Support (5 min)

Share survey results: staff support rates, parent questions addressed, student voice incorporated. Quote specific feedback. This is often the most persuasive part.

Part 5: Implementation Plan (5 min)

Board members vote for policies they believe will be implemented. Show the 90-day launch plan, PD schedule, and tool registry timeline. Implementation readiness is often the deciding factor.

Section 2: Anticipated Board Questions — Prepared Responses

Question | Prepared Response
"How do we enforce this?" | "Enforcement operates at 4 levels — teacher intervention for minor instances up to principal/superintendent review for serious violations. Consistent enforcement is actually easier with a written policy than without one."
"Will this put us behind other schools?" | "Schools that govern AI thoughtfully are ahead of those that simply prohibit or ignore it. Our policy enables responsible use while protecting students — which is both educationally sound and legally defensible."
"What about teachers uncomfortable with AI?" | "Phase 6 includes a 3-tier PD programme. Tier 1 is required of all staff — explaining the policy and what it means for each role. No teacher is expected to be an AI expert."
"What if the technology changes?" | "Section 11 includes an annual review process and triggers for mid-year updates. The policy is designed to be a living document — our governance infrastructure will keep updating it."
"Did parents have input?" | "Yes — we conducted parent surveys, hosted town halls, and maintained a 30-day public comment period. The policy reflects parent priorities: data privacy, grade-appropriate permissions, and transparency."

✅ Phase 5 Completion Checkpoint

PHASE 6 · IMPLEMENTATION

Implementation & Professional Development

Launch the adopted policy with a comprehensive 3-tier PD programme, student instruction, family communication, and the operational infrastructure needed for sustained implementation.

Duration: 150 min initial + 90-day rollout
Output: Operational policy + trained staff

Section 1: The 90-Day Launch Window

The Critical Window

The first 90 days after policy adoption are the most critical: patterns established in this window tend to persist.

  • Weeks 1–2: All-staff communication; IT tool registry published; help desk activated
  • Weeks 3–6: Tier 1 PD for all staff; student instruction begins; parent FAQ published
  • Weeks 7–12: Tier 2 teacher PD; Tier 3 leader PD; first compliance check

Section 2: The 3-Tier Professional Development Model

3-Tier PD Model — Matching Depth to Role

TIER 1 — ALL STAFF: Required of every employee · 3 hours minimum
TIER 2 — Instructional Staff: Teachers · Coaches · Counsellors · Librarians · 9 hrs total
TIER 3 — Leaders & Specialists: Principals · IT · SEN · AI Policy Coordinator · 12 hrs total (3 additional)

Section 3: Student-Facing Instruction by Grade Band

Grade Band | Duration | Key Concepts | Suggested Activities
K–2 | 2 × 30 min | What computers can/can't do; "Doing your own thinking"; when to ask an adult | Read-aloud discussion; "Did a person think this?" sorting activity
3–5 | 2 × 45 min | What AI is; approved vs. unapproved tools; why we tell the truth about helpers; basic data privacy | AI tool exploration (teacher-directed); "I used AI to help" practice statements
6–8 | 3 × 50 min | AI categories; permission matrix; academic integrity; data privacy; AI bias introduction | Policy jigsaw; assignment analysis; AI output critique
9–12 | 3 × 55 min | Full policy; AI as professional tool; university/employer AI policies; quality disclosure; critical evaluation | Policy analysis; professional comparison; AI-assisted essay with disclosure; mock integrity hearing

Section 4: The AI Tool Registry

The AI Tool Registry is the operational heart of the data privacy pillar — the school's authoritative list of AI tools reviewed, approved, and cleared for student use. The registry must be published on the school website before the policy effective date. Without a functioning, public registry, the vendor vetting requirement is unenforceable.

Registry fields: Tool name · Vendor · AI category · Approved user groups · Approval date · DPA status and expiration · Date of last review · Approved use restrictions · Approving administrator. The AI Policy Coordinator owns the registry.

✅ Phase 6 Completion Checkpoint

PHASE 7 · SUSTAINABILITY

Monitoring, Evaluation & Annual Review

Build the governance structures and review processes that keep the policy current, compliant, and effective as AI technology continues to evolve.

Duration: 120 min initial + annual ongoing
Output: Governance calendar + monitoring protocols

Section 1: Five Monitoring Streams

Stream | What to Monitor | Frequency | Owner
Compliance Monitoring | Policy violation reports; enforcement actions taken; patterns by grade level or department | Monthly | Building principals → AI Policy Coordinator
Registry Health | DPA expiration dates; tools no longer in use; new tools requested; vendor changes | Quarterly | IT Director → AI Policy Coordinator
Staff Implementation | PD completion rates; help desk question volume and topics; staff survey on policy clarity | Quarterly | HR/PD Director → AI Policy Coordinator
Student AI Literacy | Academic integrity incident rates; student survey on policy understanding; disclosure compliance | Semester | Curriculum Director → AI Policy Coordinator
Legal Landscape | UAE/national regulatory updates; peer school policy changes; emerging legal cases | Monthly | Legal Counsel/DPO → AI Policy Coordinator

Section 2: Triggered Review Criteria

  • Trigger 1: A major new AI platform achieves mainstream student adoption (20%+ student use within 90 days)
  • Trigger 2: National/UAE legislation enacted or proposed creating new compliance requirements
  • Trigger 3: A data breach or security incident involving an AI tool used by the school
  • Trigger 4: A significant disciplinary incident or parent complaint revealing a policy gap
  • Trigger 5: A peer school adopts a significantly different policy framework gaining regulatory recognition
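Only Trigger 1 carries a quantitative threshold; the rest are judgment calls surfaced by the monitoring streams. A hedged sketch of how a coordinator might encode the check — the function name and flag parameters are illustrative:

```python
# Hypothetical triggered-review check. Trigger 1 is quantitative (20%+ student
# adoption within 90 days of a platform's launch); Triggers 2-5 are modelled
# as boolean flags reported by the five monitoring streams.
def review_triggered(adoption_rate: float, days_since_launch: int,
                     new_legislation: bool = False, ai_data_breach: bool = False,
                     policy_gap_incident: bool = False,
                     peer_framework_shift: bool = False) -> bool:
    mainstream_adoption = adoption_rate >= 0.20 and days_since_launch <= 90
    return any([mainstream_adoption, new_legislation, ai_data_breach,
                policy_gap_incident, peer_framework_shift])

print(review_triggered(adoption_rate=0.24, days_since_launch=60))  # True
print(review_triggered(adoption_rate=0.10, days_since_launch=60))  # False
```

Any single trigger firing is enough to convene a mid-year review — the criteria are deliberately disjunctive, not a weighted score.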

Section 3: Annual Review Process — Three Phases

Phase A: Data Gathering (Weeks 1–4)

Compile data from all 5 monitoring streams. Conduct annual surveys. Legal counsel reviews for regulatory changes. IT compiles the registry audit. Identify the top 5 policy gaps based on incident data.

Phase B: Committee Review (Weeks 5–8)

Full policy committee reconvenes. Reviews data from Phase A. Identifies provisions requiring revision. Drafts amendments. Circulates to all stakeholders for 2-week comment period.

Phase C: Board Update (Weeks 9–12)

Present annual findings and proposed amendments to the board. Obtain approval for substantive amendments. Publish updated policy. Communicate changes to all stakeholders.

Version Control

Every adopted version must be archived with: version number, adoption date, board resolution number, and a summary of changes from the prior version. All archived versions are published on the school website.
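The archive requirement above amounts to a completeness check on four fields. A minimal sketch — the field names and helper are hypothetical, chosen to mirror the list in the note:

```python
# Required fields for an archived policy version, per the Version Control note.
REQUIRED_FIELDS = {"version", "adoption_date", "board_resolution", "change_summary"}

def missing_archive_fields(entry: dict) -> list[str]:
    """Return the required fields absent from an archived-version record."""
    return sorted(REQUIRED_FIELDS - entry.keys())

v2 = {"version": "2.0", "adoption_date": "2025-06-15",
      "board_resolution": "BR-2025-14", "change_summary": "Annual review amendments"}
print(missing_archive_fields(v2))                  # [] — record is complete
print(missing_archive_fields({"version": "2.1"}))  # the three missing fields
```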

Section 4: National Standards Alignment

Framework | Relevant Standards | Policy Sections Aligned
UAE AI Strategy 2031 | AI integration in education; responsible AI; digital transformation | §1, §4, §5, §8
UNESCO AI Competency Framework | AI literacy; human oversight; ethical use; governance | §2, §6, §8, §10, §11
ISTE Standards | Educator 1a (Learner); 4b (Collaborator); Student 1d (Empowered Learner) | §4, §5, §6, §8
OECD AI Principles (2019/2024) | Human-centred values; transparency; robustness; accountability | §6, §7, §8, §9
IB AI Academic Integrity | AI disclosure; citation requirements; assessment design | §6 (Academic Integrity)

✅ Phase 7 Completion Checkpoint — Programme Completion

🎓 Congratulations — Programme Complete!

Your school has completed all 8 phases of the K–12 AI Policy programme. You are now among the most prepared schools for responsible AI governance.

📚 12-Module Course Library

Governing AI in Schools: From Policy to Practice — a world-class professional development programme analysing 60+ authoritative AI policy documents from the US Department of Education, TeachAI, UNESCO, OECD, UAE Ministry of Education, and ADEK/KHDA.

Modules: 12 Comprehensive Modules
Framework: Bloom's Taxonomy Aligned
Sources: 60+ Policy Documents
Certificate: 3-Tier Certification
About This Course

Based on analysis of 60+ authoritative U.S. and international AI policy documents — including guidance from the U.S. Department of Education, TeachAI, Digital Promise, UNESCO, OECD, UAE Ministry of Education, ADEK, and 30+ State Departments of Education. Designed for K–12 education leaders, teachers, and school staff at all levels.

M1

The AI Landscape in K–12 Education

Foundations · 90 min · All Audiences

Learning Outcome (Bloom's Level 2 — Understand)

Explain the key components of a school AI governance framework, including policy architecture, stakeholder roles, and core ethical principles, and articulate why each component is essential to responsible AI governance.

Topics Covered:

  • Understanding AI types, tools, and terminology relevant to school settings
  • Current state of AI adoption by students and educators (including 'shadow AI' use)
  • National and international AI governance frameworks (US DoE, UNESCO, OECD, EU AI Act)
  • UAE National AI Strategy 2031 and its implications for K–12 education
  • The regulatory gap: why schools must develop context-specific governance
  • Key statistics: 51% student AI use, 15% school policy adoption — the urgency
Key terms: Generative AI · Adaptive Learning · Shadow AI · UAE Strategy
M2

Ethical Foundations and Values in AI Governance

Ethics · 90 min · Leaders & Teachers

Learning Outcome (Bloom's Level 4 — Analyse)

Analyse AI-related risks across multiple ethical dimensions — fairness, transparency, accountability, and human oversight — and apply structured ethical decision-making frameworks to real-world school AI governance scenarios.

Topics Covered:

  • Identifying and articulating institutional values to underpin AI policy
  • Core ethical tensions: equity, privacy, transparency, and innovation
  • Algorithmic bias and its implications for student outcomes
  • Ethical decision-making frameworks for school leadership
  • UNESCO Ethics of AI Recommendation (2021) — practical applications
  • UAE cultural values and their integration into AI governance frameworks
  • The CARE framework: Critical, Accountable, Responsible, Equitable AI
M3

Legal and Regulatory Foundations

Legal Compliance · 120 min · Leaders & Legal Staff

Learning Outcome (Bloom's Level 5 — Evaluate)

Evaluate AI tools against legal compliance standards, data protection requirements, and equity considerations, applying a structured risk assessment framework to make informed procurement and deployment decisions.

Topics Covered:

  • US federal law: FERPA, COPPA, IDEA, ADA, and Section 504
  • UAE PDPL (Federal Decree-Law No. 45/2021) — practical compliance
  • ADEK School AI Framework and KHDA AI Guidelines for Schools
  • State/emirate-level privacy laws and jurisdiction-specific requirements
  • Data Processing Agreements (DPAs) — drafting and vendor compliance vetting
  • EU AI Act — implications for international schools and vendors
  • CIPA requirements and internet safety policy updates
  • Children's online safety: KCSIE (UK), UAE Child Rights Law, COPPA equivalents
M4

AI Policy Architecture and Design

Policy Design · 120 min · Policy Committee

Learning Outcome (Bloom's Level 6 — Create)

Construct a complete, school-specific AI policy that addresses permitted and prohibited uses, data privacy obligations, academic integrity requirements, safeguarding responsibilities, and stakeholder accountability.

Topics Covered:

  • Policy vs. guidance: structure, purpose, and the document hierarchy
  • The 11 essential sections of a world-class school AI policy
  • Tiered permission models: universally permitted, pre-approved, and prohibited AI uses
  • Writing enforceable, values-aligned policy language
  • Grade-band differentiation: K–2, 3–5, 6–8, 9–12
  • Integrating BYOD (Bring Your Own Device) policies with AI governance
  • Version control and policy lifecycle management
M5

Risk Management and Safeguarding

Safety · 120 min · Leaders, IT & Safeguarding

Learning Outcome (Bloom's Levels 4–6 — Analyse/Create)

Design risk identification, mitigation, and monitoring processes that protect students from AI-related harms — including data breaches, algorithmic bias, academic dishonesty, and safeguarding risks — within their specific institutional context.

Topics Covered:

  • AI risk taxonomy: data privacy, algorithmic bias, cybersecurity, misinformation, deepfakes, student wellbeing
  • The AI Tool Risk Assessment Framework: seven dimensions of evaluation
  • Safeguarding protocols and child protection in AI contexts
  • Deepfakes, AI-generated misinformation, and online safety implications
  • Procurement and approval processes for AI tools
  • Incident response planning for AI-related breaches
  • UAE-specific safeguarding: UAE Child Rights Law, digital wellbeing guidelines
M6

Stakeholder Roles and Responsibilities

Governance · 90 min · All Leadership

Learning Outcome (Bloom's Levels 3–6 — Apply/Create)

Develop differentiated stakeholder communication and engagement strategies that build shared understanding and sustained community support for responsible AI governance across staff, students, parents, and governing bodies.

Topics Covered:

  • Governance roles: AI Lead, Data Protection Officer, Governing Body, Senior Leadership
  • Responsibilities of teaching and non-teaching staff
  • Student and parent/guardian responsibilities and communication frameworks
  • Community engagement strategies for AI policy consultation
  • The 9-stakeholder committee model: composition and facilitation
  • Communicating AI policy changes to multilingual communities (UAE context)
M7

AI and Pedagogy — Integrating AI into Teaching & Learning

Pedagogy · 90 min · Teachers & Curriculum

Learning Outcome (Bloom's Level 3 — Apply)

Apply evidence-based pedagogical frameworks to integrate AI tools purposefully into curriculum design, assessment strategies, and differentiated instruction — while maintaining the primacy of human learning relationships.

Topics Covered:

  • Bloom's Taxonomy and AI: matching tool use to cognitive level
  • SAMR framework applied to AI integration (Substitution → Redefinition)
  • UDL (Universal Design for Learning) and AI tools for accessibility
  • AI-enhanced formative assessment: possibilities and pitfalls
  • Designing assignments that promote genuine student thinking with AI as a partner
  • AI tutoring systems: evidence base, benefits, and equity considerations
  • Professional practice: AI for lesson planning, feedback, and differentiation
M8

Academic Integrity in the Age of AI

Integrity · 90 min · Teachers, Curriculum & Leaders

Learning Outcome (Bloom's Level 5 — Evaluate)

Evaluate student work for appropriate AI use, implement fair and consistent disclosure requirements, and respond to suspected AI misconduct using a principled, evidence-based investigation process.

Topics Covered:

  • Tiered AI use frameworks and assignment classification systems
  • AI declaration and disclosure requirements by grade band
  • Responding to suspected AI misconduct: investigation and outcome procedures
  • Awarding body compliance: IB Organisation, Cambridge CAIE, Pearson, AQA, UAE MoE
  • AI detection tools: capabilities, limitations, and fairness concerns
  • Designing assessment tasks that are resistant to simple AI substitution
  • The difference between AI-assisted and AI-generated work — policy implications
M9

Implementation, Change Management & Community Engagement

Change Management · 120 min · Leaders

Learning Outcome (Bloom's Level 3 — Apply)

Apply evidence-based change management strategies to lead the implementation of an AI policy, including managing resistance, building staff capacity, and engaging diverse community stakeholders.

Topics Covered:

  • The School AI Readiness Audit: five dimensions of institutional readiness
  • Change management strategies: Kotter's 8-Step Model applied to AI policy
  • Phased implementation models: 30/60/90-day plans
  • Managing the three types of resistance: fear-based, minimising, and turf-protective
  • Building teacher AI champions and peer learning communities
  • Parent and community communication: messaging, FAQs, and town halls
  • Multilingual communication strategies for international/UAE school communities
M10

Equity, Inclusion & Culturally Responsive AI Governance

Equity · 90 min · All Staff

Learning Outcome (Bloom's Levels 4–5 — Analyse/Evaluate)

Identify and address equity gaps in AI policy design and implementation, ensuring that governance frameworks actively protect and support students with disabilities, English Language Learners, and students from under-resourced communities.

Topics Covered:

  • Algorithmic bias: sources, types, and educational impact
  • The digital equity gap in AI access: home vs. school, urban vs. rural
  • IEP/SEN and AI accommodations: legal requirements and best practice
  • EAL/ELL students and AI translation tools: integrity and support
  • Culturally responsive AI governance: UAE multicultural community context
  • Gender bias in AI educational tools: identification and mitigation
  • Universal Design for Learning as an equity framework for AI governance
M11

Monitoring, Review & Continuous Improvement

Governance · 90 min · AI Policy Coordinator & Leadership

Learning Outcome (Bloom's Level 5 — Evaluate)

Design and implement a sustainable AI governance cycle — including monitoring streams, triggered review protocols, and annual review processes — that maintains policy currency as AI technology and regulation continue to evolve rapidly.

Topics Covered:

  • Policy review cycles: annual review, incident-triggered review, and technology-driven updates
  • The 5 monitoring streams: compliance, registry health, staff implementation, student literacy, legal landscape
  • Building a culture of responsible AI use across the school community
  • Key performance indicators for AI policy effectiveness
  • Policy version control: archiving, communicating changes, maintaining trust
  • Succession planning: institutional knowledge and governance resilience
M12

Capstone — Policy Presentation & Governing Body Simulation

Capstone · 180 min · Full Cohort

Learning Outcome (Bloom's Level 6 — Create)

Present a complete, school-specific AI Policy package to a simulated governing body, incorporating all elements from Modules 1–11: legal compliance, ethical grounding, equity provisions, academic integrity framework, implementation plan, and monitoring structures.

Capstone Components:

  • Workshop 1: The AI Governance Audit — Where Are We Now? (30 min)
  • Workshop 2: Policy Sprint — Drafting Your School AI Policy (60 min)
  • Workshop 3: Guidance Suite Development and Governing Body Simulation (90 min)

Deliverables for Certification Portfolio:

  • Complete school AI policy document (11+ sections)
  • Staff AI guidance document
  • Student AI charter (age-appropriate versions)
  • Parent communication letter
  • AI declaration form template
  • AI Tool Risk Assessment for 2 tools from your school's current inventory
  • AI Policy Review Cycle Tracker (12-month calendar)
Certification Framework

Foundation Certificate: Complete Modules 1–4 + Capstone Part 1. Practitioner Certificate: Complete all 12 modules + Capstone Parts 1–2. Advanced Certificate (AI Lead): Practitioner + Capstone Part 3 + school-specific policy portfolio submission.

✅ School AI Policy Self-Assessment Checklist

A comprehensive self-assessment tool for school leaders to evaluate the current state of AI governance. Check each item currently in place. Leave blank where action is still needed.

Scoring: 8–10 ✓ = Strong | 5–7 = Developing | 0–4 = Action Needed

Section 1: AI Policy Foundations

1.1 Policy Existence and Scope

Our school has a written AI Policy that has been formally adopted by senior leadership.

The policy covers all staff, students, and governors / board members.

The policy clearly defines what constitutes 'AI tools' within our school context.

The policy distinguishes between permitted, conditional, and prohibited AI uses.

e.g., permitted for lesson planning, conditional for student work with disclosure, prohibited for summative assessment without declaration

The policy is version-controlled and includes a review date.

The policy has been mapped against current national/regional regulatory requirements.

e.g., UAE AI Strategy, ADEK/KHDA guidance, UK DfE guidance, EU AI Act

1.2 Policy Communication and Accessibility

The AI Policy is published on the school website or intranet and is easily accessible.

All staff received a copy of the policy and signed to confirm they have read it.

The policy is available in the home languages of our main parent communities.

Students have been given an age-appropriate summary or guide to the policy.

Section 2: Governance Structure

2.1 Roles, Responsibilities, and Accountability

A named Senior Leader holds overall accountability for AI governance at our school.

A designated AI Lead / Digital Lead coordinates day-to-day AI policy implementation.

Staff roles related to AI oversight are documented in job descriptions or role profiles.

There is a clear process for staff to report AI-related concerns or incidents.

Governors / board members receive at least annual updates on AI governance.

Section 3: Data Privacy and Legal Compliance

We have a current Data Protection Officer (DPO) or Data Protection lead.

All AI tools used with students have a Data Privacy Agreement (DPA) or equivalent in place.

Our AI tool approval process includes a formal privacy impact assessment.

We maintain a live register of approved AI tools with DPA expiry dates and review status.

Our school has a documented data breach response plan that includes AI-related incidents.

For UAE schools: policy aligns with UAE PDPL (Federal Decree-Law No. 45/2021) and applicable ADEK/KHDA requirements.

Section 4: Academic Integrity

Our school has a clear AI use disclosure/declaration requirement for student work.

Teachers have been trained to identify potential AI misuse in student submissions.

Our AI policy is aligned with the requirements of relevant awarding bodies (IB, Cambridge, UAE MoE, etc.).

There is a documented, fair process for investigating suspected AI misconduct.

Students receive age-appropriate AI literacy education as part of the curriculum.

Section 5: Staff Capacity and Professional Development

All staff have completed mandatory AI policy induction / orientation training.

Teaching staff have access to AI-specific professional development opportunities.

Our PD programme for AI is differentiated by role (teacher, leader, support staff).

There is an internal AI champion or community of practice supporting staff.

Section 6: Equity and Safeguarding

Our AI vetting process includes accessibility review (WCAG 2.1 AA or equivalent) for all student-facing tools.

AI accommodations for students with IEPs/SEN plans have been considered and documented.

Our policy explicitly prohibits AI-generated disciplinary decisions without human review.

Safeguarding risks of AI tools (deepfakes, inappropriate content generation) are explicitly addressed in the policy.

Our policy considers the needs of EAL/ELL students and multilingual families (particularly relevant for international/UAE schools).

Section 7: Monitoring and Continuous Improvement

Our school has a scheduled annual AI policy review process with a named owner.

We monitor compliance with the AI policy and track incidents systematically.

AI-related incidents or near-misses are reviewed and used to improve policy and practice.

We track regulatory changes that could affect our AI policy and respond proactively.

Overall Governance Score

___ / 40 items checked — Score: ___%
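The scoring rubric above quotes its bands on a 0–10 scale while the full checklist has 40 items; a sketch that rescales the checked count to 10 before banding — an interpretation of the rubric, not part of the source:

```python
def governance_score(checked: int, total: int = 40) -> tuple[float, str]:
    """Return (percentage, band). Banding assumes the stated 0-10 thresholds
    apply to the checklist rescaled to 10 points — an assumption, since the
    rubric quotes bands out of 10 while the checklist has 40 items."""
    scaled = round(checked / total * 10)
    if scaled >= 8:
        band = "Strong"
    elif scaled >= 5:
        band = "Developing"
    else:
        band = "Action Needed"
    return round(checked / total * 100, 1), band

print(governance_score(34))  # (85.0, 'Strong')
print(governance_score(0))   # (0.0, 'Action Needed')
```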

📋 Master Templates (10)

Ready-to-use templates for every phase of AI policy creation and governance. Interactive forms you can fill in and print directly from this platform.

Template 1: AI Tool Inventory

Phase 1 · Complete school AI tool audit instrument

Tool Name | Vendor | Category | Users | Student Data? | DPA Status | Risk
ChatGPT | OpenAI | Generative AI | Teachers (informal) | Potentially | 🔴 None | 🔴 High
Khan Academy / Khanmigo | Khan Academy | Adaptive Learning | Students Gr. 3–12 | Yes | 🟡 Under review | 🟡 Medium
DreamBox Learning | Discovery Education | Adaptive Learning | Students Gr. K–8 | Yes | ✅ Current | ✅ Low
[Add your tools here]

Categories: Generative AI / Adaptive Learning / Analytics & Assessment / Safety & Security / Admin & Operations

Template 2: Policy Committee Charter

Phase 2 · Formal charter for the AI Policy Development Committee

Template 3: AI Policy Document (11-Section Scaffold)

Phase 3 · Complete policy document framework

[SCHOOL / TRUST NAME]

ARTIFICIAL INTELLIGENCE (AI) POLICY

Version: ___ | Adopted: ___ | Effective: ___ | Review Date: ___

§1. PURPOSE & SCOPE. This policy establishes the framework for responsible, equitable, and legally compliant use of Artificial Intelligence (AI) tools throughout [School Name]. It applies to all school employees, students, contractors, and volunteers who use AI tools in connection with school educational programmes or on school technology infrastructure.

§2. DEFINITIONS. "Artificial Intelligence" means computer systems performing tasks typically requiring human intelligence, including language generation, image creation, personalised learning adaptation, and predictive analytics. Five categories recognised: [Generative AI / Adaptive Learning / Analytics & Assessment / Safety & Security / Admin & Operations].

§3. APPROVED TOOL REGISTRY. The School shall maintain an AI Tool Registry — a publicly accessible list of AI tools approved for use with students. No AI tool collecting student data may be used for instruction without prior approval and execution of a Data Privacy Agreement (DPA) or equivalent data processing agreement...

[ Sections §4 through §11 follow — complete the full document using the 11-section architecture from Phase 3 ]

Template 4: AI Disclosure Statement — Student Templates

Phase 3/6 · Age-differentiated disclosure forms for student use

Grade 9–12: Full AI Contribution Statement

"The core ideas, analysis, and conclusions in this work are my own. All AI-generated content has been reviewed, verified, and either cited or paraphrased."

Grade 6–8: Simple AI Use Statement

Template 5: Vendor Vetting Checklist (30-Point)

Phase 3 / Ongoing · Complete checklist for AI tool approval decisions

# | Vetting Criterion | Status | Notes
1 | Vendor has a Student Data Privacy Agreement (DPA) template available | |
2 | DPA explicitly prohibits sale or secondary commercial use of student data | |
3 | COPPA compliance explicitly addressed for users under 13 | |
4 | UAE PDPL / data protection compliance confirmed for UAE schools | |
5 | Tool meets WCAG 2.1 AA accessibility standards | |
6 | Data storage location disclosed; appropriate for jurisdiction | |
7 | Vendor provides data breach notification within 72 hours | |

[ 23 additional criteria in full checklist — educational efficacy, bias monitoring, content safety, vendor financial stability, support quality, contract terms ]

Template 6: Staff AI Guidance Document

Phase 6 · Practical guidance for all staff on AI use

[SCHOOL NAME] — STAFF AI GUIDANCE

What staff MAY do with AI tools:

  • Use tools on the Approved AI Tool Registry for lesson planning, resource creation, and professional tasks
  • Use approved AI tools to provide feedback on student work — but must review and verify all AI-generated feedback before sharing
  • Explore AI tools personally for professional learning — but must not use unapproved tools with student data

What staff must NOT do:

  • Submit AI-generated text as their own professional writing (reports, recommendations, IEP notes) without disclosure
  • Use any AI tool with student personal data that is not on the Approved Registry
  • Allow students to use AI tools not listed in the Grade-Band Permission Matrix

When in doubt: Check the AI Tool Registry first. Then contact the AI Policy Coordinator.

Template 7: Parent Communication Letter

Phase 5/6 · School-to-home communication about AI policy

[Date]

Dear Parents and Guardians,

I am writing to inform you that [School Name] has formally adopted an Artificial Intelligence (AI) Policy, effective [date]. This policy governs how AI tools are used by our staff and students to support learning while protecting student data and upholding academic integrity.

Key points for parents:

  • Our school maintains a public register of all AI tools approved for student use
  • Students are taught how to use AI responsibly, including how to disclose AI use in their work
  • All AI tools used with your child's personal data are covered by Data Privacy Agreements
  • Students in Grades K–5 have significant restrictions on generative AI use; older students have age-appropriate guided permissions

The full AI Policy is available on our school website at [URL]. If you have any questions, please contact [AI Policy Coordinator name] at [email].

Yours sincerely,

[Principal/Head Teacher Name]

Template 8: Student AI Charter (Secondary — Grades 6–12)

Phase 6 · Student-facing summary of AI rights and responsibilities

My AI Rights & Responsibilities at [School Name]

✅ I have the right to:

  • Know which AI tools are approved for use in my school
  • Ask my teacher how AI is being used in my learning
  • Know what data my school collects and how it is protected
  • Use approved AI tools to support my learning as specified by my teacher

📋 My responsibilities:

  • Only use AI tools that are on the school's approved list
  • Always disclose when I have used AI to help with my work
  • Never submit AI-generated content as entirely my own without declaration
  • Never enter another student's personal information into an AI tool
  • Report any concerns about AI use to a trusted adult

Signed: _________________ | Date: _________ | Class: _________

Template 9: New AI Tool Request Form

Phase 6 / Ongoing · Staff submission form for requesting new tool approval

⏱ The AI Policy Coordinator will respond within 15 business days. Do not begin using this tool with students until you receive written approval.

Template 10: Annual Review Calendar

Phase 7 / Ongoing · 12-month governance schedule

Month | Governance Activity | Owner | Status
August | Pre-year registry audit; DPA expiration review; staff PD calendar confirmed | IT Director + AI Lead |
September | All-staff Tier 1 policy orientation; student AI instruction begins | HR + Principals |
October | Tier 2 teacher PD completion deadline; first compliance check | PD Director |
December | Semester compliance review; registry update; incident report review | AI Policy Lead |
January | Legal landscape review; regulatory update check; mid-year DPA renewals | Legal/DPO + IT |
March | Annual stakeholder surveys deployed; data collection begins | AI Policy Lead |
April | Annual review committee meeting; survey analysis; draft amendments | Full Committee |
May | Public comment period (30 days); board presentation scheduled | Principal + Board |
June | Board adopts amendments; updated policy published; year-end registry audit | Board + AI Lead |

⚖️ Legal Frameworks & Pedagogical Resources

The legal and pedagogical foundations for K–12 AI policy. Know these frameworks to build a defensible, educationally sound policy.

⚖️ FERPA — Family Educational Rights and Privacy Act (US)

What it is: Federal law protecting the privacy of student education records. Applies to all schools receiving federal funding. Gives parents (and students 18+) the right to inspect, amend, and control disclosure of education records.

AI Policy Implication: When a student's work is submitted to an AI tool, that work may constitute an "education record." Vendors accessing student education records must operate under the "school official" exception (DPA required). AI systems using education records for automated decision-making must include human review.

Key Citation: 34 CFR Part 99. Policy sections: §3 (Registry), §7 (Data Privacy).

👶 COPPA — Children's Online Privacy Protection Act (US)

What it is: Federal law restricting online collection of personal information from children under 13. Enforced by the FTC. Violations can result in civil penalties up to $51,744 per violation.

AI Policy Implication: Generative AI platforms collecting data from students under 13 require either verifiable parental consent or school authority consent under §312.5(b)(1). The school consent mechanism requires a DPA strictly limiting vendor data use to educational purposes. A student under 13 creating a personal ChatGPT account violates COPPA — the district cannot authorise this use.

Key Citation: 15 U.S.C. §§ 6501–6506; 16 CFR Part 312.

🇦🇪 UAE PDPL — Personal Data Protection Law (UAE)

What it is: Federal Decree-Law No. 45 of 2021 on Personal Data Protection — UAE's comprehensive data protection legislation, effective September 2022. Establishes rights for data subjects and obligations for data controllers and processors.

AI Policy Implication: Schools processing student personal data via AI tools must: have a legal basis for processing (consent or legitimate interest); notify students/parents of data use; honour data subject rights (access, correction, erasure); report certain breaches to the UAE Personal Data Protection Office within 72 hours; ensure AI vendors comply with UAE data localisation requirements where applicable.

Key Citation: Federal Decree-Law No. 45/2021; Executive Regulations. Schools in ADEK jurisdiction: also see ADEK School AI Framework. KHDA jurisdiction: see KHDA AI Guidelines for Schools.

🌐 CIPA — Children's Internet Protection Act (US)

What it is: Federal law requiring schools receiving E-rate funding to implement Internet Safety Policies and technology protection measures filtering inappropriate content for minors.

AI Policy Implication: The Internet Safety Policy must address AI-generated content. AI tools generating harmful, violent, or sexually explicit content must be blocked or restricted. Generative AI content filters need review as tools evolve.

Key Citation: 47 U.S.C. § 254(h).

🎓 ISTE Standards for Educators & Students

What it is: The International Society for Technology in Education's framework for effective technology integration in K–12 education.

AI Policy Alignment: ISTE Educator Standard 1a (Learner) supports ongoing AI literacy PD. Standard 4b (Collaborator) supports the stakeholder inclusion model. Student Standard 2a (Digital Citizen) is the foundation for student AI use policy and disclosure requirements.

🌍 UNESCO AI Competency Frameworks

What it is: UNESCO has published two frameworks: the AI Competency Framework for Students and the AI Competency Framework for Teachers (both 2024). These provide internationally recognised progressions for AI literacy.

AI Policy Alignment: The student framework identifies five competency domains: (1) Human-Centred AI Mindset; (2) Ethics of AI; (3) AI Foundations; (4) AI Techniques; (5) AI Literacy in Practice. These map onto the grade-band permission matrix. The UNESCO Recommendation on the Ethics of AI (2021) provides the ethical framework underlying all policy provisions.

📊 Bloom's Taxonomy — AI in Learning Design

Application to AI Policy: The permission matrix should reflect Bloom's Taxonomy — where AI is appropriate depends on the cognitive level of the task. At lower levels (Remember, Understand), AI assistance reduces the cognitive work that builds foundational knowledge. At higher levels (Analyse, Evaluate, Create), AI can serve as a thinking partner extending student capacity.

Design Principle: AI should not replace the thinking that builds the skill. Restrict AI use most heavily on tasks developing foundational knowledge; permit AI use with disclosure on higher-order thinking tasks where AI extends rather than replaces student capacity.
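
As a sketch of how this design principle might be operationalised, the following Python snippet (illustrative names and defaults, not the programme's official matrix) encodes a default AI-use rule per Bloom level, falling back to restricted when a task's level is unrecognised:

```python
# Hedged sketch: a default permission per Bloom level. Districts would tune
# these values per grade band and subject; names here are illustrative only.
BLOOM_AI_MATRIX = {
    "remember":   "restricted",                 # AI would replace foundational recall practice
    "understand": "restricted",
    "apply":      "permitted_with_disclosure",
    "analyse":    "permitted_with_disclosure",  # AI as a disclosed thinking partner
    "evaluate":   "permitted_with_disclosure",
    "create":     "permitted_with_disclosure",
}

def ai_permission(bloom_level: str) -> str:
    """Look up the default AI-use rule for a task's cognitive level."""
    # Unknown levels default to "restricted" so gaps fail safe.
    return BLOOM_AI_MATRIX.get(bloom_level.lower(), "restricted")
```

In practice a real matrix would key on grade band as well as cognitive level; the point of the sketch is the fail-safe default and the explicit disclosure states.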

♿ UDL — Universal Design for Learning

What it is: A framework for designing instructional goals, methods, materials, and assessments that work for all students. Based on three principles: Multiple Means of Engagement, Multiple Means of Representation, and Multiple Means of Action & Expression.

AI Policy Alignment: AI tools can serve as powerful UDL implementation mechanisms — text-to-speech, translation, writing scaffolds, alternative presentation formats. But AI tools must themselves be accessible (WCAG 2.1 AA) to serve this function. The equity provisions in §8 directly implement UDL principles by requiring accessible tool approval.

📐 OECD AI Principles (2019, Updated 2024)

What it is: The Organisation for Economic Co-operation and Development's internationally recognised principles for trustworthy AI, adopted by 46 countries; the UAE has aligned its national AI governance with them.

Five Principles: (1) AI should benefit people and the planet; (2) AI systems should respect human-centred values and fairness; (3) AI must be transparent and explainable; (4) AI systems must be robust, secure, and safe; (5) organisations deploying AI must be accountable. These principles directly inform the ethical review in Phase 4 and the prohibited uses in §9.

🏫 CoSN Student Data Privacy Initiative

What it is: The Consortium for School Networking (CoSN) provides K–12 IT leaders with frameworks, tools, and resources for student data privacy governance, including model DPA language. The related Student Data Privacy Consortium (SDPC), run by the Access 4 Learning (A4L) Community, maintains a national repository of executed DPAs.

Resource: Before negotiating DPAs with AI vendors, check the SDPC Marketplace (marketplace.sdpc.org) — your state may already have a model or executed DPA with the vendor that the school can adopt directly, saving significant time and legal cost.


UAE & International AI Policy Framework

Comprehensive guidance for K–12 schools operating in the UAE and international contexts. Aligns with UAE National AI Strategy 2031, ADEK, KHDA, UAE PDPL, and international best practice.

Regulatory Bodies: ADEK · KHDA · UAE MoE
Law: UAE PDPL No. 45/2021
Strategy: UAE AI Strategy 2031
For International Schools in the UAE

Where UAE law and international guidance differ, UAE law takes precedence for schools operating in the UAE. This framework is designed to satisfy both UAE regulatory requirements and international best practice simultaneously where possible. Schools should consult their DPO and legal advisor before adoption.

🇦🇪 UAE Regulatory Framework for Schools

Regulation / Framework | Issuing Authority | Key AI Policy Implications
UAE National AI Strategy 2031 | UAE Government (Ministry of State for AI) | Mandates AI integration across the education sector; positions the UAE as a global AI leader; schools should align policy with national strategy objectives; evidence of responsible AI adoption supports strategic goals
UAE PDPL (Federal Decree-Law No. 45/2021) | UAE Federal Government | Governs all personal data processing, including student data; requires a legal basis, data subject rights, and breach notification within 72 hours; stricter than GDPR in some provisions; DPOs must be appointed for certain organisations
ADEK School AI Framework | Abu Dhabi Department of Education & Knowledge | ADEK-licensed schools must comply; guidance on approved AI tools, teacher AI literacy requirements, and student data protection; ADEK approval may be required for significant AI tool adoption
KHDA AI Guidelines for Schools | Knowledge and Human Development Authority (Dubai) | KHDA-regulated schools must align AI governance with the KHDA inspection framework; the AI policy may be reviewed during KHDA school inspections; specific guidance on academic integrity and AI
UAE Cybercrime Law (Federal Decree-Law No. 34/2021, replacing Federal Law No. 5/2012) | UAE Federal Government | Governs unauthorised access, data theft, and online harm — relevant for AI tool security provisions and prohibited-uses sections; deepfake creation may constitute a criminal offence
UAE Child Rights Law (Federal Law No. 3/2016) | UAE Federal Government | Protects children from harm, exploitation, and privacy violations; the AI policy must ensure no AI tool is used to harm, surveil, or exploit students; safeguarding provisions must be robust
UAE Digital Wellbeing Policies | Telecommunications & Digital Government Regulatory Authority (TDRA) | Guidelines for responsible digital use, including AI; screen time, content safety, and cyberbullying provisions — the AI policy should reference the digital wellbeing framework

🌍 International Framework Alignment

Framework | Issuing Body | Relevance for UAE/International Schools
UNESCO Recommendation on the Ethics of AI (2021) | UNESCO | Endorsed by 193 member states, including the UAE; provides an ethical framework for AI policy; core values include human rights, inclusion, transparency, accountability, and environmental sustainability
OECD AI Principles (2019/2024) | OECD | The UAE has aligned with the OECD AI Principles; the five principles on trustworthy AI are internationally recognised benchmarks — useful for board presentations and policy justification
IB Academic Integrity Policy (AI guidance) | International Baccalaureate Organization | Critical for IB schools: the IB has issued specific guidance on AI use in assessments; DP/MYP/PYP policies must align; IB evaluations may review school AI governance; updated frequently
Cambridge Assessment International Education (CAIE) | Cambridge University Press & Assessment | CAIE has updated its academic integrity regulations to address AI; schools must ensure student AI disclosure aligns with CAIE regulations; AI use in Cambridge assessments is strictly regulated
ISTE AI in Education Standards | International Society for Technology in Education | Internationally applicable; useful for PD alignment and policy justification; ISTE Educator Standards are relevant for staff AI competency requirements
ISO/IEC 42001 (AI Management Systems) | ISO/IEC | International standard for AI management systems; advanced schools and multi-campus organisations may align AI governance with ISO/IEC 42001; demonstrates institutional commitment to AI quality
EU AI Act (2024) | European Union | Does not directly apply to UAE schools, but EU-based AI vendors serving UAE schools must comply; AI tools classified as "high-risk" under the EU AI Act (e.g., systems evaluating learning outcomes or proctoring exams) face strict requirements; purchasers should ask vendors about EU AI Act compliance status

UAE-Specific Policy Provisions

Additional Provisions Required for UAE Schools

The following provisions should be added to or strengthened in the standard 11-section policy for UAE-operating schools:

Data Localisation

UAE law may require certain categories of data to be stored within UAE borders or in countries with adequate data protection. AI tool procurement must verify data storage location. Tools storing student data outside UAE/approved jurisdictions may require additional data processing agreements or may be ineligible for approval.

Cultural Appropriateness Review

AI tools approved for student use in UAE schools must be reviewed for cultural appropriateness — including alignment with UAE values, Islamic principles where relevant, and the multicultural nature of UAE school communities. The vendor vetting checklist should include a cultural appropriateness criterion.

Arabic Language Support

For schools serving Arabic-speaking students and families: the AI policy should be available in Arabic; AI tools used with Arabic-speaking students should have verified Arabic language capability; parent communications about AI should be accessible in Arabic.

Regulatory Authority Notification

ADEK-licensed schools should check whether significant AI tool adoption requires notification to or approval by ADEK prior to implementation. KHDA schools should verify whether AI governance is included in KHDA inspection criteria and ensure documentation is prepared for inspection.
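
The UAE-specific vetting criteria above can be captured in a simple record that flags outstanding items before a tool is approved. A hedged Python sketch, with illustrative field names and an assumed approved-jurisdiction list that your DPO and legal advisor would actually define:

```python
from dataclasses import dataclass
from typing import List

# Illustrative placeholder only: the real approved-jurisdiction list is a
# legal determination made by the school's DPO/legal advisor.
APPROVED_JURISDICTIONS = {"UAE"}

@dataclass
class VendorVettingRecord:
    """UAE-specific vetting status for one AI vendor (illustrative fields)."""
    vendor: str
    data_storage_country: str
    dpa_signed: bool
    cultural_review_passed: bool
    arabic_support_verified: bool

    def uae_vetting_issues(self) -> List[str]:
        """Return outstanding UAE-specific items blocking approval."""
        issues = []
        if self.data_storage_country not in APPROVED_JURISDICTIONS:
            issues.append("data stored outside approved jurisdictions: extra DPA or rejection")
        if not self.dpa_signed:
            issues.append("no executed data processing agreement")
        if not self.cultural_review_passed:
            issues.append("cultural appropriateness review outstanding")
        if not self.arabic_support_verified:
            issues.append("Arabic language capability not verified")
        return issues
```

In this sketch, a tool would move forward to the approved-tool registry only once the list of outstanding issues is empty.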

UAE AI Policy Implementation Checklist — Additional Items

Additional UAE Requirement | Status | Owner
☐ Policy reviewed against UAE PDPL requirements by qualified DPO/legal advisor
☐ Policy submitted to ADEK/KHDA for review (if required by licence conditions)
☐ Arabic translation of policy and parent communications completed
☐ Vendor vetting checklist updated with UAE data localisation requirements
☐ Cultural appropriateness review added to vetting process
☐ IB/Cambridge/awarding body AI policies reviewed and integrated