K–12 AI Policy Training Programme
A complete professional development programme guiding district and school teams from initial AI landscape understanding through policy creation, adoption, implementation, and sustained governance. Aligned with UAE National AI Strategy 2031, UNESCO, ISTE, FERPA, COPPA, and PDPL.
8-Phase Journey to AI Policy Excellence
The 5 Pillars of a Comprehensive K–12 AI Policy
Welcome participants. Begin by asking: "What is the #1 AI challenge your school is currently dealing with?" Capture answers on a whiteboard. These become your cohort's anchor case studies throughout all 8 phases. Estimated full-programme time: 16–20 hours across 8 sessions, or compressed into a 2-day intensive workshop.
Understanding the AI Landscape
Build shared vocabulary and foundational understanding before policy work begins. Every participant must start here.
📋 Phase Overview & Learning Objectives
90 minutes (can split into two 45-min sessions)
All district/school leadership, teachers, and key stakeholders
AI Landscape handout, taxonomy chart, discussion prompts
ISTE Educator Standards 1a, 2a; Bloom's Taxonomy L1–L2; UAE Digital Strategy
- Define AI and distinguish it from related technologies (automation, data analytics)
- Identify the 5 categories of AI tools used in K–12 education
- Understand why 85% of districts do not have an AI policy — and why that creates risk
- Recognise "Shadow AI" as a data privacy and equity crisis
- Articulate the 5 pillars of a comprehensive AI policy
- Assess your school's current AI policy readiness level (1–5 scale)
Section 1: Why This Matters Right Now
Generative AI tools entered mainstream student use in late 2022. By 2024, 51% of K–12 students reported using AI for schoolwork. Yet only 15% of school districts had adopted a formal AI policy. This gap creates legal exposure, equity harm, and academic integrity chaos. In the UAE, the National AI Strategy 2031 mandates AI integration — making governance even more urgent.
The urgency is real: FERPA and COPPA were written before generative AI existed. In UAE contexts, the Personal Data Protection Law (Federal Decree-Law No. 45/2021) similarly predates most generative AI tools. When a student submits an essay draft to an AI tool, that draft may contain personally identifiable information. Without a policy that names specific approved tools, schools have no legal basis for these interactions.
Beyond legal risk, the equity problem is severe. Students with home internet access and devices can use AI tools freely. Students without home access cannot. Without a policy that explicitly addresses equitable AI access, AI becomes another vector for widening the achievement gap.
Section 2: Defining AI for K–12 Contexts
Artificial Intelligence refers to computer systems that perform tasks typically requiring human intelligence — including language understanding, visual recognition, decision-making, and content generation. For K–12 policy purposes: AI is any software system that uses machine learning, pattern recognition, or generative modelling to produce outputs that adapt based on data.
What AI is NOT (for policy purposes):
- Simple calculators, spell-checkers, or grammar checkers using fixed rules
- Basic scheduling or administrative software without learning components
- Standard search engines (though AI-enhanced search is in scope)
Section 3: Shadow AI — The Hidden Crisis
Shadow AI refers to the use of AI tools by staff and students that has not been reviewed, approved, or even acknowledged by school leadership. When a teacher asks ChatGPT to generate quiz questions and pastes them into their LMS — without approval, without a DPA, without student notification — that is Shadow AI. It is happening in your school right now.
Shadow AI risk inventory — common K–12 scenarios:
- Teachers using AI writing assistants to draft parent communications
- Students submitting assignments to AI tools for feedback (FERPA/PDPL concern)
- Counsellors using AI chatbots to help draft student recommendations
- Administrators using AI for contract review (employment law concern)
- Students under 13 creating accounts on generative AI platforms (COPPA violation)
- IT staff using AI security tools without notifying HR or legal
Section 4: Policy Readiness Self-Assessment
| Level | Description | What This Means |
|---|---|---|
| Level 1 | No awareness | Leadership doesn't know AI tools are being used in classrooms |
| Level 2 | Informal use | Staff use AI tools personally; no school discussion has occurred |
| Level 3 | Fragmented | Individual teachers have informal AI norms; no school policy |
| Level 4 | Policy draft | A policy draft exists but has not been formally adopted |
| Level 5 | Implemented | A formal AI policy exists, is communicated, and is actively enforced |
Most schools are at Level 2–3. This programme will take you to Level 5.
AI Tool Taxonomy — 5 Categories for K–12 Education
🎤 Facilitator Prompts — Phase 0
Opening question (5 min): "Without looking anything up — raise your hand if you believe a student in your school used an AI tool for schoolwork in the past week. Now keep your hand up if your school has a policy governing that use." The visual gap is your opening.
Shadow AI discussion: "Think about your own practice. Did you use any AI tool in the last month for something work-related? Was that tool on an approved list? Did you sign a DPA for it? This is not a guilt exercise — it is an honesty exercise."
Closing reflection: "What is the one thing from Phase 0 that most changes how you think about AI in your school? We'll return to these answers at the end of the programme."
🔬 Activity 0-A: AI Landscape Audit — What's in Your School?
A raw, preliminary inventory of AI tools in use — the foundation for the formal Phase 1 audit. This is your school's first honest look at the scope of the challenge.
🧠 Phase 0 Knowledge Check
1. Which of these is NOT within the definition of AI for K–12 policy purposes?
2. What is "Shadow AI"?
📖 Key Vocabulary — Phase 0
- Artificial Intelligence (AI): Computer systems performing tasks typically requiring human intelligence: language understanding, visual recognition, decision-making, and content generation.
- Generative AI: AI systems that create new content (text, images, audio, video, code) in response to prompts. Examples: ChatGPT, Claude, Gemini, DALL-E.
- Machine Learning: A subset of AI in which systems learn from data to improve performance without being explicitly programmed for each task.
- Shadow AI: AI tool use by staff or students not reviewed, approved, or acknowledged by school leadership. Creates legal and equity risks.
- Data Privacy Agreement (DPA): A contract between a school and a vendor specifying how student data will be collected, stored, used, and protected. Required for all tools accessing student data.
- AI Literacy: The ability to understand, use, evaluate, and critically assess AI systems and their outputs. Essential for both educators and students.
- Adaptive Learning: Educational software using AI to personalise instruction, adjusting content, pacing, and difficulty based on each student's performance patterns.
- AI Equity Gap: The difference in AI access, skills, and outcomes between student populations, often along existing lines of race, income, and disability status.
⚠️ Common Pitfalls — Phase 0
PITFALL: Skipping Phase 0 to "get to the real work." The vocabulary established here is the foundation for every subsequent phase. Teams that skip shared definitions spend hours later arguing about what counts as AI. Phase 0 vocabulary is policy infrastructure.
PITFALL: Treating AI literacy as optional for non-instructional staff. Your HR director using an AI resume-screening tool, your facilities manager using predictive maintenance software — these employees are also covered by your AI policy.
PITFALL: Approaching AI policy from a place of fear. Schools that begin with "we need to stop students from using AI" consistently produce unenforceable prohibition policies. Start with "we need to govern AI in a way that serves learning and protects students."
PITFALL: Assuming your current acceptable use policy covers AI. General technology AUPs were written for email and web browsing. They do not address generative AI, vendor data privacy, or academic integrity questions unique to AI.
✅ Phase 0 Completion Checkpoint
Check each item as your team completes it. All items must be checked to advance.
Conducting a Comprehensive AI Audit
Systematically inventory all AI tools in use across the school — approved and unapproved — and assess current policy gaps against legal requirements.
Section 1: Why You Must Audit Before You Draft
Many schools begin writing an AI policy based on what they think is happening with AI. This produces policies that prohibit tools nobody uses while ignoring tools everybody uses. Before writing a single word of policy, you must know the ground truth: what tools are actually in use, by whom, under what conditions, with what student data exposure.
The audit has three dimensions: (1) Tool inventory — what AI tools exist, whether sanctioned or not; (2) Legal exposure mapping — which tools create FERPA, COPPA, CIPA, ADA, or PDPL compliance questions; (3) Stakeholder awareness assessment — what staff, parents, and students know and believe about AI use.
Section 2: Legal Framework Review
| Law / Regulation | What It Governs | Key AI Policy Implication | Context |
|---|---|---|---|
| FERPA | Student education records | Student data submitted to AI tools may constitute education records — vendors must operate under school official exception with DPA | 🇺🇸 US / International schools |
| COPPA | Online data from children under 13 | AI tools collecting data from K–8 students require verifiable parental or school consent; up to $51,744 per violation | 🇺🇸 US / International schools |
| CIPA | Internet content filtering for E-rate schools | Internet Safety Policy must address AI-generated content; AI tools generating content must be addressed | 🇺🇸 US |
| UAE PDPL | Personal data protection in UAE | Federal Decree-Law No. 45/2021 — all student personal data processed by AI tools requires legal basis; data localisation requirements apply | 🇦🇪 UAE |
| ADA / Section 504 | Accessibility for students with disabilities | AI tools used in instruction must be accessible; AI cannot screen students in discriminatory ways | 🇺🇸 US / International |
| State/Emirate Privacy Laws | Varies by jurisdiction — many stricter than federal | ADEK/KHDA guidelines in UAE; 40+ US states have student privacy laws; several have specific AI provisions | All jurisdictions |
Section 3: The Audit Instrument — Four Survey Populations
Staff Survey (key questions):
- What AI tools do you currently use for professional work?
- Which of these tools do students interact with directly?
- Have you reviewed the privacy policy of any AI tool before using it for work?
- Do you know if your school has a Data Privacy Agreement with any AI tool you use?
- Have you received professional development on AI use or AI policy?
- Do students in your class use AI tools for assignments? Is this required, permitted, or without your knowledge?
Parent/Guardian Survey (key questions):
- Are you aware that your child's school uses AI-powered educational software?
- Has your child told you about using AI tools for schoolwork at home?
- Do you have concerns about student data being shared with AI companies?
- Should students be allowed to use AI writing tools for homework? For assessments?
Student Survey (age-appropriate, Grade 6+):
- Have you used an AI tool to help with a school assignment in the past month?
- Did your teacher know you were using AI? Was it required, allowed, or your own choice?
- Do you know if your school has rules about AI use?
Section 4: Policy Gap Analysis — 10 Critical Areas
| # | Policy Area | Gap Assessment |
|---|---|---|
| 1 | AI tool approval process and vendor vetting | |
| 2 | Student data privacy requirements for AI vendors | |
| 3 | Student AI use permissions by grade level | |
| 4 | Academic integrity and AI disclosure requirements | |
| 5 | Staff AI use guidelines and professional standards | |
| 6 | Prohibited AI uses and absolute restrictions | |
| 7 | Equity provisions and universal access requirements | |
| 8 | Violation reporting and enforcement procedures | |
| 9 | Annual review and update process | |
| 10 | Special education and IEP/SEN accommodations for AI use |
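For teams that prefer to track the gap analysis in a script or spreadsheet rather than on paper, a small Python sketch can tally the 10 areas. The area names follow the table above; the "covered"/"partial"/"missing" status labels are an assumption for illustration, not part of the audit instrument itself.

```python
# Hypothetical sketch: tallying the Phase 1 policy gap analysis.
# Status labels ("covered" / "partial" / "missing") are assumptions.

from collections import Counter

GAP_AREAS = [
    "AI tool approval process and vendor vetting",
    "Student data privacy requirements for AI vendors",
    "Student AI use permissions by grade level",
    "Academic integrity and AI disclosure requirements",
    "Staff AI use guidelines and professional standards",
    "Prohibited AI uses and absolute restrictions",
    "Equity provisions and universal access requirements",
    "Violation reporting and enforcement procedures",
    "Annual review and update process",
    "Special education and IEP/SEN accommodations for AI use",
]

VALID_STATUSES = {"covered", "partial", "missing"}

def summarise_gaps(assessment: dict[str, str]) -> Counter:
    """Count how many of the 10 areas are covered, partial, or missing."""
    unassessed = [a for a in GAP_AREAS if a not in assessment]
    if unassessed:
        raise ValueError(f"Unassessed areas: {unassessed}")
    unknown = set(assessment.values()) - VALID_STATUSES
    if unknown:
        raise ValueError(f"Unknown status labels: {unknown}")
    return Counter(assessment.values())

# Example: a school that has addressed only data privacy, and only partially.
example = {area: "missing" for area in GAP_AREAS}
example["Student data privacy requirements for AI vendors"] = "partial"
print(summarise_gaps(example))  # Counter({'missing': 9, 'partial': 1})
```

The validation step matters: an audit that silently skips one of the 10 areas produces exactly the kind of blind spot the gap analysis exists to eliminate.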
🔬 Activity 1-A: AI Audit Sprint
✅ Phase 1 Completion Checkpoint
Building Your Policy Team
Assemble the diverse, inclusive committee that will drive authentic, legally defensible, and educationally sound AI policy creation.
Section 1: Why Inclusive Policymaking Produces Better Policy
The most common failure mode for school AI policies is being written by a small group — usually IT leadership and the principal — without meaningful input from teachers, students, parents, or community members. These policies are technically drafted but practically ignored: teachers don't follow rules they had no hand in creating.
Inclusive policymaking is not just ethically preferable — it produces better outcomes. Research consistently shows that policies developed with authentic stakeholder input have higher fidelity of implementation, more sustainable buy-in, and greater resilience when challenged.
Section 2: The 9-Stakeholder Committee
| Role | Why Essential | Key Contribution | Time Commitment |
|---|---|---|---|
| AI Policy Coordinator / AI Lead | Drives the process; owns ongoing governance | Project management, research, draft writing | Significant — primary responsibility |
| Technology Director / CTO | Technical feasibility; vendor relationships; security | Tool registry, DPA tracking, technical provisions | High (all sessions + implementation) |
| Curriculum / Deputy Principal | Instructional alignment; academic integrity lens | Grade-band permissions, assessment guidance | High (all sessions) |
| SEN / SENCO / Special Ed Director | IEP/504/SEN implications; accessibility | Equity provisions, accommodation language | Medium (key sessions + review) |
| Teacher Representatives (2) | Ground-level practice reality; faculty trust | Practical feasibility, classroom implications | Medium (all sessions) |
| Parent/Community Rep | Community trust; student perspective from home | Family communication provisions, consent language | Medium (key sessions) |
| Student Representative (Gr. 8+) | Student voice; peer credibility | Student use reality, peer culture insights | Medium (key sessions) |
| Legal Counsel / DPO | Legal compliance verification; data protection | FERPA/COPPA/PDPL review, liability language | Lower (review sessions + final approval) |
| Building Principal / Vice Principal | Implementation reality; enforcement capacity | Enforcement procedures, building-level provisions | Medium (key sessions) |
Section 3: Four Input Formats for Authentic Engagement
Anonymous surveys capture honest opinions staff won't share in meetings. Deploy before Phase 1 and again after the policy draft for comparison.
Open meetings for community input on draft provisions. One for staff, one for families, one student-centred. Record questions; respond in writing within 5 days.
Small-group deep dives (6–8 participants) on high-stakes provisions: academic integrity, student data privacy, IEP/SEN accommodation language.
30-day formal comment window after draft release. Accept written comments by email, paper, and in-person. Document and respond to all substantive comments.
Section 4: Handling Resistance
| Resistance Type | What It Sounds Like | Effective Response |
|---|---|---|
| Fear-based | "AI is dangerous — we should ban it all." | Acknowledge the concern; redirect to evidence: blanket technology bans have a poor track record in schools. Ask: "What specifically worries you? Let's write a policy that addresses that." |
| Minimising | "AI is just a tool — we don't need a special policy." | Share the Shadow AI legal analysis. Ask: "Does our current policy address data protection requirements for AI vendors? If not, what happens if there's a data breach?" |
| Overconfidence | "Our tech AUP already covers this." | Do a live gap analysis: pull up the existing AUP and walk through the 10 gaps from Phase 1. Name the specific provisions missing. |
| Turf concerns | "Teachers should decide this, not IT." | Reframe: "This policy affects everyone — that's why everyone is at the table. Your role is specifically to contribute X." |
✅ Phase 2 Completion Checkpoint
Drafting Your AI Policy
Build the complete 11-section AI policy document with all required legal provisions, grade-band permission matrices, and implementation infrastructure.
Section 1: The 11-Section Policy Architecture
A legally sound, operationally complete K–12 AI policy requires eleven sections. Each section addresses a distinct domain of governance. Missing sections create gaps that will be exploited — by vendors, by students, by legal challengers, or by staff seeking justification for unauthorised tool use.
The 11-Section AI Policy Architecture
Section 2: Grade-Band AI Permission Matrix
| Grade Band | Generative AI (Text) | AI Research Tools | Adaptive Learning | AI Writing Feedback | AI Image Generation | Disclosure Required |
|---|---|---|---|---|---|---|
| K–2 | 🔴 Not permitted | 🔴 Not permitted | ✅ Teacher-directed only | 🔴 Not permitted | 🔴 Not permitted | Teacher notifies parents |
| 3–5 | 🔴 Not permitted | 🟡 With supervision | ✅ Approved tools only | 🟡 Teacher-supervised | 🔴 Not permitted | Written teacher disclosure |
| 6–8 | 🟡 When teacher specifies | ✅ With disclosure | ✅ Approved tools | 🟡 With disclosure | 🟡 Approved tools only | Student AI Disclosure Form |
| 9–12 | 🟡 Per assignment | ✅ With full disclosure | ✅ Approved tools | ✅ With disclosure | 🟡 Per assignment, with disclosure | Full AI Contribution Statement |
| Staff | ✅ Approved tools; no confidential student data | ✅ Approved tools | ✅ Approved tools | ✅ Approved tools | ✅ With copyright review | Professional disclosure per context |
🔴 = Not permitted | 🟡 = Conditional — specific conditions apply | ✅ = Generally permitted with standard requirements
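Operationally, the permission matrix is a lookup table with a default-deny rule. A minimal Python sketch shows one way an IT team might encode it; the band keys, category names, and status strings here are illustrative assumptions, not official policy values.

```python
# Minimal sketch of the grade-band permission matrix as a lookup table.
# Status labels mirror the legend: "not_permitted" (🔴), "conditional" (🟡),
# "permitted" (✅). Keys and category names are illustrative assumptions.

PERMISSIONS = {
    ("K-2",   "generative_text"):     "not_permitted",
    ("K-2",   "adaptive_learning"):   "permitted",    # teacher-directed only
    ("3-5",   "ai_research"):         "conditional",  # with supervision
    ("6-8",   "generative_text"):     "conditional",  # when teacher specifies
    ("9-12",  "ai_writing_feedback"): "permitted",    # with disclosure
    ("staff", "generative_text"):     "permitted",    # no confidential student data
}

def check_permission(grade_band: str, category: str) -> str:
    """Default-deny: any band/category pair not explicitly listed is not permitted."""
    return PERMISSIONS.get((grade_band, category), "not_permitted")

print(check_permission("6-8", "generative_text"))  # conditional
print(check_permission("K-2", "generative_text"))  # not_permitted
```

The default-deny design mirrors the policy's posture: a tool category and grade band absent from the matrix is treated as not permitted until it has been formally reviewed and added.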
Section 3: Key Prohibited Uses — Absolute Restrictions
- Facial recognition of students — no AI system may use facial recognition to identify, track, or monitor students without explicit consent and board approval
- AI-generated disciplinary decisions — no AI system may be the sole or primary basis for a disciplinary action; human review is required for all student discipline
- AI-generated IEP/SEN decisions — AI tools may assist in data gathering but may never be the basis for special education eligibility, placement, or IEP goal decisions
- Student biometric data collection — keystroke dynamics, eye tracking, emotional recognition without explicit consent
- Commercial AI platforms without DPA for students under 13 — no generative AI platform collecting personal information from students under 13 may be used without an executed Data Privacy Agreement
- AI-generated evaluations submitted as educator judgement — staff may not submit AI-generated evaluation text as their own professional assessment without disclosure
Section 4: Academic Integrity — AI Disclosure Standards
The disclosure requirement shifts the question from "Did you use AI?" to "How did you use AI?" — presupposing transparency rather than guilt.
Sample AI Disclosure Statement (Grade 9–12):
"This work was completed with the following AI assistance: [Tool name]. I used it for: [specific purpose]. The core ideas, analysis, and conclusions are my own. All AI-generated content has been reviewed, verified, and either cited or paraphrased. The total AI contribution to this work is approximately [%]."
Sample AI Disclosure Statement (Grade 6–8):
"I used [tool name] to help with this assignment. I used it for [purpose]. The rest of this work is my own thinking and writing."
✅ Phase 3 Completion Checkpoint
Legal, Ethical & Equity Review
Verify the policy draft is legally defensible, ethically grounded, and genuinely equitable before moving to board adoption.
Section 1: Legal Compliance Review — 5 Key Areas
Area 1: FERPA / PDPL Compliance
Every provision involving student data must be reviewed. Key questions: (1) Does this provision allow a vendor to access student education records? If yes, is the vendor operating under the "school official" exception with a DPA specifying legitimate educational interest? (2) Does the data privacy section require annual DPA review and include a data breach notification timeline consistent with applicable law? (UAE PDPL requires breach notification within 72 hours.)
Area 2: COPPA Compliance
For any provision authorising AI tool use for students under 13: Has the vendor agreed to operate under school authority rather than collecting consent directly from parents? Does the DPA template include language prohibiting secondary commercial use of student data?
Area 3: Section 504/ADA/SEND Accessibility
Review equity provisions: Do all approved AI tools meet WCAG 2.1 AA accessibility standards? Does the policy prohibit tools not reviewed for screen reader compatibility? Does the IEP/SEN provision specify that AI accommodations may be embedded in a student's plan?
Area 4: UAE PDPL Specific Requirements
UAE Federal Decree-Law No. 45/2021 on Personal Data Protection requires: explicit consent or other legal basis for processing personal data; notification to UAE Personal Data Protection Office of certain breaches within 72 hours; data subject rights including access, correction, and erasure. Schools in ADEK jurisdiction must comply with ADEK School AI Framework; KHDA jurisdiction schools must comply with KHDA AI Guidelines.
Area 5: Academic Awarding Body Requirements
For secondary schools: verify that AI disclosure and academic integrity provisions comply with applicable awarding body guidance — IB Organisation, Cambridge CAIE, Pearson, AQA, and UAE MoE requirements. These change frequently and must be checked annually during the policy review cycle.
Section 2: Ethical Review — 4 Dimensions
| Ethical Dimension | Review Questions | Policy Sections to Check |
|---|---|---|
| Fairness & Non-Discrimination | Does the policy prevent AI from producing biased outcomes by race, gender, disability, or ELL/language status? Does it require bias monitoring for AI assessment tools? | §8 (Equity), §3 (Vetting), §9 (Prohibited Uses) |
| Transparency | Do students and parents know when AI is being used in decisions affecting them? Does disclosure meet transparency obligations? | §6 (Academic Integrity), §4 (Student Permissions) |
| Human Oversight | Does the policy ensure humans remain in the loop for consequential decisions? Are AI recommendations always subject to human review before action? | §10 (Violations), §9 (Prohibited Uses) |
| Student Dignity | Does the policy protect student dignity in AI interactions? Is there a prohibition on AI systems that surveil, score, or rank students in ways that could stigmatise? | §9 (Prohibited Uses), §8 (Equity) |
Section 3: Equity Impact Assessment
For every major provision, ask: "Does this provision work the same way for a student with an IEP, an English Language Learner, and a student whose family cannot afford internet access at home?" If any of these students would be disadvantaged, the provision needs revision.
| Student Population | Key Risk Points | Recommended Provisions |
|---|---|---|
| Students with IEPs/SEN Plans | AI tools may not accommodate assistive technology; AI assessment may misinterpret disability-related work patterns | Require accessibility review in vetting; authorise AI accommodations in IEPs; prohibit AI-only disciplinary decisions |
| English Language Learners / EAL | AI translation tools create academic integrity grey areas; AI may misclassify ELL/EAL writing quality | Explicitly permit approved AI translation tools; require ELL/EAL-specific disclosure guidance; prohibit AI as sole reclassification basis |
| Low-Income / No Home Access | At-home AI access creates homework equity gap; can't practise with approved tools outside school | Ensure AI-based assignments are completable on school devices during school hours; consider device/hotspot lending programmes |
| International / Multilingual Students (UAE) | Culturally responsive AI content may be lacking; UAE cultural norms must be respected in AI outputs | Require cultural appropriateness review in vetting; align with UAE Digital Wellbeing principles; flag tools with UAE-specific content restrictions |
✅ Phase 4 Completion Checkpoint
Approval & Board Adoption
Navigate the board presentation, public comment process, and formal vote with confidence. A well-prepared presentation is the difference between adoption and delay.
Section 1: The Board Presentation — 5-Part Structure
1. The problem: Frame urgency with data from the Phase 1 audit: AI tool usage rates, legal exposure from unvetted tools, peer school comparisons. Never start with the policy — start with the problem it solves.
2. The process: Document the inclusive process: audit completed, 9-member committee, stakeholder surveys, town halls, legal review. Boards are most sceptical of policies that appear rushed. Show your work.
3. The policy: Walk through the 11 sections at a high level. Focus on: the permission matrix, prohibited uses, vendor vetting, and the enforcement framework. Do not read the policy — synthesise it.
4. The support: Share survey results: staff support rates, parent questions addressed, student voice incorporated. Quote specific feedback. This is often the most persuasive part.
5. The plan: Board members vote for policies they believe will be implemented. Show the 90-day launch plan, PD schedule, and tool registry timeline. Implementation readiness is often the deciding factor.
Section 2: Anticipated Board Questions — Prepared Responses
| Question | Prepared Response |
|---|---|
| "How do we enforce this?" | "Enforcement operates at 4 levels — teacher intervention for minor instances up to principal/superintendent review for serious violations. Consistent enforcement is actually easier with a written policy than without one." |
| "Will this put us behind other schools?" | "Schools that govern AI thoughtfully are ahead of those that simply prohibit or ignore it. Our policy enables responsible use while protecting students — which is both educationally sound and legally defensible." |
| "What about teachers uncomfortable with AI?" | "Phase 6 includes a 3-tier PD programme. Tier 1 is required of all staff — explaining the policy and what it means for each role. No teacher is expected to be an AI expert." |
| "What if the technology changes?" | "Section 11 includes an annual review process and triggers for mid-year updates. The policy is designed to be a living document — our governance infrastructure will keep updating it." |
| "Did parents have input?" | "Yes — we conducted parent surveys, hosted town halls, and maintained a 30-day public comment period. The policy reflects parent priorities: data privacy, grade-appropriate permissions, and transparency." |
✅ Phase 5 Completion Checkpoint
Implementation & Professional Development
Launch the adopted policy with a comprehensive 3-tier PD programme, student instruction, family communication, and the operational infrastructure needed for sustained implementation.
Section 1: The 90-Day Launch Window
The first 90 days after policy adoption are the most critical: patterns established in this window tend to persist. Structure:
- Weeks 1–2: All-staff communication, IT tool registry published, help desk activated.
- Weeks 3–6: Tier 1 PD for all staff, student instruction begins, parent FAQ published.
- Weeks 7–12: Tier 2 teacher PD, Tier 3 leader PD, first compliance check.
Section 2: The 3-Tier Professional Development Model
3-Tier PD Model — Matching Depth to Role
Section 3: Student-Facing Instruction by Grade Band
| Grade Band | Duration | Key Concepts | Suggested Activities |
|---|---|---|---|
| K–2 | 2 × 30 min | What computers can/can't do; "Doing your own thinking"; when to ask an adult | Read-aloud discussion; "Did a person think this?" sorting activity |
| 3–5 | 2 × 45 min | What AI is; approved vs. unapproved tools; why we tell the truth about helpers; basic data privacy | AI tool exploration (teacher-directed); "I used AI to help" practice statements |
| 6–8 | 3 × 50 min | AI categories; permission matrix; academic integrity; data privacy; AI bias introduction | Policy jigsaw; assignment analysis; AI output critique |
| 9–12 | 3 × 55 min | Full policy; AI as professional tool; university/employer AI policies; quality disclosure; critical evaluation | Policy analysis; professional comparison; AI-assisted essay with disclosure; mock integrity hearing |
Section 4: The AI Tool Registry
The AI Tool Registry is the operational heart of the data privacy pillar — the school's authoritative list of AI tools reviewed, approved, and cleared for student use. The registry must be published on the school website before the policy effective date. Without a functioning, public registry, the vendor vetting requirement is unenforceable.
Registry fields: Tool name · Vendor · AI category · Approved user groups · Approval date · DPA status and expiration · Date of last review · Approved use restrictions · Approving administrator. The AI Policy Coordinator owns the registry.
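The registry fields listed above map naturally onto a simple record type. A hypothetical Python sketch (field names adapted from the list; the tool, vendor, and dates are invented for illustration) shows how the AI Policy Coordinator might track DPA expirations programmatically rather than by manual calendar checks:

```python
# Illustrative sketch of the AI Tool Registry, with a helper that flags
# entries whose DPA has lapsed or is nearing expiration.
# All entry data below is hypothetical.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RegistryEntry:
    tool_name: str
    vendor: str
    ai_category: str
    approved_user_groups: list[str]
    approval_date: date
    dpa_expires: date
    last_review: date
    use_restrictions: str
    approving_admin: str

def dpa_alerts(registry: list[RegistryEntry], today: date,
               warn_days: int = 60) -> list[str]:
    """Return tool names whose DPA is expired or expires within warn_days."""
    horizon = today + timedelta(days=warn_days)
    return [e.tool_name for e in registry if e.dpa_expires <= horizon]

entry = RegistryEntry(
    tool_name="ExampleTutor",        # hypothetical tool
    vendor="Example EdTech Ltd",     # hypothetical vendor
    ai_category="adaptive_learning",
    approved_user_groups=["3-5", "6-8"],
    approval_date=date(2024, 9, 1),
    dpa_expires=date(2025, 9, 1),
    last_review=date(2025, 1, 15),
    use_restrictions="Approved classroom use only; not for assessments",
    approving_admin="AI Policy Coordinator",
)
print(dpa_alerts([entry], today=date(2025, 8, 15)))  # ['ExampleTutor']
```

A 60-day warning window gives the coordinator time to renegotiate a DPA before the tool must be pulled from the registry, which supports the quarterly "Registry Health" monitoring stream described in Phase 7.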
✅ Phase 6 Completion Checkpoint
Monitoring, Evaluation & Annual Review
Build the governance structures and review processes that keep the policy current, compliant, and effective as AI technology continues to evolve.
Section 1: Five Monitoring Streams
| Stream | What to Monitor | Frequency | Owner |
|---|---|---|---|
| Compliance Monitoring | Policy violation reports; enforcement actions taken; patterns by grade level or department | Monthly | Building principals → AI Policy Coordinator |
| Registry Health | DPA expiration dates; tools no longer in use; new tools requested; vendor changes | Quarterly | IT Director → AI Policy Coordinator |
| Staff Implementation | PD completion rates; help desk question volume and topics; staff survey on policy clarity | Quarterly | HR/PD Director → AI Policy Coordinator |
| Student AI Literacy | Academic integrity incident rates; student survey on policy understanding; disclosure compliance | Semester | Curriculum Director → AI Policy Coordinator |
| Legal Landscape | UAE/national regulatory updates; peer school policy changes; emerging legal cases | Monthly | Legal Counsel/DPO → AI Policy Coordinator |
Section 2: Triggered Review Criteria
- Trigger 1: A major new AI platform achieves mainstream student adoption (20%+ student use within 90 days)
- Trigger 2: National/UAE legislation enacted or proposed creating new compliance requirements
- Trigger 3: A data breach or security incident involving an AI tool used by the school
- Trigger 4: A significant disciplinary incident or parent complaint revealing a policy gap
- Trigger 5: A peer school adopts a significantly different policy framework gaining regulatory recognition
Section 3: Annual Review Process — Three Phases
Phase A (Data Gathering): Compile data from all 5 monitoring streams. Conduct annual surveys. Legal counsel reviews for regulatory changes. IT compiles the registry audit. Identify the top 5 policy gaps based on incident data.
Phase B (Committee Review): The full policy committee reconvenes. Reviews data from Phase A. Identifies provisions requiring revision. Drafts amendments. Circulates to all stakeholders for a 2-week comment period.
Phase C (Board Approval): Present annual findings and proposed amendments to the board. Obtain approval for substantive amendments. Publish the updated policy. Communicate changes to all stakeholders.
Every adopted version must be archived with: version number, adoption date, board resolution number, and summary of changes from prior version. All archived versions published on school website.
Section 4: National Standards Alignment
| Framework | Relevant Standards | Policy Sections Aligned |
|---|---|---|
| UAE AI Strategy 2031 | AI integration in education; responsible AI; digital transformation | §1, §4, §5, §8 |
| UNESCO AI Competency Framework | AI literacy; human oversight; ethical use; governance | §2, §6, §8, §10, §11 |
| ISTE Standards | Educator 1a (Learner); 4b (Collaborator); Student 1d (Empowered Learner) | §4, §5, §6, §8 |
| OECD AI Principles (2019/2024) | Human-centred values; transparency; robustness; accountability | §6, §7, §8, §9 |
| IB AI Academic Integrity | AI disclosure; citation requirements; assessment design | §6 (Academic Integrity) |
✅ Phase 7 Completion Checkpoint — Programme Completion
📚 12-Module Course Library
Governing AI in Schools: From Policy to Practice — a world-class professional development programme built on analysis of 60+ authoritative U.S. and international AI policy documents, including guidance from the U.S. Department of Education, TeachAI, Digital Promise, UNESCO, OECD, the UAE Ministry of Education, ADEK/KHDA, and 30+ State Departments of Education. Designed for K–12 education leaders, teachers, and school staff at all levels.
The AI Landscape in K–12 Education
Foundations · 90 min · All Audiences
Explain the key components of a school AI governance framework, including policy architecture, stakeholder roles, and core ethical principles, and articulate why each component is essential to responsible AI governance.
Topics Covered:
- Understanding AI types, tools, and terminology relevant to school settings
- Current state of AI adoption by students and educators (including 'shadow AI' use)
- National and international AI governance frameworks (US DoE, UNESCO, OECD, EU AI Act)
- UAE National AI Strategy 2031 and its implications for K–12 education
- The regulatory gap: why schools must develop context-specific governance
- Key statistics: 51% student AI use, 15% school policy adoption — the urgency
Ethical Foundations and Values in AI Governance
Ethics · 90 min · Leaders & Teachers
Analyse AI-related risks across multiple ethical dimensions — fairness, transparency, accountability, and human oversight — and apply structured ethical decision-making frameworks to real-world school AI governance scenarios.
Topics Covered:
- Identifying and articulating institutional values to underpin AI policy
- Core ethical tensions: equity, privacy, transparency, and innovation
- Algorithmic bias and its implications for student outcomes
- Ethical decision-making frameworks for school leadership
- UNESCO Ethics of AI Recommendation (2021) — practical applications
- UAE cultural values and their integration into AI governance frameworks
- The CARE framework: Critical, Accountable, Responsible, Equitable AI
Legal and Regulatory Foundations
Legal Compliance · 120 min · Leaders & Legal Staff
Evaluate AI tools against legal compliance standards, data protection requirements, and equity considerations, applying a structured risk assessment framework to make informed procurement and deployment decisions.
Topics Covered:
- US federal law: FERPA, COPPA, IDEA, ADA, and Section 504
- UAE PDPL (Federal Decree-Law No. 45/2021) — practical compliance
- ADEK School AI Framework and KHDA AI Guidelines for Schools
- State/emirate-level privacy laws and jurisdiction-specific requirements
- Data Processing Agreements (DPAs) — drafting and vendor compliance vetting
- EU AI Act — implications for international schools and vendors
- CIPA requirements and internet safety policy updates
- Children's online safety: KCSIE (UK), UAE Child Rights Law, COPPA equivalents
AI Policy Architecture and Design
Policy Design · 120 min · Policy Committee
Construct a complete, school-specific AI policy that addresses permitted and prohibited uses, data privacy obligations, academic integrity requirements, safeguarding responsibilities, and stakeholder accountability.
Topics Covered:
- Policy vs. guidance: structure, purpose, and the document hierarchy
- The 11 essential sections of a world-class school AI policy
- Tiered permission models: universally permitted, pre-approved, and prohibited AI uses
- Writing enforceable, values-aligned policy language
- Grade-band differentiation: K–2, 3–5, 6–8, 9–12
- Integrating BYOD (Bring Your Own Device) policies with AI governance
- Version control and policy lifecycle management
Risk Management and Safeguarding
Safety · 120 min · Leaders, IT & Safeguarding
Design risk identification, mitigation, and monitoring processes that protect students from AI-related harms — including data breaches, algorithmic bias, academic dishonesty, and safeguarding risks — within their specific institutional context.
Topics Covered:
- AI risk taxonomy: data privacy, algorithmic bias, cybersecurity, misinformation, deepfakes, student wellbeing
- The AI Tool Risk Assessment Framework: seven dimensions of evaluation
- Safeguarding protocols and child protection in AI contexts
- Deepfakes, AI-generated misinformation, and online safety implications
- Procurement and approval processes for AI tools
- Incident response planning for AI-related breaches
- UAE-specific safeguarding: UAE Child Rights Law, digital wellbeing guidelines
Stakeholder Roles and Responsibilities
Governance · 90 min · All Leadership
Develop differentiated stakeholder communication and engagement strategies that build shared understanding and sustained community support for responsible AI governance across staff, students, parents, and governing bodies.
Topics Covered:
- Governance roles: AI Lead, Data Protection Officer, Governing Body, Senior Leadership
- Responsibilities of teaching and non-teaching staff
- Student and parent/guardian responsibilities and communication frameworks
- Community engagement strategies for AI policy consultation
- The 9-stakeholder committee model: composition and facilitation
- Communicating AI policy changes to multilingual communities (UAE context)
AI and Pedagogy — Integrating AI into Teaching & Learning
Pedagogy · 90 min · Teachers & Curriculum
Apply evidence-based pedagogical frameworks to integrate AI tools purposefully into curriculum design, assessment strategies, and differentiated instruction — while maintaining the primacy of human learning relationships.
Topics Covered:
- Bloom's Taxonomy and AI: matching tool use to cognitive level
- SAMR framework applied to AI integration (Substitution → Redefinition)
- UDL (Universal Design for Learning) and AI tools for accessibility
- AI-enhanced formative assessment: possibilities and pitfalls
- Designing assignments that promote genuine student thinking with AI as a partner
- AI tutoring systems: evidence base, benefits, and equity considerations
- Professional practice: AI for lesson planning, feedback, and differentiation
Academic Integrity in the Age of AI
Integrity · 90 min · Teachers, Curriculum & Leaders
Evaluate student work for appropriate AI use, implement fair and consistent disclosure requirements, and respond to suspected AI misconduct using a principled, evidence-based investigation process.
Topics Covered:
- Tiered AI use frameworks and assignment classification systems
- AI declaration and disclosure requirements by grade band
- Responding to suspected AI misconduct: investigation and outcome procedures
- Awarding body compliance: IB Organisation, Cambridge CAIE, Pearson, AQA, UAE MoE
- AI detection tools: capabilities, limitations, and fairness concerns
- Designing assessment tasks that are resistant to simple AI substitution
- The difference between AI-assisted and AI-generated work — policy implications
Implementation, Change Management & Community Engagement
Change Management · 120 min · Leaders
Apply evidence-based change management strategies to lead the implementation of an AI policy, including managing resistance, building staff capacity, and engaging diverse community stakeholders.
Topics Covered:
- The School AI Readiness Audit: five dimensions of institutional readiness
- Change management strategies: Kotter's 8-Step Model applied to AI policy
- Phased implementation models: 30/60/90-day plans
- Managing the three types of resistance: fear-based, minimising, and turf-protective
- Building teacher AI champions and peer learning communities
- Parent and community communication: messaging, FAQs, and town halls
- Multilingual communication strategies for international/UAE school communities
Equity, Inclusion & Culturally Responsive AI Governance
Equity · 90 min · All Staff
Identify and address equity gaps in AI policy design and implementation, ensuring that governance frameworks actively protect and support students with disabilities, English Language Learners, and students from under-resourced communities.
Topics Covered:
- Algorithmic bias: sources, types, and educational impact
- The digital equity gap in AI access: home vs. school, urban vs. rural
- IEP/SEN and AI accommodations: legal requirements and best practice
- EAL/ELL students and AI translation tools: integrity and support
- Culturally responsive AI governance: UAE multicultural community context
- Gender bias in AI educational tools: identification and mitigation
- Universal Design for Learning as an equity framework for AI governance
Monitoring, Review & Continuous Improvement
Governance · 90 min · AI Policy Coordinator & Leadership
Design and implement a sustainable AI governance cycle — including monitoring streams, triggered review protocols, and annual review processes — that maintains policy currency as AI technology and regulation continue to evolve rapidly.
Topics Covered:
- Policy review cycles: annual review, incident-triggered review, and technology-driven updates
- The 5 monitoring streams: compliance, registry health, staff implementation, student literacy, legal landscape
- Building a culture of responsible AI use across the school community
- Key performance indicators for AI policy effectiveness
- Policy version control: archiving, communicating changes, maintaining trust
- Succession planning: institutional knowledge and governance resilience
Capstone — Policy Presentation & Governing Body Simulation
Capstone · 180 min · Full Cohort
Present a complete, school-specific AI Policy package to a simulated governing body, incorporating all elements from Modules 1–11: legal compliance, ethical grounding, equity provisions, academic integrity framework, implementation plan, and monitoring structures.
Capstone Components:
- Workshop 1: The AI Governance Audit — Where Are We Now? (30 min)
- Workshop 2: Policy Sprint — Drafting Your School AI Policy (60 min)
- Workshop 3: Guidance Suite Development and Governing Body Simulation (90 min)
Deliverables for Certification Portfolio:
- Complete school AI policy document (11+ sections)
- Staff AI guidance document
- Student AI charter (age-appropriate versions)
- Parent communication letter
- AI declaration form template
- AI Tool Risk Assessment for 2 tools from your school's current inventory
- AI Policy Review Cycle Tracker (12-month calendar)
Certification pathways — Foundation Certificate: complete Modules 1–4 + Capstone Part 1. Practitioner Certificate: complete all 12 modules + Capstone Parts 1–2. Advanced Certificate (AI Lead): Practitioner + Capstone Part 3 + school-specific policy portfolio submission.
✅ School AI Policy Self-Assessment Checklist
A comprehensive self-assessment tool for school leaders to evaluate the current state of AI governance. Check each item currently in place. Leave blank where action is still needed.
Section 1: AI Policy Foundations
1.1 Policy Existence and Scope
Our school has a written AI Policy that has been formally adopted by senior leadership.
The policy covers all staff, students, and governors / board members.
The policy clearly defines what constitutes 'AI tools' within our school context.
The policy distinguishes between permitted, conditional, and prohibited AI uses.
e.g., permitted for lesson planning, conditional for student work with disclosure, prohibited for summative assessment without declaration
The policy is version-controlled and includes a review date.
The policy has been mapped against current national/regional regulatory requirements.
e.g., UAE AI Strategy, ADEK/KHDA guidance, UK DfE guidance, EU AI Act
1.2 Policy Communication and Accessibility
The AI Policy is published on the school website or intranet and is easily accessible.
All staff received a copy of the policy and signed to confirm they have read it.
The policy is available in the home languages of our main parent communities.
Students have been given an age-appropriate summary or guide to the policy.
Section 2: Governance Structure
2.1 Roles, Responsibilities, and Accountability
A named Senior Leader holds overall accountability for AI governance at our school.
A designated AI Lead / Digital Lead coordinates day-to-day AI policy implementation.
Staff roles related to AI oversight are documented in job descriptions or role profiles.
There is a clear process for staff to report AI-related concerns or incidents.
Governors / board members receive at least annual updates on AI governance.
Section 3: Data Privacy and Legal Compliance
We have a current Data Protection Officer (DPO) or Data Protection lead.
All AI tools used with students have a Data Privacy Agreement (DPA) or equivalent in place.
Our AI tool approval process includes a formal privacy impact assessment.
We maintain a live register of approved AI tools with DPA expiry dates and review status.
Our school has a documented data breach response plan that includes AI-related incidents.
For UAE schools: policy aligns with UAE PDPL (Federal Decree-Law No. 45/2021) and applicable ADEK/KHDA requirements.
Section 4: Academic Integrity
Our school has a clear AI use disclosure/declaration requirement for student work.
Teachers have been trained to identify potential AI misuse in student submissions.
Our AI policy is aligned with the requirements of relevant awarding bodies (IB, Cambridge, UAE MoE, etc.).
There is a documented, fair process for investigating suspected AI misconduct.
Students receive age-appropriate AI literacy education as part of the curriculum.
Section 5: Staff Capacity and Professional Development
All staff have completed mandatory AI policy induction / orientation training.
Teaching staff have access to AI-specific professional development opportunities.
Our PD programme for AI is differentiated by role (teacher, leader, support staff).
There is an internal AI champion or community of practice supporting staff.
Section 6: Equity and Safeguarding
Our AI vetting process includes accessibility review (WCAG 2.1 AA or equivalent) for all student-facing tools.
AI accommodations for students with IEPs/SEN plans have been considered and documented.
Our policy explicitly prohibits AI-generated disciplinary decisions without human review.
Safeguarding risks of AI tools (deepfakes, inappropriate content generation) are explicitly addressed in the policy.
Our policy considers the needs of EAL/ELL students and multilingual families (particularly relevant for international/UAE schools).
Section 7: Monitoring and Continuous Improvement
Our school has a scheduled annual AI policy review process with a named owner.
We monitor compliance with the AI policy and track incidents systematically.
AI-related incidents or near-misses are reviewed and used to improve policy and practice.
We track regulatory changes that could affect our AI policy and respond proactively.
Overall Governance Score
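The checklist does not prescribe a scoring formula. One simple approach — sketched in Python under the assumption that each section is scored as the percentage of items checked, with the section names and booleans below purely illustrative:

```python
# Hypothetical checklist results: each section is a list of booleans
# (True = item currently in place, False = action still needed).
sections = {
    "1. AI Policy Foundations":     [True, True, True, False, True, False],
    "2. Governance Structure":      [True, False, False, True, True],
    "3. Data Privacy & Compliance": [True, True, False, False, True, True],
}

def section_score(items):
    """Percentage of checked items in one section, rounded to a whole number."""
    return round(100 * sum(items) / len(items))

# Overall score: checked items across all sections over total items.
overall = round(100 * sum(sum(s) for s in sections.values())
                / sum(len(s) for s in sections.values()))

for name, items in sections.items():
    print(f"{name}: {section_score(items)}%")
print(f"Overall governance score: {overall}%")
```

Per-section percentages are often more actionable than the overall figure, since they point directly at the weakest governance area.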
📋 Master Templates (10)
Ready-to-use templates for every phase of AI policy creation and governance. Interactive forms you can fill in and print directly from this platform.
Template 1: AI Tool Inventory
Phase 1 · Complete school AI tool audit instrument
| Tool Name | Vendor | Category | Users | Student Data? | DPA Status | Risk |
|---|---|---|---|---|---|---|
| ChatGPT | OpenAI | Generative AI | Teachers (informal) | Potentially | 🔴 None | 🔴 High |
| Khan Academy / Khanmigo | Khan Academy | Adaptive Learning | Students Gr. 3–12 | Yes | 🟡 Under review | 🟡 Medium |
| DreamBox Learning | Discovery Education | Adaptive Learning | Students Gr. K–8 | Yes | ✅ Current | ✅ Low |
| [Add your tools here] | | | | | | |
Categories: Generative AI / Adaptive Learning / Analytics & Assessment / Safety & Security / Admin & Operations
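An inventory audit like Template 1 can also be screened programmatically. A minimal sketch in Python using the three example rows above — the status strings and field layout are illustrative, not a required schema:

```python
# Each row: (tool, vendor, handles_student_data, dpa_status)
# "handles_student_data" is True for ChatGPT here because "Potentially"
# should be treated as Yes for risk-screening purposes.
inventory = [
    ("ChatGPT", "OpenAI", True, "none"),
    ("Khan Academy / Khanmigo", "Khan Academy", True, "under review"),
    ("DreamBox Learning", "Discovery Education", True, "current"),
]

def flag_tools(rows):
    """Return tools that touch student data without a current DPA."""
    return [tool for tool, _vendor, student_data, dpa in rows
            if student_data and dpa != "current"]

print("Needs DPA action:", flag_tools(inventory))
```

A screen like this does not replace the full vetting checklist (Template 5); it simply surfaces which tools need DPA action first.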
Template 2: Policy Committee Charter
Phase 2 · Formal charter for the AI Policy Development Committee
Template 3: AI Policy Document (11-Section Scaffold)
Phase 3 · Complete policy document framework
[SCHOOL / TRUST NAME]
ARTIFICIAL INTELLIGENCE (AI) POLICY
Version: ___ | Adopted: ___ | Effective: ___ | Review Date: ___
§1. PURPOSE & SCOPE. This policy establishes the framework for responsible, equitable, and legally compliant use of Artificial Intelligence (AI) tools throughout [School Name]. It applies to all school employees, students, contractors, and volunteers who use AI tools in connection with school educational programmes or on school technology infrastructure.
§2. DEFINITIONS. "Artificial Intelligence" means computer systems performing tasks typically requiring human intelligence, including language generation, image creation, personalised learning adaptation, and predictive analytics. Five categories recognised: [Generative AI / Adaptive Learning / Analytics & Assessment / Safety & Security / Admin & Operations].
§3. APPROVED TOOL REGISTRY. The School shall maintain an AI Tool Registry — a publicly accessible list of AI tools approved for use with students. No AI tool collecting student data may be used for instruction without prior approval and execution of a Data Privacy Agreement (DPA) or equivalent data processing agreement...
[ Sections §4 through §11 follow — complete the full document using the 11-section architecture from Phase 3 ]
Template 4: AI Disclosure Statement — Student Templates
Phase 3/6 · Age-differentiated disclosure forms for student use
Grade 9–12: Full AI Contribution Statement
"The core ideas, analysis, and conclusions in this work are my own. All AI-generated content has been reviewed, verified, and either cited or paraphrased."
Grade 6–8: Simple AI Use Statement
Template 5: Vendor Vetting Checklist (30-Point)
Phase 3 / Ongoing · Complete checklist for AI tool approval decisions
| # | Vetting Criterion | Status | Notes |
|---|---|---|---|
| 1 | Vendor has a Student Data Privacy Agreement (DPA) template available | | |
| 2 | DPA explicitly prohibits sale or secondary commercial use of student data | | |
| 3 | COPPA compliance explicitly addressed for users under 13 | | |
| 4 | UAE PDPL / data protection compliance confirmed for UAE schools | | |
| 5 | Tool meets WCAG 2.1 AA accessibility standards | | |
| 6 | Data storage location disclosed; appropriate for jurisdiction | | |
| 7 | Vendor provides data breach notification within 72 hours | | |
| — | [ 23 additional criteria in full checklist — educational efficacy, bias monitoring, content safety, vendor financial stability, support quality, contract terms ] | | |
Template 6: Staff AI Guidance Document
Phase 6 · Practical guidance for all staff on AI use
[SCHOOL NAME] — STAFF AI GUIDANCE
What staff MAY do with AI tools:
- Use tools on the Approved AI Tool Registry for lesson planning, resource creation, and professional tasks
- Use approved AI tools to provide feedback on student work — but must review and verify all AI-generated feedback before sharing
- Explore AI tools personally for professional learning — but must not use unapproved tools with student data
What staff must NOT do:
- Submit AI-generated text as their own professional writing (reports, recommendations, IEP notes) without disclosure
- Use any AI tool with student personal data that is not on the Approved Registry
- Allow students to use AI tools not listed in the Grade-Band Permission Matrix
When in doubt: Check the AI Tool Registry first. Then contact the AI Policy Coordinator.
Template 7: Parent Communication Letter
Phase 5/6 · School-to-home communication about AI policy
[Date]
Dear Parents and Guardians,
I am writing to inform you that [School Name] has formally adopted an Artificial Intelligence (AI) Policy, effective [date]. This policy governs how AI tools are used by our staff and students to support learning while protecting student data and upholding academic integrity.
Key points for parents:
- Our school maintains a public register of all AI tools approved for student use
- Students are taught how to use AI responsibly, including how to disclose AI use in their work
- All AI tools used with your child's personal data are covered by Data Privacy Agreements
- Students in Grades K–5 have significant restrictions on generative AI use; older students have age-appropriate guided permissions
The full AI Policy is available on our school website at [URL]. If you have any questions, please contact [AI Policy Coordinator name] at [email].
Yours sincerely,
[Principal/Head Teacher Name]
Template 8: Student AI Charter (Secondary — Grades 6–12)
Phase 6 · Student-facing summary of AI rights and responsibilities
My AI Rights & Responsibilities at [School Name]
✅ I have the right to:
- Know which AI tools are approved for use in my school
- Ask my teacher how AI is being used in my learning
- Know what data my school collects and how it is protected
- Use approved AI tools to support my learning as specified by my teacher
📋 My responsibilities:
- Only use AI tools that are on the school's approved list
- Always disclose when I have used AI to help with my work
- Never submit AI-generated content as entirely my own without declaration
- Never enter another student's personal information into an AI tool
- Report any concerns about AI use to a trusted adult
Signed: _________________ | Date: _________ | Class: _________
Template 9: New AI Tool Request Form
Phase 6 / Ongoing · Staff submission form for requesting new tool approval
⏱ The AI Policy Coordinator will respond within 15 business days. Do not begin using this tool with students until you receive written approval.
Template 10: Annual Review Calendar
Phase 7 / Ongoing · 12-month governance schedule
| Month | Governance Activity | Owner | Status |
|---|---|---|---|
| August | Pre-year registry audit; DPA expiration review; staff PD calendar confirmed | IT Director + AI Lead | |
| September | All-staff Tier 1 policy orientation; student AI instruction begins | HR + Principals | |
| October | Tier 2 teacher PD completion deadline; first compliance check | PD Director | |
| December | Semester compliance review; registry update; incident report review | AI Policy Lead | |
| January | Legal landscape review; regulatory update check; mid-year DPA renewals | Legal/DPO + IT | |
| March | Annual stakeholder surveys deployed; data collection begins | AI Policy Lead | |
| April | Annual review committee meeting; survey analysis; draft amendments | Full Committee | |
| May | Public comment period (30 days); board presentation scheduled | Principal + Board | |
| June | Board adopts amendments; updated policy published; year-end registry audit | Board + AI Lead | |
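The August entry above includes a DPA expiration review. A minimal sketch of that check in Python, assuming the registry records each tool's DPA expiry date — the tool names, dates, and 90-day window here are illustrative:

```python
from datetime import date

# Hypothetical registry extract: tool -> DPA expiry date.
dpa_expiries = {
    "DreamBox Learning": date(2026, 1, 15),
    "Khanmigo": date(2025, 9, 1),
}

def expiring_within(expiries, today, days=90):
    """Tools whose DPA has lapsed or will lapse within the review window."""
    return sorted(tool for tool, expiry in expiries.items()
                  if (expiry - today).days <= days)

# Run as part of the August pre-year registry audit.
print(expiring_within(dpa_expiries, date(2025, 8, 1)))
```

Because already-expired agreements produce a negative day count, they are always flagged, which matches the intent of a pre-year audit.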
⚖️ Legal Frameworks & Pedagogical Resources
The legal and pedagogical foundations for K–12 AI policy. Know these frameworks to build a defensible, educationally sound policy.
⚖️ FERPA — Family Educational Rights and Privacy Act (US)
What it is: Federal law protecting the privacy of student education records. Applies to all schools receiving federal funding. Gives parents (and students 18+) the right to inspect, amend, and control disclosure of education records.
AI Policy Implication: When a student's work is submitted to an AI tool, that work may constitute an "education record." Vendors accessing student education records must operate under the "school official" exception (DPA required). AI systems using education records for automated decision-making must include human review.
Key Citation: 34 CFR Part 99. Policy sections: §3 (Registry), §7 (Data Privacy).
👶 COPPA — Children's Online Privacy Protection Act (US)
What it is: Federal law restricting online collection of personal information from children under 13. Enforced by the FTC. Violations can result in civil penalties up to $51,744 per violation.
AI Policy Implication: Generative AI platforms collecting data from students under 13 require either verifiable parental consent or school authority consent under §312.5(b)(1). The school consent mechanism requires a DPA strictly limiting vendor data use to educational purposes. A student under 13 creating a personal ChatGPT account violates COPPA — the district cannot authorise that use.
Key Citation: 15 U.S.C. §§ 6501–6506; 16 CFR Part 312.
🇦🇪 UAE PDPL — Personal Data Protection Law (UAE)
What it is: Federal Decree-Law No. 45 of 2021 on Personal Data Protection — UAE's comprehensive data protection legislation, issued in September 2021 and effective from 2 January 2022. Establishes rights for data subjects and obligations for data controllers and processors.
AI Policy Implication: Schools processing student personal data via AI tools must: have a legal basis for processing (consent or legitimate interest); notify students/parents of data use; honour data subject rights (access, correction, erasure); report certain breaches to the UAE Personal Data Protection Office within 72 hours; ensure AI vendors comply with UAE data localisation requirements where applicable.
Key Citation: Federal Decree-Law No. 45/2021; Executive Regulations. Schools in ADEK jurisdiction: also see ADEK School AI Framework. KHDA jurisdiction: see KHDA AI Guidelines for Schools.
🌐 CIPA — Children's Internet Protection Act (US)
What it is: Federal law requiring schools receiving E-rate funding to implement Internet Safety Policies and technology protection measures filtering inappropriate content for minors.
AI Policy Implication: The Internet Safety Policy must address AI-generated content. AI tools generating harmful, violent, or sexually explicit content must be blocked or restricted. Generative AI content filters need review as tools evolve.
Key Citation: 47 U.S.C. § 254(h).
🎓 ISTE Standards for Educators & Students
What it is: The International Society for Technology in Education's framework for effective technology integration in K–12 education.
AI Policy Alignment: ISTE Educator Standard 1a (Learner) supports ongoing AI literacy PD. Standard 4b (Collaborator) supports the stakeholder inclusion model. Student Standard 2a (Digital Citizen) is the foundation for student AI use policy and disclosure requirements.
🌍 UNESCO AI Competency Frameworks
What it is: UNESCO has published two frameworks: the AI Competency Framework for Students (2024) and the AI Competency Framework for Teachers (2024). These provide internationally recognised progressions for AI literacy.
AI Policy Alignment: The student framework identifies five competency domains: (1) Human-Centred AI Mindset; (2) Ethics of AI; (3) AI Foundations; (4) AI Techniques; (5) AI Literacy in Practice. These map onto the grade-band permission matrix. The UNESCO Recommendation on the Ethics of AI (2021) provides the ethical framework underlying all policy provisions.
📊 Bloom's Taxonomy — AI in Learning Design
Application to AI Policy: The permission matrix should reflect Bloom's Taxonomy — where AI is appropriate depends on the cognitive level of the task. At lower levels (Remember, Understand), AI assistance reduces the cognitive work that builds foundational knowledge. At higher levels (Analyse, Evaluate, Create), AI can serve as a thinking partner extending student capacity.
Design Principle: AI should not replace the thinking that builds the skill. Restrict AI use most heavily on tasks developing foundational knowledge; permit AI use with disclosure on higher-order thinking tasks where AI extends rather than replaces student capacity.
♿ UDL — Universal Design for Learning
What it is: A framework for designing instructional goals, methods, materials, and assessments that work for all students. Based on three principles: Multiple Means of Representation, Action and Expression, and Engagement.
AI Policy Alignment: AI tools can serve as powerful UDL implementation mechanisms — text-to-speech, translation, writing scaffolds, alternative presentation formats. But AI tools must themselves be accessible (WCAG 2.1 AA) to serve this function. The equity provisions in §8 directly implement UDL principles by requiring accessible tool approval.
📐 OECD AI Principles (2019, Updated 2024)
What it is: The Organisation for Economic Co-operation and Development's internationally recognised principles for trustworthy AI, adhered to by 46 countries; the UAE has aligned its national AI approach with these principles.
Five Principles: (1) AI should benefit people and the planet; (2) AI systems should be designed for fairness, non-discrimination, and transparency; (3) AI must be transparent and explainable; (4) AI systems must be robust, secure, and safe; (5) Organisations deploying AI must be accountable. These principles directly inform the ethical review in Phase 4 and the prohibited uses in §9.
🏫 CoSN Student Data Privacy Initiative
What it is: The Consortium for School Networking (CoSN) provides K–12 IT leaders with frameworks, tools, and resources for student data privacy governance, including model DPA language and the Student Data Privacy Consortium (SDPC) — a national repository of executed DPAs.
Resource: Before negotiating DPAs with AI vendors, check the SDPC Marketplace (marketplace.sdpc.org) — your state may already have a model or executed DPA with the vendor that the school can adopt directly, saving significant time and legal cost.
UAE & International AI Policy Framework
Comprehensive guidance for K–12 schools operating in the UAE and international contexts. Aligns with UAE National AI Strategy 2031, ADEK, KHDA, UAE PDPL, and international best practice.
Where UAE law and international guidance differ, UAE law takes precedence for schools operating in the UAE. This framework is designed to satisfy both UAE regulatory requirements and international best practice simultaneously where possible. Schools should consult their DPO and legal advisor before adoption.
🇦🇪 UAE Regulatory Framework for Schools
| Regulation / Framework | Issuing Authority | Key AI Policy Implications |
|---|---|---|
| UAE National AI Strategy 2031 | UAE Government (Ministry of State for AI) | Mandates AI integration across education sector; positions UAE as global AI leader; schools should align policy with national strategy objectives; evidence of responsible AI adoption supports strategic goals |
| UAE PDPL (Federal Decree-Law No. 45/2021) | UAE Federal Government | Governs all personal data processing including student data; requires legal basis, data subject rights, breach notification within 72 hours; stricter than GDPR in some provisions; DPOs must be appointed for certain organisations |
| ADEK School AI Framework | Abu Dhabi Department of Education & Knowledge | ADEK-licensed schools must comply; guidance on approved AI tools, teacher AI literacy requirements, student data protection; ADEK approval may be required for significant AI tool adoption |
| KHDA AI Guidelines for Schools | Knowledge and Human Development Authority (Dubai) | KHDA-regulated schools must align AI governance with KHDA inspection framework; AI policy may be reviewed during KHDA school inspections; specific guidance on academic integrity and AI |
| UAE Cybercrime Law (Federal Decree-Law No. 34/2021, replacing Federal Law No. 5/2012) | UAE Federal Government | Governs unauthorised access, data theft, and online harm — relevant for AI tool security provisions and prohibited uses sections; deepfake creation may constitute a criminal offence |
| UAE Child Rights Law (Federal Law No. 3/2016) | UAE Federal Government | Protects children from harm, exploitation, and privacy violations; AI policy must ensure no AI tool is used to harm, surveil, or exploit students; safeguarding provisions must be robust |
| UAE Digital Wellbeing Policies | Telecommunications & Digital Government Regulatory Authority (TDRA) | Guidelines for responsible digital use including AI; screen time, content safety, and cyberbullying provisions — AI policy should reference digital wellbeing framework |
🌍 International Framework Alignment
| Framework | Issuing Body | Relevance for UAE/International Schools |
|---|---|---|
| UNESCO Recommendation on the Ethics of AI (2021) | UNESCO | Adopted by all 193 member states, including the UAE; provides an ethical framework for AI policy; built on core values of human rights and dignity, diversity and inclusiveness, and environmental flourishing, with principles including transparency and accountability |
| OECD AI Principles (2019/2024) | OECD | UAE aligned with OECD AI principles; five principles on trustworthy AI are internationally recognised benchmarks — useful for board presentations and policy justification |
| IB AI & Academic Integrity Guidance | International Baccalaureate (IB) | Critical for IB schools: the IB has issued specific guidance on AI use in assessments; DP/MYP/PYP policies must align; IB programme evaluation may review school AI governance; updated frequently |
| Cambridge Assessment International Education (CAIE) | Cambridge University Press & Assessment | CAIE has updated academic integrity regulations to address AI; schools must ensure student AI disclosure aligns with CAIE regulations; AI use in Cambridge assessments is strictly regulated |
| ISTE AI in Education Standards | International Society for Technology in Education | Internationally applicable; useful for PD alignment and policy justification; ISTE Educator standards relevant for staff AI competency requirements |
| ISO/IEC 42001 (AI Management Systems) | ISO / IEC | International standard for AI management systems; advanced schools and multi-campus organisations may align AI governance with ISO 42001; demonstrates institutional commitment to AI quality |
| EU AI Act (2024) | European Union | Does not directly apply to UAE schools, but EU-based AI vendors serving UAE schools must comply; education AI classified as "high-risk" under the EU AI Act (e.g., systems that evaluate learning outcomes or determine access to education) faces strict requirements, and emotion-recognition systems in education are prohibited outright; purchasers should ask vendors about EU AI Act compliance status |
UAE-Specific Policy Provisions
The following provisions should be added to or strengthened in the standard 11-section policy for UAE-operating schools:
Data Localisation
UAE law may require certain categories of data to be stored within UAE borders or in countries with adequate data protection. AI tool procurement must verify data storage location. Tools storing student data outside UAE/approved jurisdictions may require additional data processing agreements or may be ineligible for approval.
Cultural Appropriateness Review
AI tools approved for student use in UAE schools must be reviewed for cultural appropriateness — including alignment with UAE values, Islamic principles where relevant, and the multicultural nature of UAE school communities. The vendor vetting checklist should include a cultural appropriateness criterion.
Arabic Language Support
For schools serving Arabic-speaking students and families: the AI policy should be available in Arabic; AI tools used with Arabic-speaking students should have verified Arabic language capability; parent communications about AI should be accessible in Arabic.
Regulatory Authority Notification
ADEK-licensed schools should check whether significant AI tool adoption requires notification to or approval by ADEK prior to implementation. KHDA schools should verify whether AI governance is included in KHDA inspection criteria and ensure documentation is prepared for inspection.
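For teams that track vendor vetting electronically, the UAE-specific criteria above (data localisation, cultural appropriateness review, Arabic language support) can be captured as structured data so that approval gaps surface automatically. A minimal Python sketch follows; the tool name, field names, and the approved-jurisdiction list are illustrative assumptions, not drawn from ADEK/KHDA guidance:

```python
from dataclasses import dataclass

# Illustrative placeholder only: the real list of approved data-storage
# jurisdictions must come from your DPO/legal advisor, not this sketch.
APPROVED_JURISDICTIONS = {"AE"}  # ISO country codes

@dataclass
class VendorVettingRecord:
    tool_name: str
    data_storage_country: str      # ISO code reported by the vendor
    has_uae_dpa: bool              # signed UAE-compliant data processing agreement
    cultural_review_passed: bool   # cultural appropriateness review outcome
    arabic_supported: bool         # verified Arabic language capability

    def issues(self) -> list[str]:
        """Return the UAE-specific gaps that block approval."""
        problems = []
        if (self.data_storage_country not in APPROVED_JURISDICTIONS
                and not self.has_uae_dpa):
            problems.append("data stored outside approved jurisdictions without a DPA")
        if not self.cultural_review_passed:
            problems.append("cultural appropriateness review not passed")
        if not self.arabic_supported:
            problems.append("no verified Arabic language support")
        return problems

# Hypothetical vendor used for illustration
record = VendorVettingRecord("ExampleWriterAI", "US", has_uae_dpa=False,
                             cultural_review_passed=True, arabic_supported=False)
print(record.issues())
```

A tool passes this sketch's UAE screen only when `issues()` returns an empty list; anything returned should be resolved with the vendor before the tool reaches the approved list.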
UAE AI Policy Implementation Checklist — Additional Items
| Additional UAE Requirement | Status | Owner |
|---|---|---|
| Policy reviewed against UAE PDPL requirements by qualified DPO/legal advisor | | |
| Policy submitted to ADEK/KHDA for review (if required by licence conditions) | | |
| Arabic translation of policy and parent communications completed | | |
| Vendor vetting checklist updated with UAE data localisation requirements | | |
| Cultural appropriateness review added to vetting process | | |
| IB/Cambridge/awarding body AI policies reviewed and integrated | | |
Get in Touch
Reach out to our team — we're here to help your school navigate AI policy.
Schedule a 30-minute call with one of our education AI specialists to explore what's possible for your school.
Schedule Now →