AI Literacy for School Psychologists: A Practical Glossary

Written by Byron McClure
Updated over 2 months ago

You're in a district meeting. Someone mentions an AI tool for writing IEP goals. A colleague asks about data privacy. The vendor uses terms like "large language models" and "training data." Everyone nods along.

But here's the thing: Does anyone in that room actually understand what they're agreeing to?

Most of us became school psychologists to support students, not to decode technology. Yet AI-powered tools are entering our workflow. Without understanding what these systems actually do, we can't distinguish between real utility and marketing hype.

This glossary is a field guide: it translates jargon into plain language and connects abstract concepts back to your daily practice.

Table of Contents

The Basics

  • Artificial Intelligence (AI)

  • Algorithm

  • Automation

  • Generative AI

  • Machine Learning (ML)

How It Works

  • Chatbot

  • Context Window

  • Embedding

  • Inference

  • Large Language Model (LLM)

  • Natural Language Processing (NLP)

  • Prompt Engineering

  • Tokenization

  • Training Data

Safety, Ethics & Privacy

  • Bias and Fairness

  • Data Privacy

  • Data Security

  • Ethical AI

  • FERPA and AI

  • Hallucination

  • Human-in-the-Loop

  • PII (Personally Identifiable Information)

  • Transparency

Implementation

  • Model Fine-Tuning

  • Version Control


Who This Resource Is For

  • School Psychologists evaluating privacy risks of new tools.

  • Special Educators decoding vendor pitches.

  • School Leaders responsible for ethical technology choices.

  • Anyone designing professional development on AI literacy.


The Glossary

Artificial Intelligence (AI)

Definition: Computer systems designed to perform tasks that typically require human intelligence, such as understanding language, recognizing patterns, making decisions, or generating text.

  • Why it matters: AI tools are increasingly used for drafting IEP goals or analyzing data. Understanding their capabilities and limits ensures you keep professional judgment central.

  • Example: Using an AI writing assistant to draft a psychoeducational report, then verifying all recommendations based on clinical expertise.

Algorithm

Definition: A set of step-by-step instructions that tells a computer how to solve a problem or complete a task.

  • Why it matters: Algorithms power screening programs and suggestion tools. These follow programmed rules, meaning they have built-in limitations and biases you must recognize.

  • Example: A reading screener uses an algorithm to compare student scores against benchmarks to flag those needing support.
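
To make "step-by-step instructions" concrete, here is a minimal sketch of the kind of rule a reading screener might run. The benchmark numbers are invented for illustration, not real norms.

```python
# Minimal sketch of a rule-based screening algorithm.
# The benchmark scores are hypothetical, for illustration only.
BENCHMARKS = {3: 70, 4: 87, 5: 110}  # grade -> words correct per minute

def flag_for_support(grade: int, wcpm: int) -> bool:
    """Flag a student whose fluency score falls below the grade benchmark."""
    return wcpm < BENCHMARKS[grade]

print(flag_for_support(grade=4, wcpm=62))  # True: below the grade-4 benchmark
```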

Automation

Definition: Using technology to perform repetitive tasks without human intervention.

  • Why it matters: Automation handles admin tasks (scheduling, organizing), freeing you for direct service. However, clinical decision-making should never be fully automated.

  • Example: Setting up automated reminders for annual review meetings.
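
A minimal sketch of what that kind of automation looks like under the hood, using only Python's standard library. The 30-day lead time and the student records are illustrative.

```python
# Minimal sketch: compute reminder dates ahead of annual review meetings.
from datetime import date, timedelta

reviews = [("Student A", date(2025, 5, 14)), ("Student B", date(2025, 9, 2))]

for name, review_date in reviews:
    reminder = review_date - timedelta(days=30)  # illustrative lead time
    print(f"Remind case manager for {name} on {reminder}")
```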

Bias and Fairness

Definition: When AI systems produce results that systematically favor or disadvantage certain groups, often reflecting biases in their training data.

  • Why it matters: Biased tools can misidentify students or overlook those needing support. You must critically evaluate AI recommendations for equity issues.

  • Example: An AI behavior system trained on suburban data misinterpreting culturally typical behaviors in urban settings as problematic.
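
One practical habit is auditing how often a tool flags students across groups. The sketch below uses invented data to show the idea; a large gap between groups is a signal to investigate the tool, not the students.

```python
# Minimal sketch of an equity check: compare flag rates across groups.
# The flag results here are invented for illustration.
flags = {
    "group_a": [True, False, True, True, False, True],
    "group_b": [False, False, True, False, False, False],
}

for group, results in flags.items():
    rate = sum(results) / len(results)
    print(f"{group}: flagged {rate:.0%} of students")
```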

Chatbot

Definition: An AI program designed to have text-based conversations with users.

  • Why it matters: Schools use these for communication. You must understand their limits in handling sensitive situations and when human intervention is required.

  • Example: A website chatbot answering basic evaluation questions but routing crisis keywords immediately to a counselor.
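
A minimal sketch of the routing logic in that example. The keyword list is illustrative, not a validated crisis protocol.

```python
# Minimal sketch: route crisis language to a human, everything else to an FAQ.
CRISIS_KEYWORDS = {"hurt myself", "suicide", "unsafe at home"}

def route(message: str) -> str:
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return "escalate_to_counselor"
    return "answer_with_faq"

print(route("How do I request an evaluation?"))  # answer_with_faq
print(route("I feel unsafe at home"))            # escalate_to_counselor
```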

Context Window

Definition: The amount of text an AI system can "remember" or consider at one time when generating responses.

  • Why it matters: If you exceed the context window (e.g., a very long report), the system may "forget" earlier information, leading to inconsistent outputs.

  • Example: Pasting a 15-page evaluation into a tool may cause it to forget background history when summarizing recommendations at the end.
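
A minimal sketch of a pre-check you could run before pasting a long document. The 4-characters-per-token ratio is a common rough estimate, and the 8,000-token window is a hypothetical limit.

```python
# Minimal sketch: warn before a document likely exceeds a context window.
CONTEXT_WINDOW_TOKENS = 8_000  # hypothetical limit

def fits_in_window(text: str) -> bool:
    estimated_tokens = len(text) / 4  # rough rule of thumb, not exact
    return estimated_tokens <= CONTEXT_WINDOW_TOKENS

report = "Background history... " * 3_000
if not fits_in_window(report):
    print("Report likely exceeds the context window; summarize in sections.")
```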

Data Privacy

Definition: The protection of personal information from unauthorized access, use, or disclosure.

  • Why it matters: Before using any tool, you must verify it complies with privacy regulations and doesn't store student data to train public models.

  • Example: Never uploading identifiable information to a free, public tool like ChatGPT.

Data Security

Definition: The technical safeguards used to protect data from breaches or hacks.

  • Why it matters: Privacy promises mean nothing without security. Verify platforms use encryption and access controls.

  • Example: Choosing a tool with end-to-end encryption and multi-factor authentication.

Embedding

Definition: A mathematical representation converting words or concepts into numbers so AI can process similarities.

  • Why it matters: This powers features like searching past reports for similar cases (e.g., recognizing "anxious" and "worried" are related).

  • Example: An AI system recognizing that a "socially withdrawn" student shares characteristics with past cases of "peer relationship difficulties."
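
To see the mechanics, here is a toy sketch: once words are lists of numbers, "similarity" becomes a calculation (cosine similarity). Real embeddings have hundreds of dimensions produced by a trained model; the three-number vectors below are invented.

```python
# Minimal sketch of how embeddings let software compare meaning.
import math

vectors = {
    "anxious":  [0.9, 0.1, 0.3],  # toy values, not real embeddings
    "worried":  [0.8, 0.2, 0.3],
    "cheerful": [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine_similarity(vectors["anxious"], vectors["worried"]))   # high (~0.99)
print(cosine_similarity(vectors["anxious"], vectors["cheerful"]))  # low  (~0.27)
```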

Ethical AI

Definition: Designing and using AI in ways that align with moral principles, professional standards, and human dignity.

  • Why it matters: Your ethical obligations remain constant. Tools must support equity, maintain confidentiality, and strengthen (not replace) judgment.

  • Example: Rejecting a tool that makes diagnostic suggestions without transparent methodology.

FERPA and AI

Definition: The Family Educational Rights and Privacy Act, the federal law protecting student education records; it applies whenever an AI tool stores or processes that data.

  • Why it matters: Compliance requires signed vendor agreements, limits on data sharing, and protecting parental rights.

  • Example: Verifying a vendor has signed a FERPA agreement before using their note-taking tool for IEP meetings.

Generative AI

Definition: AI systems that create new content (text, images) based on patterns learned from examples.

  • Why it matters: These tools can draft reports or materials, but all outputs require review for accuracy and alignment with student needs.

  • Example: Using generative AI to draft accommodations, then editing them based on evidence-based practices.

Hallucination

Definition: When an AI generates information that sounds plausible but is factually incorrect or fabricated.

  • Why it matters: AI can confidently state false legal requirements or citations. You must verify everything against reliable sources.

  • Example: An AI tool citing a nonexistent research study to support an intervention recommendation.

Human-in-the-Loop

Definition: An approach where humans review and make final decisions on AI outputs.

  • Why it matters: Essential for liability and ethics. You remain responsible for all decisions and documentation.

  • Example: Using AI to identify behavior patterns, but conducting your own functional analysis before finalizing a BIP.
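
In software terms, human-in-the-loop is a gate: the AI output stays a draft until a named professional signs off. A minimal sketch, with illustrative structure:

```python
# Minimal sketch of a human-in-the-loop gate for AI drafts.
def finalize(ai_draft, reviewed_by=None):
    """Refuse to finalize any AI draft without a named human reviewer."""
    if reviewed_by is None:
        raise ValueError("AI draft cannot be finalized without human review.")
    return f"{ai_draft}\n\nReviewed and approved by: {reviewed_by}"

draft = "Suggested pattern: escape-maintained off-task behavior."
print(finalize(draft, reviewed_by="J. Rivera, School Psychologist"))
```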

Inference

Definition: The process where an AI applies learned patterns to make predictions on new inputs.

  • Why it matters: It helps you realize the tool doesn't "understand" the student; it is predicting based on probability and past training.

  • Example: A tool suggesting interventions based on similar cases without knowing the specific family dynamics of your student.
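
A minimal sketch of inference: already-learned weights applied to a new case. The weights and features are invented, but the mechanics, multiply and add, are the same at any scale.

```python
# Minimal sketch: inference applies fixed, learned weights to new input.
WEIGHTS = {"absences": 0.05, "failed_courses": 0.30}  # invented weights

def risk_score(student: dict) -> float:
    return sum(WEIGHTS[k] * student[k] for k in WEIGHTS)

new_student = {"absences": 12, "failed_courses": 1}
print(risk_score(new_student))  # 0.9 -- a score from past patterns, not understanding
```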

Large Language Model (LLM)

Definition: An AI system trained on vast amounts of text to understand and generate human-like language.

  • Why it matters: LLMs (like Claude or Gemini) power most writing tools. They lack clinical training and must not be used for diagnostic decisions.

  • Example: Using an LLM to smooth out the phrasing in a report, but not to determine the eligibility classification.

Machine Learning (ML)

Definition: AI where systems learn patterns from data rather than following explicit rules.

  • Why it matters: ML is used for risk prediction. Because it learns from historical data, it can perpetuate historical inequities if not monitored.

  • Example: An early warning system predicting dropout risk based on attendance and grade patterns.
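
The contrast with a hand-coded algorithm: here the cutoff is learned from historical records rather than programmed. The records below are invented, and note the caveat above: whatever inequities the history contains, the learned rule inherits.

```python
# Minimal sketch of "learning from data": pick the absence threshold that
# best separates past outcomes, instead of hand-coding a rule.
history = [(3, False), (5, False), (18, True), (25, True), (9, False), (22, True)]
# each record: (absences, dropped_out) -- invented for illustration

def accuracy(threshold: int) -> float:
    correct = sum((absences >= threshold) == dropped for absences, dropped in history)
    return correct / len(history)

best = max(range(31), key=accuracy)
print(f"Learned threshold: {best} absences")  # learned from data, not programmed
```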

Model Fine-Tuning

Definition: Taking a pre-trained AI and training it further on specialized data (e.g., education law).

  • Why it matters: "Fine-tuned" tools may be more accurate for school psychology tasks, but you must still ask what data was used.

  • Example: An AI writing assistant fine-tuned on IEP documents vs. a general-purpose marketing writer.

Natural Language Processing (NLP)

Definition: Technology enabling computers to understand and interpret human language.

  • Why it matters: NLP powers transcription and theme extraction. It saves time but may miss tone or cultural context.

  • Example: Reviewing an NLP transcript of a parent interview to ensure emotional undertones weren't missed.

PII (Personally Identifiable Information)

Definition: Any information that can identify a specific individual (name, date of birth, ID numbers, etc.).

  • Why it matters: Exposing PII to unsecured AI violates FERPA. You must de-identify data before using non-approved tools.

  • Example: Replacing a student's name with "Student A" and removing school names before asking an AI for editing help.
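
A minimal sketch of that de-identification step. A find-and-replace like this is a starting point, not a guarantee; always re-read the de-identified text yourself before sharing it. The names below are invented.

```python
# Minimal sketch: strip identifiers before pasting text into a non-approved tool.
replacements = {
    "Jordan Smith": "Student A",       # invented name
    "Lincoln Elementary": "[School]",  # invented school
    "04/12/2015": "[DOB]",
}

text = "Jordan Smith, DOB 04/12/2015, attends Lincoln Elementary."
for identifier, placeholder in replacements.items():
    text = text.replace(identifier, placeholder)

print(text)  # "Student A, DOB [DOB], attends [School]."
```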

Prompt Engineering

Definition: Crafting clear, specific instructions to get accurate responses from AI.

  • Why it matters: The quality of the output depends on the quality of your input.

  • Example: Instead of "Write a goal," prompting: "Draft a measurable IEP goal for a 4th grader reading at a 2nd-grade level, focusing on decoding."
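
The difference specificity makes, shown as a reusable template. The fields are illustrative; adapt them to your own documentation standards.

```python
# Minimal sketch: a vague prompt vs. a structured, reusable template.
vague_prompt = "Write a goal."

specific_prompt = (
    "Draft a measurable IEP goal for a {grade} grader reading at a "
    "{level} level, focusing on {skill}. Include a baseline, a target, "
    "and how progress will be measured."
).format(grade="4th", level="2nd-grade", skill="decoding")

print(specific_prompt)
```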

Tokenization

Definition: Breaking text down into smaller chunks (tokens) for the AI to process.

  • Why it matters: Tokenization explains why tools have input limits (token limits). You may need to split long reports into sections.

  • Example: Breaking an 8,000-token report into two sections because the tool has a 4,000-token limit.
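
A minimal sketch using a naive whitespace "tokenizer." Real tools split words into smaller sub-word pieces, so actual token counts run higher than word counts; the 4,000-token limit is the hypothetical one from the example.

```python
# Minimal sketch: break text into tokens, then chunk to fit a token limit.
TOKEN_LIMIT = 4_000  # hypothetical limit

def chunk(text: str, limit: int = TOKEN_LIMIT) -> list[str]:
    tokens = text.split()  # real tokenizers use sub-word pieces, not words
    return [" ".join(tokens[i:i + limit]) for i in range(0, len(tokens), limit)]

report = "word " * 6_500
print(len(chunk(report)))  # 2 -- the report must be sent in two sections
```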

Training Data

Definition: The collection of information used to teach an AI system.

  • Why it matters: AI reflects its training. If the data lacks diversity or contains outdated practices, suggestions will be flawed.

  • Example: A tool trained on general education data providing irrelevant suggestions for a specialized program.

Transparency

Definition: The degree to which you can understand how an AI system reaches its conclusions.

  • Why it matters: You cannot rely on a "black box." You must be able to explain and defend your decisions in hearings.

  • Example: Choosing a tool that cites the specific data factors influencing its suggestion.

Version Control

Definition: Tracking different versions of AI systems as they update.

  • Why it matters: Tools change. Documenting which version you used protects your professional accountability.

  • Example: Noting in your records that you used "Report Assistant v2.3" in case the tool's logic changes later.
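
A minimal sketch of that documentation habit. The tool name and version string are hypothetical.

```python
# Minimal sketch: record which tool and version produced an AI-assisted draft.
import json
from datetime import date

log_entry = {
    "date": str(date.today()),
    "tool": "Report Assistant",  # hypothetical tool
    "version": "2.3",
    "used_for": "first draft of background section; reviewed and edited",
}
print(json.dumps(log_entry, indent=2))
```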


Moving Forward

You now have a shared vocabulary for AI in schools. Use this to ask better questions, spot red flags, and participate in adoption conversations with confidence.

The real work remains the same: asking hard questions, centering student welfare, and maintaining the clinical judgment that makes you effective.
