As school districts deliberate on updating their student handbooks and behavior policies, one topic should be at the forefront of their minds: AI.
AI, or Artificial Intelligence, is a set of technologies that mimics human intelligence and language. It’s often used to answer questions, produce images, and craft written content.
This technology is so ubiquitous inside and outside of our classrooms that educators are obligated to safeguard student data and students’ well-being on and offline.
Students can inadvertently place personally identifiable information (PII) into an AI model simply by asking it a question, which threatens data security and privacy when AI is used improperly. That is why it is imperative that educators consider how AI can be safely used and set clear boundaries on students’ use of it.
Establishing clear guidelines for AI use communicates how the new technology should be used and sets standards for student behavior. Protecting student privacy and securing school data is essential: under pupil record laws and FERPA, schools are responsible for shielding students from AI-related privacy risks.
Developing an AI policy for your district ensures the ethical and responsible implementation of AI technologies in your schools.
Not knowing how to responsibly use AI can lead to confusion, misuse, and students putting themselves at risk. School staff should also have similar knowledge of AI to prevent student PII from being entered into AI language models.
Student and staff cybersecurity is not the only problem schools face today. Some students have used AI to write essays for them rather than doing the thinking themselves. Critical thinking and the ability to assess data and sources are essential to successful writing in the classroom and to how students absorb new information throughout their lives.
These skills are crucial to students’ development, which is why AI should be used ethically as a support (not a solution) for students’ difficulties with the curriculum. Providing students with helpful guides and boundaries, such as using AI to edit their essays and then reviewing the suggestions before submission, can help them improve their writing while still using their own skills to create and revise drafts.
Maintaining academic integrity while staff and students use AI is important, but it requires firm guidelines that explain what ethical behavior looks like while still giving staff and students room to use AI responsibly to answer their questions.
As you develop your AI policy, you will need to define what qualifies as AI. Some schools consider any kind of generative tool (like ChatGPT) to be AI. But other schools may expand that to include image recognition, digital assistants (like Siri and Alexa), and chatbots.
To ensure everyone knows what you mean by AI, clearly state what your school does and does not consider to be AI. Analyze multiple definitions and create one that fits your educational setting. Then, give examples of trustworthy AI tools that have been verified for safe use.
Spelling out acceptable and prohibited uses of AI is also necessary. Leave nothing to chance, as these definitions will shape how educators and students use these tools for years to come.
After finalizing your rules and guidelines, you will need to address misconduct and train educators on how they can detect prohibited behavior.
Creating or sharing deepfake images can be deeply damaging to students. Make students aware that creating deepfakes is wrong and can violate school policy as well as state and federal law.
Deepfakes already circulate online. Teach students to be skeptical of video and audio content that is shocking or widely shared, since such content is designed to hook viewers emotionally before they can think critically about it. People can still spot some imperfections, so train students on what AI-generated imagery looks like and how to detect it so they are not fooled by misinformation.
Students should never upload images of other people into AI tools. Talk to students about the severity of putting someone’s image or PII into an AI model: that information is no longer private, and false information or imagery can damage the subject’s reputation if it is seen by fellow students.
Just like you should ask before taking a picture or video of someone, it's important to get someone's approval before entering their image into an AI tool. Encourage students' empathy by asking how they might feel if someone posted something embarrassing about or of them.
Combating plagiarism is another important part of AI training for educators. The best way for teachers to learn what AI is capable of is to use it: test the prompts students might enter into an AI model to see how it would answer their questions or even generate entire essays for them.
Assess students’ writing skills in class to understand their limitations and the range of their vocabulary. Tools exist that claim to detect AI-written text, but they are not fully accurate and can incorrectly flag original work as plagiarism.
AI offers several benefits to students’ learning. Using AI tools the right way can help staff and students gather more information faster and can help them answer questions they have about complex issues.
Educating staff on AI use benefits students as well: well-trained teachers can assist students with their learning, serve as a resource on how to use the technology safely, and model ethical AI use.
AI can also help staff save time on frequent administrative work. This can reduce burnout and keep teachers engaged as educators and team members.
According to a recent report, teachers spend only 49% of their time in front of students, helping them learn. Leveraging AI effectively can cut down on time spent away from students, giving them the attention they need and leading to greater understanding of the material and a more positive attitude toward learning.
Once you have created your AI policy, add it to staff and student handbooks. If you have a technology usage agreement that students sign, consider adding it to that document.
“When you roll out your communication, always communicate internally first - sharing key messages in writing with staff and administrators,” says Lisa Sink, Director of Marketing & Communications at CESA 6. “They are the ones on the front line who will get questions from students and families. Explain the new AI policy in person at staff meetings and provide guidance on how to teach students about proper AI usage.”
Communicate with all families and guardians via email about the new policy and encourage them to talk with their students about their use of AI. Expand your reach by repeating the same, consistent information in district and school newsletters, parent-teacher conferences, student orientation materials, and social media. Have teachers explain to students what is allowed and not allowed at least once a semester.
Beyond spreading the message, you should also include additional training and educational resources for future reference while also supporting AI proficiency. Make sure to highlight approved tools, how to save time while using AI, and how parents can support their students' learning within the given guidelines.
Teacher Clarity is a high-effect-size strategy: clearly communicating expectations and the intentions behind the learning earns maximum buy-in. By being clear with your staff, you can convey the intended uses of AI, what successful use looks like, and a clear pathway to training that supports appropriate use. Without these guidelines, staff members are more likely to use AI haphazardly, which can lead to inappropriate use.
Investing in ongoing professional development focused on the advancements and potential risks of AI in schools will provide teachers with the knowledge and insights necessary to guide their students in using this technology responsibly as it evolves. Learning should extend beyond the initial announcement of updates to the handbook, ensuring that educators are continuously informed and prepared.
Discussing the benefits of AI may not be enough to persuade teachers, administrators, or parents that students should be taught to use AI properly. Citing trustworthy sources on the benefits of moderate AI use can help ease your audience’s reservations.
AI is going to be a part of our lives for the foreseeable future. Emphasize how AI can be used responsibly and focus on how the field will grow. Technical literacy is key to all of our students’ futures. From learning how to use the internet, to tools like Microsoft and Adobe products, and now AI, students will need to master this technology to be technically literate in whatever career path they choose.
In discussing key elements of your AI policy, focus on answering stakeholders’ concerns about the risks of AI. Highlight safeguards for student privacy and data protection. Stress the importance of human oversight and decision-making, and demonstrate your commitment to regular policy review and adaptation.
As staff and students use AI in their daily lives, your district team will need to maintain your district’s AI policy. As a district team, you have the autonomy to set your own process for monitoring behavior and imposing restrictions on AI use or consequences for noncompliance. Educating your school community once isn’t enough; implementing regular audits and monitoring student AI use are essential to maintaining ethical AI behavior.
The rapid evolution of AI requires a continuous and vigilant approach to ensure tools are used responsibly. Many AI platforms now incorporate monitoring features within subscription models, allowing organizations to track tool usage and promote ethical practices. Use this technology to your advantage to ensure students’ behavior is aligned with your policy standards.
Without continuous training, education on AI can be overlooked, and educators can fall behind the latest developments in the AI industry. Keep students and educators current with topical updates that show how AI has changed and how they can continue to use it safely and ethically.
The best way to combat unethical AI behavior is to become AI experts yourselves. Educators need to do their research, learn how other schools have solved similar problems, and be able to teach with confidence about the risks and benefits of leveraging AI in the classroom.
AI is one of the most profound technological advancements in education today, and you can guide your students on how they can responsibly use it. This is a great opportunity to show your students how they can learn more effectively and safely with this new technology.
At CESA 6, we have workshops, training, and other resources you can use to understand the latest in AI and how to develop policies for your school. Click through our solutions below to learn more:
Not sure where to start? Contact us to speak with an expert about developing an AI policy for your school.