Blog | CESA 6

Hidden AI Risks in Your School: How to Protect Student Data

Written by John Graf | Oct 25, 2024 3:33:02 PM

With the rise of Artificial Intelligence (AI), educators have wondered how it can be used to automate simple tasks or answer complex questions they can’t resolve on their own. 

According to a study by Microsoft, over 75% of workers are using AI at work. In education, the vast majority of students report using AI to support their studies. As many as 54% say they use AI at least once a week to answer questions, check grammar in their essays, summarize documents, and even write paragraphs or entire first drafts of school papers. This has raised major concerns among school administrators, who see their students using AI without warnings or guardrails. 

Although AI has risks, it’s still technology made by people, for people. When we better understand AI and how it can enhance our schools, we can create sound policies for its use. 

In this article, we will share insights on the following AI in Education topics:

  1. Threats to Student Data
  2. Common AI Blindspots Teachers Face
  3. Dos and Don’ts of AI
  4. AI vs Google Search
  5. School AI Policies
 

Threats to Student Data

Because AI is always consuming data to answer people’s questions, student data is at risk. With this new technology so accessible, a new gap has opened in student data security, giving hackers another avenue to infiltrate school systems and forcing schools to painstakingly construct stronger cybersecurity standards for their staff and students. 

One of the biggest threats to student data arises from the desire to improve individual student plans. Educators turn to AI to summarize concepts and make plans easier for parents to understand, which can lead them to accidentally enter students’ information into an AI model. 

Entering identifiable data into an AI model puts that student’s data at risk. AI models may use the data entered into them for machine learning and to help answer other users’ questions, which can expose sensitive student information. 

School data is also at risk when administrators use AI. Using AI to summarize sensitive documents or financial data can put your entire district at risk. Wanting to summarize documents efficiently may seem like a wise choice during a busy day, but it can pose a great risk.

Only a few indicators are needed to identify a student or school district. Carelessly entering information into AI models can compromise personally identifiable information (PII), especially if an AI tool suffers a data breach. It’s important to remember that AI is not the same as a Google search, and administrators need to develop an AI education plan for their staff and students.
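One practical safeguard is to strip obvious identifiers from any text before it is pasted into an AI tool. The sketch below is a minimal, illustrative Python example; the patterns and placeholder labels are our own assumptions, not an exhaustive PII detector, and any real district workflow should pair automated redaction with human review.

```python
import re

# Illustrative redaction patterns -- NOT exhaustive. Real PII detection
# needs district-specific rules (names, addresses, IEP numbers, etc.)
# and a human check before anything is sent to an AI tool.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{6,10}\b"), "[STUDENT-ID]"),            # bare numeric student IDs
]

def redact(text: str) -> str:
    """Replace each matched identifier with its placeholder label."""
    for pattern, label in REDACTION_PATTERNS:
        text = pattern.sub(label, text)
    return text

note = "Contact jsmith@example.org about student 12345678 (SSN 123-45-6789)."
print(redact(note))
# -> Contact [EMAIL] about student [STUDENT-ID] (SSN [SSN]).
```

The same idea applies to any prompt: replace identifiers with neutral placeholders so the AI can still summarize or rewrite the text without ever seeing who it is about.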

Common AI Blindspots Teachers Face

Protecting students online has concerned teachers for decades. Now that AI usage is ubiquitous in classrooms, educators can fall prey to misusing AI just as their students do. 

Although AI gives us easy answers, you still can’t take its responses at face value. AI mimics human responses, pulling from a data set that may be heavily biased. The technology is meant to be a support, not a replacement. 

Even the popular AI tool ChatGPT has acknowledged its bias and how the AI model’s responses can be affected. 

Beyond quick answers to questions, educators may want to use AI for long-form content creation: lesson plans, IEP reporting, and other documentation. 

As AI becomes users’ go-to source of knowledge, they rarely verify that information against other sources. That is why an AI-human process for content creation and editing is important to minimize mistakes, confusion, and misinformation.  

AI-Human Hybrid Editing Process

Hybrid writing and editing is content creation that combines human cognition and AI responses. When using AI while writing or editing, we advise the following process:

  1. Start a first draft in your word processor.
  2. Use AI to rewrite, refine, or create copy to maximize its effectiveness.
  3. Edit the result yourself to remove inaccuracies.

Teachers should not enter information into AI language models and expect accurate information. 

Rian Rue, a CESA 6 School IT Specialist, advises teachers and administrators to develop policies around AI to stay ahead: “AI is not going anywhere, and it’s only going to get more advanced. There are concerns with bias in AI models, so teachers should use it as a tool, not a crutch.”

Dos and Don’ts of AI

Dos

  1. Be specific with what you want the output to be.
  2. Always have a human edit the output at the end.
  3. Use your background knowledge.
  4. Remember, you are the professional.

Don’ts

  1. Don’t create an account on AI tools that might be untrustworthy.
  2. Don’t input personally identifiable information.
  3. Don’t let teachers and students use AI without training.
  4. Don’t let any child under the age of 13 use AI without parental consent.

AI vs Google Search

With AI becoming a popular way to search for information, educators can blur the line between an AI tool and a Google search. 

A Google search is different from an AI response. AI generates text from its data set, while Google shows you where each result comes from, which helps you investigate further and verify the information. AI tools can pull from a multitude of sources that may not be trustworthy. 

Educators should use their background knowledge and fact-check any content that AI creates. If something doesn’t seem right, further exploration is needed, whereas with a Google search, you can go to trusted sources directly. 

Administrators, teachers, and students should always be skeptical of what AI creates and verify the trustworthiness of sources through a Google search to determine whether the information is factual. 

School AI Policies

As schools grapple with challenges in the post-pandemic classroom, AI knowledge and policy will be the defining challenge for educators over the next five years. 

Although some students may choose not to use AI to help them write essays, others will. The number of students using AI for essays is expected to grow over the next few years, driving the need for schools to establish firm AI policies and education plans so students can use it safely and ethically. 

Hybrid content creation is another phenomenon administrators have to address in their AI policies. Hybrid writing is changing how we write everything, from social posts to speeches to emails. It’s the new conversation administrators have to have. How are we letting our teachers know that hybrid writing will simply be part of society going forward?

Administrators will have to develop policies that not only set boundaries for students but also educate teachers on how to use these models effectively and safely to avoid putting their schools and students at risk. 

With only 3% of academic institutions developing a policy on AI, the key difference between districts in the near future may be how they tackle AI in their classrooms. 

Next Steps

Your district can be among the few that are creating AI safety regulations and procedures to protect teachers and students while empowering them to use tools like ChatGPT to improve productivity and student learning. 

To help you responsibly use AI in your school to boost productivity and learning, we have created four training modules.

Not sure where to start? Contact us to speak with an expert about developing an AI policy for your school.

What is AI?

AI is a set of technologies that understand language, mimic human intelligence, answer questions from a vast knowledge base, and learn about the world by analyzing data from sources like web pages, news articles, and information entered into the model.