How Can K–12 Schools Create a Policy for AI Use?
Schools can’t shy away from incorporating AI into lessons. Given how experts predict the technology will shape the future, students who aren’t learning how to use it will be at a disadvantage in the workforce after graduation.
“Did you know that kindergarteners who go into kindergarten this year will graduate in 2036? All of this science fiction stuff that’s in AI 2041 about what AI will do is actually our students’ future,” said Holly Clark, chief learning officer of The Infused Classroom, referring to a book by Kai-Fu Lee and Chen Qiufan. Clark co-presented a session Wednesday called “Embracing Artificial Intelligence and ChatGPT in Education” along with Ken Shelton, another of EdTech’s K–12 IT influencers and a consultant with Elevating Education, and Matt Miller, also a K–12 IT influencer and author of Ditch That Textbook.
Clark then played a clip from a 60 Minutes interview with Lee, in which he said, “I believe it’s going to change the world more than anything in the history of mankind, more than electricity.”
Holly Clark shared this video clip during a presentation at ISTELive 23 to illustrate the potential impact of artificial intelligence.
Schools, therefore, need to create policies now for the incorporation of AI into education. “A lot of times I think we’re scared to put a policy in place because we don’t want to get it wrong,” said Miller. “But having a policy in place that you can revise provides guidance, and that’s better than a vacuum.”
He noted that, in a vacuum, students will start to decide how they’re going to use the tools for their education. “We can’t ignore this,” Dembo said. “We have to introduce it to students.”
Clark, Miller and Shelton acknowledged the concerns around plagiarism and cheating with AI, but argued that students’ work with programs like ChatGPT could instead be viewed as asynchronous collaboration. Working with AI, without fully relying on it, is a skill students will need in their futures, the team argued.
“Is collaboration only human to human?” Shelton asked. “Is it only synchronous?”
“Students need to become this new thing called a prompt engineer,” Clark said. She explained that students will need the skills to work with AI in ways that produce the results they’re seeking.
Dembo, however, thinks the technology will quickly progress beyond that.
“I think it’s a dated concept, this idea of prompt engineering. I think prompt engineering is very much like teaching Google search terms,” he said. “That had a cute little window of time, where we needed to teach people how to search, then the search engine got better. It’s going to be the same with prompt engineering.”
He predicted that, by this time in 2024, prompt engineering will no longer be a necessity. He thinks that, instead, the AI will get better at understanding what we’re trying to generate.
Teaching AI Best Practices Is a Matter of Digital Citizenship
Schools also can’t avoid AI lessons if they want to promote digital citizenship best practices in their classrooms, said Shelton. “When you have a digital citizenship policy, it now must include artificial intelligence.”
“I believe we can use tech for good, but we have to provide guidance and parameters on how to use it for good, rather than the punitive side of ‘Don’t do this. Don’t do that,’” he said.
AI can already create realistic images, including human faces, and even entire videos of things that never happened. Fisher shared the example of a deepfake video of Tom Cruise getting a haircut to show how convincing AI can be.
Students need the ability to evaluate content’s legitimacy, now and as AI continues to advance. This matters because, while today’s AI is convincing, “it doesn’t know if it’s right,” Dembo said. “It’s trying to give you what it thinks you want. Sometimes, if it can’t figure it out, it is perfectly comfortable making it up.”
“It’s like a toddler imitating the things that are in front of it,” he added.
For this reason, Clark stresses the importance of teaching students to edit and fact-check, rather than trying to teach them to generate the content themselves.
What Are the Ethical Concerns with Using AI in K–12?
There are other ethical concerns with the way AI functions. To start, users should carefully evaluate the data set an AI was trained on to make its decisions.
Shelton noted that, when looking at a photo of the creators of ChatGPT, he noticed that none of them looked like him. Dembo shared an example of potential bias in his presentation as well. He said that, if you wanted AI to help you screen job candidates on LinkedIn, and to train the technology you had it look at current employees in the same role or company, you might get biased results that prioritized older white men.
“It may not have been coded to be biased, it may not be intended, but this kind of stuff happens all the time,” he said.
Most AI models are now black box AI, Dembo continued, which means that not even the people who coded them know how they arrive at their answers. Because of the potential for bias, the government is strongly pushing for legislation against black box AI models.
One of the improvements in GPT-4, which is available with a subscription or through the Bing search engine, is the addition of citations. Fisher shared examples of Bing citing its sources in an AI-generated paragraph.
“As educators, we have to teach kids to understand what AI can do, what it is able to do, and some of the bias that might come our way because of it,” Clark said.