What Is ChatGPT, and How Does It Work?
ChatGPT falls under the umbrella of generative AI, or AI tools used to generate original text, images and sound in response to conversational text prompts. As a natural language processing model, ChatGPT is trained on data sources such as books, websites and other text-based materials. Using machine learning techniques, ChatGPT learns statistical patterns in language and generates responses that fit a given input.
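ChatGPT itself is a large neural network, but the core idea of learning which words tend to follow which, then producing the most likely continuation, can be sketched in a few lines of code. The toy corpus, bigram counts and function names below are illustrative assumptions, not the actual model.

```python
from collections import Counter, defaultdict

# Toy illustration only: a tiny, made-up "training corpus" standing in for
# the books and websites a real model learns from.
corpus = (
    "students ask questions and the model answers questions "
    "the model answers politely and the model explains ideas"
).split()

# Count how often each word follows another -- the "patterns in language."
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(prompt_word: str, length: int = 5) -> str:
    """Greedily pick the most frequent next word, word by word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g., "the model answers questions and the"
```

Even this crude sketch shows why outputs sound fluent without being guaranteed to be true: the program only echoes what was statistically common in its training text.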
“Because of this training, it is able to assign meaning to conversational text and create outputs that are also conversational. Simply put, ChatGPT responds to prompts by providing users with the most likely response based on historical data,” explains Jenay Robert, a researcher at EDUCAUSE. However, it is this very conversational nature that makes it easy for users to “overestimate the accuracy and reliability” of the bot’s responses, she says.
That’s why it’s critical for educators and students to learn how AI works, be aware of its limitations and realize that “AI outputs cannot be trusted as absolute truth. Instead, we have to use human discernment, analysis and creativity to use AI outputs ethically and responsibly,” Robert says.
How Much Is ChatGPT Impacting Education?
Because the AI tool appeared suddenly and was quickly adopted by students, educators are brainstorming and sharing ideas within their networks about how to approach this developing situation.
“At all levels, siloed practice is a persistent challenge in the education profession, and this is only exacerbated when we are talking about an emerging technology,” Robert says. “Without a central source for best practices, educators must work together across the K–20 spectrum to create new norms.”
However, according to a recent EDUCAUSE survey, students are encountering widely varied expectations around generative AI. Some classes or schools treat its use as academic misconduct, while other classes encourage or even require students to use it.
Some educators have pivoted to designing new instructional supports and are teaching students how to use the tool ethically and responsibly. Other professors have incorporated ChatGPT into the classroom by using it to demonstrate shortcomings in logic, accuracy and bias. This allows students to critique and analyze its responses and see that “it is not an infallible tool,” says Janeen Peretin, director of communication, innovation and advancement for the Baldwin-Whitehall School District in Pittsburgh.
Indeed, there have long been ethical concerns over the development and use of AI tools, including the data used to train AI algorithms, which Robert notes are “produced by our current society, not some idealized version of our society. Existing systemic biases are trained into AI systems, and AI outputs can amplify those biases. This is of particular concern in education, where we are still battling persistent equity gaps.”
What to Do About the Problem of Plagiarism and ChatGPT
Not surprisingly, the use of ChatGPT to commit plagiarism is the most common concern among educators. Because ChatGPT is trained on an enormous range of topics and produces increasingly convincing outputs, Peretin says students could use the tool “on most assignments and in almost all courses. ChatGPT can write poems, computer code, cite sources in an essay and solve mathematical problems, just to name a few of its uses. There is a larger, related concern that students will rely on generative AI tools to such an extent that they do not learn to produce original content.”